The hardest part of dealing with state in a complex system is maintaining consistency across different places. Some instances of this can be avoided by creating single sources of truth, but in other cases you can't unify your state, or you're dealing with an external stateful system (like the DOM), and you have no choice but to find a way to keep separate pieces of state in sync.
I should probably write a blog post on this
I've been developing a convention in React over the last year that uses this idea and it's very very nice. I'm also trying to write a platform-agnostic UI language specifically for designers that makes this a first class concept.
That mental model has been the most natural for me as well. It really clicked for me when reading Trygve Reenskaug's descriptions of MVC [1], particularly this paragraph (emphasis mine) and the illustration that follows:
> The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints.
[1]: https://folk.universitetetioslo.no/trygver/themes/mvc/mvc-in...
How does the UI get notified of changes? (like the article discusses some changes might come from different part of the UI, or might even come from external like in a chat client)
How do you handle actions of the user? (Of course in the controller in MVC, but how does it work exactly?)
See this: https://flutter.dev/docs/get-started/flutter-for/declarative
Absolutely. The error is in trying to treat GUI elements as an actual data model. That and premature optimization trying to only update things as they change.
Once you take that approach the only remaining hard part is that for complex applications, events can trigger state dependent behavior. But that should be at a layer under the GUI toolkit.
This complete example will toggle the checkbox every second (1000ms), or the user can click to update the variable. The checkbox watches variable "v".
#!/usr/bin/wish
checkbutton .button -variable v -text "Click me"
pack .button
proc run {} {
    global v
    after 1000 run
    set v [expr {!$v}]
}
run

Casey Muratori first spoke about IMGUI back in 2005, but didn't actually implement it practically: https://www.youtube.com/watch?v=Z1qyvQsjK5Y
In around 2013, Omar Cornut wrote an incredibly high quality practical implementation of Muratori's concept called _Dear ImGui_: https://github.com/ocornut/imgui Using both Cornut's library and Muratori's mindset is incredibly powerful and liberating. Things that would require days of work and multiple files in Qt or Cocoa can be finished in four hours and a couple of hundred lines of IMGUI code. It also uses an order of magnitude less CPU/memory resources (which was an issue for us as we were rendering and displaying RAW video footage in real-time).
I find it amazing that this way of thinking hasn't completely dominated frontend programming by now. There is literally no downside to using the IMGUI concept - entire classes of bugs disappear, and your code is simultaneously smaller, easier to maintain and more logical in flow. It's also a shitload more fun to write (something I think that SWE culture overlooks too much) - you spend the majority of your time writing code that directly renders graphics to the screen, rather than fixing obscure bugs and dealing with the specifics of subclassing syntax.
There's a big downside, the performance penalty. For a GUI that renders moderately complex objects, the cost of not caching becomes overwhelming, the equivalent of losing 20 years in GL architecture advances. Pushing the VBO to the GPU each frame is the same as losing indexing. My own application doesn't render at 60 fps in immediate mode.
I find that people who believe that IMGUI is somehow faster than RMGUI are game developers who have been taken in by marketing, because basic knowledge of GPU programming (i.e. what is indexed rendering) is enough to see that this couldn't possibly be true. Their UIs are usually simple enough that the performance penalty is not important. And many RMGUIs have heavy styling, visually incomparable to their IMGUI counterparts, which makes the average RMGUI far heavier.
Though the bigger reality for most front-end devs in 2021 is that most of us are building for the web, and the web doesn't really allow for immediate mode.
I think this is what Elm solved (and then by plagiarising the same approach, React and Vue). Make the interface declarative rather than imperative and voila - state is automatically in sync.
Good GUIs have way too many requirements to be controlled (via a controller) by the model. As a typical example, the fact that a button should only be activated if there is data in a textbox has usually nothing to do with the underlying model. The model should only ever contain valid and consistent data.
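A tiny Python sketch of where such a rule can live instead: a presentation-layer view model owns the enablement logic, so the domain model never has to represent invalid in-progress input (all names here are made up for illustration):

```python
class OrderForm:
    """Presentation-layer state for one form; not part of the domain model."""

    def __init__(self):
        self.textbox_value = ""  # raw, possibly-invalid user input lives here

    @property
    def submit_enabled(self):
        # A pure view rule: the domain model never sees "enabled".
        return len(self.textbox_value.strip()) > 0
```

The domain model only ever receives the value once the user actually submits, at which point it can be validated and stored as consistent data.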
1) Define a domain model + service(s) that fundamentally addresses the logical business functionality without any notion of a specific UI or its stateful nature. This service should be written in terms of a singleton and injected as such. This is something that should be able to be 100% unit testable, and could be exposed as a prototype in a console application or back a JSON API if necessary (i.e. pretend you are writing a SAAS).
2) Define UI state service(s) that take dependency on one or more domain services. These should be scoped around the logical UI activity that is being carried out, and no further. The goal is to minimize their extent because as we know more state to manage means more pain. These would be injected as Scoped dependencies (in our use of Server-Side Blazor), such that a unique instance is available per user session. Examples of these might be things like LoginService, UserStateService, ShoppingCartService, CheckoutService, etc. These services are not allowed to directly alter the domain model, and must pass through domain service methods on injected members. Keep in mind that DI is really powerful, so you can even have a hierarchy of these types if you want to better organize UI event propagation and handling.
3) Define the actual UI (razor components). These use the @inject statement to pull in the UI state services from 2 above. In each components' OnRender method, we subscribe to relevant update event(s) in the UI service(s) in order to know when to redraw the component (i.e. StateHasChanged). Typically, we find that components usually only inject one or 2 UI state services at a time. Needing to subscribe to many UI state events in a single component might be a sign you should create a new one to better manage that interaction.
This is our approach for isolating the domain services from the actual UI interactions and state around them. We find that with this approach, our UI is quite barren in terms of code. It really is pure HTML/CSS (with a tiny JS shim) and some minor interfacing to methods and properties on the UI state services. This is the closest realistic thing I've seen so far in terms of the fabled frontend/backend isolation principle. Most of the "nasty" happens in the middle tier above, but it is still well-organized and easy to test. By enforcing immutability on the domain services, we ensure discipline with how the UI state services must consume the model. Blazor then goes on to eliminate entire classes of ridiculous bullshit by not forcing us to produce and then consume an arbitrary wire protocol to get to our data.
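To make the layering concrete, here is a rough Python sketch of the three tiers described above (not Blazor/C#; every service and method name is illustrative, and the "DI" is just constructor injection):

```python
class CatalogService:
    """1) Domain service: pure business data/logic, no notion of any UI."""

    def __init__(self):
        self._prices = {"apple": 1.00, "pear": 2.00}

    def price(self, sku):
        return self._prices[sku]


class ShoppingCartService:
    """2) UI state service, scoped to one user session; depends on the domain."""

    def __init__(self, catalog):
        self._catalog = catalog      # injected domain dependency
        self._skus = []
        self._listeners = []

    def on_change(self, callback):
        # Components subscribe here to know when to redraw.
        self._listeners.append(callback)

    def add(self, sku):
        self._skus.append(sku)
        for cb in self._listeners:
            cb()                     # i.e. StateHasChanged

    @property
    def total(self):
        return sum(self._catalog.price(s) for s in self._skus)


class CartComponent:
    """3) The UI: renders from the UI state service, never touches the domain."""

    def __init__(self, cart):
        self.cart = cart
        self.html = ""
        cart.on_change(self.render)

    def render(self):
        self.html = f"<span>Total: {self.cart.total:.2f}</span>"
```

Each layer is independently testable; the component is barren of logic, exactly as described.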
> By enforcing immutability on the domain services, we ensure discipline with how the UI state services must consume the model
Could you expand on that part? Do you mean that your domain services use an append-only approach to manage state?
Second, with this type of modeling of the separation between GUIs and the system, what kind of movement do you think we'll see from Microsoft and Google toward an even more minimal computer, almost entirely reliant on the cloud for compute power? Google's machines are practically there, but what you've described above seems like a more fully "realized" approach than Google's.
In the aggregate, what ends up being most effective for me, is rapid prototyping, and “paving the bare spots.”[0]
I find that I do a terrible job of predicting user mental models.
The rub with prototypes, is that they can’t be lash-ups, as they inevitably end up as ship code. This means that a lot of good code will get binned. It just needs to be accepted and anticipated. Sometimes, I can create a standalone project for code that needs to go, but I still feel has a future.
So there’s always a need for fundamental high quality.
What has been useful to me, is Apple’s TestFlight[1]. I write Apple software, and TestFlight is their beta distribution system.
I start a project at a very nascent stage, and use TestFlight to prototype it. I use an “always beta” quality approach, so the app is constantly at ship quality; although incomplete.
It forces me to maintain high quality, and allows all stakeholders to participate, even at very early stages. It’s extremely motivating. The level of enthusiasm is off the charts. In fact, the biggest challenge is keeping people in low orbit, so they don’t start thinking that you are a “WIZZARD” [sic].
It also makes shopping around for funding and support easy. You just loop people into the TestFlight group. Since the app is already at high quality, there’s no need for chaperones or sacrifices to The Demo Gods.
I like that it keeps development totally focused on the actual user experience. They look at the application entirely differently from me.
[0] https://littlegreenviper.com/miscellany/the-road-most-travel...
Statecharts are currently probably the most undervalued tool for programming GUIs with state. Statecharts are a continuation of state machines, with fewer footguns and better abstractions for building larger systems.
In the end, you either build a GUI that uses state machines implicitly, or explicitly. It tends to be less prone to bugs if you do so explicitly.
If you're interested, here are some starting points (copied from an older comment of mine):
Here is the initial paper from David Harel: STATECHARTS: A VISUAL FORMALISM FOR COMPLEX SYSTEMS (1987) - https://www.inf.ed.ac.uk/teaching/courses/seoc/2005_2006/res...
Website with lots of info and resources: https://statecharts.github.io/
And finally a very well made JS library by David Khourshid that gives you lots of power leveraging statecharts: https://github.com/davidkpiano
While we're at it, here are some links to previous submissions on HN regarding statecharts with lots of useful and interesting information/experiences:
- https://news.ycombinator.com/item?id=18483704
- https://news.ycombinator.com/item?id=15835005
- https://news.ycombinator.com/item?id=21867990
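As a small taste of why explicit beats implicit, here is a minimal flat state machine in Python for a hypothetical fetch button (real statecharts add hierarchy, guards and parallel states; all state and event names here are made up):

```python
# Every legal (state, event) -> next-state pair is written down explicitly.
TRANSITIONS = {
    ("idle",    "FETCH"):   "loading",
    ("loading", "RESOLVE"): "success",
    ("loading", "REJECT"):  "failure",
    ("failure", "RETRY"):   "loading",
}


def transition(state, event):
    # Events that make no sense in the current state are ignored,
    # instead of silently corrupting the UI like an ad-hoc boolean soup would.
    return TRANSITIONS.get((state, event), state)
```

The implicit version of this is a handful of booleans (`isLoading`, `hasError`, ...) whose illegal combinations you have to remember to rule out by hand.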
Yuck. Use continuations with anonymous fns that tell the GUI what to do next. The state is in the closure implicitly!
I wish I had an articulate counterargument, but just look at this: https://xstate.js.org/docs/#promise-example
Statecharts are a formalism for modeling stateful, reactive systems.
Beauty is in the eye of the beholder, and never more so than when the programmer is blind to the cost of repetition and state.
Arc got it right. http://www.paulgraham.com/arcchallenge.html
Write a program that causes the url said (e.g. http://localhost:port/said) to produce a page with an input field and a submit button. When the submit button is pressed, that should produce a second page with a single link saying "click here." When that is clicked it should lead to a third page that says "you said: ..." where ... is whatever the user typed in the original input field. The third page must only show what the user actually typed. I.e. the value entered in the input field must not be passed in the url, or it would be possible to change the behavior of the final page by editing the url.
(defop said req
  (aform [onlink "click here" (pr "you said: " (arg _ "foo"))]
    (input "foo")
    (submit)))

I don't necessarily agree with all the implementation details of xstate, in particular where the logic tends to be located in practice, and the reliance on the Actor model for many things in the wild. I rather try to guide people to statecharts as a paradigm overall, and if you happen to use JS, I think xstate is probably the most mature library there. But as with all libraries/frameworks, they can be over-relied upon.
If you're in the Clojure/Script world, which is where I mainly locate myself, then https://lucywang000.github.io/clj-statecharts/ is all you need and so far the library I've had the best luck with.
https://www.edx.org/course/programming-for-everyone-an-intro...
Many complicated views have their own internal models for things like where they are scrolled, what columns are shown, or what elements of a tree are expanded. But those compound views are written so that, from the outside, they appear exactly the same as any other view.
> I’d love to hear what the functional programming camp has to say about this problem
It's called functional reactive programming.
> a view updating itself from the model should never trigger transactions on the database
i.e. a view rendering from state/messages, should never re-emit new state/messages.
The separation of an underlying data model from any particular way of presenting its current state is a powerful idea that has proven its value many times. We’ve used it to build user interfaces, from early GUIs when we didn’t yet have the kinds of libraries and platform APIs we enjoy today, through games with unique rendering requirements and a need to keep frame rates up, to modern web and mobile app architectures. The same principle is useful for building non-user interfaces, too, in areas like embedded systems where you are “presenting” by controlling some hardware component or distributed systems where you are “presenting” to some other software component as part of the larger system.
But I found that it's almost impossible to describe to someone used to event-/callback-driven UIs why exactly that is. You really need to try it yourself on a non-trivial UI to "get it".
Accessibility is hard. The OS needs to know visual tree to be able to conclude things like “this rectangle is a clickable button with Hello World text”.
Complex layouts are hard. Desktop software have been doing fluid layouts decades before the term was coined for web apps, because different screen resolutions, and because some windows are user-resizable. Layout can be expensive (one reason is measuring pixel size of text), you want to reuse the data between frames.
Animations are hard. Users expect them because smartphones introduced them back in 2007: https://streamable.com/okvhl Note the kinetic scrolling, and soft “bumps” when trying to scroll to the end.
Drag and drop is very hard.
There's nothing in the idea that forbids immediate mode UI frameworks from keeping any amount of internal state between frames to keep track of changes over time (like animations or drag'n'drop actions), the difference to traditional UI frameworks is just that this persistent state tree is hidden from the outside world.
Layout problems can be solved by running several passes over the UI description before rendering happens.
For accessibility, the ball is mainly in the court of the operating system and browser vendors. There need to be accessibility APIs which let user interface frameworks connect to screen readers and other accessibility features (this is not a problem limited to immediate mode UIs, but custom UIs in general).
1. Application data
2. Presentation data
3. Presentation rendering
The first is the single source of truth for your application state, what we often call a “model” or “store”. It’s where you represent the data from your problem domain, which is usually also the data that needs to be persistent.
The second is where you collect any additional data needed for a particular way of presenting some or all of your application data. This can come from the current state of the view (list sort order, current page for a paginated table, position and zoom level over a map, etc.) or the underlying application data (building a sorted version of a list, laying out a diagram, etc.) or any combination of the two (finding the top 10 best sellers from the application data, according to the user’s current sort order set in the UI). This is often a relatively simple part of the system, but there is no reason it has to be: it could just as well include setting up a complicated scene for rendering, or co-ordinating a complicated animation.
The final part is the rendering code, which is a translation from the application and presentation data into whatever presentation is required. There isn’t any complicated logic here, and usually no state is being maintained at this level either. The data required for rendering all comes ready-prepared from the lower layers. Any interactions that should change the view or application state are immediately passed down to the lower layers to be handled.
The important idea is that everything you would do to keep things organised around the application data also applies to the presentation data. Each can expose its current state relatively directly for reading by any part of the system that needs to know. Each can require changes of state to be made through a defined interface, which might be some sort of command/action handler pattern to keep the design open and flexible. Each can be observable, so other parts of the system can react to changes in its state.
It just happens that now, instead of a single cycle where application data gets rendered and changes from the UI get passed back to the application data layer, we have two cycles. One goes from application data through presentation data to rendering, with changes going back to the application data layer. The other just goes from the presentation data to the rendering and back.
I have found this kind of architecture “plays nicely” with almost any other requirements. For example, data sync with a server if our GUI is the front end of a web app can easily be handled in a separate module elsewhere in the system. It can observe the application data to know when to upload changes. It can update the application data according to any downloaded information via the usual interface for changing the state.
I have also found this kind of architecture to be very flexible and to fit with almost any choice of libraries for things like state modelling, server comms, rendering the actual view, etc. Or, if your requirements are either too simple to justify bringing in those dependencies or too complicated for a ready-made library to do what you need, you have a systematic overall structure in place to implement whatever you need directly, using all the same software design and scaling techniques you normally would.
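A compact Python sketch of the three parts, using the sorted-list example from above (all data and names are illustrative):

```python
# 1) Application data: the single source of truth for the problem domain.
application_data = {"sales": {"widget": 7, "gadget": 12, "gizmo": 3}}

# 2) Presentation data: view-only state, e.g. the user's chosen sort order.
presentation_data = {"sort_descending": True}


def derive_rows(app, pres):
    """Presentation layer: combine model + view state into render-ready data."""
    return sorted(app["sales"].items(), key=lambda kv: kv[1],
                  reverse=pres["sort_descending"])


def render(rows):
    """3) Rendering: a dumb translation, no logic, no state of its own."""
    return "\n".join(f"{name}: {count}" for name, count in rows)
```

A sort-order click mutates only `presentation_data` and re-runs the inner cycle; a new sale mutates `application_data` and flows through both.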
Running = -1
WHILE Running
  PRINT "1) Add record"
  IF HasRecords THEN
    PRINT "2) Delete record"
    PRINT "3) Edit record"
  END IF
  PRINT "H) Help"
  PRINT "X) Exit"
  INPUT "Choice: ", I$
  IF I$="1" THEN AddRecord
  IF (I$="2") AND HasRecords THEN DeleteRecord
  IF (I$="3") AND HasRecords THEN EditRecord
  IF I$="H" THEN ShowHelp
  IF I$="X" THEN Running = 0
WEND
...which, sure, is ridiculously simple (like IMGUIs) but can very quickly become unwieldy the more you ask of your user interface. There is a reason why UIs moved beyond that.

[1] https://github.com/0xafbf/aether/tree/master/addons/godot_im...
Does it? Computers are FAST. Webdev stacks are many things, but fast and performant they ain't.
Has anyone built a moderately complex UI in a good immediate mode UI and in a good retained UI? I’d be very curious to know the actual results.
That’s how react works. It’s effectively an ‘immediate mode’ abstraction above a component based UI.
That avoids the component connecting and wiring problems, and creates the simple determinism that makes immediate mode systems easier to reason about.
This approach isn't without its own challenges, of course. For example, it is sometimes hard to keep track of what is going on in complex applications with cascades of signals and slots. Some people also hate the fact that signals and slots use auto-generated code, but I have never really found that to be a problem in practice.
I'm optimistic about Qt 6's QProperty (I don't know how it compares to FRP or, as someone else mentioned, MobX), but Qt 6 currently does not have KDE libraries, or Linux themes to fit into desktop environments or distros.
One thing you have to watch out for is that programmatic changes can fire signals. If you don't want this, you have to add QObject::blockSignals(true);...QObject::blockSignals(false); around your call.
He did miss out a pretty significant flaw of message busses - the producers and consumers of messages are completely decoupled, which makes debugging really annoying because you don't have anything like a stack trace. You just know that your component received a message, and good luck trying to figure out where it was sent from and why.
That's also a big problem with magical reactivity systems like Vue. Your stack trace is pretty much "yeah something changed somewhere so we're updating this component. Don't try and figure it out."
This is actually a really good point. I don't know why you were downvoted.
A solution I’ve always had is to build your message bus with logging in mind initially
I think you misunderstood. Of course there's always a stack trace; you're still executing code. But with message buses and magic reactivity systems your stack trace always just goes to `mainEventLoop()` or `processEvents()` or whatever.
It doesn't go to the thing that actually caused the change as it would if you used direct function calls. I'm not saying it's a deal breaker, it's just a notable downside of those architectures.
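One mitigation, in the spirit of the "build the bus with logging in mind" suggestion upthread, is to make the bus record who published each message, so you get back at least part of the provenance a stack trace would have given you. A toy Python sketch, not any real library:

```python
import inspect


class LoggingBus:
    """A minimal pub/sub bus that logs the publisher of every message."""

    def __init__(self):
        self._subs = {}
        self.log = []           # (topic, publishing function name) pairs

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # Record which function sent the message before fanning it out.
        caller = inspect.stack()[1].function
        self.log.append((topic, caller))
        for handler in self._subs.get(topic, []):
            handler(payload)
```

In production you would log richer context (timestamps, payloads, maybe a trimmed stack), but even just the sender's name answers "where did this message come from?" most of the time.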
IMO, a great solution here is along the lines of "Lift the state up" / "MV(X)". But... there is a vital detail which is usually missed when deciding how exactly the state in your M gets passed to your V for display: you must refresh your entire V when M changes in any way, not just the bit of V that you think changed. It's the only way to completely remove these difficult to test, hard to spot edge cases that the article discusses.
This is almost impossible to talk about without specific examples, so a while back I wrote such an example that I think distills the core problem, and demonstrates how refreshing the entire V not only solves the problem but typically takes less code to do it: https://dev.to/erdo/tutorial-spot-the-deliberate-bug-165k
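Here is a stripped-down Python sketch of the idea (hypothetical names throughout): the model emits one coarse "changed" signal, and the view rebuilds every property from the model rather than patching only the bit it thinks changed:

```python
class Model:
    def __init__(self):
        self.light_on = False
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    def set_light(self, on):
        self.light_on = on
        for callback in self._observers:
            callback()            # one coarse "something changed" signal


class View:
    def __init__(self, model):
        self.model = model
        model.observe(self.sync_view)
        self.sync_view()

    def sync_view(self):
        # Rebuild EVERY view property from the model, not just the "changed" one.
        self.background = "red" if self.model.light_on else "gray"
        self.label = "ON" if self.model.light_on else "OFF"
```

Because `sync_view` is the only code path that ever writes view properties, there is no way for two of them to drift out of agreement with the model.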
> Immediate Mode GUIs somehow never reached the critical mass and you are probably not going to find it outside of the games industry.
Isn't this what React is built on? I think this was part of the 'original' selling point by Pete Hunt: https://youtu.be/x7cQ3mrcKaY (2013). Around 20:00 in that video he compares React with the Doom3 engine.
Agree with both of these points. You can no longer treat assignment simply as the way you mutate data, you also have to anticipate its effects in the data-binding system.
I imagine the circular event problem could be addressed with static analysis, but I don't know of any framework that does this.
That’s why in some framework I’m working on, the only events are those from input devices (keyboard, etc.) and windowing; views don’t generate any and are just there to paint the state and indicate to the controller where things are on the screen.
Often there is a price paid in brevity, but I believe it is worth it. It may seem annoying to propagate a click explicitly through 5 parent components just to sum clicks into a count widget, but as soon as a short circuit is made, you've created a graph, and you lose the ability to isolate GUI sub-trees for testing/debugging.
struct LightState {
    var isLightOn: Bool
}

class LightViewController {
    let lightView = UIView()
}

class StateDirector {
    let lightState: LightState
    var view: LightViewController?

    init(state: LightState) {
        self.lightState = state
    }

    func bind(view: LightViewController) {
        self.view = view
        // Call handleLightChange every time isLightOn changes
        lightState.add(
            listener: self,
            keyPath: \LightState.isLightOn,
            handle: .method(Self.handleLightChange))
    }

    func handleLightChange(isOn: Bool) {
        view?.lightView.backgroundColor = isOn ? .green : .red
    }
}

This allows a clear separation of state, view and changes. You can then just model your state in your reducer and “simulate your views” inside unit tests.
We're in the age of 4K, 60 FPS rendering. If any GUI application has a message bus that's strained enough to impact performance, then either the application isn't made for humans (because if all that stuff is doing anything it'd result in a screen updating far faster than it could be read), or there's some horrible bug somewhere that produces a flood.
Or is that really the same as immediate mode ?
For the example given in the article, the update function could look something like
class View:
    def update(self):
        if model.lightTurnedOn:
            self.backgroundColor = red
        else:
            self.backgroundColor = ibmGray

This way, all view property changes happen in one place, where you can read the code and understand how the view will appear in each possible state. Circular listener loops are impossible, and view properties for animations can even be computed by calling update twice (once before and once after the state change).

I like the current trend of going back to renderless components as well. This way you separate the state changes from the way things look. It feels like each component is a miniature MVC framework with a front and a back.
In fact, it can actually be less work, since you're coalescing changes into a single update() call rather than sprinkling them across observer callbacks. Also, if your update function starts running too slowly, you can always make it more precise by keeping track of which states have changed internally to the view. For example, if setting the background color takes a long time for whatever reason, you can do something like this:
class View:
    def update(self):
        if self.lightWasTurnedOn != model.lightTurnedOn:
            if model.lightTurnedOn:
                self.backgroundColor = red
            else:
                self.backgroundColor = ibmGray
            self.lightWasTurnedOn = model.lightTurnedOn

Now backgroundColor will only be set if lightTurnedOn actually changed since the last update.

Several toolkits and frameworks provide "after all other events have been processed" hooks/events for such logic. E.g. Delphi has the TApplication.OnIdle event, and later versions as well as Lazarus/FreePascal have dedicated controls for this event and "idle timers" meant to be used for updating any invalidated parts of the UI after all other events have finished. Similarly, wxWidgets has wxEVT_UPDATE_UI, and I'm almost certain that Qt has something similar too - though I can't find it now.
The answer for me has been pervasive use of MobX computeds everywhere. See https://mobx.js.org/getting-started
<REDACTED> - - [14/Feb/2021:22:26:45 +0100] "GET /posts/the-complexity-that-lives-in-the-gui/ HTTP/1.1" 200 16798 "-" "HackerNews/1391 CFNetwork/1220.1 Darwin/20.3.0"
<REDACTED> - - [14/Feb/2021:22:26:45 +0100] "GET /posts/the-complexity-that-lives-in-the-gui/ HTTP/1.1" 200 16798 "-" "HackerNews/1391 CFNetwork/1220.1 Darwin/20.3.0"
<REDACTED> - - [14/Feb/2021:22:26:45 +0100] "GET /posts/the-complexity-that-lives-in-the-gui/ HTTP/1.1" 200 16798 "-" "HackerNews/1391 CFNetwork/1220.1 Darwin/20.3.0"
The requests usually come from a certain IP multiple times until fail2ban bans it. It's not just one offender; there are multiple behaving like that.

As for why it occurs so often in quick succession, perhaps there's a bug in the app causing it to fetch several times instead of once.
If anybody knows which app is that, please tell the maintainer that they have a serious bug. It's night here, so I am logging off.
And that's a good thing, because so far, AFAIK, no one has implemented accessibility (e.g. for screen readers) in an immediate-mode GUI. I hope to work on that problem sometime soon.
Modern React, with hooks and functional components, solves the problem posed in the article by choosing option 2, lift the state up. The hypothetical problem (change the avatar background when the “working” light is on) is a non-issue. You wouldn’t add an event listener for the light’s state change, you’d simply pass the “isWorking” prop to both the light and the avatar, and they would each render based on the value of “isWorking”. There is no reason for one component to know about the other; they don’t actually do anything except sit there and look pretty, correctly.
The UI is always backed by a data model. The more explicitly you express that model—by keeping it all in one tree like Redux does, for instance—the simpler your UI becomes to reason about. React won’t (can’t) stop you from doing bad things like hitting random services when a component loads, but its design, especially recently, guides you away from that pitfall.
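A minimal Python sketch of that shape: one state tree, a pure reducer, and components that just render from it (all names invented for illustration):

```python
def reducer(state, action):
    """Pure function: (old state, action) -> new state. Never mutates in place."""
    if action["type"] == "SET_WORKING":
        return {**state, "isWorking": action["value"]}
    return state


def render_light(state):
    # Components don't know about each other; they only read the tree.
    return "amber" if state["isWorking"] else "off"


def render_avatar(state):
    return "busy-bg" if state["isWorking"] else "plain-bg"
```

Both components react to the same `isWorking` flag, so they can never disagree, which is exactly the non-issue described above.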
hA! Hit the nail on the head :P
However, React introduces a lot of complexity to avoid unnecessary DOM updates, which makes me wonder about the viability of an immediate mode GUI in the browser using canvas.
The value of essentially creating and mutating a tree structure like the DOM is that things like screen readers and UI automation tools can read it. With canvas they just see a big bitmap.
- https://github.com/Erkaman/pnp-gui
- https://github.com/jnmaloney/WebGui
I guess accessibility is what takes the biggest hit with these implementations.
- it is badly designed. If state is too hard to manage, it often means the design is bad
- state is not the most important part of the UI. Interaction and being resilient to change are the most important things
- frontend is often overengineered in a bad way. When you use bad solutions like React or Redux, there is no chance there won't be any problems.
If you have a problem synchronizing your views with a light, that's because the light should exist outside of your views.
MVC was invented in the '70s. Why do we have to act as if this is not a solved problem?
The home page is a bit ugly, but it contains a range of examples that are commonly awkward to implement in other UI libraries:
No programming paradigm can stand up to rushed/flawed mental models.
The domain can become quite complex; it is wishful thinking to believe that a single approach could drain it of all complexity.
How is the click propagated recursively through every component and is the position compared repeatedly at every step?
Win* GetChildAt(Win* w, int x, int y)
{
size_t i;
if (x >= w->ScreenX1 && y >= w->ScreenY1 &&
x <= w->ScreenX2 && y <= w->ScreenY2) {
for (i=0; i < w->ChildCount; i++) {
Win* child = GetChildAt(w->Child[i], x, y);
if (child) return child;
}
return w;
}
return NULL;
}
Call this on the root window and you have the deepest child at the given coordinates. (Though there is usually a bit of extra logic for handling, e.g., invisible and disabled windows.)
Well no. There are applications with nice well thought out GUIs.
>"Congratulations, a large amount of your effort will go towards resolving weird message bus problems as opposed to writing the business logic of your app"
Sorry but I do not resolve "weird message problems". I use my own publish-subscribe mostly asynchronous message bus for my GUI apps (actually I use it also for non GUI parts as well). Components (visible or not and including running threads) can subscribe to events. It does not exhibit any performance / memory problems and in combination with the global app state object makes programming interactions a piece of cake.
Simply not true. I recently had to salvage business rules from a codebase written by a single person over the course of many years. It was some of the messiest code I've ever seen.
>"How big is your codebase and how many people work on it"
Depends on the project. On some I work alone; some had 2-3 people. The biggest team I've ever had to lead was about 35 people (not all developers). By properly organizing and splitting the work and using mostly experienced people, the resulting code was very decent and quite possible to grasp. Also well documented.
1. Mixed concerns and a lack of application layering. In React code bases and others like them, it's not uncommon for me to find business logic embedded directly inside the components themselves. This makes the component inherently coupled to the specific context it's being used in, and it can only be reused elsewhere by passing in modifier flags via the props, etc. In my opinion, components should mostly be of the stateless flavor and delegate anything not directly related to their immediate UI concerns elsewhere. This increases the likelihood that your components are reusable and makes testing business and component logic much more straightforward.
2. This might just be my personal experience, but I've noticed a bit of a dismissive attitude around design patterns and traditional computer science principles among my front end brethren. YAGNI and all that. While I think it's fair to say that the UI != the back end, I think the baby frequently gets thrown out with the bath water. For instance, I frequently leverage dependency injection in my front end work as a way to make my code more flexible and testable. It's hard to sell the benefits of something like DI until your application grows in complexity to the point that you are dealing with things like nested mocks in your tests and/or need to make a non-trivial change to some piece of functionality. I've been seeing the winds start to shift a bit on this, which is encouraging.
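As a sketch of that kind of front end DI (all names here are hypothetical), a view-model can depend on a narrow interface so tests inject a trivial stub instead of nesting mocks:

```typescript
// Hypothetical example: the view-model asks for a narrow interface rather
// than reaching for a concrete HTTP client, so a test can inject a stub.

interface UserGateway {
  fetchName(id: number): string;
}

class UserViewModel {
  greeting = "";
  constructor(private gateway: UserGateway) {}

  load(id: number): void {
    // Presentation logic stays testable without any network access.
    this.greeting = `Hello, ${this.gateway.fetchName(id)}`;
  }
}

// In a test, the injected dependency is just an object literal:
const stubGateway: UserGateway = { fetchName: () => "Ada" };
```

Production wiring would pass a real HTTP-backed gateway; the component under test never knows the difference.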
3. Most of the time there is little to no overall _conceptual integrity_ to a given front end code base. It's uncommon for me to come into an existing code base and have anyone be able to tell me what the architecture consists of: things like what logic lives where and why. I'm not saying people _don't_ do this, but in the gnarlier code bases I've encountered, this "accidental architecture" is more common.
4. Front end is still seen as "easy" and the place you stick the less experienced engineers first. I sincerely hope this doesn't come off like I am gatekeeping or anything. I work with some absolutely brilliant people who are only a year or two into their careers and much smarter than me. IMO it's less about skill and more about having been burned enough times to know where unexpected complexity lies.
I love front end. It's challenging and interesting and maddening. My hope is that it will continue to mature and grow to the point that it gets the same respect as other disciplines within this field. These days I work full stack, so it's not all I do, but it will always be my favorite. :)
1. Library authors optimize aggressively for the beginner. They specifically sell their tools as being all-in-one, promising you can do everything in the view layer. Maybe it's because they legitimately want to set their users up for success, or maybe they'd rather gain users even if they are setting them up for issues in the long run.
2. The kind of people working on web frontend aren't necessarily coming from a programming background. You have the web designer/developer types who were doing a great job when they were building out static templates that got integrated into some server-rendered back end. Now that everything has become an app, the idea of handling all the associated complexity is daunting. A lot of self-taught developers also come in through web UI, and the last thing they want to think about is proper architecture.
I'd go so far as to say there is a current of anti-intellectualism in terms of frontend application design. There is a large section of developers who want a small number of tools to do everything for them and it seems they are unwilling to even consider creating their own architecture of which individual libraries are their own self contained piece.
> I'd go so far as to say there is a current of anti-intellectualism in terms of frontend application design
I try to walk a line here between relying on my experience and being open to the idea that there is possibly a better way to do things that is just unfamiliar to me. "Beginner mindset" and all that. React hooks are a great example. At first I hated them because I didn't really understand what problems they solved well. To my eyes they encouraged mixing concerns in the component layer in ways that made testing and re-usability way harder. It was when I started considering hooks as "headless components" that I started understanding where they fit into a front end architecture. I think they solve a specific class of problem that react had no answer for previously. I don't use them for everything but now that I consider them as a special kind of component, they fit into a larger set of tools that I have at my disposal.
I also think there are different kinds of front end developers and we all get lumped into the same bucket right now. There are those that work on flashy UI stuff (which I suck at) and those that do more "middle tier" type stuff where they are building out web apps that solve business problems (more my sweet spot). Those two skill sets are way way way different from one another to the point that I think they should be classified as different disciplines.
As a rule practical and manageable UIs use all these approaches at the same time.
Components, class based and/or functional (e.g. UserSection, InventoryTable), are in principle loosely coupled. They may not know about each other and use messages to communicate.
Flight, by Twitter ( https://flightjs.github.io/ ), was/is probably the first practical thing that pioneered that approach.
Flight is not even a framework but rather a design principle.
In fact it is an anti-framework: it postulates that in practice each component may have its own optimal internal architecture. One component is better served by React-ivity, another by Angular-ish data binding, etc. On some, immediate-mode drawing is the most natural.
The smaller the task (component), the greater the chance of finding a silver bullet for it.
I see OP doesn't mention this option, but it is slightly related to both option 1 (connect the box) and 3 (message bus / event emitter). This option is similar to how an OS provides an API to a user-space application program. For example, Windows provides an API for an application to flash its window without exposing all of its state as mentioned in option 1. https://docs.microsoft.com/en-us/windows/win32/api/winuser/n...
Here's the detail:
A self-powered object, let's call it workingIndicatorObject, can be introduced to drive the working indicator. It provides 1.) `getState() -> WORKING/NOT-WORKING`, a function to get its state, and 2.) `register() -> None`, a function to register user activities. These functions are dependencies for the UserSection and InventoryTable components respectively. In terms of lifetime and hierarchy, it precedes and outlives both components.
The workingIndicatorObject MUST have its own lifecycle to regulate its internal state. Its core lifecycle MUST NOT be managed under the same event loop as the GUI components, assuming the program is written on top of a framework that has one (React, Vue, etc). This ensures that it doesn't directly affect the components managed by the framework (loose coupling). However, a binding MAY be made between its state and the framework-managed event loop, for example by wrapping it as a React hook in a React environment. An EventEmitter can also be provided for a component to listen to its internal event loop.
Injecting it as a dependency can be done with features like React Context or Vue's Provide/Inject pattern. Its consumer must treat it as an optional dependency to be safe. For example, the UserSection component can omit drawing the "working" indicator if it doesn't receive the `getState` function as a dependency.
Internally, the workingIndicatorObject can use a set. The set is used to store unique values (Symbols in Javascript). The `register` function creates a Symbol, stores it in the set, and deletes it after $TIMEOUT, firing an event both at addition and at deletion. The `getState` function returns WORKING whenever `set.size > 0`. When bound to a component, an addition to the set fires an "update" command to the component, which redraws the component according to its internal state. This is just one example of an implementation; there are other, simpler ways to get the same behavior out of workingIndicatorObject.
This approach allows UserSection and InventoryTable to know only the `getState()` and `register()` functions and nothing else, with both of them optional. Using a static type system such as TypeScript helps a lot here, since we can pin down the signatures of both functions to `null | (() => WORKING | NOT_WORKING)` and `null | (() => void)`, forcing both dependent components to check that the dependency exists and to call it with the correct types; otherwise the compiler yells.
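The Set-of-Symbols implementation described above can be sketched in TypeScript roughly as follows; the factory name, the `onChange` listener API, and the timeout parameter are assumptions for illustration:

```typescript
// Sketch of the workingIndicatorObject: a set of Symbols tracks in-flight
// activity, and listeners are notified on both addition and deletion.

type IndicatorState = "WORKING" | "NOT_WORKING";

function createWorkingIndicator(timeoutMs = 2000) {
  const pending = new Set<symbol>();
  const listeners = new Set<(state: IndicatorState) => void>();

  const getState = (): IndicatorState =>
    pending.size > 0 ? "WORKING" : "NOT_WORKING";

  const notify = () => listeners.forEach((l) => l(getState()));

  // register(): record one unit of user activity; it expires after timeoutMs.
  const register = (): void => {
    const token = Symbol("activity");
    pending.add(token);
    notify(); // fire event at addition
    setTimeout(() => {
      pending.delete(token);
      notify(); // fire event at deletion
    }, timeoutMs);
  };

  // onChange(): subscribe to state changes; returns an unsubscribe function.
  const onChange = (listener: (state: IndicatorState) => void) => {
    listeners.add(listener);
    return () => listeners.delete(listener);
  };

  return { getState, register, onChange };
}
```

In React, `onChange` would be the natural binding point: a hook could subscribe on mount, mirror the state into `useState`, and unsubscribe on unmount.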
As per dependency injection, you inject stuff the other party needs. As per "dependency inversion principle", the thing you inject stuff into owns the interface for what it needs. That means a concrete object you want to inject in many places should be made to conform to interfaces defined by the "recipients" of the injection, which may - and likely will be - different from each other. And in particular, an interface may be as simple as a function.
So, for instance, if a component needs a log sink to write to, and it only ever writes a single type of message, you do not need to ship the whole logger object (and with it, the knowledge of what a logger is). All it needs is a function `(String) => void`, so all you need to inject is your logger's Info() method.
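Concretely, that might look like the following sketch (the `Logger` and `InventoryTable` names are hypothetical, and this logger collects lines into an array rather than printing, just to keep the example self-contained):

```typescript
// Hypothetical sketch: the component owns a minimal interface (a function
// type), and the concrete logger is adapted to it at the injection site.

type LogSink = (message: string) => void;

class Logger {
  lines: string[] = [];
  constructor(private prefix: string) {}
  info(message: string): void {
    this.lines.push(`[${this.prefix}] INFO ${message}`);
  }
}

class InventoryTable {
  // The component never learns what a Logger is; it only sees a LogSink.
  constructor(private log: LogSink) {}
  refresh(): void {
    this.log("inventory refreshed");
  }
}

const logger = new Logger("app");
// Inject just the info() method, bound to its instance:
const table = new InventoryTable((m) => logger.info(m));
```

Because the "interface" is just a function, a test can pass a closure that captures messages into an array, with no logger involved at all.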
Another core point is the actual architecture of things: what lives/dies first/last, who depends on whom, and how to connect them so that each party is "happy" with what's on its plate.
> incidentally, when I built my own DI system for a game
Indeed, systems in games are fun to play with! I had my share of designing a game engine once and I learnt a lot from it too. I'm intrigued: what game were you working on?
Small point, but it grated.
"the scrimers of their nation, / He swore, had had neither motion, guard, nor eye, / If you opposed them."
King James Bible:
"For I am persuaded, that neither death, nor life, nor angels, nor principalities, nor powers, nor things present, nor things to come, nor height, nor depth, nor any other creature, shall be able to separate us from the love of God, which is in Christ Jesus our Lord."
Rudyard Kipling:
"But there is neither East nor West, Border, nor Breed, nor Birth, / When two strong men stand face to face, tho’ they come from the ends of the earth!"
Charles Dickens:
"I had youth and hope. I believe, beauty. It matters very little now. Neither of the three served or saved me."
Jane Austen:
"Mary's ailments lessened by having a constant companion, and their daily intercourse with the other family, since there was neither superior affection, confidence, nor employment in the cottage, to be interrupted by it, was rather an advantage."
Thomas Hardy:
"Half an hour passed yet again; neither man, woman, nor child returned."
Samuel Johnson:
"Among these, Mr. Savage was admitted to play the part of Sir Thomas Overbury, by which he gained no great reputation, the theatre being a province for which nature seems not to have designed him; for neither his voice, look, nor gesture were such as were expected on the stage"
The authors above are not cherry-picked, except that I happened to remember the Kipling and St Paul quotations. Not one of the notable English writers I looked up failed to provide me with at least one example of "neither" applying to more than two things.