This is remarkable because DAGs hit a sweet spot in the middle of the three common programming paradigms (OO, event-driven, functional). Let's have a DAG as the top-level structure of our applications. Data-fetching and onChange handlers live in DAG nodes, next to the data they act on. The UI flows out from the DAG with fine-grained reactivity. Our app state is effortlessly consistent, because any outside change (user action, api result) unleashes a graph traversal. Our UI components become much simpler, because they just need to dumbly reflect values in the graph.
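To make the idea concrete, here's a minimal sketch (illustrative only, not any particular library's implementation) of app state as an explicit DAG, where an outside change re-runs only the affected downstream nodes:

```javascript
// Each node holds a value computed from its predecessors; an external
// set() triggers a traversal that recomputes all downstream nodes.
class Node {
  constructor(compute, deps = []) {
    this.compute = compute;
    this.deps = deps;          // predecessor nodes
    this.subs = [];            // successor nodes
    deps.forEach(d => d.subs.push(this));
    this.value = compute(...deps.map(d => d.value));
  }
  set(value) {                 // outside change (user action, api result)
    this.value = value;
    this.propagate();
  }
  propagate() {                // naive traversal; a real library would
    for (const s of this.subs) { // topologically sort to avoid re-runs
      s.value = s.compute(...s.deps.map(d => d.value));
      s.propagate();
    }
  }
}

const price = new Node(() => 10);
const qty = new Node(() => 3);
const total = new Node((p, q) => p * q, [price, qty]);
qty.set(5);                    // total.value recomputes to 50
```

A UI component in this world just renders `total.value`; it never computes anything itself, which is the "dumbly reflect values in the graph" part.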
I'm putting this up for a second time. Absolutely no-one bit the first time, which can't be right :-)
> any outside change (user action, api result) unleashes a graph traversal. Our UI components become much simpler, because they just need to dumbly reflect values in the graph.
But that sounds the same as the reasoning behind React's one-way data flow.
I wrote one for Rust called moongraph: https://crates.io/crates/moongraph https://github.com/schell/moongraph
It powers my configurable renderer and my ECS. I'm sure lots of other folks have written their own and they might not even know it's a DAG.
So while I think the description of your solution is better than anything I've seen in xstate's docs, I'd be inclined to go with xstate's more mature software, which also gives me the ability to represent front-end state as a graph, since it has a thriving ecosystem and (I assume) better tooling.
But I might give Octopus a spin anyway to see what the DX is like
B) you’re gonna go far in this life, I’d bet! I know nothing about you but this whole README just reeks of a bright young thinker who’s not afraid to question existing paradigms and can follow through on their conceptual vision. Plus it helps that you’re a very dramatic and effective writer. I encourage you not to let any of this feedback get you down!
Will sit down with the repo myself later; it's just not an angle of HCI I've examined explicitly, at least not for very long.
If you're looking for additional reading, it's arguable that React matches this viewpoint, but any web framework from the past few years using signals (re-popularized by Solid) explicitly takes this approach. Most frameworks don't opt for graphical node editors, and I've never liked that approach, but I've seen a number of those as well.
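For anyone unfamiliar with signals, here's a tiny sketch in the style Solid re-popularized (illustrative, not Solid's actual implementation): reads inside an effect register that effect as a subscriber, so writes re-run exactly the dependents, forming the dependency graph implicitly.

```javascript
let currentEffect = null;      // the effect currently being tracked

function createSignal(value) {
  const subs = new Set();
  const read = () => {
    if (currentEffect) subs.add(currentEffect); // auto-track dependency
    return value;
  };
  const write = (next) => {
    value = next;
    subs.forEach(fn => fn());  // re-run only the dependents
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn();                        // first run collects dependencies
  currentEffect = null;
}

const [count, setCount] = createSignal(1);
let rendered;
createEffect(() => { rendered = `count: ${count()}`; });
setCount(2);                   // effect re-runs; rendered reflects the new value
```

The key difference from explicit graph wiring is that the edges are discovered at runtime by observing which signals each effect reads.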
More generally this falls into dataflow programming which has had a number of published papers starting in the late 80s I believe.
If you're looking for a similar computer sciency approach to UI structure you can look up statecharts which is an older idea that formed the conceptual basis for Ember Router and, in turn, most js framework routers in the past decade.
The "law of Demeter" is a design rule that says to only work with data you directly know about - i.e., friends, but not friends of friends. It makes local reasoning tractable and forces any (friendly) use to be surfaced as such, which gives a better way to assess global complexity. (By contrast, the DOM is just, well, global state, whatever the data structure.)
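A quick illustration of the rule, with made-up names (Wallet, Customer are hypothetical, just for the example):

```javascript
class Wallet {
  constructor(balance) { this.balance = balance; }
  deduct(amount) { this.balance -= amount; }
}

class Customer {
  constructor(wallet) { this.wallet = wallet; }
  pay(amount) { this.wallet.deduct(amount); } // talks to its own friend
}

// Violation: checkout reaches through Customer into a friend-of-a-friend,
// so it now depends on Wallet's internals too.
function checkoutBad(customer, amount) {
  customer.wallet.deduct(amount);
}

// Conforming: checkout asks its direct collaborator, which manages its own state.
function checkoutGood(customer, amount) {
  customer.pay(amount);
}
```

In the conforming version, only `Customer` needs to change if payment moves from a wallet to something else; the caller reasons locally.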
But who or what is "you"? In this case, the node, which is responsible for (local) data liveness; the UI is then simply nodes derived from that, with cross-node validation dependencies represented as graph dependencies.
Modeling overall system activity as graph updates does provide nice separation to permit concurrent programming, with graph structure as the arbiter of conflicts, but you'll need a more specific notion of "consistency" that can distinguish the use cases this would work for.
The comparison and question would be with various UI and data component systems, which handle data mapping to the back-end, cross-component aspects like themes or device constraints, user id and security, etc.: how are these higher-level concerns managed with an iterative graph structure?
Serialization is a nice sample concern, but your (attractive) approach exemplifies the issue: to make it work entails tracking some conventions/responsibilities, which are not captured by the DAG framing.
For me, frameworks work when the types basically constrain developers into doing the right thing. Local reasoning and responsibility translate directly to local code, and proof relates directly to compliance, preferably at the language level at build time.
These kind of local-reasoning models shine when the code is federated, e.g., when hosting multiple services and contributions. The whole app should just work if the participants can be validated on load/launch, and any failures should be restricted to the local nodes instead of cascading. So if you target an app requiring some specific collaborative domain (with a promising business model), this proposal might draw more attention, and you can work through issues concretely.
So perhaps target a killer app, build understandable constraints into the code, and demo key features.
I'm really not sure what's supposed to be different here. As is evident from the repository you obviously know about React and Vue, so maybe try to contrast it with them, or explain how it fits in relation to them? To me it looks like you are building a system-in-a-system, where you are rebuilding the primitives that already come with React (or any of its peers).
The closest thing I'm aware of is Jotai. Atoms in Jotai can take dependencies on other atoms, thus forming a graph.
Reporting nodes in Octopus are, I believe, completely new. You can create nodes that select their predecessors with a filter function. So the "totalPrice" node takes a dependency on any node with a price property, and recalculates when anything with a price changes.
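Here's a sketch of the reporting-node idea as I understand it (not Octopus's real API, just an illustration): a node selects its predecessors with a filter over all existing nodes, so "totalPrice" depends on anything carrying a price property.

```javascript
const nodes = [];

function node(data) {
  const n = { data, reporters: [] };
  nodes.push(n);
  return n;
}

// A reporting node registers itself on every node matching the filter
// and recomputes from the matching nodes' data.
function reportingNode(filter, compute) {
  const r = {
    recalc() {
      r.value = compute(nodes.filter(n => filter(n.data)).map(n => n.data));
    },
  };
  nodes.forEach(n => { if (filter(n.data)) n.reporters.push(r); });
  r.recalc();
  return r;
}

function update(n, data) {
  n.data = data;
  n.reporters.forEach(r => r.recalc()); // only interested reporters re-run
}

const apple = node({ name: "apple", price: 10 });
const theme = node({ darkMode: true });           // no price: never selected
const totalPrice = reportingNode(
  d => "price" in d,
  items => items.reduce((sum, d) => sum + d.price, 0)
);
update(apple, { name: "apple", price: 12 });      // totalPrice recalculates
```

One caveat the sketch glosses over: nodes created after the reporter wouldn't be tracked; presumably the real thing re-evaluates the filter as the graph grows.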
Not all acyclic graphs are trees, but all trees are acyclic graphs. As such this part of your description is confusing.
Your last paragraph does a much better job at explaining how it differs.
It struck me as being both a very intriguing framework, and also exceptionally over-engineered for the kind of problem it was solving (since an async-await architecture gets you all the same benefits).
Same downsides as runtime dependency injection frameworks like Guice: the framework would explode if it couldn't connect the graph.
Some other upsides: you could decorate methods with all sorts of distributed goodness like retries and hedging. The approach also enables reusable methods that you snap into your server.
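The retry decoration could look something like this generic wrapper (a sketch, not any specific framework's API): a failing async call is transparently retried before the error surfaces.

```javascript
// Wrap an async function so it retries on failure up to `attempts` times.
function withRetries(fn, attempts = 3) {
  return async (...args) => {
    let lastError;
    for (let i = 0; i < attempts; i++) {
      try {
        return await fn(...args);
      } catch (err) {
        lastError = err;       // remember and retry
      }
    }
    throw lastError;           // all attempts exhausted
  };
}

// Usage: a flaky operation that only succeeds on the third try.
let calls = 0;
const flaky = withRetries(async () => {
  calls += 1;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
});
```

Because the wrapping happens where the graph is assembled, the method body itself stays oblivious to retries, which is what makes it reusable.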
Maybe just me but I find them completely unreadable and painful.
I don't really think there's anything inherently better about having the framework connect things through inputs and outputs versus the developer explicitly specifying the connections through function calls.
In both cases the executor can decide how to execute the graph.
Like many, early in my career I once rewrote a partial implementation of Windows Workflow Foundation to work out of MySQL with db functions. It was... beautiful. :) And the problem learned to behave and not need it again. Still running years later, and reasonably refactorable.
my experience working with DAGs programmatically (i.e. not as an abstraction, like in React, but actually handling the edges and nodes of a graph-based abstraction in code) is that it looks nice in theory, but in practice a top-down (conceptual) approach like this often tends to over-complicate things
would love to see some real-world examples where an actual dev team, etc., finds value in graph-theoretical abstractions like this
I may be misunderstanding what this is, but back in the J2EE days ISTR Struts having something very similar to this, ie a flow from page to page declared apart from the actual page logic itself. Edit: and all done in XML, so obviously doesn't count!
Cool project though!
That's the document tree - when it comes to application state, which is just data in memory, anything is possible. These state libraries tend to treat in-memory application state as the "real" state, and try to treat the document tree as a side effect. This is one reason that front end unit tests tend to miss bugs - application state being what you expected doesn't mean the user sees what they expected.
Then, in Octopus you also get reporting nodes and visualisation, which, once tasted, there's no going back from.
const { diff } = require("atlas-relax");
const DOMRenderer = require("atlas-mini-dom");
const App = () => (
<div>
Bonsly evolves into Sudowoodo after learning mimic.
</div>
)
// create a DOMRenderer plugin
const rootEl = document.getElementById("root");
const renderingPlugin = new DOMRenderer(rootEl);
// mount <App/> against the "null" DAG and render it to the DOM.
diff(<App/>, null, renderingPlugin);
The cool thing is you could diff two different DAGs against each other and listen to the delta, like `diff(<App1/>, <App2/>, consoleLogPlugin)`. The base library could be used to generate application frameworks as long as your application framework can be thought of as a DAG operating on data. React is an example of such a framework, but so is something like Airflow, so you could write a plugin that lets you build your own kind of Airflow. That was the motivation behind my DAG abstraction: to make it easy to create DAG frameworks for frontend and backend. Let the base library do all the hard reconciliation work, and you can build application frameworks on top.

Anyway, that was all mostly an exercise. I didn't end up using my framework for anything more than a state management solution for React. It handles global data perfectly, although these days React context or hook management is usually enough.