Product market fit is generally achieved when you can’t keep up with demand for your product. “Can’t keep up” is generally an operational problem first, then a technical problem (the Do Things That Don’t Scale stuff catches up to you basically).
Scaling is a GOOD problem…as long as you actually have a business model. If you’re selling stuff but getting crunched by demand and your tech is cracking at the seams…hell yes! Pay people to fix the tech/replace the janky or slow stuff/automate manual things.
If you don’t have a business model, you can scale the shit out of some idea but not be able to afford to fix the issues coming from breaking out of your frameworks. Not to say you can’t eventually make this model work, you just have to raise a shit ton of money to do it usually.
Use frameworks. It’s not a debate. Don’t reinvent wheels when you’re trying to invent some new product idea.
Only work on the product.
I’ve worked at a startup that went all in on frameworks, and the day we crossed the pain threshold with Relay, dev time slowed to a crawl. It really hindered our ability to ship product, as you say, and the switching costs were high: because frameworks underpin everything you do, you can’t just swap them out like libraries.
Too many are buggy and/or poorly documented and take lots of trial and error to get working as intended. And you create a dependency mess by including lots of plugins/libraries. It may be quick up front, but can jack up maintenance after say 5-ish years. Multi-K-lines-of-code libraries "rot" as environments change. If you can write a shop-tuned version in say 300 lines, do it, because it's easier to fix later. Select vendor libraries/plugins judiciously.
But I think OP's main point stands: you're far more likely to die by not shipping things early than by having to refactor occasionally. After 5 years, 50% of businesses have failed [0], and the rate is far, far higher for startups & side projects.
[0] https://www.bls.gov/bdm/entrepreneurship/entrepreneurship.ht...
But if that were the case, you would do that; resorting to other tools for something that tiny is silly and only really done by beginners. Something that tiny would also be unlikely to have bug or documentation issues.
There are some great tools that are poorly documented/buggy, yes, but they are quite rare and it's much easier to use them and fork the repo (if inactive) than to build from scratch.
This idea that you avoid creating a dependency mess by writing more of your own code isn't the entire truth. The dependency is just internal, and far more likely to have poor documentation than open source plugin #4159. So you really haven't solved or avoided the problem at all.
You don't need to try hard to convince yourself that there's no causal link between the two (e.g. fast food sells well but isn't good food; high-end goods usually don't sell well because buyers aren't willing to pay extra for marginal improvements).
And so does your argument. The post the OP was reacting to was talking about quality and how misguided the whole field is in its use of existing, concrete pieces of software such as React. And that is, unfortunately, the reality. The Web is awful, especially because of how it's been developed, especially due to things like React. But it also sells well. That, however, wasn't the point.
> Pay people to fix the tech/replace the janky or slow stuff/automate manual things.
Sometimes? When circumstances allow it? This is not a given. And with the way the Web is, and the way it's going, it's only getting worse. HTML is a bloated standard. JavaScript is an idiotic language. The GUI toolkit that browsers offer is a combo of bloat and idiocy. The whole idea of making "single page applications" is idiotic because the Web was never meant for that. It was meant to be an interface to what today we'd call a distributed document database. But is this going to change because individual programmers or individual companies recognize the problem? Are they going to fix / automate it? Well, there's no way a single company, not even a mega international corporation at this point, could do that.
The argument the OP was countering said that the Web needs a different foundation and that no modern frameworks are any good. It didn't go as far (as I did) in claiming that the whole stack is garbage. Still, you are missing the point when you make an abstract argument about using frameworks. The point was: "we have garbage frameworks on the Web, and things are getting worse, let's take action to do things differently".
A potential acquisition came our way that seemed to be exactly what we were looking for. It had customer traction and revenues, and its features were exactly what we were looking to overhaul. There was very high internal interest in acquiring the technology and the company.
The need for this set of features was so high, and internal development was estimated to be so costly (in developer time and delays to other projects), that we were willing to overlook that this app was not built on our primary stack, one only a few of our developers were familiar with. The language choice made by the company didn’t cool our appetite. However, technological choices made us pass on the acquisition.
They didn’t use a framework.
This meant that our developers would face a very long learning curve. We also saw a lot of code that did what a framework would have taken care of, which means we would have had to learn, maintain, and expand that code instead of working on the revenue-generating features.
We found close coupling of code all over the place. This meant we couldn’t quickly extend and modify the features as we wanted without first paying back the technical debt accumulated by the developers.
It wouldn’t have mattered which framework they had chosen, as most of them have good documentation and force some sort of standard development practices. However, we couldn’t take the chance that all the behind-the-scenes stuff would need a rewrite if we wanted to expand and scale the platform. We passed.
How unique is your product really though? If you're a biotech company or something that also needs a website, sure, use whatever the most widely used framework is at the moment. But I feel like most of the focus of these discussions is on web-native SaaS companies whose entire business is on moving some well understood commercial activity onto the web or competing with other well-established web businesses from the huge tech companies. Stuff like e-commerce, productivity software, social networking tools, developer tools, financial tools, insurance tools, restaurant booking, hotel booking, doctor booking, etc. These companies probably feel like they need their edge to be things like development velocity and unit economics (e.g. no human customer support), and they probably feel like the only way to do that is to have a bespoke development platform.
The difference is important though. With libraries, you have to connect them to each other. With a framework you don't. But with a framework, if the predefined connection isn't working for your case, you spend way more time to change it. That is, if the framework is good.
But I also think your sentiment is somewhere between a generally wise piece of advice/wisdom and a vacuously true statement of fact. The latter because of course you should not reinvent the wheel if there's a perfectly fine wheel already out there.
Frankly, a lot of these things we call frameworks are actually fairly low quality. And I say that with the humility of knowing I couldn't do any better, but that doesn't make it less true. One of my "favorites" to shit on is the Java ecosystem. In particular, JDBC/Hibernate for ORM/SQL stuff, and Jackson for (de)serialization. Both are absolutely awful for various reasons, but my go-to example is that JDBC literally doesn't have an API for getting a nullable integer column value out of a ResultSet; instead, you have to get a non-nullable int primitive, which will be `0` if the database value was `NULL`, THEN you have to call a second method (`wasNull()`) to ask "Was the previously returned value from this ResultSet actually a NULL instead of whatever I got?". This manifests in Hibernate as requiring an easy-to-forget-or-misuse annotation on your DTO class.
My point is that some of these frameworks are a total nightmare for reliability and robustness, and I'm not convinced that just reflexively opting in to all of the "de facto" frameworks at the beginning of a software project is always the right move.
That path may or may not involve a framework. One common developer mistake is to use a framework not because it will save time, but because it's a brand name and they think it'll save them from having to learn about a particular problem space.
For example, any developer should be able to build a simple task queue in a couple of days. You can read about how they work, find a simple example or two, and code one. Will it have lots of features? No. Will it scale? No. Does any of this matter before you have product market fit? No.
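A minimal sketch of what such a no-frills queue might look like in JavaScript (all names here are illustrative, not any real library's API):

```javascript
// Minimal in-memory task queue: no persistence, no retries, no scaling --
// none of which matter before product-market fit.
class TaskQueue {
  constructor() {
    this.tasks = [];
    this.running = false;
  }

  // Enqueue an async job; start draining if we aren't already.
  push(job) {
    this.tasks.push(job);
    if (!this.running) this.drain();
  }

  async drain() {
    this.running = true;
    while (this.tasks.length > 0) {
      const job = this.tasks.shift();
      try {
        await job(); // run jobs one at a time, in FIFO order
      } catch (err) {
        console.error("task failed:", err); // simplest possible error policy
      }
    }
    this.running = false;
  }
}
```

If the thing takes off, replacing this with Sidekiq, Celery, SQS, or whatever fits is exactly the cheap swap a clean interface buys you.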
Along the way you make sure you have good separation of concerns in your overall architecture, so that _if_ it turns out there are many actual humans who want to pay for this idea, you can swap out what you wrote and replace it with something better.
As a bonus, by the time this happens you'll understand task queues (and in particular what your product needs one for) very well. So picking and implementing the right framework will be easy.
If a framework is the easiest way to get to something the market can validate, use it! If it's not, don't.
It's really not that hard to understand how any of the common building blocks of a modern web app works. To understand it deeply, sure, that may take 20 years, but at PMF that depth is not critical yet.
Let's be honest - 80% of the time when we talk about doing things "right" we are really talking about concurrency and scale. Premature optimization is the root of all evil and so on...
This is the key right here. Very little else matters. Where a lot of companies get this wrong is that they either pick a minimal framework and write everything in-house, only to have created a very unique monster that took a lot of time, or they go all in on frameworks without identifying the bottlenecks as they go along and wait until it's too late to make an easy transition.
Those can be mutually exclusive.
He wasn't wrong in his assessment of complexity, but the fact that he refused to acknowledge that the business priorities were the same between Google and the companies he called out absolutely baffled me. The gist, from my perspective, was that companies external to his own should bend over backwards for performance, while his should not, because his personal goals were tied to improving the performance of the web. Hopefully that's an over-simplification and I've missed something, but that's what I recall.
These are usually the narratives we tell ourselves to let ourselves off the hook.
Web is either missing or has screwed up too many common and expected GUI idioms: https://www.reddit.com/r/CRUDology/comments/10ze9hu/missing_...
We need a stateful GUI markup standard. It perhaps should piggy-back off the Tk or Qt GUI kits to avoid starting from scratch. Let's practice industry-wide KISS, DRY, and YAGNI. Past attempts are XAML (too static & convoluted), XUL (too convoluted), and QML (too proprietary).
With such a standard, developing GUI/CRUD apps could be more like using Delphi/Lazarus or WinForms, which are faaar more pleasant than webshit, at least for smallish projects. ("Enterprise" may need different tooling. One size doesn't fit all.)
Recent related HN story: https://news.ycombinator.com/item?id=34696635
If google can't get basic text input right on the web, there's something very broken there.
Having said that I have been looking at htmx recently.
I'm in the early stages of a proof-of-concept using MS-WinForms to write a GUI browser. Nothing special about WinForms; I'm just somewhat more familiar with it and C#, and there's a lot of web help for MS tooling. The demo won't be feature-rich; just enough to get a feel for the possibilities.
But in parallel, we can kick around what the "ideal" markup language could look like here:
https://www.reddit.com/r/CRUDology/comments/112ly2i/gui_mark...
jQuery allows websites to be interactive with minimal latency, but it's pretty low-level, so it leads to a lot of common problems: locking the main thread, undisciplined architectures, and a ton of issues with back/forward navigation.
SPA frameworks are developed to solve jQuery problems, but they introduce some new ones. Everything being JS is a big win, compared to split apps. But you have to deal with the up front cost of loading all the JS and a higher memory footprint.
And we iterate. SSR frameworks tone down the up front cost of SPAs in favor of more complex infrastructure. Frameworks that deploy to edge networks take another bite out of latency, but it isn't entirely clear what that will mean long term.
It feels so iterative that blaming framework popularity on malice is wild. There are real benefits to businesses, developers, and customers at every step along the path. Of course developers are going to follow along.
For instance, technologically, Flash is a much better platform for applications than the HTML+JavaScript combo will ever be. It failed, though, due to Adobe's bad market strategy. But Flash, too, was a band-aid on top of broken tech. It was obvious that the Web was not meant for what Flash was trying to do; Flash was just better at overcoming the problems than HTML+JavaScript is.
Web frameworks aren't here to add value, they are here to patch bad foundations. To deal with "defects" (or, rather the consequences of unintended use) of HTML and JavaScript. So, they will inevitably be bad, because they are trying to fix the problem they didn't create and that is beyond their power to fix. And as long as Web stays the application platform of choice, Web frameworks will cause a lot of resentment amongst their users.
Flash was very nice for some use-cases, but it leaked memory. It is not a good fit for long-running apps (probably why Flex/Adobe Air failed, too).
As someone who wants to advocate for standards instead of frameworks in my org, the problem I have is that the standards process is moving too slowly to address very basic and obvious developer needs. For example, the fact that Web Components are registered in a global namespace is really inconvenient in a large organization where you have many different teams making a large number of components. Sure, you can work around this with a BEM-like naming scheme, but you shouldn't have to. There's a great proposal for scoped custom element registries by one of the Lit developers that would actually address this issue, but it's been sitting in a repo for years without any meaningful activity and there doesn't appear to be any momentum around implementing it. Meanwhile, this is a thoroughly solved problem in the React and Vue ecosystems. That's just one small example, there are a ton of other papercuts and annoyances that there's no hope of fixing in the foreseeable future if solving them is going to involve a years-long slog through the standards process.
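The collision is easy to see in a toy model. This is not the real DOM API, just a stand-in registry that mimics the one behavior at issue: `customElements.define()` throws when a tag name is already taken, so two teams independently shipping a `<ui-button>` break the page.

```javascript
// Toy model of the browser's single global custom-element registry.
// Real browsers throw a NotSupportedError on a duplicate tag name.
const registry = {
  names: new Set(),
  define(name, ctor) {
    if (this.names.has(name)) {
      throw new Error(`"${name}" has already been defined`);
    }
    this.names.add(name);
  },
};

class TeamAButton {}
class TeamBButton {}

registry.define("ui-button", TeamAButton); // team A ships first: fine
let collided = false;
try {
  registry.define("ui-button", TeamBButton); // team B's unrelated component
} catch (e) {
  collided = true; // second define blows up at runtime
}
```

Hence the BEM-like workaround of prefixing every tag (`<checkout-ui-button>`), which scoped registries would make unnecessary.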
The DOM part spec that emerged from Apple's template instantiation proposal has the potential to provide an extremely efficient standards-based target for React-like frameworks that would solve a lot of the performance problems, especially if it's used judiciously for just the dynamic parts of the page in conjunction with server rendering and declarative shadow DOM. But nobody seems to be working on making that a reality, activity on that spec is pretty much dormant. React is going to keep winning until the standards furnish comparable ergonomics.
Meanwhile, frameworks and libs (even those that were originally very supportive of the idea of web components) have moved past WCs and are exploring approaches that WCs will never ever be able to provide, such as granular reactivity, seamless frontend-backend integration, or even eschewing components as a rendering primitive entirely.
> React is going to keep winning until the standards furnish comparable ergonomics.
In all honesty, Scoped CSS + Nested CSS proposals coupled with https://open-ui.org would've solved 99% of what web components are trying to be.
Meanwhile we're about 20 years away from WCs as a finished thing: https://w3c.github.io/webcomponents-cg/2022.html
Authenticated content is likely built this way for a mix of reasons. It can't be indexed, so it doesn't need to optimize for SEO. Users that log in are higher-intent, so they're less likely to bounce if the page loads slowly.
It's definitely more complex than just these things, but the inverse (an app that requires SEO and minimal upfront cost) is often used as justification for moving away from modern frameworks.
And it shows the same upsides and downsides as evolution versus, e.g., gradient descent: you avoid getting stuck in local optima, because people will always try out new stuff just to create something interesting (in business it's hard without clear evidence it will bring money), but finding an optimal solution is slow.
It is much easier to attain certain goals by having a well managed team who report to you directly or indirectly and you can order them basically what to do and create a coherent vision for what they produce.
Whilst I strongly agree SPAs are overused, let's take a moment to consider this sorry state of affairs. It's 2023 and when you ship some 50-200KB of executable code, you're in the red.
This is like 1/7th of a damn floppy disk, an era I've lived through. Mobile apps are hundreds of MB in size, and we're supposed to compete with that using sticks and stones. Recently my Razer mouse had a driver update; it was 2GB in size. We learn about Unity on desktop, now able to render 17 trillion polygons per second, yet we struggle to render some state and squares on the mobile web.
That's the real issue. Big parts of the mobile web are severely underpowered yet we still want to ship the "rich application" paradigm to them. Worse, without web standards offering anything remotely useful for this model.
That said, the idea that SPAs offer a superior experience that users are somehow demanding is bullshit when you consider how a typical SPA works. Consider a simple navigation action (route change) where the standard argument in favor of SPAs is that these are much faster compared to a traditional server-side page reload.
First, the SPA might need to load a new bundle, as the code-splitting best practice is to lazily load per-route code. This client code needs to be downloaded, parsed, and executed before anything even starts to happen. Next, the route-specific components mount. Many will require remote data, so while at this point the page change may feel fast, the page is useless and will remain useless until all remote network calls have resolved.
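The waterfall can be sketched like this. The loaders are injected stubs (placeholder names, not a real framework API); the point is that each step only starts after the previous one finishes:

```javascript
// Sketch of an SPA route change: three serial awaits the user waits through.
async function navigate(route, { loadModule, fetchData }) {
  const timeline = [];
  const mod = await loadModule(route); // 1. download + parse the lazy bundle
  timeline.push("bundle");
  mod.mount();                         // 2. mount the route's components
  timeline.push("mount");
  const data = await fetchData(route); // 3. only now do data calls start
  timeline.push("data");
  mod.render(data);                    // page is useful only from this point
  return timeline;
}
```

Each `await` here is user-visible latency stacked on top of the previous one.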
Compare this to a traditional, well-optimized MPA. You navigate to another route. The server will come back with ALL data/state resolved as well as all rendering (HTML) already done. And as this server response comes in, the browser is already working on the DOM layout/paint process, which is not true for a REST call. Plus, back/forward works reliably, scroll position works, memory leaks are no issue nor are stale tabs.
It's really debatable whether the SPA has the better experience or that users demand it. The point is to deliver meaningful content and interactions, that's the true value of whatever you are building. As such, I think the new wave of SSR/hybrid frameworks are a step in the right direction.
I think the issue here is that with an MPA you may get a white page for a while, or a page with pieces missing in unexpected ways; even if the overall loading is faster, it looks more broken.
I'm sure React is great, and Angular as well, and Vue, but they have grown to be all-encompassing, so that a developer is never stuck needing to rewrite everything. Many frameworks start out small, so people can understand and easily learn them. Then as time goes by the framework grows until it becomes too big, someone once again feels the need for a lighter framework, and the cycle repeats.
Currently I'm trying to build a VERY small web app. It does need a tiny bit of JavaScript to pull in new data every minute or so. It's much much easier to just forget about using a framework. It would take more time learning how you start a React or Angular project, than just learning the bits of modern JavaScript I need. Then I also don't need to bother with npm, webpack or any other packaging and build non-sense a framework would try to impose on me. That save even more time.
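For an app that small, the whole "framework" can be a couple of vanilla functions. A sketch, where the endpoint `/api/status` and the element id `status` are placeholders:

```javascript
// Refresh a status element every minute with plain fetch + setInterval.

// Pure formatting step, kept separate so it's trivial to test.
function formatStatus(data) {
  return `${data.name}: ${data.count} items`;
}

async function refresh() {
  const res = await fetch("/api/status");
  const data = await res.json();
  document.getElementById("status").textContent = formatStatus(data);
}

// Only start polling in a browser context.
if (typeof document !== "undefined") {
  refresh();
  setInterval(refresh, 60_000);
}
```

No npm, no bundler, no build step; the file is served as-is.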
The article is correct that something like React will save developer time if the project is large enough. If not, simply figuring out which framework to pick and learning how to set it up will constitute the majority of the time spent on the project.
That time has already been spent, because you hired React developers. They will be fastest pulling in React. Really, the argument could end here. Yes, there is overhead in using it, and things can get out of hand, but they do not have to. If they do get out of hand, would vanilla web technology have prevented that?
I'm not entirely convinced that this is true, either. At least not in some general, a priori, way.
Doing stuff "the right way" with React or any other framework probably still involves a fair amount of ceremony, boilerplate, and testing. Would your hypothetical React dev be faster with React than some other framework? Probably--if that other framework is similar in scope and complexity. But, otherwise, I just wouldn't assume that an expert in FrameworkX will always do better using FrameworkX without regard to the specifics of the project.
In the end it is a problem of multiple, near equivalent, solutions with no sharp-enough success criteria. This makes the traditional approach of using "the right tool for the right job" lose its edge.
Why don't we have good-enough filters for selecting technologies and designs? Ultimately it must be that people's expectations are low. Why is that? One reason might be that the predominant value proposition is still dominated by the novelty introduced by global (mobile) connectivity. That is exemplified by minimalistic interfaces: the value is in the connectivity, not the information post-processing.
Once people get used to that basic fact they will start being more demanding and that might give rise to more differentiated criteria. In a sense right now there is much latent potential with existing tech that does not yet express fully because the conditions are not conducive.
Try to convince modern developers who build Gutenberg blocks to use React-free libraries or frameworks for their component libraries or front-end.
frameworks are all low-effort fun until you need to do some Hard Problem that they can't handle well, then suddenly the effort shoots up massively, because now you're trying to figure out how to do your hard problem within the structure and using the primitives of the framework.
bespoke solutions start off high-effort because you have to reinvent the proverbial wheel, but then as you keep rolling, it gets easier, and you have something that is better-architected for your exact problem. when the same kind of Hard Problem arises, you can solve it much more easily, because you're unburdened by trying to shoehorn the solution into the structure and primitives of someone else's framework.
Mainly I think this boils down to there not being much actual innovation. Usually whatever framework you pick is fine: so long as it has sticking power/can survive, it probably won't get too far in the way, and it probably won't provide a major boost either, other than giving you some frame of reference to adhere to, some theme to riff off of. Sometimes frameworks overreach, promise too much, and collapse; sometimes they wither; but a huge number stick around and just keep doing what they're doing, and few have real differentiating power. Although they are all written in their own distinct opinionated styles, they're usually not very important.
Here's some topics we could be making head-way on to make webapps better: incremental loading, early hints, url based routing, service-worker services, off-main thread services. There's more experimental areas: custom elements, p2p data-channels, offline capable apps. Web Share & Web Share Target and protocol handler capable apps. Multi-window placement apps, PIP apps. Reactive systems (mobx). IoC/Dependency Injection. WebBundles/WebPackage, Signed Exchanges, Http Signatures. Some of these wander far from the central core of what a framework might need, but others could be compelling dynamic parts. But the current position of frameworks, what they consider their purview, is very narrow. Personally I think the modern client stack should look a lot more like a web-server than it presently does.
Right now we still write code like it's a process, but the web browser is really a multi-process environment. We have yet to see frameworks that really take that possibility and aim for it. One of wasm's most interesting possibilities is that we end up not with big processes ported to wasm, but lots of smaller, independent processes communicating. I think the server-side people have thought about that and are excited for those very lightweight virtual machines, but I don't see the front end thinking about how to decouple & unbundle & un-monolithize itself. There are ideas of portals and front-end microservices, but these still often describe fairly conventional webapp architectures, just many at once: they don't tackle interconnection & shared services, a network of front-end microservices/microapps.
The whole SSR thing is interesting, and good work is happening to reshuffle & re-explore, but again, I think interest arose here in part because the client side failed to keep pioneering, failed to be exciting, so this was just an open/available other place to go explore. And while it has lots of virtues, I don't think it's actually that exciting or important, not nearly as much as improving the client side, where the user agency is actually seated. The frameworks need to expand, especially client side. We all got safe & conservative, started applying the industrialized tools that work, and the hotbed of exploration & innovation got demolished, was squelched. And we all had our energy sapped figuring out how to build & bundle our stuff via endless tooling, which we're still only so-so at (still, nothing would be as nice/easy as EcmaScript Modules that just work with source-as-it-is-authored, which Deno seemingly sort of pulls off, but the web is still far from).
I especially enjoyed the part about building frontend apps more as a server of communicating processes.
My interest is multithreading and parallelism. If I could write frontend software in the style of processes, that would be awesome.