edit/mass reply: I've been coding web apps with the rest of you guys for 20 years. The web isn't the problem, the tooling just isn't there yet. The solution space is large and we're still in the 'throw things at the wall' stage. It will eventually be figured out and web dev will be nice and stable just like the backend and database layers.
Clearly, there are many things wrong with that, but the fact that one could whip up several screens and essentially ship them in a few hours makes it a worthwhile exercise to think about what has been gained, and what lost. Most enterprise apps that I encounter seem massively overcomplicated for what they actually do.
I think the big issue is every app I use/develop in the enterprise needs single sign on and fine grained access control. Those modern requirements create a minimum layer of complexity that really slows things down.
But yeah, the whole enterprise programming world really missed an opportunity with JavaFX. For all the shit it gets, being able to deploy desktop apps in a virtual environment is really powerful.
FWIW, before there was Delphi there was an entire language (and consulting industry) built around this: PowerBuilder. Its primary component was a "DataWindow" which was a fancy presentation object around basic CRUD SQL. Good times were had by many.
Local dev: the Vue.js frontend auto-refreshes (`yarn serve`), and the backend auto-refreshes with `air` if Go-based, or with IntelliJ's debug hot-reload if JVM-based. I can whip up a db schema, backend, and frontend in a day with a good amount of layout and styling. It just took repetition and templating a few things.
Which is to say it doesn't take a lot of tooling to be productive--as long as it's focused.
I'm quite hesitant about using web-based interfaces to remote services, and I think that's healthy and fine. As a developer, of course it's less complicated not to be negotiating network communication in order to create an application!
Now, consider that there are far more new people coming into programming each year than the year before (or, even if that's not strictly true year over year, there are far more software engineers with 1-10 years of experience than with 10-20, possibly even more than with 10+).
In a market like that, ease of use is paramount: it's the killer feature that drives almost all your usage. Catering to amateurs is the path to increased usage, mind-share, and market-share where applicable. With that in mind, is it really any wonder that amateurs are catered to so much that there's actually a regression in the tools that cater to professionals?
My guess is that if you look into subgroups which are not friendly to new users for one reason or another (most commonly because of complexity and required skill), yet still have retained users for some reason, you'll find high-quality professional tools. I think C/C++ probably fits this, as well as systems programming and kernel development. I'm not part of any of those subgroups, but my guess is they have not only retained the quality of their supplemental tools, but improved it.
However, it is still largely ignored as good practice, in a world where teachers still use IDEs like Turbo C++ as teaching tools.
Similarly, Smalltalk and Common Lisp are super fast for developing things. Maybe they relied too much on their superior environments, which other languages cannot easily replicate, so the more generic but less productive technologies are the ones that lasted?
I am not sure what headache you are talking about. My products are usually a single exe, with dependencies statically linked, acting in a dual role: the setup and the end software itself. Installation goes like this:
1) customer clicks on link
2) setup.exe is downloaded and run.
3) setup copies itself to a proper location and renames itself to yourwonderfulsoftware.exe.
4) When running, it may communicate with servers to get whatever data/files/licenses it needs, if any.
5) It also checks for a new version and, if there is one, it can self-update when you click the "update" button.
6) Before an update starts, the old version and data are always backed up.
Yes, it does lack the instant-gratification experience that comes with good web applications. But then again, if the software I want to make looks like a good candidate for the web, I will implement it as a web app.
In the hopefully not-too-distant future, WebAssembly may make the difference minimal, but we will have to wait and see how it goes.
They recently changed their licensing AGAIN: https://www.qt.io/blog/qt-offering-changes-2020
GTK bindings via PyGObject also work on macOS[1], if Qt's licensing doesn't suit you.
[1] https://pygobject.readthedocs.io/en/latest/getting_started.h...
Only downside is that it is Windows only.
Looking back to VB and other WinForms RAD tools it's easy to do that stuff and there are HTML WYSIWYG tools but that double-click code behind logic doesn't scale - software these days is distributed, has more complex requirements and expectations. Once you bolt MVC or MVVM or whatever to one of those GUI toolkits you get very close to modern JS framework complexity.
In my opinion what has exploded the complexity is the proliferation of environments. The execution environment of our software provides very few guarantees on what is available (no standard library) or even what language is supported (many JavaScript features and versions with varying support). That, combined with the explosion of devices including input modes, screen sizes, and resolutions has just made it extraordinarily difficult.
We don't even have standard UI primitives like we did in the past. Every major website is expected to have a team of world-class designers and reinvent the wheel.
It doesn't need to be this way. But it's the way we have chosen. It has advantages, but I'd imagine the economic cost is enormous.
Sure, there's always a tension of "simple" versus "limited". But simple was fun.
Until I knew more later and it became a disaster.
That aside, things have changed rapidly in the last five years in the C#/.NET world first with .NET Native then with .NET Core and CoreRT. Exciting times, really.
So far they are demoing single file release, which is basically packing everything into the same exe, but you get a JIT + MSIL instead.
Ironically, this comes at the same time that C# the language has become much more usable without GC, or with minimal GC, thanks to the work that went into implementing Span<T>. I think that was more a matter of necessity to support advanced async features for web usage, although I found it also made P/Invoke a joy and eliminated virtually all my need for marshaling in a few codebases, and it would have eliminated all the performance issues that led the OS team to abandon C#.
It does seem that the ASP/Blazor team is driving the show and calling the shots after UWP's failure in terms of adoption, and I'm not seeing much that would indicate otherwise, even with Project Reunion.
I've been testing WinUI 3, MSIX, and WebView2 and have been disappointed by the lack of a story for putting all the parts together. It seems like side-loading packages with sparse package projects is intended to replace "native" UWP packages ("regular" AppX packages require .NET Native unless side-loaded, and I can't get apps pulling in WinUI/STJ/Buffers/etc. code to compile to .NET Native without an undeclared dependency on System.Private.CoreLib and without serious hacks to enable RTTI, which makes me think they're not meant to be used that way any longer). But as always, MS isn't very forthcoming about the future of UWP components more than a single step at a time, with all bets clearly hedged.
Too bad there are just a few quality interviews with him: https://www.artima.com/intv/anders.html
[1] https://channel9.msdn.com/Search?term="Anders+Hejlsberg"
[2] https://channel9.msdn.com/Blogs/Seth-Juarez/Anders-Hejlsberg...
1: https://www.amazon.com/Masterminds-Programming-Conversations...
For me that is the case. Back then, Delphi really was the greatest, smartest, and easiest RAD IDE. I even think that today's dev tools are in a bad state and nothing matches what we once had, especially for easily creating 'responsive' GUI apps.
Then, as shitty companies always do, they completely changed the tool and the language with the new, bad, .NET-like one. Wrongly, they claimed this was better than what we had before and forced it on users. But it was no longer the Object Pascal that we liked!
Sometimes you just have to make do with what your customers want. And besides, C# is IMHO the best computer language ever invented.
[1] http://techrights.org/2009/09/14/ms-admits-draining-to-destr...
Anders has a couple of interviews where he mentions that he resisted invitations from ex-Borland colleagues who had moved to Microsoft.
It was only when Borland stopped being what it used to be that he decided it was time to move on.
It was a big mistake not embracing AOT from day one. C++ wouldn't have kept its king position in the MS ecosystem if the original .NET had been like .NET Native from the beginning and had kept Delphi-like features for low-level coding (some of which have been added since C# 7.x).
JIT support could still be an additional option as well, just like on languages like Eiffel.
Instead we got NGEN (with basic optimizations), .NET Native (out of Midori, but which seems to be on a death march with Project Reunion), MDIL/Bartok (out of Singularity, used only in Windows 8.x), and all remaining efforts are from third parties: Xamarin pre-acquisition, Unity, Cosmos.
And no one really knows if CoreRT will ever be integrated into the main product.
Case in point: when Windows RT jailbreak came out, my .NET Framework AnyCPU apps just worked there. Now when I package, I very often have to list the target architectures in advance.
When I hear about Java announcing a JIT written in Java and catching up with C# on syntax sugar, I feel that C# + .NET might start to lag behind in innovation.
Then there was Axum, Cω, the Phoenix compiler (an LLVM-like for .NET), Singularity, Midori, Roslyn, MDIL, .NET Native.
GraalVM goes back to Maxine VM and JikesRVM, so JITs written in Java are also quite old.
What all these projects need is money and the political willingness to keep driving them forward, and here is probably the main issue with some .NET research projects: since the beginning, the Windows development teams (which kind of own the C and C++ story) haven't been that willing to have too much .NET on their turf.
First, it is not friendly to the web; we have some special web requirements, so we decided to spawn a Node process to handle them.
Second, the tooling is so much better in Visual Studio, and the compiler is much smarter, with sophisticated semantic and syntactic analysis.
Last but not least, it really lacks third-party libraries, so people always need to implement things themselves.
C# may not be the best option for every application, but it is general enough to support almost every type of application.
He never got around to it.
And this isn't theoretical--Altium actually did a full code rewrite in order to get off of Delphi because of this.
[0] https://docs.microsoft.com/en-us/dotnet/framework/configure-...
Satire and (2016)
And the title included "C# coders". I am not a C# coder, but all the C# GUI coders I know have praised Delphi for its design and convenience for GUI work...
edit:grammar
That was an additional custom-controls library that you could enable; you could choose to use the standard L&F as well.
It was also available for the C++ products from Borland and it traces back to their first Windows 3.x compilers.
My first experience with it was in Turbo Pascal for Windows 1.5 (the last TP before Delphi was born).
Also, similar libraries existed for MFC or plain C Win32, sold by companies like ComponentOne.
Also, the hello-world apps for Delphi had these icons everywhere, so it was "just the way" you built applications using it.
A standard button is a TButton: http://docwiki.embarcadero.com/Libraries/Sydney/en/Vcl.StdCt...
A button with a glyph is a TBitBtn (bit meaning bitmap): http://docwiki.embarcadero.com/Libraries/Sydney/en/Vcl.Butto...
Made it easy to spot a Delphi app.
The author also misses the point that C and C++ are systems programming languages (for developing operating systems, device drivers, and low-level stuff such as compilers), while Pascal is an application programming language. C and C++ were pressed into service for application development because many nerds thought it cool to have the fastest benchmark results, ignoring the fact that these languages are unsafe to use on a day-to-day basis. That's why C# and Java were invented.
Pascal is a totally fine systems language. AEGIS (a from-scratch Unix-like OS) was written in Pascal and by all accounts was a great option at the time.
And what makes C and C++ unsafe exists in Pascal.
These days you can probably mix assembly within your Pascal code or access pointers directly with some custom language extensions, but these weren't part of the original Pascal specification.
C# appeared after Sun sued Microsoft for trying to embrace/extend Java.