I do think they must have had similar intuitions in the beginning, then identified big challenges with that approach and concluded that overcoming them in a streamlined way would require some set of features or optimizations. It seems, though, that these emerged in isolation? At least enough to cross the threshold to saying "there's a way".
Seeing how long Valhalla has been in development, I'm curious how things went the way they did. The article mentions hindsight but that alone doesn't explain the change in direction.
Discussion on the mailing list: https://mail.openjdk.org/pipermail/valhalla-spec-experts/202...
Java generics took a shortcut by reusing the old classes and just layering generics on in the language while erasing all useful type information from the runtime. That is what's coming back to bite them now that practice has shown flat memory layouts to be highly beneficial for performance, as CPU speeds have outstripped memory latencies.
A List<int> in C# will be 2 objects: the List object and the underlying int[] array of sequential numbers. The Java equivalent, a List<Integer>, will be 2+N objects: the List object, the Object[] array, and N boxed Integer objects. If those N Integer objects are scattered in memory, traversing the integers will be far more expensive due to memory latency.
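The boxing overhead described above is visible in plain Java today. A minimal sketch (Java has no `List<int>`, so the boxed `List<Integer>` stands in for it; the class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxedVsFlat {
    public static void main(String[] args) {
        // The list holds references to N separately allocated Integer
        // objects, which may be scattered across the heap.
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            boxed.add(i); // autoboxing allocates (or reuses cached) Integer objects
        }

        // An int[] stores all 1000 values contiguously, so a linear scan
        // walks sequential memory and stays cache-friendly.
        int[] flat = new int[1_000];
        for (int i = 0; i < flat.length; i++) {
            flat[i] = i;
        }

        long sumBoxed = 0, sumFlat = 0;
        for (int v : boxed) sumBoxed += v; // each iteration unboxes
        for (int v : flat)  sumFlat  += v;
        System.out.println(sumBoxed == sumFlat); // prints: true
    }
}
```

The two loops compute the same sum; the difference is purely in memory layout, which is exactly what Valhalla's flattening targets.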
Microsoft decided not to delay the release waiting for them to get ready.
Don Syme of F# fame has a couple of blog posts with the history of generics in .NET, as he was part of the original design team.
This alone lets one do things like freely copy/share/modify them, which directly allows for flattening.
In Java, value types are immutable, so the VM can pass them by reference or by copy; as a user you don't see the difference. They are also backward compatible (per the article): you don't need to recompile user code when you replace an existing class with a value class.
C# has had value types since the beginning, so its primitive types are value types, while in Java it seems there will be 3 kinds of types instead of 2.
[1] https://learn.microsoft.com/en-us/dotnet/csharp/programming-...
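The closest approximation to this in released Java is a record: shallowly immutable with state-based equality. A sketch of the distinction (Valhalla's value classes would go further by also giving up object identity, which is what permits flattening):

```java
public class ValueLikeRecord {
    // A record is shallowly immutable (all fields are final) and gets
    // state-based equals/hashCode generated for it.
    record Point(double x, double y) { }

    public static void main(String[] args) {
        Point p = new Point(1.0, 2.0);
        Point q = new Point(1.0, 2.0);
        System.out.println(p.equals(q)); // prints: true (compared by state)
        // p == q is still an identity comparison today: p and q are two
        // distinct heap objects. For a value class the JVM would be free
        // to treat two states-equal instances as the same value.
    }
}
```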
My understanding is that moving from 1.1 to 2.0 (which introduced reified generics) required work from library developers.
It was also done in the relative infancy of the ecosystem.
But I trust some C# old-timer to tell me how it actually was; I only remember that some tools I used insisted on having an older .NET runtime installed.
Java generics use erasure, and they are backwards-compatible with non-generics code. You can still write `var l = new ArrayList();` in the latest Java versions; you’ll get a compiler warning, but the code will compile and run just as code using `ArrayList<Object>` would. C# uses reified generics (which are faster, saner, and more expressive), and the standard collections exist in two namespaces (System.Collections vs System.Collections.Generic). If you need to work with legacy code that uses the non-generic types, System.Collections.Generic.List<T> implements System.Collections.IList (but that code needs to be smart enough to ask for the IList interface instead of the concrete System.Collections.ArrayList implementation).
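The Java side of this is easy to demonstrate: the raw pre-generics style still compiles and runs, and because of erasure the raw and generic lists are instances of the very same runtime class (class names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class RawAndGeneric {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Pre-generics style: compiles (with an "unchecked" warning
        // unless suppressed) and runs fine on the latest JDK.
        List raw = new ArrayList();
        raw.add("hello");

        // Generic style: the type argument is erased at compile time,
        // so at runtime both lists share one class.
        List<Object> generic = new ArrayList<>();
        generic.add("hello");

        System.out.println(raw.getClass() == generic.getClass()); // prints: true
    }
}
```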
1. Forward compatibility: While backward compatibility means that old code continues to run, and is therefore relatively easy to achieve (add the new feature "on the side"), forward compatibility, or migration compatibility, means making it easy for old code to take advantage of new features with little or no change.
2. Simplicity: For every feature (added to help, say, efficiency) that makes an advanced developer happy, you risk scaring away ten less-advanced developers. Complex languages are also rarely taught as first languages, which makes it very hard for them to reach, or remain at, the very top of the most popular languages. Eventually every language faces requirements that demand a new feature that makes it more complex; but if great care is not taken to control the added complexity as much as possible (or even to give up on the feature as not being worth the cost in complexity), the language eventually faces a threat to its popularity.
While Java has always cared about these two, .NET not so much (maybe they're right not to, but in any event that's a real philosophical/cultural difference between the two platforms). For example, in .NET, to make simple blocking code enjoy the scalability benefits of async/await you need to change a lot of things; that wasn't acceptable for us. And MS has always been a fan of rather complex languages (with the exception of VB); as early as the late eighties, MS was "the C++ company".
Now, to be more specific, one of the big challenges of value types is object initialisation, which is what John's article is primarily about. If you create an array of a value type, the elements must be initialised to some value which isn't null. How do you express that in a way that is relatively forward compatible, simple, but also efficient? Furthermore, as John points out, even for value classes that don't admit a "zero" default value, how do you express the constructor for an immutable type that can be flattened? It's easy to do this for reference records because reference types are always "published" to readers with an atomic write of the pointer, but value types (which, again, are immutable) need to initialise their fields atomically even when flattened.
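Current Java shows the two existing default-value regimes that a flattened value-class array falls between (a sketch in released Java, since value classes aren't available yet):

```java
public class DefaultValues {
    public static void main(String[] args) {
        // Primitives have a natural all-zero default, so a freshly
        // allocated array is usable immediately...
        int[] ints = new int[3];
        System.out.println(ints[0]); // prints: 0

        // ...and reference arrays are born full of nulls.
        String[] strings = new String[3];
        System.out.println(strings[0]); // prints: null

        // A flattened array of a value class has neither escape hatch:
        // its slots cannot hold null, and the class may not admit a
        // meaningful "zero" instance -- the initialisation problem the
        // article wrestles with.
    }
}
```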
BTW, generic erasure is not a big challenge. While value types will eventually (in a later phase) require specialised generics (because you want, say, an ArrayList of a complex-number value type to internally use a flattened array), adding that is not a huge problem for forward compatibility and simplicity, because value types are invariant, i.e. they can neither subclass nor be subclassed, so the complexity and forward-compatibility issues that reification brings to variant types aren't as bad (the problem with reifying generics that can be variant is that you need to bake a particular language's variance strategy into the runtime itself). Erased generics have so far helped Java much more than they've hurt it, because we keep seeing their forward-compatibility benefits over and over (including for future features that we're thinking about). They also help simplicity: they make the runtime an attractive compilation target for complex languages, drawing in those who prefer such languages while keeping them on the platform, and at the same time reducing the pressure on Java to add complex features that could threaten its popularity among the majority who prefer simpler languages.
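Erasure is easy to observe directly: after compilation there is exactly one ArrayList class at runtime, regardless of the type argument, which is what the later specialisation phase would change for value classes:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        // Different type arguments, same erased runtime class.
        List<String> strings = new ArrayList<>();
        List<Integer> numbers = new ArrayList<>();
        System.out.println(strings.getClass() == numbers.getClass()); // prints: true
        // Specialised generics would let an ArrayList of a value class
        // back itself with a flat array instead of an Object[].
    }
}
```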
If at any point you make the raw bytes accessible to someone, even if it's just to copy them somewhere, it becomes possible to mutate an immutable value, and users will eventually take advantage of that, if only because they came up with a really good reason why they need to (for example, initializing readonly values in a deserialization function).
I don't think .NET could have ever adopted true immutability for structs because the rules would be so easy to break. There are "readonly" fields but they're at best a railing to keep you from falling off - there are trivial ways to bypass those protections.
I do think immutability can be a valuable property though so it's cool to see the Java folks doing the hard work to execute on it.
To me it was always ironic that they got perceived like that, given that C++ was born at Bell Labs alongside C and UNIX, CORBA was born on UNIX, IBM and Apple pushed C++ just as hard, and Borland always had better C++ tooling (to this day, even as Embarcadero) than Microsoft has managed during the last 30 years.
Now it is certainly true that WinDev is pretty much a C++ shop, to the detriment of anything else, including .NET.
https://www.youtube.com/watch?v=XL2zzFaybdE&ab_channel=EduMa...