It's much more useful to users to say "2.0 introduced generics"; it's distinct. If Go is like other languages, generics will change the code people write a lot, and libraries will start looking significantly different. It's very distinct, and if that lands as simply version 1.18.0 or whatever, that's really bad usability from a language perspective.
Languages and APIs (things you program against) are pretty much the things for which SemVer makes sense.
> The major version should represent major language changes, not whether it's a breaking change or not
I don’t care if changes are “major”, I care if the code I wrote for version X is expected to need modification to work correctly in version Y. SemVer gives me that, Subjective Importance Versioning does not.
I like this because it emphasizes the community's commitment to backwards compatibility, which I greatly value. I've spent a good deal of time writing JavaScript, where library developers seem to have very little respect for their users and constantly break backwards compatibility. In ecosystems like that, upgrading fills me with dread. When I see a library on version 4, I have learned to keep looking - if they weren't thoughtful enough about their API design for the first 3 major releases, I shouldn't expect it to be much better going forwards.
For an application, I'm pretty open to version numbers signifying big features - Firefox and Chrome do this, and it's helpful with marketing. But for a programming language? A programming language is a tool, and when upgrading you need to carefully read the changelog anyways. A programming language is no different from a library (in Clojure it literally is a library), and backwards compatibility is /literally/ the main thing I care about. Is my tool going to intrude on /my/ schedule, and force me to make changes /it/ wants instead of being able to spend my time making changes /I/ care about? I want to know that.
[0] This is apparently an awful example, as I've just learned that Java is actually doing the major-version-only thing. It still sort of works, because the only reason they can do that is that they Will Not Break Compatibility.
17 -> 1.17, 11 -> 1.11, this is bothering me way too much for no good reason.
https://docs.oracle.com/en/java/javase/17/language/java-lang...
https://docs.oracle.com/en/java/javase/17/migrate/getting-st...
The x.y versioning, with y being a synonym for the major version, was abandoned in Java 9.
I don't agree. I usually don't care so much when a particular feature was introduced into a language (and if I do, it's usually a Wikipedia search away). I mostly care whether or not code written assuming version X can be compiled with version Y of the compiler. Semantic versioning can tell me the latter. Making versioning arbitrarily depend on what someone considers a "big" feature doesn't help me.
I care very much when a feature was introduced into a language, because maintaining compatibility with earlier versions of the language determines what features may be used. If I'm working on a library that needs to be compatible with C++03, then that means avoiding smart pointers and rvalues. If I'm working on a library that needs to be compatible with C++11, then I need to write my own make_unique(). If I'm working on a library that needs to be compatible with C++14, then I need to avoid using structured bindings.
If a project allows breaking backwards compatibility, then SemVer is a great way to put that information front and center. If a project considers backwards compatibility to be a given, then there's no point in having a constant value hanging out in front of the version number.
> I mostly care whether or not code written assuming version X can be compiled with version Y of the compiler.
Semantic versioning can only tell that for the case where X < Y (old code on new compiler). In order to determine it for X > Y (new code on old compiler), you need to know when features were introduced.
The PR version doesn't even have to be numeric. You can give them proper names.
A language update comes with the most fundamental set of libraries and APIs: the standard library (doubly so in Golang, which has a lot of batteries included).
It also potentially affects the behavior (if there are breaking changes) of all other third party libs.
The "silliness" part is a non sequitur from what preceded it (and the following arguments don't justify it either).
>Your version is not really telling you the main things you care about.
The main thing (nay, only thing) I care about (for my existing code) from a language update is whether there were breaking changes.
I could not care less to have reflected in the version number whether a big non-breaking feature was introduced.
I can read about it and adopt it (or not) whether there's an accompanying big version number change or not.
>It's much much more useful to the users to say, 2.0 introduced generics, it's distinct.
That's quite irrelevant, isn't it?
It's not useful to users that follow the language (page, forums, blogs, etc.) and would already know which release introduced generics.
And it's also not useful to new users that get started with generics from day one of their Go use either.
So who would it be useful to?
Such a use would make the version number the equivalent of a "we got big new feature for you" blog post.
Why?
Old code still works, and unless you are purposefully maintaining an old system you are expected to use the latest version anyway. What does it actually change that generics were introduced in version 1.18 rather than 2.0? From now on, Go has generics. As there is no breaking change, it's not like you had to keep using the previous version to opt out.
If semantic versioning is used correctly, like here, that's actually a reasonable-ish attitude.
Since backwards compatibility is already a given for languages, you can then have the major version number indicate feature additions, rather than always being a constant value as semantic versioning would require.
Languages are software; they are dependencies of other software (the only unavoidable dependency!) and as such should absolutely be versioned.
Versioning isn't for marketing or providing easy ways for users to remember when features were released. It's a tool for change management. Exciting features often come with breaking changes, but not vice versa.
Semantic versioning is an approach to versioning. It's an approach which, as GP stated, was designed specifically to help with dependency updating.
GP isn't proposing that languages shouldn't be versioned, they're saying that semantic versioning is the wrong approach to versioning for a language.
This is actually very important. Whether something is a major change or not is pretty subjective.
I'm afraid that expectation isn't entirely warranted. Especially around standard library issues.
Why?
Additive changes can quite easily become breaking changes in practice: the additions get adopted by dependencies within a minor version range, automated tooling needs to distinguish their presence, and documentation fragments across versions.
My next biggest gripe with semver—that 0.y.z has entirely different semantics from any other major version—may actually be semantically better if adopted wholesale. If your interface changes, major version bump. Else you’re fixing bugs or otherwise striving to meet extant expectations.
Major language changes almost always imply breaking changes. Python 2 to 3 was a major change that broke things everywhere: how modules worked, where they lived, and some syntactic and fundamental behavior as well.
1. Set a minimum version in go.mod.
2. Add a build tag for what to do on new/old versions of Go (these tags are automatic; you just need to set them in the files).
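A minimal sketch of the build-tag half (the function and its body are illustrative, not from the original comment); the `go` directive in go.mod (e.g. `go 1.17`) covers step 1 by declaring the minimum toolchain version:

```go
// This file is only compiled by Go 1.18 and later, thanks to the
// build constraint below; a sibling file with the inverse constraint
// (!go1.18) could provide a fallback implementation. The release tags
// (go1.18, go1.17, ...) are set automatically by the toolchain.

//go:build go1.18

package main

import "fmt"

// Max uses type parameters, a Go 1.18 language feature.
func Max[T int | float64](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(2, 3)) // 3
}
```

Older toolchains never see this file at all, so the module keeps building for them as long as the fallback file exists.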
When a language adds any features, if your dependencies (whether real library dependencies or just things you're copying from Stack Overflow) start using the new features, you must upgrade to the new language version. That is an inherent usability constraint, and every time a language designer chooses to add a feature, they're making a tradeoff. But if upgrading to the new language version is trivial, then it's generally a worthwhile tradeoff.
For instance, suppose I find some code that uses Python's removeprefix() method on strings. I need to use Python 3.9 or newer to use that code. It doesn't matter that this is a very small feature.
However, I can generally expect to upgrade my Python 3.8 code to Python 3.9 without trouble. It's different from, say, code that uses Unicode strings. For that code, I need to upgrade from Python 2 to Python 3, which I can expect to cause me trouble. The version numbers communicate that. It's true that Python 3 was a "big" change - but "big" isn't really the point. The point is that I can't use Python 2 code directly with Python 3 code, but I can use Python 3.8 code directly with Python 3.9 code. There are plenty of "big" changes happening within the Python 3 series, such as async support, that were made available in a backwards-compatible manner.
As it happens, Python does not use semantic versioning. But they have a deprecation policy which requires issuing warnings for two minor releases: https://www.python.org/dev/peps/pep-0387/ It's technically possible, I think, that a change like Unicode strings could happen within the Python 3.x series, but that's okay, provided they follow the documented versioning policy. This policy addresses the same question that semantic versioning does, but it provides a different answer: you can always upgrade to one or two minor versions newer, but at that point you must stop and address deprecation warnings before upgrading further.
You are, of course, free to also have a marketing version of your project to communicate how big and exciting the changes are. Windows is a great example here: Windows 95 was 4.0 (communicating both backwards incompatibility with 3.1 and major changes) and Windows 7 was 6.1 (communicating backwards compatibility with Vista but still major changes).
that's why semver works: what counts as a major change is defined, and that's when you update the major version number.
The Go people can just make up reasonable version numbers without having an all encompassing theory with definitions, and they only have to convince themselves, not everyone on earth.
Also they may have sneaked it in because they're implicitly acknowledging fault in their previous design decision to exclude it.
Basically, if 1.18 code is extremely unlikely to work against a 1.17 compiler, because a new (technically additive) feature is pervasively threaded through new code, I feel like it's hard to describe them as part of the same epoch.
I don't write enough go to know if that's true for generics, but it seems like it could become true fairly quickly from my experience with other languages.
This is why you can't really use semver (usefully) for everything. What's a "breaking change" in a word processor?
Languages aren't used like word processors, but they also aren't exactly used like libraries either. People get stuck on language versions for different reasons than why they get stuck on library versions.
At any rate, I think in practice languages that try to hew to semver concepts like this just wind up with a "fake major version". Since Rust, e.g., might never go to 2.x, the "1." in front of 54 is really just academic. The 54 is the "real" major version as far as anyone needs to know.
A lot of the arguments in this thread seem to be kinda tautological. There's no law that says they have to use semver, nor is there a law that says semver can't be imperfect. "Semver is semver because semver says so" is not a compelling argument.
Every additive language change would be a breaking change in this ReverseSemVer you’re imagining.
also how is "big" even measured? meters? kilometers? it's immeasurable, which is why the rule is to update the version number based on what changes break existing code, because that can be measured
But, the backwards compatibility guarantee is that code that worked with Go v 1.n will work with Go v 1.j, for j >= n.
Next para is based on my recollection of the discussion around generics.
Specifically for generics, any code that doesn't use generics is untouched by the presence of generics elsewhere. Code that is, in and of itself, not generic will, in most cases, be able to call functions that are declared generic without extra hassle (there are most probably a few cases where a type annotation on the call to the generic function would be required). Code using generic data types probably needs to type-annotate, but there may be cases where it's not necessary.
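That division can be sketched concretely (Map here is an illustrative generic helper, not a standard library function): non-generic call sites usually get the type arguments inferred, while explicit instantiation is needed when inference has nothing to work from.

```go
package main

import "fmt"

// Map applies f to each element of s; an illustrative generic helper.
func Map[T, U any](s []T, f func(T) U) []U {
	out := make([]U, 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}
	return out
}

func main() {
	// Non-generic call site: type arguments are inferred from the
	// arguments, so the caller needs no annotation at all.
	doubled := Map([]int{1, 2, 3}, func(n int) int { return n * 2 })
	fmt.Println(doubled) // [2 4 6]

	// Explicit instantiation: required when taking the generic function
	// as a value, since inference has no arguments to look at.
	toString := Map[int, string]
	fmt.Println(toString([]int{7}, func(n int) string { return fmt.Sprint(n) }))
}
```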
A language is an interface with humans. Switching to generics is a major change in the way to think about the source code. It's not an implementation detail which is not a big deal as long as the compiler can accept earlier source syntax.
Yours is a pretty good one. I think that software versions should be indicative of what the software contains for the people using it, whereas some others primarily care about compatibility with other versions. Semantic versioning is better suited to the latter group, because it doesn't really care about what's in the software beyond what the changes are compared to other versions: breaking functionality, non-breaking functionality, or just fixes of some sort.
My own alternative would take a slightly different approach yet: a system that indicates when something was released, as well as whether the release is supposed to be stable (think MySQL 5.7, but in a format like 2021-stable-1234) or something more like a rolling release/nightly build with the latest changes (in a format like 2021-latest-2345). It's an idea that I in part shamelessly stole from the Unity game engine, Ubuntu, and JetBrains IDEs, since a glance at their versions makes it apparent what you're looking at.
I actually wrote about that idea on my blog: https://blog.kronis.dev/articles/stable-software-release-sys...
Since then, I've started using that scheme for a few internal libraries at my dayjob to see whether it will work out (switching to something else would be a matter of updating the CI, so less than an hour), as well as in some personal projects.
Of course, each versioning scheme has advantages and disadvantages.