We use enums heavily to force devs who use our code into good choices, but the options are currently:
1) Use int-type enums with iota: no human-readable error values, no compile-time guard against illegal enum values, no exhaustive `switch`, and no autogenerated ValidEnumValues for validation at runtime (we instead need to create ValidEnumValues and remember to update it every time a new enum value is added).
2) Use string-type enums: human-readable error values, but same problems with no compile-time guards, exhaustive switch, or validation at runtime.
3) Use struct-based enums per https://threedots.tech/post/safer-enums-in-go/ : human-readable error values and an okay compile-time check (only the all-default-values struct or the values we define), but it still doesn't have exhaustive switch, is a complex pattern so most people don't know to use it, and suffers from the mutable `var` issues the post author detailed.
To my naive eye, it seems like a built-in, compile-time-checked enum type with a FromString() function would help the community tremendously.
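For reference, a minimal sketch of the struct-based pattern from the linked post, with the hand-maintained FromString the comment wishes were built in (the Color type and slug field are illustrative, not taken verbatim from the post):

```go
package main

import "fmt"

// Color is a struct-based enum in the style of the threedots.tech post:
// the unexported field means code outside this package can only obtain
// the predeclared values (or the zero value).
type Color struct{ slug string }

func (c Color) String() string { return c.slug }

var (
	Unknown = Color{} // the all-default-values struct the comment mentions
	Red     = Color{"red"}
	Green   = Color{"green"}
)

// FromString is the lookup the comment wishes were generated; today it
// must be written and updated by hand for every new value.
func FromString(s string) (Color, error) {
	switch s {
	case Red.slug:
		return Red, nil
	case Green.slug:
		return Green, nil
	}
	return Unknown, fmt.Errorf("unknown color: %q", s)
}

func main() {
	c, err := FromString("green")
	fmt.Println(c, err) // green <nil>
}
```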
I find this comment from Griesemer [0] on one of the github issues for enums in Golang quite insightful:
> [...] all the proposals on enums I've seen so far, including this one, mix way too many things together in my mind. [...] Instead, I suggest that we try to address these (the enum) properties individually. If we had a mechanism in the language for immutable values (a big "if"), and a mechanism to concisely define new values (more on that below), then an "enum" is simply a mechanism to lump together a list of values of a given type such that the compiler can do compile-time validation.
Like with generics, I like the team's approach of taking features seriously, not adding them just because other languages have them, but actually trying to figure out a way for them to work in Go, as cleanly as possible. I think computer science, as a field, benefits from this approach.
And I also dislike many things from Go, and I want "enums" badly too, but that's for another comment.
[0] https://github.com/golang/go/issues/28987#issuecomment-49679...
When there’s no clear winner in terms of tradeoffs, I prefer to leave it out of the language like Go has done.
Yes, they are poison for the evolution of a public API.
What you want here is something akin to Rust's match behaviour on enumerated types. If your alternatives aren't exhaustive, it doesn't compile. Now, Rust is doing that because match is an expression. Your day-matching expression needs a value if this is a Thursday, so not handling Thursday in your day-matching expression is nonsense - even though often the value of a match isn't used and might be the empty tuple, it necessarily must *have* a value.
It seems to me that today a Go switch statement operating on days of the week can omit Thursday and compile just fine. Exhaustive switch means that's a compile error. If your "exhaustive switch" is optional or just emits a warning, it won't catch many of the problems for which exhaustive switch is the appropriate antidote.
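To make that concrete, here's a sketch (the Day type and describe function are made up for illustration): the switch below omits Thursday and today's compiler happily accepts it, where an exhaustive-switch rule would reject it.

```go
package main

import "fmt"

type Day int

const (
	Monday Day = iota
	Tuesday
	Wednesday
	Thursday
	Friday
)

// describe omits Thursday, and the compiler accepts this without
// complaint; under exhaustive switch it would be a compile error.
func describe(d Day) string {
	switch d {
	case Monday:
		return "start of week"
	case Tuesday, Wednesday:
		return "midweek"
	case Friday:
		return "almost weekend"
	}
	return "unhandled" // Thursday silently lands here
}

func main() {
	fmt.Println(describe(Thursday)) // unhandled
}
```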
However, zero values throw a wrench into this. The zero value of an interface is nil, so enhancing interfaces would require you to address what happens with an uninitialized variable. One of the current proposals suggests that nil continue as the zero value.
They could introduce a totally different type, like a sealed interface, which doesn't require a zero value, but that distinguishes between different types of interfaces, and I'm not sure how that'll be received.
Beware, though, that with many languages today you're not really doing that even when they advertise enums, e.g. in both C# and C++, enums are not type-safe (not even `enum class`). Iota is, at least, a fair acknowledgement of that.
> with a FromString() function
That seems like a step way too far - is there any such "default method" today? And I don't think Go has any sort of return-type overloading, does it?
type SignedInteger interface {
    ~int | ~int8 | ~int16 | ~int32 | ~int64
}
Interfaces that contain type sets are only allowed to be used in generic constraints. However, a future extension might permit the use of type sets in regular interface types:

> We have proposed that constraints can embed some additional elements. With this proposal, any interface type that embeds anything other than an interface type can only be used as a constraint or as an embedded element in another constraint. A natural next step would be to permit using interface types that embed any type, or that embed these new elements, as an ordinary type, not just as a constraint.
> We are not proposing that today. But the rules for type sets and methods set above describe how they would behave. Any type that is an element of the type set could be assigned to such an interface type. A value of such an interface type would permit calling any member of the corresponding method set.
> This would permit a version of what other languages call sum types or union types. It would be a Go interface type to which only specific types could be assigned. Such an interface type could still take the value nil, of course, so it would not be quite the same as a typical sum type.
> In any case, this is something to consider in a future proposal, not this one.
This along with exhaustive type switches would bring Go something close to the sum types of Rust and Swift.
https://github.com/BurntSushi/go-sumtype is great, but a bit unwieldy. Language support would be much better.
func UnmarshalOneOf(data []byte, args []interface{}) (index int, err error)
Which I use like this:
variants := []interface{}{ &T1{}, &T2{}}
i, err := UnmarshalOneOf(data, variants)
// …
return variants[i]
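The commenter doesn't show the implementation. One plausible sketch (my assumption, not their actual code) tries each candidate in order and rejects variants with unknown fields, so the wrong struct doesn't match by accident:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// UnmarshalOneOf tries to decode data into each element of args in turn
// and returns the index of the first candidate that decodes cleanly.
// DisallowUnknownFields keeps a variant with extra fields from matching.
func UnmarshalOneOf(data []byte, args []interface{}) (index int, err error) {
	for i, v := range args {
		dec := json.NewDecoder(bytes.NewReader(data))
		dec.DisallowUnknownFields()
		if err := dec.Decode(v); err == nil {
			return i, nil
		}
	}
	return -1, fmt.Errorf("data matched none of the %d variants", len(args))
}

type T1 struct {
	A int `json:"a"`
}

type T2 struct {
	B string `json:"b"`
}

func main() {
	variants := []interface{}{&T1{}, &T2{}}
	i, err := UnmarshalOneOf([]byte(`{"b":"hi"}`), variants)
	fmt.Println(i, err) // 1 <nil>
}
```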
This is what linters will do for you by default.
> we instead need to create ValidEnumValues and remember to update it every time a new enum value is added
Code generators are first class citizens in go, and writing an icky but reliable test won't be too hard either.
Create a new int type and use that for your enums. While you still can create an illegal enum value, you basically have to be looking for trouble. It's not going to happen accidentally. It's even harder if it's an unexported type in a different package.
See:
https://github.com/donatj/sqlread/blob/91b4f07370d12d697d18a...
type Test int
const (
    T1 Test = 0
    T2 Test = 1
)
func TestSomething(t Test) {}
...
TestSomething(17)
So this isn't a good suggestion, because you can easily pass any int value and will not get a compiler error. You may as well be using strings at that point. Int types also don't give you any guards when deserializing.
The and and or functions in the template packages do short-circuit, so he's got one thing already. It was a relatively recent change, but it's there.
Non-deterministic select is a critical detail of its design. If you depend on a deterministic order of completion of tasks, you're going to have problems. Now there are cases where determinism might be what you want, but they are peculiar. And given Go's general approach to doing things only one way, you get non-determinism.
A shorthand syntax for trial communication existed. We took it out long ago. Again, you only need one way to do things, and again, it's a rare thing to need. Not worth special syntax.
Some of the other things mentioned may be worth thinking about, and some of them already have been (a logging interface, for instance), and some we just got wrong (range). But overall this seems like a list of things driven by a particular way of working that is not universal, and it discounts the cost of creating consensus around the right solutions to some of these problems.
Which is not to discount the author's concerns. This is a thoughtful post.
0: https://old.reddit.com/r/golang/comments/s58ico/what_id_like...
This really is one of the parts I like the most about Go. It really makes so many things simpler. Discussing code, tutorials and writing it.
Every time I'm trying to do something in JS I have to figure out why every guide has a different way of achieving the same thing and what are the implementation differences.
It looks for most select blocks in Go code, it doesn't matter whether or not they are non-deterministic or deterministic.
But, if the default is deterministic, user code could simulate non-deterministic, without much performance loss. Not vice versa (the current design).
Austin Clements (of the Go runtime team) wrote a paper that explores this in detail [1]. That was before joining the Go team, but the concepts are universal.
[1] https://people.csail.mit.edu/nickolai/papers/clements-sc.pdf
select {
case <-chan1_whichIWantToCheckFirst:
default:
}

select {
case <-chan2_whichItreatTheSameAsChan3:
case chan3_whichItreatTheSameAsChan2 <- 0xFF:
}

I'm curious how?
> Not vice versa
There are pretty common patterns for this. At least for real-world cases where you might have one special channel that you always want to check. Ugly, but in relation to the previous question, I don't see how one is doable and one isn't?
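For the record, the usual (ugly) shape of that pattern - a non-blocking peek at the priority channel, then a fair select over both; channel names here are made up, and note a value can still arrive between the two selects, so this is a preference, not a guarantee:

```go
package main

import "fmt"

// drainN handles n values, preferring high over low: first a
// non-blocking check on high, then a fair select over both.
func drainN(high, low chan int, n int) []int {
	var out []int
	for i := 0; i < n; i++ {
		select {
		case v := <-high:
			out = append(out, v)
			continue
		default:
		}
		select {
		case v := <-high:
			out = append(out, v)
		case v := <-low:
			out = append(out, v)
		}
	}
	return out
}

func main() {
	high := make(chan int, 2)
	low := make(chan int, 1)
	high <- 1
	high <- 2
	low <- 3
	fmt.Println(drainN(high, low, 3)) // [1 2 3]
}
```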
I would like to remove all the "magic" that's built-in for specific SCMs/repository hosting services and have something that operates in a simple and predictable manner like C includes and include paths (although obviously I don't like preprocessing, so not that).
As for the language, I like the range reference idea but my own minor pet peeve is an issue with pre-assignment in if-statements etc which makes a neat feature almost useless:
// This is nice because err only exists within the if so we don't have to
// reuse a variable or invent new names both of which are untidy and have potential
// to cause errors (esp when copy-pasting):
if err := someoperation(); err != nil {
// Handle the error
}
// This however won't work:
func doThing() string {
result := "OK"
if result, err := somethingelse(); err != nil { // result does not need to be created but err does so we cannot do this.
return "ERROR"
}
return result
}
I don't have any good ideas about how to change the syntax unfortunately.

func doThing() string {
result := "OK"
{
var err error
if result, err = somethingElse(); err != nil {
return "ERROR"
}
}
return result
}
`err` is introduced in the lexical scope, `result` isn't, so it still refers to the string from the surrounding scope. `err` does not pollute the surrounding scope.

You can also try the complete version here: https://go.dev/play/p/kDEB11YdvSs
func doThing() string {
result := "OK"
var err error
if result, err = somethingelse(); err != nil {
return "ERROR"
}
return result
}

Your solution fixes the error, but at the cost of losing the upside OP saw.
Deterministic select I hard disagree with. The code in the blog post is race-y, and needs to be fixed, not select. If anything, making select deterministic will introduce _more_ subtle bugs when developers rely on that behavior only to find out in the real world that things aren't necessarily as quick as they are in development.
for ctx.Err() == nil {
select {
case <-ctx.Done():
return nil
case thing := <-thingCh:
// Process thing...
case <-time.After(5*time.Second):
return errors.New("timeout")
}
}
The extra check for ctx.Err before the select statement easily resolves the author's issue.

for {
if _, ok := <- doneCh; ok {
break
}
select {
case thing := <-thingCh:
// ... long-running operation
case <-time.After(5*time.Second):
return fmt.Errorf("timeout")
}
}
Which goes along w/ https://github.com/golang/go/wiki/CodeReviewComments#indent-... of "Indent error flow".

edit: nvm, your break would be blocked until one of the other channels produced a value. You'd need to check for the doneCh redundantly again in the select.
But yeah. Now that generics are in, I do hope they add a handful of common collections.
[0] https://pkg.go.dev/container/list@go1.17.6
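As an illustration of what such a collection might look like post-generics (a hypothetical sketch, not a proposed stdlib API):

```go
package main

import "fmt"

// Set is a minimal generic set, the kind of small collection people
// hope a post-generics standard library could provide.
type Set[T comparable] struct{ m map[T]struct{} }

func NewSet[T comparable](items ...T) *Set[T] {
	s := &Set[T]{m: make(map[T]struct{})}
	for _, it := range items {
		s.m[it] = struct{}{}
	}
	return s
}

func (s *Set[T]) Add(v T)           { s.m[v] = struct{}{} }
func (s *Set[T]) Contains(v T) bool { _, ok := s.m[v]; return ok }
func (s *Set[T]) Len() int          { return len(s.m) }

func main() {
	s := NewSet(1, 2, 3)
	s.Add(4)
	fmt.Println(s.Contains(2), s.Contains(9), s.Len()) // true false 4
}
```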
A more general change would be to implement the "var" and "val" distinction that exists in some languages.
const x = 1 // x is a compile time alias for the untyped abstract number 1
var x := 1 // define x at runtime to be (int)1, x is mutable
val x := 1 // define x at runtime to be (int)1, x is immutable
Then the globals can be defined with "val".

- A const is an abstracted value.
- A variable is an allocated piece of memory.
Edit: this likely wouldn’t fly as it would be completely backwards incompatible
> Constant expressions may contain only constant operands and are evaluated at compile time.
and a "const" can only be defined with a constant expression.
So the difference would be that a "val" can be assigned a value that is evaluated at runtime.
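A small example of where the line sits today: constant expressions are fine in a const, but anything runtime-evaluated must be a mutable var - there is no runtime-evaluated immutable binding, which is the gap a hypothetical `val` would fill.

```go
package main

import (
	"fmt"
	"time"
)

const greeting = "hi"   // constant expression, evaluated at compile time
const n = len(greeting) // still constant: len of a constant string

// const started = time.Now().Unix() // does not compile: not a constant expression
var started = time.Now().Unix() // must be var: runtime-evaluated, hence mutable

func main() {
	fmt.Println(n, started > 0) // 2 true
}
```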
The template problem is a real problem. I used it once and it was a pain, and I moved away instantly. I would vote for Go inside Go as a template system, so you can effectively write Go code.
With the help of yaegi[1] a Go templating engine can be built, e.g. here[2].
I think there's a number of language communities you can "grow up" in that teach you that things in the standard library are faster than anything else and have had more attention paid to them than anything else, like Python or Perl, because those languages are slow, and things going into the standard library have generally been converted to C. I think that because I look in my own brain and find that idea knocking about even though nobody ever said it to me directly. But that's not true in the compiled languages, of which Go is one. The vast majority of the Go standard library is written in Go. The vast majority of that vast majority isn't even written in complicated Go, it's just in Go that any Go programmer who has run through a good tutorial can read, it's not like a lot of standard libraries that are fast because they are written impenetrably. (Although the standard library may have subtle performance optimizations because it chooses this way of writing it in Go rather than that way, it's still almost entirely straightforward, comprehensible code.)
If you want JSON5 or a different template library, go get or write one. The Go encoding/json or text/template don't use any special access or have any magical compiler callouts you can't access in a library or anything else; if you grab JSON5 library (if such a thing exists), you've got as much support for it as JSON or text/template.
It's even often good that it's not in the standard library. I've been using the YAML support a lot lately. The biggest Go YAML library is on major version 3, and both of the jumps from 1 to 2 and 2 to 3 were huge, and would have been significantly inhibited if v1 was in the standard library and the backwards compatibility promise applied. v1 definitely resembles encoding/json, but that's missing a lot of things for YAML. If Go "supported YAML" by having v1 in the standard library, everybody would be complaining that it didn't support it well, and by contrast, asking anyone to jump straight to the v3 interface would be an insanely tall ask to do on the first try without any experimentation in the mean time. And I'm not even 100% sure there won't be a v4.
To me, the community take on "stdlib vs libraries" is a cyclical thing; we're coming out of a cycle led by JavaScript/NPM, where everything is a library due in no small part to how HORRIBLE JS/NodeJS's standard library is. Go back further, and you run into the Python/Java world which had far more comprehensive stdlibs, and today Go's (and to a lesser degree, Rust's) rising popularity is bringing back more feature-complete stdlibs.
So, it changes. And it's alright for different languages to have different stances on how comprehensive their stdlibs should be. Go is absolutely an example of a language that wants tons of stuff in its stdlib; but it's also a language which despises change, and thus we got a very awesome stdlib at v1, and limited improvements to it over the years.
I don't feel the "YAML changes a lot" argument is valid. It does; but if an app needs YAML, they can choose to use the stdlib, or a library, and they'll have to keep up regardless.
Putting it in the stdlib has tons of advantages. First: it increases the scope of parties affected by any breaking change, which naturally forces more deliberate thought into the change's necessity and quality. Second: it reduces the number of "things" code consumers need to update; from the go version itself & the YAML library & consuming code, to just the go version & consuming code (this has network productivity effects in only having to source one "breaking changes" changelog for your hit list on what needs updating). Third: it reduces multivariate dependency-dependency issues (e.g. YAMLv2 requires Go1.14, but we're on Go 1.13, so first we have to upgrade to Go1.14, then we can upgrade to YAMLv2). Fourth: it reduces the number of attack surfaces which security professionals need to monitor (all eyes on the stdlib implementation, versus N community implementations, strength in numbers). Fifth, less about YAML, but: stdlib encourages what I call the "net/http.Request effect"; if I want to write a library that does stuff with http requests, it's nearly impossible to do that in a framework-agnostic way in JavaScript, because express does things differently than hapi etc; but in Go, everything is net/http.Request. So even if the stdlib doesn't have everything one needs, plugging in libraries to solve it is easier because everyone is using the same interfaces/structures.
Obviously not everything can be in stdlib, so it comes down to the question: what belongs there. And in my opinion, every language is too conservative (except maybe Python, which strikes a strong balance). Many language teams will say "we're not including X because X isn't an ISO standard, or because it's still changing, or because we're not sure on the implementation". These are all arguments out of cowardice and fear of asserting an opinion. Language designers have developed a deep, deep fear of opinions; because they believe, maybe correctly, that one of their opinions will be bad, and it will hurt adoption. The issue is: having no opinion just hurts productivity, and people will also move from your language toward more productive ones (see: Go's rise in popularity, versus JS's dependency issues).
- Enum types
- A special operator to cut down on `if err != nil { return err }` and just return at that point.
- Named arguments, and optional params with defaults
- Default values on structs
- ...macros? We already have `go generate ./...`
( edit: Removed unformatted source code )
func contains(s []int, e int) bool {
for _, a := range s {
if a == e {
return true
}
}
return false
}

* nillability annotations: https://github.com/golang/go/issues/49202
* Change int from a machine word size (int32 or int64) to arbitrary precision (bigint): https://github.com/golang/go/issues/19623
Sadly the nillability annotations were rejected because they weren't backwards compatible. The bigint change is also unlikely to be accepted because the issue is already five years old and there are concerns about performance.
func foo(a, b int) int {
return a * b
}
At compile time, what is the return type of foo? Answer: unknown. The compiler has no way of determining if the resulting type will be a small or big int. This has to be taken into account not just in foo, but in every function calling foo, and every function that is called with the return of foo as a param.

One of the best things about golang is how little "magic" there is. Everything that happens is immediately obvious from the code. An int that is upgraded to a completely different type when it reaches certain values goes directly counter to that; it would be hidden behaviour, not immediately obvious from the code.
Should golang have a builtin arbitrary-sized integer type? Maybe. Using math/big can become cumbersome. But a much better solution would be to introduce a new builtin type like `num`.
Parameter / Option ergonomics.
The current best practice of "functional options" and long function chains results in far too many function stubs ... it's a minimum of 3 extra lines per parameter. Parameter structs require a whole extra struct...
Borrowing optional / named parameters from Python would cut down length and complexity of Go code drastically.
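To illustrate the per-parameter cost being described, here's a generic functional-options sketch (the names are made up, not from any particular library):

```go
package main

import "fmt"

// serverConfig holds the parameters being configured.
type serverConfig struct {
	host string
	port int
}

type Option func(*serverConfig)

// Roughly three extra lines per parameter, as the comment says:
func WithHost(h string) Option { return func(c *serverConfig) { c.host = h } }
func WithPort(p int) Option    { return func(c *serverConfig) { c.port = p } }

// NewServer applies defaults, then each option in order.
func NewServer(opts ...Option) *serverConfig {
	c := &serverConfig{host: "localhost", port: 8080}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	s := NewServer(WithPort(9090))
	fmt.Println(s.host, s.port) // localhost 9090
}
```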
I'd love to see better mocking support. Doing mock.On("fun name", ...) is so backwards, confusing and brittle. It's also a great source of confusion for teammates when tests fail.
I miss better transaction management. I regularly juggle db, transactions and related interfaces and it's a continuous pain.
Then there's the "workaround" for enforcing interface implementation: `var _ InterfaceType = &StructType{}`. This could easily be part of the struct definition rather than living in a var section.
As was mentioned by others, doing x := 5 only to later do JsonField: &x is just a waste of intellectual firepower. Maybe this can be alleviated by generics, but the lang should be able to make this a one-liner.
Currently, you would have to invoke it like so:
svc.GetObjectWithContext(ctx, &s3.GetObjectInput{Bucket: bucket, Key: key})
But... why do you have to type "s3.GetObjectInput"? The function is taking in a concrete type (not an interface) for that argument, and there is only one possible type that you can pass in... so I agree with the person above that it should be possible to elide the type like so:

svc.GetObjectWithContext(ctx, &{Bucket: bucket, Key: key})
Go already supports type elision in some places, such as:

[]someStruct{{Field: value}, {Field: value}}

instead of having to type:

[]someStruct{someStruct{Field: value}, someStruct{Field: value}}

which would be equally pointless repetition.

[0]: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#S3.Ge...
It really is not, no. The number of loops using `enumerate` (or working on range / indices directly) in Python or Rust are a small fraction of those just iterating the sequence itself.
That would be even more so for Go, which has no higher-level data-oriented utilities (e.g. HOFs or comprehensions, which would usually replace a number of non-indexed loops, and would thus increase the ratio of indexed to non-indexed loops).
“Backwards compatibility forever” seems like unnecessary shackles, and the language should be able to grow — I’ve seen some nice proposals for improvements. I just wonder what the strategy is going to be for migrating code from go1 to go2 and how painful that’s going to be.
The Go maintainers already said that they don't have any plans to do an actual version 2.0 anymore. Generics turned out to be possible without breaking backward-compatibility.
[0] https://doc.rust-lang.org/edition-guide/editions/index.html
Yes, commercial codebases understaffed for maintenance are kinda stuck, just like any legacy system. IMHO the solution must come from the business model down. Also, security, compliance & cost can help drive priority.
Python came this || close to dying during the migration, its users all having moved to other languages. Primarily data science saved it and then some time passed and libraries moved on, etc, and after a while the cost benefit calculation started swaying towards Python3, probably after 3.3 at least, so 4 years after Python3's launch.
And half the article could be fixed by adding new APIs and deprecating the old ones.
Does anyone know of a decent type safe templating package out there (for any language)?
I can write x = &Foo{...} but somehow x = &42 and x = &foo() are not allowed, which forces me in some cases to declare useless variables that hurt readability.
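With 1.18 generics this particular annoyance at least gets a one-line workaround - a tiny helper commonly written by hand (the name `ptr` is my choice, not a stdlib function):

```go
package main

import "fmt"

// ptr returns a pointer to any value, avoiding the throwaway variable.
func ptr[T any](v T) *T { return &v }

func main() {
	x := ptr(42)        // instead of: tmp := 42; x := &tmp
	s := ptr("hello")   // works for any type
	fmt.Println(*x, *s) // 42 hello
}
```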
Dependency management:
Go mods are a dumpster fire. `go get` and `go install` are finicky and inconsistent across systems.
It's difficult to import local code as a dependency. Using mods with replace feels like a shitty hack, and requires me to maintain a public repo for something I may not want to be public. I end up using ANOTHER hack that replaces mod references to private repos and I have to mess with my git config to properly authenticate.
I've never used another language that made it so difficult to import local code. Rust's cargo is so much easier to use!
Sane dynamic json parsing:
Having to create a perfectly specified struct for every single json object I need to touch is terrible UX. Using `map[string]interface{}` is just gross.
Again, I think Go should copy the existing rust solution from serde. With serde, I define the struct I need, and when I parse an object, the extra fields just get thrown out.
If anyone thinks I'm misunderstanding something, please enlighten me. I hope reasonable solutions already exist and I just haven't found them yet.
With regards to this concern at least go 1.18 is adding workspaces, which should help (https://sebastian-holstein.de/post/2021-11-08-go-1.18-featur...).
> What is the value of cp? If you said [A B C], sadly you are incorrect. The value of cp is actually: [C C C]
What it shows is that Go doesn't have a `value` per iteration, it has a single `value` for the entire loop which it updates for each iteration. This means if you store a pointer to that, you're going to store a pointer to the loop variable which gets updated, and thus at the end of the loop you'll have stored a bunch of pointers to the last item.
This is most commonly an issue when creating a closure inside a loop, as the closure closes over the binding, and since Go has a single binding for the entire loop all the closures will get the same value.
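A sketch of both the trap and the conventional fix. (Note: Go 1.22 later changed `for` loops to create a fresh variable per iteration, which fixes this; the first function shows the semantics the comment describes.)

```go
package main

import "fmt"

// captured builds closures over the loop variable directly; under the
// single-binding-per-loop semantics described above, they all end up
// returning the last value.
func captured(vals []string) []string {
	var fns []func() string
	for _, v := range vals {
		fns = append(fns, func() string { return v })
	}
	out := make([]string, 0, len(fns))
	for _, f := range fns {
		out = append(out, f())
	}
	return out
}

// capturedFixed shadows the variable, giving each closure its own binding.
func capturedFixed(vals []string) []string {
	var fns []func() string
	for _, v := range vals {
		v := v // fresh binding per iteration
		fns = append(fns, func() string { return v })
	}
	out := make([]string, 0, len(fns))
	for _, f := range fns {
		out = append(out, f())
	}
	return out
}

func main() {
	fmt.Println(captured([]string{"A", "B", "C"}))      // [C C C] before Go 1.22
	fmt.Println(capturedFixed([]string{"A", "B", "C"})) // [A B C] on any version
}
```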
A fix wouldn't be unwelcome, but it seems it would have a good chance of causing a performance regression - a lot more allocated values, maybe in a lot of inner loops. I guess escape analysis might help avoid the allocations in the general case?
1. a unified improved *error* in stdlib with stack trace support.
2. a unified log interface(mentioned)
3. a STL library like c++
4. shared library support so we don't have 100 static binaries that among them each have 90% duplicated content. Go shall support shared libraries/modules officially.

[0] https://fosdem.org/2022/schedule/event/go_finite_automata/
Is this true? Because I can't think of anything more useless than that.
Native support for them in the context of the existing Go spec will be coming with the next release. To reserve the right to evolve the native support before committing it to the backwards compatibility promise, it will initially appear in the https://pkg.go.dev/golang.org/x/exp external repository, but that is the official Go repo for things either too unstable to be included in the standard library, or still experimental. General expectation is it'll be in the standard library in the release after that. It won't be in the standard library, but it's as official as it can be beyond that.
I carefully phrased that with "in the context of the existing Go spec", because I think expectations of this support are wildly out of whack with the reality. It's still going to be a very unpleasant style to work in, with many and manifold problems: http://www.jerf.org/iri/post/2955 . I think people will be crazy to turn to that style in Go. Go wasn't just missing generics to support this style, it was missing many things, and "solving" the generics problem still leaves it missing many things.