So I'm not sure what we are saving here. When has the time spent typing in code ever been a bottleneck in software development anyways?
This is my feeling from having worked extensively in Java as well as languages that support "var": C# and Swift. I feel like my productivity goes down when I have to support code that uses inferred types. There also seems to be a performance hit when compiling code with inferred typing, although that may be circumventable with better compiler tech, who knows.
* I write code a lot more fluidly with var. When I go back to writing non-var code (enforced by some departments) I find that it breaks my focus on solving the problem at hand. I end up writing my code with var and then going back and replacing my vars with the type names.
* I find code a lot easier to read. I can understand the flow of the logic easier, the variable names are enough. Unless you have the type definition memorized, just knowing the type isn't going to help you much. You're going to need an IDE either way.
* Refactoring is easier. For example, changing the return type of a function from array to list means a lot less code needs to be changed as a result if the callers were using var. The compiler will tell you if there are any instances where a caller was using an incompatible property.
* Reviewing is easier. Your change set is a lot smaller when changing a type name.
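To make the refactoring point concrete, here's a minimal sketch (the `getNames` accessor is hypothetical): imagine it originally returned `String[]` and was later changed to return `List<String>`. A caller that declared its variable with `var` needs no edit at all, and the compiler still catches any use of a member that no longer exists.

```java
import java.util.List;

public class RefactorDemo {
    // Hypothetical accessor: imagine it originally returned String[]
    // and was changed to List<String> during a refactor.
    static List<String> getNames() {
        return List.of("alice", "bob");
    }

    public static void main(String[] args) {
        var names = getNames();           // this declaration needs no edit after the refactor
        System.out.println(names.size()); // still compiles as long as the member used exists
    }
}
```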
Seriously you won't miss it when it's gone. People also used to prefer Hungarian notation.
For example, if the code assumes it has a type Foo with a length field, and you change the return type of that function to a Bar without that length field, the compiler will complain that Bar doesn't have a length field, rather than complaining about trying to assign a Bar to a variable of type Foo.
Even worse is if the Bar type changes the semantics of that length field (perhaps going from the number of elements to the max index of elements, causing an off-by-one error), the code could break silently.
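A small sketch of where the error lands, using hypothetical Foo/Bar types: with `var`, the compiler complains at the point of use; with an explicit type, it complains at the assignment.

```java
// Hypothetical types illustrating where the compiler reports the error.
class Foo { int length; }
class Bar { /* no length field */ }

public class ErrorSiteDemo {
    static Bar make() { return new Bar(); }   // imagine this used to return Foo

    public static void main(String[] args) {
        var x = make();                        // compiles fine on its own
        // System.out.println(x.length);       // with var: "cannot find symbol: length" here
        // Foo y = make();                     // with explicit type: "incompatible types" at the assignment
        System.out.println("compiles until a missing member is used");
    }
}
```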
That's mostly a strawman argument, but it is worth noting.
But this assumes that when reading code, you're always using a tool that can show you the type. Auto-inserting is only needed when writing.
One thing to be wary of is that even when the compiler shows you the type (in an error message, for example), if it's complex, it will be difficult to understand. If you want to write simple code, perhaps it's better to avoid or encapsulate complicated types?
f(g())

is just syntax sugar for

var temp = g()
f(temp)
Now at least programmers aren't tempted to do gratuitous function nesting rather than introducing a variable, just to avoid cluttering code with an unimportant type name.

The Microsoft rule is to only use var when the type is obvious from the assignment. That essentially boils down to new, cast, and anonymous types.
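A quick sketch of the "obvious from the assignment" cases in Java terms (the third C# case, anonymous types, has no direct Java equivalent):

```java
import java.util.ArrayList;

public class ObviousVarDemo {
    public static void main(String[] args) {
        // Obvious from the right-hand side: a constructor call...
        var list = new ArrayList<String>();

        // ...or a cast. (Anonymous types are C#-specific.)
        Object o = "hello";
        var s = (String) o;

        list.add(s);
        System.out.println(list.get(0));
    }
}
```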
> You don't actually need to enter in the type, any competent IDE can do it for you.
These two points contradict each other. Any competent IDE can visualize the inferred type even if you don't spell it, for example as a tooltip.
This is not (only) about typing, it's about the visual noise and redundancy caused by explicit types. Plus, as the article illustrates, the ability to give names to expressions that have very complex types, reducing the need for type erasure.
In its favour: It’s surprising how much code doesn’t really need the types written out to be readable (a discovery that will shock Python developers not at all). Furthermore, refactoring said code has less busywork in it.
As a side note: There’s one weird benefit of var not mentioned in this. Namely, you’re guaranteed there’s not going to be a cast. This is a serious problem in C++ where assigning the result of a function to a variable can have arbitrary side effects. (It’s not that bad in Java since pretty much all you can do is cast to interface.)
People usually fight this with "why would I need to type it if the compiler can figure it out!?" but those people don't understand the cardinal rule of software engineering: code is not for the compiler or the computer to understand, it is for the programmers to understand. If this wasn't the case then more people would be using APL or similarly esoteric languages.
Adding the extra effort of recursing down the rabbit hole to find the first type being used does not sound like it will make Java more friendly.
Account account = customer.GetAccount();
List<Transaction> transactions = account.GetTransactions();
Have I saved you any extra effort here by specifying the types? You have no idea where these types came from or how they are defined. So why is this useful? 'Go to definition' works just as well on var.
If your method is too large to fit on the screen, the method is probably too long.
But what competent IDE doesn't allow you to just hover over the variable to know the type?
It didn't occur to me before I read this post that the complex, chained generic types that you can sometimes get with "builder" like patterns (e.g. SQL generators) can become incredibly complex, so this would tidy that up quite nicely
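As a sketch of that tidying effect: a stream pipeline (standing in for a builder chain) can produce a nested generic type that would otherwise have to be spelled out in full on the left-hand side.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ChainedGenericsDemo {
    public static void main(String[] args) {
        // Without var, the intermediate's full type must be written out:
        // Map<Integer, List<String>> byLength = ...
        var byLength = List.of("ab", "cd", "xyz").stream()
                .collect(Collectors.groupingBy(String::length));

        System.out.println(byLength.get(2));
    }
}
```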
Not only is it ambiguous to developers who might be maintaining the code later, I find it much worse for readability. People tend to start writing OO code like it is JavaScript, which is never good.
My biggest gripe with Java by far is the implied IDE requirement.
I've worked with plenty of engineers who are absolute masters of vim and emacs; their fingers fly on the keyboard. It looks impressive but even the best of these people look like rank amateurs compared to the people who have spent equivalent time mastering IDEA or Eclipse. With a good IDE the code practically writes itself. This is a real productivity gain, and needs to be considered as part of the value proposition of the language/environment.
It's obvious that everyone will use `var` everywhere, so using `var` in one place and explicit type declaration in another probably would be even worse.
object i = new object();
I agree you only type it once; the IDE will suggest object after new. But in that case the type is needlessly redundant; var could be used without making the code any less readable.
object i = myfunction();
Here the object is semantically useful, but given that you have to type it before you type the name of the function, I don't see how the IDE can possibly help you. Not only that, but unless you know the return type of that function by heart, it forces you to go check it before you even start the line. And if you are using generics (or value tuples) that could be a long type name.
myfunction().var<TAB>
and it would expand to
object i = myfunction();
where i would be highlighted so you can immediately type the name of the variable and when you press enter the cursor is placed after the completed statement.
Not sure how this helps. I pretty much knew 'getAddress' would return some sort of Address structure. It's not like knowing the type name tells you what properties are on it. So what's the point?
var c = new Customer();
Type inference works great imho to avoid specifying the type twice when using a normal constructor rather than a factory function, and in a codebase where actual classes, not interfaces, are specified as method parameters, though I suspect this is an antipattern ;)

I don't think requiring programmers to always write out the type of a variable on every declaration is such a feature, but I can see arguments for requiring them in some places where the compiler could infer them. Types of function arguments in function declarations are an example.
Also, verbosity in programming is definitely a hindrance to maintenance. It’s generally easier to understand what’s there when you can remove some noise.
The dynamic nature helps a fair bit for some actual readability. But by and large, I think it is more the good marketing and the loud insistence that it is readable that make people think it is readable.
Python is excellent at being intuitive and readable for the original programmer or when you’re skimming the code looking for broad logic.
Not so much for the maintainer. The programmer needs to keep a lot of state in their head to make up for the type system. If you don’t have good tests, it is even harder.
On a side note, it's getting very tiring to rewrite the whole ecosystem every time a language is being annoying. Can the CS types please work on this real-world problem a little instead of going knee deep into homotopy type theory? Please solve this somehow: allow engineers to leave a language without leaving its library ecosystem. Let's make libraries super-portable, easily.
By default everything has always been mutable e.g. collections, variables. And so whilst I support val I can appreciate the difficulty in switching everyone to an immutable by default mindset. Especially given the lack of decent functional transforms e.g. map, flatMap, filter in Java.
Imagine I want to have a variable like:
Person p = new Person("John");
All I would type in IntelliJ is: new Person("John").var
After pressing "tab", that will autocomplete for me and put the focus on the variable name so I can rename it from the default inferred value.

Type new Person("John") and then hit Ctrl-2 L.

var foo = new ArrayList<String>();
It literally says exactly what foo is!!
ArrayList<String> foo = new ArrayList<String>()
Or, at least
List<String> foo = new ArrayList<String>()
Whereas with `var`, as you show, the compiler infers the type. Proponents of this point out exactly what you did: the type is right there, so why should the programmer have to write it twice?
n,<return>,A,r,L,<return>,S,t,r,<return>,Cmd-Option-V,<return>
The IDE autocompletes everything, extract-to-variable does most of the heavy lifting.
List<String> foo = new ArrayList<>()
> so why should the programmer have to write it twice?
You wouldn't.
The usual variable declaration syntax is <type> <name>; it's not a stretch to imagine the type as 'var', a catchall type. But 'let'? Let is a verb; it should be in a place where functions, not types, go. It makes sense in "let x in {}" type expressions, and it kinda makes sense in Lisp, but I don't see any argument for it in a C-like language.
The inventor of the C-like languages B and NB (which then inspired C) agreed and used the more intuitive auto.
{
auto x = 1; /* x is implicitly an integer */
}
Now it's back in vogue in the C-like language C++. It doesn't mean "automatic storage", though.

Without jumps, all code can be rewritten to be a series of "let x = ... in <expression>"
The fact Java has statements instead of everything being expressions is a design flaw and should be rectified not glorified.