One problem, which is also one of Scala's advantages, is the Java interop: it often means that all your Option[...] code can still get NPEed by some offending Java library you've decided to use.
As the fidelity and ubiquity of pure Scala libraries improve, this will hopefully go away to some extent.
scala> Option(System.getProperty("kaboom"))
res1: Option[String] = None
then map, getOrElse or fold at will.
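The map/getOrElse/fold step above can be sketched like this (the property name is arbitrary and assumed to be unset, so the Option is None):

```scala
// Wrap a possibly-null Java API result in Option, then transform it
// without ever touching null directly.
val port: Option[String] = Option(System.getProperty("nonexistent.port"))

// map + getOrElse: transform if present, fall back otherwise
val portNumber: Int = port.map(_.toInt).getOrElse(8080)

// fold does both in one call: default first, then the transform
val portNumber2: Int = port.fold(8080)(_.toInt)
```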
(edited format)
We've been running Scala in production for four years, and we've had our share of NPEs, both in our own code and in the libraries we host. My point, I guess, was just that it's not entirely true that they will never bubble to the surface.
I've used it every work day for over two years... I like it a lot, but sometimes wish it were a bit simpler and more aesthetic.
The type system is crazy powerful, but that also means it's crazy complex.
E.g.
- syntax: def func() { vs. def func(): Unit = {
- collection library
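The two def syntaxes in that first bullet can be contrasted in a small sketch (the function name is made up for illustration):

```scala
// Procedure syntax (deprecated, and removed in Scala 3) silently
// discards the body's result:
//   def twice(x: Int) { x * 2 }   // compiles, but returns Unit -- the Int is lost!
// The explicit-result-type form makes the intent visible:
def twice(x: Int): Int = x * 2
```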
One of the things I absolutely HATE about Scala is operator overloading, and the excessive abuse of it. I ran into it just now in a library that used ==.
A case:
List(1,2) == List(1,2)   // true
Array(1,2) == Array(1,2) // false
The reason for this is obvious: List implements equals and does a deep compare, while Array's equals is a reference compare (as in Java).
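A sketch of the asymmetry, plus the usual workaround for element-wise array comparison:

```scala
val listsEqual  = List(1, 2) == List(1, 2)    // structural equality: true
val arraysEqual = Array(1, 2) == Array(1, 2)  // JVM reference equality: false

// For an element-wise comparison of arrays, use sameElements:
val arraysSame = Array(1, 2).sameElements(Array(1, 2))
```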
This would be obvious in Java, because it would look like:
List<Integer> a = new ArrayList<>();
List<Integer> b = new ArrayList<>();
a.equals(b); // true, because it's a value compare
versus
int[] a = new int[5];
int[] b = new int[5];
boolean same = (a == b); // obviously false, because it's a reference compare
The lack of a universal idea of what == means is pretty dangerous, IMO.
*edited for spelling
There is a universal idea of what == means; it's quite simple and more consistent than the mess Java has.
It's just that the JVM's idea cannot be brought in line with it consistently.
If you look at the history of Scala, you'd see that they tried to make Arrays work this way for 5+ years.
The blood being shed just wasn't worth the quirks it caused in other parts, and in the end the attempt to fix the JVM's idea of arrays was abandoned.
All of this leads to potential newcomers thinking Scala is some inscrutable mess of a language, and even old fogeys will have to scratch their heads.
> "the biggest benefit with Scala is correctness."
...before the author goes on to say
> "When I say correctness, I mean the ability to easily and consistently write code that works as inteded (not the academic definition of correctness)"
It's quite reasonable to have a language where certain classes of errors cannot be generated from the source code. There are languages which can reliably detect or prevent subscript out of range errors, null pointer errors, dangling pointer errors, and race conditions. (C and C++ detect and prevent none of the above, which is the cause of most of the troubles in computing.) That's not full correctness; it's just language safety. It means you can't break the language model from inside the language. Most of the "scripting languages" have this property, or at least are supposed to.
Scala takes the null pointer issue a bit more seriously than most languages. That's good, but not enough to justify a claim that it offers "correctness".
"Yes, you can write more concise code; yes, you have a more advanced type system; yes, you can pattern match. There are hundreds of other reasons that Scala makes a great language. When a language can offer me constructs to write more correct code, I'll always be willing to deal with the learning curve."
I think the point was that these are the things that he thinks are of more value to developers than the other good things that Scala offers.
Pattern matching offers mainly correctness and conciseness; maybe the correctness part should have been emphasized more.
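The correctness part can be sketched with a sealed hierarchy (the Shape types here are made up for illustration): the compiler checks the match for exhaustiveness.

```scala
// Matching on a sealed trait lets the compiler warn when a case is
// missing -- that's where pattern matching buys correctness.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)    => math.Pi * r * r
  case Square(side) => side * side
  // omit a case and the compiler emits a non-exhaustive-match warning
}
```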
The problem is not so much the code you write but the code you have to read/use. Languages that are more rigid are often easier to work with; they are predictable, as you won't have to deal with strange APIs.
I believe Java 8 or Kotlin are good enough. Scala is sometimes just unreadable when you have to read other people's source code.
but maybe it's just me.
E.g. you can define your own methods to sugar and desugar pattern matches, define an apply method to treat a class like a function, or map() over an Option type; it simply behaves as a list of size 0 or 1, and that is all map needs.
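All three features mentioned above fit in one small sketch (the Email object is a made-up example):

```scala
// apply lets an object be called like a function; unapply is the
// custom extractor that sugars/desugars pattern matches.
object Email {
  def apply(user: String, domain: String): String = s"$user@$domain"
  def unapply(s: String): Option[(String, String)] = s.split("@") match {
    case Array(u, d) => Some((u, d))
    case _           => None
  }
}

val addr = Email("alice", "example.com") // apply: object used as a function
val user = addr match {                  // unapply: custom pattern match
  case Email(u, _) => u
  case _           => "unknown"
}
val upper = Option(addr).map(_.toUpperCase) // Option maps like a 0-or-1 list
```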
When I say correctness, I mean the ability to easily and
consistently write code that works as inteded (not the academic
definition of correctness).
I'm curious as to what the author believes the academic definition of 'correctness' actually is. Is it something other than code "working as inteded [sic]"?
By academic correctness, I mean the formal definition in computer science, i.e. for an algorithm. More here: http://en.wikipedia.org/wiki/Correctness_(computer_science)
By this measure, it's hard to say any language is more or less "correct" than another.
The one thing that does bother me, as mentioned elsewhere, is operator overloading. There is a veritable soup of operators and you're never quite sure what an operator is actually doing. Worse, there aren't any plaintext equivalents. scala.collection.List doesn't have any "prepend/unshift" or "append/push" methods... all you have are ::, :::, +:, :+, /:, :\, ++:, :++, ++ and so on.
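A few of the List operators from that soup, decoded (a minimal sketch; the rule of thumb is that operators ending in : are right-associative and bind to the collection on the right):

```scala
val xs = List(2, 3)
val a = 1 :: xs           // ::  prepend one element (List-specific)
val b = 1 +: xs           // +:  prepend, works on any Seq
val c = xs :+ 4           // :+  append one element
val d = xs ::: List(4, 5) // ::: concatenate another List
val e = xs ++ List(4, 5)  // ++  concatenate any collection
```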
val and final have stronger meanings when your objects are immutable. True immutability makes reasoning far easier than just referential immutability.
Aren't those members/fields rather than variables? If so, then they're effectively global, not local, so that's a completely different story – I'm talking strictly about local variables.
> True immutability makes reasoning far easier than just referential immutability.
Yep, immutable types and constant global bindings are great.
As for marking local variable bindings as non-changing, I think it is tremendously helpful. In the (mostly Java) code base I work in daily we use this throughout. The net result is that I can just assume that property for everything, and whenever I see a variable not marked as non-changing I immediately know that something less than obvious is happening.
Given the above, I am naturally a big fan of making non-changing variable bindings (final/const/val/...) the default and updatable variable bindings the case that should be marked. I would also like to work in a language where immutability of not just the variable binding but also the values themselves was better handled by the language.
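Scala already makes the two cases equally cheap to write, which is what makes the non-changing form a workable default; a minimal sketch:

```scala
// val: the binding cannot be reassigned -- rebinding is a compile error
val fixed = 42

// var: mutation is explicit in the declaration and easy to grep for
var counter = 0
counter += 1
```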
A non-trivial number of people would dispute that ;-)
But from a writer's, modifier's, or refactorer's perspective, what counts is the intent. Was a particular local variable meant to be mutable or immutable? Every time I write a line of code, I need to watch out for whether I've mutated a variable that was not meant to be mutated. Or even the case where I myself mutate it unintentionally (by a typo, for example).
In a non-trivial project with a 100K-line code base, the time and effort spent on this manual analysis can be an overhead well worth avoiding.
"what the parser figured out" != "what the author intended"
> this is a straightforward syntactic property
Not for the reader.
Is there exactly one assignment location that's not in a loop or a conditional?
yes => constant ; no => non-constant
This is neither hard nor unintuitive.
For deployment, sure, but for daily dev the vast majority of one's time should be spent taking advantage of sbt's incremental build feature; there the compile hit is pretty negligible (particularly when you break your application into sub-projects/modules).
Even after the proposed Scala overhaul (i.e. Dotty in around 4 years time) it's unlikely that clean builds will be blazing fast.
To put it in perspective, right now scalac is roughly 10x slower than javac. The Scala compiler team is banking on a speed-up from generating Java 8 closures under the hood in Scala 2.12; that will mean less code for scalac to generate.
Beyond that, trimming down language features and streamlining the type system will provide further compile time reduction. As you say, lots of room for improvement ;-)
Look forward to Dotty, then, Odersky's new project.
http://jaxenter.com/dotty-scala-without-the-backwards-compat...
I don't find the idea of programming without typeclasses appealing – at all.
Ceylon: if-then-else and try-catch are not expressions, embraces null, unstable software, breaks backward compatibility in minor releases.
Kotlin: Embraces null, inexpressive type system limits the things the compiler can check, unstable software, breaks backward compatibility in minor releases.
Most of the other languages I've noticed mentioned in the comments so far (i.e. C, C++, Go, Ruby, Python, JavaScript, Haskell) don't target the JVM, and those that do just do it on the side.
Perhaps Ceylon or Kotlin will gain some traction and become a third alternative for the JVM -- all the other contenders have been around too long and lost momentum. I'd pick Kotlin over Ceylon since it can use the popular IntelliJ as a delivery platform.