It would be a little nutty to suggest that Golang 1.1 is going to give optimized C code a run for its money. Nobody could seriously be suggesting that.
What is surprising is that naive expressions of an "interesting" compute-bound program in the two languages are as close as they are.
Most C/C++ code --- the overwhelming majority, in fact --- is not especially performance sensitive. It often happens to have performance and memory footprint demands that exceed the capabilities of naive Python, but that fit squarely into the capabilities of naive C.
The expectation of many C programmers, myself included, is that there'd still be a marked difference between Go and C for this kind of code. But it appears that there may not be.
This doesn't suggest that I'd want to try to fit Golang into a kernel module and write a driver with it, but it does further suggest that maybe I'd be a little silly to write my next "must be faster than Python" program in C.
So whenever people talk about the expressiveness of Golang, it just seems like a design gone bad. The designers wanted a programming language with the expressiveness of Python and the speed of C; they ended up with a language with the expressiveness of C and the speed of Python.
I suppose the one thing that Go does well (compared to C++ or Java) is builtin concurrency and communication across tasks.
Of course this only really applies to long-running apps (web servers); if startup time matters, then Go certainly wins.
Perhaps I need to try Go out, but I just don't see what the selling point is.
I actually did write a fair amount of Go code (my primary languages are C++ and Python).
On real, non-toy programs it is significantly more concise than C++, in the ballpark of Python code.
This isn't visible in code snippets (under 500 LOC) like the toy raytracer, but trust me: you'll see a big difference in a 10k LOC codebase.
"they ended up with a language with the expressiveness of C and the speed of python."
This is just pure trolling. Apparently you have a Python variant of this renderer that matches the already impressive Go performance? Please reveal it.
AFAICT, the entire situation started only because the article was submitted to /r/Golang and /r/C++ with the trollbait title "Business Card Ray Tracer: Go faster than C++", and not because of anything in the article itself (which was actually a pretty good article).
I think the majority of languages in popular use are faster than Python. I believe that Go is popular with the Python / Ruby crowd because idiomatic Go is quite close to what they do already. I.e. you don't need to learn much to shift from Python or Ruby to Go. Using a language like Scala, for instance, is a much bigger jump.
Go doesn't do SIMD at all (see note 1). Personally I leverage Go coupled with the Intel Compiler (Go happily links with and uses very high performance C-built libraries, where I'm rocking out with SSE3 / AVX / AVX2).
To respond to something that Ptacek said above, many of us do expect Go to achieve C-level performance eventually. There is nothing stopping the Go compiler from using SIMD and automatic vectorization, it just doesn't yet. There is nothing about the language that prohibits it from a very high level of optimization, and indeed the language is generally sparse in a manner that allows for those optimizations.
*1 - For performance-critical code you are supposed to use gccgo, which uses the same intermediate representation as the C compiler, allowing it to do all of the vectorization and the like. Unfortunately, for this specific code gccgo generates terrible output, yielding a runtime that is orders of magnitude slower (albeit with an absolutely tiny binary). Haven't looked into why that is.
So it's not especially interesting that Go is in this space as well. The most surprising thing about Go is that its developers seem never to have heard of any of the above languages (with the exception of Java).
I am fine with boring languages. The first language love of my life is C. It's hard to get more boring than C. If a system I build is going to be clever or sophisticated, I'm fine with that being expressed in my code, rather than as the product of the environment I happen to be working in.
From another angle, consider that in HPC FORTRAN codes will often get the best performance - and FORTRAN doesn't have real pointers.
This doesn't show anything new, though; programming math/graphics is a perfect use case for C/C++, and you won't really benefit from anything Go has to offer. Some of its features actually become an annoyance for this kind of application. The biggest strength of Go doesn't really shine here either, as the simple parallelism needed for a ray tracer is a matter of a few lines of code in both C and C++.
There is nothing standard about the optimizations -- direct AVX use is enormously uncommon, even among extremely high performance code.
I don't see why people feel that C++ needs to be replaced. When I write C++ I have many levels of scope, and while it is dangerous, it is not impossible, and the empowerment makes me feel like a god.
Programming is not incremental. If we spend all day writing a Python back-end and it doesn't give the performance numbers, that day was a complete waste. When I think about C++, I know that code written in C++ will take me 100% of the way, even if it takes longer to write.
Here are some of my reasons:
1. It is impossible to write high-level code without dealing with (and often getting bogged-down by) low-level issues in C++. Why should I be forced to choose between different "smart" pointer types? Why should I be forced to decide how variables should be captured by a lexical closure? Sure, such decisions might make sense when you want to squeeze out a constant-factor improvement in performance, but they do nothing to help you get things done in the first place.
2. Error handling and recovery is needlessly and pointlessly complicated. You can throw exceptions, except for the places where you cannot, and once caught, there is not much you can do to fix the problem. It is so bad that the C++ standard library actually requires certain errors to not be reported at all.
3. Extending the language is impractical. Look at what it took just to add a simple feature, lexical closures, to the language: modifications to the compiler. At best C++ gives you operator overloading, but you do not even have the ability to define new operators. Lisp, Scala, and numerous other high-level languages give programmers the ability to add new syntax and new features to the language without having to rewrite the compiler.
I am not familiar enough with Go to say that it addresses any of this, but I know why I stopped using C++ and why I have not regretted that decision. All the above make writing reliable code difficult. I actually switched away from C++ when I needed my code to scale better, because improving the scalability required a high-level approach and I did not have time to debug low-level problems. Even C++ gurus wind up having to deal with dangling pointers, buffer overflows, and other needless problems with their code -- that takes time and mental effort away from important things in most cases.
"When I think about C++ I know that a code written in C++ will take me 100% of the way - even if it takes longer to write."
The same is true of any programming language if the amount of time spent on the program is irrelevant. I am not sure what sort of work you do, but for what I have been working on, getting things done is considered higher-priority than squeezing out a constant factor improvement. Nobody complains about faster code, but everyone complains about late, buggy, and incomplete code.
If you don't want to decide then write all your types with value semantics and pass by value. How types are going to behave when passed should be decided before you write 'class{}'. It's a semantic decision. For types that you're borrowing, and not writing yourself, pass a shared_ptr or refer to the documentation.
-- Why should I be forced to decide how variables should be captured by a lexical closure?
Same thing applies. Auto-capture [=] everything by value. If your type doesn't have any (sane) value semantics, use a shared_ptr or a reference.
-- You can throw exceptions, except for the places where you cannot, and once caught you there is not much you can do to fix the problem.
You can throw an exception anywhere safely in correct code. The default assumption in the language is "anything can throw, any time, anywhere", so if your code doesn't at least provide the basic or weak exception guarantee you're swimming against the tide. Doing so usually improves the encapsulation and structure of code imo anyway.
-- once caught you there is not much you can do to fix the problem.
Exceptions are more like hardware exceptions or page faults than typical error states. You should only throw when you cannot reach a state expected by the caller. Ultimately, it comes down to API design, not philosophy.
// Clearly the only sane thing to do here if you
// can't stat() the file is to throw an exception.
size_t get_file_size(string filename);
// Some flexibility. Could probably avoid throwing.
optional<size_t> get_file_size(string filename);
// Better still, and easy to overload with the above
optional<size_t> get_file_size(string filename, error_code&);
... the better your API, the better you can avoid having to throw. This isn't a new problem, either; if you look at the C standard library there are many deprecated functions that provide no means of reporting an error at all except to return undefined results.

-- Extending the language is impractical.
Writing STL-like generic algorithms is trivial. Writing new data structures is trivial. Existing operators can be overloaded to augment scalar types or, more ambitiously, re-purposed to create DSLs. You have user-defined literals, initializer lists, and uniform initialization.
How would you like to extend further without it being completely alien to the existing language?
-- I actually switched away from C++ when I needed my code to scale better, because improving the scalability required a high-level approach and I did not have time to debug low-level problems
You should write more about this.
-- C++ gurus wind up having to deal with dangling pointers, buffer overflows, and other needless problems with their code
Not really bad pointers and buffer overflows these days. More slogging through pages and pages of compiler errors and hunting for library solutions to problems that should really be solved in the standard library (For me lately: ranges, matrices, more complex containers).
In any case, all languages have their share of friction. Look at that new Bitcoin daemon written in Go that hit the front page a few hours ago. The author had to debug 3 separate Go garbage collector issues.
No it's not. That's what the original post was arguing (I think successfully).
That was my thought while reading article; Rust seems like the answer here. I'm coming from the opposite direction than the OP: I'm unwilling to give up the expressiveness of Ruby and friends in order to write micro-optimized C++ code, and I'm hoping Rust will give me the best of both worlds.
> Nimrod is a statically typed, imperative programming language that tries to give the programmer ultimate power without compromises on runtime efficiency.
In my spare time, I’m working on a statically typed concatenative language called Kitten[2] with similar goals.
I believe you can turn off Nimrod's GC as well, but you lose any guarantee of memory safety when you do. Not to mention -- aren't all pointers in Nimrod reference counted? That's going to take a fairly significant performance toll due to cache effects alone.
http://www.reddit.com/r/IAmA/comments/1nl9at/i_am_a_member_o...
How so? To me the callout to Rust diminished the entire article, because it made it almost anti-Go for no particular reason (à la "Go is poopy anyway, but maybe [some other unproven option] will be the savior"). The same sort of nonsense occurred with the statement "on my not-very-fast-at-all Core i3-2100T" regarding the C++ performance, which is just narrative nonsense given that the same circa-2012, AVX-equipped processor was what yielded his Go numbers.
The author demonstrated C code that was 2x faster than Go code run through the standard gc chain, when the C code is run through a very mature, hyper-optimized C compiler -- which is actually impressively good for Go. They then try to pound home the point by inlining AVX, which is absolutely ridiculous for such a language comparison. It borders on pure trolling, and I'm really surprised that so many people are falling for it.
It might seem that way if Go and Rust were competitors, but given that they seem to have pretty different philosophies and goals, that doesn't seem to be the case.
Yet again, the fallacy that comparing implementations equals comparing languages.
In the left corner, the 6g compiler toolchain, whose optimizer the authors admit still needs lots of improvement.
In the right corner, the battle-tested GCC optimizer, with circa 30 years of investment, aided by language extensions that are not part of any ANSI/ISO standard and are not language-specific.
Of course "C++" wins.
People, just spend time learning about compiler design instead of posting such benchmarks. And before someone accuses me of being a Go fanboy, I think my complaints about Go's design are well known by some here.
Presumption: Writing Go code is more fun than C++ code.
Demonstration: You can write performance Go code that's not too far from C++ code.
Result: Cool, here's a more fun than C++ language I can use as a step down the complexity path when I need performance.
Or, like tptacek said:
Presumption: Writing correct Go code is more fun than writing C++.
A ray-tracer workload can be fully split across hardware cores. Only the input data needs to be shared, and even then it mostly doesn't create any race conditions. The algorithm can even run on a GPU.
This is a handicap for Go. Go wants to solve safe and easy concurrency without race conditions for complex logic, so (IMO) Go has to accept some overhead (or sacrifice some performance features) for that goal. But in the ray-tracer example, this ability is mostly not required.
So, please just use nginx to host some static HTML files for your blog, and fetch your discussion boards asynchronously.
Wordpress + W3 Total Cache can handle a hundred HNs with ease. There is absolutely no reason to go to the past, and poorly configured blogs don't justify that Luddite argument.