Python 3 has some nice features and some that could have been better designed, but personally I don't think it's as bad as this author makes it out to be. It's pretty much a logical progression of the 2.x series. Python 3 is being adopted, slowly. I still think it's simply a matter of time, as Linux distributions have plans to move on. No one expected it to go quickly.
And I like that Python 3 makes Unicode versus bytes explicit. There's working with sequences of symbols (for humans) and working with bytes (for machines). I often wished for this when working with binary data and building hardware interfaces in Python, since there is a lot of confusion around bytes/unicode in Python 2, even in libraries...
It was interesting to read some discussion and arguments for/against 3.0, but it could have done with a little less "Python is now doomed" attitude...
When I wrote Alembic just recently, I ran 2to3 on it and it literally needed two "file()" calls to be changed to "open()" and that's it.
The contradiction in Armin's post is that he starts off with "we don't like Python because it's perfect, we like it because it always does a pretty good job, etc. etc.". Aside from Armin's very specific issues with codecs (I've never used codecs that way) and with the filesystem (OK, he's convinced me work is needed there), I don't quite see how 2to3 is any different from "not perfect, but does the job at the end of the day".
Also, we really shouldn't be fixating on 2to3 very much - it is already deprecated. We should be thinking/worrying very, very much about 3to2, since we really should be shooting to have all our source in Python 3 within a couple more years.
I have a hard time seeing how one can use 2to3 on an everyday basis: it makes testing, distribution, etc. that much more painful because it is so slow (2to3 takes as much time as compiling all the C code from numpy, for example). It also removes one of the big pluses of a language like Python (modify code, see the result right away).
However I'm a big proponent of 'breaking with the past' once in a while, to fix issues that have snuck into the language/library/system, and to clean up the cruft. Yes it will bring some frustrating porting, but the end result will be a cleaner more focused language.
In contrast, JavaScript does not have a "standard" implementation and doesn't ship with any libraries. Sure, there's Node.js and probably others too.
Once you have a serious project, with dependencies and build systems and source control, it's not a very big issue to install a few libs. But for small projects, Python's batteries come in handy. And some of those small projects turn into big projects.
What comes to Python 2 vs. Python 3, I feel it's a "can't make an omelette without breaking eggs" kind of issue. Unfortunately many people are running business critical applications with Python 2 and are not willing to put in the effort to migrate their code to Python 3. This "don't fix it if it ain't broken" attitude has slowed down Python 3 adoption.
http://sayspy.blogspot.com/2011/01/my-semi-regular-reminder-...
I think he's right that there needs to be less of a gap, possibly with a Python 2.8. Migrating to 3 isn't the problem; it's maintaining both versions until 3 becomes dominant and 2.x support can be dropped. The Python core team is essentially deferring the difficulty of compatibility to library maintainers. And since Python 3 is essentially a new language, why not just use a py2js if/when it emerges? It'd be just as difficult, with the additional benefit of entering a more mainstream community. So yes, let's lessen the gap.
I also agree with you in that Python 3 just needs time. And once 2.x is no longer supported, let's also remember this lesson: don't make such big leaps in language evolution.
This isn't all that similar to perl5->perl6, which is a big leap. The biggest problem is that they changed the default string type.
To my thinking, Python, Ruby, and Perl make people productive primarily because of the availability of tons of high-quality packages that "just work". The Python Package Index (http://pypi.python.org) lists 18 thousand packages now. Many are very high quality and require essentially no "impedance matching" to use with Python 2.7 except "import package". If there's a genuine issue with a package, you can usually use a several-line monkey-patch and leave the package source completely untouched. Beauty.
Put simply: there's no way for a language design to make writing code easier than not writing code. IMO, this is why, despite the warts, these languages are winning. JavaScript doesn't have a standardized module/import system, so its packages are fragmented across a dozen frameworks. But this may change if the world settles on "one framework to rule them all" (or maybe two: jQuery for UI and node.js server side).
But Python 3 breaks many of the available Python 2.X packages, and in exchange for improvements that in most cases seem more like tweaks than major design fixes. Things that should be fixed in both branches (e.g., OpenSSL cert validation support) are now relegated to ad hoc patches to Python 2.X, because all the development effort is going into the 3.X series now.
Finally, the biggest improvement to Python IMO hasn't come from the core team at all: it's the absolutely brilliant work being done by the PyPy team. I would love to see "Python 4" merge some of the ideas from the 3.X branch in a fully compatible way with Python 2, and move the standard implementation to PyPy. Among many other benefits, this would allow the Python community to start seriously exploring adding static type-checking facilities to the language, which would make it far more suitable for larger projects. (I'm not saying make Python into Java, but it would be nice to be able to declare types as one can in modern Lisp implementations, and have the compiler both check correctness and optimize using such hints.)
Perl 6 is doing that. It is going to be one major incompatible release. But that's like compressing two decades of deprecation cycles into one release.
I've only just started looking at Python, but I wasn't aware that it has true CLOS-style multimethods (or multiple dispatch). I know that there are ways you can add multiple dispatch to Python - but is it really accurate to say that the entire language has a design that is based on multiple dispatch?
Note that I'd be rather pleased to find that multimethods are an integral part of Python - they were one of my favourite features of CLOS and I still miss them.
My favorite example: suppose you have a naive exponential-time recursive fib() function. I can write "fib = memoize(fib)" and suddenly your recursive calls go to my new wrapped function, which references your original implementation inside its closure. This is only possible because your recursive implementation dynamically dispatches by name. I have turned your O(2^n) implementation into an O(n) implementation.
In an unsafe systems language like C, I would only be able to accomplish the same thing with some severe memory hacks, and in a language like Java or C# I don't know how I would be able to do the same thing without some serious involvement in runtime reflection tools, and maybe even some decompilation.
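A minimal sketch of the rebinding trick, using Fibonacci (where the naive recursion really is exponential); `memoize` here is a hypothetical helper, not stdlib:

```python
import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)
        return cache[n]
    return wrapper

def fib(n):
    # Naive exponential-time recursion. Each recursive call looks up
    # the name "fib" at call time, so rebinding the name below also
    # redirects the recursive calls.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib = memoize(fib)  # recursive calls now hit the cache
```

After the rebinding, the original implementation survives only inside the wrapper's closure, and every recursive step is served from the cache in linear time.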
Dynamic binding (in various forms) is arguably a pretty common language feature and generally nothing like as powerful as full multimethod implementations (let alone what is possible in CLOS).
C's dynamic dispatch hack isn't so terrible though--e.g. Microsoft has been doing it for a long time by sticking in a "useless" instruction at the start of every function. http://blogs.msdn.com/b/oldnewthing/archive/2011/09/21/10214...
You code to an interface and use an IoC container or dependency injection framework to provide an instance of an object containing the required functionality.
So yeah, lots of reflection and tools :)
This has the side-effect of making multiple dispatch work with the same syntax reasonably well http://en.wikipedia.org/wiki/Multiple_dispatch#Python
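For the curious, a toy version of the decorator-based approach described at that link might look like the sketch below (my own simplified take, dispatching only on positional argument types, not the article's actual code):

```python
# Global registry mapping function name -> {arg-type tuple: implementation}.
_registry = {}

def multimethod(*types_):
    def register(func):
        name = func.__name__
        _registry.setdefault(name, {})[types_] = func

        def dispatcher(*args):
            # Pick the implementation matching the runtime argument types.
            impl = _registry[name].get(tuple(type(a) for a in args))
            if impl is None:
                raise TypeError("no multimethod match for %r" % (args,))
            return impl(*args)
        return dispatcher
    return register

@multimethod(int, int)
def combine(a, b):
    return a + b

@multimethod(str, str)
def combine(a, b):
    return a + " " + b
```

Both decorated definitions share one registry, so the single name `combine` dispatches on the types of both arguments at call time.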
It might amuse you to know that Perl 6, which isn't used either, has multi methods. :-)
Edit: Dynamic dispatch otoh is in most (all?) scripting languages.
As well as some other discussion here: https://plus.google.com/115662513673837016240/posts/9dLUJxg8...
This is why I've always found it difficult to love Python. It just didn't seem to me that Guido was familiar enough with previous language designs or had a sufficiently refined sense of language esthetics to be a world-class PL designer. Over time the community has built Python into an extremely practical and useful tool, but I don't think I'll ever derive the same sense of pleasure from writing Python code that I do from languages with a stronger unifying concept like Ruby or Lisp or even OCaml.
I used all three on non-trivial projects, and liked Python the best. It was better at handling complex data structures than the other two. Tcl was an easier language for my target audience (scientists/non-professional programmers) and it was easier to embed and extend Tcl, but Python's module and object system made up for it.
By the late 1990s, others in my field were already shipping Python-based applications, using Python bindings to Motif.
IMO, people didn't take notice of Python because of the "strange indentation", because high-level languages are seen as being too slow for real work, and because people coming from a statically compiled language often want the assurance that compile-time type checking gives.
[1] http://jimneath.org/2010/01/04/cryptic-ruby-global-variables...
When v3 was announced, IIRC even the Python folks themselves suggested that people just continue with v2.x until v3 became mainstream, which it never did. In fact, I was surprised to see negative criticism about Python 3. It seems to me that nobody has been using Python 3, and therefore nobody is complaining about it either.
When it was announced, people did suggest everyone continue with 2.x, but not that everyone simply wait for it to become mainstream (that clearly wouldn't work). No one expected users to drop everything and port right away. As your dependencies come up to speed with 3, try your project with them. Create an experimental branch. Do something to keep up. You don't need to halt your own progress for it, but you shouldn't sit on your hands.
I've been using Python 3 at work for around 2 years now, writing test frameworks and tools in a C++ environment (working on a historical tick database). A lot of the web people are stuck on 2, though that has been changing for a while and it's only getting better there; plenty of other areas have been able to use Python 3, and have been doing so.
I'll go and sit back down in my corner.
There are some rough edges still. Our (marketing) website runs on Django, using Python 2.7. There's been enough progress on Py3k support for Django recently that I'm hopeful we can migrate that by mid-2012. And I'd love to have solid Py3k support for a couple more libraries, like boto for EC2/S3/AWS.
All in all, for the particular things we need, Python 3 is practical now. We have paying customers whose services run on software written in Python 3.
I don't know... Python in the early 90s looked pretty much the same as it does now. Unless you mean that some features (or lack of them) required inelegant workarounds?
I think most machines were just not powerful enough yet in the 90s to make Python a viable solution for many problems. As computers got faster, that became less of an issue. Also, there was already a scripting language with a large following back then (Perl, naturally). Whether it was "ugly" probably had little to do with it. (Quite the contrary in fact, I recall that Python was often perceived as clean, elegant, concise and very readable.)
Adoption seems very slow from the various libraries, and without those people just won't move over. And if that's the case, then the language will stagnate, along with the myriad of great libraries that make it so excellent.
Python 2.x suits me just fine right now, it is a pragmatic language that lets me get things done quickly and predictably. But I would be lying if I didn't admit to gazing over Ruby's way now and then and thinking that the grass sure looks green over there.
It started slowly, like we expected, but I think its acceleration lately has outpaced what a lot of people thought would happen. The number of Python 3 packages on PyPI is steadily rising [0], the number of Python 3 installers downloaded from python.org is rising with each version [1], and the number of projects announcing Python 3 support in places like reddit.com/r/Python is rising every day.
[0] http://dev.pocoo.org/~gbrandl/py3
[1] http://i.imgur.com/SLFDL.png - monthly download numbers for Windows installers for all downloaded versions over the last year (it's a rough draft, I just threw the download numbers in Excel quickly one day).
As for my anecdotal experience with 2to3: I've recently been working on porting rpclib to Python 3. After skimming the diffs it produced for a simple `2to3 src/rpclib` call, I chose to ignore most of the transformations it applies.
Replacing commas in except statements with the "as" keyword, or adding parentheses where missing, works just fine. But wrapping every call to dict.keys() inside a list() call? That's bold.
Once 2to3 is tamed[1], I think the code it generates can be maintained. Certainly beats having to get the current exception from sys.exc_info.
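For context, the list() wrapping exists because dict.keys() in Python 3 returns a dynamic view rather than a list; 2to3 can't prove a given call site only iterates the result, so it converts conservatively. A quick illustration:

```python
d = {"a": 1, "b": 2}

ks = d.keys()             # Python 3: a live view, not a list
# ks[0] would raise TypeError: views don't support indexing

ks_list = list(d.keys())  # what 2to3 conservatively emits
ks_list.sort()            # list operations work again

# The view is live: it reflects later mutations of the dict,
# which is exactly the behavior change 2to3 is guarding against.
d["c"] = 3
```

If you know the result is only iterated, the wrapping is pure noise, which is why reviewing 2to3's diffs by hand pays off.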
I feel that my position is actually the majority one in the Python community -- language users who are stuck on the branch the core developers consider unfashionable, which adds more resistance to migration to Python 3.
All along, Python 2.7 is going to be maintained and bugs are going to be fixed. It is perfectly understood that the 2.x branch is currently by far the more used and deployed, and there are no plans to abandon it in terms of support. It just won't get new features.
> Now this all would not be a problem if the bytestring type would still exist on Python 3, but it does not. It was replaced by the byte type which does not behave like a string.
I was under the impression that bytes is just an array of bytes and provides pretty much what `str` provided. What big thing is missing from that interface?
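The differences are mostly around indexing, iteration, and mixing the two types; a quick Python 3 session shows where bytes stops behaving like str:

```python
b = b"abc"
s = "abc"

assert b[0] == 97              # indexing bytes gives an int, not b"a"
assert s[0] == "a"             # indexing str gives a 1-char string
assert list(b) == [97, 98, 99] # iterating bytes yields ints

assert b != s                  # bytes never compare equal to str
# b + s would raise TypeError: can't mix bytes and str

assert b.decode("ascii") == s  # an explicit decode is required
```

Code ported from Python 2 that slices single "characters" out of a byte string, or concatenates bytes with text, hits these differences immediately.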
Since when is correctness not much?
To be precise, XHTML promised that pages would render faster, and the fact that they didn't turned out to be the browsers' fault, not the markup's. PyPy solves the "render fast" problem for Python 2. What problem does Python 3 solve? Unicode? No...
I'm not sure XHTML ever promised faster rendering; I'm sure many people (including me) _assumed_ it would render faster. The truth is that XHTML rendered considerably more slowly! These days XML parsers have improved, but for a long time XHTML made partial rendering while loading (among other things) harder, which meant loading an XHTML page took much longer than the plain old HTML version.
I doubt rendering XHTML will ever be faster, at best it is/will-be not measurably slower than HTML.
So you have the choice of converting at source, and paying the price there, or converting during processing and paying the price there.
One could have an abstract 'String' type with concrete subclasses (ANSIString, UTF8String, UTF16String, EBCDICString, etc)
Assuming that any to-be-handled character strings can be round-tripped through UTF-8 (and that probably is a workable assumption), any function working with strings could initially be implemented as:
- convert input strings to some encoding that is known to be able to encode all strings (UTF8 or UTF16 are obvious candidates)
- do its work on the converted strings
- return strings in any format it finds most suitable
Profiling, one would soon discover that certain operations (for example, computing the length of a string) can be sped up by working on the native formats. One then could provide specific implementations for the functions with the largest memory/time overhead.
The end result _could_ be that one can write, say, a grep that can work with EBCDIC, UTF8 or ISO8859-1, without ever converting strings internally. For systems working with lots of text, that could decrease memory usage significantly.
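A toy sketch of the idea in Python (all class and method names invented here): generic operations round-trip through UTF-8, and subclasses override hot paths on their native format:

```python
class String:
    """Abstract string; subclasses store text in a native encoding."""

    def to_utf8(self):
        raise NotImplementedError

    def length(self):
        # Generic fallback: convert to UTF-8, decode, count code points.
        return len(self.to_utf8().decode("utf-8"))

class Utf8String(String):
    def __init__(self, data):
        self.data = data  # bytes, already UTF-8

    def to_utf8(self):
        return self.data  # no conversion needed

class Latin1String(String):
    def __init__(self, data):
        self.data = data  # bytes in ISO-8859-1

    def to_utf8(self):
        return self.data.decode("latin-1").encode("utf-8")

    def length(self):
        # Specialized fast path: Latin-1 is exactly one byte per
        # character, so no conversion is required.
        return len(self.data)
```

Here `Latin1String.length()` never touches UTF-8 at all, while any operation without a specialized override still works correctly through the generic round-trip.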
Among the disadvantages of such an approach are:
- supporting multiple encodings efficiently will take significant time that, perhaps, is better spent elsewhere.
- the risk of obscure bugs increases ('string concatenation does not quite work if string a is EBCDIC, and string b is ISO8859-7, and a ends with rare character #x; somehow, the first character of b loses its diacritics in the result')
- a program/library that has that support will be larger. If a program works with multiple encodings internally, its working set will be larger.
- depending on the environment, the work (CPU time and/or programmer time) needed to call the 'correct for the character encoding' variant of a function can be too large (in particular, for functions that take multiple strings, it may be hard to choose the 'best' encoding to work with; if one takes function chains into account, the problem gets harder)
- it would not make text handling any easier, as programmers would, forever, have to keep specifying the encodings for the texts they read from, and write to, files and the network.
[That last one probably is not that significant, as I doubt we will get at the ideal world where all text is Unicode soon (and even there, one still has to choose between UTF8 and UTF16, at the least)]
I am not aware of any system that has attempted to take this approach, but would like to be educated on them.
I think Perl 6 still has a chance, though. No reason to dismiss it just because it's developing slowly.
The compatibility mode doesn't do Perl 6 much good, if you ask me. There are already too many ways to write Perl, if you ask a Python programmer. It's like writing C in C++.
However, CPAN compatibility would be hugely important to a usable Perl 6, by simple fact that no other language has the breadth and quality and availability of libraries to rival the CPAN.
I think there is a CPAN compatibility mode, and Perl 5 programs are expected to run on Perl 6 compilers. Also, Perl 6 is a total redesign, but one that preserves the 'perl spirit' and its original design principles.
OTOH, Python is taking the path of incrementally correcting its problems at the expense of breaking compatibility as and when needed. The problem is that every time you break backwards compatibility, you're forcing an upgrade timeline on users, and during that time you are allowing rival languages and communities to flourish.
If people are using Python because it just 'works', then in its absence they will use something else too if it 'works'.
Personally, if I knew that a particular tool was going to keep breaking my code base every now and then, I would avoid it at all costs.
That's long been the plan, but the existing couple of proofs of concept have bitrotted. It won't happen any time soon.
All of this depends on Perl 6 being generally usable and having a working Perl 5 compatibility mode.
The `estr` suggestion is quite welcome.
What is he talking about?
Not part of the language as far as I can see; it's more a description of how you do this in Python:
int(n) --> n.__int__()
format(x, spec) --> x.__format__(spec)
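In other words, those built-ins are thin dispatchers onto special methods, which any user-defined type can hook. A small sketch with an invented class:

```python
class Meters:
    """Toy quantity type demonstrating the dunder-dispatch protocol."""

    def __init__(self, value):
        self.value = value

    def __int__(self):
        # int(m) dispatches here
        return int(self.value)

    def __format__(self, spec):
        # format(m, spec) dispatches here; delegate the numeric
        # formatting to the underlying float, then append a unit.
        return format(self.value, spec) + " m"

m = Meters(3.75)
```

So `int(m)` and `format(m, ".1f")` work on `Meters` exactly as on built-in numbers, without the built-ins knowing anything about the class.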
> I had the situation that when I logged into my remote server the locale was set to the string “POSIX”.