Personally I think the driver is going to turn out to be type annotations. When you see the enthusiasm for adding type annotations to JS (TypeScript, Flow, etc.) it's easy to see that translating to Python. Static analyzers can be a huge help (you can already get a taste of it with PyCharm) and I for one would like to move away from "traceback driven development" where you just have to keep re-running the code until all the preventable glitches are worked out...
We have a fairly large Python codebase here. I've been wanting to move away from Python for a whole host of reasons, a top one being that it's dynamically typed. However, when mypy was released, that was enough of a reason to port from Python2 to Python3. Now we have a mostly-annotated codebase, and I'm a lot happier.
We'll probably move away from Python eventually (I still don't think it's an especially good language, and type checking with it will never be as useful as with a language where it's built in and required), but it's not as pressing of a concern.
I think type checking turns Python into a bearable language.
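For anyone who hasn't tried it, a minimal sketch of what that checking looks like (the function is illustrative, and the error wording is approximate, not exact mypy output):

    def normalize(line: str, width: int) -> str:
        # Annotations are ignored at runtime but checked statically by mypy.
        return line.strip().ljust(width)

    normalize("hello", 10)    # fine
    normalize(10, "hello")    # mypy reports an incompatible-argument error
                              # before the code is ever run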
I can't use just Python 3 because Python 2 is still widely used. I can write code that works on both, but now I'm using the worst of both worlds, and even worse, I now have to test on both. And that'll last until Python 2 goes away completely, and when has a language ever gone away quickly?
These aren't fun problems. Improving python 3, making it more attractive, that's fun. But that's problem 2. Making migration less painful for me should be problem 1. Who's working actively on that?
What particularly grinds my gears is the apparent disregard for migration in the design. Take the changes to the print statement. Often this is the only thing that prevents my existing code from working in python 3. And it's in my muscle memory so I always get snagged when debugging on python 3. For what? Take a read through the rationale: https://www.python.org/dev/peps/pep-3105/#rationale. Most of the benefits you could have had if you'd named the new print function something else. I can understand that you regret adding it in the first place. What's even worse though? Adding it and then removing it in an incompatible way.
If you want to make python 3 more attractive maybe make migration easier before going on to the fun improvements? There are low-hanging fruits for migration too. How about adding back the print statement?
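For what it's worth, the usual way to sidestep the print problem in code that has to run on both is the __future__ import; a small sketch:

    from __future__ import print_function

    # Now print is a function on Python 2.7 as well, so the same line
    # works unchanged on both interpreters.
    print("migrating", 42, "modules")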
Another critical mistake Python 3 made was unicode support. For people who work deeply in i18n, UTF-8 encoding is the only practical solution. Python 3 strings should use UTF-8 internally, much like Go. Just for reference, Google stores almost all data as protobufs, which only support UTF-8 for strings. That proves the point.
I don't understand where you see the mistake in Python 3 with unicode. What encoding Python internally uses to store strings doesn't really matter. What's important is that it is always known what encoding is used. This was unclear in Python 2 and Python 3 fixed this.
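Concretely, the "always known" part looks something like this in Python 3 (an illustrative snippet, nothing more):

    text = "smörgåsbord"            # str: a sequence of unicode code points
    data = text.encode("utf-8")     # bytes: conversion names its encoding explicitly
    assert data.decode("utf-8") == text

    # Mixing the two without an explicit encode/decode raises a TypeError,
    # instead of silently working until the first non-ASCII byte shows up:
    # b"prefix-" + text   # TypeError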
If nothing else, I feel that backward compatibility should be given to a language's canonical "hello world" example.
To me this is the main pain point and I'm surprised that people are not talking about this.
2to3 and 3to2 are too kludgy and writing native py2-and-py3 compatible code is quite painful and requires a number of workarounds (e.g. unicode/str/bytes type, different methods on dictionaries...)
Why is changing 'print stuff' to 'print (stuff)' such a big deal?
I think 2to3 solves a lot of the print cases https://docs.python.org/2/library/2to3.html
I don't think it could (easily) be resolved by treating it like a function when followed by parens and a statement otherwise. For illustration, consider what happens when you use parens after print in Python 2: the parens are treated like they would be in a mathematical expression — only there for the sake of order of operations. The parens effectively disappear when parsing.
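Concretely, on a stock 2.7 versus 3.x interpreter:

    # Python 2: the parens are plain expression grouping, so two arguments
    # become a tuple.
    print("a", "b")                        # -> ('a', 'b')

    # Python 3: print is an ordinary function with keyword arguments.
    print("a", "b")                        # -> a b
    print("a", "b", sep="-", end="!\n")    # -> a-b!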
I've been deploying python for almost 20 years, and I haven't had a single performance issue that was caused by GIL and couldn't easily be worked around. In my experience, building big multithreaded applications with shared memory access isn't great design anyway. I prefer systems that share as little as possible, and can therefore scale beyond a single machine when needed.
So I think the focus on the GIL is a false quest. It isn't a bad compromise to a thorny implementation issue (it allows certain performance optimisations, without forcing you to worry about re-entrancy and atomicity when writing simple code). Removing the GIL will be a big thing for pundits, I think, but won't make much of a difference to big python deployments. It certainly isn't the killer app for Py3 adoption.
I think the difference is that Node embraces the single-threadedness as part of its architecture. I don't use Python that much now so I'm not sure if it's changed, but I found it clumsier by default.
I know there's Twisted and other nice evented/multiprocess libraries that make Python closer to Node, but it surprises me that a batteries-included language with a GIL has to be, as you say, worked around. I don't work around single-threadedness in Node, I work towards it! And the platform encourages me to do so.
EDIT: I'm not sure why you seem to have been downvoted (man, I hate HN sometimes). Here's an upvote to counter it.
I agree that Node has its single-threadedness baked in, and Python doesn't clearly have one way to do it. Still, I think that multithreaded programming isn't very scalable in the long-run. Learning Erlang and figuring out that message passing buys you much more headroom was important to me. I like that you can do simple things in Python simply, but it is important to have some understanding of why different languages solve the problem in different ways, and what approach is best for a problem domain. My sense is that Node's default approach is actually very good for a large range of situations, a larger range than multithreading, anyway. But that could be because my use-cases are often either UI or CPU bound.
In your example, is your domain primarily internet servers (what I assume is Node's sweet spot for commercial deployment)?
So yeah, the GIL is not an issue there. You'll just want something like celery for backend jobs, which is nice because it also maintains job state/info that survives process death - which globals in twisted won't do for you.
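A rough sketch of that pattern, assuming Celery with a Redis broker; the broker URL and the task body are placeholders, not a recommendation:

    # tasks.py
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def resize_image(image_id):
        # The heavy work runs in a separate worker process, so the GIL in
        # the web process is irrelevant; with a result backend configured,
        # task state also survives process death.
        return do_resize(image_id)   # do_resize is a placeholder helper

    # In the request handler:
    #     resize_image.delay(42)     # returns immediately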
So, basically, it's a much more resilient way to do things, particularly if the front end of your application is request driven. Doesn't fit all models though.
(Also, Django + Django REST Framework is pretty awesome).
I'm one of those not so lucky souls that could benefit from having the GIL removed. I do a load of concurrent read only access from multiple threads of execution on a large object in memory. At the moment I'm forced to fork using multiprocessing. It works, but it sucks to have the overhead.
Though, as you say, it's really a non-issue for most workloads.
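For reference, the fork-based workaround looks roughly like this (the data structure and worker are placeholders). On platforms that fork, the children read the parent's pages copy-on-write, and the overhead complained about is mostly the extra processes plus pickling results back:

    import multiprocessing as mp

    # Hypothetical large read-only structure, built once before forking.
    BIG_INDEX = {i: i * i for i in range(10 ** 6)}

    def lookup(key):
        # Each worker reads the structure inherited from the parent process.
        return BIG_INDEX.get(key)

    if __name__ == "__main__":
        with mp.Pool(processes=4) as pool:
            results = pool.map(lookup, range(1000))
        print(len(results))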
Sadly, I agree with that. There needs to be either a big stick (Python 2 being really bad, but it's actually pretty good) or a large carrot ("Oh look, a 3x performance improvement!").
Something like a carrot was presented during PyCon, and that was gradual (optional) typing. It reduces the number of cases that need unit tests, makes code more readable for new developers, helps with IDE support, and of course assists general static checkers. According to Guido, 3.5 should have partial support for it.
But in general I would have liked any one of these instead (some are contradictory, arbitrarily hard, or downright impossible):
* At least 2-3x performance improvement
* No GIL
* Merge greenlet library in the core (to make eventlet or gevent work)
* Some kind of an ahead of time compiler that bundles just the needed interpreter library parts into an executable
* Firefox and Chrome agree to add browser support for it
* Mobile support (native Android support or Apple ditches Swift and uses Python instead).
I think 2x-3x perf improvement /is/ possible. I mean we have the example of javascript that went from terrible perf to almost native parity. Of course Mozilla & Google each dedicated an entire team to get there.
We already have the example of PyPy too, which /today/ averages a >5x speedup over CPython (http://speed.pypy.org/)!
It didn't have the ecosystem of C extensibility and mature extensions that Python has, which drives much of Python's adoption in e.g. scientific computing (and is a big factor in PyPy having trouble making inroads).
Doing any non-ASCII string-processing in Python 2 with any regularity is a more than regular-enough beating for me.
Can you elaborate on this? Unicode processing is the same on 2.x and 3.x for the most part. There are some differences in interpreter internals (the internal string representation changed, the default type of string literals changed, and bytes became a distinct type), but other than that the unicode support is more or less equivalent.
* "yield from"
* Unicode support (I'm German and the clear distinction between bytes and unicode really makes my life easier)
* function annotations (PyCharm interprets them and uses them for static type checking)
* cleaned up stdlib (not only names, but also features)
* asyncio
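Since "yield from" tops that list, a minimal sketch of what it buys you; the helper generators here are made up:

    def stripped_lines(stream):
        # Hypothetical sub-generator.
        for line in stream:
            yield line.rstrip()

    def all_lines(streams):
        for stream in streams:
            # Python 3.3+: delegates to the sub-generator, forwarding
            # send()/throw() and its return value automatically.
            yield from stripped_lines(stream)

    # The Python 2 spelling of just the iteration part:
    #     for line in stripped_lines(stream):
    #         yield line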
Library support is very good nowadays, pretty much all of the important libraries are either ported to Python 3 or have an active fork. Even OpenStack is working on Py3 support.
- Laptop running Ubuntu Trusty: Python 3.4
- Production servers with Debian Wheezy: Python 3.2
- Laptop running Fedora 20: 3.3.2
You don't usually install Python on Linux from upstream; you install the version provided by your distribution, and unfortunately Python 3 has differences between these versions (e.g. "yield from" was introduced in 3.3, IIRC), whilst 2.7.x has been unchanged for a long time.
Even Apple includes 2.7 now (handy when I distribute a game made with Pyglet); so I'm excited too, but Python 2 still makes my life easier.
Stop using Python 2. All of the justifications I'm seeing in this thread are complete non-starters. If you want to use a 3.3 feature, install Python 3.3. Are you seriously suggesting that we revert to the problems of 2.x, discarding all the effort that's been dumped into Py3 compat over the last 7 years, because you don't want to install a software package? It's really common to need to add extra sources to a package manager for new versions of things, and it's also common to need to build your own packages. It shouldn't be that hard. Be grateful that at least there's a possibility you can use the system Python, since Ruby communities don't really have that luxury.
In the meantime, it's always possible to install a self-contained Python 3.4 interpreter and a virtualenv. It's pretty straightforward, actually. The only thing you'll miss is distribution-provided binary packages (like lxml), but pip/setuptools compiles them for you.
The main problem for me remains dependencies. Python makes it so easy to integrate stuff via pip/easy_install, but that's a double edged sword in this case: Since there's such a huge abundance of great libraries for anything out there, and they're so easy to install - people use them. And now I'm stuck with non py3k compliant libs in a big codebase (some of them not trivial), and suddenly there is a much bigger cost to switching - migrating away from, or porting of all those deps.
So now all those up-sides which are nice to have but are not real game changers (I'm handling Unicode fine, albeit in a clunky and ugly way, but it works) are not worth the pain. And that's sort of a chicken-and-egg thing.
If there was a much bigger gain from switching, that will outweigh the cost - e.g. better concurrency, better overall performance - it would be worth the pain. Otherwise, I guess I might use py3k for new stacks I'll build from scratch, but not for the current stack I work with.
https://pypi.python.org/pypi/python-bond
It spawns a second Python process that you can call/execute code from almost invisibly, first-class exceptions included. The main difference from other similar solutions is that it supports callbacks: a remote function can call back into new code, and can do so recursively.
You can intertwine old and new code.
The main drawback is that it's not efficient for small/lightweight functions.
I think if someone hasn't moved to Python 3 yet, no iterative change is really going to get them to do it. It's OK if old software is resistant to breaking changes; this is about building a good ecosystem for the software to come. If it was about making things easy for people who've already written their software, Python 3 would not have been released in the first place.
Honestly I don't really see why anyone cares about whether Mercurial is Py2 or Py3, since it's not a library and isn't holding up new development. Mercurial can use the Py2 interpreter to its heart's content and it shouldn't have any effect on the prosperity of Py3.
The Python community needs to get serious about pushing adoption of Py3 by the distros, and then we can put this navel gazing to rest and move on with Py3 finally realized as the standard.
Having people on older versions of the interpreter is a problem, because it divides the community in half when it comes to knowledge, skillset, capability, etc. It also confuses outsiders who are looking at the situation and don't understand which version they should move forward with, because it's not clear which one will be supported by the people they want to hire.
There is a real cost to having the community split - which is why so much effort is put into trying to pull it back together.
The differences between Py2 and Py3 are not substantial enough that it "divides the community in half" in any tangible fashion. Yes, it's possible new software may not take Py2 compatibility into account and you'll need to switch to Py3 to get new stuff, but that's been the plan all along, right?
183 of the 200 most-used libraries are Py3 compatible now, including major systems like SciPy and Django. It's not 2009 anymore, and we should stop talking like it is. Py3 is the way forward and the Python Foundation needs to quickly and firmly deconstruct any sign of flapping on that front if they hope to convert the remaining Py2 holdouts.
I believe it would be very damaging for the community if Python backpedaled and said "OK, we know you just spent 7 years investing in the Py3 platform and converting your apps and libs to work on it, but visible projects like Mercurial still don't like it so we decided it wasn't worth it anymore." People don't take time out of their day to make their voices heard unless they're already discontented about something. We shouldn't assume that everyone hates Py3 just because Py3 users are going about their business quietly.
It's interesting that performance wasn't a topic at this rump session as reported; I moved over to Go about a year ago, and while I miss Python's expressivity at least once a week, I'm just not willing to slow down all my programs by 5x.
On the other hand, if Python could double in speed, I'd likely try to rework it into our workflows. Well, maybe. I really dig Go.
But strangely they don't: PyPy has hardly gained traction (though the python2->python3 switch didn't help), and looking at the benchmarks that's more like a 5-7x performance gain.
I suspect that most often, in places where performance matters enough to warrant refactoring a codebase from CPython to PyPy, the bottleneck parts have already been rewritten in C anyway.
The issue I've usually had with it is just small things – maybe a module we depend on doesn't work with PyPy yet, maybe PyPy3 isn't 3.3-compatible yet and can't use "yield from", which we use in our code or which our libraries use.
Dealing with those sort of small issues (and/or waiting for PyPy to fix them), worrying about whether the project I'm currently working on can use PyPy or if I need to use CPython instead, etc makes me stop worrying about trying to use it after a while. In my dev environment at least.
I do really love what the PyPy guys are doing though, and when it works it works damn well. Building it from source is also pretty easy.
Nim is a neat language, but comparisons to Python solely because it has semantic indentation are completely shallow (especially so when you consider that Guido doesn't even think that whitespace sensitivity is an important feature).
If you read the "Why Nim" posts, they make highly salient points for a 'journeyman' or better developer who wishes to be highly productive on a small project; I read the list of 'better than Go' stuff with that hat on, and it's very appealing.
But there's too much rope to hang oneself with in Nim for my use case. Getting a language stripped down just enough that team productivity over years is maximized is a very, very hard thing to do. I think the Go folks have the best take on it right now, and it's run by courteous and responsive grownups who do what they say they'll do. That's a total win in my book.
Add in go fmt, a very good (not without warts) module import system, reasonable testing and a multi processor programming model that's easy to reason about, and it is a very, very good solution for my needs.
1. Speed
2. Language warts (e.g. del, __init__, import *, while: else:)
3. Lack of a modern UI toolkit
4. No native support in Android, iOS, or Browsers worth mentioning.
Right now Python is the perfect prototyping, glue, and modest workload language.
You can make it better for heavy loads by fixing 1. You can make it even more attractive to novice programmers by fixing 2.
But you really have to get to 3 or 4 before it becomes truly attractive and you get mass adoption.
(I made a tracing tool that traces a Python program and prints out all accessed variables; it actually makes use of this by-name lookup feature: http://mosermichael.github.io/cstuff/all/projects/2015/02/24... )
I think that's quite wasteful; fixing variable access so that it goes through some internal index rather than by name would probably be a big improvement, even without changing the global interpreter lock.
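You can see the by-name lookup directly with the dis module (CPython; the function is just an illustration):

    import dis

    counter = 0

    def bump():
        global counter
        counter += 1        # LOAD_GLOBAL / STORE_GLOBAL: dict lookups by name
        snapshot = counter  # STORE_FAST: locals are indexed slots, not names
        return snapshot

    dis.dis(bump)
    # Globals and builtins go through by-name dictionary lookups on every
    # access; locals are resolved to array indices at compile time.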
Excluding one-liner syntax, why does Python actually need a colon to define a function, given its goal of being free of unnecessary syntactic elements? I'm only an intermediate Pythonista, but I'd be interested to know if there was a particular point to the colon.
If I just want to wrap a simple GUI around some functions then going down the Qt route is currently far too 'heavy'.
I do this, and it works perfectly well. Here's a full implementation demonstrating this approach: https://gitlab.com/higan/higan/blob/master/libco/sjlj.c
It's been successfully used on x86, amd64, ppc32, ppc64, mips, arm and sparc in several projects.
However, it still has a good bit of overhead. But you can implement this concept absolutely trivially on any platform for maximum speed. All you need to do is save the non-volatile registers, swap the stack pointer, restore the non-volatile registers from the swapped-in stack, and return from the function. If you haven't realized, one function can reciprocally save and restore these contexts. Here's an x86 implementation, for example:
    co_swap:                ;ecx = new thread, edx = old thread
        mov [edx],esp
        mov esp,[ecx]
        pop eax             ;faster than ret (CPU begins caching new opcodes here)
        mov [edx+4],ebp     ;much faster than push/pop on AMD CPUs
        mov [edx+8],esi
        mov [edx+12],edi
        mov [edx+16],ebx
        mov ebp,[ecx+4]
        mov esi,[ecx+8]
        mov edi,[ecx+12]
        mov ebx,[ecx+16]
        jmp eax
This turns out to be several times faster than abusing setjmp/longjmp. I turned this into the simplest possible library, called libco (public domain or ISC, whichever you prefer). The entire API is four functions, taking 0-2 arguments each: create, delete, active, switch.
The work's already been done for several processors. Plus there are backends for the setjmp trick, Windows Fibers and even the slow-as-snails makecontext.
If Python does decide to go this route, I'd certainly appreciate if the devs could be directed at libco for consideration. It'd save them a lot of trouble making these, and it'd get us some much-needed notoriety so that we could produce more backends and finally have a definitive cothreading library.
https://github.com/python-greenlet/greenlet/tree/master/plat...
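For context, the Python-level API greenlet exposes is about as small; a minimal sketch:

    from greenlet import greenlet

    def child():
        print("b")
        main.switch()            # hand control back to the main greenlet

    main = greenlet.getcurrent()
    g = greenlet(child)

    print("a")
    g.switch()                   # run child() until it switches back
    print("c")                   # output: a, b, c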
That's the downside of these libraries: there are as many libraries as there are people wanting to do this. I've been hoping for a standard to emerge, even if it's not mine. This looks a bit too tied into Python to be general purpose, though.
    #include <stdio.h>
    #include "libco.h"

    cothread_t a, b; //cothread_t is a typedef to void*

    void entrypoint(void);

    int main() {
        a = co_active(); //get a handle to the main thread
        b = co_create(entrypoint, stacksize_in_bytes);
        printf("a");
        co_switch(b);
        printf("c");
        co_delete(b); //no need to delete a
        return 0;
    }

    void entrypoint(void) {
        printf("b");
        co_switch(a);
    }

    //output: "abc"
We've only been using it in emulators so far. It's in MESS, higan, twoMbit and a few others.

Dealing with Scandinavian languages, the Unicode support is the killer feature of Python 3; it just makes everything so much easier. In terms of performance it's fine, and library support is no longer an issue (for us at least): everything we use just works.
In PIP, Python 3 has like 5% uptake...
If you mean that only 5% of the packages on pypi.python.org are Python 3, I would say that sounds a bit low. It's also unimportant for most people, as long as it's the right 5%. There's also a ton of old cruft on pypi.python.org that would count against Python 3 but not really be important to anyone.
Quick check: there are 7633 Python 2 packages in the package index (https://pypi.python.org/pypi?:action=browse&c=527) and 8949 for Python 3 (https://pypi.python.org/pypi?:action=browse&c=533).
The suggestions in this post are mostly changes to the implementation (i.e., make it go faster), not the language itself. While CPython 2.7 and CPython 3.4 (implementations) surely have interesting implementational differences that don't boil down to just language changes, I'm not aware of them.
The language improvements are nice. I know this because I started with Python 3 then switched to Python 2 and missed some of the goodies now and then. But the language improvements are not enough to overcome the breakage of backwards compatibility. Only a vastly improved implementation (that is not backported to Python 2.7) will.
"Where is it appropriate to post a subscriber link?
Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared."
Well, after a month of studying, using, and teaching the language...all I can say is, the conflict between 2 and 3 definitely lives up to the hype :)...Most of the changes make sense to me, and even as a "do-whatever-you-feel-like-aesthetically" Rubyist, I appreciate what Guido did/attempted to do in the clean-up. But things like lambda...there obviously was no easy answer...I love lambdas, but it's so functionally limited and awkward in Python that I also see Guido's point about just removing it from the language (ultimately, he gave up on that)...
But what about the built-in reduce()? Again, it's another function that I instinctively reach for as a Rubyist...and yet it's so awkward in Python that, again, like lambda, maybe it should die? But in this case, Guido halfway-won, and now it's been pushed into the functools package. Mmmkay. And so it is with so many of the 2 to 3 changes at the interface level...as a newbie, it's just mostly amusing since I have no legacy code to port over, but I definitely understand the strife.
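For the record, in 3.x it's only an import away; whether that counts as dying is a matter of taste:

    # Python 3: reduce() moved from builtins to functools.
    from functools import reduce
    import operator

    total = reduce(operator.add, [1, 2, 3, 4], 0)   # 10
    # Though for this particular case, sum([1, 2, 3, 4]) is the idiomatic spelling.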
But the conflict is still hard to avoid as a newbie...many of the most used guides (LPTHW, Codecademy's Python track) are just done in Python 2...LPTHW says right up front to stay the fuck away from 3.x. I don't think Codecademy even bothers to mention what version they're teaching...obviously, beginners don't need to get into the version wars, but as soon as they get past Codecademy and start Googling around, they're going to be in for some surprises.
Hell, the act of Googling is itself affected by the version wars...every time I google for commands/subsections in the official Python docs, the version 2.x docs are always at the top. Sometimes the 3.x docs don't even show up. At least I know that there's a 3.x and how to manually switch to those docs...imagine all the novices who are also Googling for references...it's not hard to think that the cycle of 2.x indoctrination is propped up by the simple fact that 2.x docs/help are always at the top of the Google results.
I've written a few things that are meant to support Python 2 and 3 from the same codebase, and it was a bit of a nightmare finding all of the gotchas. Stupid little stuff like dict.iteritems() being removed in 3 and replaced with dict.items() (which still exists in Python 2 but doesn't do what you expect!) is a big pain to deal with when writing code that has to work with Python 2 and 3.
This page has a lot of good advice, but the fact it's so long is just a testament to how painful the 2 to 3 transition is for people: http://python-future.org/compatible_idioms.html
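The dict case is a good illustration of how subtle it gets; a sketch of the usual workarounds (six is one common helper):

    d = {"a": 1, "b": 2}

    # Python 2: d.iteritems() is lazy, d.items() builds a full list.
    # Python 3: iteritems() is gone and items() returns a lazy view.

    # Option 1: just call items(); correct on both, at the cost of building
    # a throwaway list on Python 2 (usually fine).
    total = 0
    for key, value in d.items():
        total += value

    # Option 2: six.iteritems(d) picks the lazy variant on either version.
    #     from six import iteritems
    #     for key, value in iteritems(d):
    #         ...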
For example, the (epic) Learning Python, 5th Edition by Mark Lutz had to be written for two simultaneous audiences, 2.7 and 3.3. There are all these disclaimers noting where things differ between the two versions, and there is a lot of cognitive overload trying to read a book that can't assume you are on Python 3.
In the best-case world, people would treat this like a Y2K situation and realize that if they don't get with the program and migrate over to Python 3, they'll end up like Perl, with some newcomer that isn't so bipolar charging forward and winning mindshare.
Unfortunately, there are a lot of Python 2 people who are happy with Python 2, and we're in the situation we have today.
Apple is the only company that says screw legacy. Of course users complain, but they just grumble and know to accept it. Apple and their users, as a whole, benefit greatly from the "getting all the wood behind one arrow" strategy.
There's lots of grumbling about Swift but I bet we get a very large developer adoption rate within 24 months.
Fact is, 2to3 is nice, but it doesn't give you any guarantee about its coverage of your code, so you go almost as fast working by hand.
And that lack of guarantees makes working in production very dangerous.
Tried to support 2 and 3 at the same time, but that's just too exhausting and error-prone (one has to test on both Python 2 and Python 3).
Projects with 100% test coverage don't exist, and spare-time projects have even less coverage.
For me unicode was the driver to change. And it paid off. And I think that's the only P3 feature that actually improves expressivity (now I can clearly express unicode strings). The yield stuff, etc. is fine but nothing /that/ impressive.
For performance, forget PyPy; a 5-7x improvement is not enough: you still can't write high-perf code with that. If PyPy were 50x faster than CPython, that would be something.
So basically, after a lot of effort, I'd say write Python 3 code because it helps Python or because you use unicode. Any other reason seems a bit weak to me. And that's sad; I bet on Python 4 years ago and it hasn't evolved much (it has certainly become very stable, which is not fun but damn useful!).
I guess the point of the 2-3 war is precisely that: 2 and 3 are different, but not different enough... So people have a hard time making a choice.
That was a long time ago, when Python's userbase was much smaller. I wonder if Python would be in better shape today had they merged Stackless back then.
https://www.google.com/trends/explore#q=Python%20programming...
Specifically:
- Python 3 forces you to use the C implementation of pickle instead of the pure-Python version. In some multi-thread/multiprocess situations, the C version hits a memory allocation error, Python's memory becomes corrupted, and things go downhill to a crash. The pure-Python version is fine. I submitted a bug report, but nothing will happen unless I come up with a simple test case, which is hard. Meanwhile I found out how to force the Python version, which works, despite Python 3, and am using that.
- PyMySQL (a "drop in replacement" for MySQLdb) originally didn't implement LOAD DATA LOCAL. When it was implemented, it wasn't tested for large data loads. I kept getting random database disconnects, until I figured out that it was trying to send the entire bulk data load as one 16MB MySQL connection packet. This only works if you configure insanely big buffers in your MySQL server. There's no reason to send a packet that big; LOAD DATA LOCAL will use multiple packets when necessary. It was just a lame default.
- HTML parsing uses different packages under Python 3. The HTML5parser/BS4 combination blows up on some bad HTML, usually involving misplaced items that belong in the HEAD section. The HTML5 parser, obeying the HTML5 spec for tolerating bad HTML, tries to add to the tree being produced at points other than after the last item. BS4 is buggy in that area. I wrote a function to check and fix defective BS4 trees, came up with a simple test case, and submitted a bug report. I have a workaround for now.
- Python 3 finally has TLS support in SSL. (That's also been backported to Python 2.7.9). SSL cert checking is now on by default. It doesn't work for certain sites, including "verisign.com". This is because of a complicated interaction between a cross-signed root certificate Verisign created, a feature of OpenSSL, and how the Python "ssl" package calls OpenSSL. It took weeks of work to get that fixed. Because it's a core Python package, it will remain broken until the next release of Python, 3.5, whenever that happens.
- Running FCGI/WSGI programs from Apache requires a different package than with Python 2. There are 11 different packages and versions of packages for doing this. The Python documentation recommended one that hadn't been updated since 2007, and its SVN repository was gone. There are six forks of it on Github, three of them abandoned. I finally found a derivative version from which much of the unnecessary stuff had been stripped out, and it worked.
- Python's installer program, "pip3", doesn't know which packages work under Python 3, and tried to install a version of one of them that only worked with Python 2.5-2.6. You have to know to install "dnspython3", not "dnspython", for example.
These are all bugs that should have been found by now, and would have if Python 3 had a more substantial user base. We're six years into Python 3. I shouldn't be finding beta-version bugs like these at this late date.
Python's Little Tin God's position on third-party library problems is that it's not Python's problem. His fanboys follow along. (Comment on comp.lang.python: "You have found yet another poorly-maintained package which is not at all the responsibility of Python 3. Why are you discussing it as if Python 3 is at fault?") As a result, PyPI (Python's third-party package index) has no quality control. Perl's CPAN has reviews, testing, and hosts the actual packages. Most of Go's key packages are well-exercised within Google and maintained there. PyPI is just a link farm.
That's why Python 3 isn't getting used. It's not a need for new features. It's that Python 3 doesn't work out of the box. Its supporters are in heavy denial about this.
If you can dump Apache, uWSGI and nginx are (imho) a much nicer way to host Python apps.
On the other hand, I didn't find the new features in Python 3 appealing enough to make me fight the above drawbacks.
Last but not least, while I use most of the tools I've developed in Python on a daily basis, they are just that: tools meant to make my life nicer.
The rewrite of the language.
It was decided that if you need to break backwards compatibility anyway, you might as well do so in a big way. In contrast, Python3 breaks backwards compatibility as well, but only offers moderate improvements.
This is Python 2.7, but is based on PyPy (so it's waaay faster), and has requests + gevent + lxml built-in, and adds some small niceties from 3.x (yield from, a, *b = foo()).
My point is that there ought to be a 2.8, and it ought to keep growing.
I sat through a similar talk about (lack of) 3.x adoption at PyConSwe last year.
Who did the people who invented 3.x think their customers were? What did they think people wanted and needed?
AFAIK there exist reference-counting GCs which are performance-equivalent to mark-and-sweep, but these aren't super common outside of academia? What is the state of GC performance in Python currently?
It's opinion presented as fact that Python 3 has low take up.
Without tangible proof I call bullshit.
Namely, Python 3 is sufficiently different that it's a problem for applications that have to support multiple distributions.
If you are doing Software-as-a-Service hosted webapps, fine, you can choose your platform, but if you are shipping software, you usually have to make conscious choices about supporting what the distros have.
And the Linux distros are inconsistent.
This problem sounds solvable by saying "users, install a newer Python", though this is seldom effective -- and long-lived things such as RHEL 5 (yes, still afield, and some folks have to support version 2) ship versions that are less amenable to Python 3 compatibility hacks than newer Pythons.
As a result, this intent to "clean things up", I feel, has massively undercut Python's growth rate. Maybe it's not declining, but there's been what feels like an inflection point.
I suppose looking at download curves for hundreds of long-standing PyPI projects relative to the growth rates of other systems could prove whether this is a thing or not.
Anyway, I do love Python. The problem is not the GIL. Most folks who are making web services get by far with a pre-forking webserver (mod_wsgi, etc) and something like celery for backend jobs. multiprocessing is ok enough for some other cases.
It doesn't matter so much whether Python 3 is attractive, and that's what I mean about a decision point: the confusion gave people a chance to shop around, and some people are trying things in other languages now.
For instance, Go seems misapplied: it has a different expressiveness and domain area. I'm writing a fair amount of Clojure, which also feels a bit more low-level (sometimes, just in places). But I felt compelled to look around.
The crux of the theory is this: changing something significantly gives people the opportunity to ask whether this is something they still want to do.
I still believe Python strikes a great balance between expressiveness and readability, and its suppression of "clever" programming makes it ideal for a lot of problem domains. And it's kind of old enough that people are going to want to look around.
Still, I'm beginning to feel some of the directions taken in 3 are out of touch, just as the resistance to some more expressive language features (crippled lambdas, I vaguely recall) was. This happens when those who write the language don't necessarily use the language, and I'm not sure the (perceived) BDFL approach of "fixing regrets", the way the 2->3 transition happened, looks after the good of the whole. Those changes should have been evolved slowly, keeping things compatible, rather than creating what is essentially a new language.
I'm still pleasantly surprised by how widely deployed Python is relative to the rate at which people talk about it (say, vs Rails). I think a lot of that is because it's NOT complicated, and you don't need to talk about it so much. It's a quiet workhorse.
But I've also started new projects in Python 2, because I've needed them to work everywhere. Python 3 almost has a sort of Perl 6 stigma in my mind: it's available now, but it made a compatibility break that has shaken trust.
Since Python 2 is essentially the deployed standard, there's no real reason for most apps that must be distro installable to work on hybrid support - until the distros move to a version that makes it easier to be compatible, it's more important to support where the users are than risk possible bugs and implementation trouble. Resources are better spent elsewhere.