People are wtf-ing a bit about the positional-only parameters, but I view that as just a consistency change. It's a way to write in pure Python something that was previously only possible to say using the C api.
Digression on the old way's shortcomings: probably the most annoying thing about the old "format" syntax was writing error messages with parameters dynamically formatted in. I've written ugly string literals for verbose, helpful error messages with the old syntax, and it was truly awful. The sheer length of calls to "format" is what screws up your indentation, which then screws up the literals (or forces you to spread them over 3x as many lines as you would otherwise). It was so bad that the format operator was more readable. If `str.dedent` were a thing it would be less annoying thanks to multi-line strings, but even that is just a kludge. A big part of the issue is whitespace/string concatenation, which, I know, can be fixed with an autoformatter [0]. Autoformatters are great for munging literals (and diff reduction/style enforcement), sure, but if you have to munge literals tens of times in a reasonably-written module, there's something very wrong with the feature that's forcing that behavior. So, again: f-strings have saved me a ton of tedium.
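For concreteness, a toy before/after (the message and variable names here are invented for illustration):

```python
expected, got = 3, 5

# The old way: a long .format call fights your indentation, which in turn
# forces the literal to be split and re-joined.
old = ("wrong number of channels: "
       "expected {exp}, got {act}").format(exp=expected, act=got)

# The f-string version keeps the values inline with the text.
new = f"wrong number of channels: expected {expected}, got {got}"

print(old == new)   # True
```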
  print('I do not get executed :)')
  f'{!}'

    File "stefco.py", line 2
      f'{!}'
         ^
  SyntaxError: f-string: empty expression not allowed
This has the pleasing characteristic of eliminating an entire class of bug. :)

That post makes a few things very clear:
* The argument over the feature did not establish an explicit measure of efficacy for the feature. The discussion struggled to even find relevant non-toy code examples.
* The communication over the feature was almost entirely over email, even when it got extremely contentious. There was later some face-to-face talk at the summit.
* Guido stepped down.
I've used assignment expressions in other languages too! Python's version doesn't suffer from the JavaScript problem whereby equality and assignment are just a typo apart in, e.g., the condition of your while loop. Nonetheless, I find that it ranges from marginally beneficial to marginally confusing in practice.
Ergonomically, I see little benefit for the added complexity.
But there's a long-standing trend of adding more and more of these small features to what was quite a clean and small language. It's becoming more complicated, backwards compatibility suffers, the likelihood that your coworker uses some construct you never use increases, and there is more to know about Python.
Like f-strings, they are neat I guess. But we already had both % and .format(). Python is becoming messy.
I doubt this is worth that.
[1] https://www.python.org/dev/peps/pep-0572/#alternative-spelli...
Disagree. In cases where it's useful it can make the code much clearer. Just yesterday I wrote code of the form:
  foos = []
  foo = func(a, b, c, d)
  while foo:
      foos.append(foo)
      foo = func(a, b, c, d)

With the walrus operator, that would just be:

  foos = []
  while foo := func(a, b, c, d):
      foos.append(foo)

Further, I had to pull out 'func' into a function in the first place so I wouldn't have something complicated repeated twice, so it would remove the need for that function as well.

Also, it's times like these I'm really glad docker exists. Trying that out before docker would have been a way bigger drama.
“1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.”
― Douglas Adams, The Salmon of Doubt
My understanding of Python will probably never be quite as good as my understanding of C, but I can live with that.
Anyone making this argument should be prepared to accept that every single criticism they make in their life moving forward can be framed as 'their resistance to change'.
This kind of personalization of specific criticism is disingenuous and political and has usually been used as a PR strategy to push through unpopular decisions. Better to respond to specific criticisms than reach for a generic emotional argument that seeks to delegitimize scrutiny and criticism.
Yep, I entered the Python world with v2. I eventually reconciled myself to 2.7, and have only recently and begrudgingly embraced 3. Being over 35, I must be incredibly open minded on these things.
The walrus operator makes while loops easier to read, write and reason about.
Type annotations were a necessary and IMO delightful addition to the language as people started writing bigger production code bases in Python.
Data classes solve a lot of problems, although with the existence of the attrs library I'm not sure we needed them in the standard library as well.
Async maybe was poorly designed, but I certainly wouldn't complain about its existence in the language.
F strings are %-based interpolation done right, and the sooner the latter are relegated to "backward compatibility only" status the better. They are also more visually consistent with format strings.
Positional-only arguments have always been in the language; now users can actually use this feature without writing C code.
All of the stuff feels very Pythonic to me. Maybe I would have preferred "do/while" instead of the walrus but I'm not going to obsess over one operator.
So what else is there to complain about? Dictionary comprehension? I don't see added complexity here, I see a few specific tools that make the language more expressive, and that you are free to ignore in your own projects if they aren't to your taste.
No, f-strings handle a subset of %-based interpolation. They're nice and convenient but e.g. completely unusable for translatable resources (so is str.format incidentally).
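A sketch of why, using gettext as the canonical translation machinery (no message catalog is installed here, so `_` just returns its argument):

```python
import gettext

_ = gettext.gettext   # identity function when no catalog is installed

name = "Ada"

# %-style (and str.format) keep the untranslated template as a plain string,
# so it can be looked up in a message catalog *before* interpolation:
msg = _("Hello, %s!") % name

# An f-string interpolates immediately, so _(f"Hello, {name}!") would try to
# look up "Hello, Ada!" -- a catalog can't contain every possible name.
print(msg)   # Hello, Ada!
```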
It's all about the culture. And Python culture has been protecting us from abuses for 20 years, while still allowing us to have cool toys.
Besides, in that release (and even the previous one), apart from the walrus operator, which I predict will be used in moderation, I don't see any alien-looking stuff. This kind of evolution speed is quite conservative IMO.
Whatever you do, there will always be people complaining, I guess. After all, I also hear all the time that Python doesn't change fast enough, or lacks some black magic from functional languages.
I think this metric is grossly overestimated. Or your scope for "out there" is considering some smaller subset of python code than what I'm imagining.
I think the evolution of the language is a great thing and I like the idea of the type hints too. But I don't think most folks capitalize on this yet.
happy python programmer since 1.5, currently maintaining a code base in 3.7, happy about 3.8.
It was to allow only certain HTTP verbs on a controller function. A pattern adopted by most Python web frameworks today.
The "pow" example looks more like a case where the C side should be fixed.
In reality, the 2.x releases had a lot of significant changes. Off the top of my head: context managers, a new OOP/multiple inheritance model, division operator changes, and lots of new modules.
It sucks that one's language is on the upgrade treadmill like everything else, but language design is hard, and we keep coming up with new cool things to put in it.
I don't know about Python 3.8, but Python 3.7 is absolutely amazing. It is the result of 2 decades of slogging along, improving bit by bit, and I hope that continues.
Doesn't mean nothing good comes out of them, and if it's simplicity that motivates people then eh, I'll take it, but gosh darn the cycle is a bit grating by now.
The development has been going quite well:
If I wanted a language with "only one way to do it", i'd use Brainfuck. Which, btw, is very easy to learn, well documented, and the same source code runs on many, many platforms.
I do agree that Python is moving further and further away from the only-one-way-to-do-it ethos, but on the other hand, Python has always emphasized practicality over principles.
Some kinds of data can be passed back and forth between processes with near zero overhead (no pickling, sockets, or unpickling).
This significantly improves Python's story for taking advantage of multiple cores.
"multiprocessing.shared_memory — Provides shared memory for direct access across processes"
https://docs.python.org/3.9/library/multiprocessing.shared_m...
And it has the example which "demonstrates a practical use of the SharedMemory class with NumPy arrays, accessing the same numpy.ndarray from two distinct Python shells."
Also, SharedMemory
"Creates a new shared memory block or attaches to an existing shared memory block. Each shared memory block is assigned a unique name. In this way, one process can create a shared memory block with a particular name and a different process can attach to that same shared memory block using that same name.
As a resource for sharing data across processes, shared memory blocks may outlive the original process that created them. When one process no longer needs access to a shared memory block that might still be needed by other processes, the close() method should be called. When a shared memory block is no longer needed by any process, the unlink() method should be called to ensure proper cleanup."
Really nice.
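A minimal sketch of the documented API (the "second process" is simulated in the same interpreter here; in real use the name would be passed to another process):

```python
from multiprocessing import shared_memory

# Create a named block (3.8+), then attach to it by name as another
# process would, and finally clean up per the documented protocol.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

other = shared_memory.SharedMemory(name=shm.name)   # "another process"
data = bytes(other.buf[:5])
print(data)   # b'hello'

other.close()    # each process closes its own handle...
shm.close()
shm.unlink()     # ...and one process unlinks when nobody needs the block
```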
With mmap you have to specify a file name (actually a file number), but so long as you set the length to zero before you close it there's no reason any data would get written to disk. On Unix you can even unlink the file before you start writing it if you wish, or create it with the tempfile module and never give it a file name at all (although this makes it harder to open in other processes as they can't then just mmap by file name). The mmap object satisfies the buffer protocol so you can create numpy arrays that directly reference the bytes in it. The memory-mapped data can be shared between processes regardless of whether they use the multiprocessing module or even whether they're all written in Python.
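A minimal sketch of that approach, assuming NumPy is installed (file handling kept deliberately simple):

```python
import mmap
import os
import tempfile

import numpy as np  # assumption: numpy is available

# Back a NumPy array with a memory-mapped temp file. Any process that maps
# the same file sees the same bytes -- no pickling involved.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 8 * 4)                    # room for 8 float32 values
mm = mmap.mmap(fd, 8 * 4)                  # mmap satisfies the buffer protocol
arr = np.frombuffer(mm, dtype=np.float32)  # zero-copy view of the mapping
arr[:] = np.arange(8)                      # writes go straight to the mapping
total = float(arr.sum())
print(total)   # 28.0

del arr                                    # release buffer refs before closing
mm.close()
os.close(fd)
os.unlink(path)
```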
I thought that when you use multiprocessing in Python, a new process gets forked, and while each new process has separate virtual memory, that virtual memory points to the same physical location until the process tries to write to it (i.e. copy-on-write)?
Empty space in internal pages gets used when allocating new objects, reference counts get updated, GC flags get flipped, etc., and it just takes one write in each 4kb page to trigger a whole page copy.
It doesn't take long before a busy web worker etc will cause a huge chunk of the memory to be copied into the child.
There are definitely ways to make it much more effective like this work by Instagram that went into Python 3.7: https://instagram-engineering.com/copy-on-write-friendly-pyt...
Sharing post-fork data is where it gets interesting.
E.g.: live settings, cached values, white/black lists, etc.
But still copying?
If not, then how does it interoperate with garbage collection?
So it's not for containing normal Python dicts, strings etc that are individually tracked by GC.
https://docs.python.org/3.8/library/multiprocessing.shared_m...
Would this work with e.g. large NumPy arrays?
(and this is Raymond Hettinger himself, wow)
You may continue working on the standard library, optimizing, etc. Just no new language features.
In my opinion, someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.
If new language features get added over time, eventually you get to the case where there are obscure features everyone has to look up every time they use them.
Lisps avoid this by building abstractions from the same material as the language itself. Basically no other language family has this property, though JavaScript and Kotlin, via different mechanisms, achieve something similar.
So has John von Neumann's 29 state cellular automata!
https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton
https://en.wikipedia.org/wiki/Von_Neumann_universal_construc...
(Actually there was a non-standard extension developed in 1995 to make signal crossing and other things easier, but other than that, it's a pretty stable programming language.)
>Renato Nobili and Umberto Pesavento published the first fully implemented self-reproducing cellular automaton in 1995, nearly fifty years after von Neumann's work. They used a 32-state cellular automaton instead of von Neumann's original 29-state specification, extending it to allow for easier signal-crossing, explicit memory function and a more compact design. They also published an implementation of a general constructor within the original 29-state CA but not one capable of complete replication - the configuration cannot duplicate its tape, nor can it trigger its offspring; the configuration can only construct.
All of which are ones that I once thought were quite enjoyable to work in, and still think are well worth taking some time to learn. But I submit that the fact that none of them have really stood the test of time is, at the very least, highly suggestive. Perhaps we don't yet know all there is to know about what kinds of programming language constructs provide the best tooling for writing clean, readable, maintainable code, and languages that want to try and remain relevant will have to change with the times. Even Fortran gets an update every 5-10 years.
I also submit that, when you've got a multi-statement idiom that happens just all the time, there is value in pushing it into the language. That can actually be a bulwark against TMTOWTDI, because you've taken an idiom that everyone wants to put their own special spin on, or that they can occasionally goof up on, and turned it into something that the compiler can help you with. Java's try-with-resources is a great example of this, as are C#'s auto-properties. Both took a big swath of common bugs and virtually eliminated them from the codebases of people who were willing to adopt a new feature.
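Python's own instance of this pattern is the with statement, which collapsed a common multi-statement idiom the same way (the temp-file setup below is purely illustrative):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

# Old idiom -- forget the finally and you leak the handle:
f = os.fdopen(fd, "w")
try:
    f.write("hi")
finally:
    f.close()

# Language-supported idiom: cleanup is guaranteed by the construct itself.
with open(path) as f:
    content = f.read()

print(content)   # hi
os.unlink(path)
```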
That said, it is nice that I can take a Prolog text from the 1980s or 1990s and find that almost all of the code still works, with minor or no modifications...
From the v1.9 release just a few weeks ago: https://elixir-lang.org/blog/2019/06/24/elixir-v1-9-0-releas...
> As mentioned earlier, releases was the last planned feature for Elixir. We don’t have any major user-facing feature in the works nor planned. I know for certain some will consider this fact the most exciting part of this announcement!
> Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes and improvements.
Why should this be true for every language? Certainly we should have languages like this. But not every language needs to be like this.
Python, judged against JS, is almost sedate in its evolution.
It would be nice if a combination of language, libraries, and coding orthodoxy remained stable for more than a few years, but that's just not the technology landscape in which we work. Thanks, Internet.
Python was explicitly designed, and for the vast majority of its nearly 30-year history it had a dedicated BDFL functioning as a standards body.
JS, on the other hand, was hacked together in a week in the mid-90s and then the baseline implementation that could be relied on was emergent behavior at best, anarchy at worst for 15 years.
As soon as people start using a language, they see ways of improving it.
It isn't unlike spoken languages. Go learn Esperanto if you want to learn something that doesn't change.
How long has the code which was transitioned to python lasted?
A long time. 2to3 was good for ~90% of my code, at least
I write a lot of python for astrophysics. It has plenty of shortcomings, and much of what's written will not be useful 10 years from now due to changing APIs, architectures, etc., but that's partly by design: most of the problems I work on really are not suited to hyper-optimized domain-specific languages like FORTRAN. We're actively figuring out what works best in the space, and shortcomings of python be damned, it's reasonably expressive while being adequately stable.
C/FORTRAN stability sounds fine and good until you want to solve a non-mathematical problem with your code or extend the old code in some non-trivial way. Humans haven't changed mathematical notations in centuries (since they've mostly proven efficient for their problem space), but even those don't always work well in adjacent math topics. The bra-ket notation of quantum mechanics, <a|b>, was a nice shorthand for representing quantum states and their linear products; Feynman diagrams are laughably simple pictograms of horrid integrals. I would say that those changes in notation reflected exciting developments that turned out to persist; so it is with programming languages, where notations/syntaxes that capture the problem space well become persistent features of future languages.

Now, that doesn't mean you need to code in an "experimental" language, but if a new-ish problem hasn't been addressed well in more stable languages, you're probably better off going where the language/library devs are trying to address it. If you want your code to run in 40 years, use C/FORTRAN and write incremental improvements to fundamental algorithm implementations. If you want to solve problems right now that those langs are ill-suited to, though, then who cares how long the language specs (or your own code) last as long as they're stable enough to minimize breaking changes/maintenance?

This applies to every non-ossified language: the hyper-long-term survival of the code is not the metric you should use (in most cases) when deciding how to write your code.
My point is just that short code lifetimes can be perfectly fine; they can even be markers of extreme innovation. This applies to fast-changing stuff like Julia and Rust (which I don't use for work because they're changing too quickly, and maintenance burdens are hence too high). But some of their innovative features will stand the test of time, and I'll either end up using them in future versions of older languages, or I'll end up using the exciting new languages when they've matured a bit.
One of the takeaways is that most languages and their features converge to a point where each language contains all the features of the other languages. C++, Java, and C# are prime examples. At the same time, complexity increases.
Go is different because of its simplicity-first rule. It eases the burden on the programmer and on the maintainer. I think Python would definitely profit from such a mindset.
"Understanding" what each individual line means is very different from understanding the code. There are always higher level concepts you need to recognize, and it's often better for languages to support those concepts directly rather than requiring developers to constantly reimplement them. Consider a Java class where you have to check dozens of lines of accessors and equals and hashCode to verify that it's an immutable value object, compared to "data class" in Kotlin or @dataclass in Python.
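A minimal sketch of the Python half of that comparison (the class here is invented for illustration):

```python
from dataclasses import dataclass

# One decorator generates the __init__, __repr__ and __eq__ you'd otherwise
# have to read and verify by hand, line by line.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

p, q = Point(1, 2), Point(1, 2)
print(p == q)   # True -- value equality, generated for us
print(p)        # Point(x=1, y=2) -- readable repr, also generated
```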
Also, the Common Lisp spec hasn't changed since the 90s, and it's still useful as a "quick and dirty" language, with only a little basic knowledge required. But the "basic feature set" can do everything, so "understand any code" is not really respected. Maybe Clojure is easier to understand (it also has a more limited base feature set, with no CLOS).
Edit: I actually forgot about the split between LuaJIT (which hasn’t changed since Lua 5.1), and the PUC Lua implementation, which has continued to evolve. I was thinking of the LuaJIT version.
I was really happy, in some ways, when Python 2 was announced as getting no new releases and Python 3 wasn't ready, because it allowed a kind of unification of everyone on Python 2.7.
Now we're back on the treadmill of chasing the latest and greatest. I was kind of annoyed when I found I couldn't run Black to format my code because it required a slightly newer Python than I had. But... f strings and walrus are kind of worth it.
Though to me that's like saying, "I want this river to stop flowing" or "I'd prefer if the seasons didn't change."
When will this talking point die? It's not "ongoing". There's an overwhelming majority who have adopted Python 3 and a small population of laggards.
Some of this stuff seems to me like it's opening the doors for some antipatterns that I'm consistently frustrated about when working with Perl code (that I didn't write myself). I had always been quite happy about the fact that Python didn't have language features to blur the lines between what's code vs what's string literals and what's a statement vs what's an expression.
It kind of goes to the question: When is a language "finished"?
I don't think I've come across any f-string abuse in the wild so far, and my tentative impression is that there's a few patterns that are improved by assignment expressions and little temptation to use them for evil.
It helps that the iteration protocol is deeply ingrained in the language. A lot of code that could use assignment expressions in principle already has a for loop as the equally compact established idiom.
I'm not familiar much with Python, beyond a little I wrote in my linear algebra class. How much does the statement/literal distinction matter to readability? What does that do for the language?
The first part of the statement (at least one obvious way to do it) goes to gaining a lot of expressive power from having learned only a subset of the language specification corresponding to the most important concepts. So you invest only a small amount of time in wrapping your head around only the most important/basic language concepts and immediately gain the power that you can take any thought and express it in the language and end up not just with some way of doing it, but with the right/preferred way of doing it.
The second part of the statement (at most one obvious way to do it) makes it easy to induce the principles behind the language from reading the code. If you take a problem like "iterate through a list of strings, and print each one", and it always always always takes shape in code by writing "for line in lst: print( line )", it means that, if it's an important pattern, a language learner will get exposed to this pattern early and often when they start working with the language, so has a chance to quickly induce what the concept is and easily/quickly memorize it due to all the repetition. -- Perl shows how not to do it, where there are about a dozen ways of doing this that all end up capable of being expressed in a line or two. -- Therefore, trying to learn Perl by working with a codebase that a dozen people have had their hands on, each one preferring a different variation, makes it difficult to learn the language, because you will now need to know all 12 variations to be able to read Perl reliably, and you will only see each one 1/12th as often, making it harder to memorize.
I obviously don't want that. I don't think anybody wants that. But I also don't think that's going to happen as a result of the recent changes in the language. If anything, I feel like the average code quality in the wild has gone up.
never understood the need for this. why do you even need statements?
if there's one thing that annoys me in python it's that it has statements. worst programming language feature ever.
That said, I think some things have unquestionably gotten more "Pythonic" with time, and the := operator is one of those. In contrast, this early Python feature (mentioned in an article [1] linked in the main one) strikes me as almost comically unfriendly to new programmers:
> Python vowed to solve [the problem of accidentally assigning instead of comparing variables] in a different way. The original Python had a single "=" for both assignment and equality testing, as Tim Peters recently reminded him, but it used a different syntactic distinction to ensure that the C problem could not occur.
If you're just learning to program and know nothing about the distinction between an expression and a statement, this is about as confusing as shell expansion (another context-dependent syntax). It's way too clever to be Pythonic. The new syntax, though it adds an extra symbol to learn, is at least 100% explicit.
I'll add that := fixes something I truly hate: the lack of `do until` in Python, which strikes me as deeply un-Pythonic. Am I supposed to break out of `while True`? Am I supposed to set the variable before and at the tail of the loop (a great way to add subtle typos that will cause errors)? I think it also introduces a slippery slope to be encouraged to repeat yourself: if assigning the loop variable happens twice, you might decide to do something funny the 2:Nth time to avoid writing another loop, and that subtlety in loop variable assignment can be very easy to miss when reading code. There is no general solution I've seen to this prior to :=. Now, you can write something like `while line := f.readline()` and avoid repetition. I'm very happy to see this.
[0] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
[1] https://lwn.net/Articles/757713/
[edit] fixed typos
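The `while line := f.readline()` shape from above, runnable with StringIO standing in for a real file (requires 3.8 for the walrus operator):

```python
import io

# No priming read before the loop, no `while True: ... break`:
f = io.StringIO("x\ny\n")
out = []
while line := f.readline():
    out.append(line.strip())
print(out)   # ['x', 'y']
```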
  for x in iter(f.readline, "")

Or if you don't know what readline will return, you can wrap it in your own lambda:

  for x in iter(lambda: f.readline() or None, None):

There is a lot you can do with iter to write the kind of loops you want, but it's not well known for some reason. It's a very basic part of the language people seem to overlook. Walrus does however let you write the slightly more useful:

  while predicate(x := whatever()):

Which doesn't decompose easily into iter form.

I will say, though, that I was not comfortable using iterators when I first learned python; walrus strikes me as easier to grok for a novice (one of the ostensible Python target demographics) than iter. I'll bet this is why this simple form is not idiomatic (though you're right, it should be).
This is relevant to what I've been doing in OpenCV with reading frames from videos! In tutorial examples on the web, you'll see exactly the sort of pattern that's outlined in the PEP 572 article.
>line = f.readline()
>while line:
> ... # process line
> line = f.readline()
Just, replace readline() with readframe() and the like. So many off-by-one errors figuring out when exactly to break.
> for line in iter(f.readline, ''):
> ... # process line
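That two-argument iter(callable, sentinel) pattern in runnable form, with StringIO standing in for a real file:

```python
import io

# f.readline is called repeatedly until it returns the sentinel "".
f = io.StringIO("a\nb\nc\n")
lines = [line.strip() for line in iter(f.readline, "")]
print(lines)   # ['a', 'b', 'c']
```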
How isn't it entirely obvious? := is the assignment operator in tons of languages, and there's no reason not to have assignment be an expression (as is also the case in many languages).
It is? Which ones? Other than Go, I can not think of a single language that has ":=" as an operator. Java does not, JavaScript does not, C/C++ do not, Ruby does not, I don't think PHP does, Erlang/Elixir do not, Rust does not... (I could be wrong on these, but I've personally never seen it in any of these languages and I can't find any mention of it in these languages' docs).
I tried looking around the internet at various popular programming languages and the only ones I could find that use ":=" are: Pascal, Haskell (but it's used for something else than what Python uses it for), Perl (also used for something else), and Scala (but in Scala it isn't officially documented and doesn't have an 'official' use case).
I don't have a strong opinion about ":=" in Python but I do agree that it's unintuitive and thus not very "Pythonic".
All in all the debate has been heated and long, but it has been decided that the python community will use it intelligently and rarely, but that when it matters, it can help a lot.
I'm against this feature, while I was pro f-string. However, I'm not too worried about misuse and cultural shift, because I've seen 15 years of this show going on, and I'm confident it's going to be tagged as "risky, use it knowing the cost" by everybody by the time 3.8 gets mainstream.
Looking at the module changes, I think my top pick is the changes to the `math` module:
> Added new function math.dist() for computing Euclidean distance between two points.
> Added new function, math.prod(), as analogous function to sum() that returns the product of a ‘start’ value (default: 1) times an iterable of numbers.
> Added new function math.isqrt() for computing integer square roots.
All 3 are super useful "batteries" to have included.
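A quick illustration of all three:

```python
import math

d = math.dist((0, 0), (3, 4))   # Euclidean distance -> 5.0
p = math.prod([1, 2, 3, 4])     # like sum(), but multiplying -> 24
r = math.isqrt(17)              # floor of the exact integer sqrt -> 4
print(d, p, r)
```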
I think that the most important thing Python can do in each release is to improve performance, incrementally.
IMHO, the usefulness of this new operator outweighs the slight learning curve required to get past the awkwardness you will experience when you are first acquainted to it.
Here is that talk:
https://www.python.org/dev/peps/pep-0589/
I know it's almost always better to use objects for this, but tons of code still uses dictionaries as pseudo-objects. This should make bug hunting a lot easier.
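A minimal sketch of what PEP 589's TypedDict enables (3.8+; the Movie example follows the PEP's own style):

```python
from typing import TypedDict

# A static checker can now verify the keys and value types of dicts
# used as lightweight records.
class Movie(TypedDict):
    title: str
    year: int

m: Movie = {"title": "Blade Runner", "year": 1982}
# A checker flags typos like m["yaer"] or a str passed for "year";
# at runtime m is still an ordinary dict.
print(m["year"])   # 1982
```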
I could see with c++ that between 2003 and 2014 a fair few underlying machine things were changing and that needed addressing in the language.
But Python is not quite as close to the machine, and I don't see how something like the walrus is helping much. If anything it seems like you'd scratch your head when you came across it. And for me at least one of the main attractions of python is you're hardly ever surprised by anything, things that are there do what you guessed, even if you hadn't heard of them. Function decorators for instance, you might never have seen one but when you did you knew what it was for.
Same with the debug strings. That seems to be a special case of printing a string, why not leave it at that? I'm guessing a lot of people don't ever read a comprehensive python guide, what are they going to do when they see that?
My guess would be "run it and see what it does".
It would be great if there was more momentum on this again, as it would be helpful in all sorts of places.
Like https://github.com/Tygs/ayo
It's not as good as having it in the stdlib, because people can still call ensure_future and not await it, but it's a huge improvement and completely compatible with any asyncio code.
When I was into Python, I liked it because it was a tighter, more to the basics language. Not having 4 ways to format strings and so forth. I don't think Python can defeat Java by becoming Java. It'll lose there due to multiple disadvantages. The way Python "wins" (as much as it could at least), is focusing on "less is more". They abandoned that a while ago.
My vision of a language like Python would be only 1-way to do things, and in the event someone wants to add a 2nd way, a vote is taken. The syntax is changed, and the old bytecode interpreter handles old scripts, and scripts written with the latest interpreter's bytecode only allows the new syntax. For me that's the joy of Python.
I think a lot of people wanted Python's original vision, "one way to do things". If I want feature soup, I'll use what I program in daily. Which I do want feature soup by the way, I just have no need to replace it with another "feature soup" language like Python turned into because it's inferior on technical and for me, stylistic levels.
By that standard, the walrus operator is not only acceptable but essential. Right now there are at least 3 ways to process data from a non-iterator:
  # 1: loop condition obscures what you're actually testing
  while True:
      data = read_data()
      if not data:
          break
      process(data)

  # 2: 7 lines and a stray variable
  done = False
  while not done:
      data = read_data()
      if data:
          process(data)
      else:
          done = True

  # 3: duplicated read_data call
  data = read_data()
  while data:
      process(data)
      data = read_data()

There are too many options here, and it's annoying for readers to have to parse the code and determine its actual purpose. Clearly we need to replace all of those with:

  while (data := read_data()):
      process(data)
Yes, I'm being a bit snarky, but the point is that there is never just one way to do something. That's why the Zen of Python specifically says one "obvious" way, and the walrus operator creates an obvious way in several scenarios where none exists today.

Also, this motto should be interpreted in the appropriate historical context: as taking a position in relation to that of Perl, which was dominant when Python was gaining popularity and had the motto "there's more than one way to do it".
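Snark aside, the walrus version really is runnable as-is. Here's a minimal sketch, with a stand-in read_data that drains an io.BytesIO in fixed-size chunks (the stream and chunk size are mine, purely for illustration):

```python
import io

stream = io.BytesIO(b"abcdefgh")

def read_data(chunk_size=3):
    # Returns b"" (falsy) at end of stream, ending the loop
    return stream.read(chunk_size)

chunks = []
while data := read_data():
    chunks.append(data)

print(chunks)  # [b'abc', b'def', b'gh']
```

The assignment and the truthiness test happen in one place, which is exactly the pattern the three variants above were contorting around.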
Just recently 'Declined Proposal: A built-in Go error check function, “try”' https://news.ycombinator.com/item?id=20454966 made the front page, explaining how a controversial potential Go feature was being declined early.
Python on the other hand, went ahead with what seems to be a proposal at least as controversial as 'try' in Go.
Potential vulnerabilities aside, I got bitten by a migration issue back in the 2.2 to 2.4 transition where some built-in types changed how they did their __setstate__ and __getstate__ (iirc), and that caused objects pickled under 2.4 to not unpickle correctly under 2.2, or something like that. After that I never wanted to use pickle in production again.
There's a bunch of changes in the official "what's new" doc that I think are more interesting:
https://docs.python.org/3.8/whatsnew/3.8.html
* Run-time audit hooks, to see if your modules are making network requests, etc.
https://www.python.org/dev/peps/pep-0578/
https://tirkarthi.github.io/programming/2019/05/23/pep-578-o...
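For a taste of PEP 578, here's a minimal hook sketch; 'compile' is one of the standard audit events raised by CPython, and note that hooks cannot be removed once installed:

```python
import sys

events = []

def audit_hook(event, args):
    # Called for every runtime audit event; real hooks might
    # log or block events like 'open' or 'socket.connect'
    events.append(event)

sys.addaudithook(audit_hook)

compile("1 + 1", "<demo>", "eval")  # raises the 'compile' audit event
print("compile" in events)  # True
```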
* multiprocessing SharedMemory for fast data sharing between processes
https://docs.python.org/3.8/library/multiprocessing.shared_m...
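A minimal sketch of the new API (the variable names are mine; in real use the second handle would typically live in another process, attaching by the block's name):

```python
from multiprocessing import shared_memory

# Create a 16-byte shared block; other processes can attach by name
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second handle attaching to the same block by name
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:5])
print(data)  # b'hello'

other.close()
shm.close()
shm.unlink()  # release the underlying block
```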
* Duck-typing for the static annotation checkers
https://www.python.org/dev/peps/pep-0544/
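That's typing.Protocol (PEP 544): a class satisfies a protocol by shape alone, no inheritance required. A small sketch with a hypothetical Quacks protocol:

```python
from typing import Protocol

class Quacks(Protocol):
    def quack(self) -> str: ...

class Duck:
    # No relation to Quacks, but structurally compatible
    def quack(self) -> str:
        return "quack"

def noise(d: Quacks) -> str:
    return d.quack()

print(noise(Duck()))  # quack
```

A static checker accepts Duck wherever Quacks is expected, purely by matching the method signature.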
* Literal checking for the static annotation checkers. ie: It's not enough to check that you're passing a string for the mode in open(), you want to check that it's 'r' or 'w', etc.
https://www.python.org/dev/peps/pep-0586/
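A small sketch of typing.Literal; the Mode alias and open_mode function here are hypothetical, and the constraint is only enforced by a checker like mypy, not at runtime:

```python
from typing import Literal

Mode = Literal["r", "w"]

def open_mode(mode: Mode) -> str:
    # mypy would flag open_mode("x") at analysis time;
    # at runtime the annotation is not enforced
    return f"opened with {mode}"

print(open_mode("r"))  # opened with r
```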
* The compiler now produces a SyntaxWarning when identity checks (is and is not) are used with certain types of literals (e.g. strings, ints). These can often work by accident in CPython, but are not guaranteed by the language spec. The warning advises users to use equality tests (== and !=) instead.
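For illustration, a short sketch of why the warning exists: == compares values, while is compares object identity, and identity for int/str literals is a CPython implementation detail:

```python
a = 1000
b = 10 ** 3

# Equality compares values and is always well-defined here
print(a == b)  # True

# Identity depends on CPython's caching/constant-folding; 'a is b'
# may or may not be True, which is exactly why 3.8 warns when 'is'
# is used directly against such literals
print(a is b)
```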
* A bunch of speed and memory optimizations:
- "Sped-up field lookups in collections.namedtuple(). They are now more than two times faster, making them the fastest form of instance variable lookup in Python."
- "The list constructor does not overallocate the internal item buffer if the input iterable has a known length (the input implements __len__). This makes the created list 12% smaller on average."
- "Doubled the speed of class variable writes."
- "Reduced an overhead of converting arguments passed to many builtin functions and methods. This sped up calling some simple builtin functions and methods up to 20–50%."
    reductor = dispatch_table.get(cls)
    if reductor:
        rv = reductor(x)
    else:
        reductor = getattr(x, "__reduce_ex__", None)
        if reductor:
            rv = reductor(4)
        else:
            reductor = getattr(x, "__reduce__", None)
            if reductor:
                rv = reductor()
            else:
                raise Error(
                    "un(deep)copyable object of type %s" % cls)

Becomes:

    if reductor := dispatch_table.get(cls):
        rv = reductor(x)
    elif reductor := getattr(x, "__reduce_ex__", None):
        rv = reductor(4)
    elif reductor := getattr(x, "__reduce__", None):
        rv = reductor()
    else:
        raise Error("un(deep)copyable object of type %s" % cls)

    m = re.match(p1, line)
    if m:
        return m.group(1)
    m = re.match(p2, line)
    if m:
        return m.group(2)
    m = re.match(p3, line)
    ...
With walrus:

    if m := re.match(p1, line):
        return m.group(1)
    elif m := re.match(p2, line):
        return m.group(2)
    elif m := re.match(p3, line):
        ...
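Filled out into a runnable form, with hypothetical patterns standing in for p1/p2/p3:

```python
import re

# Illustrative patterns; the originals aren't given in the comment above
p1 = r"name=(\w+)"
p2 = r"id=(\d+), (\w+)"
p3 = r"(\w+)"

def extract(line):
    # Each walrus both tests for a match and binds it for the body
    if m := re.match(p1, line):
        return m.group(1)
    elif m := re.match(p2, line):
        return m.group(2)
    elif m := re.match(p3, line):
        return m.group(1)
    return None

print(extract("name=alice"))  # alice
print(extract("id=42, bob"))  # bob
```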
The example would have been better if it didn't have the return, but just a value assignment or a function call.

Unchecked type annotations remain the worst addition since 3.0. Actual typing might be useful; it allows optimizations and checking. But something that's mostly a comment isn't that helpful.
Ugh, how did this get approved? It's such a bizarre use case, and debugging by print should be discouraged anyway. Why not something like debug_print(foo, bar) instead (because foo and bar are real variables, not strings)?
Also, it's part of the format string and not a special print function so that it can be used for logs and other output as well, not just the console.
Try to read

    def fun(a, b, /, c, d, *, e, f):

or

    print(f'{now=} {now=!s}')

and guess what it does before actually reading the article.
Worse, the rationales of the PEPs are weak, presenting improvements in "performance" or enforcement of APIs because of low-level concerns like C.
Back when I was 18 years old, Python was high level, the rules were simple with only one way of doing things, and performance wasn't a concern, because you would just use the right language for the right task. There was no enforcement and you could always hack a library to your liking. Python now is getting closer to what Perl looked like to me 10 years ago, trying to optimize stuff it shouldn't.
It's still marked as a 3.8 target
Specifically: "There should be one -- and preferably only one -- obvious way to do it."
If this was any other language, the addition would be welcome, but I feel that the walrus operator fundamentally disagrees with what python is about.
It's not about terseness and cleverness, it's about being clear, and having one way to do things (Unless you are Dutch).
The abbreviated f-string syntax looks weird and kinda wrong to me. But then I'm not even sure I've got comfortable yet with the object field initialization shortcuts in Javascript and Rust (where you also get to omit stuff to avoid repeating yourself).
F-strings are pretty awesome. I'm coming from a JavaScript and partly Java background. JavaScript's string concatenation can become too complex, and I have difficulty with large strings.
>Python 3.8 programmers will be able to do: print(f'{foo=} {bar=}')
Pretty cool way to help with debugging. There are so many times, including today, I need to print or log some debug string.
    "Debug var1 " + var1 + " debug var2 " + var2

...and so on. Forgot a space again.
console.log({var1,var2,var3});
And the logged object will be created with the variables' contents as values and the variable names as keys, so it will get logged neatly like
{var1: "this is var1", var2: 2, var3: "3"}
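The Python 3.8 equivalent of that console.log trick is the f-string '=' specifier, which echoes the expression text along with its repr'd value:

```python
var1 = "this is var1"
var2 = 2

# The '=' after the expression echoes both the name and the value
msg = f"{var1=} {var2=}"
print(msg)  # var1='this is var1' var2=2
```

No hand-built "Debug var1 " + str(var1) strings, and no forgotten spaces.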
More compact code at the cost of higher learning curve.
It violates the philosophies of Python and UNIX where one function, or one line, should preferably only do one thing, and do it well.
I get the idea behind the :=, but I do think it's an unnecessary addition to Python.
A lot of folks see Go as a Python successor which surprises me because I don't think the languages favor the same things at all. Maybe my perspective is weird.
Python never had that philosophy... You might be confusing it with "there should be one, and preferably only one, obvious way to do it".
This is slated for 3.9: https://www.python.org/dev/peps/pep-0554/
(Yeah I know, ; is a statement separator, not a statement terminator in Pascal.)
As long as you're being just like Pascal, did you know Python supported Pascal-like "BEGIN" and "END" statements? You just have to prefix them with the "#" character (and indent the code inside them correctly, of course). ;)
    if x < 10:  # BEGIN
        print("foo")
    # END

FWIW, I agree with the sentiment; I use := for assignment in my language precisely because that's the correct symbol. But even there, my grammar accepts = as assignment as well, because I type it from habit.
Generally, library authors won't be able to use it if they want to support many versions; same as with f-strings.
    if val = expr():

The difference between

    if x = y:

and

    if x == y:

is very small, and easy to ignore, so insisting on

    if x := y:

makes it very clear that what's happening won't be mistaken for comparison at a quick glance.

[0]: https://www.python.org/dev/peps/pep-0572/#why-not-just-turn-...
The difference between = and == in an if causes many bugs in other languages. Using := for assignment in an expression, instead of =, means that you can't simply make a typo and end up with a bug.
= (existing) is statement assignment
== (existing) is expression equality
:= (new) is expression assignment
Also fix the GIL.
    pr<TAB> --> print(" ")
    #                  ^ cursor

Really, I don't like anything that tries to force a future developer into using your code the way you expect them to.
    def my_format(fmt, *args, **kwargs):
        ...
        fmt.format(*args, **kwargs)

suffers from a bug if you want to pass fmt as a keyword argument (e.g. `my_format('{fmt}', fmt='int')`). With positional-only arguments that goes away.

You could always force developers into using your code the way you expect by parsing args/kwargs yourself, so it's not like this really changes anything about the "restrictiveness" of the language.
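A runnable sketch of the fix: my_format is the hypothetical helper from above, now with fmt declared positional-only via the 3.8 `/` syntax, so 'fmt' becomes a perfectly ordinary keyword for the template:

```python
def my_format(fmt, /, *args, **kwargs):
    # fmt is positional-only, so callers may freely pass a
    # keyword argument also named 'fmt'; it lands in kwargs
    return fmt.format(*args, **kwargs)

print(my_format("{fmt}:{n}", fmt="int", n=3))  # int:3
```

Without the `/`, the same call would raise "got multiple values for argument 'fmt'".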
If you run `help(pow)` as early as Python 3.5 it lists the signature as `pow(x, y, z=None, /)`. The first time I saw that `/` I was pretty confused, and it didn't help that trying to define a function that way gave a syntax error. It was this weird thing that only C functions could have. It's still not obvious what it does, but at least the signature parses, which is a small win.
Another thing it's good for is certain nasty patterns with keyword arguments.
Take `dict.update`. You can give it a mapping as its first argument, or you can give it keyword arguments to update string keys, or you can do both.
If you wanted to reimplement it, you might naively write:
    def update(self, mapping=None, **kwargs):
        ...

But this is wrong. If you run `d.update(mapping=3)` you won't update the 'mapping' key, you'll try to use `3` as the mapping.

If you want to write it in pure Python < 3.8, you have to do something like this:
    def update(*args, **kwargs):
        if len(args) > 2:
            raise TypeError
        self = args[0]
        mapping = None
        if len(args) == 2:
            mapping = args[1]
        ...

That's awful.

Arguably you shouldn't be using keyword arguments like this in the first place. But they're already used like this in the core language, so it's too late for that. Might as well let people write this:
    def update(self, mapping=None, /, **kwargs):
        ...

Although, that being said, I always really liked Perl.
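As a runnable sketch of why the `/` matters here (a toy update over a plain dict, not dict's actual implementation):

```python
def update(d, mapping=None, /, **kwargs):
    # 'mapping' is positional-only: a caller writing
    # update(d, mapping=3) now sets the key 'mapping' via kwargs
    # instead of colliding with the positional parameter
    if mapping is not None:
        for k, v in mapping.items():
            d[k] = v
    for k, v in kwargs.items():
        d[k] = v

d = {}
update(d, {"a": 1}, b=2)
update(d, mapping=3)  # sets the key 'mapping', no TypeError
print(d)  # {'a': 1, 'b': 2, 'mapping': 3}
```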
To be clear, that's not a new feature in 3.8.