Here is an implementation of a memoize decorator in Python that supports all inputs [1]. You'd have to modify it to strip out the Pylons framework stuff.
[0] https://en.wikipedia.org/wiki/Memoization
[1] https://github.com/reddit-archive/reddit/blob/master/r2/r2/l...
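Roughly the idea behind that decorator, minus the framework bits (a sketch, not the reddit code itself; names are mine): pickling the arguments gives you a hashable key, so even unhashable inputs like lists and dicts can be memoized, which plain `lru_cache` can't do.

```python
import functools
import pickle

def memoize(func):
    """Cache results keyed on a pickle of the arguments, so unhashable
    inputs such as lists and dicts are supported (unlike lru_cache)."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # pickle gives us a hashable (bytes) key even for unhashable args
        key = pickle.dumps((args, sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper

@memoize
def total(items):
    return sum(items)
```

`total([1, 2, 3])` works here even though a list can't be a dict key directly.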
It seems reasonable to me to describe an optimization by identifying where it can be applied. If someone wrote a blog post titled "optimizing multiplication by 2" and went on to describe changing "x * 2" into "x + x", there would be nothing unusual about that. It would no longer be multiplication, but we wouldn't necessarily expect it to be.
I agree it can be interpreted another way in this case, though. So while I maintain it is not wrong, I admit it is ambiguous. Someone could have used the same title had they written about a one-line optimization to the Python interpreter.
Also, there's an argument to be made that "function call" is the overhead involved in moving control from the call site to the callee (which can be significant in interpreted languages such as Python). That's at least how I interpreted it at first.
Just a side note: with Fibonacci + caching it solidly becomes a dynamic programming problem, so the time complexity drops from exponential to O(n), IIRC.
There is a whole class of problems where recursion + memoization (caching) = top-down dynamic programming. The other way to increase performance, and actually avoid growing the call stack, in this class of problems (including Fibonacci) is bottom-up dynamic programming.
Some gists I found on it https://gist.github.com/trtg/4662449
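The two styles side by side, as a quick sketch: top-down is the recursive version with a cache bolted on, bottom-up iterates from the base cases and never recurses at all.

```python
from functools import lru_cache

# Top-down: recursion + memoization
@lru_cache(maxsize=None)
def fib_td(n):
    return n if n < 2 else fib_td(n - 1) + fib_td(n - 2)

# Bottom-up: iterate up from the base cases; no recursion, no call stack growth
def fib_bu(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both are O(n), but the bottom-up version also sidesteps Python's recursion limit.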
Edit: I don't actually have much of a problem with the article headline as it is now. It depends a lot on your target audience! For Hacker News, yeah, we know what memoization is because we learned it in CS 102, right after we learned about recursion.
But for a lot of people who would get the most value from the article, the word "memoization" isn't going to mean much, and they wouldn't read the article.
Maybe something like: "Memoization in Python, or how I learned to speed up function calls in just one line"
For small values, the iterative version works just fine. For larger values, there are other methods: one obvious one is using the matrix form of the Fibonacci numbers, which just requires you to take an exponent of a 2x2 matrix (which you can do quickly because of exponentiation by squaring).
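For the curious, a sketch of the matrix approach: `[[1,1],[1,0]]^n` has `F(n)` in its off-diagonal entries, and exponentiation by squaring gets you there in O(log n) multiplications.

```python
def fib_matrix(n):
    """Fibonacci via the 2x2 matrix [[1,1],[1,0]] raised to the n-th power,
    using exponentiation by squaring: O(log n) matrix multiplications."""
    def mat_mul(a, b):
        return (
            (a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]),
            (a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]),
        )

    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    # [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return result[0][1]
```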
Try it with fib(35), curious what you find.
Why is 3 optimal? Because of how recursion and LRU work. I wish I could explain it with an animation.
You can play with it. https://repl.it/repls/NocturnalIroncladBytecode#main.py
If this theory is correct, every recursive function f(n) requiring access to f(n-x) should have x+1 as its maximum useful cache size.
Because of the way the recursion goes down the tree depth-first, I think 5 basically cuts down the computation to fib(10-5) = fib(5) which is pretty manageable, so the author couldn't really see any further measurable performance gains by increasing the cache further. I think for fib(35) it'd be clear that cache size 35 would help compared to cache size 5. (I picked 35 instead of, say, 300, because I think 300 would just not finish with cache size 5, it'd take forever haha.)
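The x+1 theory is easy to check: because the recursion proceeds depth-first, whenever fib(n) finishes computing fib(n-1), the value of fib(n-2) is still among the most recently used entries, so even a tiny LRU cache makes the whole thing effectively linear. A quick sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=3)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Without any cache this call would take on the order of phi**60 steps;
# with maxsize=3 the depth-first recursion always finds fib(n-2) still
# cached, so it finishes in linear time.
print(fib(60))
```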
We had a chatbot that polls a server and sends notifications, but due to clock skew it would sometimes send two notifications. So I just added the lru_cache decorator to the send(username, message) function to prevent that.
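A minimal sketch of that trick (the function and names here are illustrative, not the actual bot's code): since `lru_cache` only runs the body on a cache miss, a repeated (username, message) pair silently becomes a no-op.

```python
from functools import lru_cache

sent = []  # stand-in for the real notification transport

@lru_cache(maxsize=128)
def send(username, message):
    # duplicate (username, message) pairs become cache hits,
    # so the body runs only once per unique pair
    sent.append((username, message))

send("alice", "build failed")
send("alice", "build failed")  # deduplicated: the body is not re-run
```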
One thing where I do think it is kind of safe is for loading a file into a class instance. The memoization will avoid defining (1) a method to check to see if the file is loaded, (2) a method to load the file and (3) a class variable which points to the loaded file.
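For that particular pattern, `functools.cached_property` (Python 3.8+) collapses all three pieces into one attribute. A sketch, assuming a JSON config file for illustration:

```python
from functools import cached_property
import json

class Config:
    def __init__(self, path):
        self.path = path

    @cached_property
    def data(self):
        # runs once on first access; the result is stored on the instance,
        # replacing the "is it loaded yet?" flag and the explicit load method
        with open(self.path) as f:
            return json.load(f)
```

Every later access to `config.data` returns the already-loaded object.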
Have a look at their REST API docs for retrieving comments on an Issue:
https://docs.atlassian.com/software/jira/docs/api/REST/8.10....
You'll see that they respond with an ID for each comment. That ID is unique and probably the PK of the comment in the JIRA application's DB. That is the key you need to use for a duplicate check, not the username and message content.
A timed algorithm seems better suited. That said, it probably doesn’t matter much.
import functools

@functools.lru_cache(maxsize=31)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
I must admit I was hoping for a general approach to speeding up all function calls in python. Functions are the primary mechanism for abstraction and yet calls are relatively heavy in their own right. It would be neat if python had a way to do automatic inlining or some such optimization so that I could have my abstractions but avoid the performance hit of a function call (even at the expense of more byte code).
Is there a magic function? If one cropped up, it would get baked into Python, whose development is extremely active, so magic functions trend toward obsolescence. The main thing developers can consistently do to improve performance is micromanage types and algorithms at the bottlenecks. This is done with a tool like Cython, but you have to learn the discipline of coding for performance: you have to learn the details of what the machine is doing with the data in your particular piece of code. When do you want to replace linked lists with arrays? When are you doing unnecessary type conversions? How overweight are your high-frequency objects? How is memory being used? Are there more efficient ways on your microarchitecture to get this calculation done? Performance coding is a discipline, like application coding.
Is this a problem with Python? I don't think so, because performance comes second in the development process: write the code first, then optimize the real bottlenecks.
Note that `lru_cache` doesn't just do caching; it also provides convenient methods on the decorated function to get cache stats, clear the cache, and so on, in a Pythonic way.
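For example, `cache_info()` and `cache_clear()` come for free on the decorated function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def square(n):
    return n * n

square(2); square(2); square(3)
info = square.cache_info()   # CacheInfo(hits=1, misses=2, maxsize=None, currsize=2)
square.cache_clear()         # reset the cache (and the stats)
```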
Okay, at least they had the decency to provide a TL;DR. But if your summary is literally three words, why not put it in the title: “speed up Python function calls with functools.lru_cache”?
God I hate clickbait.
>"There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors." – Martin Fowler
And there are plenty of similar articles, for example [1] [2]
[0] https://www.mediawiki.org/wiki/Naming_things
I have enjoyed reading the Python Tricks book by Dan Bader (author of https://dbader.org/blog/python-memoization ). It's awesome. It might be outdated, but I still recommend it to anybody who has just started learning Python.
Avoid these tricks if you care about thread safety.
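To be precise about the hazard: in CPython, `lru_cache` keeps its own bookkeeping coherent under threads, but the documentation notes the wrapped function may still be called more than once if threads race on an uncached key. If that matters (side effects, expensive work), a lock around the cached call is one fix; a sketch (names are mine):

```python
import threading
from functools import lru_cache

calls = []

def expensive(key):          # stand-in for a slow, side-effecting function
    calls.append(key)
    return key * 2

_lock = threading.Lock()

@lru_cache(maxsize=None)
def _cached(key):
    return expensive(key)

def load(key):
    # Holding a lock around the lookup guarantees `expensive` runs at most
    # once per key, even when several threads miss on the same key at once.
    with _lock:
        return _cached(key)
```

The trade-off is that all lookups are serialized; per-key locking is more work but scales better.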