Yes, but now every single time you update the library which provided the base class, you need to re-verify that __init__ doesn't do anything new. May be worth the tradeoff, but it really should be noted.
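To make the hazard concrete, here's a minimal sketch (the class names are made up, not from any real library): a subclass that overrides __init__ without calling super().__init__() silently skips whatever setup the base class adds in a later release.

```python
class BaseWidget:
    # Imagine this lives in a third-party library. Suppose v2.0 of the
    # library added this registration step to __init__.
    def __init__(self):
        self.registered = True


class MyWidget(BaseWidget):
    def __init__(self):
        # No super().__init__() call. This worked fine against v1.x,
        # but after the library update self.registered is never set.
        self.name = "mine"


w = MyWidget()
print(hasattr(w, "registered"))  # False -- the new init logic was skipped
```

Nothing errors out at definition time; the breakage only shows up when something later relies on the attribute the base class was supposed to set. Hence the re-verification burden on every upgrade.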
It's humorous when someone who presumably cares about performance tells you they use Python. Python is a wonderful language, but performance is not what it is designed for. Basically _anything_ that does not require an interpreter to run will be 10-30x faster on the same hardware, and most will also consume less RAM and be able to use more than one core on the system efficiently. It used to be that Python's lack of performance didn't matter because disks and networks were so slow things were IO bound. In more and more cases that's just not true anymore. You could be easily reading at 1GB+/sec and pushing 10-20Gbps to NICs, depending on the hardware.
CPython is slow as an interpreter, true. But "programming in Python" may or may not be many times slower than writing the comparable code in another language; it depends on what you're doing and how you're doing it.
Also, I care about performance in any language to some extent. If I can write a backup bash script that takes 2h, or write one that takes 20min, I do care about performance and will choose the second one. Why shouldn't I?
As to caring about perf: you shouldn't care about it until you have to. Take that 2h vs 20min example. If you only need to run it a few times and there's plenty of time available, who cares how long it takes; if the 2h one is easier to write, then by all means that's what you should do. OTOH if you're under severe time constraints and need to run it every hour, then obviously the 2h script won't do the job. And if the 20min script takes the same time to write as the 2h one, then of course you should go with it. All too often I see people optimizing things that don't matter one iota, simply because they like things to be fast. Something gets executed once a day and runs for 5 minutes? Let's spend two weeks making it complete in 30 seconds. As long as the employer is paying, why not.
Anyhow, the relatively new Nuitka project seems to be aiming to tackle the Python-to-C++ compiler problem, and seems to have a lot of promise: really good compatibility, apparently decent speedups, and cross-platform support. It works with Python 3 too. I have a lot of hope!
TL;DR: things get faster if you know the machine you're working on and how to use it best, regardless of the programming language.
Most performance problems come from bad algorithms, not your programming language. I had a piece of code that had to do some complicated "image" masking with a 2D array, with feathering and some fairly involved statistical modelling to compute the offset. It took 20 minutes to run on a medium-sized data set. After sitting down with it for a few hours, I got it down to 30 seconds. Rewriting it in C would have taken far longer than fixing the algorithm did. Python runs in a fairly well-optimised VM anyway, so it's definitely "good enough".
The source is on my GitHub, but it probably won't be useful to anyone: https://github.com/cyphar/keplerk2-halo.
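To illustrate the "algorithms, not language" point (this is a toy example, not the masking code from the comment above): the same task solved two ways, where the algorithmic change dwarfs anything a language switch would buy.

```python
def has_pair_with_sum_quadratic(nums, target):
    # O(n^2): check every pair. Fine for tiny inputs, painful at scale,
    # in any language.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False


def has_pair_with_sum_linear(nums, target):
    # O(n): a set membership test replaces the inner loop entirely.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False
```

Porting the quadratic version to C would make it some constant factor faster; switching to the linear version makes it asymptotically faster, which is usually where the real wins are.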
Python is easily able to push this much data, but my impression is that the problem is performance-hogging libraries. In my case I had to write my own HTTP client implementation for Python to get such speeds. Python is not the problem; you just need to avoid unnecessary LoCs.
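One flavour of "avoid unnecessary LoCs" is avoiding per-chunk allocations and copies on the hot path. A minimal sketch (not the commenter's actual client, just the general technique): read into a preallocated buffer with recv_into instead of concatenating bytes objects, which re-copies the accumulated payload on every chunk.

```python
import socket


def read_exact(sock, n):
    """Read exactly n bytes from sock into one preallocated buffer."""
    buf = bytearray(n)            # one allocation up front
    view = memoryview(buf)        # lets us write into sub-ranges without copying
    got = 0
    while got < n:
        r = sock.recv_into(view[got:], n - got)
        if r == 0:
            raise ConnectionError("peer closed early")
        got += r
    return bytes(buf)


# Self-contained demo using a local socket pair instead of the network.
a, b = socket.socketpair()
a.sendall(b"hello world!")
print(read_exact(b, 12))  # b'hello world!'
```

The naive `data += sock.recv(4096)` pattern is O(n^2) in total bytes copied; the buffer-reuse version copies each byte once, which matters at the 1 GB/s rates mentioned above.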