To be blunt though, I don't think the idea is very interesting for a few reasons:
- It doesn't address how you decide which edges to speculatively consider executable.
- Optimizers intentionally do not play the "what if" game and try optimizing things multiple different ways to see what wins or what gains might be had. It simply gets too expensive (in compile time) too quickly.
- There are already other ways to achieve similar effect, e.g. superblock formation or simple tail duplication (which can be done for specific effect, e.g. see branch threading in LLVM).
> - It doesn't address how you decide which edges to speculatively
>   consider executable.
>
> - Optimizers intentionally do not play the "what if" game and try
>   optimizing things multiple different ways to see what wins or what
>   gains might be had. It simply gets too expensive (in compile time)
>   too quickly.
My use case was that you already have a set of edges that profiling tells you are "rarely taken", and you wish to know whether eliding any of those edges improves optimization. Normally JIT compilers end up installing traps on most of these edges, hoping that the cost of the occasional side exit to the interpreter is outweighed by the added performance in the fast case. I wanted a way to defer the decision on whether to install a trap for a specific edge until later in the optimization process.
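To make the trap-and-side-exit pattern concrete, here is a minimal sketch in Python. All names (`interpret`, `compiled_double`) are illustrative, not from any real JIT; the point is only that the rarely-taken edge becomes a guard that bails out to the interpreter, so the remaining fast path can be optimized under the assumption the edge is never taken:

```python
def interpret(x):
    # Slow, fully general interpreter path: handles every case.
    if isinstance(x, int):
        return x * 2
    return len(x) * 2

def compiled_double(x):
    # Profiling says x is "almost always an int", so the compiled code
    # replaces the rarely-taken edge with a trap (guard)...
    if not isinstance(x, int):
        return interpret(x)   # ...whose side exit falls back to the interpreter.
    # Fast path, optimized under the assumption that x is an int.
    return x << 1
```

The question in the paragraph above is which of these guards are worth installing, which is hard to know before the optimizer has seen what the guarded fast path buys.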
> - There are already other ways to achieve similar effect,
>   e.g. superblock formation or simple tail duplication (which can be
>   done for specific effect, e.g. see branch threading in LLVM).
I don't disagree here. I'll have to admit that this approach is somewhat "out there"; in its current form it doesn't seem very practical.
True, I somehow forgot that stipulation by the time I got to the end of the post.