The solution here is to write real tests and hyperlink them, in a marked-up form, to the functions they actually test. Coding inside a docstring is an epic troll.
This is a warning to anyone getting seduced by doctests: stay away! They invert the problem. One should just write tests; it is a weakness in documentation and code-discovery tools that makes doctests seem like a good idea.
In particular, it's great if the documentation includes clear and simple examples to learn from, and it's even better if these are validated as working. This means the focus of code in documentation is typically different from that of a test (it doesn't need such precise validation, nor to cover all the edge cases and regressions), but it's still really useful to have it run automatically.
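As a minimal sketch of what "validated examples" means in practice, here is a function (`add` is hypothetical, purely for illustration) whose docstring examples are checked by Python's built-in doctest module:

```python
import doctest


def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b


if __name__ == "__main__":
    # Reports nothing if every example still matches its shown output,
    # so the documented examples can't silently rot.
    doctest.testmod(verbose=False)
```

The examples stay simple enough to learn from, and the `testmod()` run only guarantees they still work, without attempting the exhaustive edge-case coverage of a real test suite.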
Python's doctests are also formatted, and in some ways behave, like an interactive shell session, with
>>> code here
output here
rather than just "literate code", so the execution context is a bit strange. This makes complicated doctests hard to inspect and debug, doubly so because there's almost no tooling that understands doctests. And of course, on the flip side, the best tests make for absolutely terrible examples, since they try to exercise weird corner cases.
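One small mitigation for the debugging problem, sketched here with the stdlib's own `doctest.script_from_examples`, is to convert the shell-session format back into plain literate code that a debugger can step through (the `session` string is a made-up example):

```python
import doctest

# A doctest-style session: '>>>' statements followed by expected output.
session = """
>>> x = [1, 2, 3]
>>> sum(x)
6
"""

# script_from_examples() rewrites the '>>>' lines as ordinary statements
# and turns the expected output into comments, giving a plain script
# that is far easier to inspect and run under a debugger.
print(doctest.script_from_examples(session))
```

This doesn't fix the odd execution context, but it at least lets you pull a complicated doctest out into a normal script when it starts failing.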
Perhaps it's just my perception, but doctests (embedded in docstrings) seem slower to run.