The point is that there is an “unwritten contract” between the user and the library: if we want any kind of performance, then we have to promise that the inputs are “well behaved” (in a very specific sense that varies from algorithm to algorithm). The user knows this and so does the developer, because otherwise most algorithms would spend most of their actual compute time converting problems into “fully general forms” that work on every input but run hundreds, if not thousands, of times slower than the original.[0]
More concretely, most people want “mean” to be fast, and a simple implementation of mean works in almost every case, without resorting to arbitrary-precision rational arithmetic. The claim I make above is that the latter is almost never what people actually need, because if it were, then almost every numerical algorithm would be doomed from the start, the moment it performs any operation more complex than “take the mean.” Now, if someone really does need exact results for the huge numbers given (or whatever the case is), there are libraries that will do this, at a huge cost in performance. (But if they really did need it, for example because the problem cannot be rewritten in a simple way that works with the available implementations, my bet is that this user would know.)
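To illustrate (a minimal sketch of my own, just Python's standard library, not any particular implementation): the simple floating-point mean is what everyone wants on well-behaved inputs, and the exact rational version only matters once the contract is violated.

    from fractions import Fraction

    # Pathological input: the "huge numbers" case. The simple float mean
    # assumes values of comparable magnitude with no catastrophic
    # cancellation; this input breaks that assumption on purpose.
    xs = [10**20, 3, -10**20]

    # Fast path: each value is rounded to ~53 bits of float precision,
    # so the 3 is swallowed by the 10^20 terms and cancellation leaves 0.
    fast = sum(float(x) for x in xs) / len(xs)

    # Exact path: arbitrary-precision rationals, no rounding anywhere.
    exact = sum(Fraction(x) for x in xs) / len(xs)

    print(fast)   # 0.0
    print(exact)  # 1

The exact path pays arbitrary-precision arithmetic on every element to cover a case the typical user never hits.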
Also, apologies for any mistakes; I’m typing this on my phone while walking to a meeting.
——
[0] That isn’t to say that you shouldn’t write a fast and general version if you can! Just that generality usually incurs overhead that is often unacceptable (see, e.g., using arbitrary-precision fractions for Gaussian elimination).
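To make that concrete, here is a sketch of my own of exact Gaussian elimination over the rationals (assuming an integer, nonsingular input matrix): the result is exact, but the intermediate numerators and denominators grow with every elimination step, and each operation gets slower as they do.

    from fractions import Fraction
    import random

    def solve_exact(A, b):
        # Gaussian elimination over the rationals: zero rounding error,
        # but intermediate entries can grow very large.
        n = len(A)
        M = [[Fraction(v) for v in row] + [Fraction(bi)]
             for row, bi in zip(A, b)]
        for col in range(n):
            # Any nonzero pivot is fine in exact arithmetic; no need to
            # pick the largest one for numerical stability.
            piv = next(r for r in range(col, n) if M[r][col] != 0)
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        # Back substitution, still in exact arithmetic.
        x = [Fraction(0)] * n
        for i in reversed(range(n)):
            s = sum(M[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (M[i][n] - s) / M[i][i]
        return x

    n = 10
    A = [[random.randrange(-9, 10) for _ in range(n)] for _ in range(n)]
    b = [random.randrange(-9, 10) for _ in range(n)]
    x = solve_exact(A, b)  # a random integer matrix is almost surely nonsingular
    print("largest denominator:",
          max(xi.denominator.bit_length() for xi in x), "bits")

A floating-point version does the same sequence of operations in constant-size arithmetic, which is where the “hundreds if not thousands of times slower” gap comes from.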