If you go the route that requires handling arbitrarily large tables, then computing a hash with a long enough output necessarily requires more steps, even if you assume all operations on width-w integers are constant time -- because eventually your keys have to be larger than w, and past that point the number of steps to compute the hash grows with the key size.
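To make that concrete, here's a hypothetical sketch (names and the FNV-style mixing constant are my own, not from anything above) of a word-at-a-time hash on an assumed 64-bit machine. The step count is ceil(n / w) for an n-byte key, which is constant only while keys fit in one word:

```python
W_BYTES = 8  # assumed word size: 64 bits

def word_hash(key: bytes) -> tuple[int, int]:
    """Return (hash value, number of word operations performed)."""
    h = 0
    steps = 0
    # Consume the key one machine word at a time; each iteration is
    # one constant-time operation on a width-w integer.
    for i in range(0, len(key), W_BYTES):
        word = int.from_bytes(key[i:i + W_BYTES], "little")
        h = (h * 0x100000001B3 ^ word) & 0xFFFFFFFFFFFFFFFF
        steps += 1
    return h, steps

# An 8-byte key takes 1 word operation; a 4096-byte key takes 512.
_, short_steps = word_hash(b"8bytekey")
_, long_steps = word_hash(b"x" * 4096)
```

So the hash is Theta(n / w) in the key length n, not O(1), the moment the key universe outgrows the word size.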
This is still different from the sorting problem. You can have a consistent computation model in which memory seek times are constant (even if unrealizable in practice). That seems fundamentally different from a model in which a hash computation takes a constant number of steps yet produces arbitrarily long output.
The issue isn't whether the scaling of the hash computation matters in practice, but whether the big-O is properly O(1). And even if you only cared about "in practice" limitations, you're still stuck with the hash computation taking longer than any realizable trie lookup (which, as another poster mentioned, is common), so the O(1) claim misrepresents performance relative to another data structure.
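One way to see why a trie can come out ahead: hashing must consume the entire key before it can probe the table, while a trie lookup can bail out as soon as the key diverges from everything stored. A minimal sketch (this toy `TrieNode` is my own illustration, not anyone's cited implementation):

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.value = None

def trie_insert(root: TrieNode, key: bytes, value) -> None:
    node = root
    for b in key:
        node = node.children.setdefault(b, TrieNode())
    node.value = value

def trie_lookup(root: TrieNode, key: bytes):
    """Return (value, bytes inspected); stops early on a mismatch."""
    node, steps = root, 0
    for b in key:
        steps += 1
        if b not in node.children:
            return None, steps  # distinguished before reading the whole key
        node = node.children[b]
    return node.value, steps

root = TrieNode()
trie_insert(root, b"apple", 1)
hit = trie_lookup(root, b"apple")    # inspects all 5 bytes
miss = trie_lookup(root, b"zucchini")  # inspects only 1 byte
```

The hash, by contrast, always does work proportional to the full key length even when a single byte would have settled the lookup.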