Yeah I've tried that approach. The model ends up needing to learn every combination of tokens. For example, the word "apple" now has six byte positions it can be split on, and the model suddenly needs to learn that all six yield the same output attention state.
The learning problem becomes O(max token length) harder, so you need a proportionally larger model to compensate.
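To make the combinatorics concrete, here's a quick sketch (my own illustration, not tied to any particular tokenizer) counting the cut points for "apple" and enumerating every contiguous segmentation the model could be shown for that one word:

```python
word = "apple".encode("utf-8")  # 5 bytes

# Boundary positions 0..5: six places a tokenizer could put a cut.
positions = list(range(len(word) + 1))
print(positions)  # [0, 1, 2, 3, 4, 5]

def segmentations(b):
    """Yield every way to split the byte string into contiguous chunks."""
    if not b:
        yield []
        return
    for i in range(1, len(b) + 1):
        for rest in segmentations(b[i:]):
            yield [b[:i]] + rest

splits = list(segmentations(word))
print(len(splits))  # 16 segmentations for a 5-byte word, i.e. 2^(5-1)
```

Every one of those 16 segmentations names the same word, so the model has to learn they're all equivalent, and that equivalence class grows exponentially with token length.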