The code will be somewhat faster (I won't try to predict how much) if you remove the variable-width character encoding in favour of bytewise access. Yes, pure-ASCII input hits some fast paths in string access, but they're still decidedly slower than the fixed-width encoding that [u8] is. Using strings also gives the incorrect impression that the code can cope with non-ASCII input.
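To illustrate the fixed-width vs variable-width point (this is a standalone sketch, not taken from the code under review): indexing a `&[u8]` is a plain array access, while `&str` deliberately has no positional indexing at all, because a char's byte offset depends on everything before it.

```rust
fn main() {
    let s = "12345";

    // Fixed-width: O(1) array indexing, no decoding.
    let b: u8 = s.as_bytes()[2];
    assert_eq!(b, b'3');

    // Variable-width: &str cannot be indexed by position; .nth() must
    // walk the UTF-8 encoding from the start every time.
    let c: char = s.chars().nth(2).unwrap();
    assert_eq!(c, '3');
}
```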
The code will be easier to follow if you use bytes throughout. Currently it's a mixture of bytewise and charwise operations, so you have to stop and think in each place about whether you're dealing with a char or a u8 (and half of them are even mislabelled). Every place that uses a charwise method has a suitable bytewise ASCII alternative (e.g. char.is_digit(10) → u8.is_ascii_digit()), so no extra burden is added. In the end the only place slightly complicated by the change is printing the solutions, and far more code (hotter-path code, too) will have been simplified, so it's easily worth it.
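A quick sketch of that method mapping (my own illustration, not code from the project): for ASCII input, the u8 methods give the same answers as their char counterparts, so the whole parser can switch to &[u8] without losing expressiveness.

```rust
fn main() {
    // Each charwise check has a bytewise ASCII equivalent.
    for (c, b) in ['7', '.', 'x'].into_iter().zip([b'7', b'.', b'x']) {
        assert_eq!(c.is_digit(10), b.is_ascii_digit());
        assert_eq!(c.is_alphabetic(), b.is_ascii_alphabetic());
    }

    // Getting the numeric value of a digit byte stays a plain subtraction.
    assert_eq!(b'7' - b'0', 7);
}
```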
I don’t know where the code you’re citing came from; it’s newer than what’s on the master branch, but its changes include some pretty bad stuff, like DIGITS: using &str for something that is always a single-character ASCII digit, accessed by already having the digit as a u8 and turning it back into a string prematurely. Admittedly the optimiser will fix a fair bit of that badness, but not all of it.
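For concreteness, here is a sketch of the shape being criticised next to the fix (the DIGITS table below is my reconstruction of the pattern, not the actual code): keep the digit as a u8 the whole way through and only form a character at the final print.

```rust
fn main() {
    // Criticised shape: a &str lookup table, consulted with a digit you
    // already had as a u8 (a reconstruction, names are assumptions).
    const DIGITS: [&str; 10] = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"];
    let d: u8 = 7;
    let premature: String = DIGITS[d as usize].to_string();

    // Simpler: an ASCII digit byte *is* the character; add b'0' when printing.
    let direct = (d + b'0') as char;
    assert_eq!(premature, direct.to_string());
}
```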