This would work best in a high-level language where the specific bit layout of struct types is not defined by default. Rust is one such language, but the idea would also work with .NET and JVM languages.
One approach is to represent an integer type as A < (x << S) + O < B. This would allow ranges, powers of two, “NonZero”, offsets, and the like to be expressed at the lowest levels of the type system. Additionally, the high-level type system could keep a list of specific values that are excluded.
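The bounded-and-shifted form above could be sketched as a small descriptor type. This is only an illustration of the idea, not any existing compiler's representation; the names `IntRepr` and `contains` are hypothetical:

```rust
/// Hypothetical descriptor for the low-level integer form
/// A < (x << S) + O < B, plus a list of individually excluded values.
#[derive(Debug)]
struct IntRepr {
    lower: i128,         // A: exclusive lower bound
    shift: u8,           // S: left shift (alignment / power-of-two factor)
    offset: i128,        // O: additive offset
    upper: i128,         // B: exclusive upper bound
    excluded: Vec<i128>, // specific values the high-level type rules out
}

impl IntRepr {
    /// Does the payload `x` produce a value that inhabits this type?
    fn contains(&self, x: i128) -> bool {
        let v = (x << self.shift) + self.offset;
        self.lower < v && v < self.upper && !self.excluded.contains(&v)
    }
}

fn main() {
    // "NonZero" byte: 0 < x < 256, no shift or offset.
    let non_zero_u8 = IntRepr { lower: 0, shift: 0, offset: 0, upper: 256, excluded: vec![] };
    assert!(non_zero_u8.contains(1));
    assert!(!non_zero_u8.contains(0));

    // 8-byte-aligned non-null pointer: 0 < x << 3 (low three bits always zero).
    let aligned_ptr = IntRepr { lower: 0, shift: 3, offset: 0, upper: 1 << 48, excluded: vec![] };
    assert!(aligned_ptr.contains(0x1000 >> 3));
    println!("ok");
}
```

A range type like `1..=100` would then just be `lower: 0, upper: 101`, and an offset enum discriminant would use `offset` alone.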
Pointer types could be internally represented as ordinary integers; for example, an aligned non-null pointer on some architectures would be “0 < x << 3” (non-zero, with the low three bits always clear).
This need have no effect on the emitted code, but the compiler would be free to utilise the spare low bits or the non-zero niche if it chose to do so. Rust does this in a few hard-coded scenarios, but a more complete type model would allow more flexibility: for example, it could pack the enum discriminant and the value into the same word whenever an enum's variants hold only pointer types.
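Rust's existing hard-coded version of this is observable through `size_of`: the `None` case of `Option` is stored in the forbidden zero value of the payload, so no separate discriminant word is needed:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

fn main() {
    // Option<&u8> fits in the same space as &u8, because the
    // discriminant for None is stored as the (otherwise invalid)
    // null pointer value.
    assert_eq!(size_of::<&u8>(), size_of::<Option<&u8>>());

    // The same niche optimisation applies to NonZero integers:
    // None is stored as the excluded value zero.
    assert_eq!(size_of::<u32>(), size_of::<Option<NonZeroU32>>());

    println!("ok");
}
```

A richer type model, as described above, would let the compiler derive such packings from the declared value ranges instead of special-casing a handful of standard-library types.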
Conversely, the high-level language can use this information during type checks to give better error messages.