And if the "keyboard" were usable for practical purposes, why would the scammer pass up the chance to monetize the working features they sank their own time into developing?
Testing specific features would not be trivial. Your description may claim you have the world's only AI keyboard driven by machine learning. There is no way the reviewers can test that, so they accept it at face value.
A few years ago Apple substantially cut App Review times in direct response to developer complaints, going from roughly a week to roughly a day. Part of the reduction came from more automated tools to detect violations; part came from adding more reviewers.
But that means reviewers have minutes to review each app, not hours, and they are focused on technical rule violations. They are never going to build a test plan from an app's marketing claims and verify every single one.
In this example, it could have gone like this: the developer ships a simple keypad on the watch plus some subscription screens. The reviewer verifies that there is a keypad on the watch, that the screens' language and the subscription flow look reasonable, and approves the app.
Then, once the app is live on the store, it behaves entirely differently, and all the user sees are the scam screens.