This ignores how information spreads. Given how unlikely it is that a randomly generated face will match a relevant person in the local area of the protest or government, the AI technology would have to be widespread for this to matter at all (otherwise it's statistically a non-problem, because a real-world match would be even less likely to happen).
If it is widespread, then the government workers doing facial recognition should be keenly aware of its existence and adapt by seeking photos from non-activist, unprotected sources... like the thousands of photos posted to social media after every protest.
A bad government doesn't want to be arresting random people either; it wants the real ones.