A standard practice to use... potentially AI-modified surveillance photography? Where exactly would they get these photos in this future scenario? From activists and journalists once these photo apps become widespread, unbeknownst to the government?
It's an interesting hypothetical for sure, but it's a stretch to call it a glaring flaw. It's not really any worse than people being misidentified in normal photos.