I initially spent two hours trying to modify different instructions, then gave up. I saw another blog post, written by a reverse engineer named "Hassan Mostafa" (aka cyclon3), who had previously succeeded with the same approach (taking Hopper Disassembler to Instagram on iOS), and I was inspired to try again that night, but I had no luck, even after finding and attempting to modify the same instructions.
I decided to call it quits, and then a few weeks later with a bit of a grudge, I spontaneously tried again and I had it done in about 30 minutes after finding the sandbox function.
If I end up in the same arena, I think I'll look for debugging code next. I love certificate pinning as a user, but as a forensic analyst I fucking loathe it.
Routing real app traffic through an intercepting proxy can be a real time-saver, depending on what the researcher is trying to do. For example, if they want to automatically tamper with a parameter in a request that doesn't happen until after some kind of authentication/session setup, it's much faster to let the app do all of that and configure the proxy to make just the one change. The alternative is writing a whole client that performs all of the initial steps and then makes the modified request, or writing an eBPF filter that applies the changes the researcher is interested in.
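The "one change" tamper rule is easy to express as a pure function over the request URL. Here's a minimal sketch (the host and the `user_id` parameter are made up for illustration; in practice this logic would live inside a proxy addon, e.g. a mitmproxy `request` hook):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def tamper(url: str, param: str, new_value: str) -> str:
    """Rewrite one query parameter, leaving the rest of the request untouched."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    if param in query:  # only touch requests that actually carry the parameter
        query[param] = new_value
    return urlunsplit(parts._replace(query=urlencode(query)))

# The app has already done login/session setup; we only flip one field.
print(tamper("https://api.example.com/v1/profile?user_id=123&lang=en",
             "user_id", "456"))
# -> https://api.example.com/v1/profile?user_id=456&lang=en
```

Requests without the target parameter pass through unchanged, which is exactly the "configure the proxy to make just the one change" workflow.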
Ha, same. I think I was eventually trying to hook into kernel-level functions and do it that way (I was using the Android client) but couldn't get far there either, though I think it's technically doable. IIRC, they were using some kind of vtable patching protection around kernel functions to ensure integrity.
I built anti-cheat software (and hardware) before, and it felt like anti-cheat level security. I had an axe to grind with Snapchat, as they rejected me after the first interview round :P
Yes, but it's significantly harder than flipping a bit. There are also clever ways of countering this (e.g. checksumming the public key). Of course, even this is technically hackable, but extremely time-consuming in practice. Imagine getting the public key and adding a bunch (and by a bunch, I mean like 16k) of random ops throughout the control flow that crash the app if any random byte of the key is wrong. For extra fun, offset the byte by the instruction pointer. Good luck debugging that.
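A toy sketch of the idea in Python (the key bytes are placeholders; a real implementation would compile these guards into native control flow, and the instruction-pointer offset trick doesn't translate to an interpreted sketch like this):

```python
import hashlib
import os
import random

# Placeholder key material; the real public key would be embedded at build time.
PUBLIC_KEY = b"-----BEGIN PUBLIC KEY-----...demo bytes..."
KEY_DIGEST = hashlib.sha256(PUBLIC_KEY).digest()  # baked in at build time

def check_key_byte(i: int) -> None:
    """One of the ~16k scattered guards: verify a single byte of the key's
    digest and hard-crash on mismatch, rather than raising a catchable error."""
    idx = i % len(KEY_DIGEST)
    if hashlib.sha256(PUBLIC_KEY).digest()[idx] != KEY_DIGEST[idx]:
        os.abort()  # no clean exception for an attacker to hook or patch around

def do_business_logic() -> str:
    check_key_byte(random.randrange(1 << 16))  # guards sprinkled through control flow
    # ... real work here ...
    check_key_byte(random.randrange(1 << 16))
    return "ok"
```

The point is that no single patch disables the check: an attacker who swaps the key has to find and neuter every scattered guard, each of which looks like ordinary code.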
…until now
But I think there are still a lot of people doing the NOP-patching thing, albeit with more complexity. There continue to be people breaking DRM and investigating random [mobile] apps with hex editors and the like.
It's harder to get started these days as programs are more complex, but at the same time the knowledge required is more accessible.
https://www.facebook.com/whitehat/bugbounty-education/261571...
Basic questions, admittedly. Just noticed that the final solution was to simply modify a few bytes of the binary, which seemed preventable.
On non-jailbroken platforms you generally do this with a developer certificate.
I imagine this is something AI will be able to do easily in an automated fashion: you can literally just try flipping the JZ/JNZ in a bunch of candidate spots, launching the app, and seeing if the nag screen comes up.
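The enumeration step doesn't even need AI. In x86, the short-jump opcodes JZ (0x74) and JNZ (0x75) differ by one bit, so generating every candidate patch is a one-byte flip per site; a naive byte scan will also hit operand bytes that happen to be 0x74/0x75, but false positives are fine since each candidate gets tested by relaunching the app. A sketch:

```python
# Brute-force patch generator: short JZ (0x74) and JNZ (0x75) differ by one bit.
def candidate_patches(code: bytes):
    """Yield (offset, patched_copy) for every byte that looks like a short JZ/JNZ."""
    for off, b in enumerate(code):
        if b in (0x74, 0x75):
            patched = bytearray(code)
            patched[off] = b ^ 0x01  # 0x74 <-> 0x75
            yield off, bytes(patched)

# An automated loop would write each patched copy out, re-sign it, launch
# the app, and check whether the nag screen still appears.
```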
Now if AI could crack something like Denuvo in a 0-shot way...
Now if I could feed an AI a binary and have it tell me, at a very broad scope, what is happening, that'd be a game changer. I'd say that's quite attainable with a high-context-window LLM, as they seemingly understand hex-formatted bytecode quite well.
everything is AI these days apparently... even LLMs
Cert pinning is basically free and is sort of a "you must be this tall to ride the ride" thing--not secure, but keeps the riff raff out.
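The core of a pin check really is a few lines: hash the certificate the server presents and compare it to a hash baked into the app. A minimal sketch (the cert bytes here are placeholders; real implementations usually pin the SPKI hash and use platform or library support such as OkHttp's `CertificatePinner` rather than rolling their own):

```python
import hashlib

# The pin, baked into the app at build time. Derived here from placeholder
# bytes standing in for the real server certificate's DER encoding.
SERVER_CERT_DER = b"...server certificate DER bytes..."
PINNED_SHA256 = hashlib.sha256(SERVER_CERT_DER).hexdigest()

def connection_allowed(presented_cert_der: bytes) -> bool:
    """Reject any cert that doesn't match the pin - even one signed by a CA
    the device trusts, which is exactly what an intercepting proxy presents."""
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_SHA256
```

An attacker who controls the device can of course patch this comparison out, which is why it's "not secure" in the strong sense - it just raises the bar.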
At the end of the day, your code runs on user machines, and they can observe what the code does, so it's always possible to deobfuscate, and if one person does it and shares their results, it becomes very easy to replicate. That doesn't mean obfuscation is useless, but you shouldn't put too much time into it.
Obfuscation would've had very little effect on the outcome of this experiment, but might've changed the approach to involve dynamic instrumentation a little more. The most effective obfuscation I've seen is VM obfuscation, but that presents a significant performance impact. Obfuscation would also make legitimate debugging harder.
Preventing modified binaries is typically done at the system level. It can also be implemented at the application level, and commonly is, but those checks can themselves be bypassed, or modifications can simply be applied after the security checks have completed (once again, through dynamic instrumentation libraries like Frida).
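To make the bypass concrete, an application-level integrity check usually amounts to hashing your own executable and comparing against a build-time value - and both failure modes above apply to it. A sketch (the expected digest is a placeholder):

```python
import hashlib

def verify_self(path: str, expected_sha256: str) -> bool:
    """App-level integrity check: hash our own binary and compare to a
    digest computed over the shipped binary at build time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256

# The weaknesses described above: an attacker can patch verify_self (or the
# branch on its result) directly, or use Frida to install hooks only *after*
# this check has already passed - the check sees a pristine binary either way.
```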
Engaging in a cat-and-mouse game with reverse engineers probably isn't in Meta's best interest.
Plenty of people are convinced that Facebook's apps spy on them through their microphone and use that to show them targeted ads.
The easiest way to disprove this is to monitor the traffic between the apps and Facebook's servers... but certificate pinning prevents this!
(Not that anyone who believes this can ever be talked out of it, see https://simonwillison.net/2023/Dec/14/ai-trust-crisis/#faceb... - but it's nice to know that we can keep tabs on this kind of thing anyway)
It's NOT good to listen to conversations without explicit (or implied, for that matter) consent, just as it's equally NOT good to exploit human predictive models to such a precise degree for profit. You SHOULDN'T be complicit in these practices, and saying "it's just what it is" makes you one more person in the arena throwing their hands up to allow it. Attempting change is ALWAYS better than apathetic complacency - before every interaction exists to become a transaction.
IANAL, but in the US, at least, I think the exemptions for good-faith security research[1] would apply. Maybe even the reverse-engineering for interoperability language in the DMCA itself[2].
[1] https://www.federalregister.gov/documents/2015/10/28/2015-27...
[2] https://www.govinfo.gov/content/pkg/PLAW-105publ304/pdf/PLAW...
Though I'm personally glad we no longer have to :) It's still way more difficult these days, and compilers can really obfuscate the code (even when it isn't obfuscated by design).
It shows my previous account even after I delete the app, clear the cache and Keychain, disable iCloud Drive, AND sign out of iCloud??
Why can't I see where this data is stored? Same for TikTok.
WHY does Apple, parading around as a pompous paragon of privacy, even allow this shit?
What else is being stored that we aren’t even aware of?
In case you're curious: I used Frida to install hooks into native functions.
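For flavor, here's roughly what a Frida native-function hook looks like. The module and export names (`libfoo.so` / `do_check`) are made up for illustration - the actual targets aren't named here - and running it requires the `frida` package plus a connected device:

```python
# Frida hook sketch: the injected JavaScript intercepts a native export,
# logs each call, and forces the return value to 1 ("success").
HOOK_SCRIPT = r"""
Interceptor.attach(Module.getExportByName("libfoo.so", "do_check"), {
    onEnter(args) { console.log("do_check called, arg0 =", args[0]); },
    onLeave(retval) { retval.replace(1); }
});
"""

def install_hook(process_name: str):
    import frida  # imported lazily: requires frida and an attached USB device
    session = frida.get_usb_device().attach(process_name)
    script = session.create_script(HOOK_SCRIPT)
    script.load()
    return script
```

Because the hook runs inside the live process, it kicks in after any startup integrity checks have already passed.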
I guess it's the fact that, in my mental model, no supporting library should have to be modified to allow viewing the traffic - no cert-pinning breaks required.
Integrity checking is user-hostile, but certificate pinning can be good for users.