Pretty much every single hotel gets a call from Mr. Patel at night asking to wire money due to an emergency. Many hotel employees fell for it and wired money. Some of these employees even drilled open the safe; some wired money from their personal accounts.
This scam is mostly social engineering without any AI/Deepfake. It's going to be a fun time ahead for everyone.
He explained the whole thing to me.
(Media reporting suggests this can also be true at some US hardware tech companies).
In any case, what I find strange is that HK finance companies (like much of the rest of the world) will usually have some kind of maker-checker system which prevents individual mistakes like this.
I can only imagine this being leveraged nefariously.
But having lived & worked in a few countries now, I've noticed that the way other cultures do their overrides is always more visible (e.g. in country A you might pay bribes to get out of tickets, country B might just not pull people over in nice cars).
Sure there might be cultural differences, but maybe this guy is just careless.
There was a case in the US where someone pretended to be a cop, called a fast food restaurant, and actually convinced the manager to strip search an employee.
I guess this is also a case of cultural power distance.
Power distance might matter, depending on nationality of participants.
Also if English is a second language, then perhaps the sound quality of the synthetic voices wouldn't need to be as good - we are surely better at recognising voices in our mother tongue.
"Silence, power and communication in the operating room" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001035/
Prior to checklists, nurses would feel hesitant to point out errors by surgeons.
Post checklists, people felt more empowered to say "Doctor, I believe you missed step 5".
(Didn't completely remove the hesitancy but this point was identified explicitly in Atul Gawande's book The Checklist Manifesto)
This could be totally real, but could it also be one employee saying 'the CFO was on a call' and claiming deepfake as an excuse?
I guess it was a matter of time before this occurred. How long before scammers do bulk video calls to parents/grandparents, pretending to be the kids, saying they are in trouble and need $$$ ASAP?
The even better question is: how can this be stopped or reduced, and is there a new business there?
Especially when a high percentage of people post their face and voice on social media. I find this especially crazy in the age of AI. I trained a Stable Diffusion LoRA with photos of a friend and showed it to them (with permission), and they were completely shocked. I showed it to one of their friends, and they were fooled for at least a minute; it took some careful looks to find the discrepancies.
Keeping yourself anonymous isn't compatible with a lot of even moderately senior-level jobs out there.
There has been little issue for most people having photos of themselves online on social media.
If people want a photo of you they will find one.
If you refuse and it's an actual emergency with the real CFO, it might be a career limiting move, if you don't get fired.
If you accept, it might be a deepfake CFO and you might get sued.
This is really the crux of it: senior management needs to take the lead setting up policies which are efficient enough not to encourage people to try to bypass them, and a culture where everyone in the company feels comfortable telling the CEO "I'm not allowed to do that". This is possible, but it has to be actively cultivated.
I would assume the matter-of-time window elapsed a while ago, and now we are in the place where it's not only being detected but actually revealed, regardless of how embarrassing that is.
Unfortunately, this is why we need open access to some deepfake tech. The only way to convince people who are not immersed in tech how convincing deepfakes can be is to sit with them, and create their own deepfakes.
Then memorize and practice security protocols like verbal passwords.
https://abcnews.go.com/US/utah-missing-foreign-exchange-stud...
The generic type of vulnerability referenced in the latter part of the article has sprung up after fintech tried to emulate traditional offline auth and KYC with things like scanned images of ID documents, face recognition and liveness detection. Anyone in the know could see these attacks coming miles away.
Umm where have you been the last decade? The "Grandma help me I'm in a foreign prison and need you to buy iTunes gift cards" scam is extremely lucrative.
Opening with the line "Umm where have you been the last decade?" feels like throwing insults, and it's not conducive to a positive environment to learn from one another. You probably didn't mean it that way, but I thought I'd point out this style.
Regarding the last decade, the pertinent part of the comment you responded to is "do bulk video calls to parents/grandparents pretending to be the kids" - more referring to when these existing scams hit a higher level.
A friend of mine in the US actually personally knows two people that this has already happened to, albeit with audio only. With video it's going to be nuts.
This sounds like it required quite a bit of preparation, i.e. collecting data for each deep-faked participant including image/voice samples.
If it's reaching this level of sophistication already then I suspect a new participant validation scheme is on its way for sensitive meetings.
It would easily be worth spending $1m on the perfect setup.
I have clients where anything over even quite a low set limit (say €10k) requires multi-party authorisation - and it's very common for the person entering payments to be unable to authorise payments. That's just good practice.
A payment should not be able to be queued without a PO number. If the payee is new, the bank details need to be verified by phone. Once approved as a destination account, that payee is set up in banking, and authorised by a finance clerk and someone more senior. At the point a payment is requested the PO and other details should be double checked against what is in the system. If there's a match, then the payment can be queued for authorisation. The person entering payments and the people approving payments should be entirely different - and it should be people, not a single person. When payments are entered, the payments should be reviewed by first authorisation - a finance manager, for example - and once that authorisation is conducted, depending on payment limits, another authorisation or authorisations will be carried out.
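A minimal sketch of that maker-checker rule, with entirely hypothetical names (`Payment`, `queue_payment`, `approve`) and a hard-coded verified-PO list standing in for real banking systems, might look like:

```python
# Minimal maker-checker sketch: a payment needs a PO that is already
# verified in the system, and approvals from people other than the
# person who entered it.
from dataclasses import dataclass, field

@dataclass
class Payment:
    po_number: str
    amount: float
    entered_by: str
    approvals: set = field(default_factory=set)

VERIFIED_POS = {"PO-1001"}   # POs whose bank details were phone-verified
REQUIRED_APPROVALS = 2       # e.g. finance manager + someone more senior

def queue_payment(payment: Payment) -> bool:
    # Reject anything without a verified PO on file.
    return payment.po_number in VERIFIED_POS

def approve(payment: Payment, approver: str) -> bool:
    # The person who entered the payment can never approve it.
    if approver == payment.entered_by:
        return False
    payment.approvals.add(approver)
    # The payment only initiates once enough distinct approvers sign off.
    return len(payment.approvals) >= REQUIRED_APPROVALS
```

The key property is that no single identity (real or deepfaked) can both enter and release a payment.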
If you have 10 business units trading 50 world currencies, checking 500 transactions for FX every day is a total chore, hence it would get automated, and only unusually large transactions would be flagged. Rules like "<10m goes through automatically" would be tuned over time so that the operations team's workload adds actual value without being onerous on their time.
So, depending on the business we are talking about, a 25m transaction could basically be lost in the noise. Given the mention of the CFO being London-based and the operations team being in HK, it sounds like a typical investment bank setup to me.
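As a rough illustration of that kind of tuned rule (the function name and the 10m limit are made up for the example):

```python
# Hypothetical auto-approval rule: transactions under the tuned limit
# flow through automatically; anything at or above it is flagged for
# manual review by the operations team.
def flag_transactions(amounts, limit=10_000_000):
    auto = [a for a in amounts if a < limit]
    flagged = [a for a in amounts if a >= limit]
    return auto, flagged
```

The point is that once the limit is tuned so the flagged queue stays short, a fraudulent transfer below it never meets a human at all.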
After having worked IT for various startups, I cannot overstate just how much executives and other higher-ups detest policies that make them verify who they are. It short-circuits something with their ego.
This is astounding levels of incompetence.
From what I understand of the literature, it’s often several interactions to gather enough information from several employees to learn to sound like you belong there, then using it all against someone with “keys” who escorts you the rest of the way.
I can imagine a scam where the fake CEO gets a phone or laptop outside of the process "because CEO". This, however, will still be limited to generic, low-value stuff handled by single people in a company.
There is no way that a reasonably organized company can leak 40 MM USD.
Oh, please, HP lost some 40 million in inventory while contracted to Solectron Global for repairs, because their inventory systems are utter garbage compared to Dell or Toshiba.
Except these sorts of transfers almost always happen with, at a minimum, dual approval, where exceptions cannot be made because it's software defining the rule.
1 employee submits the transaction for review, and a 2nd (and sometimes a 3rd, 4th) person must approve it before the payment initiates. There isn't typically a bypass function.
Also, CFOs are typically responsible for setting up and enforcing these controls. A big part of a CFO's job is to manage risk. If you work under a CFO, you would be more likely to be rewarded for following the process than be punished.
Obviously there are exceptions to this, but by and large no CFO would punish a finance person for disobeying an order to bypass a process intended to prevent financial fraud.
These people aren't stupid. I'd expect them to understand risk better than your average senior software engineer and if you tell them "Sorry boss, too risky to do that right now. I can't be 100% sure this message is genuine. Let's sync on this after your meeting", your chances of promotion at this company would likely rise, not fall.
install notepad++ from pre-packaged store? approval needed
change to mailing list you own? approval needed
1 line config change to production alerting system? 8 approvals needed
I can easily imagine people just clicking Approve sometimes without reading
And many more stories like that.
But yes, for small fish there is an approval process for everything, even to buy a paper clip.
Regardless, approvals for multi-million transfers require a higher level of process and approval.
Oh, my sweet summer child. The larger the organization, the more dysfunctional it becomes.
See "How this scammer used phishing emails to steal over $100 million from Google and Facebook":
https://www.cnbc.com/2019/03/27/phishing-email-scam-stole-10...
Couple years ago I thought that too...
All the checks you describe - multiple approvers, standing data, callbacks etc - the guys going after big payments like this know these checks are in place, how they work and have a game plan for it.
If you can deepfake one guy with the checkbook, can’t you deepfake the guy with the checkbook and the guy who enters the POs into the system? Lower odds, but far from zero.
I mean, we still live in a world where a very rough signature match on a piece of crappy paper is enough to move millions if needed.
Maybe it’s because I’m in the EU. Banking here is very different to the US.
Deepfake was used in the 2023 MGM casino breach to convince tech support staff to do things that compromised their MFA
Now we're seeing a combination of these for significantly higher gains.
this is the real problem. why oh why, after suspecting an email as phishing, would you then go on to even click ANYTHING, let alone join a video call?
insanity. either stupidity or he's lying about suspecting the email. how many corporate security trainings does it take? this is just about 101. "if asked to do a secret task by a suspicious email, DON'T do it"
It takes $CURRENT_NUMBER + 1.
People are still, to this day, racking up thousands of dollars in iTunes gift cards on corporate cards and mailing them out, because they got a text from "the CEO". It happened at my spouse's work just last year. It'll continue happening again, forever, because to paraphrase P.T. Barnum, a sucker is hired every minute - in the probability distribution of humanity along that particular axis, there's always going to be some percentage at the bottom who'll fall for the most obvious scams. Sometimes repeatedly.
This is not what they teach you in trainings, though. They teach you to get the requestor (or your boss or whoever might be authoritative) on the line and confirm that the email is authentic. I believe a video call qualifies as well.
Have you ever actually done corporate security training? It's very obviously 100% useless and not going to teach anyone anything.
A company I worked for actually started sending test phishing campaigns which is a lot more effective, but I thought they were still pretty obvious and also it led to stupid people reporting them on Slack endlessly.
Still, probably the best thing you can do.
I've seen some decent ones. e.g. One that was presented from adversaries PoV which I thought was innovative & got people thinking about it in novel ways (at least did for me).
Many people will open a suspected phishing link, report it, then open it later in the afternoon...
If someone claims to be a police officer and hands you a number to call to see if they are real... don't use that number. Figure out the non-emergency number of the station they claim to be coming from independently and ask them. If a "new agent" from your bank calls you and gives you a "new number" to call them, figure out an official number of your bank and call that.
Also, whenever paying new accounts, once you've independently reached the person you think you're talking to, always do a test transaction and make sure they get it before sending the rest.
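A sketch of that "test transaction first" protocol, with hypothetical `send` and `confirm_received` stand-ins for your actual banking flow:

```python
# Pay a new account only after a small test transfer is confirmed
# received via an independently-verified channel.
def pay_new_account(account, total, send, confirm_received):
    TEST_AMOUNT = 1.00
    send(account, TEST_AMOUNT)
    # Confirm receipt with the person you independently reached,
    # not via any contact details the requester supplied.
    if not confirm_received(account, TEST_AMOUNT):
        raise RuntimeError("Test transfer not confirmed; aborting payment")
    send(account, total - TEST_AMOUNT)
```

The test amount is sacrificed if the account turns out to be wrong, which is a cheap price against sending the full sum to a scammer.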
Nobody would accuse me of great people skills and while I'd like to point to my technical acumen as the reason I can spot fakes like this easily, it's my primate brain that knows something is wrong.
Social engineering works because people think they could spot it.
That belief is a catch-22, though. By definition, each time one fooled you, you didn't note anything other than a run-of-the-mill normal video. A lot of tiktok accounts lately are dedicated to deepfaking celebrities. For example, if I hadn't already told you and you just casually scrolled by it, would you immediately suspect this isn't Jenna Ortega https://www.tiktok.com/@fake_ortegafan/video/732425793067973... ? I didn't look for the best example, that was just the very first that came up.
>Is this an already existing product
Usually cutting-edge ML has to be done with a GitHub repo last updated a few days ago using TensorFlow/PyTorch and installing a bazillion dependencies. And then months later you might see it packaged up as a polished product startup website. I've seen this repo a lot https://github.com/chervonij/DFL-Colab
I tried to find the link but my search-fu is not good today it seems..
I did find this, which seems related: https://blog.metaphysic.ai/the-emergence-of-full-body-gaussi...
There's also the fact from the article that this was an employee in Hong Kong on a video call with people supposedly in the UK, so it's also possible they took advantage of bad video quality to do this..
Get on video for the first minute or so, then, as we've all done, say "I'm going to turn off my video so my connection sounds better" etc...
Talking about how something like this can happen in a big company is fun and all, but the scary thing is that it is _so much easier_ to do these sorts of scams with deepfakes. Which means they will be deployed against "softer" targets, like you and me, and your parents and grandparents.
Imagine every C-level exec who's opened a top-urgent ticket with IT because their printer doesn't work (they forgot to plug it in/forgot it needs paper/it's not a printer, it's a paper shredder) trying to operate some form of key exchange software securely, while people capable of pulling off this sort of scam are targeting them.
I don't think this is a problem that can be solved with technology.
We already have facial verification systems in hundreds of millions of devices that are genuinely very difficult to spoof.
Doesn't the US military have DoD people plug in their ID badges to read/sign emails through outlook?
Such headlines are usually followed, a few weeks later, by a headline reading something like this:
"Three indicted in scheme involving deep-fake to steal $25m".
Basically, it was a well-thought-out and well-executed scam that perfectly fit the employee's situation.
That is, the scammer manages to get ahold of the SIM card / phone number of the CFO, and be on the receiving end if/when a worker calls the CFO up.
Weakest link would probably be to compromise some telecom worker, so that this can be orchestrated.
This problem isn't a technical one..it's a process issue. One person shouldn't be able to transfer $25m without multiple people authenticating and authorising.
It wasn't just a fake call, and he had a paper trail of the order...at this point it's pretty hard to prevent this from happening, short of having every order double checked by some other independent entity.
If an employee routinely receives email or zoom instructions to transfer $25m without any sort of sign off then the company is completely at fault for terrible process.
Most non-enterprise companies have fairly loose wire protocols. That said, outgoing phone calls to two separate signers is a good, simple best practice.
Same deal for the call as well. I'd expect the video client to warn that some members of the call are external to the organization (Google Meet does that). Or the CFO is expected to be external (from another org) from the get-go.
> Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.
That's how I almost lost £100k. I got an email from my lawyer instructing me to pay an amount that I was expecting to have to pay, but to the wrong account. The email "from:" was definitely my lawyer's email address. It satisfied Gmail's spoofing checks. But it was not my lawyer who sent it.
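This is the trap with relying on spoofing checks: passing them only proves the message came from a server allowed to send for that domain, which a compromised mailbox does by definition. A toy sketch (the message and header values are made up) of inspecting the `Authentication-Results` header a receiving provider stamps on mail:

```python
# Parse the Authentication-Results header from a (fabricated) message.
# All three checks pass -- exactly what you'd see if the lawyer's real
# account were compromised and used to send the fraudulent instruction.
import email

raw = """From: lawyer@example-firm.com
Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass
Subject: Completion funds

Please send the balance to the new account below.
"""

msg = email.message_from_string(raw)
results = msg.get("Authentication-Results", "")
# First segment is the verifying host; the rest are check=result pairs.
checks = {part.split("=")[0].strip(): part.split("=")[1].strip()
          for part in results.split(";")[1:] if "=" in part}
```

SPF/DKIM/DMARC authenticate the sending infrastructure, not the human, which is why out-of-band confirmation of the account details is still needed.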
The funny thing is, I ask them to say "I don't know" rather than the above, but they still do it...
You can work around it by picking a difficult practical problem from your domain and talking through choices and their different tradeoffs.
This is an obvious and natural evolution of the kinds of attacks that have existed for years. It was bound to happen eventually. I think it's just sooner than people expected.
Two possibilities: 1) a multi-person Zoom of deep fakes fooled the worker, or 2) the worker was in on it as an inside man and the deep fake story is cover.
Don’t write a check unless you hear me mention aardvark or Mad King Ludwig.
Finance worker pays out $25M after vid call with deepfake CFO
Edit: maybe not zoom