Regardless of the sophistication of the deepfake, surely this rings huge alarm bells, right? I'm not even sure I'd be comfortable making secret transactions on instruction from my boss. Even if your boss is actually asking you to do this, how can you have the financial authority to transfer $25M and not the savvy to realize that being asked to transfer huge amounts of money in secret is going to result in you getting thrown under the bus?
For example, that they've just closed a deal to buy a startup - a negotiation which was of course conducted in secrecy. It's a startup in another country, which is why we're all out of the office. Timezones are why you've received the request outside of normal working hours. And we've got to, um, close the deal so we can announce it outside of stock market opening hours, for both countries. To close the deal we've got to pay 10% of the 250M purchase price upfront. If you can't get this done within 2 hours the deal will fall through.
It's a company with revenues of a couple of billion that probably subcontracts to thousands of other companies on projects around the world. The finance department is probably sending similar payments regularly.
Most payments will be "secret" in that the amounts won't be made public to employees who don't need to know. The company may, for example, be repeating work that has already been done in house, so it doesn't want it known in-house which companies are being paid.
Hell, I do this if our tester hasn't managed to go over some aspect of our release. That way I get it in writing from the product owner that he has OK'd it, and if he sends me a Teams message I ask him to email me confirmation.
Electronic engineers spent decades overcoming thermal noise floors so that humans could communicate over vast distances with small amounts of energy.
AI researchers, in a few short years, undid all that by making computer-generated chatter and images indistinguishable from messages sent by humans.
Until such a time as we live in a Blade Runner-like world of replicants, being in person will be the only reliable way to convey a message from human to human.
I'm long on travel and in-person meetings, short on VR and telecoms.
I am not in the finance department, but in software engineering and operations, two-party controls are everywhere. I can't check in code without reviews. I can't access production systems or make changes without approval from another team member. I would think that similar processes could be put into place for transferring tens of millions of dollars.
In other words, there are ways to deal with this that don't come down to "mistrust all technology and revert to face-to-face meetings and handing cash to each other".
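The two-party controls described above translate pretty directly into software. A minimal sketch (all names hypothetical, not any real bank's API): a payment request that cannot execute until someone other than its requester signs off.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A payment that needs dual control: a second, distinct approver."""
    requester: str
    payee: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Self-approval is rejected: the second party must be a different person.
        if approver == self.requester:
            raise PermissionError("requester cannot approve their own payment")
        self.approvals.add(approver)

    def can_execute(self, required_approvals: int = 1) -> bool:
        # Executable only once enough independent approvals are recorded.
        return len(self.approvals) >= required_approvals

req = PaymentRequest(requester="alice", payee="new-vendor-123", amount=25_000_000)
print(req.can_execute())  # False: no second party has signed off yet
req.approve("bob")
print(req.can_execute())  # True
```

The same pattern as a mandatory code review: the gate isn't trusting any one channel (email, video call), it's requiring a second human with independent context.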
Then I'd set up a short-notice multi-way meeting between the target, the CEO and the hacked account. The deepfake 'CEO' then turns up with no alarms raised, except one wrong name - easily dismissed as a glitch, or an assistant having booked the meeting.
I took the GP comment to mean truly in-person, like face-to-face across a table.
That's a false dichotomy. "computer-generated chatter and images" ARE messages sent by humans. There are no cases of computers having agency known to me yet. The root of the problem is humans who lie and mislead. Now they merely have more avenues to do so. In the same vein, you could blame the electronic engineers for allowing people to lie quickly and over vast distances.
I mean - we have authentication for bank accounts, why wouldn't that be demanded for transactions like this? Without proper authentication of the authorities there's no way that a transaction like this should be put through.
And I have to mention, Tesla robots are way behind the competition, it's not even clear if their robot does anything really on its own, given how much they fake their videos of it with "creative" editing.
Just an inside job.
If a large company allows a single employee to transfer millions to a new bank account/vendor with no history, on "their belief" that the instruction came from an approved person (e.g. their boss, the CFO, etc.) - that company has major governance issues that are not related to deepfakes.
Imagine the simpler scenario - an employee knowingly transfers millions, fraudulently, to some people they are working with. They then simply supply some "deepfake" pictures and a story about how it was an accident - and boom, you walk away with millions.
Checks and balances exist for many reasons - a deepfake doesn't overcome them by itself. This company is just missing basic steps that would have protected it here.
Edit: in fact, it's even more obviously an inside job; put the deepfake aside for a moment. How was the meeting even booked? Their PR person said "none of our internal systems were compromised". So this meeting magically appeared in someone's calendar? Using their internal video system (Skype or Teams or whatever)? And the criminals knew to target this person, with enough knowledge of random office people to deepfake them? Come on...
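The "basic steps" this comment alludes to are mostly mechanical checks that hold a payment for out-of-band verification instead of letting one person's belief push it through. A toy illustration (thresholds and rule names are made up for the sketch):

```python
def flags_for(payment: dict, known_payees: set, limit: float = 100_000) -> list:
    """Return the reasons a payment should be held for manual verification.

    A payment to a brand-new payee, or one above the single-approver limit,
    gets flagged rather than executed - regardless of who appeared to ask for it.
    """
    flags = []
    if payment["payee"] not in known_payees:
        flags.append("new payee with no payment history")
    if payment["amount"] > limit:
        flags.append("amount exceeds single-approver limit")
    return flags

# The scenario from the article would trip both rules:
print(flags_for({"payee": "hk-account-9", "amount": 25_000_000}, {"acme-corp"}))
```

Neither rule knows anything about deepfakes; they just make "urgent, secret, huge, new account" structurally impossible to satisfy alone.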
I hate discussing deepfakes. I'm one of the original patent holders of automated actor replacement technology. I developed it for personalized advertising, after having been an actor replacement specialist on a bunch of VFX films you probably saw.
I spent from 2002 to '08 creating a VFX pipeline, with global patent protections, and an ethical guidance that included public education on this fundamentally new technology. Long story short, I needed financing, went to VCs and angels, and they were perfectly willing to fund a porn company, but not what I'd planned: an ethical rollout of a sensitive and very powerful technology with many legs that few realize even today.
By '13 I was bankrupt, burned out, and one of my tech partners, a global leader in facial recognition, hired me. That's a different story. Actor replacement technology is a fundamental capability with applications far more important than fraud and pornography. But our civilization is far, far too immature to realize any of them.
Well, yes, that's kind of what the rest of us have come to expect from the industry. Ethical rollout is always going to take a back seat to raking in as much money as possible. I'm slightly surprised they were willing to touch porn though, not for "ethical" concerns but because it's treated as radioactive by payment services.
The UI really could stand to be more assertive about what they mean though.
The bank workers are normally quite understanding - except when it's someone from fraud detection (and yes, these are legitimate calls), and they tend to get oddly defensive that I won't hand out my personal information.
Ideally they screen-record (can you do that on Android/iPhone?), so at least if it's really a scam, they can say "but I followed protocol, here's the evidence".
Btw we once had a similar scam attempt. "The CEO" emailed Finance in great urgency to transfer money. Good thing the CEO was sitting next to the Finance lady. I was sitting next to them watching the horror turned comedy.