Sort of. They can't change plain text, but modern emails often include vast swaths of remote content. When you open the message, it retrieves the relevant assets directly from whoever sent the email. That remote content is not permanently stored. It's cached for a bit and will not be re-used if the email is opened months or years later.
If those assets disappear or are changed, there's very little any email provider can do about that.
Absolutely bonkers.
"Because of the dynamic nature of AMP messages, the content displayed in Gmail messages can change as time passes." https://support.google.com/a/answer/9709409?hl=en
And on the one hand, it's cool as hell to see your email update itself to show tracking progress.
On the other hand, just send me a new email. It's fine, I promise.
The benefit of this is that senders couldn't treat the fetch as a read receipt, because the provider can state "our infra performs this operation on the user's behalf for immutability purposes," similar to other email operations that proxy these requests for privacy reasons.
It could be anywhere, which is another knock against HTML email.
Which is why plain-text email is still king, and still widely used.
Apple’s private loading feature also shows how that could be fixed: the mail server retrieves the referenced content once and saves it, so you’d always know what was served at the time the message was sent.
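A sketch of that fetch-once idea, with hypothetical names and a dict standing in for durable blob storage: at delivery time the provider downloads each referenced asset, stores the bytes under a content-addressed key, and rewrites the message to point at the archived copy, so the sender can no longer swap the content out later.

```python
import hashlib
from typing import Callable, Dict, List

def archive_remote_assets(
    urls: List[str],
    fetch: Callable[[str], bytes],  # HTTP client, injected so it can be faked
    store: Dict[str, bytes],        # stand-in for durable blob storage
) -> Dict[str, str]:
    """Fetch each remote asset once at delivery time and keep it.

    Returns a mapping from the original URL to a stable content-addressed
    key that the provider could substitute into the rendered message.
    """
    rewrites = {}
    for url in urls:
        blob = fetch(url)
        key = hashlib.sha256(blob).hexdigest()
        store.setdefault(key, blob)  # idempotent: same bytes, same key
        rewrites[url] = f"archived://{key}"
    return rewrites
```

Because the fetch happens once from the provider's infrastructure rather than from the reader's client, it also can't serve as a read receipt.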
> For our staff, we encourage understanding the tools that exist in the world, and how to use them safely. Our policy makes it clear that any use of tools, including tools with AI in them, must follow clear privacy-preserving principles:
> Data Protection: All data protection, confidentiality, and privacy policies must be followed (our vendors for things like anti-abuse and support are moving towards using AI for translation, categorization, abuse detection – and we are ensuring that their policies continue to provide protection for our customers)
> Accountability for work: Any AI generated writing or code must be reviewed and understood by a human being, and go through our regular second-set-of-eyes processes before being used
> Bias awareness: Actively look for biases or hallucinations in AI output
> Human authority: Always have a path for appeal to a human from any decision that is made by automated tools

I gotta be honest, this scenario is not a concern that impacts my choice of email provider.
The cameras used to document "news" will need to be watermarked, fingerprinted and authenticated, like what Canon and Nikon are already doing (and which AFP has already adopted).
It may have seemed gimmicky at first, but in a year or two, you'll probably only be able to trust visuals from companies that do this (wire agencies like AFP, AP and Reuters are heavily disincentivised to create fake news anyway but that's another topic).
At a certain level, I imagine social media apps will also encourage direct camera-to-post for documentation/videos of reality, since this will be the only end-to-end method to verify an image was created unaltered. I can imagine a world where, if you film a protest through the Instagram app, you'd get some kind of "this is real" badge on it, whereas if you upload a video, it gets treated as "could be AI" like 99% of all future content.
A lot depends on watermarking at source and the social media platform using that to make a clickable/hard watermark
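To illustrate what watermarking at source buys you: real schemes like C2PA use public-key certificates provisioned into the camera, but since Python's stdlib has no asymmetric crypto, the sketch below stands in an HMAC with a per-device key. Everything here is hypothetical; the point is only that any edit to the bytes after capture invalidates the signature.

```python
import hashlib
import hmac

# Placeholder per-device key; a real camera would hold a certificate
# chaining to the manufacturer, not a shared secret.
DEVICE_KEY = b"hypothetical-per-device-key"

def sign_capture(image_bytes: bytes) -> str:
    """Camera side: sign the image bytes at the moment of capture."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    """Platform side: a valid signature over unmodified bytes is what
    could back a "this is real" badge; any edit breaks it."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Note this only proves the sensor produced those bytes, not that the scene in front of the sensor was real, which is exactly the screen-recording loophole discussed below.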
This is a bigger threat than phony AI videos.
One of the most common forms of submissions on Reddit/Twitter is an image with text, or a screenshot of a tweet, or a screenshot of a headline that makes a claim, and everyone takes it dead seriously.
Almost nobody is going "hmm let me look this up first to see if it even exists or accurately represents the facts".
So if all you need is an image of text for people to believe it, what does it even matter if you have this sophisticated system where you require photos to be signed by camera hardware or whatever? You aren't even putting a dent in how bullshit spreads.
This narrows the pool of possible bad actors down to just one: the platform itself.
In any case, the audience will have to learn new ways to "trust", and tech alone won't be the solution. But I've less hope in people and more hope in new social contracts.
On a side note, I think LIDAR sensors would be useful for verifying the depth information in an image.
On Instagram? The website owned by that guy who loves AI slop and wants to fill your feed with it? That Instagram? Yeah, doesn’t seem likely.
https://techcrunch.com/2025/09/25/meta-launches-vibes-a-shor...
https://fortune.com/2024/10/30/mark-zuckerberg-ai-generated-...
The next flaw is that cameras are happy to record screens playing AI-generated videos and mark them as authentic. Perhaps you can tell today because the screen pixels aren't perfectly 1:1 mapped to the image sensor pixels, but as soon as elections depend on being able to do that, those screens will exist.
People are saying to add LIDAR to prevent this "record the screen" hack, but a mirror over the LIDAR sensor and me sitting at a desk motionless looks to LIDAR exactly like the world leader I'm deepfaking sitting motionless at a desk. People are not using AI to generate amazing action shots.
At the end of the day, people will have to take some personal responsibility. Migrants probably aren't killing and eating pets. Pets taste terrible, and grocery stores that you can just walk into and steal whatever you want exist. There isn't a bed that can cure any disease. If someone claims one does, even a world leader, test them out on something non-critical. Break off a fingernail and see if the magic bed can regrow it overnight. If not, maybe stick to traditional cancer treatments until there is some clearer evidence.
It’s already possible. See the StageCraft studio they built for the production of the TV series The Mandalorian.
> shooting the series on a stage surrounded by massive LED walls displaying dynamic digital sets, with the ability to react to and manipulate this digital content in real time during live production
https://www.unrealengine.com/fr/blog/forging-new-paths-for-f...
> The StageCraft process involves shooting live-action actors and sets surrounded by large, very high-definition LED video walls. These walls display computer-generated imagery backdrops, once traditionally composited primarily in post-production after shooting with chroma key screens. These facilities are known as "volumes". When shooting, the production team is able to realign the background instantly based on moving camera positions. The entire CGI background can be manipulated in real-time.
But proving to others that an email hasn't been modified is a more difficult task. As I understand it, you'd need to retain DKIM keys for the signing server, to check that historical DKIM signatures verify correctly and the old message was not forged or altered.
Are DKIM signing keys issued in some kind of Certificate Transparency log, where you can verify whether a particular DKIM key existed for a particular domain in the past, in order to do this in general?
https://github.com/robertdavidgraham/hunter-dkim#but-gmails-...
EDIT: this one exists but is incomplete: https://archive.prove.email/about
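For anyone curious what that verification involves, the first step is recomputing the bh= body hash from the stored message; the harder parts are the header hash and the signature check against the DNS-published key. A minimal sketch of "relaxed" body canonicalization per RFC 6376, section 3.4.4:

```python
import base64
import hashlib
import re

def relaxed_body_hash(body: str) -> str:
    """Recompute a DKIM bh= value using 'relaxed' body canonicalization
    (RFC 6376, section 3.4.4) with SHA-256."""
    lines = body.replace("\r\n", "\n").split("\n")
    # Collapse runs of spaces/tabs to one SP, then drop trailing whitespace.
    canon = [re.sub(r"[ \t]+", " ", line).rstrip(" ") for line in lines]
    # Remove empty lines at the end of the body.
    while canon and canon[-1] == "":
        canon.pop()
    # A non-empty canonical body ends with CRLF; an empty one is empty.
    data = "\r\n".join(canon) + "\r\n" if canon else ""
    digest = hashlib.sha256(data.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")
```

If the recomputed value doesn't match the bh= tag in the stored DKIM-Signature header, either the body was altered or the message never carried a valid signature; either way, the "unmodified" claim fails.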
If you count the mail that Bayesian filters automatically categorize as spam, it's about 99% noise.
That's one of the things that sucks about current AI: it's deployed by people who are categorically opposed to using it to enhance privacy and filter advertising.
In fact, until recently email was sent and received in the clear, like a postcard; the whole system wasn't designed to be secure or secret in any way.
From article: "An email is your copy, and the sender can’t revise it later."
But as in all cases, you can only be truly sure no one is tampering if you don't give it to anyone else.
I had 16.5GB or so used up, so it was flashing red. When I paid for Gemini, my total space jumped to 2TB and my usage dropped to 12GB. Disgusting. So might as well switch to Fastmail. Not sure.
fastmail: read my lips: I pay you because you offer a traditional email service
if you add a single AI feature I will return to self hosting
I use everything I can to block trackers, spyware, etc., and have never been "Cloudflare blocked".
> Add a user to your billing plan to give someone their own Fastmail Inbox and login. Build your team, be it work or family, and share calendars, contacts and more. Give users extra addresses for free
The way it works in my account, I can add users, but they always have to have their own paid plan. Makes sense for heavy email users, but not so much for my partner or our kids. I was hoping there was a five-accounts-for-the-price-of-three deal like Spotify et al. offer.