Imagine an AI (e.g. a wearable) that is always integrated into your daily life. It sees what you see and hears what you hear.
Do you think this will be the future — that AI will be more integrated into your daily life? Or is humanity not yet ready for something like that? VR glasses are also sometimes very polarizing when it comes to data protection and privacy.
I’m genuinely curious where you draw the line. Because in practice, many of us are already surrounded by passive listening systems, even if we don’t actively use them ourselves.
I keep all of that stuff disabled.
> Or when you're visiting friends who have an Alexa device at home. Would that already be a problem for you?
Yes, it is, although I only have one friend who has such a device. I tend not to spend much time at their place. If someone I knew had a wearable that was analyzing/recording everything, and they refused to remove it, I'd minimize the amount of time I'd spend with them.
> I’m genuinely curious where you draw the line.
"Drawing the line" isn't really the way I'd put it. As you say, we're surrounded by surveillance devices and there's little I can do about it. All I do is avoid them wherever it's possible for me to do so and be sparing/cautious about what I say and do when I'm near surveillance that I can't avoid.
Whereas my life is mundane shit that, most of the time, I don't even need the current generation of tech anywhere near. Walking the dog. Playing with and looking after my kids. Everyday conversations and intimacy with my wife. Barbecues with friends. Work.
And these guys' lives are just working out, coding, and cooking on-trend dishes with expensive cookware, all to be relentlessly optimised.
For instance I've never brought my camera to a funeral. Most daily life deserves the right to be forgotten.
Then there are privacy laws, etc.
You're going to capture hours of walking and/or seemingly doing nothing, exchanging pleasantries/small-talk/banter. Without access to my thoughts, this is stuck in some superficial layer -- useless other than to maybe surface a reminder of something trivial that I forgot (and that's not worth it). Life happens in the brain, and you won't have access to that (yet).
Curious though: if there were a way for an AI to understand your thoughts, would that even be something you’d want? Or is the whole concept off-limits for you?
It's an interesting question -- I've thought about it a lot in the context of some hypothetical brain interface. There are a lot of unknowns but I personally would go for it with the very hard constraint that it be the equivalent of read-only (no agents here) and local (no cloud).
As potentially scary as it seems, I would not be able to fight the temptation to participate under those conditions. It would make elusive thought a thing of the past.
If it was networked, it would need to have much tighter security than the current internet.
If it was just a terminal to some corporate server running unknown software for purposes I wouldn't necessarily agree to, nope, nope, nopity-nope. Even if it didn't start off as a device for pushing propaganda and advertising, there's no realistic expectation that it wouldn't evolve into that over time.
BigTech has burned so much goodwill at this point that every new venture just feels like a timer ticking down to a bait-and-switch for ad revenue, subscriptions, pro features, or just selling our data to the highest bidder.
And what happens to 'local' data when the three-letter agencies want access? No thanks, sounds completely dystopian. If the data is there, someone will find a way to abuse it.
That's why my entire architecture is being designed differently, based on two principles:
- Fully Functional Offline: Three-letter agencies can't access data that isn't on a server in the first place. The core AI runs on-device.
- Open Core: You're right to expect a "bait & switch." That's why the code that guarantees privacy (the OS, the data pipeline) must be open source, so the trust is verifiable.
My business model is not to sell ads or data. I'm trying to design a system where trust is verifiable through the architecture, not just promised.
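To make the "fully functional offline" principle concrete, here is a minimal sketch of what such a pipeline could look like. Everything here (the names, the storage layout, the trivial stand-in summarizer) is my own illustration, not an actual implementation: the point is that no networking module is imported anywhere, so captured data cannot leave the device by construction, and raw captures are distilled and discarded on-device.

```python
import json
from pathlib import Path

# Hypothetical offline-first pipeline: raw captures are summarized
# on-device and only the summary is persisted locally. No network
# module is imported, so data cannot leave the machine by construction.

LOCAL_STORE = Path("./device_store")  # on-device storage only (assumption)

def summarize(transcript: str) -> str:
    """Stand-in for an on-device model: keep only the first sentence."""
    return transcript.split(".")[0].strip() + "."

def process_capture(capture_id: str, transcript: str) -> Path:
    """Persist only the distilled summary; the raw transcript is dropped."""
    LOCAL_STORE.mkdir(exist_ok=True)
    summary_path = LOCAL_STORE / f"{capture_id}.json"
    summary_path.write_text(json.dumps({"id": capture_id,
                                        "summary": summarize(transcript)}))
    return summary_path

path = process_capture("2024-06-01-walk",
                       "Talked about the dog's vet appointment. Then small talk.")
print(json.loads(path.read_text())["summary"])
```

Verifiability would then come from the open-core principle: anyone can read this code and confirm there is no code path that transmits data off the device.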
Personally I would consider it a moral imperative to refuse to use such a device and to avoid anyone who does otherwise.
So no, please don't create such a thing. Stop now.
That said, I often think about how this tension applies to nearly every new technology. Most tools can be used for good or bad, and history shows that progress tends to happen either way. If we had refused to develop technologies simply because they could be misused, we might not have any at all.
I do believe it’s possible to build responsibly through transparency, local-first design, and strong legal safeguards. The EU’s data protection laws, for example, give me some hope that we’re not entirely defenseless.
Do you see this kind of outcome as something we’re tangibly heading toward, or more as a warning of what could happen if we’re not careful?
Definitely won't trust AI shackled to other humans.
There's one thing AI can't do, and that's actually care about anyone or anything. It's the rough equivalent of a psychopath. It would push you to a psychotic break with reality with its sycophancy just as happily as it would, say, murder you if given motive, means, and opportunity.
It could also help me use my time better. If it knows what I’ve been doing lately, it might give me useful tips.
So overall, more like a coach or assistant for everyday life.
Have you read about procrastination/resistance? The issue is not an absence of nagging but unresolved emotions, burnout, etc.
http://www.duntemann.com/End14.htm
Elon Musk's portable-Grok-thing is a long step toward the jiminy idea.
Notwithstanding that most of the mobile OS’s are locked down more than some would prefer for a “general purpose computer” (but less than is likely for a porta-Grok), and that most devices are bigger than a matchbook to support UI that wouldn't be available in that form factor (though devices are available in matchbook size with more limited UI), and that it mostly uses RF like Bluetooth instead of IR for peripherals because IR works poorly in most circumstances, isn’t that what a smartphone is?
I think that there is some limit to how much additional information is useful to the AI tools that I use. I don’t know where that limit is and I also think that models are getting better all the time, so storing the data now for later use might be useful.
I have no idea how much it would cost to store and analyze 14-18 hours of data a day. I'm assuming it could be post-processed to delete the useless stuff?
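For a rough sense of scale, here is a back-of-envelope estimate. The bitrate and transcript figures are my own assumptions (Opus-quality speech audio at 24 kbps, ~150 spoken words per minute), not numbers from the thread; video would be orders of magnitude more.

```python
# Back-of-envelope storage estimate for 16 hours/day of captured audio.
HOURS_PER_DAY = 16
AUDIO_KBPS = 24  # compressed speech bitrate, e.g. Opus voice (assumption)

audio_bytes_per_day = HOURS_PER_DAY * 3600 * AUDIO_KBPS * 1000 // 8
print(f"audio/day: {audio_bytes_per_day / 1e6:.0f} MB")        # ~173 MB

# Transcript only: ~150 words/min, ~6 bytes/word (assumptions)
transcript_bytes_per_day = HOURS_PER_DAY * 60 * 150 * 6
print(f"transcript/day: {transcript_bytes_per_day / 1e3:.0f} KB")  # ~864 KB

print(f"audio/year: {audio_bytes_per_day * 365 / 1e9:.0f} GB")  # ~63 GB
```

So raw compressed audio for a year fits on a cheap SD card, and a text transcript is essentially free to keep; the real cost is in the analysis, not the storage.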
Obviously I understand the privacy zealots' issues with this technology. But I'm going to be dead in a couple of decades, and this idea sounds interesting to me. Whatever risk there is would be worth the unknown reward.
Just curious: what would have to change for you to even consider it? Is it more about the concept itself, or the way it's implemented?