Deliver your site only to the "inner browser" (that the user has no control over because it's heavily obfuscated and tricked-out with anti-debugging code) and you eliminate all ad blockers. Throw some DNS-over-HTTPS w/ certificate pinning in for good measure and you kill DNS-based ad blockers too.
Accessibility will be a challenge but if it sells that'll get "fixed".
(I think this idea is evil, BTW, but somebody is going to do it.)
Edit: As an aside this needs to go here, too. https://www.destroyallsoftware.com/talks/the-birth-and-death...
Doesn't matter either way. Can't wait for AI-powered ad blocking. Just imagine it. AI parses content and filters out ads, brands, even subtle PR text from pages automatically. Not just textual content either. It also kills ads in audio, video, images.
If I can imagine it, it must be possible. I'm sure someone much smarter than me will create this at some point. Perhaps this comment will inspire that person.
I'm not against ad-blocking but this seems a bit too Big Brother-ish.
Such a person is the very last creature I want parsing every byte of content delivered to me. In order to make this service a reality, it must run locally.
What we need is data poisoning. Have the AI watch ads and spoof responses while we watch ad-free content. Run it like SETI, during our devices' downtime. They'll try to raise fraud concerns. Understandable. Accusations of piracy? Certainly possible. Convictions, though? Probably not.
It would seem just as likely that browser producers would add AI "watchers" to the browser to make sure you are not using any ad blocking! AI doesn't see any ads or marketing copy for 30 mins? Sorry, browser temporarily unusable... unless you have a business account, then no ads for 8 hours.
I have no doubt that current AI is capable of this (in fact I'm sure it would be trivial for GPT-3.5 and well within reach of even locally-run LLMs), the question is what fraction of users can be bothered -- and I don't see any reason why AI would lead to an increase there.
I'm confused how the "inner browser" meaningfully helps you accomplish this. How is this any easier or more effective than just having a website that hosts its own advertising assets (or proxies them) and obfuscates/randomizes its DOM structure to make ads difficult to target with simplistic ad-blocking rules?
Right now it is not hard to write an ad blocker: just block network requests and DOM elements (regardless of who hosts or proxies them).
A browser in a browser would make blocking DOM elements as hard as attaching a debugger to a hardened binary and manipulating it.
Network requests might still be blockable, but it is going to make ad blocking harder.
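To illustrate why the current situation is "not hard": the core of a blocklist-based ad blocker really is just URL filters plus element hiding. A minimal sketch (the filter patterns and selectors here are hypothetical, not from any real blocklist):

```javascript
// Hypothetical, minimal blocklist-style ad blocker core. Real extensions
// built on the webRequest / declarativeNetRequest APIs work on the same
// principle: match outgoing request URLs against filter patterns, and hide
// matching DOM elements with CSS selectors.

const urlFilters = [/doubleclick\.net/, /\/ads\//, /adserver/];
const elementSelectors = [".ad-banner", "#sponsored", "iframe[src*='ads']"];

// Decide whether a request should be blocked.
function shouldBlock(url) {
  return urlFilters.some((pattern) => pattern.test(url));
}

// In a browser context, hiding elements is equally simple:
function hideAdElements(doc) {
  for (const selector of elementSelectors) {
    for (const el of doc.querySelectorAll(selector)) {
      el.style.display = "none";
    }
  }
}
```

Once the page is a canvas full of pixels rendered by an inner browser, neither half of this has anything to hook into.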
> How is this any easier or more effective than just having a website that hosts its own advertising assets
Advertisers REALLY don't want you to do that, because it's far too easy to cheat.
I don't know how we got from "don't download and install random software from untrusted sources on your devices" to "let anyone with a website run code directly on your hardware. Sandboxes are impossible to breach!"
I get that it's cool tech and the promise of writing software to run on many different platforms is exciting, but from a real world/user perspective it's insane.
It hasn’t been the end of the world, but it hasn’t been great either.
It's not just an advertisement and a viewer; it's also the bots.
Adtech is where it is now not just because it wants you to see the ad, but because an industry of faking viewership built up around it.
No other advertising medium has really had to deal with ads being bought on a per-viewer basis.
All the targeting tech is as much a response to "personalization" as it is to "fraudulent botters".
You can then understand that if ads reverted to the old static-billboard or TV-commercial state, there'd probably be little incentive to harass the user.
It's also the new Flash, since like Flash it's just a bucket of pixels. Say visionOS comes out and its browser has tweaked all the HTML form elements so they work well with finger gestures in the air; this page, being just a block of pixels, will have the wrong interface for the device.
Every time there's an article about some kind of "Here's a really simple core to build stuff from scratch" technology people seem to get really excited.
I was hoping we'd be going the other way and building web tech into OSes!
The far more likely way we'll see pushback against ad blockers is by simply detecting that an ad did not play and then refusing to display content until it does.
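The mechanics of that gate are already trivial with standard APIs. A hypothetical sketch (function and variable names are mine, not from any real site), using the HTMLMediaElement `ended` event:

```javascript
// Hypothetical ad gate: keep the article hidden until the ad video fires
// its "ended" event. The ease of building this is exactly why detection-based
// pushback is the likely direction.
function gateContent(adVideo, article) {
  article.style.display = "none";
  adVideo.addEventListener("ended", () => {
    article.style.display = "";
  });
  // Naive countermeasure against the user seeking past the ad:
  adVideo.addEventListener("seeking", () => {
    adVideo.currentTime = 0;
  });
}
```

Blockers that simply delete the ad element would then also delete the only event that unlocks the content, which is the whole point.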
https://plato.stanford.edu/entries/goedel-incompleteness/sup....
I would pay for such a service.
Eventually, AI will be something we can run locally on a typical desktop, or even a cell phone, and at that point we could locally host that kind of ad blocking, but trusting all of your traffic to some random company that promises to delete ads but not abuse their position seems naive given our situation today. It preserves the worst dangers of ads while adding even less control and transparency for the user.
But even without WASM, there is the “TikTok strategy” of pushing bytecode to a self-made interpreter that frequently changes its semantics.
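A toy version of that strategy (the opcode numbering and names here are entirely made up for illustration): the server ships both the bytecode and the opcode table, and can reshuffle the numbering on every deploy, so a static analyzer has nothing stable to pattern-match.

```javascript
// Toy stack-machine interpreter whose opcode numbering is supplied by the
// server alongside the bytecode, so the "semantics" can change per payload.
function makeInterpreter(opcodeTable) {
  // opcodeTable maps numeric opcodes to operations, e.g. { 7: "push", 3: "add" }
  return function run(bytecode) {
    const stack = [];
    let pc = 0;
    while (pc < bytecode.length) {
      const op = opcodeTable[bytecode[pc++]];
      if (op === "push") stack.push(bytecode[pc++]);
      else if (op === "add") stack.push(stack.pop() + stack.pop());
      else if (op === "mul") stack.push(stack.pop() * stack.pop());
      else throw new Error("unknown opcode");
    }
    return stack.pop();
  };
}

// Two deployments of the "same" program with reshuffled opcode numbers:
const runA = makeInterpreter({ 7: "push", 3: "add" });
const runB = makeInterpreter({ 1: "push", 9: "add" });
// Both compute 2 + 5:
const a = runA([7, 2, 7, 5, 3]); // → 7
const b = runB([1, 2, 1, 5, 9]); // → 7
```

Anyone trying to intercept the program has to re-reverse the interpreter every time the table changes.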
Your proposal, while feasible, turns this into a static linking affair. This comes with many risks, like becoming complacent and not updating the "inner browser" due to browser incompatibilities and bugs. It creates a giant mess of dependency update hell if you aren't regularly updating the webview.
At that point, you might as well ship a desktop app that does something similar and proxies ads through the first party server since it's probably less work.
You are right that it would work. I just don't know if going to such lengths is required to achieve the same thing.
A good counter is that a desktop app could be exploited to alter the behavior via reverse engineering. But a browser would show you the WASM as well, so I'm sure you could reverse engineer it and alter it with an extension like a traditional binary.
Maybe I'm missing something though - I'll admit that I'm an advocate of WASM but don't keep super up to date on all advancements.
The user doesn't want to download an app, since that has a lot of friction. Going to a URL and waiting (even if the download time is the same) feels like less friction, and so this idea of shipping a black box is more desirable.
The only real problem is the jankiness of any non-browser controls. The user expects a good right-click context menu, keyboard navigation, scrolling, etc., which would all have to be implemented if you're compiling a native app to WASM. But if the app itself is just HTML+JavaScript, then shipping a WASM browser and using that browser as the app layer solves all of that UX jankiness (since the user should not really be able to tell it's a WASM browser).
The idea is insidiously bad for user freedom, but great for businesses like Google (which want to control the user space completely).
Like a reverse proxy inside the browser, but with a server component.
When I first heard about their idea I couldn’t really comprehend it, and actually I still find it hard to understand. But they think it will be big!
Inasmuch as any app, e.g. a video player, can include DRM or otherwise lock things down.
This is feasible, but it's unclear how successful it would be -- it would just start an arms race to hack it.
For now.
Now, maybe that's just a potential usage that trust-me-bro it'll never actually try to use or access in RAM or on disk... but how on earth is that number for a chat client so much bigger than either Firefox with 100+ tabs or even Java-based IDEs like Webstorm?
“Big number is scary” is not a good way to understand performance.
[1] https://foldoc.org/Big+bag+of+pages
[2] https://github.com/google/sanitizers/wiki/AddressSanitizerAl...
> So I started making a browser engine (for fun) a few days ago, it felt kind of inevitable so here we are
And I got to admit, it is pretty neat.
> The RoveMax Model 3 was developed in total secrecy by Kerbal Motion's R&D team over the course of a year and a half. When it was finally revealed to the company's chairman, he stared in shock, screamed 'WHY', and subsequently dropped dead on the spot.
The reaction is about the same, anyway.
To make the web entirely like a TV, everything should be rendered on canvases. To let you truly deploy your org chart, each team should be responsible for one isolated canvas.
> But making a new browser engine is impossible!
No. It. Isn’t! (Also I don’t really care how possible/feasible something is.) It's just how the universe works.
I'd put my money on C, almost everything is bootstrapped from it.
Forth is another candidate if you had to build everything yourself.
"Yo dawg, I heard you like browsing, so we put a browser in your browser so you can browse while you browse"
Aim: fun and perhaps learn something about how layout works.
You can get surprisingly far quite quickly with JavaScript and <canvas>
At least for mine, I got text wrapping and scrolling working :)
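For anyone curious, the text-wrapping part really is only a few lines. A rough sketch of greedy word wrap (the function names are mine; a real canvas engine would pass `(s) => ctx.measureText(s).width` as the measure function):

```javascript
// Greedy word wrap: keep adding words to the current line until the next
// word would overflow maxWidth, then start a new line. `measure` returns
// the rendered width of a string.
function wrapText(text, maxWidth, measure) {
  const lines = [];
  let line = "";
  for (const word of text.split(/\s+/)) {
    const candidate = line ? line + " " + word : word;
    if (measure(candidate) <= maxWidth || !line) {
      line = candidate; // fits, or the single word is itself too wide
    } else {
      lines.push(line);
      line = word;
    }
  }
  if (line) lines.push(line);
  return lines;
}

// With a fixed-width "font" of 1 unit per character:
const lines = wrapText("a browser in your browser", 10, (s) => s.length);
// → ["a browser", "in your", "browser"]
```

Each returned line then just needs a `ctx.fillText(line, x, y)` call with `y` advanced by the line height, and scrolling is an offset on `y`.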
Second: I'd love to be able to see it access Wikipedia!
(Wikipedia should be a fairly "low-hanging fruit" in terms of functionality that must be implemented to achieve compatibility -- if I were writing a web browser, I'd always start with Wikipedia compatibility first, then once that has been obtained, move on to more challenging, technologically complex sites... But that being said, Wikipedia is probably a bit more complex these days than when it was first implemented... still, it would be a great "win" to be able to browse Wikipedia with it!)
Anyway, wishing you great luck with your browser!
1. Wikipedia changes so rarely that it's almost considered sort of a misfeature, so it's hardly a moving target the way that other stuff would be
2. Even despite the breadth of subject matter, most pages are fairly similar, and the more exotic stuff will be used infrequently via templates and confined to supplementary items like infoboxes
3. Your bang-for-buck once you have a passable implementation will be enormous (not to mention: doesn't outright contribute to the decay of society), in contrast to e.g. a social networking site where a quarter of the world uses it and 3/4 either revile it or are indifferent to it when stacked up against their preferred platform
One thing to consider is that the general look of unstyled pages that is common to almost all browsers isn't actually prescribed by any spec (browsers are free to come up with their own UA style sheets), so anyone developing a new browser should just make unstyled content look by default like what it would look like under reader mode. Pretty crazy that mainstream browsers haven't changed the way unstyled content is presented.
The linked page has a screenshot of the engine. You have to click through to get the full viewport experience (with keybindings).
Safari is the new IE, in the sense that IE was in the late 2000s/early 2010s: it's the browser we would like to ignore because of how weird it is compared to the browser we normally target, and because of how stubborn the vendor is in neither acting like the majority browser nor giving up and adopting a different engine (or, in the case of iOS, even allowing a different engine to run).
At some point maybe the web dev world will understand that, sometimes, you just have to live with the fact that there is no one true best solution, and instead build tools that are built with cross-platform in mind (like Qt, SDL, etc.).
Yup, looks like canvas for sure, can't copy-paste anything and all the lines are clipped because it's not even conforming to my window size.
I think we all have newer versions than 1.5, so no problem... right? ;)
Lovely opening quote! And, given the high bar set by my own hobby-project procrastination, allow me to wonder when exactly the "few days ago" were?
Great project
Something about coming full circle, but that full circle is inside of a dumpster fire.
Also, I love this. This is fine.
Goodbye World wide web, it was nice knowing you.
Text selection doesn't work. I guess it's rendered to canvas... but would there be any way to make this work?
I get a white and black page, with <shadow> as the title.
> As with all my recent projects, the name is because …
This makes me whisper the name in my head
It's not like one of those browser engines that are promising but not quite there yet, like NetSurf, Servo or Ladybird.
Man, this thing was small and fast
It doesn't load in (updated) Firefox, which is interesting. It only shows that the FPS is around 60. Not as many extensions (vs. LibreWolf). I rarely use FF (opting for LibreWolf).
On Brave (no extensions at all, except built in protections), the site runs at ~121FPS and the fonts look normal enough (no zoom, page at 100%). Fonts continue to look fine even when I zoom to 150% or higher.
I expected that my CPU temp would increase, maybe the fans might kick in... nothing. Cool as a cucumber. I have 4 browsers all opened on this page.
Interesting project. I read what @EvanAnderson wrote (evil software). I tend to agree. Respectfully, I don't see how this would be used. Browsers that already run JS don't need this. Browsers that need this usually don't run JS (it fails on Dillo, NetSurf, and Lynx, for example).
As the post says, it's just for fun :)