It's work like this (Mess with DNS). This is the stuff: revealing, experimenting, inviting people in. Tech that illuminates & shows off, that is there to explain & help create understanding. This is the stuff; this is what keeps humanity powerful & competent & connected. Tech does a lot for us, but when it helps us become better, wiser, more creative people, when it reveals itself & the world: that holds a very dear place in my heart; it's the light & heat in a vast, cold, dark universe. I love this project. It's a capital example of revelatory technology, of enlightening technology.
The strength of humanity is teamwork, working together to build things other groups can build things upon. Abandon 100 random humans in the same jungle and they will build a town.
This is why I don't trust anybody who tries to tell me that human population growth is an actual problem and not just our rulers' fear of irrelevance.
Not arguing, just questions that came into my mind.
https://en.wikipedia.org/wiki/Lord_of_the_Flies
I'm not sure -- but I do think it would be interesting to see how that would turn out. Australia was founded in this sort of fashion. I think there's a bit more nuance though.
Teamwork is still the work of many individuals, and I think a person's upbringing & disposition & the capabilities they've developed are hugely influential on what kinds of teams are possible in the world. The world of computing today gives users interesting capabilities, but only shallowly, only on the surface; it denies us the view below, denies us the freedom to see, understand & explore, and keeping humanity so yoked restrains human growth, restricts one of our key better natures from getting a chance to come out & thrive.
Sure, we are not all going to learn how to build apartment buildings; we will take much for granted. But many people do learn some home repair, or try their hand at fixing appliances. Sometimes just to save some money, but sometimes because it's interesting, & because there are videos showing them how, because they can. But computer/information tech, in my view, has created a highly resistant, unrepairable, unviewable digitalia that is anathema to this basic human engagement with the world around us. It is not just a built environment, but a built environment which resists real understanding, which prevents human empowerment.
Creating an accessible world, one where humans have a strong locus of control, where they have flexibility & options to experiment, to play, to try, to explore, is absolutely capital to me. Humanity loses who humanity was when/if we view the world as prebuilt, as a creation of some wider us that we are but tiny figures upon. Yes, there are many things that we have to rely on groups for, but that ability to learn about the world, to understand it, to investigate & experiment in the pieces of it we so choose- that spirit is the lifeblood of this planet, and it's that attitude & disposition that produces highly functional teams & groups. Which is something we will, best I can tell, always need.
To speak to technology & its revelatory potential, to put it in scope here, I think it's important to review Ursula Franklin's dichotomies of technology. She divides tech into work-related & control-related: work-related tech helps individuals do things, control-related tech regulates systems. Going further, she divides tech into holistic & prescriptive technologies: prescriptive technologies break down work into fixed, predictable, deliberate steps & processes, while holistic technologies amplify the capabilities & prowess of the tool-bearer. There's a lot of tech on this planet, but even "creative" tech like a photo-sharing site is mechanistic in nature, follows limited & fixed flows, & affords only superficial control to its users. Whereas tech like Mess with DNS amplifies human understanding, gives us the power to explore & test out what is possible, lets us set our own rules. This world is in need of techno-spiritual healing: computers are widely used but rebuff understanding; they have become overwhelming elements of control rather than empowerment. I look forward eagerly to a shift, to revelatory technology that abides different ends, that seeks a holism. Mess with DNS is "just" a little playground for some tech, hardly an attractive application on its own, but I believe that individuals everywhere would be much better off- that teams would be much richer as a result- if tech worked to open up the engine bay & allow some monkeying around.
And I think that makes all the difference. I tend to believe very strongly in hands-on experience, & think that seeing things happen yourself & getting to play is by far the best way to learn, just incredibly surpassing.
There's a theory of education called Constructivism[1] that is broadly similar. Adherents include folks like Seymour Papert[2], creator of Logo & an employee at One Laptop Per Child (which I think is the most interesting & innovative software environment we've ever created, vastly under-appreciated). Projects like Logo are supposed to create that hands-on feedback, to make programming not just writing scripts & having programs run, but a way to see the code really execute, to create more interactive modes.
With software eating the world, it is so so so important to me not just to create knowledge, to tell tales of what software is, but to let people have the experience themselves. To create playgrounds to meddle, to mess around. I wish so much that applications could actually show & explain what they are doing, what's inside of them, could reveal their workings, but we're so far away from that Enlightened world, we've fallen into such deep shadows imo.
(Side note, I see things very differently, but I also am disappointed folks would downvote your perspective like this. As for the lack of knowledge/experience, I'd say that most engineers don't have familiarity because there's not a lot of opportunities to set up & learn systems work; most coders spend their time coding, not setting up bits of infrastructure to run code on. You yourself also say "writing the code is the easiest part", which underscores just how complex/inter-related/particular all the systems/infrastructure stuff is, how probable it is engineers might not feel fully competent or brave enough to engage.)
[1] https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_...
$ dig @50.0.1.1 nelson.lily6.messwithdns.com a
Results in two queries being answered by the messwithdns server. One for nelson.lily6.messwithdns.com as expected, but also one for _.lily6.messwithdns.com.
Any guesses what that naked underscore query is for? Not every nameserver does it (Cloudflare, Google, Quad9, and Adguard all don't). But Sonic isn't the only one that does.
I've asked on Twitter and the best guess right now is it has something to do with RFC 2782 or RFC 8552. But those are about using _ to make unique tokens that aren't likely domain names, things like _tcp or _udp. What would a naked _ mean?
I wrote the draft algorithm that appears in appendix A of the first experimental RFC describing qname minimization https://datatracker.ietf.org/doc/html/rfc7816#appendix-A
I wrote it because I wanted more specific advice about how qname minimization should work, and I deliberately aimed it at an ideal world, ignoring obvious interoperability problems. I hoped that this would provoke discussion and get people working towards a more realistic algorithm. But that did not happen until years later.
So the early implementations of qname minimization had to invent their own ways of working around the inevitable interop problems, and some of those solutions were quite creative.
I think the bare _ version is trying to avoid querying delegation points directly, so that it still gets a referral as it would have done using the full qname. And the _ also avoids problems with negative responses, which are often implemented very badly - it is common to make a mess of the distinction between NXDOMAIN and NODATA.
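A minimal sketch of that idea (my reading of the behaviour described above, not the actual algorithm of any particular resolver): reveal one extra label per step, and prepend a bare `_` label so the query never lands exactly on a delegation point and still draws a referral.

```python
def minimised_queries(qname: str, use_underscore: bool = True):
    """Yield the successive QNAMEs a qname-minimising resolver might send.

    Starting from the TLD, reveal one extra label per step. Prepending a
    bare "_" label (as some implementations do) avoids querying a
    delegation point directly, so the resolver still gets a referral
    instead of a possibly mangled NODATA/NXDOMAIN answer.
    """
    labels = qname.rstrip(".").split(".")
    for i in range(1, len(labels) + 1):
        partial = ".".join(labels[-i:])
        if use_underscore and i < len(labels):
            yield "_." + partial
        else:
            yield partial

print(list(minimised_queries("nelson.lily6.messwithdns.com")))
# ['_.com', '_.messwithdns.com', '_.lily6.messwithdns.com',
#  'nelson.lily6.messwithdns.com']
```

This would produce exactly the `_.lily6.messwithdns.com` query observed upthread on its way to the full name. (Real resolvers also skip steps they already have cached delegations for.)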
Does QNAME minimization try to prevent the scenario where a malicious party has set up a DNS tracker that responds with the same A/AAAA entries for a specific subdomain, in the sense that e.g. "session-id.actualserver.company.tld" results in the same entries as "actualserver.company.tld"?
How would a client detect this before actually resolving it? I mean, if TTL is 0, no client will cache the results and therefore the minimization aspects are kind of irrelevant because the client has to resolve all over again, right?
I think I am having questions about the logical conditions "when" a client tries to resolve "_" before resolving the actual domain, which I am assuming is what the draft proposed...because to me this scenario would have the requirement that the very same party also has ownership of the HTML/actual links in the code, so I don't understand what it's trying to prevent because the same party could just read their apache logs to gain better datasets.
Maybe I'm missing something here?
https://www.isc.org/blogs/qname-minimization-and-privacy/
https://bind9.readthedocs.io/en/latest/reference.html (look for qname)
(I work at Sonic)
In DNS, the recursive resolver sends the entire FQDN at every step.
Now realize: like every company, DNS operators want to collect and sell your data.
So imagine a 'bigsite.com' that does a lot of things. And you like, say, porn.bigsite.com. Without this minimization, everyone from the root to Verisign to bigsite knows what you queried for.
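To make that concrete, here's an illustrative sketch (hops simplified, names from the example above) of which query name each server in the chain sees, with and without minimisation:

```python
# Which QNAME each hop sees when resolving porn.bigsite.com.
# Simplified for illustration; real resolvers cache delegations.
FULL = "porn.bigsite.com"

without_minimisation = {
    "root server": FULL,
    ".com server (Verisign)": FULL,
    "bigsite.com server": FULL,
}

with_minimisation = {
    "root server": "com",                     # only needs the TLD
    ".com server (Verisign)": "bigsite.com",  # only the next label
    "bigsite.com server": FULL,               # authoritative, sees everything
}

for hop in without_minimisation:
    print(f"{hop}: {without_minimisation[hop]} -> {with_minimisation[hop]}")
```

Only the authoritative server for bigsite.com ever sees the embarrassing leftmost label.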
But I wish a service existed that made domain names easy enough to use that the average person could manage them. IMO you shouldn't have to learn DNS and TLS in order to securely use a domain name. If I want to sign up to have Fastmail host my email, why do I have to manually copy and paste a bunch of DNS records? Fastmail already knows exactly what records need to be set. I should be able to OAuth redirect over to my domain registrar and approve giving Fastmail control over a subdomain of my choosing, and Fastmail should be able to use a simple open protocol to update the records.
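No such open protocol is established today, so the sketch below is purely hypothetical: every field name and record value is invented to illustrate what the provider-to-registrar handoff could look like after the user approves the OAuth-style grant.

```python
import json

def build_record_update(delegated_zone: str, records: list) -> str:
    """Serialise the record set a provider (e.g. a mail host) would push
    to a registrar's hypothetical API for one delegated subdomain."""
    return json.dumps({"zone": delegated_zone, "records": records}, indent=2)

# Illustrative values only - not any provider's real records.
payload = build_record_update("mail.example.com", [
    {"type": "MX", "name": "@", "value": "mx1.provider.example.", "priority": 10},
    {"type": "TXT", "name": "@", "value": "v=spf1 include:spf.provider.example ?all"},
])
print(payload)
```

The point is that the provider already knows these records, so the user should only have to grant the scope, not copy the payload by hand.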
It's obviously not as hassle-free as something like your OAuth example, but it's using the infrastructure that is already there.
This is a real risk. When people start adding CNAMEs or A records that point to known phishing sites, it's very easy for Google to notice and block.
I would imagine they might also show warnings in Chrome.
It's probably a good idea for the author to add this project to the list.
If you don't trust across separator boundaries you're mostly safe. That is, mytxt.foo.com shouldn't be blindly trusted for my.subdomain.foo.com, nor should mytxt.subdomain.foo.com be trusted for foo.com.
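One way to read that rule as code (my interpretation, not an established validation algorithm): trust a verification record only for the name exactly one label up, never across further separator boundaries in either direction.

```python
def trusted_for(record_host: str, target: str) -> bool:
    """True only if record_host is a direct child of target, e.g. a
    verification TXT at mytxt.foo.com vouching for foo.com itself."""
    record = record_host.rstrip(".").lower().split(".")
    tgt = target.rstrip(".").lower().split(".")
    return len(record) == len(tgt) + 1 and record[1:] == tgt

print(trusted_for("mytxt.foo.com", "foo.com"))               # True
print(trusted_for("mytxt.foo.com", "my.subdomain.foo.com"))  # False
print(trusted_for("mytxt.subdomain.foo.com", "foo.com"))     # False
```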
IMO the biggest concern is with organizations that blacklist domains for various reasons, because they are not eager to build very fine-grained blacklists.
https://en.wikipedia.org/wiki/DNS_Certification_Authority_Au...
Now, there are a bunch of things you could do about that, and I believe this cool toy does one of the obvious ones: don't have any certificates for the problematic domain. The web site isn't in the domain you can mess with. But it would be nice if Let's Encrypt got to this; periodically I check, and so far each time somebody has pestered them about RFC 8657 recently, so I don't pile on since that's unhelpful.
Real answer: many ISPs' DNS servers are set to ignore whatever you set and use a value they feel works best for them.
https://blog.benjojo.co.uk/post/dns-filesystem-true-cloud-st...
dig 'a test.hazel10.messwithdns.com' txt +short
"test"
If the owner of the site contacts me, I'm happy to discuss... It's interesting to see how different DNS providers cap the maximum TTL:
Google uses 21600s
Quad9 uses 43200s
Cloudflare does not cap at all!
And my personal unbound uses 86400s (which is the default)
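A small sketch of the clamping itself, using the cap values observed above (these are observations from this thread, not documented guarantees of any resolver):

```python
# Caps reported in this thread; None means no cap was observed.
RESOLVER_TTL_CAPS = {
    "Google (8.8.8.8)": 21600,
    "Quad9 (9.9.9.9)": 43200,
    "Cloudflare (1.1.1.1)": None,
    "local unbound (default)": 86400,
}

def effective_ttl(record_ttl, cap):
    """The TTL a caching resolver would actually use: the record's TTL,
    clamped to the resolver's configured maximum (if it has one)."""
    return record_ttl if cap is None else min(record_ttl, cap)

for resolver, cap in RESOLVER_TTL_CAPS.items():
    print(resolver, effective_ttl(1_000_000, cap))
```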
It's out there on my GitHub if folks are interested. Ironically, 53 comments just before I added this comment...
Maybe, just maybe, it is an omen? ;-)
A month ago, I scripted https://github.com/moretea/browsers-with-fake-dns as an alternative to editing /etc/hosts. It's a Docker container with a BIND DNS server, and Chrome/Firefox reachable via web VNC.
I do hope the author has set some limits on the DNS configuration you can freely enter. One annoying trick DDoS spammers will use is setting up DNS records that are as large as possible, to use for their botnet's amplification attacks, so allowing arbitrarily large records on your domain may be problematic and may draw nasty complaints against your domain. I'd recommend anyone running a free subdomain service (or something super cool like this!) to consider this in their configuration. We can't have nice things because of these bad people :(
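The back-of-the-envelope math behind the worry: in a reflection attack, a small spoofed UDP query elicits a much larger response aimed at the victim, multiplying the attacker's bandwidth. (The byte counts below are illustrative, not measured from this service.)

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """How much a reflector multiplies attacker bandwidth: bytes sent to
    the victim per byte the attacker spends on the spoofed query."""
    return response_bytes / query_bytes

# A ~60-byte query eliciting a deliberately stuffed ~3000-byte TXT
# response gives the attacker roughly 50x their own bandwidth.
print(round(amplification_factor(60, 3000), 1))  # 50.0
```

Capping record sizes (and rate-limiting responses) keeps that ratio too small to be worth abusing.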
I've seen jvns take a similar path to me in engineering over the years, almost uncannily. The difference mostly is that I stored it all in my head, and they take the time to write it up for everyone.
Same with DNS. DNS is such a freakin black box, mostly because outside of RFCs, it's some good ol boys club of 'consultants' that don't want to share information. You should see the mailing lists, it's a giant pissing contest.
Back on point, I always wanted to distill this information down to make it for everyone, but always hit some small hurdle like... making a website about it.
That Julia takes the time to do this and share this is invaluable. It's like a better version of me exists out there, and I'm happy for it.
It's impressive to get technical stuff to be this friendly.
Allowing people to experiment quickly with infra/devops knowledge is key, and tools like Ansible are useless for that.
> 2021/12/15 18:39:10 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 1s
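That Go log line means the process exhausted its open-file-descriptor limit (each accepted TCP connection holds one fd). A sketch of inspecting and raising the soft limit on a Unix system (shown in Python for brevity; a Go server would bump the same rlimit via its init scripts, `ulimit -n`, or systemd's `LimitNOFILE`):

```python
import resource  # Unix-only stdlib module

# Current soft (enforced) and hard (ceiling) limits for open files.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```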