• Every code snippet should be copy/pasteable (and work well), that article might be the entry point to what many are doing.
• I have the feeling this follows Cloudflare Workers' style of `addEventListener('fetch', eventHandler);`. I could write a 500-word essay on that, but the short version is that I strongly believe it would be much better for their clients if the client could just do `export default async function handleDns(event) { ... }` instead of having a global (what?) context with an unattached function called `addEventListener`, where you need to know it's for `"dns"` (is there even another possibility here?) and that respondWith() only accepts a promise, which is uncommon (it's more common to accept an async function, which then becomes a promise). Compare these two snippets, the current API and my small proposal:
// Current API:
addEventListener('dns', event => {
  event.respondWith(handleRequest(event.request));
});

function handleRequest(request) {
  return new TxtRecord('Hello world!', 30);
}
vs
// Potential/suggested API:
export default function handleDns(event) {
  return new TxtRecord('Hello world!', 30);
}
This way it's easier to write, to reason about, to test, etc. It's all advantages from the client's point of view, and while I understand it's slightly more code on Bunny's side, it could be done fairly trivially on their end: they could just wrap the default export with code like this at the infrastructure level, making dev lives a lot easier:

// Wrapper.js
import handler from './user-fn.js';
addEventListener('dns', event => {
  if (typeof handler === 'function') {
    // Promise.resolve() so sync handlers also work with respondWith().
    event.respondWith(Promise.resolve(handler(event)));
  } else {
    // Could support other basic types here, like
    // exporting a plain TxtRecord('Hello', 30);
    throw new Error('Only handler functions are supported for now');
  }
});

I'm the tech lead of Cloudflare Workers and you're absolutely right about this. We actually introduced such a syntax as an option a while back and are encouraging people to move to it:
https://blog.cloudflare.com/workers-javascript-modules/
Your argument is exactly one of the reasons for this. The more general version of the argument is that it enables composability: you can take a Worker implementation and incorporate it into a larger codebase without having to modify its code. This helps for testing (incorporating into a test harness), but also lets you do things like take two workers and combine them into one worker with a new top-level event handler that dispatches to one or the other.
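That composition is easy to sketch. A toy model with plain objects (not the real Workers Request/Response types; a real worker would then `export default combined`):

```javascript
// Two "workers" in module syntax: each is an object with a fetch handler.
const apiWorker = {
  async fetch(request) {
    return { body: 'api response' };
  },
};

const siteWorker = {
  async fetch(request) {
    return { body: 'site response' };
  },
};

// A combined worker that dispatches on the URL path,
// without modifying either original implementation.
const combined = {
  async fetch(request) {
    const path = new URL(request.url).pathname;
    return path.startsWith('/api/')
      ? apiWorker.fetch(request)
      : siteWorker.fetch(request);
  },
};
```

Neither inner worker knows it has been wrapped, which is exactly the composability the module syntax buys you.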
import { Router } from 'itty-router';
import { exportedObject } from './some/other/file';
const router = Router();
router.get('/update', async (request: Request, event: FetchEvent) => exportedObject.update(request, event));
router.all('*', () => new Response('404, not found!', { status: 404 }));
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(router.handle(event.request, event));
});
I don't even need to start up an HTTP server to test my handlers; I just test the functions directly.

The end users of these domains are globally distributed and served from 14 different data centre locations across the world.
To do the geo-IP matching we tried a lot of things, third-party services etc., but couldn't find one that worked well and was priced well.
For example, the hosted DNS service we use also has an IP-based filter-chain feature, but it is priced at around $22 per domain per month as an add-on.
In the end, we built an anycast-based solution that was very painful to set up but works fine now, letting us use a single A record that works across the world. We had to get an ASN, a /24 block, and a hell of a lot of back and forth with a government-run org to set it up.
A "hosted" scriptable DNS server which takes the client's location as input and outputs the IP of the nearest edge server is exactly the thing I needed. So yes, there is definitely a niche market for it.
I still have to explore how closely bunnydns is able to get the source IP/location (tricky) and how health checks etc. could work, but it's definitely something I would explore and consider.
I'm struggling to see what this could be used for, but the comments here help.
In summary:
- an alternative to anycast.
- an alternative to routing inside your app (your app could detect the IP and change behavior based on internal rules). That approach means you always go to the same origin, which scriptable DNS avoids: you could put things at the edge and reduce hops.
Why else would you use this?

https://gist.github.com/hayesr/55b55d167f67f57fe5e56ec3ab1f8...
Same, but then they lost me at Javascript.
I run PowerDNS but haven't found a reason to play with this yet.
Seriously this is great. I started building a "scriptable DNS" to make it easy to have a DNS record that always points at the valid K8s nodes in my cluster (and randomizes the order of the IPs each time). Since nodes can come and go very quickly (especially during an upgrade), and their IP changes every time, it's useful to be able to act dynamically.
This is most assuredly better than what I was building though. Mine is rust-based but the "script language" is a very simple DSL. I considered allowing docker containers that receive some command arguments and must write the answer to standard out, but that felt like a brittle interface and I worried about performance (even with offering a cache). I also considered writing it in Elixir and allowing elixir code snippets, but I got scared of how hard it would be to secure that.
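The randomize-the-order part of that is easy to sketch in a scriptable handler. A hypothetical example (the record shape, the TTL value, and the hard-coded IP list are all assumptions; in practice the node IPs would be fed from the Kubernetes API as nodes come and go):

```javascript
// Fisher-Yates shuffle so each DNS response lists node IPs in a random order,
// giving crude client-side load spreading.
function shuffle(items) {
  const out = items.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Hypothetical: this list would be kept current by watching the K8s node list.
const currentNodeIps = ['10.0.0.4', '10.0.0.7', '10.0.0.9'];

function handleDns(event) {
  // Short TTL so resolvers re-query frequently as nodes churn during upgrades.
  return shuffle(currentNodeIps).map(ip => ({ type: 'A', value: ip, ttl: 15 }));
}
```

The short TTL is the important knob: it trades query volume for how quickly stale node IPs drop out of circulation.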
Anyway really neat idea! I hope to see more innovations and implementations!
In my experience, ISPs (particularly residential providers) sometimes ignore/override the TTL in authoritative DNS records and aggressively cache responses, for reasons...
Has anybody else run into this and solved it? Cloudflare DNS seems to have figured out a decent way to deal with this. I may take a close look at their responses and see what they set for TTL, etc.
1) Claim privacy first and then have a cookie banner.
2) Say “routing” when you mean location/IP based DNS.
3) Is that a loosely typed language in the scripting engine? Not sure I would want DNS queries relying on that.
I am sure there is still some innovation left in DNS. SDDNS I’d call it: Software defined DNS. Especially with the splinternet we are walking into these days. Just don’t think this version cuts it. Nevertheless an interesting company to follow. I see potential.
You can route connections on different layers than IP routing. We commonly talk about http request routing as in dispatch based on the domain/path. I'm happy with "routing" as in directing traffic via DNS resolution. I doubt anyone here is confusing that.
But cookie banners do. Essential cookies that are required, e.g. to store login data, do not require a cookie banner (https://github.blog/2020-12-17-no-cookie-for-you/). So if there is a cookie banner, you can assume the site wants to store analytics, tracking, or advertising cookies.
I disagree. As a former network engineer, the title "We're transforming internet routing" and subtitle "Rethinking Internet Routing" [my emphasis] make me think of IP-based routing first. I think they could have been clearer or picked a less grandiose title.
Don't get me wrong, on the surface this looks like a neat tool.
In American English, "route" and "about" have the same vowel-sound, which seems unfortunate; I wonder how that happened.
English is a mess, but I hope we don't try to fix it!
Head over to https://cachecheck.opendns.com/ and plug in 'www.google.com'; you'll notice that Google returns different IPs in different geographic locations to route visitor traffic.
They are fairly competent and cheap. A happy Bunny customer here.
I looked up the definition of routing in a few places and I do not see why it does not fit. Does this also qualify as incompetent https://cloud.google.com/dns/docs/zones/manage-routing-polic... ?
"It always puzzles me when people speak of "routing" in conjunction with DNS." - original message for context
Also routing and DNS are different things. Misunderstanding what routing is while trying to sell your technology to technologists is likely not a winning strategy.
If you need help, write me to luis at <my HN username>.com :-)
> 2. DNS: Run trick DNS servers that return specific server addresses based on IP geolocation. Downside: the Internet is moving away from geolocatable DNS source addresses. Upside: you can deploy it anywhere without help.
> You're probably going to use a little of (1) [Anycast] and a little of (2). DNS load balancing is pretty simple. You don't really even have to build it yourself; you can host DNS on companies like DNSimple, and then define rules for returning addresses. Off you go!
Seems they are saying that "the internet in general" is moving away from location-based DNS, but that's a bit like saying that the internet in general is moving away from Wordpress.
1. Location based DNS is incredibly useful
2. Sending user data (IP address, location) to authoritative nameservers is out of vogue.
There are efforts to send privacy-friendly geo info to authoritative nameservers. But they aren't getting much traction. Which means location based DNS is getting less useful by the day (because it's not working for as many people).
Bunny in general has been a positive experience for me, so I'm looking forward to trying this.
Not so sure about the per-million pricing on scriptable DNS queries. Isn't it quite easy to generate billions of DNS queries? I.e., I hope there is some sort of DDoS mitigation in front of that.
I thought it had been proven that geo-IP data is not reliable?
Second, DNS is not routing.
So I can easily believe that it’s wildly inaccurate for a significant amount of the world.
The programmer in me says: what cool stuff can I do with that.