Not an expert but I would guess at least the following:

- Traffic filtering, including peer traffic filtering under (possibly dynamic and automated) agreements with other networks.
- Traffic classification and anomaly detection across layers (DNS/TCP/HTTP(S)/etc.).
- Routing different clients to different IPs via DNS, keyed on origin AS and/or geolocation.
- Hosted web frontends and web-level active user challenges.
- Potentially altering route advertisements (BGP) dynamically.
- Charging so much money the moment you actually need them that buying extra bandwidth and netblocks is a rounding error for them.

Probably some of them also drop into high-overhead traffic-reduction modes: expanding the pool of frontend IPs, segmenting DNS responses, dropping DNS TTLs, and spinning up new proxy systems to better filter out robotic attackers (a toy sketch of that filtering step is below). Many probably also develop, profile, or buy various browser fingerprinting techniques, and may keep a library of non-publicly-disclosed approaches in reserve for additional mitigation during high-bandwidth attacks. Oh yeah, and replicating a static cache as a cheap means of providing degraded service.
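For a concrete flavor of the "filter out robotic attackers" step, here's a minimal per-source token-bucket rate limiter. Everything here (the thresholds, keying purely on source IP, doing it in Python at all) is a made-up simplification; real mitigation stacks key on much richer signals and run this in the dataplane, but the shape of the decision is the same:

    import time
    from collections import defaultdict

    RATE = 10.0   # sustained requests/sec allowed per source (made-up threshold)
    BURST = 20.0  # short-term burst allowance (also made up)

    class TokenBucket:
        def __init__(self):
            self.tokens = BURST
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(BURST, self.tokens + (now - self.last) * RATE)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets = defaultdict(TokenBucket)

    def handle_request(src_ip):
        if buckets[src_ip].allow():
            return "forward to origin"
        # Over budget: drop, or escalate to an active challenge (CAPTCHA, JS check).
        return "drop or challenge"

In practice the bucket key would be something like (source AS, TLS fingerprint, URL pattern) rather than a bare IP, and the "drop or challenge" branch is where the active user challenges and fingerprinting libraries mentioned above come in.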