I think the solution is IPv6. Once every device on the Internet is uniquely addressable again, we can do away with these NAT hacks and two endpoints should be able to reliably connect to each other again, no matter where they are. Of course, that's assuming we don't get more short-sighted engineering that breaks things again...
When that happens, most of the pain points caused by NATs will go away, and that's not WebRTC-specific. You'll always encounter the occasional (intentionally?) broken network that only allows 80/tcp and 443/tcp, but there's not much you can do about that, and WebRTC can't do much about it either.
This is a naive statement, since it assumes IPv6 support among clients. At least here in the US, such support is fairly minuscule.
The smart thing to do would have been to make the signalling layer use IPv6 and insist on 6in4 (or similar) gateways being configured.
That is true in some respects, because the way these addresses are allocated is not straightforward and has its own set of problems.
But in actuality "the problem" the OP is encountering is that for the RTC developer/user, a publicly reachable IPv4 address block is too expensive.
With a publicly reachable IP address (most ISPs will provide one, often for an additional fee), you can do peer-to-peer quite easily.
UDP hole punching works fine, save for when both peers are behind the same NAT, in which case you need a peer outside the NAT to forward traffic.
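The send pattern is simple enough to sketch. This is a minimal illustration in Python using two local sockets, assuming each peer has already learned the other's public (ip, port) from a rendezvous server; on localhost there is no NAT, so this only demonstrates the simultaneous-send mechanic, not a real traversal.

```python
import socket

def make_peer():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))   # OS picks a free port
    s.settimeout(2.0)
    return s

a, b = make_peer(), make_peer()
a_addr, b_addr = a.getsockname(), b.getsockname()

# Both peers send first: behind real NATs, the outgoing datagram is what
# opens the mapping ("punches the hole") so the other side's packets are
# let back in.
a.sendto(b"hello from A", b_addr)
b.sendto(b"hello from B", a_addr)

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
a.close()
b.close()
```

With symmetric NATs this fails, because the NAT assigns a different external port per destination, which is exactly the case where you need a relay.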
And it's easy to simulate a LAN over the internet using encapsulation.
Gamers have been successfully hole punching for many years.
I'd say up until recently, gamers have really been the only group that has demanded peer-to-peer connectivity and made it work.
One wonders: if every ISP customer were willing to pay the extra fee (if any) and requested a publicly reachable IP address, could the ISPs meet the demand?
Well, no; that is why ISPs hand out dynamic addresses via DHCP. Most offer static IPs for an extra fee, but like fractional-reserve banking, that only works as long as not every subscriber tries to claim a unique IP.
IMHO, this is a common misconception. IPv6 doesn't magically solve the problem.
In an IPv6 world, we will all need stateful firewalls (imagine a typical human's home router). These will generally be configured to allow all outgoing connections, and block all incoming connections - just like a NAT router effectively does today.
Now you have the same problem all over again: how does the firewall know which new inbound connections to accept, and which to reject? We're back in the realm of packet inspection ("ALG") or protocols that tell the router what is required, such as NAT-PMP, UPnP, etc.
Sure - each endpoint will have a unique address, and this is useful. But a direct peer-to-peer connection between these endpoints will be firewalled by default, except via the same (equally bad) solutions that currently solve the problem (badly) in a NAT world.
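For a sense of what "telling the router what is required" looks like, here is a hedged sketch of a NAT-PMP (RFC 6886) UDP port-mapping request. The ports and lifetime are illustrative values, and actually sending it would require knowing your gateway's address.

```python
import struct

NATPMP_VERSION = 0
OP_MAP_UDP = 1          # opcode 1 = map UDP, 2 = map TCP
internal_port = 51820   # example values, not prescriptive
external_port = 51820   # suggested; the gateway may assign another
lifetime = 3600         # requested mapping lifetime in seconds

# 12-byte request: version, opcode, reserved (must be zero),
# internal port, suggested external port, lifetime.
request = struct.pack(
    "!BBHHHI",
    NATPMP_VERSION, OP_MAP_UDP,
    0,
    internal_port, external_port, lifetime,
)

# Would be sent to the default gateway on UDP port 5351, e.g.:
# sock.sendto(request, (gateway_ip, 5351))
```

The catch, of course, is that this only helps when the gateway actually implements NAT-PMP (or UPnP) and has it enabled, which is exactly the "equally bad" part.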
That is the promise of WebRTC -- just a few lines of JS and you've got yourself Google Hangouts.
One can argue the same happens with web frameworks and other technologies that make it easy to get started, which is a good thing.
But unfortunately a lot of these easy abstractions can't completely abstract away things like speed of light latencies, limitation of network bandwidth, funky NAT setups and so on.
So far there are very few peer-to-peer technologies that work reliably and are successful.
I'm unfortunately not joking. The original spec allowed JavaScript to directly provide an encryption key; this was removed because someone working for one of the browser vendors (I think it was Google?) argued it would let companies MITM video chat. To make group video chat feasible, this was then replaced with a new feature where the central server sent out a copy of the encryption key over the encrypted RTP channel, meaning it now needed the keys to decrypt all the video passing through it.
That's why BitTorrent asks you to open specific ports on your router, because otherwise you simply can't do p2p.
What port(s) were you using on your TURN server endpoint? Keep in mind that port 80 is often filtered, and many corporate networks often block non-HTTP(S) ports. At the WebRTC-utilizing service https://appear.in, we have found that using port 443 plays nicely with most restrictive networks.
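To make that concrete, here is a sketch of the RTCPeerConnection configuration a signaling server might hand to clients to force media through TURN-over-TLS on 443. The hostname and credentials are placeholders, not a real service.

```python
import json

rtc_config = {
    "iceServers": [
        {
            # TURN over TLS on 443, which tends to pass restrictive firewalls
            "urls": ["turns:turn.example.com:443?transport=tcp"],
            "username": "demo-user",
            "credential": "demo-pass",
        }
    ],
    # "relay" skips host/srflx candidates entirely; useful for testing
    # that the TURN path itself actually works.
    "iceTransportPolicy": "relay",
}

# Serialized and delivered to the browser over the signaling channel.
signaling_payload = json.dumps(rtc_config)
```

In production you would normally leave the transport policy at its default ("all") so direct paths are still tried first, and only fall back to the relay.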
If a provider is using DPI and trying to block SSL network-wide, you will still have a problem.
1 - https://groups.google.com/forum/#!topic/turn-server-project-...
2 - https://bugzilla.mozilla.org/show_bug.cgi?id=891551
3 - https://bugzilla.mozilla.org/show_bug.cgi?id=906968
[1]: https://air.mozilla.org/intern-presentation-seys/
TL;DR: E2E encryption is included, but authentication is currently non-existent, allowing pretty easy MITM attacks if you control the relaying website.
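The usual mitigation is to compare DTLS certificate fingerprints out of band: each browser puts a SHA-256 fingerprint of its certificate in the SDP (the a=fingerprint line), so verifying fingerprints over a channel the signaling server doesn't control defeats this MITM. A sketch of computing such a line, using placeholder bytes rather than a real DER certificate:

```python
import hashlib

# Stand-in for real DER-encoded certificate bytes (not a valid cert).
der_cert = b"\x30\x82\x01\x0a" + b"\x00" * 100

# SHA-256 over the DER bytes, rendered as colon-separated hex pairs,
# which is the format used in SDP a=fingerprint lines.
digest = hashlib.sha256(der_cert).hexdigest().upper()
fingerprint = ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
sdp_line = "a=fingerprint:sha-256 " + fingerprint
```

The weakness the parent describes is precisely that nothing forces users to perform this comparison, so a malicious relay can substitute its own fingerprints in the SDP it forwards.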
http://tools.ietf.org/id/draft-ietf-rtcweb-security-arch-09....
and
The identity-provider portion involves trying to sandbox JavaScript in new and interesting ways. However, most of the pieces are fairly well understood, and the breakage will happen because identity providers mess up.
It can get even more involved if you want to use peer identity and authenticate against an IdP. WebRTC is still in draft stage, and browser implementations are still evolving.
That said, it does provide a platform for interesting apps.
On a related note - I am curious how many people will use WebRTC directly or wrappers like those provided by twilio and similar.
1. Stateful IPv6 firewalls still need hole punching to connect peers, and thus a STUN server.
2. Think about a corporate network with complex routes and potentially multiple firewalls (for load balancing or route optimization). There's no guarantee you use the same gateway to reach the STUN server and the other peer, so punching fails.
At least that's my uneducated guess. P.S. There's NAT for IPv6 too (Linux supports it). The misery is not going to end.
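For reference, the STUN step mentioned above starts with a Binding Request (RFC 5389), which a peer sends to learn its public address before attempting to punch. A sketch of building just the 20-byte header; a real client would also parse the XOR-MAPPED-ADDRESS attribute out of the response, and the server address below is hypothetical.

```python
import os
import struct

MSG_BINDING_REQUEST = 0x0001
MAGIC_COOKIE = 0x2112A442        # fixed value required by RFC 5389
transaction_id = os.urandom(12)  # random 96-bit transaction ID

# Header: message type, message length (0, no attributes),
# magic cookie, transaction ID.
request = struct.pack("!HHI", MSG_BINDING_REQUEST, 0, MAGIC_COOKIE) + transaction_id

# Would be sent over the same socket later used for the peer connection:
# sock.sendto(request, ("stun.example.org", 3478))  # hypothetical server
```

And this is where point 2 above bites: if the route to the STUN server differs from the route to the peer, the mapped address you learn is for the wrong firewall.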