It simply wouldn't have been possible for the models to be more accurate with the data they had. As they say, bad data in, bad data out.
That's why this time around the pollsters made sure to be more thorough in their polling.
"This time is different."
I've heard that enough times to be highly skeptical. I'm also deeply skeptical of the notion that polling is even remotely correlated with actual results. Cultural and historical trends play a far larger role and are almost always left out.
But polling has been strongly correlated with results in basically every election so far, across democracies. Take 2016 as an example: the national polling averages were only about 3 points off from the actual result. If that's not correlated, I don't know what is.
There is a 0.75 × 0.75 × 0.75 × 0.75, or roughly 32%, chance of no rain at all.
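The arithmetic behind that analogy (a 25% chance of rain on each of four independent days, so the "dry" probabilities multiply) can be checked in a few lines — the four-day framing is my reading of the example:

```python
# Assumed setup: a 25% chance of rain on each of four independent days.
p_rain = 0.25
p_dry = 1 - p_rain  # 0.75 chance of no rain on any single day

# Independent events multiply, so the chance of no rain
# on all four days is 0.75 raised to the fourth power.
p_no_rain_all = p_dry ** 4

print(round(p_no_rain_all, 4))  # → 0.3164, i.e. roughly a 32% chance
```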
https://projects.fivethirtyeight.com/2016-election-forecast/
If I had a laptop that only worked 1/4th of the time, rather than 1/20th of the time, would that make it a reliable laptop? I don't think so.
Also, it doesn't make sense to look at a single prediction to evaluate a model.
Out of all the predictions they have made (did you look at individual state predictions?), how many were correct (and how confident were they?), and how many were wrong (and how close to 50% were they?)?
That is how you evaluate a model (essentially cross-entropy, i.e. calibration).
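A minimal sketch of that scoring idea, using made-up probabilities and outcomes (the numbers below are illustrative, not 538's actual state forecasts):

```python
import math

# Hypothetical forecasts: predicted probability that candidate A wins,
# paired with the actual outcome (1 = A won, 0 = A lost).
predictions = [(0.9, 1), (0.7, 1), (0.6, 0), (0.3, 0), (0.2, 0)]

def log_loss(preds):
    """Average cross-entropy: confident wrong calls are penalized heavily,
    well-calibrated uncertainty is penalized lightly."""
    total = 0.0
    for p, outcome in preds:
        total += -(outcome * math.log(p) + (1 - outcome) * math.log(1 - p))
    return total / len(preds)

print(round(log_loss(predictions), 3))  # → 0.392
```

Note the third forecast (60% on a loser) contributes the most loss, which is exactly the "how close to 50% were the wrong ones?" question made quantitative.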
It's unfortunate we can't just run the election again a few times, and actually find the rate at which Trump is elected given the polls.
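Since we can't rerun the election, the closest substitute is simulation: draw the outcome many times from the forecast probability and count how often the upset happens. A sketch, assuming a ~29% win probability for the underdog (roughly the figure 538's final forecast gave Trump):

```python
import random

random.seed(0)   # fixed seed so the demo is reproducible

p_win = 0.29     # assumed forecast probability for the underdog
trials = 100_000 # number of simulated "reruns" of the election

# Each trial is one Bernoulli draw: the underdog wins with probability p_win.
wins = sum(random.random() < p_win for _ in range(trials))

print(wins / trials)  # hovers near 0.29 — a 29% event is far from impossible
```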
And it's not empty signalling that 538 assigned Trump a higher chance of winning; they were pretty much the only ones saying he had a chance. That is why people think the models are useful.