One of the most common and basic techniques in StarCraft is to bait the other player into thinking you are doing a certain popular build, then do something theoretically inferior that requires a wildly different response.
Know your opponent is going to have turrets up at 5:30 on the clock? New build! Hits at 5:25! Have the AI use builds which have their turrets at (x-2) minutes just in case? Well, that's going to be an awful build against almost everything.
As shown in one example in the paper, even when the AI was extremely lucky and hard-countered the human's build, the human was able to adapt, respond, and win handily.
Since even the top humans are able to trick each other in these manners, you'd basically need general/strong AI to be able to compete.
No one mentioned being safe against just about everything, just against the important timing attacks (such as 3-hatch muta), which are known to be a big part of StarCraft. If you're going with a build order that gets you straight-up killed by one of the most popular build orders, something is going wrong. Even if the enemy timing hits 5s before your turrets are up, your build should have a fallback, e.g. marines in the base to hold off mutas until the turrets complete, which is what a lot of progamers do off a 4-rax opening. Your (x-2) minutes statement seems to assume that AI builds can't cut corners and are forced to play the safest build possible every game.
> Since even the top humans are able to trick each other in these manners, you'd basically need general/strong AI to be able to compete.
This flies in the face of game theory, which is focused on solving exactly the problem you deem impossible without strong AI. As far back as 2008, poker bots (built at the same university, the University of Alberta) were able to defeat human experts at heads-up limit poker [1].
Perhaps the problem statement the researchers are working with is different from the one I'm envisioning. Maybe they are training their bots only on replays from other bots, rather than on replays from the top players in the world, which constitute what we currently regard as optimal play. The researchers may overlap with the poker research group, which suggests the whole project is more focused on superior game theory as the winning condition than on exploiting areas where the computer is already known to be better than the human, e.g. micro (giving hundreds of units unique instructions) and macro (issuing build commands to buildings right on schedule). The former is shown in the wraith vs. hydralisk video [2]. This may not work as well as it did in poker, and may not yield games as impressive in the short term as focusing on micro and macro, but the research is definitely fascinating!
http://webdocs.cs.ualberta.ca/~cdavid/pdf/starcraft_survey.p...
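To make the game-theory point concrete: a bait/counter situation can be modeled as a tiny zero-sum game, and an unexploitable mixed strategy falls out of standard solution methods with no "general AI" involved. Here is a minimal sketch using fictitious play; the build names and win probabilities are made up purely for illustration, not taken from the paper:

```python
# Hypothetical win probabilities for the row player (the one picking a build).
# Rows: attacker's build; columns: defender's response.
#                 defend-early   play-greedy
# early-rush         0.3            0.8
# macro-build        0.6            0.4
payoff = [[0.3, 0.8],
          [0.6, 0.4]]

def fictitious_play(payoff, iters=100_000):
    """Approximate a Nash equilibrium of a zero-sum game: both players
    repeatedly best-respond to the opponent's empirical mix of past moves.
    Returns the (row_mix, col_mix) empirical strategy frequencies."""
    rows, cols = len(payoff), len(payoff[0])
    row_counts = [0] * rows
    col_counts = [0] * cols
    row_counts[0] += 1  # arbitrary opening moves
    col_counts[0] += 1
    for _ in range(iters):
        # Row player maximizes expected payoff vs. column's observed mix.
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(cols))
                    for i in range(rows)]
        row_counts[row_vals.index(max(row_vals))] += 1
        # Column player minimizes the row player's payoff.
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(rows))
                    for j in range(cols)]
        col_counts[col_vals.index(min(col_vals))] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

row_mix, col_mix = fictitious_play(payoff)
```

With these particular payoffs the attacker ends up rushing roughly 2/7 of the time. The point isn't the numbers: it's that the equilibrium strategy is randomized by design, so a scouting-based bait can't systematically exploit it, and nothing here required anything smarter than iterated best response.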
Top play uses huge tradeoffs depending on what the progamer thinks they are facing; that would require an AI which can dynamically scout and effectively adjust its build based on what it sees. That isn't even taking tricks into account.
Basically, everything you're describing is so far beyond how they're actually programming the AI that it's completely irrelevant. Critical behaviors are hard-coded rather than adaptive, so all of that would have to be written from scratch.
(None of this proves that it's not possible to program an AI which can beat humans without general AI, of course. I suppose that if you had enough scenarios with adaptive logic built in, the AI's ability to perfectly split marines or zerglings could be used to design a timing attack impossible for humans to stop.)
edit: I didn't see the two-sentence edit to your post when I wrote mine, or I wouldn't have covered the same concept.