> Not really. It would take immense effort to train bots to play “like humans” and not “as performantly” as humans which is very different.
There is precedent in Maia Chess, which does a good job of mimicking human chess players at various Elo ratings. Of course, it's a lot more difficult to extrapolate to games with significantly larger state spaces and move sets, but I imagine that this space will be further explored in the near future.
> And if you’re going to be optimizing game parameters that means you’re assuming that either the AI doesn’t change its behaviors even though the game is different or you’re assuming that humans will adapts in the same way the bots do.
This could be addressed by including the game parameters of interest (which map, which character, the weapon stats at time of gameplay, etc.) as part of the model's input features during training.
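Concretely, that could mean concatenating the balance parameters onto each training example's feature vector, so the learned behavior is conditioned on them. A minimal sketch; every feature name and value below is hypothetical:

```python
# Sketch: condition a human-behavior model on game parameters by
# appending them to each training example's input features.
# All names/values are hypothetical, for illustration only.

def build_features(state, game_params):
    """Concatenate per-frame game state with the balance parameters
    that were in effect when the gameplay was recorded."""
    return [
        state["player_hp"],
        state["enemy_distance"],
        game_params["ak_damage"],     # weapon stat at time of play
        game_params["map_id"],        # which map
        game_params["character_id"],  # which character
    ]

# One recorded frame of human gameplay, plus that patch's balance values.
state = {"player_hp": 72, "enemy_distance": 14.5}
params = {"ak_damage": 36, "map_id": 3, "character_id": 7}

x = build_features(state, params)
```

Training pairs would then be `(x, observed_human_action)`; at evaluation time you swap in candidate balance values to get a prediction of how humans would adapt, rather than assuming behavior is fixed.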
> It also takes away a lot of the design thinking behind balance. You probably don’t want to nerf the AK. You probably want to buff counterplay options (guns are not a great example but still)
Tool-assisted QA is nothing new; using AI is just a newer iteration of the concept. You still have to interpret the results it gives and make decisions based on them. The design thinking isn't replaced, it's augmented with additional insights. Are those insights potentially inaccurate? Sure, but you can account for that with sanity checks, manual intervention, and playtesting.