I think that using these numbers as stand-ins for difficulty is itself a form of obfuscation.
The truth is that, despite the massive number of potential board states, Chess and Go are some of the easier games to solve, thanks to their nature: perfect information, zero randomness, and alternating turns where each player plays exactly one move. And using board states as a proxy for complexity, and complexity as a proxy for difficulty, doesn't generalize to other categories of games. Compared to Go, what's the complexity of Sid Meier's Civilization? If I devise a game of Candyland with 10^180 squares, is devising an optimal strategy for it harder than for Go just because it has more board states?
The reason we're still using board states as a proxy for difficulty is that, historically, our metric of "this is difficult for a computer to play" was based on the size of the decision tree, and thus on the feasibility of locally searching it up to a given depth. In the age of machine learning, surely we can come up with a more interesting metric?
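To make "locally searching the decision tree up to a given depth" concrete, here is a minimal sketch of depth-limited minimax on a toy Nim variant (players alternately remove 1 or 2 stones; whoever takes the last stone wins). The game and its neutral cutoff heuristic are illustrative choices, not anything from a real engine; the point is that the cost of this search grows with the branching factor and depth, which is exactly why tree size became the traditional difficulty metric.

```python
def minimax(stones, depth, maximizing):
    """Depth-limited minimax on a toy Nim game: players alternately
    remove 1 or 2 stones; whoever takes the last stone wins.
    Returns +1 if the maximizing player wins, -1 if they lose."""
    if stones == 0:
        # The player to move has lost: the opponent took the last stone.
        return -1 if maximizing else 1
    if depth == 0:
        # Search horizon reached; fall back to a neutral heuristic.
        return 0
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

# With enough depth, this tiny game is fully solved: piles that are
# multiples of 3 are losses for the player to move.
print(minimax(3, 10, True))   # -> -1 (losing position)
print(minimax(4, 10, True))   # -> 1  (winning position)
```

For a game this small, exhaustive search is trivial; for Chess or Go the same procedure hits the tree-size wall the passage describes, which is what the branching-factor and board-state numbers are really measuring.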