Reason is run by libertarians, who start by assuming their own axioms are true and "argue" from there, in everything, even against or in spite of empirical observations. ChatGPT almost certainly does have some bias, as most AI does, but this article doesn't make that case, and I'm skeptical anything Reason produces could.
Also very on-brand for Reason: it shows results where ChatGPT keeps landing in the left/libertarian quadrant of two-dimensional political typologies, calls this merely "left-leaning" and a product of left bias in the training data, but nowhere notes that if the model is left because of left bias, then its being libertarian would equally be the product of libertarian bias in that same training.