>In Peru, a political crisis has been unfolding over the past few months, with the ousting of former President Pedro Castillo over his refusal to step down [1]. Protests have been held in response to Castillo's ouster, and they have been met with a strong police response. Additionally, truckers and some farm groups are planning to go on strike on Monday to demand measures to alleviate their economic hardship. Peru is also facing an economic downturn, with many businesses facing closure due to the crisis.
The citation link was https://www.reuters.com/world/americas/what-happens-perus-fo...
Some details are wrong: it says something will happen "on Monday" without realizing that "Monday" is relative to the publication date of the cited article. But it did correctly summarize what the source says.
So it's at the level of a person of average intelligence with a bit more than superficial investment in what's being asked about.
I can't tell whether it will stall at this level, but that's nothing to sneeze at for something whose labor is free and tireless, and it may keep improving.
Getting it to cite a source that actually exists, ah, now that's a hard problem. Given how slimmed down the tech currently is, even if one can hypothesize some mechanism for having the system keep track of where it got certain ideas, it is hard to imagine it wouldn't take so many additional resources that we'd have to trim the model down to a tiny fraction of its current size. And it is not at all obvious to me how to encode provenance into an otherwise notoriously opaque neural net in the first place, given that we can't even point at an "idea" or "concept" or "fact" in a neural net at all.
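The usual workaround people propose is to bolt provenance on *outside* the net: retrieve documents first, have the model summarize them, and let the citations come from the retrieval step rather than the weights. A toy sketch of that shape (the corpus, scoring, and every name here are made up purely for illustration, not anyone's actual system):

```python
# Toy sketch: provenance tracked outside the net, via retrieval.
# CORPUS, retrieve(), and answer_with_sources() are all hypothetical.

CORPUS = [
    {"url": "https://example.com/peru-crisis",
     "text": "Peru's president was ousted after refusing to step down."},
    {"url": "https://example.com/peru-strike",
     "text": "Truckers and farm groups plan a strike over economic hardship."},
]

def retrieve(query: str, corpus: list[dict], k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: -len(q & set(d["text"].lower().split())))
    return scored[:k]

def answer_with_sources(query: str) -> dict:
    docs = retrieve(query, CORPUS)
    # A real system would feed `docs` to the model as context; here we
    # just concatenate them. The key point: the URLs come from the
    # retrieval step, not from the opaque weights of the model itself.
    summary = " ".join(d["text"] for d in docs)
    return {"answer": summary, "sources": [d["url"] for d in docs]}

print(answer_with_sources("What is happening in Peru?"))
```

Even that only tells you which documents were retrieved, not where any claim baked into the weights came from, so it dodges the problem rather than solving it.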
For all the people going "wow" at the current state of GPT, I wouldn't be surprised if in 20 years it's seen as a dead end. I'm impressed too, but at the same time I'm seeing the limitations it has for practical use. The hypotheses about why pure neural-net approaches will be too problematic to use are basically coming true: AI models that can't give human-comprehensible reasons for their conclusions, including attestation of sources, are too dangerous to use.

They're just black boxes, and for all you know someone has their finger on the scale of those black boxes. OpenAI is already doing that, quite visibly, and even if you're comfortable with their reasons for doing so today, you should conclude from the fact that they almost immediately stuck their fingers on the scale that you aren't getting some super-AI to answer your questions, but a manifestation of some particular group of humans' answers to your questions. But I can already get that! I don't need to pay OpenAI to AI-wash their answers.