- "Meta’s misstep—and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models."
"Hubris" here is unnecessary colouring. And although the claim links to an article (good), an article can't justify statements like "Big Tech has a blind spot", "Big Tech hubris", or "language models are _severely_ limited".
- "Meta and other companies working on large language models, including Google, have failed to take [this technology's limitations] seriously."
This claim is unciteable as stated.
- "They think that this is the future of information access, even if nobody asked for that future."
This was a quote from one of the researchers. Presenting it as the last line of the article, without noting that it is one researcher's opinion, and instead using it almost as 'proof' of a previous sentence ("But Meta's handling of Galactica smacks of the same naivete [as Microsoft's Tay bot]") makes the use of the quote biased.
Also biased is the information that was left out. One of the tweets they cited shows that Galactica carried a big disclaimer that it does hallucinate and that you shouldn't blindly trust its output. They chose not to directly include information from the project the whole article was about, in order to push the argument that "Big Tech is ignoring the limitations of this tech".
I think an unbiased article would have looked like this:
- describe what happened first: Meta took down the Galactica demo, and there has been a lot of criticism from researchers
- expand into the known limitations of this technology (including Galactica's own stated limitations)
- speculate on whether there's a place for this tech in the future, based on the cited work