Linguistics
lvxferre
10mo ago
[PNAS] Systematic testing of three Language Models reveals low language accuracy, absence of response stability, and a yes-response bias
https://www.pnas.org/doi/10.1073/pnas.2309583120

Interesting paper about the alleged ability of LLMs* to judge the grammaticality of sentences, something that humans are rather good at. Eight phenomena were tested, and the LLMs performed extremely poorly.
*LLM = large language model. Stuff like Bard, ChatGPT, LLaMA, etc. I'd argue that they aren't actual language models, due to the absence of a semantic component, as shown by the article.