For the first time, AI models analyze language as well as a human expert

The original version of this story appeared in Quanta Magazine.
Of all the skills that people possess, which ones are uniquely human? Language has been a leading candidate at least since Aristotle, who wrote that humanity is ‘an animal with language.’ Even as large language models such as ChatGPT mimic fluent speech, researchers want to know whether certain features of human language set it apart from the communication systems of other animals or of intelligent machines.
In particular, researchers have been exploring whether language models can analyze language itself. Some in the linguistics community argue that language models not only lack this ability, they never will have it. That position was staked out by Noam Chomsky, the prominent linguist, and two coauthors in 2023, when they wrote in The New York Times that “the correct explanations of language are complicated and cannot be learned just by marinating in big data.” AI models may be adept at using language, these researchers argue, but they are not capable of analyzing language in a sophisticated way.
That view was challenged in a recent paper by Gašper Beguš of the University of California, Berkeley; Maksymilian Dąbkowski, who recently received his doctorate in linguistics from Berkeley; and Ryan Rhodes of Rutgers University. The researchers put several large language models, or LLMs, through a gamut of linguistic tests – including, in one case, asking an LLM to deduce the rules of a made-up language. While most of the LLMs failed to analyze the rules of these languages the way people do, one displayed impressive abilities that far exceeded expectations. It could analyze language in much the same way a graduate student in linguistics would: drawing sentence diagrams, resolving multiple ambiguities of meaning, and making use of complex features of language such as recursion. This finding, Beguš said, “challenges our understanding of what AI can do.”
The new work is timely and “very important,” said Tom McCoy, a computational linguist at Yale University who was not involved in the research. “Because society has come to rely on this technology, it is very important to understand where it can succeed and where it can fail.” Linguistic analysis, he added, is an ideal test bed for examining the degree to which these models can think like people.
Unlimited difficulty
One challenge in giving language models difficult tests is making sure they don’t already know the answers. These programs are trained on huge amounts of written information – not just much of the internet, in dozens if not hundreds of languages, but also things like books about language. The models could, in theory, simply memorize and regurgitate the information they were fed during training.
To avoid this, Beguš and his colleagues created a language test with four parts. The first part, which involved asking the models to analyze specially constructed sentences using tree diagrams, drew on an approach introduced in Chomsky’s landmark 1957 book, Syntactic Structures. These diagrams break sentences into noun phrases and verb phrases and then divide those further into nouns, verbs, adjectives, adverbs, prepositions, conjunctions and more.
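The kind of tree diagram described above can be sketched with plain nested tuples. This is a minimal illustration, not code from the paper; the example sentence and phrase labels (S, NP, VP, and so on) are the standard textbook ones, chosen here for illustration.

```python
# A parse tree as nested tuples: (label, child, child, ...),
# with bare strings as the words at the leaves.
def leaves(tree):
    """Collect the words at the leaves of a (label, *children) tree."""
    if isinstance(tree, str):
        return [tree]
    label, *children = tree
    return [word for child in children for word in leaves(child)]

# "The sky is blue" split into a noun phrase (NP) and a verb phrase (VP),
# then further into a determiner, noun, verb, and adjective.
parse = ("S",
         ("NP", ("Det", "the"), ("N", "sky")),
         ("VP", ("V", "is"), ("Adj", "blue")))

print(" ".join(leaves(parse)))  # the sky is blue
```

Reading the leaves left to right recovers the original sentence, while the internal labels record how the sentence decomposes into phrases.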
One part of the test focused on recursion – the ability to embed phrases within sentences. “The sky is blue” is a simple English sentence. “Jane said the sky is blue” embeds that sentence in a slightly more complicated one. Importantly, this process of recursion can go on forever: “Maria wondered whether Sam knew that Omar heard Jane say the sky is blue” is still a grammatical, if unwieldy, English sentence.


