Monday, December 23, 2024

Cognitive scientists develop a new model to explain difficulties in language comprehension


Cognitive scientists have long sought to understand what makes some sentences more difficult to comprehend than others. Any account of language comprehension, researchers believe, would benefit from an understanding of comprehension difficulties.

In recent years, researchers have developed two models to explain two prominent types of difficulty in understanding and producing sentences. Although these models succeed in predicting specific patterns of comprehension difficulty, their predictions are limited and do not fully match the results of behavioral experiments. Moreover, until recently, no one had integrated the two models into a coherent account.

A new study led by researchers in MIT’s Department of Brain and Cognitive Sciences (BCS) now provides a unifying explanation for language comprehension difficulties. Building on recent advances in machine learning, the researchers developed a model that better predicts the ease, or lack thereof, with which individuals produce and comprehend sentences. Their findings were recently published.

The paper’s senior authors are BCS professors Roger Levy and Edward (Ted) Gibson. The lead author is Michael Hahn, a former visiting student of Levy and Gibson and now a professor at Saarland University. The second author is Richard Futrell, another former student of Levy and Gibson and now a professor at the University of California, Irvine.

“It’s not just a scaled-up version of existing descriptions of comprehension problems,” Gibson says. “We offer a new, fundamental theoretical approach that allows for better predictions.”

The researchers built on two existing models to create a unified theoretical account of comprehension difficulty. Each of the older models identifies a distinct culprit for frustrated comprehension: difficulty with expectation and difficulty with memory retrieval. We have difficulty with expectation when a sentence does not allow us to easily predict its upcoming words. We have difficulty with memory retrieval when we struggle to track a sentence with a complicated structure of embedded clauses, for example: “It was surprising that the doctor, whom the lawyer did not trust, irritated the patient.”
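To make the expectation-based idea concrete, here is a minimal sketch of “surprisal,” the standard measure of how unpredictable a word is in its context. The bigram probabilities below are invented purely for illustration and are not taken from the study.

```python
import math

# Toy bigram probabilities, invented purely for illustration.
BIGRAM_PROBS = {
    ("the", "doctor"): 0.05,    # a fairly predictable continuation
    ("doctor", "whom"): 0.002,  # a rare, hard-to-predict continuation
}

def surprisal(prev_word: str, next_word: str) -> float:
    """Surprisal in bits: -log2 P(next | prev). Higher = harder to predict."""
    p = BIGRAM_PROBS.get((prev_word, next_word), 1e-6)  # floor for unseen pairs
    return -math.log2(p)

print(surprisal("the", "doctor"))   # ~4.3 bits: easy
print(surprisal("doctor", "whom"))  # ~9.0 bits: hard
```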

In 2020, Futrell first proposed a theory unifying these two models. He argued that memory limitations do not affect only the retrieval of embedded clauses but plague language comprehension across the board: our memory limitations prevent us from perfectly representing sentence contexts during language comprehension more generally.

Thus, according to this unified model, memory limitations can create a new source of difficulty in prediction. We may struggle to predict an upcoming word in a sentence even when it should be easy to predict from context – namely, when the sentence context itself is hard to keep in mind. For example, in a sentence that begins “Bob took the trash…”, we can easily predict the final word – “out.” But when the context preceding that final word is more complicated, prediction becomes difficult: “Bob took the old trash that had been sitting in the kitchen for several days out.”
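As a toy illustration of this lossy-context idea (our own sketch, not the paper’s code): if the reader’s memory trace is compatible with several possible contexts, the next-word probability averages over them, so a word that is easy to predict from the true context can become surprising. All numbers below are invented.

```python
import math

# The reader's memory trace is compatible with two contexts: one whose verb
# ("took") calls for the particle "out", and a confusable one ("saw") that
# does not. All probabilities are invented for illustration.
P_CONTEXT_GIVEN_MEMORY = {
    "Bob took the old trash that had been sitting in the kitchen": 0.6,
    "Bob saw the old trash that had been sitting in the kitchen":  0.4,
}
P_NEXT_GIVEN_CONTEXT = {
    "Bob took the old trash that had been sitting in the kitchen": {"out": 0.90},
    "Bob saw the old trash that had been sitting in the kitchen":  {"out": 0.01},
}

def lossy_surprisal(word: str) -> float:
    """Surprisal of `word`, marginalizing over the contexts the noisy
    memory trace could correspond to."""
    p = sum(p_ctx * P_NEXT_GIVEN_CONTEXT[ctx].get(word, 0.0)
            for ctx, p_ctx in P_CONTEXT_GIVEN_MEMORY.items())
    return -math.log2(p)

# Perfect memory would give -log2(0.90) ≈ 0.15 bits for "out";
# a degraded context representation makes the same word costlier:
print(lossy_surprisal("out"))  # -log2(0.6*0.90 + 0.4*0.01) ≈ 0.88 bits
```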

Researchers quantify comprehension difficulty by measuring the time it takes for readers to respond to various comprehension tasks. The longer the response time, the greater the challenge in understanding the sentence. Results from previous experiments showed that Futrell’s unified account better predicted readers’ comprehension difficulties than the two older models. However, his model did not specify which parts of a sentence we tend to forget and how exactly this memory retrieval bias clouds understanding.

Hahn’s new study fills these gaps. In the new paper, the MIT cognitive scientists joined Futrell to propose an expanded model grounded in a coherent theoretical framework. The new model identifies and supplies elements missing from Futrell’s unified account, and it provides new, fine-grained predictions that better match the results of empirical experiments.

As in Futrell’s original model, the researchers start from the assumption that, because of memory limitations, our minds do not perfectly represent the sentences we encounter. But to this they add a theoretical principle of cognitive efficiency: they propose that the mind tends to deploy its limited memory resources in a way that optimizes its ability to accurately predict upcoming words in sentences.
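One way to picture this efficiency principle is a toy encoder that, given a budget of k context words, brute-forces which words to retain so as to minimize the expected surprisal of the upcoming word. This is a sketch under invented distributions; the actual study optimizes memory representations with machine learning rather than by enumeration.

```python
import itertools
import math

FULL_CONTEXT = ["took", "trash", "kitchen", "days"]

# True distribution over the upcoming word given the full context (invented).
P_TRUE = {"out": 0.80, "away": 0.15, ".": 0.05}

def p_model(retained, word):
    """Reader's prediction given only the retained words: a crude lookup
    keyed on whether the crucial verb survived in memory (invented numbers)."""
    if "took" in retained:  # verb retained: the particle is expected
        return {"out": 0.75, "away": 0.15, ".": 0.10}[word]
    return {"out": 0.10, "away": 0.10, ".": 0.80}[word]  # verb forgotten

def expected_surprisal(retained):
    """Average number of bits by which the reader will be surprised."""
    return sum(p * -math.log2(p_model(retained, w)) for w, p in P_TRUE.items())

def best_encoding(k: int):
    """The k-word memory that minimizes expected surprisal."""
    return min(itertools.combinations(FULL_CONTEXT, k), key=expected_surprisal)

print(best_encoding(2))  # an optimal two-word memory keeps the verb "took"
```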

This view leads to several empirical predictions. One key prediction is that readers compensate for their imperfect memory representations by relying on their knowledge of the statistical co-occurrence of words to implicitly reconstruct the sentences they read. Sentences containing rarer words and phrases are therefore harder to remember perfectly, which in turn makes upcoming words harder to predict. As a result, such sentences are generally harder to understand.
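As a toy illustration of this reconstruction bias (again our own sketch, not the study’s code): a reader who forgets a word and fills the slot with the statistically most likely candidate will systematically “remember” rare words as common ones.

```python
from collections import Counter

# How often each onset noun takes a "that"-complement clause in a
# hypothetical corpus; counts are invented for illustration.
NOUN_THAT_COUNTS = Counter({"fact": 900, "idea": 300, "report": 40})

def reconstruct_forgotten_onset() -> str:
    """A reader who forgot the onset noun rationally guesses the noun that
    most often introduces a 'that'-clause."""
    return NOUN_THAT_COUNTS.most_common(1)[0][0]

# A sentence that really began "The report that ..." tends to be
# misreconstructed as "The fact that ...", distorting later predictions.
print(reconstruct_forgotten_onset())  # -> "fact"
```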

To assess whether these predictions match human linguistic behavior, the researchers turned to GPT-2, an artificial intelligence tool based on neural-network modeling. This machine-learning tool, first made public in 2019, allowed the researchers to test the model on large-scale text data in ways that were not previously possible. But GPT-2’s powerful modeling capabilities also created a problem: unlike humans, GPT-2 has a pristine memory that perfectly represents every word it processes, even in very long and complicated texts. To better characterize human language understanding, the researchers added a component that simulates human-like limitations on memory resources – as in Futrell’s original model – and used machine-learning techniques to optimize how those resources are used – as in their newly proposed model. The resulting model retains GPT-2’s ability to accurately predict words most of the time, but shows human-like failures on sentences containing rare combinations of words and phrases.
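A rough sketch of this kind of setup (not the authors’ released code) might compute GPT-2 surprisals with the Hugging Face transformers library and then degrade the context before scoring. Here, random word deletion is a crude stand-in for the learned, optimized memory representations used in the paper.

```python
import math
import random

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal_of_last_word(text: str) -> float:
    """Surprisal in bits that GPT-2 assigns to the final token of `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position -2 are GPT-2's prediction for the final token.
    log_probs = torch.log_softmax(logits[0, -2], dim=-1)
    return -log_probs[ids[0, -1]].item() / math.log(2)

def lossy_surprisal(context_words: list[str], target: str,
                    n_samples: int = 20, p_keep: float = 0.8) -> float:
    """Approximate lossy-context surprisal: average the probability of
    `target` over noisy contexts in which each word independently survives
    with probability p_keep, then take -log2 of the average."""
    probs = []
    for _ in range(n_samples):
        kept = [w for w in context_words if random.random() < p_keep]
        kept = kept or context_words[:1]  # keep at least one word
        probs.append(2.0 ** -surprisal_of_last_word(" ".join(kept + [target])))
    return -math.log2(sum(probs) / len(probs))
```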

“This is a great illustration of how modern machine learning tools can help advance cognitive theory and understand how the mind works,” Gibson says. “We couldn’t have done this research here even a few years ago.”

The researchers fed the machine-learning model a set of sentences with complicated embedded clauses, such as: “The report that the doctor whom the lawyer did not trust irritated the patient was surprising.” They then replaced the onset nouns of these sentences – “report” in the example above – with other nouns, each differing in how likely it is to be followed by a complement clause. Some onset nouns made their sentences easier for the AI program to “understand.” For example, the model predicted the endings of these sentences more accurately when they began with the more common phrasing “The fact that” than when they began with the rarer phrasing “The report that.”
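A hypothetical version of that noun-swap comparison, reusing surprisal_of_last_word from the sketch above (the frame sentence is our own stand-in, not one of the study’s stimuli), would look like this; the expectation is lower surprisal for the ending after the clause-friendly onset “The fact that.”

```python
# Hypothetical noun-swap comparison, reusing surprisal_of_last_word above.
frame = "{} the doctor whom the lawyer did not trust irritated the patient was surprising"
for onset in ("The fact that", "The report that"):
    print(onset, round(surprisal_of_last_word(frame.format(onset)), 2))
```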

The researchers then set out to confirm the AI-based results by conducting experiments with participants who read similar sentences. Their reading times on the comprehension tasks matched the model’s predictions. “When sentences begin with the words ‘The report that,’ people tend to remember them in a distorted way,” Gibson says. The rarer phrasing taxed their memory and, as a result, impaired their comprehension.

These results show that the new model outperforms existing models in predicting how people process language.

Another advantage of the new model is its ability to make different predictions for different languages. “Previous models could explain why certain linguistic structures, such as embedded clauses, may be generally more difficult to handle under memory constraints, but our new model can explain why the same constraints behave differently across languages,” says Levy. “For example, sentences with embedded clauses seem easier for native German speakers than for native English speakers, because German speakers are accustomed to reading sentences in which subordinate clauses move the verb to the end of the sentence.”

According to Levy, further research on the model is needed to identify sources of imperfect sentence representation beyond embedded clauses. “There are other kinds of ‘confusions’ that we need to test for,” he says. At the same time, Hahn adds, “the model may predict other ‘confusions’ that nobody has even thought of. We’re now trying to find them and see whether they affect human comprehension as predicted.”

Another question for future research is whether the new model will lead to a rethinking of a long line of research focusing on difficulties of sentence integration: “Many researchers have highlighted difficulties associated with the process by which we mentally reconstruct linguistic structures,” says Levy. “The new model suggests that the difficulty lies not in the process of mentally reconstructing these sentences, but in maintaining their mental representations once they are constructed. A fundamental question is whether these are two separate things or not.”

Either way, Gibson adds, “this kind of work marks the future of research on these issues.”
