What words can convey | MIT News

From search engines to voice assistants, computers are getting better at understanding what we mean. That is thanks to language-processing programs that make sense of a staggering number of words without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics, and a new study shows that this computational approach can assign many kinds of information to a single word, much as the human brain does.

The study, published April 14 in the journal Nature Human Behaviour, was led by Gabriel Grand, a graduate student in electrical engineering and computer science affiliated with MIT's Computer Science and Artificial Intelligence Laboratory, and Idan Blank PhD '16, an associate professor at the University of California, Los Angeles. The work was supervised by McGovern Institute for Brain Research investigator Ev Fedorenko, a cognitive neuroscientist who studies how the human brain uses and understands language, together with Francisco Pereira of the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models shows how much can be learned about the world through language alone.

The researchers began analyzing statistics-based language-processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarity of word meanings. For example, such a program might conclude that “bread” and “apple” are more similar to each other than either is to “notebook,” because “bread” and “apple” often appear near words such as “eat” or “snack,” while “notebook” does not.
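To make that idea concrete, here is a minimal sketch, not the authors' code, of how co-occurrence counts can signal similarity; the tiny count table and all the numbers in it are invented purely for illustration.

```python
# A toy illustration of co-occurrence-based similarity. The counts are invented.
import numpy as np

# Rows: target words; columns: how often each appears near the context words
# "eat", "snack", "write", "page" in some hypothetical corpus.
cooccurrence = {
    "bread":    np.array([12.0, 7.0, 0.0, 1.0]),
    "apple":    np.array([10.0, 9.0, 1.0, 0.0]),
    "notebook": np.array([0.0, 1.0, 8.0, 11.0]),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 for similar usage patterns, near 0.0 for unrelated ones."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(cooccurrence["bread"], cooccurrence["apple"]))     # high: similar contexts
print(cosine(cooccurrence["bread"], cooccurrence["notebook"]))  # low: different contexts
```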

The models were clearly good at measuring words’ overall similarity to one another. But most words carry many kinds of information, and their similarities depend on which qualities are being evaluated. “People can come up with all these different mental scales to help organize their understanding of words,” explains Grand, a former undergraduate researcher in Fedorenko’s lab. For example, he says, “dolphins and alligators might be similar in size, but one is much more dangerous than the other.”

Grand and Blank, who was then a graduate student at the McGovern Institute, wanted to know whether the models captured that same nuance. And if so, how was the information organized?

To find out whether the information represented in these models matched people’s understanding of words, the team first asked human volunteers to rate words on a variety of scales: did those words refer to something big or small, safe or dangerous, wet or dry? Then, having mapped where people placed different words on these scales, they checked whether the language-processing models did the same.

Grand explains that distributional semantic models use co-occurrence statistics to organize words into a huge, multidimensional matrix. The more similar words are to each other, the closer they sit in that space. The space has a great many dimensions, and there is no inherent meaning to its structure. “There are hundreds of dimensions in these word embeddings, and we have no idea what any dimension means,” he says. “We’re really trying to peer into this black box and say, ‘Is there structure in here?’”
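As a rough illustration of such an embedding space, the sketch below loads publicly available GloVe vectors through the gensim library; these are one example of distributional word embeddings, not necessarily the exact model the study analyzed, and downloading them requires an internet connection.

```python
# A sketch of querying a pre-trained word-embedding space (assumes gensim is installed).
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-300")   # 300-dimensional word vectors

print(wv["dolphin"].shape)                 # (300,) -- no single dimension has an obvious meaning
print(wv.similarity("bread", "apple"))     # similar words sit close together in the space...
print(wv.similarity("bread", "notebook"))  # ...while unrelated words sit farther apart
print(wv.most_similar("dolphin", topn=3))  # nearest neighbors in the embedding space
```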

Specifically, they asked whether the semantic scales they had given their volunteers were represented in the model. So they looked at where words in the space lined up along vectors defined by the extremes of those scales. For example, where did dolphins and tigers fall on the line running from “big” to “small”? And were they closer together along that line than they were along the line representing danger (“safe” to “dangerous”)?
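A minimal sketch of that projection idea, using invented three-dimensional vectors in place of real embeddings (which have hundreds of dimensions), might look like this:

```python
# A sketch of projecting word vectors onto a scale defined by two endpoint words.
# The 3-dimensional vectors are invented for illustration, not taken from any model.
import numpy as np

emb = {
    "small":     np.array([0.1, 0.9, 0.2]),
    "big":       np.array([0.9, 0.1, 0.3]),
    "safe":      np.array([0.2, 0.3, 0.9]),
    "dangerous": np.array([0.8, 0.4, 0.1]),
    "dolphin":   np.array([0.6, 0.4, 0.7]),
    "tiger":     np.array([0.65, 0.35, 0.2]),
}

def position_on_scale(word, low, high):
    """Scalar position of `word` along the low->high direction (larger = closer to `high`)."""
    axis = emb[high] - emb[low]
    axis = axis / np.linalg.norm(axis)
    return float(emb[word] @ axis)

# With these toy vectors, dolphins and tigers land close together on the size scale...
print(position_on_scale("dolphin", "small", "big"), position_on_scale("tiger", "small", "big"))
# ...but far apart on the danger scale.
print(position_on_scale("dolphin", "safe", "dangerous"), position_on_scale("tiger", "safe", "dangerous"))
```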

Across more than 50 sets of word categories and semantic scales, they found that the model ordered words much as the human volunteers had. Dolphins and tigers were judged to be similar in size but far apart on scales measuring danger or wetness. The model had organized words in a way that encoded many kinds of meaning, and it had done so based purely on word co-occurrences.
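One simple way such an agreement could be quantified, sketched below with entirely hypothetical numbers, is a rank correlation between human ratings and model-derived positions on a scale.

```python
# A sketch of comparing human ratings with model-derived scale positions.
# All numbers here are hypothetical; they do not come from the study.
from scipy.stats import spearmanr

words = ["mouse", "dolphin", "tiger", "elephant"]
human_size_ratings = [1.0, 6.0, 6.5, 9.5]        # hypothetical 1-10 "size" ratings
model_size_scores = [-0.30, 0.42, 0.47, 0.88]    # hypothetical projections onto a small->big axis

rho, p_value = spearmanr(human_size_ratings, model_size_scores)
print(f"Spearman rho = {rho:.2f}")  # a high rho means the model orders words much as people do
```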

That, according to Fedorenko, says something about the power of language. “The fact that we can recover so much of this rich semantic information from these simple word co-occurrence statistics suggests that this is one very powerful source for learning about things that you may not even have direct perceptual experience with.”
