How AIs Understand Words

Researchers at EPFL have created a mathematical model that helps explain how breaking language into sequences of tokens makes modern AI such as chatbots so good at understanding and using words.

There is no doubt that AI technology is dominating our world today. Progress seems to be moving in leaps and bounds, especially around large language models (LLMs) like ChatGPT.

But how do they work? LLMs are made up of neural networks that process long sequences of "tokens." Each token is typically a word or part of a word, and each is represented by a list of hundreds or thousands of numbers, what researchers call a "high-dimensional vector." This list captures the word's meaning and how it is used.
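To make that concrete, here is a minimal sketch, not taken from the article, of how a toy embedding table might turn a sentence into the sequence of vectors a network actually processes. The words, the numbers, and the tiny four-dimensional size are all invented for illustration; real models learn vectors with hundreds or thousands of dimensions and use sub-word tokenizers rather than simple splitting:

```python
# Toy embedding table: every token maps to a fixed list of numbers.
# All values here are made up; real embeddings are learned during training.
embeddings = {
    "the": [0.10, 0.31, -0.05, 0.22],
    "cat": [0.15, -0.22, 0.47, 0.09],
    "sat": [-0.08, 0.12, 0.33, -0.19],
}

sentence = "the cat sat"
tokens = sentence.split()  # real tokenizers also split words into sub-word pieces
vectors = [embeddings[t] for t in tokens]
print(vectors)  # the sequence of vectors the neural network actually sees
```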

For example, the word "cat" might become a list like [0.15, -0.22, 0.47, …, 0.09], while "dog" is encoded in a similar way but with its own unique numbers. Words with similar meanings get similar lists, so the LLM can recognize that "cat" and "dog" are more alike than "cat" and "banana."
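One common way to measure how "alike" two of these lists are is cosine similarity, which compares the directions the vectors point in. The sketch below, again using made-up four-dimensional vectors, shows how "cat" and "dog" end up closer to each other than either is to "banana":

```python
import math

def cosine_similarity(a, b):
    """Score near 1.0 means the vectors point the same way; near 0 or below, they don't."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up four-dimensional vectors, purely for illustration.
cat    = [0.15, -0.22, 0.47, 0.09]
dog    = [0.18, -0.19, 0.42, 0.11]   # numbers close to "cat"
banana = [-0.40, 0.35, -0.10, 0.52]  # numbers far from both

print(cosine_similarity(cat, dog))     # about 0.99: very similar
print(cosine_similarity(cat, banana))  # about -0.33: not similar at all
```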
