The pen is mightier
In the previous piece, I laid out my worries, admittedly still unfounded, about the threat Large Language Models may pose to our writing and speech. Here I want to explore the idea that LLMs could be used, deliberately or not, in a manner destructive to our languages. I will first look at what current science says about the link between words and mind, and at how words do or do not evolve, then try to connect those findings to LLMs to argue for or against the idea that they have the potential to alter our words, their meanings, and ultimately our minds.
For a long time, scientists and anthropologists believed languages acted like a permanent prism through which speakers perceived their world. It was argued that being born in a country and becoming a native speaker of a language determined how a person would see the world; this was considered automatic and inevitable. Words and language were thought to directly alter the very signals reaching our eyes and ears, converting the raw signal into meaning and perception. Russian speakers have two words for blue: goluboy (light blue) and siniy (dark blue). A study found that Russian speakers could distinguish shades of blue faster than English speakers. Their language had primed them for the difference, so their brains were faster at seeing and interpreting the color signal.
More recent studies agree that words do have an impact on our perception of the world, but found that words do not filter or completely override raw signals from our senses. Instead, they feed back into those signals once they reach the brain, subtly shifting our perception of reality. (1) This process is automatic, subconscious, and — as I will argue — (Yes, I’m using em dashes.) highly vulnerable to the influence of the linguistic models we interact with daily.
Another study found that knowing a word for an item or concept helps our brain recognize that item or concept through our senses. If you prime your brain by reading the word “chair”, it will then detect chairs in your environment slightly faster than it would have without priming. (2) This points to the idea that raw senses and words all merge into our consciousness or mind, giving potentially tremendous power to words. The pen might actually be mightier than the sword.
When people study a foreign language, they initially tend to ask for the translation of a word they need, implying a belief that a word alone bears meaning and that asking for the same word in the other language suffices. But words do not carry their meaning by themselves. The context provided by the surrounding words nudges the word toward one of its multiple meanings. Cool is a cool word, because sometimes it is cool, and sometimes it is colder. The context and frequency of appearance of a word make it feel empty or give it its richness. Frequently using a word in a weaker or flatter context than the norm will, over time, gradually alter the perception the word generates in one’s mind. Let me illustrate.
Imagine a world where Lays, the chips company, creates a new chip variant called “Utopia”. Now imagine the only frequent occurrence of “Utopia” is the name of the Lays chips, and rarely if ever the idea of an ideal world. The word utopia will then first evoke the image of the food instead of the concept of an ideal world. As time passes and the brand reinforces itself in our minds, the ideal-world concept will progressively become weaker and fuzzier, potentially to the point of extinction. This obviously requires time and a scale never seen before: the original sense of utopia would have to become very infrequent, and the food item very common, for some time before the original meaning degraded enough to become vague and stop conveying itself properly. But LLMs could be the engine required to achieve that unprecedented scale, generating billions of words every day and exposing millions of human readers to them. I will call this phenomenon “Linguistic Terraforming”. I think it sounds cool, although terrifyingly bleak. Just as we would alter the composition of the atmosphere of Mars to make it habitable to humans, LLMs could be altering the composition of our vocabulary and languages, making them a favorable environment for corporate-approved or hollowed-out words. (3)
Conversely, words used frequently are more resistant to change and evolution. Irregular verbs that are used frequently stay irregular and do not evolve; irregular verbs used less often tend to slowly adopt the regular form. The verb forecast (past tense forecast) seems to be evolving toward forecasted, which already appears in some dictionaries. (4)
This means that if a word is used less and less in contexts that carry its complex or rich meaning, and more and more in contexts carrying a shallower, simpler one, the shallow meaning will eventually become the default. The complex meaning could survive but become rare, or disappear completely. This reinforces my earlier utopia illustration.
Conclusion
I argue that words are directly connected to our perception of reality, that their meaning and interpretation are not static, and that exposure shapes a word over time. Repeated exposure to a word with a specific meaning or context from a specific source will, over time, influence the way that word is interpreted by the reader, and directly influence that person's perception of the world. This article will be the foundation for my exploration of the threats large language models could pose to our writing and speech, and thus our minds.
If you found this research valuable, you can support my independent work with an espresso.
Bibliography
(1) Linguistically Modulated Perception and Cognition: The Label-Feedback Hypothesis. Frontiers in Psychology. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2012.00054/full#h10.
(2) Gary Lupyan. Projects. https://www.sas.upenn.edu/%7Elupyan/projects.html.
(3) A context-sensitive and non-linguistic approach to abstract concepts. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC9791476/#abstract1.
(4) Verbal evolution: The more you say a word, the less likely it will change. The Christian Science Monitor. https://www.csmonitor.com/2007/1025/p14s01-stgn.html.