Diplo
Survive the AI jargon tsunami: Find shelter in your mother tongue

English is the undisputed lingua franca of AI. Machine learning, neural networks, AI agents, and other core concepts are conceptualised and deployed in English. Yet English, for all its power and practicality as a medium for AI, is being swamped by a hype-driven language tsunami: the more we talk about AI, the less meaning our words seem to carry. Like any inflation, this one reduces the value of the inflated object: the language we use to deal with AI.

A rescue can come from an unexpected direction: our native languages. The AI lingo bubble can be deflated through the mother tongue we learned before we ever touched digital technology. It can help us address AI with common sense, critical thinking, and sharp insight. 

The power of our inner language

Native language is our inner language, the very scaffolding of our thoughts, as argued by the famous linguist Lev Vygotsky1 and supported by the Sapir-Whorf hypothesis.2

Let me start with myself. I developed my cognitive apparatus for understanding nature and social relations through Serbian. But I think about digital and AI issues in English, the language through which I grasped the core concepts of the field. Our ‘inner language’ is not fixed; if we speak more than one language, we can catch ourselves wondering in which language we think (form our ideas).

Inner language re-emerged as an aid in dealing with the gap between hype-inflated language about AI and the reality of this technology. I started a personal ‘mental experiment’: using Serbian, my native language, to think about AI and to develop linguistic immunity to AI hype. I found inspiration in observing the Maltese language and culture during my decade-long stay in Malta (1991-2002).

The Maltese language is layered like geological sediment.3 Its foundation is Semitic, established during Arab rule (870-1091 AD), and includes its core grammar, kinship terms, body parts, and basic verbs. After the Norman conquest in 1091, it was heavily exposed to Italian (specifically Sicilian), which shaped its food, culture, and abstract terminology. The latest layer is English, introduced in the 1800s, influencing the language of technology, science, and governance. In their daily lives, the Maltese fluidly switch between these three layers, which inherently impacts how they cognitively frame a problem.4

This linguistic multilayering inspired me to peel back the layers of jargon and convention that obscure my understanding of AI. The real, non-technical meaning started to appear in Serbian, which I often translate back into English (e.g. my explanation of AI through flags).

However, my attempt to extend this experiment to a wider group hasn’t been easy. The intuitive reaction is to revert to English. Some find it impractical and react with a ‘WTF’. Others pragmatically use ChatGPT to translate English terms into Serbian, which misses the point of the exercise entirely. But here and there, I have had successes that were truly illuminating.

Getting to the core meaning

Here is one example of getting to the core meaning of the trendy concept of AI contextual engineering (CE). By thinking in Serbian, I stripped away the trappings of ‘AI lingo’ and got back to a simpler English explanation of contextual engineering.5

First, I started with a professional definition…

Contextual Engineering (CE) employs structured Data Units (DUs) within a Retrieval-Augmented Generation (RAG) pipeline to dynamically aggregate semantically relevant information. This is achieved by interfacing with external Knowledge Bases (KBs) via a Model Context Protocol (MCP), enabling federated access to heterogeneous data sources (e.g., APIs, SQL/NoSQL DBs, or document stores). The MCP facilitates secure, low-latency data ingestion through standardised connectors (REST/gRPC) while implementing QoS policies for bandwidth optimisation. A Large Language Model (LLM) performs subsequent semantic synthesis using transformer-based NLP to generate context-aware outputs. The E2E process leverages ANN search for vector similarity matching within the RAG framework, ensuring high-recall information retrieval before LLM inference.

… and tried to explain it in Serbian…

Kontekstualno inženjerstvo je razumevanje pasusa ili rečenice u kontekstu dužeg teksta i drugih relevantnih podataka. Na primer, odgovor na naša pitanja veliki jezički model (veštačka inteligencija) može pružiti tako što mu pošaljemo pitanje, osnovni pasus, širi tekst i druge relevantne podatke.

… landing in an English translation…

Contextual Engineering (CE) is about understanding one paragraph (data unit) in the context of a longer text (RAG) and other relevant data (KBs). This paragraph, broader text, and other pertinent data can be collected and sent to an LLM with a request (prompt) to generate answers to our questions.

This shift to a native language stripped away the heavy terminology and led to a common-sense answer that can pass the Feynman test of explaining a complex problem to a child.
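To make the plain-language version concrete, here is a toy sketch of that workflow in Python. It is an illustration only: a simple word-overlap score stands in for real vector similarity search, the three-sentence knowledge base is invented, and the final LLM call is represented by the assembled prompt string rather than an actual API request.

```python
def score(question, paragraph):
    """Count words shared between the question and a paragraph (toy relevance)."""
    q = set(question.lower().split())
    p = set(paragraph.lower().split())
    return len(q & p)

def build_prompt(question, knowledge_base, top_k=2):
    """Pick the top_k most relevant paragraphs and wrap them around the question."""
    ranked = sorted(knowledge_base, key=lambda p: score(question, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# An invented three-paragraph "knowledge base".
kb = [
    "Malta was under Arab rule from 870 to 1091.",
    "Maltese grammar has a Semitic foundation.",
    "English influences Maltese technical vocabulary.",
]

prompt = build_prompt("When was Malta under Arab rule?", kb, top_k=1)
```

A production pipeline would replace `score` with embedding-based similarity and send the returned prompt to an actual model, but the shape of the process is the one the plain-language definition describes: paragraph, broader context, and question assembled into a single request.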

The risks of professional ‘turf language’

Many problems with inflated AI lingo are not new. Professions develop their own languages to optimise communication. Acronyms and established cognitive frames make communication among insiders easier and faster. Often, professional language becomes a method of turf protection.

I have seen this repeatedly, from established fields such as multilateral diplomacy to emerging ones such as internet governance. In International Geneva, you find dozens of these ‘turf languages’ for trade, climate change, human rights, health, and more. Each community coalesces around roughly 50 key acronyms, terms, and document references that outline the parameters of its professional turf.

The risk of camouflaging AI behind ‘turf language’ is particularly relevant as AI impacts the very basics of social life, jobs, and human wellbeing.

Tension between deterministic software development and probabilistic AI

In addition to the loss of meaning caused by inflated AI language, there is another, deeper tension: between the deterministic language of programming and the probabilistic nature of AI. Software is developed through explicit, step-by-step procedures. AI is a ‘guessing’ machine built around probable, not guaranteed, results or outcomes. Dealing with AI thus requires a different set of skills from traditional programming, including a mix of intuition and common sense.

The in-built tension is between, on the one hand, the deterministic approach to building AI’s technical infrastructure: setting up servers, generating vector databases, creating agents; and, on the other hand, the different set of cognitive skills needed to navigate the probabilistic nature of AI in a specific knowledge domain, be it management, diplomacy, or medical science.

In this interplay between the determinism of computer programming and the probabilism of AI, our native language can help us access the ‘inner language’ of our understanding. Forcing ourselves to articulate complex AI concepts in a mother tongue less polluted with AI technical jargon engages a different cognitive process. We scaffold our thoughts about AI differently, just as Vygotsky suggested.

Practical next steps 

Explain concepts in your native language first. Describe what a large language model does (predicts the next word based on context) or what “retrieval‑augmented generation” means (finding relevant documents before generating an answer). Use metaphors that resonate within your culture.

Develop an AI team with native speakers in your language. If you can discuss AI with others in your native language, it will increase the sharpness and depth of your thinking about AI. Language is a social medium which is enriched through exchanges.

Alternate between languages when reading or teaching AI. This can enhance executive control and help identify gaps in understanding.

Be aware of professional jargon and challenge it. Ask whether terms like contextual engineering hide simple ideas.
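The first of these steps, explaining in plain words what a large language model does, can itself be illustrated with a toy sketch: a bigram model that ‘predicts the next word based on context’, where the context is nothing more than the previous word. The ten-word corpus and all names here are invented for illustration; real LLMs are vastly more sophisticated, but the core idea of prediction from context is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: the simplest possible notion of "context".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Explaining AI at this level, in whichever language you think in, is exactly the kind of jargon-free articulation the steps above call for.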

And what if your native language is English? You may have a slight disadvantage of not having another language as a tool, but you can still follow the same logic. The goal is to develop a new kind of literacy for the AI era by consciously bypassing the clichés and AI hype.