Students can now submit work that is impeccable in structure and argument, sometimes even brilliant. Yet when questioned, a discomfort emerges: they struggle to explain what they have truly understood, to justify their choices, and to relate their work to personal experience or concrete situations. Artificial intelligence (AI) is not always the direct cause of this situation, but it reveals a powerful truth: while producing information has never been easier, understanding what one is doing has never been more demanding.
Knowing, Understanding, and Comprehending: A Central Distinction
In the era of AI, education cannot simply be about accumulating knowledge. It requires clarifying the meanings of knowing, understanding, and comprehending, and examining how these dimensions interact in the learning process.
Two major epistemological traditions shed light on this distinction. The scientist and philosopher Michael Polanyi showed that all human knowledge includes a tacit dimension rooted in experience, action, and engagement. “We know more than we can tell,” he wrote, emphasizing that understanding often precedes explicit formulation. This active, often implicit knowledge is built through doing, trying, making mistakes, and confronting reality.
Conversely, the philosopher Gaston Bachelard established that scientific knowledge is not a simple extension of experience. It requires a break with initial assumptions and opinions, achieved through rational, critical, and abstract construction. “Science does not proceed from opinion,” he noted, underscoring the need to pose problems rather than merely gather answers.
Forming a “well-rounded head,” then, involves neither accumulating abstract knowledge nor relying on raw experience alone. It means holding the two dimensions in balance: lived experience and conceptual construction, action and reflexivity.
What AI Can Do – and What It Cannot
Artificial intelligence systems excel where knowledge is formalizable: calculations, synthesis, reproduction, and formatting. They increasingly handle explicit, stable, and calculable knowledge. However, they operate within a specific regime: statistical correlation and the generation of plausible statements.
AI does not know the world, nor does it comprehend it. It has no experience, no embodied connection to reality, no access to the plural conditions of the phenomena it describes. The information it generates is probabilistic, resting on likelihood calculations drawn from statistical correlations rather than on an understanding of causes; contingent, dependent on its data, its contexts of enunciation, and its technical parameters; and revisable, liable to be corrected, contradicted, or reformulated at any moment without any internal progress in comprehension.
This distinction is central to contemporary work on educational uses of AI, which indicates that poorly regulated automation of cognitive tasks can erode the exercise of critical judgment.
As AI productions become more convincing, the risk grows of mistaking formal coherence for genuine comprehension, and of settling for what merely sounds true instead of a prudent statement that invites dialogue.
Measuring Complexity: A Skill to Learn
Given this scenario, a major educational challenge emerges: the ability to measure complexity, that is, to distinguish surface information from structured understanding and to grasp the depth of a problem, a system, or a situation.
This ability is not innate but gradually developed through real-world experience. It requires active engagement between theoretical anticipation and concrete realizations. Judgment criteria sharpen in the gap between the model and experience, cultivating a truly situated intelligence that combines formalized knowledge and lived understanding.
Knowledge becomes effective only when tested, held in tension with reality, and adjusted in light of its resistance and surprises. Conversely, raw experience, if not taken up within a reflective and conceptual framework, remains silent and hard to convey. Education must therefore organize a demanding back-and-forth between theory and practice, abstraction and embodiment.
A Pedagogical Revolution as Much as a Technological One
AI does not merely transform tools; it touches the core of higher cognitive functions: memory is outsourced, information is instantly accessible, and apparent reasoning can be generated on demand. Unlike previous technologies, which amplified pre-existing human capabilities, AI now reshapes the balance itself.
Consequently, the educational challenge shifts from producing or presenting information to evaluating its depth, coherence, conditions of validity, and real-world effects. This transformation echoes the sociologist Edgar Morin’s analyses of complex thought, which call for minds capable of connecting, contextualizing, and confronting uncertainty rather than reducing reality to simplified answers.
Recent studies in cognitive science and education indicate that substitutive use of AI can lead to excessive cognitive offloading, reducing intellectual engagement and long-term memorization, whereas reflexive, critical use can enhance learning.
Cultivating Engineers and Citizens Capable of Judgment
Cultivating a well-rounded head in the AI era means avoiding cognitive offloading and intellectual surrender. It involves forming individuals who can use powerful systems without subservience, maintaining a demand for meaning where machines only produce form.
At IONIS, the development of the IONIS Institute of Technology (I2T) on our campuses arises from a strong belief: while our engineering students must master AI technologies, they must also learn to test their limits through confrontation with the real world. Laboratories, workshops, and experiments become central places for the formation of judgment.
Training good engineers – and, more broadly, enlightened citizens – involves fostering a critical, measured, and evolving mindset, nurtured by concrete experience and trial and error. In the AI era, the crucial question is not just what we expect from the machine, but what we expect from humans: their capacity to understand, create, and decide judiciously in uncertain and technologically enhanced environments.