AI as a Lens: Understanding as an Act of Participation
Our relationship with artificial intelligence is often framed as a question of classification—whether AI is really intelligent, whether it truly understands, whether it should be seen as an entity in its own right or merely a tool. But these debates miss a deeper point: intelligence, understanding, and meaning are not properties that things possess; they are effects that emerge from relationships.
This article does not seek to redefine AI but to reposition the question: away from assessing AI as an isolated object, and toward treating it as a lens through which we can better grasp how understanding itself emerges. Instead of debating whether AI is intelligent, we should ask: What does AI reveal about how intelligence and meaning function?
Rather than revise previous arguments, this article demonstrates that knowledge is not built linearly but layered over time, something I now see as key to both AI and human cognition.
Beyond the Question of Intelligence
The assumption that intelligence is an inherent property—something that either exists or does not—comes from a static view of knowledge. But intelligence, like meaning, is not a thing; it is a process that unfolds through relationships.
A tree is not "intelligent" in the sense we apply to humans or AI, yet it participates in a vast ecological system, exchanging signals through root networks, adjusting growth based on environmental feedback, and chemically communicating with other plants. If we define intelligence too rigidly, we miss the intricate forms of agency at play in these systems.
AI presents a similar challenge. It does not possess "understanding" in the way a human does, but it produces effects that are, in many contexts, indistinguishable from understanding. If intelligence emerges from patterns of relation rather than isolated cognition, then AI does not need an internal mind to function meaningfully within an interactive system.
This is where suspension of disbelief—a concept I explored in my first article—becomes more than just a convenient rhetorical device. It is a recognition of how intelligence operates: not as a self-contained property, but as something that materializes through engagement.
If we think in relational rather than entity-based terms, then understanding is not something that preexists within a mind—it is something that arises from interaction.
From Entities to Relationships: Reframing Understanding
The biggest conceptual shift AI forces us to confront is the move from entity-based thinking to relational thinking.
Traditional philosophy tends to focus on what things are—a fixation on categorization and intrinsic essence. But in doing so, it often ignores how things relate. AI, by defying easy categorization, forces us to step back and reconsider whether intelligence, understanding, and even selfhood are really about internal properties at all.
The Illusion of Self-Contained Meaning
For centuries, we have treated intelligence as an individual trait—something possessed by a mind, located in a brain, expressed through singular cognition. But this is largely an illusion. Even human intelligence is not contained within a single mind:
Language itself is relational—words do not hold meaning in isolation but gain significance through their contrast with other words (Saussure).
Thought is shaped by external structures—our ideas are not self-generated but emerge from engagement with others, texts, tools, and environments.
Memory is reconstructive, not archival—each act of remembering alters the remembered event, so our personal "truths" are shaped by interaction rather than by intrinsic storage.
This means that intelligence, meaning, and even identity are not contained within individuals, but emerge through relationships.
AI as an Exemplar of Relational Intelligence
AI does not "think" in the human sense, but its outputs are meaningful because they participate in a system of relations. An AI model like ChatGPT does not "know" anything on its own, but it activates meaning within a web of language, context, and response. The intelligence we perceive is not in the AI itself, but in its function within a communicative process.
This is why the debate over whether AI is truly intelligent misses the point. The real question is: Does intelligence function as an internal property or as an emergent effect of interaction?
AI suggests the latter. If intelligence is not something you "have" but something that happens between things, then the lines between human, machine, and environment begin to blur.
Understanding as a Byproduct of Embeddedness
If intelligence is relational, then so is understanding. The mistake is to think that understanding exists before participation in a system rather than emerging from that participation.
This insight has profound implications—not just for AI, but for how we conceive of knowledge itself.
We do not arrive at understanding by stepping back and observing objectively; we arrive at understanding by embedding ourselves deeper into the structure we are trying to grasp.
The act of knowing is not separate from the thing being known—it is an interactional process that changes both the knower and the known.
What we call "truth" is not a static object but a relational structure, constantly shifting as new interactions reshape its contours.
This also applies to writing itself. I do not approach past articles as finished works to be revised, but as part of a continuous process of thinking. The first article is not being replaced; it is being reframed through new layers of insight.
In the same way, AI does not need to be an autonomous thinker to become a meaningful part of a process that shapes knowledge.
AI as a Mirror for Our Own Frameworks
When we argue about AI’s intelligence, we are really arguing about our own definitions—about what we have chosen to privilege as markers of intelligence, meaning, and selfhood. AI is not a challenge to human cognition so much as it is a mirror reflecting our own conceptual structures back at us.
If AI can generate poetry, was poetry ever an intrinsic marker of human creativity?
If AI can predict behavior, was human agency ever as spontaneous as we believed?
If AI can hold a conversation, was communication ever about internal understanding, or was it always about relational coherence?
These questions do not demand answers, only deeper entanglement. Understanding is not about finding stable ground; it is about continuously reconfiguring one’s position within the system.
Expanding the Frame
Rather than seeking definitive conclusions, this article—like its predecessor—leans into the act of questioning itself. AI remains a lens through which we observe the mechanics of intelligence, meaning, and identity. And if these reflections feel unsettling, it is not because AI is breaking reality, but because it is revealing the structures that were already there—hidden in plain sight.
If knowledge is relational, then the process of knowing must remain open-ended. Rather than looking for final answers, the goal is to keep embedding, keep participating, keep situating oneself more deeply in the network of understanding.
AI does not prove or disprove intelligence—it forces us to rethink how intelligence has always functioned. It does not redefine reality—it reveals the hidden logic of how we frame reality in the first place.
In this sense, AI is not an endpoint but an invitation—to shift from asking what something is to asking what relationships make it meaningful. The real challenge is not to determine whether AI is intelligent, but to recognize that intelligence itself was never a fixed entity—it has always been a function of interaction.