
The Age of False Intelligence

  • Mar 18
  • 8 min read

There is a scene in almost every film you have ever loved that does not technically exist. With a standard 180-degree shutter, the most common setting in cinema, the frame is dark exactly half the time. Light hits the screen, then cuts to black, then light again, twenty-four times a second. Your brain receives less than it thinks it does and quietly invents the rest. You are not watching a movie. You are co-authoring one, filling in the darkness with your own expectations, and you never once notice the gap.
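The arithmetic is easy to check. Here is a back-of-the-envelope sketch using the post's own numbers, 24 frames per second and a 180-degree shutter; the function name and values are purely illustrative:

```python
# Illustrative only: how long each frame is light vs. dark
# at a given frame rate and shutter angle.

def shutter_timing(fps: float = 24.0, shutter_angle: float = 180.0):
    frame_duration = 1.0 / fps                         # seconds per frame
    lit = frame_duration * (shutter_angle / 360.0)     # light on screen
    dark = frame_duration - lit                        # the gap you never see
    return lit, dark

lit, dark = shutter_timing()
print(f"Per frame: {lit * 1000:.1f} ms light, {dark * 1000:.1f} ms dark")
# Per frame: 20.8 ms light, 20.8 ms dark -> half of every second is darkness
```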


The illusion of watching motion pictures. The illusion of intelligence.

I have spent years working at a level where the gap between what an audience perceives and what was actually captured becomes a technical problem to solve. I can look at footage and tell you the resolution, the camera it was shot on, the lens, the lighting setup, the post-processing choices made in the grade and beyond. I have been the person that expert cinematographers and photographers brought their unresolvable problems to, at places like Samy's Camera and Division Camera. At that level, you stop seeing the illusion and start seeing the machinery underneath it. The gap between what an audience believes they saw and what actually existed in front of the lens is not an accident. It is a craft.


In my previous post I argued that the AI industry has scaled a communication tool and called it intelligence, and that the only honest path to genuine machine intelligence runs through real-world sensory grounding, not language. This post is about what happens when that gap, between what AI is and what we have decided to believe it is, gets deployed at scale into the real world. We are living through what I think history will call the Age of False Intelligence. Not because the technology is fraudulent. Because the story we are telling ourselves about it is. And we are filling in the dark frames with science fiction instead of science.


The Audience Completes the Picture


Cinema works because the human brain is not a recording device. A camera captures light mechanically and stores what it receives. The brain does something fundamentally different: it maps. It builds an active, predictive model of the environment from incomplete sensory input, fills in what is missing based on prior experience, and presents the result to consciousness as seamless reality. This is why you perceive continuous motion from a series of still frames. The continuity is not in the footage. It is in you.


Projecting our own intelligence onto LLMs and calling it Artificial Intelligence.

The same mechanism that makes cinema possible makes us extraordinarily susceptible to a specific kind of deception: anything that presents enough of the surface pattern of intelligence will be processed as intelligence. Before scrutiny arrives. Before the critical faculty engages. The projection happens first, automatically, because that is how the mapping system works.


Large language models produce language that sounds exactly like it came from a mind that understands the world. They predict the next token in a sequence based on statistical patterns and learned vector relationships across a training corpus so large it encompasses most of recorded human thought, and they do nothing else. They have no world model. No ground truth. No persistent representation of reality beyond the context of a single conversation. What they have is the surface pattern, more convincingly rendered than anything we have ever built.
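To see how little is happening mechanically, here is the whole operation in miniature. A toy sketch with a five-token vocabulary and invented scores, nothing like a production model in scale, but the loop is the same one, repeated once per token:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 5-token vocabulary and made-up scores a model
# might assign for "the sky is ___". Real models score ~100k tokens.
vocab = ["blue", "falling", "wet", "language", "grounded"]
logits = np.array([4.0, 1.5, 0.5, -1.0, -2.0])

def next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over scores, then sample. That is the entire step."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

print(vocab[next_token(logits)])  # usually "blue": pattern, not understanding
```

Nothing in that loop consults a world. It consults a frequency structure and emits the most plausible continuation.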


Our brains do the rest. We project understanding onto fluency. We assume that something which speaks like it knows must know. We fill in the dark frames with what we expect to find. The result is a collective illusion running at civilizational scale, and most of us are too deep inside it to see the cuts.


We are not interacting with a mind. We are co-authoring the illusion of one, filling in the gaps the way an audience fills in the frames, and mistaking our own completion for the film.

The Cinematographer Who Left the Set


In November 2025, Yann LeCun walked out of Meta.

LeCun is not a peripheral voice. He is one of three researchers who won the Turing Award in 2018 for the foundational work on deep learning that every major AI system is built on. He spent twelve years running Meta's AI research division. And for the last three of those years he had been saying publicly, without diplomatic softening, that the industry was filming the wrong scene.


LLMs, he argued, are a statistical trick. Systems that predict the next word so convincingly they pass for understanding, while understanding nothing. His alternative, the Joint Embedding Predictive Architecture (JEPA), does not predict tokens. It learns how environments change over time, building representations of physical cause and effect, something closer to what the brain actually does when it models the world rather than describes it. The difference between a world model and an autocomplete is the difference between a mind that understands a scene and a camera that records its surface.
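For the architecturally curious, here is the JEPA idea reduced to a schematic. This is my own minimal sketch, not LeCun's code: encode two consecutive observations and train a predictor to match the next embedding, so the loss lives in representation space rather than in tokens or pixels. Real JEPA variants add machinery this toy omits, such as a momentum-updated target encoder, to keep the representations from collapsing:

```python
import torch
import torch.nn as nn

# Schematic sketch of the JEPA objective: predict the *embedding* of the
# next state, not the next token. Dimensions and data are invented.

class Encoder(nn.Module):
    def __init__(self, obs_dim: int = 32, emb_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
predictor = nn.Linear(16, 16)  # embedding(t) -> predicted embedding(t+1)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Fake "experience": pairs of consecutive observations of an environment.
obs_t = torch.randn(128, 32)
obs_t1 = obs_t + 0.1 * torch.randn(128, 32)  # the world changed a little

for step in range(100):
    z_t = encoder(obs_t)
    with torch.no_grad():              # target is an embedding, not pixels
        z_t1 = encoder(obs_t1)
    loss = ((predictor(z_t) - z_t1) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is where the loss is defined: over representations of the world changing, not over words about it.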


Meta had made its choice. In June 2025, Zuckerberg invested $14.3 billion in Scale AI and installed its 28-year-old CEO to run a new LLM-focused division, Meta Superintelligence Labs. LeCun, one of the most decorated researchers alive, would report to him. By November the distance between their views of what the technology actually was had become irreconcilable.


On March 10, 2026, four months after leaving, LeCun's new company AMI (Advanced Machine Intelligence) closed a $1.03 billion seed round at a $3.5 billion valuation. The largest seed round ever raised by a European startup. Twelve employees. No product. Investors include Bezos, Nvidia, Samsung, Toyota, Berners-Lee, Cuban, and Schmidt. Every check was a bet on one thesis: that the current paradigm is filming the wrong scene, and that the audience is going to notice eventually.


LeCun's stated timeline: within three to five years, world models will be the dominant architecture. Nobody in their right mind will use LLMs of the type we have today.


When the Dark Frames Are Operational


The Anthropic-Pentagon dispute is where the cinema metaphor stops being comfortable.


By late 2025, Claude was the only frontier AI system cleared for classified Pentagon networks. Intelligence analysis. Cybersecurity. Operational planning. The military had decided the film was real and built its workflow around it.



Then negotiations broke down. The Defense Department wanted unrestricted use for all lawful purposes. Anthropic refused two conditions: autonomous lethal weapons with no human in the targeting loop, and mass surveillance of American citizens. On February 27, Trump ordered federal agencies to stop using Anthropic's products. Hegseth designated the company a supply chain risk. Anthropic sued.


Dario Amodei said it plainly in Anthropic's public statement: frontier AI systems are not reliable enough to power fully autonomous weapons. He was right to hold that line. But the question that statement opens is the one nobody was publicly asking: if they are not reliable enough for autonomous weapons, what were we assuming about their reliability in the intelligence analysis and operational planning roles they were already filling?


A system with documented hallucination problems, no causal world model, no grounded understanding of the physical environments it was analyzing, embedded in the most consequential decision infrastructure on earth. The fight that became visible was about safeguards. The fight that did not become visible was about whether the technology was what anyone involved actually believed it to be.


That is not a gap in a film. That is a gap in a classified network. And the audience cannot fill it in.


The darkness between the frames is not a problem when you are watching a movie. It is a problem when you are making operational decisions inside it.

What the Audience Is Losing


The military context is the most extreme version, but the same dynamic is running everywhere these systems have been handed authority.

A 2025 peer-reviewed study by Michael Gerlich at SBS Swiss Business School, covering 666 participants across age groups and educational backgrounds, found a statistically significant negative correlation between AI tool usage and critical thinking scores. The mechanism identified was cognitive offloading: when a tool completes the picture for you, you stop developing the ability to complete it yourself. The effect was sharpest in younger users, who showed the highest reliance on AI and the lowest scores on independent reasoning.



A Microsoft and Carnegie Mellon study found that knowledge workers who trusted AI outputs engaged in dramatically less critical evaluation of the work they were accepting. An MIT Media Lab study found that students using AI writing tools showed measurably reduced engagement in brain regions associated with learning and memory consolidation compared to peers working without them.


This is the cost that does not show up in the benchmark comparisons or the earnings calls. When you outsource the completion to the machine, you stop building the capacity to complete anything yourself. The audience gets better at watching and worse at seeing. We are not just projecting intelligence onto a tool that does not have it. We are slowly offloading our own in the process.


The Name Is the Problem


LLMs are genuinely useful. Communication, drafting, summarization, coding assistance, research synthesis, translation. For anything where the substrate is language and the requirement is rapid pattern completion across a large corpus, these systems perform at a level that is real and will continue to improve. Treated honestly as the best information retrieval and language interface ever built, they deserve real respect.


The problem is not the tool. The problem is that we named it wrong.

We called it artificial intelligence because the name was exciting, because it attracted capital, because it made the story bigger than the product. That name carries an entire cosmology of science fiction and philosophical implication that the technology does not inhabit and cannot. And now the story is running the decisions. The story is what got embedded in classified networks. The story is what a billion dollars just bet against. The story is what the cognitive studies are now measuring the cost of.


The filmmaker in me recognizes exactly what this is. Sergei Eisenstein built an entire theory of cinema, montage, around the idea that meaning is not in the shot. It is in the cut between shots. The audience's mind creates the meaning from the juxtaposition, not from anything that was actually filmed. The AI industry has been running the same play. The intelligence is not in the model. It is in the gap between what the model produces and what the audience's mind completes it into. The cuts are invisible. The gaps disappear. The audience has been doing the work for years and crediting the screen.

The correction is coming. Performance saturation is appearing at the frontier. Benchmarks are losing credibility. The military is finding that fluency and reliability are not the same thing, and diverge exactly when it matters most. LeCun raised a billion dollars on the thesis that the foundation is wrong. The cognitive data is now a trend, not an anomaly. When an audience starts noticing the cuts, the spell breaks fast.



The companies that survive will be the ones that were honest about the distinction from the start. Building for environments where hallucination is not a product limitation to be managed but a mission failure that cannot be tolerated. Building intelligence grounded in the actual world rather than in the statistical shadow of language about it.


At Absentia that has been the frame since day one. Vision first, because light bouncing off matter is the most direct, highest-bandwidth signal the real world produces. No gap-filling. No projection. No dark frames passed off as footage.


The next post will take up the vocabulary the industry has built around AI, and why the terms we have inherited, AGI chief among them, now obscure more than they reveal.


If you want to see what building honest AI looks like in practice, visit absentiatech.com.



Emanouil Angelov is Co-Founder of Absentia Technologies and a screenwriter who has been writing films about artificial intelligence since 2017. His background spans professional filmmaking, cinematography, photography, marketing, and teaching. His work at Absentia is informed by the intersection of visual perception, linguistic theory, and AI architecture. To learn more, visit absentia.tech.

