Should we preserve "The Last AI-Naive Humans"?
6.9 billion people have never used generative AI. That number is shrinking every month. We are losing the only population that can tell us if any of this actually works.
I run content operations for a living. AI is in every workflow I touch. I use Claude, I use automation, I build systems that produce content at scale. I am writing this from inside the machine.
And I am writing it because of a number I cannot stop thinking about.
The number
16.3% of the world’s population used generative AI tools during the second half of 2025.
That is the figure from the Microsoft AI Economy Institute’s Global AI Adoption report, published January 8, 2026. It is the most comprehensive population-weighted measurement available. 147 countries. Aggregated telemetry adjusted for device market share, internet penetration, and country population.
16.3%.
That means 83.7% of the world’s population, roughly 6.9 billion people out of 8.28 billion, have never actively used ChatGPT, Claude, Gemini, Midjourney, Copilot, or any generative AI tool. For learning. For work. For anything.
One in six people on Earth has touched this technology. Five in six have not.
ChatGPT alone hit 900 million weekly active users as of February 2026. That sounds massive. It is still a fraction of the global population. The Global North sits at 24.7% adoption. The Global South at 14.1%. The gap is widening. The UAE leads globally at 64%. The United States ranks 24th at 28.3%. An estimated 2.2 billion people are still completely offline.
There is a naturally occurring control group of billions of human beings whose taste, judgment, and creative instincts remain untouched by AI-generated content.
It exists right now. It will not exist forever.
The contamination problem
Here is the situation as I see it.
Almost every knowledge worker under 50 in the Global North has now interacted with generative AI in some meaningful way. They have used it to write, to edit, to brainstorm, to code, to research. That exposure changes how they evaluate quality. It changes what they expect. It changes what they forgive.
Product teams feel this already. When you run A/B tests or user interviews, the feedback you get is increasingly filtered through AI-adjusted expectations. People know what polished AI output looks like. They tolerate mediocrity that would have been unacceptable three years ago. They confuse “clean” with “good.”
Synthetic users and AI-generated personas do not solve this. They echo our current assumptions back at us. They are mirrors.
The only way to get a genuinely unfiltered human reaction is from someone who has never been exposed to AI output. And inside the Global North tech-connected knowledge economy, that population is almost gone. Globally, it is still 6.9 billion people. The disconnect between those two facts is the entire point of this piece.
The seed bank analogy
I think about this the way I think about biodiversity.
We maintain seed banks around the world. The logic is simple. If a disease wipes out a crop variety in the field, we need uncontaminated genetic material to start over. The preserved seeds are the control. The field is the experiment.
We are running the largest experiment in cognitive history. AI is reshaping how billions of people think, write, evaluate, and create. And we have no deliberate control group. We have a massive accidental one — 6.9 billion people — but nobody is preserving it. Nobody is studying it. Nobody is using it as a baseline.
The Microsoft data makes the urgency concrete. Adoption grew 1.2 percentage points in six months. The rate is accelerating in every region. The 83.7% will be 80% by next year, then 70%, then the window closes. The accidental control group dissolves into ambient AI exposure: search engines, social media algorithms, AI-embedded keyboards. You do not need to open ChatGPT to have your cognitive environment shaped by generative AI.
What a real control group would do
I am talking about something deliberate, voluntary, and structured. A cohort of people, across demographics, who agree to opt out of AI tools entirely for a defined period. Five years. Ten years. Compensated. Monitored. Consenting adults who understand what they are giving up and why it matters.
What about existing AI-free populations? Amish communities and off-grid villages are self-selected groups with dozens of confounding variables. Interesting anthropologically. Useless for product research.
Here is what a structured cohort could do.
Product testing. Route a portion of your user research through people who have never seen AI-generated interfaces, copy, or interactions. Their confusion reveals your real UX problems. Their delight reveals your real wins. No AI-adjusted expectations.
Homogenization detection. AI smooths edges. It optimizes for the average. It erases quirks. An uncontaminated cohort would function as an early warning system. When they say “this all feels the same,” that is a signal worth millions in market positioning.
Taste calibration. In content, in design, in code, in music. We need a reference population whose aesthetic preferences have not been reshaped by AI-generated content. Their reactions tell us what is actually better versus what is just shinier.
Scientific measurement. We are three years into the generative AI era and we still cannot answer basic questions. Has AI made people more creative or less? Has it improved writing quality or flattened it? Has it accelerated learning or created dependency? Without a control group, these questions are unanswerable.
The closest thing we have
Anthropic ran a randomized controlled trial with junior software engineers. AI-assisted versus non-assisted, working with unfamiliar codebases. The AI-assisted group was actually slower.
That study is interesting. It is also short-term, narrow in scope, and focused on task performance. It tells us nothing about long-term changes in judgment, taste, or creative instinct.
Companies are moving the other direction entirely. They are replacing human feedback with synthetic users because it is faster and cheaper. I understand the economics. I also understand that this creates a closed loop where AI validates its own output.
That is an echo chamber with a budget.
Why this is hard
I am not naive about the obstacles.
Ethics. Access to tools is increasingly seen as a right. Asking someone to voluntarily forgo AI for a decade raises real questions about informed consent and opportunity cost.
Feasibility. AI is embedded in search engines, email clients, phone keyboards, social media algorithms. True isolation is nearly impossible in 2026.
Cost. Compensating a large enough cohort for a meaningful period is expensive.
Self-selection bias. People who volunteer to avoid AI may already be predisposed to distrust technology. That skews the data.
All of these are real. None of them are disqualifying. Partial cohorts still produce directional signal. Shorter commitments still generate useful baselines. Even a few hundred genuine AI-naive participants would be more valuable than thousands of synthetic users.
And the 6.9 billion people who have never used AI tools right now? They exist in the Global South, in rural communities, in populations with limited internet access. They are real people living real lives. The question is whether anyone will study them as a baseline before that window closes — with their full consent, with their fair compensation, with their agency intact.
The math of the closing window
H1 2025: 15.1% global adoption. H2 2025: 16.3%. A 1.2 percentage point jump in six months. Growth in the Global North is faster still: 1.8 points over the same period.
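As a rough illustration, the closing window can be sketched by compounding the half-yearly gain. The 16.3% starting point and 1.2-point gain come from the figures above; the 15% per-period acceleration factor is my assumption, not a number from the Microsoft report.

```python
# Back-of-envelope projection of global generative AI adoption.
# Starting point (16.3%) and half-yearly gain (1.2pp) are from the
# Microsoft AI Economy Institute figures cited above; the 15% growth
# in the gain per period is an assumed acceleration, for illustration only.
def project_adoption(start=16.3, gain=1.2, growth=1.15, halves=8):
    """Return projected adoption % at the end of each half-year period."""
    out, adoption = [], start
    for _ in range(halves):
        gain *= growth                      # assumed accelerating gain
        adoption = min(adoption + gain, 100.0)
        out.append(round(adoption, 1))
    return out

# Eight half-years takes us from H2 2025 to the end of 2029.
print(project_adoption())
```

Under these assumptions, adoption passes 19% by end of 2026 (matching the "80% naive by next year" figure above) and reaches roughly a third of humanity by 2029, leaving the AI-naive share near 65% and falling.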
DeepSeek, the open-source AI platform, is surging across Africa and the Global South. It removed cost barriers entirely. Free access. Open license. Adoption in parts of Africa is estimated at 2 to 4 times the rate of other previously low-adoption regions.
The next billion AI users will not come from San Francisco or London. They will come from the populations that currently form the accidental control group. Every new user is one fewer uncontaminated baseline.
This is a problem that gets impossible to solve later.
Where this goes by 2036
The technology S-curve for generative AI is steeper than the internet or smartphones at the same stage. Here is what the data trajectory suggests over the next decade.
Adoption saturates. If we extrapolate from the current diffusion rate, adjusted for accelerating drivers like DeepSeek, multilingual models, and AI embedded into messaging apps and banking platforms, global penetration reaches 70-85% by 2036. Near-saturation in the Global North at 90%+. The Global South at 60-75%. The last true AI-naive cohort shrinks to 10-15% of humanity, mostly offline and remote populations.
The market scales. The generative AI market was valued at roughly $38 billion in 2025. Precedence Research projects it reaching $1.2 trillion by 2035, at a compound annual growth rate of 37%. McKinsey estimates generative AI could add $2.6 to $4.4 trillion annually to the global economy across 63 analyzed use cases. The Wharton Budget Model projects a 1.5% boost to global GDP by 2035, with the strongest productivity acceleration in the early 2030s.
AI becomes infrastructure. By 2028-2030, an estimated 30-50% of enterprise software will include autonomous agents. By 2036, most knowledge work involves human-agent teams. Models go multimodal by default — text, image, video, voice, code — and shrink to run locally on phones and edge devices. Generative AI stops being a category. It becomes the operating layer.
The control group disappears. This is the part that matters for this argument. Every year of the next decade reduces the pool of genuinely AI-naive humans. Passive exposure alone — through AI-generated search results, social feeds, auto-complete, and recommendation engines — contaminates even people who never open a chatbot. By 2036, the concept of “untouched human taste” becomes theoretical.
The window is 2026-2030 at most. After that, we are measuring drift without a reference point.
The question I am leaving here
I do not have a pilot program. I do not have funding for this. I have a Substack and a set of observations from running content operations in an AI-saturated environment.
Here is what I want to know.
Would you volunteer? If someone offered you fair compensation to stay AI-free for five years, would you do it? Would you give up ChatGPT, Claude, Midjourney, Copilot, and every AI-assisted tool in your workflow?
Or is it already too late for you?
This is pro-measurement. This is pro-knowing-what-we-are-actually-doing. I say that as someone who has built a business on AI.
If you build products, run teams, or create content for a living, this affects your work. The quality of your feedback loops depends on the quality of the humans in them.
Preserve the seed bank while there are still seeds to collect.
Fleire Castro is the founder of DashoContent and Third Team Ventures. She builds AI-powered content operations for B2B companies and trains marketing teams to integrate AI into real workflows through the Unfireable workshop series. She writes about marketing ops, AI systems, and what running multiple companies across multiple countries actually looks like.
Subscribe. Share this if it made you think. Drop your answer in the comments — would you volunteer for the control group?