At the 2024 Research Association of New Zealand Annual Conference, we asked an uncomfortable question: what happens when the research briefs stop coming?
Fast forward to 2026 and it’s no longer hypothetical. As organizations embrace practical use cases for tools like Copilot and Claude, consulting revenue has ebbed. Questions that once took $20,000+ in custom research are now answered in seconds for a nominal monthly subscription.
That’s not to say the clever haven’t survived, even prospered. But the reality is that human research hours are now competing against always-on, know-it-all machines that sit on everyone’s devices, offering ubiquitous, on-demand access to the world’s accrued knowledge.
Global AI capex is estimated to reach $571 billion in 2026, rising to $1.3 trillion by 2030 (1). S&P Global indicates that ~160 key players in AI infrastructure will generate more than $250 billion in combined revenue in 2025 (2). That’s an industry that barely existed five years ago now capturing almost double the total revenue ESOMAR estimates the insights industry (tech and non-tech) generated in 2024 (3).
Doesn’t feel like a fair fight. How did we get here so fast? Where does that leave us in the insights industry?
A new beginning
Hope is not lost. In fact, on the contrary, we believe market research is going to become more important than ever - a critical differentiator and source of competitive advantage that is highly valued and closely guarded by organizations.
That might sound like wishful thinking, nostalgia even, but there is a compelling logic. Our clients’ evolving needs revealed this shift, and our ambitions and aspirations for 2026 center on the production and use of human data (more on this a little later).
For decades, the research industry called itself “market research” or “market insights”. We ran surveys. We conducted groups. We wrote reports. We made recommendations. Yet these processes don’t pass muster anymore. They belong to a world where organizations moved a lot slower and where decisions were made by humans who struggled to process vast amounts of information and coordinate insights from multiple sources. That is demonstrably not the case anymore due to the wonders of LLMs.
Organizations have started rolling out Copilot, Claude and a myriad of other AI systems across their workforces. It is still early days with 78% of companies saying they have integrated AI into at least one function, but only 27% having achieved full enterprise-wide deployment (4). These systems largely draw on publicly available sources for their ‘intelligence’.
This has profound implications.
Having vast amounts of information conveniently at our disposal through intelligent multi-modal interfaces that can answer questions in natural language is great. But we all have access to pretty much the same information as the default. That sameness leads to homogenous thinking and a narrowing of diversity as organizations apply AI to innovation, product development, brand positioning and other tasks that currently rely on primary research and human analysts.
Marketing Myopia – what it is we do in research
Theodore Levitt wrote an article in HBR back in 1960 (5). I first read this article (back in the days when we read long-form content) in the late 90s and it was a revelation. At the risk of doing a disservice to a great read, Levitt warned that companies fall behind when they cling to the thing they produce, not the benefit they deliver. In other words, they define the value they produce too narrowly and, in doing so, risk obsolescence through innovation.
The classic (somewhat clichéd) examples include railroads being in the railroad business, Kodak being in the film business, and Blockbuster being in the video rental business. In those three examples, the disruptors were cars, digitization, and streaming. For us in market research, our ‘thing’ is qual and quant research and our disruptor is ubiquitous, cost-free insights from AI.
Our industry risks the same trap. If we think we produce surveys or qualitative interviews, even reports, data analysis or dashboards, we’re putting ourselves at great risk of obsolescence.
A useful complement to Marketing Myopia is the Resource-Based View (RBV) of the firm. Where Levitt urges organizations to anchor their focus on customer needs, RBV holds that competitive advantage comes from unique internal capabilities. For research agencies, that unique capability is our power to understand the human condition and how it translates into behavior. So where does this leave us market researchers in 2026, as a career, a profession and an industry?
The AI transformation – it really is happening
Every organization is racing to adopt AI decision engines. These models are hungry. They absorb data from the open web: public sites, social streams, news, forums, product reviews. But this data is available to everyone. Ask a question of any public model today and your competitor can ask the same thing tomorrow. When everyone draws from the same well, no one gets ahead. This doesn’t make it less useful or powerful. It doesn’t make it a less compelling instant substitute for traditional research which takes time and costs money, in the age of instant gratification. It does mean it’s ubiquitous and therefore only brings organizations to parity, also known as ‘the great averaging’ (6).
Returning to Marketing Myopia, to survive and prosper we need to rethink what we do and how we do it. That’s why we believe the future for our industry is private human data - datasets that carry a company’s unique understanding of people, their motivations and frustrations, their emotional logic, and the deeper patterns that influence choice.
It’s this material that fuels an AI engine in a way that can’t be copied and is a vital, valuable and real competitive advantage for the organizations that have it.
These datasets, created from real humans and structured to unpack their modes of thought, emotionality, aspirations, desires and beliefs, are incredibly valuable to AI. They go beyond description, delivering a far more fecund and powerful analytical base from which AI can generate responses. And this is where HumanListening has been building quietly, and steadily, for years.
The head start: Qualitative AI that understands people
Before large language models (LLMs) went mainstream, we were already working on EVE (Evolved Verbatim Engine), our qualitative AI moderator designed to converse with humans in natural, reflective ways. We set out to create a genuine qualitative moderator built on the craft of nearly a century of method: laddering, projection, storytelling, narrative exploration, and the art of getting past surface answers. The goal was simple: help people explain the real reasons behind their choices, including the needs they don’t articulate, the motivations they don’t see, and the emotions that guide them.
When modern LLMs arrived, we embraced the technology, which solved some of the long-standing challenges of NLP. EVE became stronger, faster, and more adaptable. The combination of human methods and machine scale meant we could understand people at a depth and volume never possible before. We then adopted machine learning and LLMs to help us unpack the meaning, applying advanced data science methods that you see today as Impact Maps. They reveal cause and effect in complex human systems.
The combination of how we ask and interpret has led us to become leaders in the field of ‘human data at scale’. Our ‘thing’ is creating datasets rich enough to reveal patterns, subtle enough to uncover unmet needs, and varied enough to give AI systems real texture. This is the rocket fuel for decision-making models.
At this point I want to emphasize that we do not use any client data to train the LLMs. Your data is yours, not ours. Our technology and techniques do not rely on model training, and we don’t use synthetic data (yuk).
Introducing ‘fecundity’: A word that sounds a bit naughty, but is your secret sauce
Fecund (/ˈfɛk(ə)nd, ˈfiːk(ə)nd/): producing or capable of producing an abundance of offspring or new growth; highly fertile. “A lush and fecund garden”.
Companies often talk about differentiation, but the coming years will make the source of competitive advantage very clear:
- AI systems will be universal
- Algorithms will be commoditized
- Access to public data will be equal
What will set organizations apart is the quality of the private data they feed into their models. This is why ‘fecundity’ is our word of the year and where HumanListening is shining. We are not in the survey business. We are not in the qualitative research business. We are in the business of producing the world’s most ‘fecund’ private human datasets - the kind that let companies personalize, predict, and strategize with confidence. These datasets don’t just help AI make better decisions; they help organizations build strategies and content that still feel human in a world anxious about the rise of machines. They allow leaders to stay grounded in real emotions, real stories, and real human needs, not generic summaries scraped from the internet.
Two pillars of the new human data model
1. How we gather it
EVE is already gathering this high-fecundity private data at scale. We are pressing further ahead with EVE Qualitative Pro, coming out soon. It’s an upgrade that embeds advanced qualitative techniques directly into qual AI, giving researchers and agencies a deeper toolkit to explore human motivations with nuance and structure, without sacrificing scale or speed. It means thousands of people can share their stories while still being guided with the skill of an expert moderator.
2. How we analyze it
Once the data is collected, we apply an ensemble of analytical methods like emotional scoring, causal discovery, driver modelling, time series analysis, and thematic clustering. These techniques reveal the psychological and emotional logic inside the data. And because this is qualitative data at scale, you can zoom out to see the big picture or drill down to an individual story. A “25%” in a chart is no longer an abstract number - it’s hundreds of real experiences you can read, compare, and pull insights from.
This is where innovation lives: in the variation between individual experiences. That diversity is what fuels AI with fresh ideas and untapped opportunities.
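To make the driver-modelling idea concrete, here is a loose, purely illustrative sketch. The themes, responses and scores below are invented, and this naive with/without comparison stands in for the far richer statistical and causal methods described above:

```python
from statistics import mean

# Hypothetical coded responses: the themes each person mentioned,
# paired with a satisfaction score (1-10). Purely illustrative data.
responses = [
    ({"price", "trust"}, 8),
    ({"price"}, 4),
    ({"trust", "service"}, 9),
    ({"service"}, 7),
    ({"price", "service"}, 5),
    ({"trust"}, 9),
]

def theme_impact(theme):
    """Naive 'driver' estimate: mean score among responses that
    mention the theme minus mean score among those that don't."""
    with_theme = [score for themes, score in responses if theme in themes]
    without_theme = [score for themes, score in responses if theme not in themes]
    return mean(with_theme) - mean(without_theme)

# Rank themes from strongest positive driver to strongest negative.
themes = ["price", "trust", "service"]
drivers = sorted(themes, key=theme_impact, reverse=True)
```

The payoff of qualitative data at scale is exactly what the paragraph above describes: each number in a ranking like this remains traceable back to the individual stories that produced it.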
What this means for brands and agencies
For brands: You gain access to private datasets that your competitors can’t replicate. These datasets become an asset - material that strengthens internal AI systems and guides strategy with insight grounded in real human thinking. You can use this data inside our platform or integrate it into tools like Copilot or Claude.
For research agencies: You can apply your own methods through EVE, build proprietary datasets for clients, and strengthen your role by offering intellectual property that goes beyond project work. It becomes part of your value - unique data, unique understanding, unique leverage.
Where we’re heading
2026 marks a turning point for the entire insights industry. We’re moving from an era of gathering answers to an era of building intelligence. From reports to datasets. From public information to private human understanding.
This is the opposite of synthetic data. Our role as researchers is to help shape a future grounded in human needs and positive human experiences. We know many of our clients share this belief, and we look forward to shaping this exciting future together.
References:
(1) AI infrastructure spending boom: a path towards AGI or speculative bubble? | IEEE ComSoc Technology Blog: https://techblog.comsoc.org/2025/12/01/ai-infrastructure-spending-boom-a-path-towards-agi-or-speculative-bubble/
(2) AI infrastructure: Midyear 2025 update and future technology considerations | S&P Global: https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/ai-infrastructure-midyear-2025-update-and-future-technology-considerations
(3) Global Market Research 2025 | Esomar Reports: https://esomar.org/publications/global-market-research-2025#table-of-content
(4) AI in 2026: How Many Companies Are Really Using It? | Elementor Blog: https://elementor.com/blog/ai-how-many-companies-are-really-using-it/
(5) Marketing Myopia | Harvard Business Review: https://hbr.org/2004/07/marketing-myopia
(6) Generative AI And The ‘Great Averaging’ | Forbes: https://www.forbes.com/sites/ronschmelzer/2024/06/06/generative-ai-and-the-great-averaging/










