The Israeli startup has raised $5.5M for its platform that uses “statistical AI” to generate synthetic data that it says is as good as the real thing.
Surveys have been used to gain insights into populations, products and public opinion since time immemorial. And while methodologies might have changed through the millennia, one thing has remained constant: the need for people, lots of people.
But what if you can’t find enough people to build a big enough sample group to generate meaningful results? Or what if you could potentially find enough people, but budget constraints limit the number of people you can source and interview?
This is where Fairgen wants to help. The Israeli startup today launches a platform that uses “statistical AI” to generate synthetic data that it says is as good as the real thing. The company is also announcing a fresh $5.5 million fundraise from Maverick Ventures Israel, The Creator Fund, Tal Ventures, Ignia, and a handful of angel investors, taking its total cash raised since inception to $8 million.
Data might be the lifeblood of AI, but it has also been the cornerstone of market research since forever. So when the two worlds collide, as they do in Fairgen’s world, the need for quality data becomes a little bit more pronounced.
Founded in Tel Aviv, Israel, in 2021, Fairgen was previously focused on tackling bias in AI. But in late 2022, the company pivoted to a new product, Fairboost, which it is now launching out of beta.
Fairboost promises to “boost” a smaller dataset by up to three times, enabling more granular insights into niches that may otherwise be too difficult or expensive to reach. Companies train a deep machine learning model for each dataset they upload to the Fairgen platform, with statistical AI learning patterns across the different survey segments.
The concept of “synthetic data” — data created artificially rather than from real-world events — isn’t novel. Its roots go back to the early days of computing, when it was used to test software and algorithms and to simulate processes. But synthetic data as we understand it today has taken on a life of its own, particularly with the advent of machine learning, where it is increasingly used to train models. Artificially generated data that contains no sensitive information can address both data scarcity and data privacy concerns.
Fairgen is the latest startup to put synthetic data to the test, and it has market research as its primary target. It’s worth noting that Fairgen doesn’t produce data out of thin air, or throw millions of historical surveys into an AI-powered melting pot — market researchers need to run a survey for a small sample of their target market, and from that, Fairgen establishes patterns to expand the sample. The company says it can guarantee at least a two-fold boost on the original sample, but on average, it can achieve a three-fold boost.
In this way, Fairgen might be able to establish that someone in a particular age bracket and/or income level is more inclined to answer a question in a certain way. Or it can combine any number of data points to extrapolate from the original dataset. It’s basically about generating what Fairgen co-founder and CEO Samuel Cohen says are “stronger, more robust segments of data, with a lower margin of error.”
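Fairgen hasn’t published the internals of its models, but the general idea can be sketched in a few lines of Python. Everything below is invented for illustration: toy column names, random stand-in data, and a naive per-cell resampler in place of the deep model the company describes.

```python
# A toy sketch of segment boosting, not Fairgen's actual method.
# Column names and data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for a real 300-person survey: demographics plus one
# question answered on a 1-5 scale.
real = pd.DataFrame({
    "age_bracket": rng.choice(["18-24", "25-34", "35-44"], size=300),
    "income": rng.choice(["low", "mid", "high"], size=300),
    "q1_answer": rng.integers(1, 6, size=300),
})

# Estimate how answers are distributed within each demographic cell.
cond = (real.groupby(["age_bracket", "income"])["q1_answer"]
            .value_counts(normalize=True))

def synthesize(age_bracket: str, income: str, n: int) -> pd.DataFrame:
    """Draw n synthetic respondents for one demographic segment."""
    probs = cond.loc[(age_bracket, income)]
    answers = rng.choice(probs.index.to_numpy(), size=n, p=probs.to_numpy())
    return pd.DataFrame({
        "age_bracket": age_bracket,
        "income": income,
        "q1_answer": answers,
    })

# Boost one thin segment to 60 synthetic respondents.
boosted = synthesize("18-24", "low", n=60)
print(boosted["q1_answer"].value_counts(normalize=True))
```

The crucial difference is that a naive resampler like this can only echo the handful of real answers in one cell; a model that learns across adjacent segments, which is what Fairgen describes, can borrow statistical strength from the rest of the survey.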
“The main realisation was that people are becoming increasingly diverse — brands need to adapt to that, and they need to understand their customer segments,” Cohen explained to TechCrunch. “Segments are very different — Gen Zs think differently from older people. And in order to be able to have this market understanding at the segment level, it costs a lot of money, takes a lot of time and operational resources. And that’s where I realized the pain point was. We knew that synthetic data had a role to play there.”
An obvious criticism — one the company concedes it has had to contend with — is that this all sounds like a massive shortcut around having to go out into the field, interview real people and collect real opinions.
Surely any under-represented group should be concerned that their real voices are being replaced by, well, fake voices?
“Every single customer we talked to in the research space has huge blind spots — totally hard-to-reach audiences,” Fairgen’s head of growth, Fernando Zatz, told TechCrunch. “They actually don’t sell projects because there are not enough people available, especially in an increasingly diverse world where you have a lot of market segmentation. Sometimes they cannot go into specific countries; they cannot go into specific demographics, so they actually lose on projects because they cannot reach their quotas. They have a minimum number [of respondents], and if they don’t reach that number, they don’t sell the insights.”
Fairgen isn’t the only company applying generative AI to the field of market research. Qualtrics last year said it was investing $500 million over four years to bring generative AI to its platform, though with a substantive focus on qualitative research. Either way, that investment is further evidence that synthetic data is here, and here to stay.
But validating results will play an important part in convincing people that this is the real deal and not some cost-cutting measure that will produce suboptimal results. Fairgen does this by comparing a “real” sample boost with a “synthetic” sample boost — it takes a small sample of the data set, extrapolates it, and puts it side-by-side with the real thing.
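In outline, that check resembles a standard holdout test. Here’s a rough, hypothetical sketch: withhold part of the real segment, boost the rest, and measure how far the synthetic answers drift from the withheld real ones (the distance metric here is one plausible choice, not necessarily Fairgen’s).

```python
# Hypothetical validation sketch; stand-in data, not Fairgen's pipeline.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

# Pretend these are 90 real answers (1-5 scale) from one segment.
real_segment = rng.integers(1, 6, size=90)
holdout, seed = real_segment[:60], real_segment[60:]

# Naive "boost": resample the 30-person seed to three times its size.
synthetic = rng.choice(seed, size=len(seed) * 3)

# Smaller distance = the boosted sample tracks the withheld real data.
print(wasserstein_distance(holdout, synthetic))
```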
“With every single customer we sign up, we do this exact same kind of test,” Cohen said.
Cohen has an MSc in statistical science from the University of Oxford and a PhD in machine learning from London’s UCL, part of which involved a nine-month stint as a research scientist at Meta.
One of the company’s co-founders is chairman Benny Schnaider, who was previously in the enterprise software space, with four exits to his name: Ravello to Oracle for a reported $500 million in 2016; Qumranet to Red Hat for $107 million in 2008; P-Cube to Cisco for $200 million in 2004; and Pentacom to Cisco for $118 million in 2000.
And then there’s Emmanuel Candès, professor of statistics and electrical engineering at Stanford University, who serves as Fairgen’s lead scientific advisor.
This business and mathematical backbone is a major selling point for a company trying to convince the world that fake data can be every bit as good as real data, if applied correctly. It is also how the company is able to clearly explain the thresholds and limitations of its technology — how big the samples need to be to achieve the optimum boosts.
According to Cohen, a survey ideally needs at least 300 real respondents, and from that base Fairboost can boost any segment that constitutes no more than 15% of the broader survey.
“Below 15%, we can guarantee an average 3x boost after validating it with hundreds of parallel tests,” Cohen said. “Statistically, the gains are less dramatic above 15%. The data already presents good confidence levels, and our synthetic respondents can only potentially match them or bring a marginal uplift. Business-wise, there is also no pain point above 15% — brands can already take learnings from these groups; they are only stuck at the niche level.”
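Worked through for a minimum-size survey, those numbers imply the following (a back-of-the-envelope illustration, not Fairgen’s published math):

```python
# The thresholds Cohen describes, applied to a 300-respondent survey.
SURVEY_N = 300
SEGMENT_CAP = 0.15  # segments above 15% see diminishing returns
AVG_BOOST = 3       # average boost cited below the cap

max_segment = int(SURVEY_N * SEGMENT_CAP)
print(f"Largest boostable segment: {max_segment} respondents")   # 45
print(f"Effective size after boost: {max_segment * AVG_BOOST}")  # 135
```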
It’s worth noting that Fairgen doesn’t use large language models (LLMs), and its platform doesn’t generate “plain English” responses à la ChatGPT. The reason for this is that an LLM will use learnings from myriad other data sources outside the parameters of the study, which increases the chances of introducing bias that is incompatible with quantitative research.
Fairgen is all about statistical models and tabular data, and its training relies solely on the data contained within the uploaded dataset. That effectively allows market researchers to generate new and synthetic respondents by extrapolating from adjacent segments in the survey.
“We don’t use any LLMs for a very simple reason, which is that if we were to pre-train on a lot of [other] surveys, it would just convey misinformation,” Cohen said. “Because you’d have cases where it’s learned something in another survey, and we don’t want that. It’s all about reliability.”
In terms of business model, Fairgen is sold as SaaS: companies upload their surveys in a structured format (.CSV or .SAV) to Fairgen’s cloud-based platform. According to Cohen, it takes up to 20 minutes to train the model on the survey data, depending on the number of questions. The user then selects a “segment” (a subset of respondents who share certain characteristics), e.g., “Gen Z working in industry x,” and Fairgen delivers a new file structured identically to the original training file, with the exact same questions, just new rows.
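From the customer’s side, that flow might look something like the sketch below. The file handling is ordinary pandas; the generation step is a placeholder, since the real modeling happens inside Fairgen’s platform and its API hasn’t been made public.

```python
# Hypothetical customer workflow; the boost itself is a placeholder.
import pandas as pd

survey = pd.read_csv("survey.csv")  # the structured upload (.CSV/.SAV)

# Select a segment, e.g. Gen Z respondents working in industry x.
segment = survey[(survey["generation"] == "gen_z") &
                 (survey["industry"] == "x")]

# Placeholder for the platform's output: same columns, same questions,
# just new rows appended to the segment.
new_rows = segment.sample(n=len(segment) * 2, replace=True)
boosted = pd.concat([segment, new_rows], ignore_index=True)
boosted.to_csv("boosted_segment.csv", index=False)
```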
Fairgen is being used by French polling and market research firms BVA and IFOP, which have already integrated the startup’s tech into their services. IFOP, which is a little like Gallup in the U.S., is using Fairgen for polling in the European elections, though Cohen thinks it might end up getting used for the U.S. elections later this year, too.
“IFOP are basically our stamp of approval, because they have been around for like 100 years,” Cohen said. “They validated the technology and were our original design partner. We’re also testing or already integrating with some of the largest market research companies in the world, which I’m not allowed to talk about yet.”