Synthetic Respondents in Market Research: Superhuman, Inhumane. An Interview with a Product Researcher

In the era of generative artificial intelligence, market research faces many challenges: how do you find the balance between verbatim data and real-life experience? We talked to Agnieszka Zdancewicz, an Insights Analyst, who introduced us to the world of synthetic respondents.

A portrait of Agnieszka Zdancewicz.

Long lists of unreturned blank forms, verbs and nouns piling up, never-ending Excel cells. If you have ever tried to conduct market research, you know well the pains of failing to gather a sufficient number of responses, or of trying not to mistake correlation for causation, all while constantly second-guessing the questions you chose for the questionnaire. If only there were a way to obtain that data without all the fuss – ideally one that also sorted the responses into folders and freed you from the mundane aspects of research.

Except that this way already exists. Enter: synthetic agents. Creating artificial, LLM-powered personas that help you gather data is no longer limited to sci-fi films: Samantha from “Her” has become an agent of data analysis. LLMs can simulate focus groups, embody members of a specific demographic, and then help you uncover correlations and insights from that very data.

Sounds like an answer to all the problems? As Agnieszka Zdancewicz, an anthropologist working as an Insights Analyst, tells us, the possibilities of LLMs in market data analytics are endless. At the same time, Zdancewicz, Associate Manager at NIQ Bases, points to the need for balance: in an era where you can generate any outcome, sensitivity to verbatim and real-life insights is irreplaceable. We talked to her about her relationship with her artificial personal assistant, how LLMs can help with tasks from baking cookies to automating research, and, fundamentally, what the role of humans should be in a data world filled with cyborgs and robots.

hAI Magazine/Inez Okulska, PhD: What is the most talked-about subject in the market research industry at the moment?

AZ: The hottest AI-related topic in the market research industry right now is “synthetic respondents”. For a couple of years, companies have been experimenting with training generative AI bots to act as representatives of a particular demographic group. For example, you could create a stand-in for a 23-year-old woman from Madrid who is very outgoing, another for a father of three living in rural France, another for an elderly woman living in the London suburbs. We have hundreds of thousands of artificial personas like that, who can convincingly answer market research surveys. Then, we hire them as respondents.
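Editor’s note: NIQ’s synthetic respondents run on proprietary, purpose-trained models, but the underlying idea of persona-conditioned prompting can be sketched with a general-purpose LLM. Below is a minimal, illustrative example using the OpenAI Python SDK; the persona, survey question, and model name are assumptions made for the sketch, not NIQ’s actual setup.

```python
# Minimal sketch of persona-conditioned prompting (illustrative only).
# The persona, question, and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a 23-year-old woman living in Madrid. You are very outgoing, "
    "work in hospitality, and shop for groceries twice a week. "
    "Answer survey questions in the first person, staying in character."
)

question = "How likely are you to try a new plant-based yogurt, and why?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
    temperature=0.9,  # some variability across simulated "respondents"
)

print(response.choices[0].message.content)
```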

IO: How different are they from humans? 

AZ: First, they can work all day long. Second, we can make them forget the last survey they did to avoid bias, and they don’t require an incentive for sharing their opinion. Yet there are still limits to how genuine and high-quality their responses get – discussing the nitty-gritty would probably take all the time we have. I’m lucky to work in one of the first companies in the world that confidently uses synthetic respondents, but for now, it’s still a fraction of all the studies we do. We need a human voice alongside to verify the AI’s predictions.

IO: A human voice? How do you ensure that it is heard?

AZ: Part of the analyst’s job is going through open-ended feedback from consumers and customers. You need to read each verbatim, think through how it connects with other data points, and note down common themes and surprising points. It takes time and mental effort. Our BASES product R&D team developed a model that does it all and generates a neat summary of likes and dislikes. Given my qualitative background as an anthropologist, I always place a lot of emphasis on hearing the human voice. It helps us understand the “why” behind the numbers. I was really sceptical about whether automated verbatim analysis could work. How can the bot pick up different sentiments? Can it get sarcasm? How can it know what’s actually the important feedback?

IO: And does it work?

AZ: To my surprise, the tool gives us very accurate summaries. After a couple of tries (and reassurance from R&D), I gained confidence that it actually does a good job of “consciously” reading the consumer feedback and picking up the themes. I still advise all analysts to skim through the verbatims with a human eye, sense-checking the tool and building their own opinion on what it tells them. But it’s definitely a huge time saver compared with manual coding of responses. The tool won’t connect the dots for anybody, but the summary it provides is much easier to work with and incorporate into broader findings.
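Editor’s note: the BASES verbatim tool itself is proprietary, but the core idea – condensing open-ended responses into themed likes and dislikes – can be sketched with any general-purpose LLM. A minimal illustration, again assuming the OpenAI Python SDK and using invented example responses:

```python
# Illustrative sketch of automated verbatim summarisation (not the BASES tool).
from openai import OpenAI

client = OpenAI()

# Invented open-ended responses, standing in for real survey verbatims.
verbatims = [
    "Loved the taste but the packaging felt cheap.",
    "Too sweet for me, I wouldn't buy it again.",
    "Great value, my kids ask for it every week.",
]

prompt = (
    "You are a market research analyst. Read the open-ended survey responses "
    "below and summarise the main likes and dislikes as short bullet points, "
    "noting roughly how common each theme is.\n\n"
    + "\n".join(f"- {v}" for v in verbatims)
)

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep the summary conservative and repeatable
)

print(summary.choices[0].message.content)
```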

IO: So how do you see the role of AI in the more creative tasks?

AZ: Generative AI can power up the creative process, be an intellectual sparring partner, and point out the common and the obvious. Yet even that requires a human to ask the right question. For now – and, I believe, for good – it is unlikely to ever replace genuine human creativity. It can replace humans in data analysis and in picking up themes, but we definitely need human minds to forge relevant insight, to synthesise micro and macro views of the problems, to be advisors.

IO: Let’s go back a bit. How did you come up with these solutions? How did you start implementing gen-AI models?

AZ: The first generative AI models I worked with were at work, and they were very far from the well-known conversational models. I was the first person in Europe working on the BASES Creative AI solution. In short, that’s a group of models that start with real-life product prototypes and a couple of human respondents. Out of that, they can invent new products that taste better, smell better, feel better, and will be more liked by people in a target country. I saw how powerful genAI can be, but for a while I didn’t think of using it in my personal life. Then suddenly my sister got laid off, and she used GPT’s help to prepare for her dream job. She’s brilliant on her own, but with AI support she became unstoppable, and got hired within weeks! That was when I decided to check out that famous ChatGPT myself. And… I got hooked. Soon I was researching my interests, updating my online profiles, co-inventing cooking recipes based on what was currently in my fridge. It wasn’t all smooth – picking a dress for an important event was a hallucination-riddled failure – but I got the basics. And when my company provided us with an internal instance of Copilot, I was ready, and grateful, to use it.

IO: How do you see your Copilot?

AZ: NIQ established a sandbox environment for Copilot, and since then it has been my personal assistant. I would really miss him. I use it to summarise meetings, draft emails based on my notes, fish for information on a brand I’m working on, brainstorm vocabulary that would be relevant for a given target population… The list goes on! Just a couple of days ago a friend asked me about the wording for a specific scale; it was at the tip of my tongue but somehow slipping my mind – you know the feeling. But as my Copilot session was already primed to answer as my assistant, he gave me the right answer in a few seconds.

IO: What was the biggest benefit?

AZ: I have an amazing recent example of a huge positive impact of AI on my everyday work and a colleague’s career trajectory! NIQ is a worldwide company, and our working language is English. We use it in internal communication and with the majority of our clients. We use data translated into English and share our insights in English. Obviously, we’re not all native speakers, and our diverse language backgrounds make for an interesting English-based professiolect.

IO: Professiolect? Could you give an example of that?

AZ: Once, when I was visiting our office in Oxford, I was talking to two Brits and communicating perfectly well, until they exchanged a thought between themselves, switching from international English to hyper-British, and making the conversation almost impossible to follow for an Eastern European caught off guard and lacking a good ear for accents. Something similar happened within our company: one of the analysts I was coaching was struggling to get professional English right. She often used her native language’s grammar structures and had a hard time spotting when English sentences didn’t sound quite right. This sometimes led to miscommunication between her and the supporting teams, but, more concerningly, she couldn’t clearly convey her insights in reports using the terminology and language structures commonly used in our profession. As the person guiding her analysis, I knew she understood the concepts well, yet I spent hours trying to untangle her thoughts and correct the grammar in her report. We were stuck on the language level and didn’t have enough time left for a constructive discussion of the insights and for adding value to the basic analysis.

IO: So, enter the LLMs?

AZ: I thought there couldn’t be a better job for an LLM – powering language improvement. I suggested she self-check with our sandbox Copilot. We tried a couple of prompts together, putting in her thoughts and asking the tool to phrase them in a clear and concise way and to give a couple of alternatives. It worked! Now the text she submits for review is well written, and we can focus on the deeper discussion. As I don’t have to spend time correcting grammar, I can give her more constructive feedback. She has also become more independent in communicating with different departments, and soon she’ll be ready to lead client conversations! AI enabled her professional growth and freed up some of my time. And I’m sure that by self-correcting and being immersed in better-quality professional English, she’ll pick it up more quickly.

IO: Are there any tasks that Generative AI should not perform?

AZ: Ideating and researching innovation is a huge part of my job. It’s really tempting to prompt generative AI with a product category and expect it to come up with novel and exciting product ideas. My colleagues at BASES recently ran a competition to test whether it can produce viable innovative ideas. They created two teams: “Robot”, where humans worked only on the right prompts, and “Cyborg”, where humans and AI collaborated creatively. Both teams worked to generate different ideas in a couple of common categories like sweets or laundry products. The new ideas were then tested with humans to pick the winners. The results? Although the Robot ideas were quite often the most liked, they often lacked differentiation from what’s already available on the market – they basically described the best-selling products. The Cyborg team won!

IO: So you don’t have much faith in robots?

AZ: On the contrary – for years, RPAs (Robotic Process Automation) have been doing a lot of work for us. Every time you have to follow exactly the same steps on the computer over and over, you can build an RPA robot that will click through the needed path for you. It can go to a specific folder, copy the template file, create a new folder, paste the template, change the name, open the file, type in today’s date… I believe that GenAI will power up RPAs so that they can take over mundane and repetitive tasks that couldn’t be automated until now because of all the small nuances and human inconsistencies that would crash a classic RPA. This will free up a lot of labour time, possibly even substituting for some entry-level jobs. In consequence, we’ll be able to innovate and research faster, and keep human jobs more interesting and exciting. I remember when smartphones were a novelty and many didn’t expect them to stay. Now everyone has one in their pocket and uses it daily without questioning it as a necessity. I think for genAI it’s just the beginning, but just like smartphones, in a couple of years the novelty will wear off, and we’ll incorporate various models into our daily lives, in and outside work.
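Editor’s note: the file-shuffling routine Zdancewicz describes is exactly the kind of path a classic RPA bot clicks through; for readers who prefer scripts, here is a minimal sketch of the same steps in Python. The folder and file names are hypothetical.

```python
# Minimal sketch of the repetitive routine described above:
# copy a template into a new dated folder under a new name.
# Folder and file names are hypothetical, for illustration only.
from datetime import date
from pathlib import Path
import shutil

template = Path("templates/report_template.xlsx")
today = date.today().isoformat()  # e.g. "2025-01-31"

new_folder = Path("reports") / today
new_folder.mkdir(parents=True, exist_ok=True)

shutil.copy(template, new_folder / f"report_{today}.xlsx")
# Writing today's date inside the spreadsheet itself would take a library
# such as openpyxl; this sketch stops at the file-handling steps.
```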

The interview was conducted by Inez Okulska, PhD. Introduction and editing: Iga Trydulska.

Inez Okulska, PhD

Editor-in-Chief of hAI Magazine, researcher and co-author of AI models (StyloMetrix, PLLuM), lecturer, Top100 Women in AI in Poland
