
Will AI agents be facilitating user interviews? Think again: this time, without assuming the constraints of human UX Researchers.
Our UX research practices were heavily shaped by the skills, schedules, and limitations of human researchers. What would research look like without those constraints?
To understand that, we need to go back to the original problem that UXR is trying to solve. Let’s forget about current solutions. What’s the job to be done? Is there another way to solve it?
About Zsombor: he is a Senior UX Researcher at SAP working on genAI products and the author of A Knack for Usability Testing, with a background in psychology, machine learning, and cognitive science. Views expressed don’t necessarily reflect those of SAP.
Job-to-be-done 1: Identify user needs
The first and foremost job-to-be-done for user research is to identify user needs that the product team can address by building features.
The primary source of this information used to be user interviews. We talk to people in order to uncover the goals they want to achieve, understand how they are achieving them currently, and identify aspects that could be improved with technology.
This information is not exclusively available through user interviews, though.
More often than not, this information is available on the public internet where said users share and discuss their processes, exchange ideas, complain about products, etc.
The reason why UX Researchers don’t use these public forums as an information source is that they are noisy. First, internet forums and professional chats cover a wide range of topics, only a fraction of which might be relevant to the researcher’s questions. Second, this content comes from a variety of people, most of whom may not be qualified or have the right experience to offer insights relevant to the research question. It makes no sense for a human researcher to read through thousands of pages of content, cross-checking the backgrounds of thousands of commenters, only to find one or two relevant remarks. For a human, it’s much easier to talk to a handful of people from the target audience directly about the research topic.
For AI, however, reading through thousands of discussions, while constantly cross-checking the authors’ background, may not be so big of a stretch.
AI could get all the information about user needs without talking to any users. And it may only take a few minutes.
JTBD 2: Identify usability issues and find fixes
Earlier, Kate Moran and Maria Rosala wrote that current AI tools have a hard time “watching” usability tests, observing user behavior, and uncovering usability issues based on that.
I agree: AI can’t solve the problem the same way humans do. But it doesn’t have to. It can solve it in other ways. So, let’s start from first principles: what use cases are usability tests supposed to solve, after all?
One of the common use cases is deciding between two or three design alternatives. The reason why teams take such a decision to usability testing is that it’s harder to build three product versions and put them up in an A/B test than to conduct a quick usability test with three low-fi prototypes. That is less and less true in a world in which AI does the coding. If AI can implement all three variants and configure an A/B test in a minute, that pretty much eliminates the need for running usability tests to decide between alternatives.
Another usability test use case is when we only have one design/product and we want to learn if there is anything that trips users up.
So let’s give this a minute: from what information do human researchers infer that there is a usability issue?
- First, we have users carry out a certain task.
- Then we observe them, looking eagerly for signs of confusion, hesitation, or errors.
Now, this information (confusion, hesitation, or errors) is not exclusively available on the test participant’s face. It’s also available in the log of user interactions:
- Hesitation shows up as an above-average wait time before the next user interaction, or as excessive mousing.
- User error is obvious from the use of an “Undo” function, a “Go Back” button, or “Delete.”
- Confusion can be inferred from the user deviating off the happy path, resorting to trial and error, going back to the start and beginning all over again, and so on.
These are all behavioral clues of usability issues that are clearly present in logs. The fact that human UX Researchers extract these clues from the user’s facial expressions and metacommunication by empathy doesn’t mean the same clues couldn’t be extracted from other data sources.
It goes without saying that AI is especially good at analyzing logs. In fact, it can observe all users simultaneously.
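To make this concrete, the log-mining idea above can be sketched in a few lines. This is a minimal illustration, not a real analytics pipeline: the event names, the log format, and the 10-second hesitation threshold are all assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative assumptions: a log is a list of (timestamp, event_name) pairs,
# and these event names signal an error-recovery action.
HESITATION_THRESHOLD = timedelta(seconds=10)
ERROR_EVENTS = {"undo", "go_back", "delete"}

def flag_usability_signals(events):
    """Extract behavioral clues of usability issues from a raw event log."""
    signals = []
    for prev, curr in zip(events, events[1:]):
        gap = curr[0] - prev[0]
        # Long pause before the next interaction -> possible hesitation.
        if gap > HESITATION_THRESHOLD:
            signals.append(("hesitation", curr[1], gap.total_seconds()))
        # Undo/back/delete right after an action -> possible user error.
        if curr[1] in ERROR_EVENTS:
            signals.append(("error_recovery", curr[1], gap.total_seconds()))
    return signals

log = [
    (datetime(2024, 1, 1, 12, 0, 0), "open_form"),
    (datetime(2024, 1, 1, 12, 0, 25), "submit"),  # 25s pause before submitting
    (datetime(2024, 1, 1, 12, 0, 27), "undo"),    # immediate undo afterwards
]
print(flag_usability_signals(log))
```

An AI system would run something like this over every user session at once, then cluster the flagged moments to find the screens where hesitation and error recovery pile up.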
In a way, this kind of observation is even better than human usability testing because it’s directly observing real user behavior, as opposed to observing proxy users in an artificial test setting.
Other than log data, of course, there exist plenty of other info sources on usability issues: customer support tickets; feature requests; in-app feedback forms; angry internet rants. All this is a wonderful source for identifying usability issues, assuming someone has the ability to process gigantic amounts of information and pick out the signal from the noise. AI has that ability.
So far, we have only mentioned data that is readily available for mining today. Usability testing tools could tap into other data sources too, say data from wearables on the users’ heart rate, ECG, skin resistance, and blood pressure. Or an eye-tracker. That’s not sci-fi. Some of these devices are already on everybody’s wrists. What separates a usability testing company from this data is simple user consent.
Ok, you could say, we have log data, but
- We still don’t know what tasks users are trying to carry out (because we didn’t give them the task; they chose it for themselves).
- We only see their behavior but not their thought process, which the think-aloud protocol would have revealed in traditional usability sessions.
As for the task and the user’s goal, AI could infer that too. First of all, most products have a well-defined goal, so it’s not a big mystery what the user might be up to. Secondly, if AI can observe the final result (that the user achieved after several failed attempts), we can safely assume that this state was the original goal. And if that wasn’t enough, AI could still just ask the user (via an in-app survey or AI chat) what they were trying to achieve.
As for understanding the thought process, or the user’s mental model: human researchers use this information to zero in on the root cause of the issue. If we know what exactly the user misunderstands, we know what to fix. Without think-aloud data, AI may not be able to access users’ thought processes. But it can compensate with its speed. Perhaps it can’t zero in on the exact root cause, but it can generate 10 hypotheses and quickly test each of them via live A/B tests on the user base. Same outcome: usability issue solved.
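That hypothesis-testing loop could look something like the following minimal sketch. Everything here is hypothetical: the variant names, the hash-based user bucketing, and the completion numbers are invented for illustration, and a real system would run a proper significance test before declaring a winner.

```python
import hashlib

# Hypothetical candidate fixes for one suspected usability issue,
# plus a control arm (names are invented for the example).
VARIANTS = ["control", "fix_tooltip", "fix_label", "fix_layout"]

def assign_variant(user_id: str) -> str:
    """Deterministic hash-based bucketing keeps each user in one variant."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return VARIANTS[h % len(VARIANTS)]

def best_variant(completions: dict) -> str:
    """completions maps variant -> (successes, trials); pick the best rate."""
    return max(completions, key=lambda v: completions[v][0] / completions[v][1])

# Made-up task-completion counts gathered from the live experiment.
observed = {
    "control":     (412, 1000),
    "fix_tooltip": (455, 1000),
    "fix_label":   (501, 1000),
    "fix_layout":  (448, 1000),
}
print(best_variant(observed))
```

The point of the sketch is the shape of the loop, not the statistics: generate several candidate fixes, route live traffic across them, and keep whichever one measurably reduces the error-recovery signals.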
Perhaps AI can’t do things the way humans do, but it doesn’t have to. The only thing that matters is getting the job done.
Would human UX researchers still be needed for judgment and persuasion?
Jakob Nielsen wrote that as AI does more and more of the UX work, humans would still be necessary to make judgments and persuade stakeholders.
I don’t think so. For two reasons:
(1) The best persuasion is showing the data.
(2) The best judgment is the result of a conclusive experiment.
An AI UX Researcher working in the ways outlined above will have both of those: conclusive experiments based on hard data. Compare that to a human researcher’s five interviews and tentative conclusions. Human researchers doing the persuading would be a thing of the past.
Additionally, let’s ask ourselves: Who would need convincing? Human stakeholders? Are you sure? Why are we assuming that there will be flesh and blood Product Managers and Developer teams to be convinced? I think “persuasion” will be reduced to a mere API call between a UXR AI and a Product Manager AI.
That’s not the end of it
The picture I painted above about the future of user research still assumed that the users were humans.
What if the users of software are AI systems?
Overall, I think research wouldn’t be much different in that world. Different user AIs would still have heterogeneous needs, which the UXR AI would need to map and synthesize to identify the best opportunities, given that the Engineering AI still needs to find the best use of its limited resources. It’s just that the whole loop would happen much, much faster, because neither the researcher-engineer nor the user is bogged down in human time.
The really interesting world will be the one in which AI (perhaps AGI) gets so good at developing products and services that it will just build them for itself. So far, we assumed that the user and the developer were distinct entities. We needed user research to mediate between the user and the developer. If an AI system develops tools for itself, product development would look more like inner speech. Or think of an engineer today building tools for themselves: they know their own needs; they can build for them, test on themselves, and iterate. In that world, there will be no need for user research, human or AI, at all.
Call to action
I expect UX research to disappear entirely, eventually. But for now, UX research is still done by humans, and the current generation of UX research tool companies needs your work data to train (fine-tune or evaluate) their AI features and turn a profit. This is our chance to get on the AI train. We have something valuable, and we can rightfully ask for a piece of the action in exchange. Once our human capital (skills and knowledge) is absorbed by a model, it becomes somebody else’s capital. It will stop producing returns for us and start producing returns only for the owner of the AI model. Let’s make sure we become owners in return for our work.
Take care,
Zsombor
More articles on AI from me:
- GenAI is wealth transfer from workers to capital owners
- SAP is not volunteering my data to Figma AI, and I am proud of SAP for that.
- Work Data Is the Next Frontier for GenAI | Towards Data Science
- GenAI has just made usability testing the most valuable research method
- The Lump of Labor Fallacy does not save human work from genAI.
What will UX Research look like in an AI future? was originally published in UX Planet on Medium, where people are continuing the conversation by highlighting and responding to this story.