While Frozenbyte had a user research process when I joined the company, it lacked ownership and focus, which led to scattered insights and missed opportunities for actionable feedback.
The established user research process was this: once a game was near shipping and almost fully polished, the company would invite players to its offices to play the game and give feedback on it.
The support team would organise and moderate these sessions. They would take the latest build on the day of the test, record the entire session, and inform the game team that the videos were available. The more tests they ran, especially on the same game, the sharper their observations became, and they could flag issues directly with the team immediately after a playtest session ended.
Over the years, the moderators had learned a lot about player perception and thinking through this process. But this learning was unfocused, unstructured, and unappreciated.
Over my tenure at Frozenbyte, I improved the process in multiple ways.
I realised that the biggest issue keeping us from getting more out of our playtests was the lack of clear research questions. The moderators and I would sometimes anticipate one, but we often defaulted to “we’ll log everything we see”. Since a research question was never required, the teams never thought to pose one, assuming they would “know the issues when they saw them”.
Another issue was the lack of interaction between designers and playtest moderators. The moderators were removed from the design decisions and the questions the designers were grappling with, and they lacked the playtesting training and insight to ask the designers about these things. The designers, for their part, had never thought deeply about playtests, so they could not offer answers and insights without prompting.
User research had effectively become a checklist item to be crossed off, not something the teams put a lot of conscious effort into.
As a first step, I started analysing larger bundles of tests rather than focusing on each test individually. This let us identify recurring issues and give the designers more condensed reports, saving them time.
At first, I encouraged the moderators to document the issues they identified during the tests and share them with the designers alongside the recording of the full session.
However, due to time pressure, optimism, and a lack of clear boundaries of responsibility, the designers soon started treating the reports the moderators handed them as sufficient. After all, the issues were obvious enough for everybody to see, so they must be the most important ones. But because the moderators’ main task was player support and they barely interacted with the game teams, they could never understand design intent or even guess what kind of feedback the designers were looking for.
So, working together with management, designers, artists, writers, and developers, I set out to understand what people wanted to know about our users at any given time. I ran various trials and initiatives to explore what the teams needed and to showcase different ways of bringing player insights into the game development process at Frozenbyte.
These trials and initiatives are explored in the next section.
The main process improvement in this phase was getting the designers into the habit of documenting their research questions for each test.
These initiatives had several company-wide benefits.
Long-term impact
By the time I left the company, teams had the tools and processes in place to conduct focused, efficient user tests, with the capability to turn around actionable insights within hours.