Research is iterative, but more importantly, it has no true beginning and no real end. It starts with something that already exists - a design, a question, an insight - and never ends, because every answer raises more questions.
So research must be adaptive and flexible to meet the dynamic needs of any design environment, responsive to your users, and endlessly innovative as those needs change.
Iterative Research Flow
Everyone has an idea of what's needed, so before anything can be done, a meeting of the minds - the stakeholders - needs to clarify purpose and goals and lay out a shared vision so that we can ask the right questions. While many types of test planning need to occur, this essential step lays the path to the test artifacts required for any research process.
The research plan defines the processes and details needed to achieve the goals, and sharing it with the team and organization further advances the shared understanding of the value they can expect.
The flow chart on the right shows an aggregate research workflow - what I might end up doing during a research activity. It highlights how non-linearly research proceeds.
Typical research flow
Persona of target user
Personas are a good place to begin. Creating shared representations of your target users is key to creating a shared vision of your product. Design and development decisions are constantly being made by everyone as the product is built out, and if the team isn't on the same page, users will know. Personas help keep everyone aligned - just don't take them too seriously; they aren't real people, just design tools.
Ron is a highly mobile salesperson, always on the go, with different needs than someone working from an office.
Screening for App developer
Good research needs good users, and good users need to be defined - very clearly. Otherwise it's garbage in, garbage out. There are always user definitions, usually too many, so they need to be collected, collated, and agreed to by the team.
When the results come in, acceptance is based on buy-in. If you have well-defined personas, the screener and the buy-in are easy. If you don't, then you need research just to get the personas right, so it's back to the screener.
This screener asks about specific criteria for an API/app developer to ensure the target user is recruited.
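The screening logic can be sketched in code. This is a minimal illustration in Python - the field names and cut-offs below are invented stand-ins, not the actual criteria from this screener.

```python
# Hypothetical screener check: field names and criteria are illustrative,
# not the actual screener used in this study.
def qualifies(response):
    """Return True if a respondent matches the target API/app developer profile."""
    return (
        response.get("role") in {"developer", "architect"}
        and response.get("years_api_experience", 0) >= 2
        and response.get("builds_apps") is True
    )

candidates = [
    {"id": 1, "role": "developer", "years_api_experience": 3, "builds_apps": True},
    {"id": 2, "role": "manager", "years_api_experience": 5, "builds_apps": False},
]
recruits = [c["id"] for c in candidates if qualifies(c)]
print(recruits)  # -> [1]
```

Encoding the criteria this explicitly is one way to force the "very clearly defined" agreement the text calls for - the team has to sign off on each rule.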
To plan what you want a user to test, you need to have it available. If you can visualize it, you can design research for it. This becomes another piece of the shared mindset that keeps everyone in lockstep and moving toward a common goal and vision.
Instead of just lists of nearby accounts, visuals convey account info and location better on a map, providing plenty to talk about with stakeholders and customers.
Idea for account info
Designs are ready, questions have been asked, and users are defined. Now bring it all together in a single test plan - the kind that contains the backstory, tasks, questions, ratings, instructions, and everything both you and the user need to evaluate the designs. These can follow a simple scenario format or be long and detailed.
It depends on what you're testing. A few mockups for a conceptual evaluation call for a simple scenario, but for a usability test of a working product that requires systems, resources, and environments, be prepared to spend time now - or you might get garbage as things go wrong amid all that complexity.
To get results to the team in time to address them, work backwards from the release date and plan what can be done and when it needs to be complete, as shown by the timeline on the left.
Once the test focus has been defined, a task analysis lays out what you want the user to do in the test. The result is a test script - a to-do list for the participant, shown on the left - that gives them what to do (goals, objectives, and information) without specifying how to do it.
Test plan timeline
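The backwards planning behind such a timeline can be sketched in a few lines of Python. The milestone names, durations, and release date here are assumptions for illustration only, not the actual project schedule.

```python
# A minimal sketch of backwards planning from a release date.
# Milestones and durations are hypothetical, not the real schedule.
from datetime import date, timedelta

release = date(2024, 6, 1)  # hypothetical release date

# Each milestone: (name, days needed), listed from latest to earliest.
milestones = [
    ("Report findings", 5),
    ("Analyze data", 7),
    ("Run sessions", 5),
    ("Recruit participants", 10),
    ("Finalize test plan", 5),
]

deadline = release
schedule = []
for name, days in milestones:
    start = deadline - timedelta(days=days)
    schedule.append((name, start, deadline))
    deadline = start  # the previous milestone must finish by this date

# Print in chronological order, earliest milestone first.
for name, start, end in reversed(schedule):
    print(f"{name}: {start} -> {end}")
```

Working the dates backwards like this immediately shows whether the research can fit before the release at all, or whether scope has to shrink.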
Now comes the fun part - breaking out the tools, techniques, and time to uncover what really happened to all those users you paid to work out the kinks and polish the stories. What to do? Well, you shouldn't be asking that now. That was a question for well before the test began - you need to know exactly what to collect if you want valid answers.
Most analysis is some combination of qualitative and quantitative - counting actions and comments to uncover and prioritize obstacles: could they do it to the standards you set, and what did they say about it? Looking for where users converge around similar likes and dislikes points to what should be kept and what should be removed.
Sometimes you need to dive headfirst into big data - if you can get it, you're better off. You'll need a mix of performance and subjective data, so make sure you know how it all relates before the test, so your correlations make sense and are valid for the kinds of analyses you'll be running.
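The combined quantitative/qualitative tally described above can be sketched as follows. The session records here are invented purely to show the shape of the analysis, not real results from this study.

```python
# Illustrative analysis sketch: the session data is invented to show the
# shape of combined quantitative/qualitative tallies, not real results.
from collections import Counter
from statistics import mean

sessions = [
    {"task_done": True,  "time_s": 95,  "rating": 4, "themes": ["liked map", "slow load"]},
    {"task_done": True,  "time_s": 120, "rating": 3, "themes": ["slow load"]},
    {"task_done": False, "time_s": 240, "rating": 2, "themes": ["confusing nav", "slow load"]},
]

# Quantitative: could they do it to the standard you set, and how fast?
success_rate = sum(s["task_done"] for s in sessions) / len(sessions)
avg_time = mean(s["time_s"] for s in sessions)

# Qualitative: where do users converge around similar likes and dislikes?
theme_counts = Counter(t for s in sessions for t in s["themes"])

print(f"success rate: {success_rate:.0%}")
print(f"mean time on task: {avg_time:.0f}s")
print(theme_counts.most_common(1))  # the most-converged-on theme
```

Counting theme mentions this way surfaces convergence quickly; deciding what counts as the "same" theme is the judgment call that has to happen before the tally.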
The Mural.ly board shows an app developer's journey using a tool in a test environment, combined with background information from surveys about the ecosystem and collaborators.
The Sankey diagram (produced in R) shows the retention flow and magnitude of influences during travelers' early membership. The left side shows the possible influences; the right side shows the ones that matter for creating effective marketing strategies.
Part of a journey map
Sankey diagram of influences
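A Sankey diagram like this is driven by weighted source-to-target links, so the data prep amounts to counting flows. The sketch below uses Python rather than the R used for the actual diagram, and the influence and outcome labels are hypothetical stand-ins, not the study's data.

```python
# Sketch of the data prep behind a Sankey of influences. Labels are
# hypothetical stand-ins, and this uses Python, not the original R.
from collections import Counter

# Each record: (influence during early membership, retention outcome)
observations = [
    ("Welcome email", "Retained"),
    ("Welcome email", "Churned"),
    ("Peer referral", "Retained"),
    ("Peer referral", "Retained"),
    ("In-app tips", "Churned"),
]

# A Sankey needs weighted (source, target) links; counting gives the weights.
links = Counter(observations)
for (source, target), weight in sorted(links.items()):
    print(f"{source} -> {target}: {weight}")
```

The link weights are what make the diagram readable at a glance - the magnitude of each influence is the width of its band.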
It all comes together in the end to guide design: communicating the essentials of the user's journey, the key insights about their experience, the pain points they deal with, and empathy for what we make them do with our designs. The findings both answer the questions laid out in the planning and open new questions about what we still need to know. The end results bring us back to the planning table for the next steps in our journey.
The slide on the left defines what a good avatar should be.
The Mural.ly board combines individual results with essential insights for a documentation landing page.
The Mural.ly board on the right depicts the conceptual and navigational complexity facing users of an API development tool.