A simple visualization (and notes) illustrating why self-referential design is a problem, and another reason why usability testing is important:
I had a conversation recently with analysts at a Columbus-based consulting agency regarding user research and usability evaluation. I was disappointed—but not surprised—to learn the agency didn't engage in either despite having a team of designers on hand. This isn't the first time I've encountered this phenomenon, and I wanted to put some questions and comments together.
So, assuming a lack of user research and/or usability evaluation for a project, these questions occur to me:
The client and/or product owner hopefully has a vision in mind (though not always!), but how well defined and realistic is that vision? How much research has been done? How many people have they spoken to about the product or service? Who are the competitors? If the amount of research has been minimal, the consultant needs to use their expertise to help refine the vision by conducting some user research. It doesn't have to take a long time or be expensive (check out our user research cheat sheet for more information).
What if there's significant ambiguity about what's being built (there's always some)? Yes, the team can get together, ideate, and generate some stories. But are these stories anything other than assumptions if there's no data to back them up? So, take time to gather some data to help inform ideation/storytelling sessions.
In the midst of design, the team should take the initiative and conduct informal usability evaluation during the design sprints (or at whatever point makes sense for the project). Just like gathering data for ideation/storytelling sessions, it doesn't have to be expensive in terms of time and resources. Formative sessions with 4 to 6 users (actual users strongly preferred), held over a day or two, should provide enough data to ensure that the design is on course. Employ techniques with some level of rigor: internal tests, 5-second tests, and similar techniques return questionable results in my experience.
We're the experts and shouldn't assume the client knows exactly what needs to be built. An interesting product/service concept needs to be developed, and we owe it to our clients to use the tools and techniques at our command. Part of this may involve educating the client (and perhaps the internal people who manage the client relationship) about what needs to be done and why it will benefit the project. In cases of great ambiguity, it may mean forging ahead and doing the work you know needs to be done (sometimes it's better to beg forgiveness than ask permission)!
A while back I got into an exchange with another designer on Twitter regarding his conduct of a usability test. It started when he "tweeted" this (these aren't the exact words, but I've captured the gist):
I'm testing with users and updating the design as issues are uncovered.
This surprised me: it didn't seem like a good idea from a methodological perspective, yet here was a fairly well-known designer (who'd written a book or two by the time of our exchange) talking about it like it was business as usual.
I replied something to the effect of "Shouldn't you change the design after you've run all the sessions (and therefore collected all the data)?" We had a brief friendly exchange following this: he didn't see the harm in changing the design as he was running the sessions. I left it at that, but wanted to put down my thoughts on why this is not a good idea.
If you change the design immediately after the session, how do you know the issue is a problem? And to what extent is it a problem? For example, pretend that you ran a series of 12 user-testing sessions and observed several issues: some (say, issues 1 and 2) in most of the sessions, and others (like issue 5) in only one or two.
Is issue 5 really a problem? Maybe, but certainly not as big a problem as issues 1 and 2. Had issue 5 been "fixed" early in the test sessions, you wouldn't have been able to tell whether it was truly a problem (perhaps it was a methodological or other test anomaly), and you wouldn't know the magnitude of the problem (data necessary for prioritization).
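To make the point concrete, here's a minimal Ruby sketch of the kind of tally I have in mind: record what each session surfaced, then count and rank the issues only after all the sessions are done. The session data and issue labels below are invented purely for illustration; nothing here comes from a real study.

```ruby
# Tally how many sessions surfaced each issue, then rank issues by frequency.
# The observations below are invented purely to illustrate the point; in the
# hypothetical 12-session study you'd have one entry per session.
sessions = [
  %w[issue-1 issue-2],
  %w[issue-1 issue-2 issue-3],
  %w[issue-2 issue-4],
  %w[issue-1 issue-2 issue-5],
  # ...one array of observed issues per remaining session
]

counts = Hash.new(0)
sessions.each { |observed| observed.each { |issue| counts[issue] += 1 } }

# Issues seen in many sessions rise to the top; one-off observations sink.
counts.sort_by { |_, n| -n }.each do |issue, n|
  puts format("%-8s observed in %d of %d sessions", issue, n, sessions.length)
end
```

Run over a full set of sessions, output like this makes it obvious which issues recur and which were one-offs, which is exactly the prioritization data you give up by fixing things mid-test.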
Usability testing, in large part, is about understanding why users are having problems with a product's design. For a given issue, you want as much data as you can get so you can understand the nature of the problem. If you're changing the design (ostensibly to fix an observed issue) as you're testing, you've lost the opportunity to learn more about the problem. If you only have one data point for an issue, can you really address it with a high level of confidence (and again, is it really a problem)?
A side question: What happens when the mid-test "fix" introduces new issues? It seems like the design and testing sessions could go off the rails pretty quickly with this methodology.
Testing and updating in this way, it's entirely possible that the designer is "fixing" issues that aren't really problems, or that are relatively trivial. Or, because the designer doesn't understand the nature of an actual problem, it isn't addressed as well as it could have been had there been more data.
What are your thoughts? Is this a common testing methodology? Is there a context in which it would make sense?
Hua (simplified Chinese for flower) is the tool Just Right UX uses for its blog. It's a static content generator we developed after considering a number of CMS/blogging platforms (dynamic and static) and deciding to create our own tool. One of our requirements was the reuse of existing styles, includes, and other assets to save time and, more importantly, ensure a seamless look between the primary website content and the blog. Hua allowed us to meet this requirement and took significantly less time to code than adapting a theme from another CMS/blogging platform.
Hua is written in the Ruby programming language and was inspired in part by the venerable Perl-based blogging tool Blosxom and similar static content generators. Simplicity is one of its core principles: the database of blog entries, the blog content, includes, and template files are all maintained in plain text. Comments are provided through a third-party engine like Disqus or IntenseDebate.
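Hua's code isn't public yet, but the overall shape of a plain-text generator like this is straightforward. The following is a rough Ruby sketch of the general approach, not Hua's actual implementation; the entries/ and output/ directories, the template placeholders, and the entry format are all assumptions made for illustration.

```ruby
# A rough sketch of a plain-text static generator in the spirit described above.
# Paths, placeholders, and the entry format are assumptions, not Hua's real code.
require "fileutils"

template = File.read("template.html")      # shared layout with {{title}} and {{body}} markers
FileUtils.mkdir_p("output")

Dir.glob("entries/*.txt").sort.each do |path|
  lines = File.readlines(path, chomp: true)
  title = lines.first                       # first line of the entry is its title
  body  = lines.drop(1).join("\n").strip    # the rest is the post body

  html = template.sub("{{title}}", title)
                 .sub("{{body}}", body.split("\n\n").map { |p| "<p>#{p}</p>" }.join("\n"))

  File.write(File.join("output", File.basename(path, ".txt") + ".html"), html)
end
```

Because the template and includes are just the site's existing HTML fragments, the blog inherits the main site's look without adapting someone else's theme, which was the requirement that prompted Hua in the first place.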
Hua is in the early stages of development: we'll be adding to it over the next few months to support Markdown, full tag functionality, and other features. While the code is currently hosted internally, we plan to eventually host it on GitHub to encourage contributions from other interested developers. If you can't wait and are interested in Hua now, get in touch.