Blog

Mon Jul 17 2017
  • methodology
  • testing
  • usability

Changing a design while it's being tested: good idea?

A while back I got into an exchange with another designer on Twitter about how he was conducting a usability test. It started when he tweeted this (these aren't the exact words, but I've captured the gist):

I'm testing with users and updating the design as issues are uncovered.

This surprised me: it didn't seem like a good idea from a methodological perspective, yet here was a fairly well-known designer (who'd written a book or two by the time of our exchange) talking about it like it was business as usual.

I replied something to the effect of "Shouldn't you change the design after you've run all the sessions (and therefore collected all the data)?" We had a brief friendly exchange following this: he didn't see the harm in changing the design as he was running the sessions. I left it at that, but wanted to put down my thoughts on why this is not a good idea.

Is the problem really a problem?

If you change the design immediately after the session, how do you know the issue is a problem? And to what extent is it a problem? For example, suppose you ran a series of 12 user-testing sessions and observed several issues:

  • Issue 1: 7 users experienced this issue
  • Issue 2: 6 users experienced this issue
  • Issue 3: 4 users experienced this issue
  • Issue 4: 3 users experienced this issue
  • Issue 5: 1 user experienced this issue

Is issue 5 really a problem? Maybe, but certainly not as big a problem as issues 1 and 2. Had issue 5 been "fixed" early in the test sessions, you wouldn't have known whether it was truly a problem (perhaps it was a methodological or other testing anomaly), and you wouldn't have known the magnitude of the problem (data necessary for prioritization).

A Missed Opportunity

Usability testing, in large part, is about understanding why users are having problems with a product's design. For a given issue, you want as much data as you can get so you can understand the nature of the problem. If you're changing the design (ostensibly to fix an observed issue) as you're testing, you've lost the opportunity to learn more about the problem. If you only have one data point for an issue, can you really address it with a high level of confidence (and again, is it really a problem)?

A side question: What happens when the mid-test "fix" introduces new issues? It seems like the design and testing sessions could go off the rails pretty quickly with this methodology.

The Bottom Line

When testing and updating in this way, it's entirely possible that the designer is "fixing" issues that aren't really problems, or that are relatively trivial. Or, because the designer doesn't understand the nature of an actual problem, it isn't addressed as well as it could have been had there been more data.

What are your thoughts? Is this a common testing methodology? Is there a context in which it would make sense?

Fri Jun 30 2017
  • development
  • hua

Introducing Hua

Hua (simplified Chinese for flower) is the tool Just Right UX uses for its blog. It's a static content generator we developed after considering a number of CMS/blogging platforms (dynamic and static) and deciding to create our own tool. One of our requirements was the reuse of existing styles, includes, and other assets, both to save time and, more importantly, to ensure a seamless look between the primary website content and the blog. Hua allowed us to meet this requirement and took significantly less time to code than adapting a theme from another CMS/blogging platform.

[Hua logo]

Hua is written in the Ruby programming language and was inspired in part by the venerable Perl-based blogging tool Blosxom and by similar static content generators. Simplicity is one of its core principles: the database of blog entries, the blog content, the includes, and the template files are all maintained in plain text. Comments are provided through a third-party engine like Disqus or IntenseDebate.
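
To make the plain-text approach concrete, here's a minimal sketch of how a generator along these lines can work, written in Ruby since that's Hua's language. To be clear, this is not Hua's actual code: the directory names, the ERB template, and the "first line is the title" convention are assumptions made purely for illustration.

  # Minimal static-generator sketch (illustrative only, not Hua's source).
  # Each entry is a plain-text file; a shared ERB template supplies the layout.
  require "erb"
  require "fileutils"

  ENTRIES_DIR = "entries"      # hypothetical: one .txt file per blog entry
  TEMPLATE    = "layout.erb"   # hypothetical: template reusing the site's existing markup
  OUTPUT_DIR  = "public/blog"

  layout = ERB.new(File.read(TEMPLATE))
  FileUtils.mkdir_p(OUTPUT_DIR)

  Dir.glob(File.join(ENTRIES_DIR, "*.txt")).each do |path|
    # Assumed convention: first line is the title, the rest is the body.
    title, body = File.read(path).split("\n", 2)
    html = layout.result(binding)
    File.write(File.join(OUTPUT_DIR, File.basename(path, ".txt") + ".html"), html)
  end

In this sketch the template is just the site's existing HTML with <%= title %> and <%= body %> placeholders, which is one way the style-reuse requirement described above could be satisfied.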

Hua is in the early stages of development: we'll be adding to it over the next few months to support Markdown, full tag functionality, and other features. While the code is currently hosted internally, we plan to eventually host it on GitHub to encourage contributions from other interested developers. If you can't wait and are interested in Hua now, get in touch.