A look at what's going on in the field of user experience.
There are many ways to display multipoint rating scales: you can vary the number of points (e.g., 5, 7, or 11), and you can label or not label those points.
By Jacopo Cargnel
“Designing the right things versus designing things right.”
By Steven Hoober
Tables have an undeserved reputation for being evil and wrong in the digital environment.
By Jonathan Walter
Has a developer or stakeholder ever informed you that your user-interface (UI) design deliverables were oversimplifying things or were portraying systems and workflows differently from the way they actually worked at a programmatic level? Perhaps they criticized you for creating order on top of the chaotic underpinnings of the systems or for willfully streamlining technical tasks whose completion they assumed to be the responsibility of users—who might not readily understand them. I’ve encountered such scenarios numerous times throughout my career. Often, the people offering such criticisms were correct: the user interfaces I had designed did not truly reflect the underlying system they represented. Of course, this was intentional on my part.
By Lassi A. Liikkanen
What can your company learn from other organizations’ failures in embracing design? Embrace the best ideas from the experiences of thousands of organizations that have taken a shot at becoming more design-driven!
One of the primary goals of measuring the user experience is to see whether design efforts actually make a quantifiable difference over time. A regular benchmark study is a great way to institutionalize the idea of quantifiable differences. Benchmarks are most effective when done at regular intervals (e.g., quarterly or yearly) or after significant design or feature changes.
A UX benchmark is something akin to getting a checkup at the doctor. You get your blood pressure, weight, height, cholesterol, and other health measures checked. These metrics help describe quantitatively how healthy you are. They can be compared to existing criteria (e.g., to determine if your blood pressure or cholesterol is relatively high) and tracked over time. If there’s a problem, you create a plan to improve your health. And the same idea applies to UX benchmarks (Sauro, 2018, p. 2).
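Tracking benchmark metrics over time, as described above, amounts to summarizing each study wave and comparing the summaries. The sketch below illustrates that idea with hypothetical SUS-style scores for two quarterly waves; the data, metric, and normal-approximation confidence interval are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of comparing UX benchmark scores across two study waves.
# The scores below are hypothetical, and the 95% interval uses a simple
# normal approximation rather than a formal t-based interval.
from math import sqrt
from statistics import mean, stdev

def summarize(scores):
    """Return the mean and a normal-approximation 95% confidence interval."""
    m = mean(scores)
    se = stdev(scores) / sqrt(len(scores))
    return m, (m - 1.96 * se, m + 1.96 * se)

# Hypothetical SUS-style scores (0-100) from two quarterly benchmarks.
q1 = [62, 70, 58, 65, 74, 61, 68, 72, 66, 59]
q2 = [71, 78, 69, 74, 80, 72, 76, 70, 75, 73]

for label, wave in (("Q1", q1), ("Q2", q2)):
    m, (low, high) = summarize(wave)
    print(f"{label}: mean={m:.1f}, 95% CI=({low:.1f}, {high:.1f})")
```

If the intervals for two waves do not overlap, that is a reasonable first signal that the design changes made a quantifiable difference; a proper comparison would use a significance test on the two samples.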
There is plenty of debate about the best way to quantify attitudes and experiences with rating scales. Perhaps the most popular of these debates concerns the “right” number of response options to use. For example, is an eleven-point scale too difficult for people to understand? Is a three-point scale insufficient for capturing extreme attitudes?
Most research on this topic shows that if the scale (1) can measure the direction of the response (e.g., disagree vs. agree), (2) can provide some indication of the magnitude of the response (e.g., agree vs. strongly agree), and in most UX research, (3) has a neutral position, it will have sufficient reliability and sensitivity (keeping in mind that increasing the number of response options usually increases reliability up to eleven points). To meet these requirements for single-item measures, you need a minimum of five response options. When combining item responses into multi-item scores, you can usually get by with a smaller number (e.g., the SUMI uses 50 three-point items).
Radio buttons and checkboxes have long caused confusion for users. They’re often used in the same contexts but look totally different. Designers and developers know the difference, but that’s because they learned it through their work. What about users, who were never taught the difference?
The fact that users need to be taught the difference shows that these two components are not intuitive. Their appearance alone does not convey their differences in functionality. The visual cues themselves—a dot and a checkmark—carry no specific meaning to users other than indicating that an option is selected. Therefore, the existence of both radio buttons and checkboxes violates the UX principle of consistency.
Flick, flick, flick, flick! That’s the sound of the user’s finger scrolling through your long mobile landing page. Eventually, they’ll get flick fatigue and abandon your page if they don’t see the call-to-action button.
The user’s attention and energy are finite resources. Once the effort a page demands exceeds a certain point, users will call it quits and move on. If they can’t tap your call-to-action button when they’re ready, you could lose them as potential customers.
Form-field label alignment has evolved over time. It’s been five years since I first introduced infield top-aligned labels. Because of their advantages over both top-aligned and infield labels, many have adopted them. However, it seems many have also adopted their counterpart, floating labels.
This is unfortunate for users because infield top-aligned labels have far better usability and accessibility than floating labels. Any standard for label alignment should offer a high level of usability and accessibility; otherwise, it’s not a best practice everyone should follow.