by Donald Norman.
This is a timeless classic on user-centered design, chock-full of wonderful examples. If you read only one book on user interface design, this should be it, even though it contains not a single software example.
Humans are not psychics, and they are forgetful, too. So things need visible clues, "affordances", that show what you can do with them. Humans see patterns everywhere and build a "conceptual model" of how things work based on what they see, so things had better show how to use them by "mapping" their controls to their functions. Humans are used to feeling effects in the physical world, so things must provide direct feedback about what is going on, what state they are in, and what the effects of an action are. Humans make errors, so things should help them, limiting actions by "constraints" to the ones that make sense in a given situation, or at least providing a way to back up easily.
How easily can you
- tell what you can do?
- tell how you can do what you want to do?
- do it (only if sane to do so)?
- tell what state the system is in?
- tell the effect of your action?
- tell if the effect is what you wanted?
- undo it (if you did not like it or made a mistake)?
This basically tells you how user friendly your product is.
Affordance: the perceived and actual properties of a thing that determine just how the thing could possibly be used.
Visibility: making relevant parts visible.
Mapping: an evident relationship between controls and functions; the relationship between two things, here the controls and their movements, and the results in the world. Intended actions and possible operations should map one-to-one. Whenever the number of possible actions exceeds the number of controls, there is apt to be difficulty. Good mappings use spatial analogy, use additive dimensions like length to show increase or decrease, use substitutive dimensions like color to show change, and use natural relationships. If you need labels, the design may be faulty.
Conceptual model: the user's idea of how the system works, based on its visible parts. The designer never talks to the user directly to explain the design, and users rarely read manuals, so the designer must make the design self-explanatory. Provide sensible models in your design, or people will make up insensible ones. And if the way people think your system works differs from the way it actually works, you have a problem. The designer, having built the system, knows how it works and is often blind to this gap. Some strategies that help: group controls that deal with a shared function, use different kinds and operating modes of switches for different tasks, use physical analogy in layouts, and let someone who will have to use it, or whom you want to use it, try things out, to see whether they work for them.
Clues: precise behavior does not require precise understanding. Knowledge can be stored in the world. Full precision is not needed as long as one can distinguish the right choice from all others. Natural, physical constraints can block erroneous actions. In human affairs, social constraints help: learned and internalized common rules. Clues can be semantic, relying on our knowledge of the world; cultural, like commonly shared rote learning, such as traffic light colors; or logical, determining a mapping by excluding all other options. Put the desired knowledge in the world; don't require all the knowledge to be in the head.
Feedback: give each action an immediate and obvious effect. Something that happens right after an action appears to be caused by it; conversely, when an action has no apparent result, you may conclude that the action was ineffective and repeat it.
Errors: if an error is possible, someone will make it. The designer must assume all possible errors will occur and design to minimize the chance of error in the first place, or its effect once it is made. Errors should be easy to detect, they should have minimal consequences, and, if possible, their effects should be reversible. Slips are small, unconscious errors that arise out of routine or inattention. All slips rely on feedback to be detected.
There are many kinds of slips, and some suggest ways to deal with them. Capture slips: finding yourself performing a frequently done activity instead of the one you intended, if both start similarly. Description slips: erroneously performing the correct action on the wrong, similar, nearby object. The lesson in both cases: keep different controls different looking or placed apart. Mode slips: when devices have different modes in which the same action has different results, and you forget you changed mode; these are especially likely if the mode is invisible. The lesson: avoid modes, or at least make the current mode clearly visible.
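The mode-slip problem can be sketched in a few lines of Python. The watch below is hypothetical, not from the book: the same button means different things in different modes, so the display makes the current mode impossible to overlook.

```python
from enum import Enum

class Mode(Enum):
    CLOCK = "clock"
    ALARM = "alarm"

class Watch:
    """Hypothetical digital watch: one "+" button sets either the time or
    the alarm depending on mode -- a classic setup for mode slips."""

    def __init__(self):
        self.mode = Mode.CLOCK
        self.time = 0
        self.alarm = 0

    def press_plus(self):
        # The same physical action has different effects per mode.
        if self.mode is Mode.CLOCK:
            self.time += 1
        else:
            self.alarm += 1

    def display(self):
        # Keeping the current mode always visible guards against mode slips.
        return f"[{self.mode.value.upper()}] time={self.time} alarm={self.alarm}"
```

If the mode cannot be removed entirely, forcing it into every rendering of state is the next best thing.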
Mistakes are conscious errors, arising from a mistaken understanding of the situation. They can be addressed by providing better clues and mapping in addition to feedback. Both slips and mistakes can be prevented by constraints such as forcing functions.
Constraints: make it mechanically impossible to do the wrong thing. Examples are well-designed plugs, or deactivated buttons. It is better when the impossibility is plainly visible than when one has to try first.
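In software terms, a constraint means invalid actions are not merely rejected but never offered. A minimal sketch, with hypothetical names: a document only exposes "save" when there is something to save, the way a well-designed dialog grays out or hides an inapplicable button.

```python
class Document:
    """Sketch of a constraint: the set of available actions depends on
    state, so the wrong action cannot even be selected."""

    def __init__(self):
        self.saved = True
        self.text = ""

    def available_actions(self):
        actions = {"edit": self.edit}
        if not self.saved:
            # "save" is only offered once there are unsaved changes.
            actions["save"] = self.save
        return actions

    def edit(self):
        self.text += "x"
        self.saved = False

    def save(self):
        self.saved = True
```

The visible form of the constraint matters too: a grayed-out button shows the impossibility up front, matching the book's preference for plainly visible constraints.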
Forcing functions: failure at one step prevents the next step from happening. For example, you may have to remove the key before being able to lock a car from outside. An interlock forces operations to take place in the proper sequence. A lock-in prevents premature stopping of an operation, like exiting without saving. A lock-out prevents, or makes difficult, activating a dangerous function or entering a dangerous area. You need to think about the effects of forcing functions: if they are annoying, people will try to disable or circumvent them. Alarms, like "Do you really want to delete?", are no good; people confirm the intention, not the object, and frequent alarms only annoy and are ignored. It is better to provide a way to undo.
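The car-key interlock can be sketched directly (the class and its method names are illustrative, not from the book): locking from outside is blocked until the earlier step, removing the key, has been completed, so locking yourself out becomes impossible.

```python
class Car:
    """Interlock sketch: one step cannot happen until its prerequisite
    step has been completed -- a forcing function."""

    def __init__(self):
        self.key_in_ignition = True
        self.locked = False

    def remove_key(self):
        self.key_in_ignition = False

    def lock_from_outside(self):
        if self.key_in_ignition:
            # The forcing function: the prior step must come first.
            raise RuntimeError("remove the key before locking")
        self.locked = True
```

Note how this differs from an alarm: nothing asks "are you sure?"; the erroneous sequence simply cannot be carried out.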
Undo: Actions must be without cost. If they have an undesirable effect, they must be reversible.
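One common way to make actions reversible in software, not named in the book but a standard technique, is the command pattern: every action records its own inverse on a stack. A minimal sketch:

```python
class Editor:
    """Undo sketch: each insert pushes a closure that reverses it,
    so mistakes are cheap to take back."""

    def __init__(self):
        self.text = ""
        self._undo_stack = []

    def insert(self, s):
        self.text += s
        # Record the inverse operation (bind the length now, not later).
        self._undo_stack.append(lambda n=len(s): self._truncate(n))

    def _truncate(self, n):
        self.text = self.text[:-n]

    def undo(self):
        if self._undo_stack:
            self._undo_stack.pop()()
```

Because every action carries its reversal, the cost of an error drops to one undo, which is exactly what makes confirmation alarms unnecessary.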
Declarative knowledge: what, facts.
Procedural knowledge: how. Hard to write down or transfer by explanation, often subconscious, like martial arts, or playing piano.
The design process
Design by an iterative, hill-climbing process: design, test, modify; discard things that do not work and keep the good things that do, until you run out of resources (time, money, energy).
There are three dimensions to design: aesthetics (often overdone), usability (often neglected), and cost and ease of manufacture. All have their place and need to be balanced.
A major part of the design process ought to be the study of just how the objects being designed are to be used.
Sources for bad design are
- over-emphasis on aesthetics, ignoring usability ("it probably won a prize")
- the designer becomes so expert in using the object they have designed that they cannot believe anyone else might have problems. Only interaction and testing with actual users throughout the design process can forestall that.
- customers are often not end-users and may have no clue.
- featurism, which overloads products with puzzling, unnecessary features. Either don't add the features (which is cheaper) or group them in meaningful ways.