20110524

designing everyday things and computer interactions

I've been reading The Design of Everyday Things by Donald Norman.  It was originally written in 1988 and my edition was revised in 2002, but it still retains archaic examples like electronic land-line phones without screens that were in common use around the time of the original writing.  The author also used VisiCalc as an example of spreadsheet software and referenced the Xerox Star and the Apple Lisa as failed personal computer designs.

Despite a few of these examples (which are no longer exactly "everyday"), he makes some good points, namely that the principles of good design are visibility (so the user can determine the system's state and possible actions), a good conceptual model (so the user develops a coherent idea of what's going on and why), good mappings (knowing which physical movement performs which system action), and feedback (informing the user about the results of actions).

When I got to the section on computers, there was a subsection titled Two Modes of Computer Usage.  Norman described commands as either "third-person" or "first-person," the former being issuing "actions" from a command line and the latter being GUIs (although he never used that term, since the book was written before it became a common acronym) like games and spreadsheets.  I don't really like this terminology, as it's arguably inaccurate.  No matter what, if you're interacting with a computer, it's first person.  Maybe if you were tunneling to another computer it might be "third person," but even then your commands don't change at all.  Norman made the distinction by saying that using a command language feels more like asking someone else to do something, whereas inputting data into a spreadsheet feels like you're doing it yourself.

I guess my conceptual model (to use his terms) of the computer differs from his (which isn't surprising given the generation gap), but in my mind, the first- and third-person terminology should work the other way around, if you have to use it at all.  On the command line, you specify piece by piece what you want done, whereas in a GUI, you let "someone else" take care of the details for you.  Lower level is more direct.  GUIs simply mirror physical interactions, which is why they feel more like direct interaction at first.

Then, when you want to move your clipart a little to the right and the word-processing software won't let you move it exactly how you want, you'll realize that it's not quite like the physical world.  Problems like these are design flaws stemming from the premise (or conceptual model) that interacting with the computer is like interacting with things in the physical world.  While text-command interactions are harder to learn, they are less prone to these flawed conceptual models.  The history of graphical computer interaction is one of constantly trying to make it more like the physical world.

GUIs and command-line interactions both have their purposes.  I'd never want to do digital art with a command line.  (Well, not never.  I might actually prefer a scripting interface for things like cropping, scaling, and color adjustment.)  But listing the contents of a directory, moving files, or compiling software?  Command line, please.
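
To make that concrete, here's a minimal sketch of the kind of scripting interface I have in mind, using the Python Imaging Library (the file names and adjustment values are just placeholders):

    # Crop, scale, and color-adjust an image without opening a graphical editor.
    from PIL import Image, ImageEnhance

    img = Image.open("photo.jpg")  # placeholder input file

    # Crop to a box given as (left, upper, right, lower) pixel coordinates.
    img = img.crop((100, 50, 900, 650))

    # Scale the cropped image down to half its size.
    w, h = img.size
    img = img.resize((w // 2, h // 2))

    # Bump the color saturation by 20%.
    img = ImageEnhance.Color(img).enhance(1.2)

    img.save("photo_edited.jpg")  # placeholder output file

Wrap that in a loop over a directory of files and it beats dragging crop handles a hundred times in a row.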

1 comment:

Lucas Sanders said...

I know you said you don't really think the first-/third-person labels work here, but I'd say the directness of manipulation doesn't work to distinguish the two. Both are fairly abstract ways of manipulating the machine, and the interface that implements more abstractions does so with the explicit goal of giving you actionable "objects" to manipulate in ways more akin to how we interact with real objects. I suspect the command line only feels closer to direct manipulation to you because you understand the ways the command line's interface mirrors its technical implementation.

Also, if this characterization did work, we'd have quite a hard time categorizing touch-based UIs. Most touch-based interfaces are even less direct than WIMP GUIs in terms of their technical implementation, yet adhere more closely to a metaphor of directly manipulating objects than the older UI paradigm did.