Human-Computer Interaction by Lena Mamykina

HCI encompasses two things:

  1. Designing technology: ideally the useful and usable variety.
  2. Studying how people interact with technology. E.g. were you doing cockpit design in the 1940s? You were engaged in Human-Technology Interaction.

Human-Computer Interaction as a field started around the 70s with the rise of graphic displays and personal computers. Think about punch cards from the 50s: how much are you 'interacting' with them?

Okay, so why do this? Some or many things about tech and design may seem super-obvious. But generally speaking, new technologies are unpredictable in terms of their adoption and evolution. See this book. It's very important to note that there may be (and usually are) multiple stakeholders with varying requirements and goals. E.g. the EHR needs to be used by doctors, administrators, and finance bros, yet ultimately serve the patient as the beneficiary.

You need to study safety and socio-cultural impacts too. E.g. ChatGPT "Therapists". Enough said.

In Medicine and Biomedical Informatics

Technology is deeply integrated into Medicine, and has been for a while. But Information technology is pretty recent, roughly as old as your dad, and clinicians rely on it heavily to get their jobs done.

There are always problems ("Job Security"). Consider the EHR: the goals were noble but there are problems like increased documentation burden, information overload, and so on.

But there are exciting potential solutions too. Consider using SuperHuman™ AIs (the ones good at beating Contra and racing sims). In one example, Dr. Pierre Elias showed how SuperHuman™ AIs beat human doctors handily at predicting disease from ECGs. The Human + SuperHuman combo fared in the middle! Why?

Explanations may help, but human beings are complicated. If you explain too much, humans will over-rely on you (and in some cases, they'd be better off making a decision unaided by AI). If you explain too little or not at all, humans won't listen to you. It's complicated.

Theoretical Foundations

Discussion of the Seven Stages of Action by Donald Norman, a Founding Figure in HCI: you form a goal, then plan, specify, and perform the action (execution), then perceive, interpret, and compare the outcome against the goal (evaluation). You can read this book.

It's not the only framework. There are several others, like Distributed Cognition, Activity Theory, and so on (Prof teaches a class on this).

Getting Things Done

There's a high-level flow. It's a bit commonsensical, but things often make sense in hindsight. You have Requirements → Design → Evaluation.

That's it. Figure out what they want (really figure it out), make it, ask them how you did (no back-patting). E.g. Prof studies chronic disease self-management.

Read this paper, "The Computer for the 21st Century" by another Founding Figure, Mark Weiser. It apparently spurred a lot of imagination (e.g. IoT).

User-Centering ML and AI

Like every new piece of tech, these pose their own challenges: How do you interpret the model? How do you replicate results? How long should you keep pestering the user (and/or experts) to validate results?