Response Paper 1
- Construct a specific example of a human-human interaction that clearly involves affect. Construct its "equivalent" interaction between a person and an affective technology, by using the media equation. Do this for two cases: one where the equivalence seems likely to hold, and one where it seems unlikely to hold. Do you think the presence of affect in a human-technology interaction makes the media equation more or less likely to hold? Explain your thinking.
- Argue for or against this statement: "Emotions are just special kinds of thoughts."
- Pick a least favorite and a most favorite application from Affective Computing Chapter 3 and critique both of them (pros and cons) based on your own personal and unique research perspective. I wrote these over ten years ago, and while some things have not changed much, I am interested in what you think is most interesting, most likely to succeed or fail, and why.
Response Paper 2
- Describe an experience where you perceived yourself as being empathetic toward someone. Would you say it was more affective, cognitive, or a combination of the two? Why?
- Take the Empathy Quotient test. What are your impressions of the test in light of the different approaches to understanding empathy? How would you design a test that does not rely on self report to see if someone's ability to empathize has changed as a result of some intervention? For example, how might you incorporate physiological measures or other affective technologies into this test?
- If you were able to control an unconscious tendency to mimic the emotions of others, how might that affect interactions?
Response Paper 3
- Consider this criticism: "empathetic technology" cannot succeed because technology cannot feel what people feel. Can you give an example where one person cannot feel what another person feels, and yet their empathy succeeds? What do you think are the limits of empathetic technology, given what you know about technology on the near horizon? Will it be able to help more than it has been shown to in these readings, where it has been limited largely to scripted responses from machines? Alternatively, side with this criticism and strengthen it, justifying your arguments.
- The work in Klein et al. used empathy only once, and it appeared to help reduce frustration. Do you think technology's use of empathy (without being able to actually fix the user's problem) could succeed over long-term use? Do you think it would be necessary to build something else into the technology for this approach to succeed repeatedly, over time? If you think it would fail over continued use, be clear about why. Support your argument, considering this week's readings as well as any other sources you'd like.
- Two approaches are sometimes contrasted when advising a parent how to help their child when the child is frustrated: (1) identify the child's problem and offer a fix for it, vs. (2) empathize with the child, so they can get past the bad feelings and find their own fix. First, comment on how you think this might work when delivered by technology to a child who is engaged in using it for educational instruction (e.g., a computerized learning tutor). Second, change "parent" to "technology provider" and "child" to "customer," and consider the customer to be an adult user of some new technology. Does your answer to the previous question change in this case? In both parts of this question, be clear whether you recommend favoring mostly approach (1), approach (2), or a mix of both.
- How important do you think it is to allow for "repair" when technology shows empathy? What do you think technology should do if a person responds adversely to its attempt at empathy?
- Cheery drivers responded best to an energetic voice, and upset ones to a subdued voice. Listen for instances this week where people change their voice to deal with the emotions of another person effectively, and share an example of this with the class. (If you can't find an effective example, you can give an example where the interaction was ineffective and tell us why you think it failed.) Please do not disclose identifying information.
Response Paper 4
- In Rana el Kaliouby's PhD thesis, she managed to use facial expression to infer the internal state of a person, using vision and very clever software. However, many people feel differently than their facial expression would lead other people to think. This is just one limitation of facial analysis systems. What others are there? What other things that we can see with a webcam should be taken into account when determining the emotions of a subject?
- Did you find the eyes test difficult? If you did well, what features were you paying attention to that you felt helped you decide on the emotional state from the eyes? If you did poorly, explain (after seeing the right answers) what feature(s) of the eyes you missed that you think could have improved your performance.
- Most machine learning technologies/algorithms depend heavily on the features you choose to extract from a given set of raw data. Think of an affect recognition system, be it any combination of sensors you want (vision, SC, EMG, EKG, ...) and describe what states you would want it to recognize and what features you expect to be important for recognizing those states.
- The number of sensors used in a given application can be very small or very large, depending on what you are looking for. Rana el Kaliouby used just one sensing modality, while Wagner et al. used several. Think of and describe a situation where a large number of sensing modalities might be needed. Think of and describe a situation where only one is needed. Feel free to check out some of the projects from the affective computing group for ideas (it is OK to report something somebody else has done - just put it in your own words, and feel free to raise questions and critique).
There is no advance writing assignment related to the readings this week. However, your project proposals are due in class (see below for what is needed in them). Please also think about these two more-complex-than-they-sound questions Jim Russell has asked us to contemplate while reading this week: "Do faces express emotions?" and "How can we understand emotions?"
Project Proposals: Please submit a page or two describing and explaining:
- What are you proposing to build/test/investigate?
- What resources would you need to do this? (Be clear about what you already have and what you would have to get.)
- If the project "works," what do you expect could be learned from it? What if it "fails"? (Let's make sure this will be educational/informative in either case.)
Response Paper 5
- If we can regulate our emotions automatically, we can avoid the effortful (and perhaps costly) process of intentional emotion regulation. This is the contention of Mauss et al., and it is shared by many researchers in the emotion regulation field. Do you think computers could help guide us to regulate our emotions automatically? Do you think they should? Consider the efficacy of this approach as well as its ethical implications (i.e., do we want computers to purposefully manipulate our emotions without our knowledge, even if this might be helpful?).
- Tamir et al. argue that anger, while unpleasant, might be purposefully sought to achieve certain goals. Can you think of another unpleasant emotion you might willingly summon? What techniques would you use to conjure this emotion? Tamir et al. used music and emotional recall. Can you think of some other emotion regulation tricks that could be suited for this purpose?
- Many of these papers discuss individual differences in emotion regulation. Do you think technology could tailor itself to these individual differences in order to respond more adaptively? How? Please give one example.
- Check out the game Web site The Journey to Wild Divine: The Passage and click on "Demo the Passage Now!" The demo is thick with syrupy new-agey vocabulary, but please try to evaluate the product objectively. We will discuss limitations of this (and other) biofeedback programs in class. For now, however, please think about how games like this could be useful. Consider what you've learned in the readings to answer this question. Replace their rhetoric with your own informed insight into emotion regulation (that is, don't just say "it can help you unfold your full potential and glimpse the field of infinite possibilities!").
Response Paper 6
- Jill, the please-her-boss pollster, has been given ten questions on which to collect people's opinions. The questions relate to the overall satisfaction that people perceive with her party's politicians and their impact both locally and nationwide. She is not allowed to modify the questions, but she is willing to modify how the poll is conducted in subtle ways to make her party's political candidates look as good as possible. She plans to poll 1000 people nationally by phone and 1000 locally, in person, by some "random" process. Describe three ways Jill might bias the opinions she collects by manipulating affect-influencing factors. Be clear how you think each of Jill's three manipulations would affect their opinions.
- The work with Larson was inspired by twisting around an idea of Isen's in order to find a measure that could be influenced by very subtle affective feelings. When reading this paper, pay careful attention to all the factors that might have influenced each participant's feelings, and see if you can find some that were not fully controlled.
- Have the readings this week changed the way you will (critically) read future psychological studies? Describe how some other work you've seen or read might have had a different outcome if its authors had carefully controlled for emotion-related variables up front.
Response Paper 7
- The results of Slack's survey of patients using computer-based medical interviewing are very interesting. Considering that the study was done in 1968, do you think that the overwhelmingly positive results were due to the novelty of interacting with a computer? Or do you think that similar results would be achieved today? More importantly, how do you think that user evaluations would change after repeated interactions or with long-term interactions?
- There have been a number of studies since the Robinson and West paper that have shown superior performance of computer-based interviews over physician interviews in the solicitation of sensitive information from patients. This has included drug use, sexual behavior, and violence. Robinson and West give some great criticisms of their own work, but still raise the hypothesis that patients may report more to computers than to physicians because of less embarrassment. Do you think that their work provides reasonable evidence of this? What weaknesses of their study do you think are most critical? Now take the opposite point of view and assume that it is true that patients feel less embarrassed with and less evaluated by computers. If a computer system was designed to behave more like a human (to use text-to-speech, to use an anthropomorphic animation, to show affective facial expressions, etc.), do you think that the differences in patient reporting between computer and physician would decrease?
- The paper by Bickmore, Gruber, and Picard showed increases in the bond dimension of the Working Alliance Inventory and greater desire to continue working with the relational agent than the non-relational agent. Unfortunately, the differences in physical activity levels between the different groups only approached significance (p=0.06) for the agent vs. non-agent condition (the relational agent was no different than the non-relational agent with respect to exercise measures during the study). Do you think that this dismisses the relational agent as a useful tool for behavioral change? Or do you think that the limitations could be overcome? Can you think of any techniques to boost the motivational value of the relational agent?
- Which strategy do you think is best: (1) design highly efficient computer systems for healthcare so that doctors will have more time and energy to be empathetic, or (2) design empathetic computer systems for healthcare that can augment the empathy delivered by physicians?
Response Paper 8
- The imaginary stories in 2001: A Space Odyssey, AI, and the two short articles were not entirely positive about future emotional technologies. Choose one scenario (from the movies or the two short articles) and consider "the bad part that is most likely to happen in the near future." Are you concerned about this happening? Describe.
- In Affective Computing (Chapter 4) and [Picard and Klein, 2002], affective computers play many different roles in engaging people in everyday life. An affective computer may serve one individual or a community. Please illustrate one scenario where affective computers are important for a group of people rather than for an individual.
- In the Lie Detection paper and in Dumit's book (Chapter 4), colorful images (e.g., thermal images, CT scans) of people's faces and brains can suggest that a person is lying or may have a mental disorder. How do these claims (produced by experts and technologies) influence society? Who benefits from these technologies? Who gets hurt?
- In Chapter 4 of philosopher Ian Hacking's book, he discusses the concept of 'interactive kinds': classifications that can influence the very things they classify. Are emotions (e.g., bored, irritated, arrogant, annoyed) interactive kinds? When we design emotional technologies, how can we deal with the problem of labels becoming inaccurate as contexts change?
Also, bring to class a paragraph describing your class project progress.