Project in UXD Graduate Studio

Emotion Translator

A tool to assist communication between humans and aliens

In this project, we wanted to create an experience for communication. We imagined a future in which humans and aliens live together peacefully but have trouble communicating because they don't express emotions the same way. Our goal was to create a technology that would serve as a bridge in conversation.

 

We conducted three rounds of ideation and three rounds of testing to refine our design, considering efficiency, privacy, usability, likability, and human behavior. Our final solution was a bracelet that shows text hints. It can be activated by pushing a button or twisting the wrist, and it notifies the user through vibration when a major emotion change is detected.

Role

UX Researcher

UX Designer

Timeline

2019/1 ~ 2019/2

Team Members

Yixuan Bian

Wave Upatising

 

Framing the experience

In this scenario, aliens and humans live together as equals. They meet and interact as friends, co-workers, and neighbors in places such as schools and churches. However, ever since the aliens arrived on Earth, there has been a barrier in expressing and interpreting emotions, because the two species don't express feelings the same way.

Context

Persona

We started by making a persona, used solely to frame the experience our team was working on. The alien's information and characteristics helped create a clear and consistent story for the project.

 

Persona.png

Response

Challenge

"How can we create a technology that would serve as a bridge in interpreting the emotions of both humans and aliens to support conversation?"

 

This is a video explaining the context and goals for this project:

 

Design Process

1_I.png

1. First Ideation

In the beginning, we brainstormed many ideas and options for how to approach this problem.

 

We identified four ways to represent emotion that could be mutually understood by both parties: Visual, Smell, Touch, and Sound.

ET_IDEATION1.png
ET_IDEATION3.png

We evaluated each option's level of privacy and required attention to help us narrow down which one to explore first. From our evaluation, Sound and Touch offered high privacy and required low attention.

 

Of the two, we had more ideas for how to use sound in our device, so we decided to start with sound.

ET_IDEATION2.png
1_T.png

2. First Testing

 

We wanted to explore how sound could be used in the context of a conversation and see whether it would work. We identified three main aspects: Volume, Sound Type, and Frequency.

ET_TESTIN1.png
ET_TESTING2.png

(1) Volume: How loud can the sound be in order to be seamlessly incorporated into a conversation?

 

  • Method: We had participants listen to music while holding a conversation. Throughout the conversation, we changed the music to represent different moods and asked the participants to note how many times it changed and in how much detail they could identify the music. This was repeated three times at three different volume levels.

 

  • Finding: Participants had a hard time distinguishing the moods of the music but were able to spot changes. However, doing so demanded a large amount of attention, which distracted them from the conversation.

 

(2) Sound Type: What do different types of sounds/music convey to people? What types of sounds would work better in a conversation?

 

  • Method: We played a series of sounds and asked the participants to describe each one using keywords.

 

  • Finding: For most of the sounds, no participants chose overlapping keywords. This surprised us, because we thought a few of the sounds represented their emotions very clearly.

 

 

(3) Frequency: How often should the device update the user about the other party's emotions without being distracting?

 

  • Method: For this test, we wanted to see how often participants would want feedback in a conversation that gave them no facial or tonal cues. To do this, we used the Google Translate voice as the other party in the conversation. We tried different feedback frequencies: constant, timed interval (e.g., every 1 min), and manual.

 

  • Finding: Once we started testing, we realized the timed interval did not work at all: it was often repetitive, interrupting the same speaker in the middle of a story, so it was quickly ruled out. Between the constant and manual options, users preferred manual feedback, though they still felt it interrupted and distracted from the conversation.

Take Away

 

We realized that sound was very distracting: it required a lot of attention to recognize different sounds mid-conversation, and it was very difficult for multiple people to interpret the same sound the same way.

 

Hence, we decided to explore other options for our device.

3. Second Ideation

 

We took a step back and did some more ideation. At this stage, we researched similar situations, such as those of individuals with autism.

We also made an annotated portfolio to explore different possibilities.

Secondary Research

Treating the alien as one party in the dialogue somehow limited our thinking, so we re-grounded the Alien by imagining it as a person on the autism spectrum, particularly with Asperger syndrome. People with Asperger's think and solve problems differently from neurotypical people, yet they fit the description of our Alien persona: they speak the same language as humans but express their feelings in a different way.

 

Based on our research, we found these characteristics relevant to our topic:

  • They may show few emotions. For example, they may not smile when happy or laugh at a joke.

  • They may speak in a flat, robotic way.

  • They may avoid eye contact during a conversation.

  • They may be socially awkward, may not understand social rules or the subtleties of language such as irony, and may show a lack of empathy.

  • Because of current social expectations, most people with Asperger's try to hide these characteristics and act like everyone else.

Annotated Portfolio

Here, we went back to the four ways of representing emotion from our first round of ideation. We had already concluded that smell demands a high level of attention, which conflicts with our first-round testing finding that people are easily distracted, and touch is hard to prototype. Therefore, we decided to keep exploring visual representation. We also considered some solutions drawn from our research on autism, and we used the annotated portfolio to help us evaluate the ideas.

annotated.png

Through this ideation process, we decided that using a screen to present text prompts was a good solution. From there, we wanted to explore how visual hints could affect a conversation.

 

2_T.png

4. Second Testing

 

To understand how text prompts can influence a conversation, and how people feel about them, we conducted our second round of testing.

 

  • Method: This test had two rounds. We designed three stories and, for each story, several possible responses to the alien. The alien told a story first; participants then responded and carried on a short conversation. We used the Google voice to reproduce the alien's flat speaking tone.

    • First round: participants could only judge the alien's emotions on their own.

    • Second round: participants had a device showing the alien's emotion throughout the conversation.

  • Prototype: Both parties in the conversation had displays. We typed the emotion we wanted to display into the Arduino editor, and it showed directly on the screen, as in the sketch below.
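
A minimal sketch of how this Wizard-of-Oz setup could work, assuming a 16x2 character LCD on the standard LiquidCrystal example pins (the display, wiring, and baud rate here are illustrative, not the exact prototype):

  #include <LiquidCrystal.h>

  // Assumed wiring: RS, E, D4-D7 on the common example pins.
  LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

  void setup() {
    Serial.begin(9600);
    lcd.begin(16, 2);            // 16x2 character LCD
  }

  void loop() {
    // The "wizard" types an emotion word into the Arduino Serial
    // Monitor; it appears on the participant's screen immediately.
    if (Serial.available() > 0) {
      String emotion = Serial.readStringUntil('\n');
      emotion.trim();
      lcd.clear();
      lcd.print(emotion);
    }
  }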

 

ET_PROTOTYPE.jpg

Research Question:

(1) How did your experience of responding to the stories change between the first and second rounds?

(2) What did you find challenging in the first round? Was it different in the second round?

(3) In the context of this project, would you use this device?

T2_1.png

Findings: Round 1 (Without device)

 

  • Testers asked the Alien questions to better understand what it was feeling before giving opinionated responses. Most testers' responses remained neutral throughout the scenarios.

  • Most testers were more concerned with the story and not eager to share their own point of view. They usually responded to the emotions only after the Alien talked about them.

  • They responded to the emotions based on their own assumptions, and some of their interpretations of the Alien's feelings were wrong.

  • One participant didn’t know how the Alien felt, so she completely disregarded it and just provided her opinions about the topic.

Key Quote: ​

“I don’t know the alien feelings but I don’t think I need to, I just respond in my way and I give the feedback because of the content, not the mood.”

 

Findings: Round 2 (With device)

  • Most responses to the same stories changed dramatically.

  • Some conversations jumped to judgment very quickly in response to the displayed emotions, for example, "I think you are mean." In other conversations, testers brought up disagreement in a subtle way and then compromised when they didn't agree with the Alien.

  • Some testers realized that their interpretation of other people's emotions is not always correct, and adjusted their answers using the hint.

  • Moral and privacy issues were brought up: some testers didn't feel comfortable having someone know exactly how they feel at all times.

Key Quote:

“I know more about them than before. It’s clear what their intentions are. It makes me feel like it’s easier to deal with them. Helps me come up with the best response to the conversation.”

 

Take Away

 

All participants responded positively to the visual design, and we saw that their responses did change when they could check the Alien's emotion.

 

In round 1, most testers expressed their personal opinions about the story rather than engaging with the Alien's thoughts. In round 2, most answers and conversations revolved around the Alien's thoughts.

 

Overall, we concluded that visual representation was a good way to give feedback in a conversation.

3_I.png

5. Third Ideation

 

We had not yet determined the physical form and appearance of the device. From here, we brainstormed how to improve its performance to create a better communication experience between humans and aliens. We considered when and how often to provide feedback, as well as the size and type of the device, and we made a list of emotions to be shown.

Physical design

One concern raised in the round 2 post-test interviews was privacy: some people feel uncomfortable when they notice the other person checking their emotions, and also feel uncomfortable if the other person knows they are checking.

 

We wanted to iterate on our design to reduce this discomfort. Therefore, we ideated three prototypes of different types and sizes to keep the act of checking emotions as discreet as possible.

 

Another finding from rounds one and two was that, rather than constant feedback, people preferred to learn the emotion at the moment they wanted to know it. Hence, we added a way to turn the screen on and off.

 

ET_PROTOTYPE2.png
ET_PROROTYPE3.png

Emotion Research

One insight from the round two post-test interviews was that testers wanted feedback when major emotion changes occurred during a conversation. Therefore, we did secondary research on common emotions to clarify what counts as a major emotion change.

 

  • Plutchik's Wheel of Emotions: anger, fear, sadness, disgust, surprise, anticipation, trust, and joy.

 

  • Basic Emotions by Ekman: anger, fear, surprise, sadness, disgust, contempt, and happiness.

Based on this research, we came up with nine emotions to display on the screen; a switch from one of these emotions to another counts as a major emotion change, as sketched after the chart below.

ET_EMOTIONS.png
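
As a rough illustration of this rule, the vibration notification that ended up in our final design could be driven by logic like the following; the emotion indices and motor pin are placeholders, since the actual nine labels are the ones in the chart above:

  // Illustrative only: emotions are tracked as indices 0-8 into the
  // nine labels above, and the motor pin is an assumption.
  const int MOTOR_PIN = 9;       // vibration motor driver pin
  int lastEmotion = -1;          // last detected emotion, -1 = none yet

  void setup() {
    pinMode(MOTOR_PIN, OUTPUT);
  }

  // Called whenever a new emotion reading arrives (emotion detection
  // itself is out of scope for this prototype).
  void onEmotionDetected(int emotion) {
    // Any switch from one of the nine emotions to another counts as
    // a major change and triggers a short vibration pulse.
    if (lastEmotion != -1 && emotion != lastEmotion) {
      digitalWrite(MOTOR_PIN, HIGH);
      delay(300);                // brief pulse, not a constant buzz
      digitalWrite(MOTOR_PIN, LOW);
    }
    lastEmotion = emotion;
  }

  void loop() {}
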
3_T.png

6. Third Testing

 

We ran a third round of testing to find out which prototype provided the better experience and addressed the concerns raised by the round two results.

 

Research Question: Which type of device allows participants to check it most discreetly?

 

  • Method: Two participants were involved in each test. Beforehand, participant A was told how many times to check the device. Participant A acted out checking the device, waking it and looking at it, while participant B counted how many times A checked. We interviewed both participants afterward.

  • Findings

    • Participants who checked the device preferred a wearable, so they didn't have to carry an extra device with them.

    • Participants who checked the device noted that unlocking a phone took too much time.

    • Making the device vibrate on major emotion changes would save people from manually checking emotions multiple times.

testing3.jpg

Take Away

 

Based on the testing results, we realized that a wearable would be a good form for the design. We decided to stick with the bracelet idea, in which one flips the wrist to activate the text. We also wanted a physical button for manual activation, as well as a vibration function to signal dramatic emotion changes.

 

Outcome

We designed the final solution based on our testing results. The device has these features:

ET_FINAL DESIGN.png

Sketch

final design.png

Prototype

We were not able to create a working prototype of the bracelet, so what we made instead for the final design prototype was this:

final prototype.png

The Arduino was connected to a button that activated the screen: the text showed up for a few seconds and then went away. This is how both the button on the bracelet and the wrist twist would work; the text appears for only a few seconds when activated by either method.
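
A minimal sketch of this behavior, assuming the same 16x2 LCD as before, a button on pin 7, and a three-second display window (pins, timing, and the placeholder hint text are illustrative):

  #include <LiquidCrystal.h>

  LiquidCrystal lcd(12, 11, 5, 4, 3, 2);   // assumed example wiring
  const int BUTTON_PIN = 7;                // button to GND, internal pull-up
  const unsigned long SHOW_MS = 3000;      // "a few seconds" on screen

  unsigned long shownAt = 0;
  bool showing = false;

  void setup() {
    pinMode(BUTTON_PIN, INPUT_PULLUP);
    lcd.begin(16, 2);
  }

  void loop() {
    // A button press (or, on the bracelet, a wrist twist picked up by a
    // motion sensor) wakes the screen with the current emotion hint.
    if (digitalRead(BUTTON_PIN) == LOW && !showing) {
      lcd.clear();
      lcd.print("happy");                  // placeholder hint text
      shownAt = millis();
      showing = true;
    }
    // After a few seconds the text goes away on its own.
    if (showing && millis() - shownAt >= SHOW_MS) {
      lcd.clear();
      showing = false;
    }
  }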

 

Reflection

This was an exciting project. Instead of doing research first to find out what people need, we jumped directly into a future vision where aliens live peacefully with humans. The setting may seem unrealistic, but unequal and inefficient communication exists everywhere, from autism to cross-cultural communication. Improving communication has always been a field of great potential and importance. If we had more time, improving the communication experience for three or more people would be our next goal.