Does your behavior online reflect who you are in real life, or are you someone else entirely? Experts on several panels at this year’s Collision conference spoke about identity in the digital age and how technology influences our emotions and actions. The annual event brings together engineers, entrepreneurs, and chief executives of high-tech companies to discuss emerging technology.
DEEP DARK WEB
During the “When Worlds Collide” talk, cyberpsychologist Mary Aiken, who studies the effects of emerging technology, said she thinks of cyberspace as a place where people are free to be who they truly are. For some, that might mean participating in social taboos. The Internet doesn’t cause people to engage in such bad behavior, she said, but instead normalizes it.
Take, for example, Elliot Rodger, who in 2014 killed six people and wounded 14 during a shooting and stabbing spree in California. He was part of an online community of so-called incels (involuntarily celibate), who cite women's lack of interest in them as justification for misogyny or violence. Alek Minassian, who last month drove a van over 10 pedestrians in Toronto, is a self-proclaimed incel as well, and he reportedly considered Rodger a hero.
Before the Internet, Aiken said, people on the fringe of society were isolated. A person in Canada likely would have had no way of knowing about another in California who shared his philosophy. Now they can easily find each other, and many others like them, through online communities. No longer do they feel they are outcasts.
“The Internet shines a very bright light into the darkest reaches of the human psyche,” Aiken said. “I’m beginning to wonder: Are we all just ‘Game of Thrones’ underneath it all?”
WHAT DATA DOESN’T KNOW
Although the Internet has allowed people to express themselves freely, they don’t always portray themselves genuinely—at least not the full picture. That’s according to speakers on the “Future of Feelings” panel.
Caroline Sinders, a researcher focused on harassment and political activism on social media, said we're still constrained by cultural expectations online. What we do on social media doesn't necessarily reflect how we're feeling, she said. For example, she has no choice but to "like" all her sister's wedding photos. "If you were to scrape the likes on my Facebook page, you'd assume I have Pinterest boards of wedding dresses. But I hate weddings," she said.
Pamela Pavliscak calls that performance. She is founder of Change Sciences, a company in New York City focused on human-centered research and design. She said what we post online is a simplification of our emotions.
“We can send a smiley emoji but be thinking all kinds of other things,” she said. “If companies use this data to analyze who we are, it’s going to limit us as human beings.”
Facebook last year filed a patent for an algorithm that attempts to analyze users' emotions by comparing how they type with their baseline typing behavior. If people are tapping their phone's keyboard harder or typing slower than usual, that could indicate they are angry or depressed.
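The patent's details aren't public in the article, but the idea it describes can be sketched in a few lines: record typing speed and key pressure, compare each against the user's own baseline, and flag large deviations. The class names, fields, and threshold below are all hypothetical, purely for illustration.

```python
# Illustrative sketch of keystroke-dynamics emotion flagging.
# All names, fields, and thresholds here are hypothetical, not from the patent.

from dataclasses import dataclass
from statistics import mean


@dataclass
class KeystrokeSample:
    interval_ms: float  # time between consecutive key presses
    pressure: float     # normalized touch pressure, 0.0 to 1.0


def deviation_from_baseline(samples, baseline_interval_ms, baseline_pressure):
    """Return relative deviations of typing speed and pressure from a user's baseline."""
    avg_interval = mean(s.interval_ms for s in samples)
    avg_pressure = mean(s.pressure for s in samples)
    speed_dev = (avg_interval - baseline_interval_ms) / baseline_interval_ms
    pressure_dev = (avg_pressure - baseline_pressure) / baseline_pressure
    return speed_dev, pressure_dev


def flag_possible_distress(samples, baseline_interval_ms, baseline_pressure,
                           threshold=0.25):
    """Flag sessions where the user types noticeably slower AND harder than usual."""
    speed_dev, pressure_dev = deviation_from_baseline(
        samples, baseline_interval_ms, baseline_pressure)
    return speed_dev > threshold and pressure_dev > threshold
```

Even this toy version makes the panelists' objection concrete: the flag fires on any slow, hard typing, whatever the actual cause, whether a sticky screen, cold fingers, or a genuinely bad day.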
The speakers raised concerns about such an algorithm, not only because companies could use it to, say, advertise comfort foods when we're sad, but also because technology is not very good at identifying how we actually feel.
“There is a lot more to unpack when it comes to emotions,” Pavliscak said, “including memories and associations.”
People tend to think of emotions as discrete states you are either in or out of. But emotions are complex, and people can have mixed feelings.
Moreover, any technology that identifies emotions based on factors such as how hard you are hitting keys is bound to be inaccurate, Sinders said. "How often are you angry because the hardware isn't working?" she pointed out. "I've often wanted to throw my phone across the room."
She said that what really bothers her about emotional data collection is that it’s none of the tech companies’ business. “We’re allowed to be angry at times, and we’re allowed to be extremely happy,” she said. “And sometimes we are private when we’re happy and public when something has made us sad.”
Pavliscak, however, mentioned an example of potentially useful emotional data collection: Car manufacturers are working to improve safety by monitoring drivers' emotions. Many drivers are distracted not by talking or texting but by their own thoughts.
“A lot of accidents happen when drivers are stressed out or upset,” she said. “There are many opportunities for sensors in cars to prevent accidents.”
How do you feel about tech companies mining our emotions? Do you think it could be valuable to us, or is this crossing a line?