Originally posted on GQ.
A conversation with Aza Raskin on inventing the infinite scroll, how social media has harmed the fabric of society, and the importance of listening to other people—and animals, too.
Aza Raskin, an advocate for ethics in technology, was born and raised in Silicon Valley. His father, Jef Raskin, was an early member of the Apple team (like, in-the-garage early), and Aza followed him into tech, first joining him at age ten for a talk on user interfaces. The younger Raskin’s first solo venture was a crowdsourced map for NGOs to use after the 2010 Haiti earthquake. In his early 30s he joined Mozilla to build an early version of Firefox, while founding a handful of startups, including Songza, which now powers a large chunk of Google Play. In 2006, he invented the infinite scroll, the now-ubiquitous feature that seamlessly delivers ever more content to a passive user.
But then something happened that stopped him in his tracks. One of those startups, Massive Health, was designing technology to get people to eat healthier. The research team found that when they applied the subtly persuasive tactics used by consumer companies to their own work, they got the results they wanted in their studies. “We could help people eat 11% healthier,” he recalls. “That was when my stomach started to drop.” If we had these tactics all along, why weren’t we using them to combat racism? To get people to stop using single-use plastic or driving inefficient cars? Instead, they were being used by big corporations to get people to spend more money. Or, even worse, by political parties across the globe to radicalize citizens and get them to back figures like Donald Trump and campaigns like Brexit.
Rather than turn cynical, Raskin committed to finding ways to use technology for good. Around the same time, he met Britt Selvitelle, a computer scientist who had worked with the founding Twitter team. The two spent the next five years talking about how they could spread their gospel of empathy and listening first, then launched the result of those talks: the Earth Species Project, a quest to understand what animals are saying. A year later, Raskin co-founded the Center for Humane Technology, a nonprofit dedicated to fighting the more pernicious aspects of our digital lives.
GQ spoke with Raskin, who possesses the gentle awe of someone who says things like “Spaceship Earth,” about whether tech can still be a force for good. The way he sees it, we’ve arrived at our current tech crisis by the same forces that have brought us to our current climate crisis. And he’s on a mission to save both. “The goal,” he says of the Earth Species Project, “is to shift the way that human beings think about our place on the planet by helping us learn to listen better.”
GQ: What are your tech habits right now?
Aza Raskin: I no longer use Instagram. I will use Twitter to broadcast, but honestly, increasingly infrequently. A couple of years ago, I was posting something on Instagram and Tristan [Harris, the other founder of the Center for Humane Technology] asked me, “Hey, what are you feeling and why are you posting?” That became a practice for me. When I post something, who am I being in that moment? Which version of me am I being? Am I being the centered, compassionate version of myself? Nine times out of ten, when I slowed down and asked which version of me I was being when I was using any of the social media products, I didn’t like the answer. That was really enough to get me to stop.
One of the most pernicious aspects of social media is that you are getting constant and infinite validation that people like you more when you look a little different than you actually do, when you’re living a life that isn’t quite your real life. You’re getting quantified proof that people like you more when you’re projecting a version of yourself that isn’t quite real.
For many people, it’s become part of their job or the way they support themselves. Part of the way they stay in contact with their friends or their family, or loved ones around the world. That’s what’s inhumane—that we are forced to use systems which are fundamentally unsafe for the things that we need. Technology is not just ripping apart the social fabric, it’s replacing our social fabric with something much more brittle.
So how can we effectively put guard rails on new tech?
This is where policy work comes in. In California, there’s a set of perverse incentives where power companies want to get as much electricity usage from you as possible, because that’s how they maximize profit. So there’s a threshold now: profits from any energy use above a certain level have to be reinvested into renewables instead.
You could imagine pairing something like infinite scroll with a set of policies that says, “Cool, scroll as much as you want. But at some point, all of the ad dollars and profit generated from scrolling actually go into funding local journalism.” It creates a kind of balancing feedback loop.
Do you ever feel like the work you’re doing is infinitesimal in the face of all these massive, all-encompassing problems?
Donella Meadows, a systems change thinker, identifies 12 places in a system where you can intervene to make change. It starts with the least powerful and goes up to the most powerful. At the very top is paradigmatic change, the base metaphors that govern everything about how you think: that you can own land, that money is good, all of these fundamental paradigms that set the parameters by which everything else runs.
Even a couple years ago, we would talk about the ways technology was replacing the social fabric and very few people would listen. There was a fundamental belief that connecting people was good, as a paradigm. That fundamental paradigmatic view is changing, that just connecting people isn’t enough, and that it can cause massive damage. So the ability to diagnose the system and change the base metaphors by which we see the world together actually has the opportunity to change almost everything.
How has your father’s presence as a very early developer at Apple affected how you think about all this?
For him it was always this question of, how do we understand the ergonomics of us as human beings?
When you sit in a chair, there are some ways that your body bends and folds that work for us, and there are other ways that our body bends and folds that give us backaches. Just like there’s an ergonomics of our body, there’s an ergonomics of our mind, called cognetics. There’s an ergonomics of how relationships work; there’s an ergonomics of societies. He was really interested in how computers could fit human beings in ways that don’t break us. He wanted technology to be an extension of the parts of humans that were already most brilliant. When you take the long view of technology, it’s a paintbrush, a cello, taking the parts of us that are most beautiful and extending them. It’s not about making us superhuman, it’s about making us extra human. And you see that reflected in the Center for Humane Technology. My father defined “humane” as responsive to human needs, considerate of human frailties.
Growing up in that environment also led you to a very early entry into the tech world. At what point did you look around and realize you didn’t like where things were going?
I was working on a company called Massive Health. We were looking at diabetes, which you can help reverse through behavior change. We were like, “Who out there has been really good at changing behavior? Consumer companies. We can design some technology to persuade people to eat better.” We were actually successful, and that was when my stomach started to drop. These tools are agnostic to how they’re used. It’s generalized persuasive technology, and it’s increasingly being used by state actors to control populations, to pollute our information environments.
Trump getting elected was a huge wake-up call, not because of Trump in particular, but because Twitter and Facebook and these information technologies are Trump factories. You look around the world at Turkey, Hungary, India, Brazil: these are countries that have significantly different pasts, significantly different histories, significantly different cultures, and the same things are happening to them all at the same time. What is the thing that connects us all? It’s our technology environment.
How did you process that personally, as someone who’d been helping build a lot of those tools?
It’s very hard at an individual level to see these effects and say, “Oh, actually these are causing massive amounts of damage, societally and individually. We have to do something about that.” But people have been saying that for far longer than my own little personal wake-up.
Infinite scroll was one of many things that I was working on and that’s the one that everyone knows me for. It’d really suck if I get to the end of my life and the thing on my tombstone was like, “He scrolled.” The assumption that making something easier to use is better for humanity was dismantled by that invention. The things that I put into the world, I can’t necessarily control how they’re going to be used. I should have spent more time thinking about the philosophy and the responsibility that comes along with the invention.
Your other big project is the Earth Species Project, which is attempting to understand animal communication. Where do you see the thread connecting the Earth Species Project to your mission at the Center for Humane Technology?
The Center for Humane Technology is looking at the interdependence between humans and human society. The Earth Species Project broadens the lens, looking at the interdependence of all species. It’s looking at the human system embedded inside of the planetary system and the realization that, in the fuller sense, we’re here on Spaceship Earth, and it has a life support system.
Tim Wu, who’s a scholar on First Amendment rights, points out that the First Amendment was created in an environment when speaking was expensive. It took a lot of work to get your word out, but listening was cheap, because there wasn’t that much information. Now, speaking is really cheap. You hit one button and your message can go to hundreds of millions of people. But listening is expensive. We live in an information overload. The thing that the First Amendment was created to protect, it no longer protects, because the environment is shifting. And we are really bad at listening. We just can’t hear each other. We’re disconnected from nature, and we’re disconnected from ourselves.
How did the Earth Species Project initially start?
It was motivated by a breakthrough in what’s known as unsupervised machine translation. Computers had been pretty good at machine learning and AI: you gave them a set of examples and they could start to learn how to make more of, or predict, those kinds of examples. But they couldn’t take a language that was unknown and translate it.
In 2017, some research came out of the University of the Basque Country that let the computer turn an entire language into a shape and match the shape of one language to the shape of another language. You could translate between any two languages without examples. We were like, “Okay, now is the time.” Wouldn’t it be amazing if you could, in the same way as with human languages, build the shape which represents, say, dolphin communication and see if it fits anywhere into this universal human meaning shape? If it does, can you start to build yourself a Rosetta Stone? If it doesn’t, isn’t that even more fascinating?
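[The idea Raskin describes can be sketched in a toy example. If two languages’ word-vector clouds really do share a “shape,” a single rotation can map one onto the other; the sketch below fabricates two such clouds and recovers the rotation with orthogonal Procrustes. It is an illustration only: the variable names are invented, the correspondence between points is known here by construction, whereas the real unsupervised systems must discover it without any examples.]

```python
import numpy as np

# Language A: a cloud of 50 "word" vectors in 8 dimensions.
rng = np.random.default_rng(0)
lang_a = rng.normal(size=(50, 8))

# Language B: the same underlying "shape," arbitrarily rotated.
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
lang_b = lang_a @ q

# Orthogonal Procrustes: the rotation W best mapping A onto B
# is U @ Vt, from the SVD of A^T B.
u, _, vt = np.linalg.svd(lang_a.T @ lang_b)
w = u @ vt

# The aligned shapes now match point for point.
print(np.allclose(lang_a @ w, lang_b))  # True
```

In this contrived case the recovered rotation is exact; with real language data the match is only approximate, which is precisely what makes the overlaps and the gaps interesting.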
If you can really hear someone else, understand someone else, you can take their perspective. Perspective changes can change almost anything. That’s motivated a lot of our work.
How did you get it off the ground?
We just started talking to anyone that would talk to us. We went to as many biologists and ethologists and animal researchers and machine learning researchers as possible. The more we talked, the more we realized one of the major things that was missing was a repository. A library of all the different animal communication data sets that were machine learning ready. Everyone was working in their own silos, and we saw an opportunity to create a kind of perspective-changing machine: to look at the difference between humpback communication and elephant communication and sperm whale communication and bat communication.
What kinds of obstacles have you run into?
One of the early problems we’re working to solve is the “cocktail party problem.” It turns out the most interesting communication happens when there are many animals speaking at once. It makes sense: the more human beings you put into a party, the more vocabulary they use and the more they speak. But biologists often have to throw all that data away, because there’s more than one animal speaking and they don’t know what to do with it. We’re learning how to do the equivalent of your noise-canceling headphones, but for field data.
We know there are going to be parts of animal language that fit into the universal human meaning shape, directly translatable experiences. They have grief, they have love, and they need to eat, and they have family structures, and they have dialect. But there are other parts that are completely foreign to us. What is it really like to live as an animal that can spend 70% of its lifetime in the complete dark? The parts that don’t overlap: Aren’t those going to be, in some sense, where the most wisdom lies?
This interview has been edited and condensed.