System Error: Where Big Tech Went Wrong and How We Can Reboot

Rob Reich is a philosopher, Director of the Center for Ethics in Society, and Associate Director of the Institute for Human-Centered AI at Stanford University. Mehran Sahami is a professor and associate chair in the computer science department at Stanford, and a former senior research scientist at Google. Jeremy Weinstein is a professor of political science and a former senior official in the Obama administration.

Below, Rob, Mehran, and Jeremy share 5 key insights from their new book, System Error: Where Big Tech Went Wrong and How We Can Reboot. Listen to the audio version—read by Rob, Mehran, and Jeremy themselves—in the Next Big Idea App.

1. Optimization isn’t intrinsically good.

The hallmark of an engineer’s mindset is a commitment to optimization. Computer science courses often frame the goal of developing an algorithm as providing an optimal solution to a computationally specified problem. When you look at the world through this mindset, it’s not just computational inefficiencies that annoy you. Eventually, it becomes an orientation to life as well. As one of our colleagues at Stanford tells students, everything in life is an optimization problem.

But optimization is not all that it’s cracked up to be.

Have you heard of a meal replacement powder called Soylent? It was created by a Silicon Valley engineer who decided that food is an inconvenience—a pain point in a busy life. He developed Soylent to optimize meeting one’s daily nutritional needs with minimal cost and time investment. But for most people, food is not just a delivery mechanism for one’s nutritional requirements. It brings gustatory pleasure. It provides social connection. It sustains and transmits cultural identity. A world in which Soylent spells the end of food also spells the loss of these values.

“Technology is often cast as the neutral or objective means of eliminating the messiness of human judgement. This couldn’t be further from the truth.”

2. Technology is neither objective nor neutral.

Technology is often cast as the neutral or objective means of eliminating the messiness of human judgement. This couldn’t be further from the truth. Today’s technologies for decision-making incorporate machine learning (algorithms that find patterns in data) to create computer models that are used to make new decisions.

Consider the example of screening résumés to determine whom to interview, a task that Amazon tried to address with machine learning in 2014. You could gather lots of data in the form of résumés from applicants who had already been interviewed, noting which ones were hired. Feed that data into a machine learning algorithm to find patterns (for example, phrases that appear on the résumés of people who were hired) and you could produce a model that predicts whether a new applicant would be hired, then interview the likely candidates. But when Amazon built such a system, it amplified a preference for male candidates, downgrading résumés that included the term “women.” Amazon had not set out to create a sexist algorithm, but that is what was produced as a result of biases in the data used to train the system. Amazon tried to eliminate the bias, but it was unable to rid the system of all potential discrimination and ended up scrapping the project entirely.
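
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of the pattern described above. The tiny résumé dataset and hiring labels are invented for illustration; this is not Amazon’s system or data, only a toy model showing how a classifier trained on biased past decisions can learn to penalize the word “women” even though gender is never an explicit input.

```python
# Hypothetical sketch: a text classifier absorbs bias from historical hiring data.
# The "résumés" and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past résumés and whether the candidate was hired (1) or not (0).
# The imbalance is deliberate: phrases containing "women's" appear mostly
# on résumés that were rejected, mirroring biased past decisions.
resumes = [
    "software engineer, captain of men's rugby team",
    "software engineer, women's chess club captain",
    "backend developer, hackathon winner",
    "backend developer, women's coding society organizer",
    "systems programmer, men's debate team",
    "systems programmer, women's robotics league mentor",
]
hired = [1, 0, 1, 0, 1, 0]

# Turn each résumé into word counts and fit a simple classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which words push the prediction toward or away from "hire".
for word, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{word:12s} {weight:+.2f}")

# The word "women" ends up with a clearly negative weight: the model never
# sees gender as a feature, yet it learns the pattern present in the
# historical decisions it was trained on.
```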

The résumé screening story is just one example of many algorithmic tools that make consequential decisions about our lives. Who gets access to credit, what kinds of health care someone is approved for, and who is granted bail in the criminal justice system are all decisions now handled by algorithmic systems that have been built, deployed, and shown to produce results with gender or racial bias. At scale, such systems can reinforce and exacerbate inequalities at a societal level.

“We need to take a hard look at what we really value about human labor, and how we can prepare for a world in which humans flourish alongside smarter machines.”

3. AI is a mirror to our own values.

In the past decade, artificial intelligence has been touted as the next great revolution in technology. It will allegedly drive cars and trucks safely, diagnose disease better than human doctors, keep a watchful eye on our aging population, and even fight our wars. What a time to be alive! But what does that mean for taxi and truck drivers? What happens to the radiologists replaced by the breast cancer prediction system recently built by Google? What is the importance of our own labor in leading a fulfilling life?

Sure, there are some tasks we won’t miss—no one seems to lament how washing machines have made laundry a mostly automated task for millions of people. But as the growing capabilities of AI threaten to displace more workers, we need to take a hard look at what we really value about human labor, and how we can prepare for a world in which humans flourish alongside smarter machines. That means developing broader educational opportunities to reskill the workforce, and mitigating the uneven impacts that AI will have on different people and their livelihoods. Ultimately, it means getting clarity on the values we want promoted in the building and deployment of AI so that it benefits everyone.

On an even broader scale, we need to consider how AI opens the possibility to rethink the future of work. Is it realistic to think that 50 years from now, the 40-hour work-week will be antiquated as a result of everyone receiving universal basic income? Or will we live in a drastically more unequal world where AI drives a greater wedge between the haves and have-nots? The answer is entirely in our control—if we have the understanding and foresight to plan for it now.

“New technologies that achieve an enormous scale impose the values of those who design the technologies on the rest of us.”

4. Societal values are being chosen by a small group of unaccountable people.

Joshua Browder is a Stanford graduate who founded a company called DoNotPay. The company helps people get out of parking tickets at the click of a button because, in Browder’s words, parking tickets are a “tax on the most vulnerable.” Browder is already scaling this technology across cities and countries, but here’s the problem: Parking tickets exist for a reason. They deter people from parking by fire hydrants, blocking driveways, or occupying spaces intended for the disabled. Enforcement can reduce traffic. And tickets constitute a meaningful source of revenue that helps a city meet the needs of its residents. This story encapsulates our current dilemma: New technologies that achieve an enormous scale impose the values of those who design the technologies on the rest of us.

The choices about what technology to build, which problems to solve, and who is impacted are made by a small number of people whose decisions reflect their own desires and preferences, not necessarily those of society. We see this in Facebook’s decision to prioritize connections among people, even though the platform is used to spread misinformation and organize violence. Or in Google’s revenue model, which depends on aggregating your personal data at a great cost to privacy. Many of these choices are invisible to us, even though they directly impact our democracy and well-being. This should make all of us uncomfortable, as we have little recourse to challenge the decisions made by technologists, or to hold these companies accountable for the externalities they produce.

“It’s only through a combination of regulation and personal choice that we get the outcomes we’re looking for.”

5. Democracy must rise to the challenge of the information age.

It’s easy to feel that technological progress is rolling over us with the wheels of inevitability. What can we really do about these discoveries or innovations? How can we shape the products we buy, or influence their effects on society? The reality is that the effects of new technologies are neither preordained nor set in stone. As citizens in a democracy, we have an important role in this story because the rules that govern technology will no longer be written by hackers and companies alone. They will reflect the inevitable push and pull among the companies that make things, the governments that oversee them, the consumers who use them, and the people who are affected by them.

When competing values need to be refereed, we have our democracy: the best mechanism for surfacing diverse views, seeking agreement on a collective approach, and turning shared values into rules that influence and constrain powerful tech companies.

Regulation is a loaded term, but at bottom it simply names the rules that help us get what we want. Democracy not only adjudicates our differences of opinion, but also provides processes for bringing transparency to systems, allows for decisions to be challenged, and ultimately enables all of us to participate in setting the agenda for our technological future. Sure, there are decisions we can make as individuals in how we engage with technology, but it’s only through a combination of regulation and personal choice that we get the outcomes we’re looking for.

To listen to the audio version read by Rob Reich, Mehran Sahami, and Jeremy Weinstein, download the Next Big Idea App today.
