CHAPTER 1
Four Logics of Power
Biographer Walter Isaacson tells the story of Nobel Prize–winning biochemist Jennifer Doudna’s earliest encounter with DNA research. She came home from sixth grade to find a paperback her father had left on her bed: The Double Helix, James Watson’s first-person account of the discovery of the structure of DNA.1 At first she thought the book was a detective story, and in a sense, it was: “She became enthralled by the intense drama behind the competition to discover the building blocks of life,” wrote Isaacson.2
Doudna resolved to pursue similar research, even though her high school guidance counselor told her girls didn’t become scientists. In 2011, she and French microbiologist Emmanuelle Charpentier met at a conference and began collaborating on a method for high-precision genome editing. “They turned their curiosity into an invention that will transform the human race,” wrote Isaacson, “an easy-to-use tool that can edit DNA, known as CRISPR.” They drew on the immune system of a bacterium, which disables viruses by cutting up their DNA with a kind of genetic scissors. By extracting and simplifying the scissors’ molecular components, they made precise DNA editing practical and turned CRISPR into a topic of global discussion and public debate. In 2020, Doudna shared the Nobel Prize in Chemistry with Charpentier; they were the first all-woman team to win a science Nobel.3
“The CRISPR/Cas9 genetic scissors will probably lead to new scientific discoveries,” says a Nobel Prize website summary, “better crops, and new weapons in the fight against cancer and genetic diseases.”4 The technology is also so dangerous that Doudna—along with other leading scientists in the field, including Charpentier—has publicly advocated pausing research until there is acceptable oversight.5 Currently, 30 countries ban or severely restrict research on human germline gene modification, and the World Health Organization maintains a registry of projects.6
Doudna’s position is noteworthy for its nuance. In her seminal 2015 TED talk, for example, she discussed the many benefits that CRISPR could provide, but she also raised the prospect of “designer babies” and of a broader loss of control over the technology that could stem from choices like eliminating human genetic diversity. The talk has so far received more than 4 million views.
“The opportunity to do this kind of genome editing,” she said, “also raises various ethical issues that we have to consider. This technology can be employed not only in adult cells but also in the embryos of organisms, including our own species. And so, together with my colleagues, I’ve called for a global conversation about the technology that I coinvented, so that we can consider all of the ethical and societal implications of a technology like this.”7
Clearly, there are precedents for global discussion and decision-making about the acceptable limits of emerging technologies. Other examples include human cloning, biological warfare, and nuclear weapons—and, now, Triple-A systems. No agreement has completely halted a technology, but many dangers have been rethought or mitigated.
As The AI Dilemma is being edited, a wave of regulatory interest in responsible technology and Triple-A systems is rising. In October 2022, the United States White House released its Blueprint for an AI Bill of Rights.8 Its five principles map onto principles that we had already identified in our research. You will see them at the front of five of our chapters. Similar principles appear in discussions leading up to the European Union’s proposed Artificial Intelligence Act (AIA).9 A number of other frameworks for AI responsibility have been put forth, going back to 2018 or earlier.10
What these frameworks seem to have in common, at least implicitly, is that each takes into account four logics of power related to Triple-A systems—corporate, engineering, government, and social justice (see figure 1). Just as Jennifer Doudna wanted people from different backgrounds, not just scientists, to participate in the CRISPR conversation, these four logics of power each represent a different priority and way of thinking about the issues. As an individual, you may relate to one of these perspectives more than the others, but none of them is inherently right or wrong. Together, they give us a sense of the possibilities and tensions that arise in finding solutions that work for all of us.
FIGURE 1 The Four Logics of Power
Source: Kleiner Powell International (KPI).
The Engineering Logic: The Perspective of Technologists
A highly skilled and in-demand computer or systems engineer working on AI is analytical, fast, and “efficient.” A highly valued AI engineer can translate ideas into software or hardware. She communicates as an engineer on behalf of other similarly trained engineers, as well as on behalf of the algorithm, the Triple-A system, the organizational goals, and the client. In some cases, she also communicates on behalf of the user.
We spoke with multiple systems engineers who do not, within their organizational roles, think or communicate on behalf of end users. Engineers refer to the mind-set or culture of engineering as having three priorities. The first priority is to the customer, the company that buys or licenses the technology. Engineers report being “customer-obsessed.” The second priority is the technical challenge of an “interesting problem” that they and “only a handful of others in the world” can solve. Engineers value being part of a technical community of dedicated, highly skilled analytic specialists who understand one another. The third priority, depending on the engineer, may be the individuals (us) who will interact with or be affected by the product.11
That’s just “engineers being engineers,” according to Casey Cerretani, an AI systems engineer and executive who has done everything from inventing and customizing new servers to running teams of hundreds of developers at several prominent Big Tech companies. In his role, he is the connection between the customer, the company providing the tech, and all the engineers working on the project. In his own words: “The task is to do the thing that the customer is asking for.” Everything else might be considered “noise” because, in the face of a pressing, complex problem, it “doesn’t matter.” Everything else is not, technically, their job.12
Engineers like Cerretani see the larger context and implications of their work for things like privacy, but they are driven by the technical requirements of the customer. The user is not viewed as their problem—the end user is not the customer.
Instead, responsibility for end users is delegated to other areas of the firm: user interface design, marketing, PR, “corporate social responsibility,” customer service, “HR,” and legal departments. Some technologists feel personally invested in questions of AI responsibility, especially if they have themselves been affected by negative outcomes from AI. They see the problems more keenly than non-engineers do, and they may then apply the same analytic perspective to finding solutions. If they recognize that technology on its own won’t suffice, they may try to change or influence their organizations by speaking out. Then they discover the hard way how resistant corporate logic can be to whistleblowing or direct confrontation. One example is Tristan Harris of the Center for Humane Technology, a former Googler who has been outspoken about technology’s effects on people in talks, interviews, and his own popular podcast under the TED audio umbrella.13
The Social Justice Logic: The Perspective of Humanity
This logic upholds a people-first sensibility; it prioritizes the social contract. For this group, people count for more than efficiency, profit, security, or control. When those other priorities take precedence over people’s human rights, the social justice logic pushes back in the form of community organizing, walkouts, petitions, data leaks, whistleblowing, media attacks, and public discourse. From the social justice perspective, the only way to truly gain legitimacy for AI is to make it responsible to all stakeholders, especially those who have been marginalized in the past, and to give all stakeholders a voice.
“Right now, the burden is on us, the public, to prove that these algorithms harm us. I want that burden to be on the companies who profit from using them.”
—Cathy O’Neil
As community leaders, social justice advocates make it their business to be keenly aware of issues that need improvement. Cathy O’Neil, data scientist and author of Weapons of Math Destruction and The Shame Machine, put it this way: “Right now, the burden is on us, the public, to prove that these algorithms harm us. I want that burden to be on the companies who profit from using them.”14
Some of the systems engineers we interviewed are deeply motivated by this logic. Several people in Big Tech told us that conversations about this juxtaposition of social justice logic and the logic of corporate and engineering efficiency “never happen” within the firm. You might expect that because some systems engineers report to the CEO or CFO of their organizations, they could discuss any concerns directly with the C-suite. But sadly, there is a pervasive gap in communication when it comes to conflicting moral and corporate values. For example, when asked explicitly whether he ever thinks about how the technology he creates will be deployed, Cerretani distinguishes between his personal feelings about social justice and the logic of the firms he serves: “You can quickly imagine all the black hat ways that [Triple-A systems] could be used, which could be viewed as nefarious. That certainly challenges me. But there’s not much of an organizational conversation around that. And I think that’s the big missing gap. It is as much an ethical conversation as it is a technological one.”
“You can quickly imagine all the black hat ways that [Triple-A systems] could be used, which could be viewed as nefarious. But there’s not much of an organizational conversation around that. And I think that’s the big missing gap.”
—Casey Cerretani
Many social justice activists are connected to the AI community—either from having worked within it or from independent work. Their insider knowledge enriches the context they bring to conversations about social justice and adds to their effectiveness and impact. For example, Dan Gillmor, tech journalist and director of the News Co/Lab at Arizona State University, is also a board member of the Signals Network, a nonprofit that supports whistleblowers and connects them to journalism organizations.15
The Corporate Logic: The Logic of Ownership, Markets, and Growth
One reason for the gap in corporate conversation is what Casey Cerretani calls “the gung-ho race to get the technology in place” in most companies. “Microsoft Cloud Services is growing at 70 plus percent, year over year. Amazon is growing at a similar rate. Those are very large percentages on very large baseline numbers. When you grow that quickly and you’re growing to meet these customer needs, you don’t go back and do a lot of housekeeping.”
By “housekeeping,” Cerretani means any concern for the harmful impact of the technology on vulnerable populations. The conflict between engineering, social justice, and corporate logics leads many companies to intensify secrecy so that their leaders don’t have to confront or resolve the clash of values. These conflicts are coming to a head within many organizations today, but meaningful conversations about them are missing from corporate life because they would slow down the “gung-ho” rush to produce results.
“There are just three cloud service providers for the whole world. Maybe two of them will emerge as the winners in the end. That’s an enormous power.”
—Casey Cerretani
We have all seen corporate leaders make decisions to enhance shareholder value; it is their job. As a result, the corporate logic is, above all, a logic of power: it prioritizes money, profit growth, expansion, new business, and dominance over competitors. “There are just three cloud service providers for the whole world,” Cerretani reminds us. “Maybe two of them will emerge as the winners in the end. That’s an enormous power.”
And if you own shares in one of those companies, lucky you.
Corporate logic is inherently narrow. Corporate leaders often think of themselves as broad-minded, but as Cerretani says, “You have a corporate mission. You have a corporate direction. You have customers. And it becomes an interesting slippery slope.”16 Warnings that don’t fit the perceived immediate customer needs get lost as they travel up the official channels. In many technical teams, for example, graphics specialists create the data visualizations, and thus the PowerPoint messages, that reach the C-suite. They may describe only the aspects they think sponsors want to hear about.
When everyone makes decisions based on what they think the top leaders and customers expect, the outcomes are risky. With Triple-A systems, the risk is greatest for vulnerable populations. It may also extend to engineers and other employees, and it might ultimately bring down some corporations themselves. Those who want to restrain the risk tend to turn to another logic of power: the logic of government.
The Government Logic: The Perspective of Authority and Security
In the government logic, no matter which country or system, two things are paramount: governments protect the nation or jurisdiction from outside forces, and they provide support and public services for their citizens. From this standpoint, Triple-A technology is something for public sector organizations to use, invest in, regulate—and possibly to develop themselves.
Politicians are concerned about AI because they are vulnerable to automated systems that manipulate public opinion. The government logic thus sees regulation as inevitable. That is, there need to be standards governing the use of Triple-A technology, even if politicians and regulators hold a wide range of views about what those standards should be.
The government logic is further complicated by the fact that politicians can use AI systems to attack their rivals. The same digital tools that enable human trafficking are also used to uncover and arrest traffickers and to find missing people. AI also gives the government itself more capability in everything it does, including the regulation of citizens. At the same time, to paraphrase free software activist John Gilmore,17 automated systems interpret regulation as damage and route around it.
For Cerretani, regulating companies is squarely the government’s responsibility. Many would agree. The burden is on governments everywhere to resolve the paradox of the AI dilemma. Government leaders may increasingly be measured by their ability to use this powerful technology judiciously. If they overreach, it may be obvious to outsiders in ways they did not anticipate. They may have to demonstrate that they are fair and accountable to all citizens. They may also have to encourage innovation even as they require innovators to limit what they do.
All Together Now
When we each learn to appreciate and understand the other logics, it builds trust overall. That, in turn, makes the whole Triple-A ecosystem more trustworthy.
None of the four logics are in control. There are no right or wrong answers. If we want trustworthy AI systems, we need to bring all four perspectives together, keep them in mind simultaneously, and make the effort to understand why others feel and think the way they do. The point is to use all four logics together to better evaluate our systems in each use case and context. Then we’re much more likely to create systems that work for more people.
In the next chapter, we introduce the first of our seven principles: be intentional about risk to humans.