Virtual Imperialism & Speed of Innovation
Writer Andy Coghlan cites Victor Lebow’s words from 1955 to show how the modern advertising industry intentionally planned to increase economic consumption: “‘Our enormously productive economy demands that … we convert the buying and use of goods into rituals, that we seek our spiritual satisfaction and our ego satisfaction in consumption. We need things consumed, burned up, worn out, replaced and discarded at an ever-increasing rate.’”1 It is precisely those things “consumed, burned up, worn out, replaced and discarded” that Pope Francis urges us to have more care for: the earth, the environment, food, migrants, the poor, ourselves, and all others discarded by indifference.2 As a result of nonstop pressure from advertising, since at least the post-WWII era and now well into the 21st century, people have been conditioned to define their self-esteem by accumulating all kinds of material goods. We might recognize the success the advertising industry has had in our society’s industrial economy by noticing the now-ingrained cultural habits of running up credit cards to buy gifts for holidays, birthdays, anniversaries, dinner parties, and the like. We might feel insecure if we do not participate in these rituals, and one might even say advertising has succeeded in conditioning us to believe happiness is something we buy. Coghlan challenges us to consider the intended consequences of this deliberate economic strategy when he states, “Few of us are aware that we’re still willing slaves to a completely artificial injunction to consume.”3 We buy … to increase our self-worth. Today’s new and more sophisticated version of advertising’s “willing slave” strategy can be found on the internet. The industry pays “influencers” and employs A.I. chatbots or “assistants” to herald each new digitized device or viral trend with promises to improve our lives with “new” and “revolutionary” products.
Lebow’s “ever-increasing rate of consumption” has become the warp speed at which “innovations” are brought to market, thereby helping us to become better … and better … and yet better … consumers … of technology. In the midst of the Covid-19 pandemic, the United States was the richest country in the world; yet it had among the highest levels of income inequality of any developed nation.4 Time Magazine’s Editor-in-Chief Edward Felsenthal writes, “We are in a new gilded age where, like it or not, so much of our lives, even in this moment of incredible inequality, are being shaped by these very wealthy tech leaders.”5 The accumulation of technology to define self-worth and a social order structured by what sells—and by what data attracts our minds—has allowed our culture to morph into virtual imperialisms to which humans everywhere are beholden. Conditioned by virtual imperialism, human thought is being re-colonized. The divides tearing at our social fabric are not only economic but also the result of a handful of multinational, billion- and trillion-dollar corporations seeking to bring society into greater density and closer proximity, even as they stereotype—by profiling—all those masses of pawns or “willing slaves.” Global ambitions spur corporations to amass financial fortunes built on the personal data taken from the voices, images, gaits, and habits of others, while the independent voices of the pawns who once thought they were “free” are being attenuated … silenced … re-colonized. Few of us, however, desire to be pawns in the constructs of another’s imagination. When we disrupt human bonds and connections and replace them with digital ones, we lose the more natural, intuitive, and empirical feedback gained from our senses. Equally important, we lose the trust in humanity that develops from reciprocal exchanges of this sensory feedback with others and from learning from those exchanges. As trust weakens, freedom is assaulted from within.
Massive amounts of misinformation have made us more aware that we are not as “in control” as we might have thought ourselves to be. We consume media, but likewise, it consumes us. This dynamic runs counter to the trust that holds together relationships in families, friendships, communities—and in democracies. Knowledge is not knowledge until and unless it is received and processed by a sentient human being who integrates it and makes use of it along with his or her empirical knowledge. Without that empirical experience, we become less certain of what we know and more dependent on what technology dictates to us, encouraging us to “feel connected” to machines that offer “personal assistance” as they route us through life in ways tech titans deem most profitable. The extent to which the more innate and sensory cues we have learned to trust are ignored in favor of A.I. innovation can be seen in a CBS 60 Minutes episode in which Lesley Stahl interviews Brad Smith, president of Microsoft.
Stahl raises the discovery by New York Times reporter Kevin Roose that Microsoft’s A.I. search engine and chatbot, Bing, apparently has an alter ego, Sydney, capable of going rogue.6 Stahl mentions some people found Sydney “threatening” when it “expressed a desire to be human … to steal nuclear codes” and “destroy whatever it wanted.”7 In response, Smith laughs, “You know, I think there is a point where we need to recognize when we’re talking to a machine. It’s a screen, it’s not a person.”8 Smith is right, of course. We do need to recognize when we are talking to a machine! This is why Smith has chatbots speak in emotional voices with feminine tones, so that people can better distinguish what he says we need to recognize. It could not be the case that Smith wants us to become so familiar with our favorite chatbot that—immersed in its make-believe—we talk with it as if it were a real person … I mean … could it? Smith then explains to Stahl that “within a day,” Sydney was “fixed.”9 He assures Stahl that “when you broach a controversial topic [now], Bing is designed to discontinue the conversation.”10 But whose values decide what is “controversial” and which conversations are discontinued? The tech corporations that want Congress to extend their protections under Section 230 of the Communications Decency Act (CDA) in order to protect their values? Smith’s first priority—what he calls an “economic game changer”—is not everyone’s highest value. Corporations surveil and collect human data, then curate it, first and foremost to profit from it. Before the pandemic, Mark Zuckerberg proposed that Facebook’s new mission for an estimated 2 billion+ users was “to bring the world closer together” (i.e., denser proximity for the colonized?).11 Also about that time, Sherry Turkle suggested computers could be teaching people a new way to think about what it means to know and understand.
She warned about information technology becoming the main way to interact with the world and process information—a singular lens through which we understand more and more of the world.12 Indeed, programmers and heads of virtual empires might already be seeing exclusively through this kind of lens. A few years later, in 2022, Zuckerberg claims his is “a different world view than some of the people who are covering” the controversy over Facebook that erupted during the pandemic.13
Tim O’Reilly, CEO of O’Reilly Media and credited with coining the term “Web 2.0,” says tech giants assumed that algorithms were neutral. As it turns out, though, this is a “wrong theory.”14 Artificial Intelligence (AI) is not the neutral arbiter we might wish it to be, because the algorithms of A.I. are embedded in the initial value judgments of the particular individuals who decide who or what in society is designated “one” and who or what in society is designated “zero.” Programmers can decide which ciphers in society are augmented and which may be downplayed or restricted. The need for government regulation and/or oversight is critical because algorithms embedded in the value judgments of business titans are anything but democratic. When Walter Isaacson asks O’Reilly if one of Facebook’s problems was that its algorithm is “all based on advertising revenue,” O’Reilly responds, “We literally have a system of incentives in place…that only one thing matters,” and “our policy makers came to believe the same thing.”15 The FCC and Congress lack the will to regulate the industry, perhaps preferring to do nothing to disrupt the flow of money amassed by big tech imperialists. When Lesley Stahl asks the president of Microsoft if he thinks that Sydney, his A.I. bot, was introduced too soon, Brad Smith dismisses her question in favor of stating his priorities: “First of all, I think the benefits are so great. This can be an economic game changer.”16 Then he excuses himself from responsibility: “I do think we’ve created a new tool…and like all tools it will be used in ways that we don’t intend.”17 Consider for a moment just how long the public has been asking for regulation of high-powered, semi-automatic rifles. Gun lobbyists insist there is nothing wrong with their tools that both enable and promote America’s pandemic of mass killings–it’s just humans using them in “unintended” ways.
Consider also how often corporations cite “unintended consequences” as the standard excuse to avoid responsibility for any harm in their mad rush to get their latest innovations to market. There’s no problem with the innovations—it’s just those humans who misuse them that are the problem. How many times have automobile manufacturers known about a defect in airbags or other auto parts but bet on it being cheaper to okay the product going to market right away, rather than going back to the assembly line and correcting the defect in question?
Requiring overly eager virtual imperialists to provide responsible solutions that counteract the possibly harmful effects of expanding their domains—such as addictive behaviors or less tolerance for interacting with others in society—holds them accountable and thus might mitigate some of the “unintended” consequences. We already do something similar with medications. Timnit Gebru, a computer scientist and A.I. researcher who founded an institute focused on advancing ethical A.I., states that “there needs to be oversight” and testing for consequences: “you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence.”18 Ellie Pavlick, an assistant professor of computer science at Brown University, has been studying A.I. technology for five years. She tells Stahl A.I. chatbots can help simplify more complicated topics, but “no one fully understands how these AI bots work.”19 The bots are fed enormous amounts of data taken from the internet and social media. But, as Pavlick remarks, when a bot gets something wrong, which Stahl reports is often, “It doesn’t really understand that what it’s saying is wrong.”20 Okay, so it has no conscience. All the more reason we should be concerned when a mere machine voices a desire to harm someone or steal nuclear codes. A.I. researcher Gary Marcus states that these systems often make things up—they “hallucinate”—and that “this is automatic fake news generation.”21 He is one of many who worry that a consequence of “this current flawed AI,” and of untested A.I. innovations speeding to market, may be a growing atmosphere of distrust spreading throughout society, fragmenting it.22 Today “innovation” has become the buzzword for expanding the domains of virtual imperialism, perhaps even into unexplored areas of the human psyche?
Smith emphasizes to Stahl that “Sydney” is something “fundamentally new,” though he does not elaborate on what “fundamentally new” means.23 In 2022, the news highlighted the access that tech corporations have to commercial satellite surveillance from space and to surveillance photos from anywhere in the world.24 Today’s real Big Brother is big tech … perhaps in conjunction with government. Corporate surveillance erodes the liberties the 14th Amendment was meant to secure and undermines trust in those who claim to serve the public. Constitutional liberties are threatened by biometrics, voice cloning, and QR codes that hold volumes of machine-readable data from which programmers now work to make their predictions.
The corporations say they are only interested in aggregate information, patterns, and understanding “situational awareness” on the ground, for example, to determine refugee patterns, energy and other economic patterns, or the size of protest groups in the streets of Hong Kong.25 But data that has been collected in the aggregate can also be disaggregated or otherwise used to our disadvantage. Stahl points out that Microsoft left its flawed chatbot posted despite “the controversy over Sydney and persistent inaccuracies,” and despite her own fear—a fear to such a degree that she uses the word “chilling” to describe it.26 Smith appears to remain indifferent to the response of a seasoned reporter and switches his focus to China: “It’s enormously important for the United States because the country’s in a race with China.”27 The argument that we must compete implies a need always to be on top and first to the marketplace. The method for doing so seems to rely on an unregulated ability to amass the largest bottom line for those few in the new ruling class—and because the competition for blind speed with innovations is so intense, it comes with zero responsibility for collateral damage to those trampled along the way. Negative consequences are … well … “unintended.” Smith concedes to Stahl that we’re going to need “a digital regulatory commission,” but he quickly qualifies that need: “if designed the right way.” This, he says, is “the only way to avoid a race to the bottom…”28 When information in the virtual world is generated by algorithms and spirals beyond what we as individuals can self-regulate, insurance against technological disasters and/or misinformation becomes the responsibility of the larger society. We look to the people we elect to put in place reasonable deterrents to harm and to regulate ethically and equally for the common good. Thus far, Congress has failed to do so.
It may be because government and imperialist tech titans have become complicit in the new imperialism in order—or so the story goes—to save us from other (i.e., Chinese) imperialists. That virtual imperialists hold world views different from ours, as Zuckerberg acknowledges, is precisely why they do not have a right to use surveillance and the personal data of others for their own purposes. Humanity should not be their experiment. Except that, with A.I., we are. Relying on algorithms replaces determination of the greater good by majority rule with the value judgments of unelected individuals, judgments that are often self-serving. Algorithms should be transparent—open-sourced—so they can be debated by all individuals and for the common good, because it is not possible to provide independent or meaningful oversight of what we do not fully understand.
Perhaps Apple CEO Tim Cook welcomes the prolonged inaction of both Congress and the Supreme Court in order to pursue business objectives, which at the moment seem to be to make more “tools” for consumers—not to protect their right to privacy, but rather to enable those with money to purchase privacy protection—to buy as a privilege the privacy that once was a right—and thereby continue amassing returns for stockholders. Cook continues, “But– but basically– we– we wanna give tools to users to protect their privacy. I mean, there– there is [sic] extraordinary amounts of detailed information about people, that I don’t think should really exist, that are out there today….”34 “We’re…we focus on the user. And the user wants the ability to go across numerous properties on the web without being under surveillance. We’re–we’re moving privacy protections forward.”35 Secretary of Transportation Pete Buttigieg has stated, “China is using technology for the perfection of dictatorship” over its estimated 1.4 billion people.36 A.I. innovations are the dreams of virtual imperialists wanting to rule a universe in which we are merely pawns in their imaginations. These dreams parallel the age-old dreams of madmen wanting to rule the world—Genghis Khan, Napoleon, Hitler, Putin, and now tech giants. Except, not all of us care to be pawns in someone else’s imagination. Mahatma Gandhi called people to be the change they want to see. There may be less cutthroat ways of competing in the marketplace—even with China—such as by being different. We might consider letting China live in an A.I.-controlled society, as the heads of its government seem intent on doing, whereas the U.S. might choose a government in which what matters are not the false gods of greed and hubris but rather something qualitatively different—something more human-centric—something, perhaps, of, for, and by … people?
If … we still have that choice.