Photo: A copy of “Race After Technology,” courtesy of Daniel Mootz

By Daniel Mootz

A report published in Science (vol. 366) on Oct. 25, 2019, revealed alarming racial disparities in a cost-based proxy algorithm “widely used” by healthcare providers. Dr. Ziad Obermeyer and his colleagues found that “the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias.” The study concludes that, significantly, “less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.” Ultimately, because modern “health systems rely on commercial prediction algorithms to identify and help patients with complex health needs,” private technology companies are effectively creating public health policies.
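To make the mechanism concrete, here is a minimal sketch of how ranking patients by predicted cost, rather than by actual need, can misorder them. The patients and numbers below are invented for illustration and are not drawn from the study’s data.

```python
# Minimal illustration of proxy bias: ranking patients for a care
# program by predicted COST instead of actual health NEED.
# All names and numbers are invented; they are not the study's data.

patients = [
    # (label, true_need_score, annual_cost_in_dollars)
    ("Patient A (White)", 7, 9000),   # same need as B, more spent
    ("Patient B (Black)", 7, 6000),   # same need as A, less spent
    ("Patient C (White)", 4, 7000),
    ("Patient D (Black)", 9, 8000),
]

# A cost-based proxy ranks by dollars spent, highest first.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)

# Ranking by actual need, the "ground truth" the proxy stands in for.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("Ranked by cost proxy:", [p[0] for p in by_cost])
print("Ranked by true need: ", [p[0] for p in by_need])
# Because less is spent on equally sick Black patients, the cost
# proxy drops Patient B below the healthier Patient C.
```

In this toy example, the proxy never sees race at all; it simply inherits the unequal spending baked into its training signal, which is exactly why a “convenient, seemingly effective” proxy can go so wrong.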

Ruha Benjamin, an author and associate professor of African American Studies at Princeton University, published a perspective piece in the same issue of Science, where she offers an in-depth look at the problem. “Whereas in a previous era,” she notes, “the intention to deepen racial inequities was more explicit, today ‘engineered inequity’ is perpetuated precisely because those who design and adopt such tools are not thinking carefully about systemic racism.”

Indeed, an influx of new, seemingly benign technologies used in job recruiting, housing loans, and policing joins together to perpetuate real discrimination. In Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, a 2019 collection of essays she edited, Benjamin explains how modern technology actively “capture[s] the imagination, offering technological fixes for a wide range of social problems.” The idea, however, that digital tools are being made or used to circumvent historically racist and discriminatory policies is profoundly misleading. Benjamin observes how “the design of different systems, whether we’re talking about legal systems or computer systems, can create and reinforce hierarchies precisely because the people who create them are not thinking about how social norms and structures shape their work.” This “indifference to social reality is, perhaps, more dangerous than outright bigotry.”

In May, Joy Buolamwini, founder of the Algorithmic Justice League at MIT, testified before Congress on the inherent biases of facial recognition technology. The quietly held assumption that the mostly cisgender white men who build these systems have created impartial devices capable of accurately identifying women, People of Color, and non-binary people is empirically false. The consequences of this sort of hegemony are indicative of what Benjamin has called the “New Jim Code,” an “insidious combination of coded bias and imagined objectivity” that passes itself off as innovation, one that “enables social containment while appearing fairer than discriminatory practices of a previous era.”
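One way to see the problem is to audit a system’s accuracy separately for each demographic group rather than in the aggregate. The following minimal sketch uses invented records, not data from any real benchmark or commercial product, to show how a single headline accuracy figure can hide stark subgroup gaps.

```python
from collections import defaultdict

# Minimal sketch of a disaggregated accuracy audit. Every record
# below is invented for illustration; none comes from a real
# benchmark or commercial product.

records = [
    # (demographic_group, was_the_face_correctly_identified)
    ("lighter-skinned men",  True), ("lighter-skinned men",  True),
    ("lighter-skinned men",  True), ("lighter-skinned men",  True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

# The headline number averages over everyone and looks passable...
overall = sum(ok for _, ok in records) / len(records)
print(f"overall accuracy: {overall:.0%}")  # 62%

# ...but auditing each group separately exposes the disparity.
hits, totals = defaultdict(int), defaultdict(int)
for group, ok in records:
    totals[group] += 1
    hits[group] += ok
for group, total in totals.items():
    print(f"{group}: {hits[group] / total:.0%}")
# lighter-skinned men: 100%; darker-skinned women: 25%
```

A vendor advertising only the overall figure can claim “imagined objectivity” while the system fails precisely the people least represented in its design and testing.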

Professor Benjamin’s work examines how the state excels at “capturing bodies,” placing them under surveillance and then justifying that surveillance with technicalities and legal rationales. The false promise of privacy on the one hand, and a “scopic regime,” or “coded gaze,” on the other, impede trust and progress, further marginalizing those who have already been denied a say in how vital new programs are applied.

On Nov. 3, JS Chen wrote a piece for Jacobin entitled “A New Era in Tech Nationalism,” concerning Microsoft’s recent contract with the United States Department of Defense to increase “the lethality” of the U.S. military. Corporate management of internet technology has, for years, been aligned with big business, the prison-industrial complex, and the U.S. war machine. The fantasy of the “free market” has driven companies such as the International Business Machines Corporation (IBM) and Facebook to partner with the government in actively targeting people across the country and around the globe. This form of tech-chauvinism has a coercive, and corrosive, effect on our individual and collective relationships to interconnection. Today, the enduring technology of capitalism has proponents on both the right and the left: “economic nationalism” and “economic patriotism” mark the hawkish poles of geopolitical thinking about automation.

The commodification of surveillance, from data collection to information sharing, benefits the U.S. government, allowing a top stratum of power brokers to continuously enrich themselves off systemic insecurities and the maladapted practices of “technocorrection.” In other words, the new spate of technology that law enforcement uses to monitor and exclude people is less a universal achievement than a tool of oppression. Boardroom values have been recast as machine neutrality, yet those same values have consistently proven themselves disingenuous and cruel, desensitizing and destructive.

The culture of the digital self has rapidly become a hotspot for mind-numbing polemics, clickbait markets, and direct content advertising. While certain advancements in new media, art, and design offer cutting-edge potential for creative growth, they often mask, or apply makeup to, the enforced reproduction of belief, expression, social status, and opportunity enmeshed in our common, everyday experience. Similarly, the hypertechnical “hard sciences” have grown ever more capable while lacking a political conscience. Uncritical adherence to tech norms and inventions contributes not only to social inertia but also to a considerable process of alienation and estrangement. The result is that an arbitrary hierarchy of symbols can become thoroughly ingrained in social patterns and political reforms. The technology of these symbols evokes psychological appeal (and even addiction) to online profiles, internet culture, consumer phone apps, and corporate communication platforms. Identity thus verges on the uncanny, like a digitized doppelganger or, more distressingly, like the mistaken phenotype of a wrongfully convicted prisoner.

In a way, our collaboration with machines has become a vivid choice between ideology and authenticity. Technology for its own sake is not the same thing as integrating radical instruments of progress into everyday existence. In fact, such “intuitive” computation may impose new layers of oppression on future programmers, users, and subjects alike. As Theodor Adorno and Max Horkheimer pointed out in their seminal 1944 text Dialectic of Enlightenment, “a technological rationale is the rationale of domination itself.”

Zygmunt Bauman, the prolific sociologist and author, declared that “Social Media are a Trap” in a Jan. 25, 2016, interview with Ricardo de Querol for El País. Bauman is known for his theory of “liquid modernity,” in which “all agreements are temporary, fleeting, and valid only until further notice.” In this way, the modern tech boom has distracted an entire generation from the very inequities it could be helping to remedy. In her groundbreaking book Race After Technology, Ruha Benjamin describes how “clearly people are exposed differently to the dangers of surveillance.” Social media is a trap for everyone, just more so for some than for others. And yet, as if behind a one-way mirror, we remain unaware of how these mock experiences of social engagement play right into the hands of government and tech power.

Our presence, however voluntary or involuntary, on social net-scapes entrenches a techno-managerial stronghold over human discourse and potential, undermining emergent formulas for a free and open society. With the right awareness and the right resources, working toward public technology grounded in postmodern ethics can bring real change to social systems.