Review of Mark Coeckelbergh, “The Political Philosophy of Artificial Intelligence”

In his recent book on what AI could mean for a culture permeated with the spirit of self-improvement (an $11 billion industry in the US alone), Mark Coeckelbergh points out the kind of ghostly double that now accompanies each of us: the quantified, invisible self, an ever-growing digital version of ourselves consisting of all the traces left whenever we read, write, watch, or buy anything online, or carry a device, like a phone, that can be tracked.

These are our data. Then again, they aren’t: we don’t own or control them, and we have little say in where they go. Companies buy, sell, and mine them to identify patterns in our choices, and connections between our data and other people’s. Algorithms target us with recommendations; whether or not we click on the links or watch the videos they expected would catch our eye, more data are generated, thickening the cumulative quantitative profile.

The potential to market self-improvement products calibrated to one’s own insecurities is obvious. (Just think how much home fitness equipment now gathering dust was sold through the blunt instrument of broadcast advertising.) Coeckelbergh, a professor of philosophy of media and technology at the University of Vienna, worries that the effect of AI-driven self-improvement may only be to reinforce already strong tendencies toward egocentrism. The individual self, driven by its machine-reinforced anxieties, will atrophy into “a thing, an idea, an essence that is isolated from others and the rest of the world and no longer changes,” he writes in Self-Improvement. The healthier conceptions of the self found in philosophical and cultural traditions hold that the self “can exist and improve only in relation to others and the wider environment.” The alternative to digging ever deeper into digitally augmented grooves would be “a better and harmonious integration into society as a whole through the fulfillment of social duties and the development of virtues such as empathy and trustworthiness.”

A tall order, that. It means not just arguing about values but public decision making about priorities and policies – decision making that is, in short, political, as Coeckelbergh takes up in his other new book, The Political Philosophy of Artificial Intelligence (Polity). Some of the basic questions are as familiar as recent news headlines. “Should social media be further regulated, or regulate itself, in order to create better quality of public debate and political participation” – using AI’s capacity to detect and delete misleading or hateful messages, or at least to reduce their visibility? Any discussion of this issue must revisit long-established arguments over whether freedom of expression is an absolute right or one subject to limits that must then be specified. (Should a death threat be protected as free speech? If not, what about an incitement to genocide?) New and emerging technologies force a return to any number of classic questions in the history of political thought “from Plato to NATO,” as the saying goes.

In this respect, The Political Philosophy of Artificial Intelligence doubles as an introduction to traditional debates, in a contemporary key. But Coeckelbergh also pursues what he calls a “non-instrumental understanding of technology,” for which technology is “not just a means to an end, but also shapes those ends.” Tools capable of identifying and stopping the spread of falsehoods could also be used to nudge attention toward accurate information – supported, perhaps, by AI systems able to assess whether a given source uses sound statistics and interprets them in a reasonable way. Such a development would likely end some political careers before they began, but more troubling, the author says, is that such technology “may be used to promote a rationalist or technocratic understanding of politics, which ignores the inherently agonistic [that is, conflictual] nature of politics and risks excluding other viewpoints.”

Whether or not lying is ingrained in political life, there is something to be said for the benefit of having it publicly exposed in the course of debate. By steering debate, AI risks “making the realization of the ideal of democracy as deliberation more difficult… threatening public accountability, and increasing the concentration of power.” That is a grim enough prospect. The absolute worst-case scenarios involve AI becoming a new form of life, the next step in evolution, growing so powerful that managing human affairs would be the least of its concerns.

Coeckelbergh gives an occasional nod to this sort of transhumanist extrapolation, but his real concern is to show that a few thousand years of political thought will not automatically be rendered obsolete by the feats of digital engineering.

“AI policy,” he writes, “comes down to what you and I do with technology at home, in the workplace, with friends, and so on, which in turn shapes that policy.” Or it can, at least, provided we direct a reasonable share of our attention to the question of what we have made of the technology, and vice versa.