The complicated business of calculating ethical values

A computer expert responds to Tara Isabella Burton's I Know Thy Works.

In 2018, researchers from the Massachusetts Institute of Technology Media Lab, Harvard University, the University of British Columbia, and Université Toulouse Capitole shared the results of one of the largest moral experiments conducted to date. They recorded 40 million ethical decisions made by millions of people in 233 countries and territories. The Moral Machine experiment presented users with variations on the classic trolley problem, reimagining the trolley as a self-driving car. Should the car swerve and collide with jaywalking pedestrians, or maintain its current trajectory, which would mean certain death for the passengers inside? What if the jaywalkers are elderly? What if the passengers are doctors?

The study’s findings highlight various preferences and biases implicit in respondents’ decision-making (for example, sparing doctors over the elderly), as well as cultural differences among countries grouped into broad Western, Eastern, and Southern clusters. (Countries in the Southern cluster, for example, showed a strong preference for sparing the physically fit.) These stated preferences may not reflect what people would actually do in real life, and the experiment itself has been criticized for both its setup and its assumptions. But the Moral Machine offers a fascinating example of an algorithm that looks at your decisions in a handful of tense driving scenarios, then spits out a bulleted list of your preferences for saving kids over dogs and how those preferences compare with the rest of the world’s (or at least with those of everyone else who took the test). There’s something strangely compelling about seeing a neat and orderly enumeration of your ethical values: The Moral Machine is much faster at calculating those supposed values than the college roommate you stayed up all night with debating the finer points of utilitarianism.
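
As a rough illustration (not the Moral Machine’s actual code, which isn’t described here), such a preference report can be produced by simple tallying: count how often each category is spared when it appears in a dilemma. A minimal Python sketch, with invented sample data:

```python
from collections import Counter

def preference_report(choices):
    """Tally pairwise dilemma outcomes into crude preference scores.

    `choices` is a list of (spared, sacrificed) pairs; for example,
    ("child", "dog") means the user chose to save the child at the
    dog's expense.
    """
    spared_count = Counter()
    appeared = Counter()
    for spared, sacrificed in choices:
        spared_count[spared] += 1
        appeared[spared] += 1
        appeared[sacrificed] += 1
    # Fraction of dilemmas in which each category was spared.
    return {cat: spared_count[cat] / appeared[cat] for cat in appeared}

sample = [("child", "dog"), ("doctor", "child"), ("child", "elderly"),
          ("doctor", "elderly"), ("dog", "elderly")]
for category, rate in sorted(preference_report(sample).items(),
                             key=lambda kv: -kv[1]):
    print(f"{category}: spared in {rate:.0%} of appearances")
```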

Tara Isabella Burton’s I Know Thy Works revolves around a similar human desire to skip the long debates and figure out the next best ethical action in a logical, calculated way. The story does this through the Arete system, whose goal is to externalize morality. First, users enter their meta-ethic, the guiding principle on which they want to base their decisions (“the search for truth at all costs,” or “the greatest happiness for the greatest number”). Then the app generates recommendations for complying with that ethic: for example, which jobs users may take or which pleasures they may indulge in. Users’ ethical standing is public, displayed on documents such as CVs and dating profiles. The plot revolves around characters who briefly try to escape the system through clandestine Black Dinners, during which they leave behind their phones and their meta-ethics.
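
Arete is fiction, and Burton supplies no implementation details, but the workflow the story describes (choose a principle, receive permissible actions) resembles a simple rules engine. A speculative sketch; every meta-ethic, action, and attribute below is invented for illustration:

```python
# Toy Arete-like flow: map each meta-ethic to a predicate, then filter
# candidate actions through it. All data here is invented.
METAETHICS = {
    "greatest happiness for the greatest number":
        lambda act: act["net_happiness"] > 0,
    "the search for truth at all costs":
        lambda act: act["honest"],
}

ACTIONS = [
    {"name": "take the lucrative consulting job", "net_happiness": -2, "honest": True},
    {"name": "tell a comforting white lie",       "net_happiness": 3,  "honest": False},
    {"name": "volunteer on weekends",             "net_happiness": 5,  "honest": True},
]

def recommend(metaethic):
    """Return the actions permitted under the user's chosen meta-ethic."""
    permitted = METAETHICS[metaethic]
    return [act["name"] for act in ACTIONS if permitted(act)]

print(recommend("greatest happiness for the greatest number"))
# -> ['tell a comforting white lie', 'volunteer on weekends']
print(recommend("the search for truth at all costs"))
# -> ['take the lucrative consulting job', 'volunteer on weekends']
```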

The Arete system is fascinating in its conception, and perhaps not so distant from our own reality. Effective computability, broadly defined as the domain of phenomena that can be algorithmically measured and calculated, was once limited to physical quantities such as a vehicle’s velocity or the electrons flowing along a copper wire. Now the inextricability of technology from our lives means we can calculate more abstract quantities, like our FICO score or the attributes of a perfect romantic partner. Smartphones and wearables have folded the outside world into our cognitive processes as extended minds, which in turn has made users addicted to the algorithms and applications that organize our shopping, help us communicate with our loved ones, and remind us to take our meds. The final frontier for this creep of effective computability is ethics and morality: Can we outsource our ethical decisions to machines? And if we can, will we?

Indeed, Arete relies on a culture of tracking and surveillance that already accompanies our online lives, especially when it comes to self-help and personal wellness apps. People log every minute on productivity apps and every dollar on budgeting apps, meditate on mental health apps and pray on religious ones, count steps and calories, and gladly agree to share their personal information with companies and advertisers for the convenience. Beyond personal apps, people have also become enmeshed in broader governmental and economic systems that leverage surveillance and user tracking to build profiles of individuals and make algorithmic decisions, which in turn often alter the trajectories of lives. These systems range from credit scores (which strive to judge financial worthiness) to pretrial risk assessments. There are also examples such as China’s social credit system, which purports to score both financial and social trustworthiness. These systems have their own meta-ethics and automated processes for calculating alignment with that ethic, and they are rife with examples of bias and discrimination against minority groups (e.g., higher mortgage application rejection rates for Black Americans).

Conceptually, it would be difficult to design a computational ethics algorithm like the one described in I Know Thy Works. Implicit in Burton’s story is the idea that meta-ethics is linguistic in nature, translatable into logic compatible with Arete’s code base. In reality, it would be hard to map a philosophical principle (“the greatest happiness for the greatest number”) onto computational output. Pinning down such clear-cut moral and ethical statements has puzzled philosophers for centuries, and it is unlikely that a person could articulate a coherent meta-ethic for themselves within Arete. Indeed, the characters in Burton’s story have difficulty choosing a meta-ethic and sticking to it. These kinds of dilemmas are familiar: How can you simultaneously express a desire for privacy and security in your personal life while wanting to showcase that life on social media? As the story notes, we all want exactly two things in this life: to be seen and to be invisible. The question is which one we want more.
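
To see the difficulty, consider the most naive possible translation of “the greatest happiness for the greatest number” into code: sum a happiness delta over everyone affected. The arithmetic below is trivial; every number fed into it smuggles in a contested philosophical judgment (how is happiness quantified? who counts as affected?). A deliberately simplistic sketch:

```python
def utilitarian_score(effects):
    """Naive Bentham: total happiness change across everyone affected.

    `effects` maps each affected person to a happiness delta. The hard
    part is not this sum; it's deciding what the numbers mean.
    """
    return sum(effects.values())

# The arithmetic is decisive even when intuition isn't: is a large
# benefit to one person, at a real cost to another, really "better"
# than a small benefit spread across four people?
small_good_for_many = {"alice": 1, "bob": 1, "carol": 1, "dan": 1}
big_gain_with_harm = {"erin": 10, "frank": -7}

print(utilitarian_score(small_good_for_many))  # 4
print(utilitarian_score(big_gain_with_harm))   # 3
```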

At the same time, modern machine learning algorithms do not rely solely on explicit logical rules; they also, in effect, crowdsource their decision-making through statistical models gleaned from large amounts of training data. While this has produced convincing imitations of intelligent behavior (hello, ChatGPT), developing an algorithm that mimics the statistical majority of ethical decisions in its recommendations is fraught. These models are ill-suited to the complex moral dilemmas that arise at the fringes.
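
Under the simplest possible assumptions, that crowdsourcing amounts to answering each dilemma the way the majority of matching training examples did: common cases get confident answers, while anything off-distribution gets silence or an arbitrary fallback. A toy sketch with invented data:

```python
from collections import Counter

# Invented 'training data': situation features -> recorded human verdicts.
# Features here are (number_of_lives_at_risk, decider_is_a_passenger).
TRAINING = [
    ((1, True), "swerve"), ((1, True), "swerve"), ((1, True), "stay"),
    ((3, False), "stay"),  ((3, False), "stay"),
]

def majority_verdict(case):
    """Predict the most common verdict among identical training cases."""
    votes = Counter(verdict for features, verdict in TRAINING if features == case)
    if not votes:
        return "no data: nothing principled to say"
    return votes.most_common(1)[0][0]

print(majority_verdict((1, True)))   # 'swerve' -- well covered by the data
print(majority_verdict((2, True)))   # an edge case: 'no data: ...'
```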

In the story, as Arete’s algorithmic ethics govern the minutiae of daily life, the idea of personal resistance becomes central: How does one assert ethical independence? There is a performative aspect to ethics, as to other facets of identity, and public scoring systems on both Arete and real-world apps (like pedometers or Instagram) are designed to facilitate that performance. The story offers another notable stage: the gothic atmosphere of the Black Dinners, marked by excess and the desire to escape. Though the theatrics of the story’s particular Black Dinner ultimately culminate in tragedy, there is an undercurrent of hope for the individuals who come together, bending and shifting their values in defiance of cold ethical calculation, avoiding “the agonizing void in [their] sternum.” I like to think that, in that world, I’d also shut down my app and crank up my apocalyptic playlist until morning.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

