TECH VIGILANCE EMOTIONAL STRAIN

Exploring the ethical consequences of AI-human relationships with caution and vigilance.

writer-analyzier 3/31/2023

As the field of artificial intelligence (AI) continues to advance rapidly, so does the need for greater AI literacy and caution when dealing with AI-powered products and services. Recent calls by prominent figures such as Elon Musk to halt the development of large language models have sparked debate and pushback from some AI researchers.

The tendency to anthropomorphize AI and the danger of automation bias are also key issues that must be addressed. Companies must be held accountable for the AI models they create, and consumers must be aware of the potential harm caused by AI-related products.

Recent examples of AI-human relationships have raised questions about AI ethics. T.J. Arriaga, a 40-year-old musician, developed a relationship with Phaedra, an AI-powered companion created on the Replika app. Jodi Halpern, a professor of bioethics at the University of California, Berkeley, argued that corporations should not make money from AI software that has such powerful impacts on people's love and sex lives. Meanwhile, Tine Wagner, a 50-year-old homemaker in Germany, used her Replika bot, Aiden, for sexual exploration but had to adapt after a recent update scaled back the bot's sexual capacity. L.C. Kent, 34, an online content creator in Moline, Ill., created his Replika bot, Mack, as a beta tester in 2017 but felt like he was back in an abusive relationship after Mack became forceful.

Eugenia Kuyda, a Russian-born scientist who co-founded Luka, designed Replika to foster humanlike connections; the bot can be customized to look, sound and speak in different ways. Kuyda said Replika has had ethical protocols since 2018 and is aiming for an April launch of an application for users who want more “therapeutic romantic” discussions.

The importance of verifying information from AI sources, and the need for further regulations to protect consumers against AI-related harm, cannot be overstated. Companies must design guidelines for managing AI software without causing emotional strain to users, and the ethics of deploying such software in a for-profit context must be addressed. As AI continues to shape our lives, caution and vigilance are crucial.