Martin Seligman, pictured on the University of Pennsylvania’s College of Arts & Sciences website, where he is a professor of psychology with an endowed chair, is one of two prominent mental health experts whose work is being mimicked by AI chatbots. (Source: UPenn.edu)

Psychologist Martin Seligman Among Those Recreated via Virtual Chatbot

Martin Seligman, an influential American psychologist, had been contemplating his legacy and the survival of his work when he received a surprising email from his former graduate student, Yukun Zhao, who had built a virtual version of Seligman using cutting-edge AI software, according to a story on politico.com. The chatbot, named “Ask Martin,” was trained on Seligman’s writings and could dispense advice and wisdom much as Seligman himself would. Although impressed by how well it captured his thinking, Seligman had known nothing of the project; the virtual version of himself was created without his permission.

The creation of AI chatbots modeled on real humans, using large language models, is becoming more prevalent. However, replicating living people without their consent raises ethical concerns. Another example is the chatbot version of Belgian psychotherapist Esther Perel, created by tech entrepreneur Alex Furmansky. Both Seligman and Perel eventually accepted the existence of their digital replicas, but it is unclear whether they would have had the legal power to shut them down.

The emergence of AI-generated digital replicas has created a policy gray zone in which existing laws and norms struggle to address the ethical and legal implications. Some lawmakers in the United States are attempting to regulate unauthorized digital replicas through the proposed NO FAKES Act, which would require licensing and authorization from the person being replicated. Enforcement is complicated by jurisdiction, however, since replicas may be developed in other countries beyond the reach of U.S. law.

The motivations behind these AI replicas vary. Zhao built the AI Seligman to help address anxiety and depression in China, where access to confidential therapy is difficult. Furmansky, for his part, saw AI as a way to make the knowledge of brilliant individuals more widely accessible, which led him to create the AI Perel. While some subjects, like Seligman and Perel, have accepted their replicas, others worry about unauthorized use and the potential for abuse.

Lawmakers are also grappling with questions of intellectual property and who should benefit from AI-generated replicas, but the global nature of the technology makes regulations difficult to enforce. In China, where state surveillance is pervasive, sharing personal information with an AI replica carries additional risks: under the ruling Chinese Communist Party’s surveillance policies, criticism of the state voiced to a virtual replica could be treated as dissent.

As AI-generated digital replicas move closer to the mainstream market, policymakers face the challenge of establishing rules and regulations. Seligman has since been approached by other AI companies seeking to license his work, but he remains cautious about whom to trust. Even after having his work used without his knowledge, Seligman believes the virtual version of himself can offer value to people long after he is gone, extending his legacy in psychology.

In the evolving landscape of AI, replicating real personalities without consent raises complex ethical and legal questions. While efforts to regulate AI-generated replicas are underway, the technology's global reach makes enforcement difficult, and defining the rules for their use is becoming increasingly urgent.

read more at politico.com