Thursday, June 29, 2023

ChatGPT Evolution to Personhood Raises Questions of Legal Rights

This piece originally appeared in Bloomberg Law. 


As ChatGPT and other forms of artificial intelligence take on more humanlike qualities, we need to ask two questions. When and how could ChatGPT be indicted for a crime? And at what point would pulling the plug on ChatGPT be murder?

When does an entity powered by AI acquire constitutional status as a person with rights? American law is rife with dichotomies; rights versus duties is the most common example. But so is the concept of personhood versus property. In theory, if you’re a person, you have rights. Conversely, property has no rights—it’s merely a thing owned by persons who may have some rights to it.

Yet legal dichotomies aren’t absolutes. Even Wesley Hohfeld’s famous 1918 Yale Law Journal article on rights and duties recognized that. The same is true with personhood and property.

Just because you’re a person doesn’t mean you have rights—or at least the full range of rights afforded to adults. Children are persons but have limited rights. Resident noncitizens in the US have only some rights. In Dred Scott v. Sandford, the US Supreme Court ruled that slaves were property, not persons.

In Minor v. Happersett, the Supreme Court ruled that women, while citizens, still didn’t have the right to vote. Conversely, in County of Santa Clara v. Southern Pacific Railroad Co., Citizens United v. Federal Election Commission, and Burwell v. Hobby Lobby, the court ruled that corporations were legally or constitutionally protected persons with rights. They can also be indicted for crimes.

And in Roe v. Wade, the court noted that although the Constitution refers to “person” 22 times, an unborn person was not entitled to constitutional personhood—it lacked rights.

Thus things can have rights. American legal history shows how the line between personhood and property is often thin and shifting.

When might an entity such as a robot or a computer powered by advanced AI acquire legal or constitutional personhood status? Christian theologians would say personhood is marked by possession of a soul; philosophers such as Immanuel Kant would argue that persons have the capacity for autonomy and choice. Perhaps genome researchers might say the DNA holds the clue, but for others, the idea of self-reflection or awareness is critical.

In 1950, computer scientist Alan Turing proposed a test to determine when a machine generated or produced intelligent behavior equivalent to a human’s. He argued that when such a machine produced conversational responses indistinguishable from normal human conversation, it was thinking. For many, thinking or cognition—or at least its potential—is what separates persons with rights from things.

Are we close to a point when AI or ChatGPT passes the Turing test? Are we close to when such an entity is self-reflective or aware of itself? Perhaps. If so, then the question of when it acquires legal personhood, and with that rights and duties, is close at hand.

The law now holds a manufacturer or owner responsible for a machine or product. The owner or maker has rights and duties. The machine doesn’t have any independent rights or duties. But at what point should it?

This isn’t a farfetched question. Increasingly, animals—normally and historically thought of as property—are acquiring rights, at least against abuse and neglect. Others are pressing to give them other legal protections, including standing and habeas corpus rights.

At what point might ChatGPT or an AI-powered machine be guilty of a crime? If it provides advice on how to commit fraud or plot a murder, is it possible to bring conspiracy, aiding and abetting, or obstruction of justice charges against it? If an AI-powered vehicle makes a choice to speed or navigate obstacles and then hits and kills someone, can it be indicted for murder or manslaughter?

Conversely, should ChatGPT be afforded Miranda rights and be told it has a right to remain silent? Consider the famous scene in “2001: A Space Odyssey” where HAL the computer seeks to kill astronaut Dave, and the latter turns off the power on the former.

Can the former be charged with the crime? Can the latter be charged with the killing of an artificial person? Or think of the replicants—bioengineered humanoids in the 1982 motion picture “Blade Runner”—are they persons possessing legal rights and obligations?

These questions might seem silly right now. But as this and future generations of ChatGPT and AI roll out, the line between person and property will erode further, necessitating answers.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

David Schultz is a professor of political science and legal studies at Hamline University and a professor of law at the University of Minnesota.

