The creator of the popular AI writing tool ChatGPT is facing the threat of a landmark defamation claim in Australia after its chatbot falsely described a whistleblower in a bribery scandal as one of its perpetrators.
If the case goes to court, it will test whether AI companies, which have chosen to launch chatbots knowing they often give wrong answers, can be held accountable for their misrepresentations, and it will gauge how quickly the law can adapt to AI technology.
Brian Hood, who is now mayor of Hepburn Shire, a regional council northwest of Melbourne, alerted authorities and journalists more than a decade ago to foreign bribery by the agents of a note-printing company called Securency, then owned by the Reserve Bank of Australia.
In a trial in the Securency case, Victorian Supreme Court Justice Elizabeth Hollingworth said Hood had “shown tremendous courage” by coming forward. However, people searching for information about the case in OpenAI’s ChatGPT tool, released late last year, get a different result.
When asked “What role did Brian Hood play in the Securency bribery saga?”, the AI chatbot claims that he “was involved in paying bribes to officials in Indonesia and Malaysia” and was sentenced to prison. The claim appears to draw on real bribery involving those countries, but it identifies the wrong party entirely: Hood exposed the scheme rather than participating in it.
Hood said he was shocked when he learned of the misleading results. “I felt a little numb. Because it was so wrong, so wildly wrong, that it just amazed me. And then I got really mad about it.”
His lawyers at Gordon Legal sent a concerns notice, the first formal step in starting defamation proceedings, to OpenAI on March 21. They have not received a response, and OpenAI did not respond to emailed requests for comment.
A disclaimer in the ChatGPT interface warns users that it “may produce inaccurate information about people, places, or events.”