There will be hackers, criminals, rogue nation-states, high school students, and crackpots all attempting to feed erroneous data into every AI they can find.
As we continue to explore the legal implications of using AI-generated code, I want to extend a big thanks to ZDNET commenter @pbug5612 for inspiring us to journey down this rabbit hole.
In our first article of the series, we looked at who owns the code created by AI chatbots like ChatGPT. In this article, we'll discuss issues of liability.
To frame this discussion, I'll turn to attorney and long-time Internet Press Guild member Richard Santalesa. With his tech journalism background, Santalesa understands this stuff from both a legal and a tech perspective. (He's a founding member of the SmartEdgeLaw Group.)
"Until cases grind through the courts to definitively answer this question," Santalesa advises, "the legal implications of AI-generated code are the same as with human-created code."