For those using ChatGPT, if anything you post is used in a lawsuit against OpenAI, OpenAI can send you the bill for the court case (attorney fees and such) whether OpenAI wins or loses.
Examples:
- A defamation case by an Australian mayor because ChatGPT incorrectly stated that he had served prison time for bribery: https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/
- OpenAI sued for defamation after ChatGPT fabricates legal accusations against radio host: https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit
- Sarah Silverman sues OpenAI for copyright infringement: https://lemmy.ml/post/1905056
Attorney talking about their ToS (same link as post link): https://youtu.be/fOTuIhOWFXU?t=268
From https://openai.com/policies/terms-of-use:
7. Indemnification; Disclaimer of Warranties; Limitations on Liability
(a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.
Tried to read this post twice- what are you telling me to be aware of / stop doing?
I am not a lawyer and the implications are larger than this.
Do not post, share, trade, or otherwise make public any ChatGPT output from your sessions until you have fact-checked it to the extent that you’re willing to take legal responsibility for it, especially anything that could trigger a lawsuit against OpenAI, because when that happens, you will foot the bill.
I am not a lawyer.
Hi, I’m a lawyer. While I work in a different area of law and therefore can’t speak to this in depth with certainty, if their terms are as enforceable as the linked articles seem to indicate, then yes, this is good advice.
As always with the law, things may vary by jurisdiction. If you have specific questions, contact a lawyer in your area.
Basically just be careful if you like to post images/text taken straight from ChatGPT.
If you post anything that someone gets offended about and decides to sue ChatGPT (OpenAI) over it, they can turn around and bill you for those legal costs (whether they win the lawsuit or not).
Or if you post a screenshot that proves that you can get ChatGPT to write out the entire first chapter of some copyright protected book…
I’ve also seen people who like to “jailbreak” ChatGPT and then post things like tricking ChatGPT into giving instructions for making certain illegal devices. Again, just be careful: if someone sues the makers of ChatGPT and your social media post is included in the lawsuit, you have already agreed to pay their legal costs for that lawsuit.
Is that enforceable? Seems ridiculous.
I agree, it seems ridiculous, but according to the attorney in the video this would be enforceable, at least in the U.S.: https://piped.video/fOTuIhOWFXU?t=330
I’m sure you could try to get your own attorney to try to fight back against OpenAI’s attempt to bill you, but that’s going to cost you as well.
If I’m understanding this correctly, you can be held liable for whatever ChatGPT produces in response to your queries if any damaging content results.
It makes sense, right?
They produced a language model. It does nothing more than predict the next word. It will lie all the time; that’s part of how it works. It makes stuff up from the input it gets.
If you post that stuff online and it contains lies about people and you didn’t check it, you absolutely should be liable for that. I don’t see a problem with that.
Right, but what about the case where you post something that doesn’t contain lies at all?
What if ChatGPT outputs something that a certain former president gets offended by and he decides to sue OpenAI?
According to their ToS it doesn’t matter if it’s a “frivolous lawsuit”. If OpenAI had to pay any attorney fees just to respond to some ridiculous lawsuit, they could still bill you for those costs.
I don’t think it makes sense at that point at all.
Of course the vast majority of users would never have to worry about this, but it’s still something to be aware of.
It’s a tool. Can’t sue the manufacturer if you injure someone with it.
This isn’t true in the least. Purchase a tool and look through the manual. Every section marked “danger”, “warning”, or “caution” was put in there because someone sued the company after a user or bystander got hurt.
You are right. Seems I confused common sense with reality.
You ever heard of a product recall?
You can if the tool is defective.
That’s gotta be more to cover their ass than to come after you. Unless you use its generated text to sue the company, I don’t think they would ever try to sue their users; otherwise everyone would stop using the platform, Microsoft would have a huge PR problem, and their stock price would drop. It just doesn’t logically make sense for them to do that, unless they were sued by you over the content produced by your inputs.
Just because they write something in their shitty corporate document doesn’t mean it holds up later. Sure, they can write that you sold your soul to them, but that doesn’t mean it’s binding at all.
After all, you never signed any contract with them. Not even via DocuSign (which wouldn’t even be binding in my country, lol… worthless).
Yes they can send you a bill, but there’s always room for more toilet paper. Or just send them a fantasy bill back yourself 🤷
Well, at least you have additional protections in your country. For those in the U.S. this document is binding enough (at least according to this lawyer: https://youtu.be/fOTuIhOWFXU?t=330).
Edit: As with anything I’m sure you could argue in court that you shouldn’t be held responsible for their legal bills and hopefully you would win, but that would still require you to go to court over the matter.
I suppose the argument is, don’t post content which you are not prepared to take responsibility for. Which is the case with any content posted on social media, regardless of who, or what, generated it.
If I get ChatGPT to make inflammatory comments, I’m still responsible for those comments if I choose to post them publicly. I can hardly stand behind the fig leaf of “oh, I don’t believe those things, only the AI believes those things”. It was still me that chose to post the content publicly. Anyway, the law does not recognise artificial intelligence systems as having independent agency, so the responsibility is still on the operator.
Sarah Silverman sues OpenAI for copyright infringement: https://lemmy.ml/post/1905056
How is this applicable? A copyright lawsuit isn’t bound by the TOS or any other document produced by the infringer. If this were the case, I could just write my own get out of jail free cards.
Hypothetical on this one: if the reason they decided to look into this was that they saw someone’s post on social media about ChatGPT being able to reproduce parts of some copyrighted work, OpenAI could bill that user for publishing the info.
It doesn’t even have to be the sole reason for them to look into it. Technically they could bill anyone who posted content if that content wound up being used as evidence against OpenAI in any way (as I understand it, that’s where the “relating to your use of the Services” part could be used).
But if I have misunderstood something about this hypothetical, please feel free to correct me.
c) Limitations of Liability. NEITHER WE NOR ANY OF OUR AFFILIATES OR LICENSORS WILL BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR EXEMPLARY DAMAGES, INCLUDING DAMAGES FOR LOSS OF PROFITS, GOODWILL, USE, OR DATA OR OTHER LOSSES, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. OUR AGGREGATE LIABILITY UNDER THESE TERMS SHALL NOT EXCEED THE GREATER OF THE AMOUNT YOU PAID FOR THE SERVICE THAT GAVE RISE TO THE CLAIM DURING THE 12 MONTHS BEFORE THE LIABILITY AROSE OR ONE HUNDRED DOLLARS ($100). THE LIMITATIONS IN THIS SECTION APPLY ONLY TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW.
This is section 7c in their TOS. I watched the video but still confused.
Does this essentially set a cap of max(user paid, 100) that OpenAI has to pay to users if there’s any damage? Or is that the cap on what users pay to OpenAI when there’s a lawsuit against OpenAI and their lawyers send users the bill?
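Reading the quoted clause literally, the cap runs one way: it limits OpenAI’s liability to you, while the indemnity in 7(a) quoted above doesn’t mention any cap on what you might owe them. A minimal sketch of the arithmetic, with made-up names, just to make the “greater of” wording concrete:

```python
def openai_liability_cap_usd(paid_last_12_months_usd: float) -> float:
    """Clause 7(c), read literally: OpenAI's aggregate liability to the user
    is capped at the greater of what the user paid for the service in the
    12 months before the liability arose, or $100. (Illustrative only.)"""
    return max(paid_last_12_months_usd, 100.0)

# Free-tier user who paid nothing: cap is $100.
print(openai_liability_cap_usd(0.0))    # 100.0
# Subscriber who paid $240 over the prior 12 months: cap is $240.
print(openai_liability_cap_usd(240.0))  # 240.0
```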
Well you see
They can write whatever the fuck they want in those terms and it might look legally binding
And it may well be until someone challenges it
Let’s assume I post a screenshot of a ChatGPT session on social media, and OpenAI sues me for the content.
Don’t they have to prove first that it actually is such a screenshot, and not a fake? It’s even easier with copied text.
Somehow this rings strangely similar to copyright cases against OpenAI, now with reversed roles. Who owns the authorship, how can we tell?
Let’s assume I post a screenshot of a ChatGPT session on social media, and OpenAI sues me for the content.
That hypothetical doesn’t have much to do with this indemnification clause. OpenAI wouldn’t be the one filing a lawsuit against you. They are the ones being sued by someone else who saw the screenshot you posted.
OpenAI would just send you the bill once the case has been settled (because according to the ToS you agreed to defend them from lawsuits related to your use of ChatGPT).
Don’t they have to prove first that it actually is such a screenshot, and not a fake?
Yes, and during the whole process the other side’s attorneys can force OpenAI, through discovery, to search their logs/databases and turn over any evidence related to the case. It probably wouldn’t take long, since the screenshot would likely include the user’s prompt and they would just have to search for that.
Somehow this rings strangely similar to copyright cases against OpenAI, now with reversed roles. Who owns the authorship, how can we tell?
So far the courts have ruled that AI can’t claim copyright to anything. The “prompter” could claim the copyright but they would also have to alter the output in some way to make it their own (at least as far as AI art is concerned, I assume it would be similar for copyright on text).
This guy needs to learn how to do YouTube. No graphics. No timestamps. He just talks at the camera.