Except that if you continue reading beyond your quote, it goes on to explain why that actually doesn’t help.
Companies and their legal departments do care, though, and that’s where the big money lies for Microsoft when it comes to Windows.
Training and fine-tuning happen offline for LLMs; it’s not like they continuously learn by interacting with users. Sure, the company behind one might record conversations and use them to further tune the model, but these models don’t inherently need that.
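As a minimal illustration of that separation, here’s a PyTorch sketch with a toy stand-in model (purely hypothetical, nothing like a real LLM): serving runs with gradients disabled, so conversations can’t change the weights, while fine-tuning is a separate job run later on recorded data.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM (hypothetical; a real model is vastly larger).
model = nn.Linear(16, 16)

# --- Serving: what users talk to. Weights are frozen, nothing is learned. ---
model.eval()
with torch.no_grad():                    # no gradients computed, no updates
    reply = model(torch.randn(1, 16))    # conversations never change weights

# --- Fine-tuning: a separate offline job, run later on a recorded dataset. ---
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = torch.randn(32, 16)              # e.g. logged conversations, curated offline
loss = nn.functional.mse_loss(model(batch), batch)
loss.backward()
optimizer.step()                         # only here do the weights actually update
```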
Happened with Lone Echo for me. It’s a VR game where you’re in a space station, and you move around in zero-g by just grabbing your surroundings and pulling yourself along or pushing yourself off of them. I started reflexively attempting to do that in real life for a bit after longer sessions.
HTTP is not Google-controlled; you don’t need to replace it in order to build something new without Google.
There’s also this part:
But Johansson’s public statement describes how they tried to schmooze her: they approached her last fall and were given the FO, contacted her agent two days before launch to ask her to reconsider, launched it before they got a response, and then yanked it when her lawyers asked them how they had made the voice.
Which is still not an admission of guilt, but it seems shady at the very least, if that’s actually what happened.
It’s not quite that simple, though. GDPR is only concerned with personally identifiable information. Answers and comments on SO rarely contain that kind of information as long as you remove the username from them, so it’s not technically against GDPR if you keep the contents.
And science fiction somehow can’t be fascist?
I was thinking of an approach based on cryptographic signatures. If all images that come from a certain AI model are signed with a digital certificate, then you can tamper with the metadata all you want; you’re not gonna be able to produce the correct signature for an image unless you have access to the certificate’s private key. This technology has been around for ages, is used in every web browser, and would be pretty simple to implement.
The only weak point of this approach is that it relies on the private key not being publicly accessible, which makes it a lot harder, or maybe even impossible, to implement for open-source models that anyone can run on their own hardware. But then again, at least for what we’re talking about here, the goal wouldn’t need to be a system covering every model, just one that makes at least a couple of models safe to use for this specific purpose.
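As a rough sketch of what signing and verification could look like, assuming Ed25519 via Python’s `cryptography` package (the key handling and image bytes here are placeholders, not how any actual model provider does it):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The generator's private key (hypothetical; kept secret on the operator's servers).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # published so anyone can verify

image_bytes = b"...raw bytes of the generated image..."  # placeholder

# Signing happens once, at generation time.
signature = private_key.sign(image_bytes)

# Anyone holding the public key can check the signature; forging a valid
# one without the private key is computationally infeasible.
try:
    public_key.verify(signature, image_bytes)
    print("image provably came from this generator")
except InvalidSignature:
    print("signature invalid: tampered with, or not from this generator")
```

Note that any change to `image_bytes`, even a single flipped bit, makes the verification fail, which is exactly why tampering with metadata doesn’t help an attacker.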
I guess the more practical question is whether this would be helpful for any other use case, because if not, I highly doubt it’s gonna be implemented. Nobody is gonna want the PR nightmare of building a feature with no other purpose than to help pedophiles generate stuff to get off to “safely”, no matter how well-intentioned.
Yeah, but the point is that you can’t easily add it to any picture you want (if it’s implemented well), which provides a way to prove that a picture was created using AI and that no harm was done to children in its creation. It would be a valid solution to the “easy to hide actual CSAM among AI-generated pictures” problem.
AI is just impossibly far away.
Sure, it’s pretty far away, but it’s also moving at breakneck speed. Last year, low-res spaghetti-eating Will Smith body horror was the pinnacle of AI-generated video; today we’re already generating videos that take at least a second look to determine that they were AI-generated. The big question is at what point that improvement rate will start to level off.
I mean… It might be. It just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don’t think anyone really knows that yet.
Interesting, that seems kinda unsafe to me. The one I checked was Ryanair; they fully prohibit batteries in checked luggage.
That’s only for cabin luggage. In checked luggage, lithium-ion batteries are completely banned. If a battery bursts into flames in the cabin, it can be handled with hopefully minimal damage. You do not want that happening in the belly of the plane, packed in closely between everyone else’s luggage, with no way of containing it until the plane lands.
Yup, you got it, including the solution to your confusion. Good encryption algorithms are set up so that even the smallest possible change in the input (a single flipped bit) produces a completely different result. So yeah, if there’s only a small set of exact possible messages that could have been sent, you can find out which one it was by encrypting each candidate yourself and comparing your result to what was sent. But there is a super easy protection against this: just add some random data to the end of the message before encrypting it. The more you add, the harder it will be to crack.
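Here’s a toy demonstration in Python, using AES in ECB mode from the `cryptography` package purely because it’s deterministic (the key, messages, and padding scheme are all made up for illustration; real protocols already use random IVs/nonces, which is the same idea as the random data above):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # assume the attacker can also encrypt under this scheme

def encrypt(msg: bytes) -> bytes:
    # Deterministic toy encryption: the same input always gives the same output.
    padded = msg.ljust(48, b"\x00")  # naive zero-padding to a block multiple
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()

intercepted = encrypt(b"attack at dawn")

# The guessing attack: with a small set of candidates, just try them all.
for guess in (b"attack at dawn", b"attack at dusk", b"hold position"):
    if encrypt(guess) == intercepted:
        print("the message was:", guess)

# The defense: append random data before encrypting. The attacker's own
# encryption of a guess will now (almost) never match the intercepted ciphertext.
def encrypt_padded(msg: bytes) -> bytes:
    return encrypt(msg + os.urandom(16))
```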
Ah. Well, the first comment in this chain talked about mobile devices, so I was assuming we were talking about mobile data plans.
Uhh… Germany would like to have a word
Most carriers do offer some uncapped plan, I think, but it’s expensive and not the default.
IIRC, Mass Effect lets you buy anything you missed in a store later, at least.