• 5 Posts
  • 136 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • I get really bad brain fog. It’s like I wake up and feel my IQ has halved. Simple problems seem gigantic, everything is a hassle. On top of that - general fatigue, like walking up the stairs or running a bit gets me all breathless. Even though I should be familiar with it by now, I always keep thinking: “is it COVID?”

    Then one day it rains and the pollen subsides and suddenly I can run and think and feel like myself again.

  • “metadata” is such a pretty word. How about “recipe” instead? It stores all information necessary to reproduce work verbatim or grab any aspect of it.

    The legal issue of copyright is a tricky one, especially in the US, where copyright is often weaponized by corporations. The gist of it: the training model itself was an academic endeavor and therefore falls under fair use. Companies like StabilityAI or OpenAI then used these datasets and monetized products built on them, which in my understanding skims the gray zone of legality.

    If these private for-profit companies had simply taken the same data and built their own, identical dataset, they would be liable to pay the authors for the use of their work in a commercial product. They get around it by using the existing model, originally created for research rather than commercial use.

    Lemmy is full of open source and FOSS enthusiasts, I’m sure someone can explain it better than I do.

    All in all, I don’t argue about the legality of AI, but as a professional creative I highlight the ethical (plagiarism) risks that are beginning to arise in the majority of models. We all know Joker, Marvel superheroes, and popular Disney and WB cartoon characters - and can spot when “our” generations cross the line into copying someone else’s work. But how many of us are familiar with Polish album cover art, Brazilian posters, Chinese film superheroes, or Turkish logos? How sure can we be that the work “we” produced using AI is truly original and not a perfect copy of someone else’s? Does our ignorance excuse this second-hand plagiarism? Or should the companies releasing AI models stop adding features and fix that broken foundation first?