Brin’s “We definitely messed up”, at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.

  • Ephera@lemmy.ml · 33 points · 10 months ago

    The problem is that the training data is biased and these AIs pick up on biases extremely well and reinforce them.

    For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.
    So, if you’ve then got a journalistic picture, like from the food banks mentioned in the article, suddenly there will be relatively many people of color there, compared to what the AI has seen from its other training data.
    As a result, it will store that one of the defining features of what a food bank looks like is that there are people of color there.

    To try to combat these biases, the bandaid fix is to prefix your query with instructions to generate diverse pictures. As in, literally prefix. They’re simply putting words in your mouth (which is industry standard).
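    To illustrate, here’s a minimal sketch of that band-aid in Python. The prefix wording and function name are made up; the actual hidden instructions Google uses are not public:

```python
# Hypothetical prompt rewriting: the user's text is silently prefixed
# before it reaches the image model. The prefix wording here is an
# assumption, not anything Google has published.
DIVERSITY_PREFIX = "Generate a diverse, inclusive depiction of: "

def rewrite_prompt(user_prompt: str) -> str:
    """Prepend the hidden instruction to whatever the user typed."""
    return DIVERSITY_PREFIX + user_prompt

# The image model only ever sees the rewritten string:
print(rewrite_prompt("a banker at work"))
```

    The user never sees the rewritten string, which is why the results look so inexplicable from the outside.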

    • frogmint@beehaw.org · 6 points · 10 months ago

      For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.

      That is quite the bold statement. Source?

      • Ephera@lemmy.ml · 7 points · 10 months ago

        I don’t think I came up with that myself, but yeah, I’ve got nothing. It would have been multiple years ago that I read about it.
        Maybe strike the “mostly”, but it seemed logical enough to me that this would be a factor, similar to how some women will avoid revealing their gender (in certain contexts on the internet) to steer clear of sexual harassment.
        For that last part, I can refer you to a woman from whom I’ve heard first-hand that she avoids voice chat in games because of that.

    • GadgeteerZA@beehaw.org · 5 points · 10 months ago

      Sometimes you do want something specific. I can understand it if someone just asks for a person x, y, z and then gets a broader selection of men, women, young, old, black or white. But if one asks for a middle-aged white man, I would not expect it to respond with a young Black woman just to have variety. I’d expect the other, non-stated variables to be varied. It’s like asking for a scene of specifically leafy green trees: I would not expect to see a whole lot of leafless trees.

      • Ephera@lemmy.ml · 12 points · 10 months ago

        Yeah, the problem with that is that there’s no logic behind it. To the AI, “white person” is equally as white as “banker”. It only knows what a white person looks like, because it’s been shown lots of pictures of white people and those were labeled “white person”. Similarly, it’s been shown lots of pictures of white people and those were labeled “banker”.

        There is a way to fix that, which is to introduce some logic before the query is sent to the AI: detect whether your query contains an explicit reference to skin color (or similar), and if so, leave that query prefix out.

        Where it gets wild is that you can ask the AI whether your query contains such explicit references to skin color, and it will genuinely do quite well at answering that correctly, because text processing is its core competence.
        But then it will answer you “Yes.” or “No.” or “Potato chips.”, and you have to program the condition to then leave out the query prefix.
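        Roughly, that glue logic could look like the sketch below. This is an assumption about how such a system might be wired up, not Google’s actual code; `fake_classifier` is a keyword check standing in for the real LLM call so the snippet runs on its own:

```python
# Hedged sketch: only apply the hidden diversity prefix when the user's
# prompt does NOT already specify skin color/ethnicity. In production,
# `fake_classifier` would be a call to the text model; here it's a
# keyword check so the example is self-contained.
DIVERSITY_PREFIX = "Generate a diverse, inclusive depiction of: "

def fake_classifier(user_prompt: str) -> str:
    # Stand-in for asking the LLM: "Does this prompt explicitly
    # specify skin color or ethnicity? Answer Yes or No."
    terms = ("white", "black", "asian", "latino", "hispanic")
    return "Yes." if any(t in user_prompt.lower() for t in terms) else "No."

def build_prompt(user_prompt: str) -> str:
    answer = fake_classifier(user_prompt)
    # The model answers in free text ("Yes.", "No.", or something
    # else entirely), so the surrounding code has to normalize it
    # before branching on it.
    if answer.strip().lower().startswith("yes"):
        return user_prompt              # user was explicit: leave it alone
    return DIVERSITY_PREFIX + user_prompt

print(build_prompt("a middle-aged white man"))  # left untouched
print(build_prompt("a banker"))                 # prefix added
```

        The fragile part is exactly that normalization step: the classifier’s answer is free-form text, so the deterministic code around it has to decide what counts as “Yes”.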

        • GadgeteerZA@beehaw.org · 4 points · 10 months ago

          Yes, it could be that, and it may explain why the Nazi images came out like they did. But it sounded to me more like Google was deliberately forcing diversity into the images. For general requests, that makes sense; otherwise it does not. They could just as well decide that grass should not always be green or brown, but sometimes also blue or purple, for variety.

    • Scrubbles@poptalk.scrubbles.tech · 3 points · 10 months ago

      Nah, in this case I think it’s a classic case of overcorrection and prompt manipulation. The bias you’re talking about is real, so to try to combat it, they and other AI companies manipulate your prompt before feeding it to the LLM. I’m very sure they are stripping out “white male” and/or subbing in different ethnicities to try to counter the bias.

      • GluWu@lemm.ee · 4 points · 10 months ago

        TFW you accidentally leave the hidden diversity LoRA weight at 1.00.