Sam Altman, CEO of OpenAI, speaks during the World Economic Forum annual meeting in Davos, Switzerland. (Denis Balibouse/Reuters)

    • Daniel@lemmy.ml · +21 · edited 11 months ago

      Worldcoin, founded by US tech entrepreneur Sam Altman, offers free crypto tokens to people who agree to have their eyeballs scanned.

      What a perfect sentence to sum up 2023 with.

  • Bipta@kbin.social · +42 · 11 months ago

    That’s why they just removed the military limitations from their terms of service, I guess…

  • deegeese@sopuli.xyz · +34 / -1 · 11 months ago

    I also want to sell my shit for every purpose but take zero responsibility for consequences.

  • Sludgehammer@lemmy.world · +34 / -2 · 11 months ago

    Considering that what we’ve decided to call “AI” can’t actually make decisions, that’s a no-brainer.

  • fidodo@lemmy.world · +19 · 11 months ago

    Shouldn’t, but there’s absolutely nothing stopping it, and lazy tech companies absolutely will. I mean we live in a world where Boeing built a plane that couldn’t fly straight so they tried to fix it with software. The tech will be abused so long as people are greedy.

    • TwilightVulpine@lemmy.world · +5 · 11 months ago

      So long as people are rewarded for being greedy. Greedy and awful people will always exist, but the issue is in allowing them to control how things are run.

      • fidodo@lemmy.world · +6 · 11 months ago

        More than just that, they’re shielded from repercussions. The execs involved with ignoring all the safety concerns should be in jail right now for manslaughter. They knew better and gambled with other people’s lives.

    • monkeyslikebananas2@lemmy.world · +4 · edited 11 months ago

      They fixed it with software and then charged extra for the software safety feature. It wasn’t until the planes started falling out of the sky that they decided to graciously offer it for free.

  • Optional@lemmy.world · +17 · 11 months ago

    Has anyone checked on the sister?

    OpenAI went from interesting to horrifying so quickly, I just can’t look.

  • Nei@lemmy.world · +13 · 11 months ago

    OpenAI went from an interesting and insightful company to a horrible and weird one in very little time.

    • TurtleJoe@lemmy.world · +5 · 11 months ago

      People only thought it was the former before they actually learned anything about them. They were always this way.

    • AVincentInSpace@pawb.social · +4 · 11 months ago

      Remember when they were saying GPT-2 was too dangerous to release because people might use it to create fake news or articles about topics people commonly Google?

      Hah, good times.

    • pearsaltchocolatebar@discuss.online · +9 · edited 11 months ago

      Yup, my job sent us to an AI/ML training program from a top cloud computing provider, and there were a few hospital execs there too.

      They were absolutely giddy about being able to use it to deny unprofitable medical care. It was disgusting.

  • los_chill@programming.dev · +7 · 11 months ago

    Agreed, but also one doomsday-prepping capitalist shouldn’t be making AI decisions. If only there was some kind of board that would provide safeguards that ensured AI was developed for the benefit of humanity rather than profit…

  • iAvicenna@lemmy.world · +5 · edited 11 months ago

    I am sure Zuckerberg is also claiming that they are not making any life-or-death decisions. Let’s see you in a couple of years when the military gets involved with your shit. Oh wait, they already did, but I guess they will just use AI to improve soldiers’ canteen experience.

  • nymwit@lemm.ee · +4 · 11 months ago

    So just like shitty biased algorithms shouldn’t be making life changing decisions on folks’ employability, loan approvals, which areas get more/tougher policing, etc. I like stating obvious things, too. A robot pulling the trigger isn’t the only “life-or-death” choice that will be (is!) automated.

  • captainastronaut@seattlelunarsociety.org · +4 / -1 · 11 months ago

    But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?

    Too little, too late, Sam.

    • pearsaltchocolatebar@discuss.online · +3 / -4 · 11 months ago

      Yes on everything but drone strikes.

      A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.

      • Deceptichum@kbin.social · +4 · 11 months ago

        So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?

            • pearsaltchocolatebar@discuss.online · +2 / -5 · edited 11 months ago

              The computer, of course.

              A properly designed autonomous vehicle would be polling data from hundreds of sensors hundreds or thousands of times per second. A human’s reaction time is about 0.2 seconds, which is a hell of a long time in a crash scenario.

              It has a way better chance of a ‘life’ outcome than a human who’s either unaware of the potential crash, or is in fight or flight mode and making (likely wrong) reactions based on instinct.

              Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road was autonomous, then crashes would be extremely rare.
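              As a rough back-of-the-envelope illustration of that gap (the 50 km/h speed and 100 Hz control-loop rate below are assumptions chosen for the sketch, not figures from any real vehicle):

              ```python
              # Sketch: distance a car covers before a human vs. an automated
              # system can begin to react. All numbers are illustrative assumptions.

              speed_kmh = 50                       # assumed city driving speed
              speed_ms = speed_kmh / 3.6           # convert km/h to m/s (~13.9 m/s)

              human_reaction_s = 0.2               # human reaction time cited above
              sensor_rate_hz = 100                 # assumed sensor/control-loop rate
              computer_reaction_s = 1 / sensor_rate_hz  # one polling interval (10 ms)

              print(f"Human:    {speed_ms * human_reaction_s:.2f} m before any reaction")
              print(f"Computer: {speed_ms * computer_reaction_s:.2f} m before any reaction")
              # At 50 km/h this works out to roughly 2.8 m vs. 0.14 m.
              ```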

                • pearsaltchocolatebar@discuss.online · +1 / -5 · 11 months ago

                  Again, a computer can react faster than a human can, which means the car can detect a human and start reacting before a human even notices the pedestrian.

      • LWD@lemm.ee · +3 · 11 months ago

        Have you seen a Tesla drive itself? Never mind the ethical dilemmas; they can barely navigate downtown without hitting pedestrians.

                • wikibot@lemmy.world (bot) · +1 · 11 months ago

                  Here’s the summary for the wikipedia article you mentioned in your comment:

                  No true Scotsman, or appeal to purity, is an informal fallacy in which one attempts to protect their generalized statement from a falsifying counterexample by excluding the counterexample improperly. Rather than abandoning the falsified universal generalization or providing evidence that would disqualify the falsifying counterexample, a slightly modified generalization is constructed ad-hoc to definitionally exclude the undesirable specific case and similar counterexamples by appeal to rhetoric. This rhetoric takes the form of emotionally charged but nonsubstantive purity platitudes such as “true”, “pure”, “genuine”, “authentic”, “real”, etc. Philosophy professor Bradley Dowden explains the fallacy as an “ad hoc rescue” of a refuted generalization attempt.

                  to opt out, pm me ‘optout’.