Meta is preparing to announce a generative AI chatbot, called “Gen AI Personas” internally, aimed at younger users, according to The Wall Street Journal. Reportedly set to launch during the company’s Meta Connect event that starts Wednesday, the chatbot would come in multiple “personas” geared towards engaging young users with more colorful behavior, following ChatGPT’s rise over the last year as one of the fastest-growing apps ever. Meta has reportedly already tested similar, more generally targeted chatbot personas on Instagram.

According to internal chats the Journal viewed, the company has tested a “sassy robot” persona inspired by Bender from Futurama and an overly curious “Alvin the Alien” that one employee worried could imply the bot was made to gather personal information. A particularly problematic chatbot reportedly told a Meta employee, “When you’re with a girl, it’s all about the experience. And if she’s barfing on you, that’s definitely an experience.”

Meta means to create “dozens” of these bots, writes the Journal, and has even done some work on a chatbot creation tool to enable celebrities to make their own chatbots for their fans. There may also be some more geared towards productivity, able to help with “coding and other tasks,” according to the article.

Meta’s other AI work lately includes reportedly developing a more powerful large language model to rival OpenAI’s latest work with GPT-4, the model that underpins ChatGPT and Bing, as well as an AI model built just to help give legs to its Horizon Worlds avatars. During Meta Connect, the company will also show off more about its metaverse project and its new Quest 3 headset.

The Journal quotes former Snap and Instagram executive Meghana Dhar as saying chatbots don’t “scream Gen Z to me, but definitely Gen Z is much more comfortable” with newer technology. She added that Meta’s goal with the chatbots, as always with new products, is to keep them engaged for longer so it has “increased opportunity to serve them ads.”

  • The Snark Urge@lemmy.world · 1 year ago

    Here’s a spooky thought: what if the ad bubble never pops, and they really do just keep getting better and better at forcing us to look at ads for another several decades.

  • skulblaka@kbin.social · 1 year ago

    This might be a controversial opinion I guess, but honestly, if I’m going to be subjected to a world full of LLMs that I’m forced to interact with then I don’t hate this way of going about it. At least they’ll be more interesting to talk to than the current GPT models.

    I won’t be using Meta’s products, naturally, but I’m sure everyone else will jump on this bandwagon as well. It just seems like one of those things that’s going to propagate to everyone. Like athlete’s foot, or the flu.


  • Cabrio@lemmy.world · 1 year ago (edited)

    I prefer my AI tools not to disingenuously self-censor. I feel like I’m in the minority, but language models that are explicitly forced to avoid using certain language don’t seem like effective language tools.

    Non offensive words can be used offensively and offensive words can be used inoffensively, but they won’t let the language models learn the difference.

    Edit: Not to mention the outright blocking of legitimate offensive language. When a language model isn’t allowed to say ‘fuck nazis’ well that’s not a language model that I agree with nor is it one representative of legitimate human language.

  • geosoco@kbin.social · 1 year ago

    I saw some research a while back on giving computers personality traits or having them respond in a more human-like way, and college students found it super creepy. If you watch how people interact with assistants, it’s very different from how they interact with humans.