A.I. can identify keystrokes by just the sound of your typing and steal information with 95% accuracy, new research shows. Researchers had artificial intelligence listen to the sounds of typing through a phone and over Zoom, with eerie results.

  • Sanctus@lemmy.world
    ↑ 14 · ↓ 2 · 1 year ago

    There's no way this thing is guessing keyclicks by sound on any keyboard. Maybe a specific one, especially with custom keyboards taking off. My canonkeys 60% sounds nothing like my completely custom elvish keyboard. An AI in this day and age is not ready for that.

    • AbouBenAdhem@lemmy.world
      ↑ 20 · 1 year ago

      “There's no way this thing is guessing keyclicks by sound”

      Given that it's AI-trained, it may be hard to say, but my guess is that it's based on timing more than on the unique sound of each separate key. Certain sequences of keys probably have a predictable time between each stroke, based on how long it takes the relevant finger to travel to the next key after the previous one.
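
      To make that concrete, here is a minimal, purely illustrative sketch of the timing idea (not anything from the paper): given the onset time of each detected keystroke, the gaps between presses form a feature vector that could be matched against typical finger-travel times for common key sequences.

```python
# Illustrative only: turn detected keystroke onset times (in seconds)
# into inter-key intervals that could be matched against typical
# finger-travel times for common key sequences.

def inter_key_intervals(onsets: list[float]) -> list[float]:
    """Gaps between consecutive keystroke onsets, in milliseconds."""
    return [(b - a) * 1000.0 for a, b in zip(onsets, onsets[1:])]

# Hypothetical onsets recovered from a recording of someone typing "the"
onsets = [0.000, 0.142, 0.301]
print(inter_key_intervals(onsets))  # ~[142.0, 159.0] ms
```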

      • Sanctus@lemmy.world
        ↑ 7 · 1 year ago

        That one I could believe more. Since keyboards have such a wide array of sounds, it's probably not using the envelope to determine the key.

      • 6daemonbag@lemmy.dbzer0.com
        ↑ 4 · 1 year ago

        On top of that, we understand the frequency of letters used in languages. By knowing both of these and correlating them with recurring patterns of sounds, I can very much believe this can be leveraged against even custom mechanical keyboards with random keys attached.
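
        That is essentially classic frequency analysis. A rough sketch of the idea with made-up numbers (not from the paper): if the recording lets you group keystrokes into clusters that sound alike, ranking the clusters by how often they occur and lining them up against known English letter frequencies gives a first guess at the mapping, even without knowing which physical key is which.

```python
# Rough sketch of frequency analysis: rank sound clusters by how often
# they occur and pair them with the most common English letters.
# Cluster labels and counts here are made up for illustration.
from collections import Counter

ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"

observed = ["c7", "c2", "c7", "c9", "c2", "c7", "c4", "c2", "c7"]  # cluster per keystroke
ranked = [label for label, _ in Counter(observed).most_common()]

guess = {label: letter for label, letter in zip(ranked, ENGLISH_BY_FREQUENCY)}
print(guess)  # {'c7': 'e', 'c2': 't', 'c9': 'a', 'c4': 'o'}
```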

    • Aa!@lemmy.world
      ↑ 10 · 1 year ago

      It was explicitly trained on the keyboard used in MacBooks, which is fairly specific, but covers a pretty large user base.

      In theory they could train it on other specific keyboards, but it remains to be seen what other factors could affect it.

      • Sanctus@lemmy.world
        ↑ 3 · 1 year ago

        Which has a very specific sound with the scissor switches and aluminum casing. It's not exactly your average Logitech keyboard in an office.

  • AbouBenAdhem@lemmy.world
    ↑ 10 · 1 year ago

    Just record yourself typing “all work and no play makes Jack a dull boy”, and play it on loop in the background.

  • dewritochan@lemmy.dbzer0.com
    ↑ 9 · 1 year ago

    got hit with a paywall, got around it, leaving this for the lazy

    You may have gotten used to covering your webcam, but now you might have to start muffling the sound of your keyboard too.

    Laptop users are at risk of having sensitive information including private messages, passwords, and credit card numbers stolen just by typing on their keyboard. A new paper by a team of researchers from British universities shows that artificial intelligence can identify keystrokes by sound alone with 95% accuracy. And as technology continues to develop at a rapid pace, attacks such as these will become more sophisticated.

    In this study, experimenters correctly identified keystrokes on a MacBook Pro through a nearby phone recording 95% of the time, and through a recorded Zoom call at a 93% rate.

    The research paper details what it calls “acoustic side channel attacks,” in which a malicious third party uses a secondary device, like a cell phone sitting next to a laptop or an unmuted microphone on video-conferencing software such as Zoom, to record the sound of typing. The third party then feeds the recording through a deep-learning A.I. trained to recognize the sound of individual pressed keys to decipher what exactly was typed.
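
    To make the described pipeline concrete, here is a minimal sketch of the general shape of such an attack. It is not the authors' code; the sample rate, energy threshold, window length, and the stubbed-out classify_keystroke function are all illustrative assumptions.

```python
# Minimal sketch of an acoustic side-channel pipeline (not the paper's code):
# 1) locate keystroke-like bursts of energy in the recording,
# 2) cut a short window around each burst,
# 3) pass each window to a trained classifier (stubbed out here).
import numpy as np

SR = 44_100              # sample rate (assumed)
WINDOW = int(0.2 * SR)   # ~200 ms slice around each press (assumed)

def find_keystrokes(audio: np.ndarray, threshold: float = 0.1) -> list[np.ndarray]:
    """Return one audio slice per detected keystroke, using a crude energy gate."""
    slices, i = [], 0
    while i < len(audio):
        if abs(audio[i]) > threshold:        # onset of a burst
            slices.append(audio[i:i + WINDOW])
            i += WINDOW                      # skip past this press
        else:
            i += 1
    return slices

def classify_keystroke(slice_: np.ndarray) -> str:
    """Placeholder for a trained deep-learning classifier."""
    raise NotImplementedError

# recovered = "".join(classify_keystroke(s) for s in find_keystrokes(recording))
```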

    Deep learning (DL) is a subset of machine learning in which computers are taught to process data in a way similar to the human brain—essentially using a multilayered “neural network” to “learn” from large amounts of data and produce accurate insights and predictions. Deep-learning models can recognize patterns in pictures, text, sounds, and other data. This type of A.I. powers everyday products such as digital assistants like Amazon's Alexa and voice-enabled TV remotes, as well as newer technologies like self-driving cars.
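
    As a rough idea of what a deep-learning keystroke classifier can look like, here is a toy PyTorch model that maps the spectrogram of a single key press to a score for each candidate key. The layer sizes and the 36-key alphabet are arbitrary illustrative choices, not the architecture used in the paper.

```python
# Toy example of a deep-learning keystroke classifier (illustrative only):
# a small convolutional network that maps the spectrogram of one key press
# to a score for each candidate key. Sizes and the 36-key set are arbitrary.
import torch
import torch.nn as nn

NUM_KEYS = 36  # e.g. a-z plus 0-9 (assumed alphabet)

class KeystrokeNet(nn.Module):
    def __init__(self, num_keys: int = NUM_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_keys)
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(spectrogram))

model = KeystrokeNet()
dummy = torch.randn(1, 1, 64, 44)  # one fake mel-spectrogram
print(model(dummy).shape)          # torch.Size([1, 36])
```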

    “With the recent developments in both the performance of (and access to) both microphones and DL models, the feasibility of an acoustic attack on keyboards begins to look likely,” the paper said.

    The paper, published on August 3, was authored by Joshua Harrison, a software development engineer at Amazon who recently graduated with a Master of Engineering from Durham University, as well as University of Surrey lecturer Ehsan Toreini and Royal Holloway University of London senior lecturer Maryam Mehrnezhad.

    Mitigating the ever-developing threat

    Laptops are especially ideal targets for these attacks because of their portability, according to the paper. People often take their laptops to work in public spaces like libraries, coffee shops, and study areas, where the sound of typing can easily be recorded without notice from the targeted user.

    One of the main concerns of the paper is that people are unaware of these kinds of attacks, so they do nothing to prevent them.

    “The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output,” the paper said. “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”

    One way to mitigate the threat of this attack is to use stronger passwords that mix character types: special characters, upper- and lowercase letters, and numbers. Passwords made of full words may be more easily deduced and are therefore at greater risk of attack.
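
    For instance, a random mixed-character password can be generated with Python's standard secrets module (a generic illustration, not something from the paper):

```python
# Generate a random password mixing cases, digits, and punctuation,
# avoiding dictionary words (generic illustration, not from the paper).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # e.g. 'q>7Rf#Lx2@wZ!m9K'
```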

    And while the pressing of the shift key can be recognized by A.I., it cannot yet recognize the “release peak” of the shift key amidst the sound of other keys, “doubling the search space of potential characters following a press of the shift key,” the paper said.
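
    In other words, if an attacker can hear that shift was pressed but cannot tell when it was released, every alphabetic character typed afterwards could be either case, so the number of candidate strings doubles with each such character. A small illustration (not from the paper):

```python
# If the shift release can't be located, each character after the shift press
# may be upper- or lowercase, so candidates double with every such character.
from itertools import product

def case_candidates(recovered: str) -> list[str]:
    """All upper/lower variants of the recovered (case-unknown) characters."""
    options = [(c.lower(), c.upper()) if c.isalpha() else (c,) for c in recovered]
    return ["".join(combo) for combo in product(*options)]

print(len(case_candidates("pass")))  # 16 candidates = 2**4
```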

    Another simple way to deter these kinds of attacks is by using two-factor authentication. This is a security method that requires two forms of identification to access accounts and data. For instance, the first factor may be a password and the second may be an account activity confirmation through an email or on a separate device.
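
    The second factor is often a short-lived one-time code. As a rough illustration of how such codes are commonly produced (standard TOTP per RFC 6238, nothing specific to the paper), the server and the user's device share a secret and each derive a six-digit code from the current 30-second time step:

```python
# Rough illustration of a time-based one-time password (TOTP, RFC 6238):
# both sides share a secret and derive a 6-digit code from the current
# 30-second time step.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds
```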

    Biometric authentication, like fingerprint scans and facial recognition, can also lessen the risk of an attack.

    But as A.I. continues to evolve, so too will these attacks. The authors of the paper recommended that future studies analyze the use of smart speakers to record keystrokes, “as these devices remain always-on and are present in many homes.”

    The authors also suggested that future research should explore the implementation of a language model used in tandem with a deep-learning A.I. Language models, like the viral chatbot ChatGPT, are trained on large amounts of text to recognize patterns in language.

    A language model “could improve keystroke recognition when identifying defined words as well as an end-to-end real-world implementation of an ASC [acoustic side channel] attack on a keyboard,” the paper said.
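
    The rough idea is that the acoustic classifier proposes several candidate letters for each keystroke, and a language model then favors the sequence that looks most like real text. A small sketch with made-up probabilities (not the authors' implementation):

```python
# Sketch of combining per-keystroke acoustic guesses with a (tiny) language
# model: pick the word whose letters the classifier liked and that the
# language model also considers plausible. Probabilities are made up.
import math

# Acoustic classifier output: candidate letters with probabilities, per keystroke
acoustic = [
    {"c": 0.5, "e": 0.4},
    {"a": 0.6, "s": 0.3},
    {"t": 0.7, "r": 0.2},
]

# Toy "language model": how plausible each whole word is
word_prob = {"cat": 0.02, "ear": 0.001, "car": 0.01, "eat": 0.015}

def score(word: str, lm_weight: float = 1.0) -> float:
    acoustic_score = sum(math.log(frame.get(ch, 1e-9)) for frame, ch in zip(acoustic, word))
    return acoustic_score + lm_weight * math.log(word_prob.get(word, 1e-9))

print(max(word_prob, key=score))  # best joint guess: 'cat'
```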

  • AFK BRB Chocolate@lemmy.world
    ↑ 2 · 1 year ago

    Whether the sound is even audible enough probably depends a lot on the keyboard. Membrane keyboards are probably a lot harder to hear than mechanical ones.