As we’re still dealing with the Equifax breach that leaked the personal information of 143 million people, it’s also been revealed that all 3 billion Yahoo! accounts were affected by the company’s infamous 2013 security breach. As a result, a Senate committee is demanding that representatives from Yahoo! and Equifax testify about these breaches in order to determine whether proper security precautions were in place.
Here’s the problem. Even if these companies had followed security best practices, online users would still be under constant threat from cybercriminals. In fact, researchers have found that hackers can guess a typical password in as little as 45 minutes. That’s because humans are really bad at generating random passwords. On top of that, technology has made classic password creation completely obsolete.
Case in point: artificial intelligence.
Researchers at the Stevens Institute and the New York Institute of Technology have leveraged the power of artificial intelligence (AI), combined with other tools, to create a program that can discover over a quarter of the passwords from a LinkedIn set of around 43 million profiles.
That’s a big deal when it comes to security.
Previously, password-guessing programs like John the Ripper and hashCat used techniques like brute-forcing, working through every possible combination of characters (AAAAA, AAAAB, and so on) until they hit the correct one. This technique is computationally intensive and relies on relatively basic algorithms.
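To make the brute-force idea concrete, here’s a minimal Python sketch. It is an illustration only, not how John the Ripper or hashCat are actually implemented: the SHA-256 hash, the four-character length, and the uppercase-only alphabet are all simplifying assumptions.

```python
import hashlib
import itertools
import string

def brute_force(target_hash: str, length: int, alphabet: str = string.ascii_uppercase):
    """Enumerate every combination (AAAA, AAAB, ...) until one hashes to the target."""
    for combo in itertools.product(alphabet, repeat=length):
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # exhausted the keyspace without a match

# Recovering a four-character password means at most 26^4 = 456,976 tries.
target = hashlib.sha256(b"ABBA").hexdigest()
print(brute_force(target, 4))  # ABBA
```

Even this toy shows why the approach doesn’t scale: every extra character multiplies the keyspace by the alphabet size, which is exactly why attackers moved on to smarter techniques.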
Other approaches have been more effective. For example, some tools take a dictionary of words and commonly used passwords, along with previously leaked passwords, and turn them into hashes to check against a stolen list of hashes. Even though some of these programs have been able to guess 90% of passwords, they still require years of manual coding before they can attack.
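A sketch of that dictionary-style attack, again heavily simplified: real dumps use salted, purpose-built password hashes, while this toy uses unsalted SHA-256 and a four-word list purely for illustration.

```python
import hashlib

def dictionary_attack(stolen_hashes: set, wordlist) -> dict:
    """Hash each candidate word and look it up among the stolen hashes."""
    cracked = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        if digest in stolen_hashes:
            cracked[digest] = word
    return cracked

# Toy data: a few common passwords and a "stolen" hash dump.
wordlist = ["password", "123456", "letmein", "dragon"]
stolen = {hashlib.sha256(w.encode()).hexdigest() for w in ["letmein", "hunter2"]}
print(dictionary_attack(stolen, wordlist))  # cracks "letmein"; "hunter2" survives
```

The catch is visible even here: the attack only ever finds passwords someone thought to put in the wordlist, which is the limitation the AI approach below tries to remove.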
The new technique from Stevens, however, trained software to stay one step ahead so that it could predict the passwords people either will use or are using right now. This actually wasn’t all that complex. It just based these predictions on what users have done in the past.
The team’s program, called PassGAN, uses a generative adversarial network: two machine-learning systems that train each other to simulate how humans think. A “generator” produces artificial outputs (like images) that resemble real examples (actual photos). A “discriminator,” on the other hand, attempts to distinguish the real examples from the fakes. The two keep refining each other until the generator becomes a skilled counterfeiter.
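The generator/discriminator loop can be sketched in miniature. The toy below is a crude stand-in for illustration only, not PassGAN’s architecture: it swaps the neural networks for simple character counts, and for simplicity the discriminator is fixed rather than trained in tandem with the generator.

```python
import random
from collections import Counter

# "Real" passwords the discriminator learns from (a stand-in for the RockYou set).
REAL = ["password1", "qwerty12", "dragon99", "letmein1", "monkey12"]

class Discriminator:
    """Scores a string by the fraction of its bigrams seen in real passwords."""
    def __init__(self, real):
        self.bigrams = Counter(p[i:i + 2] for p in real for i in range(len(p) - 1))
    def score(self, s):
        hits = sum(1 for i in range(len(s) - 1) if s[i:i + 2] in self.bigrams)
        return hits / max(len(s) - 1, 1)

class Generator:
    """Samples characters from a weighted pool; boosting the characters of
    high-scoring guesses is a crude stand-in for a gradient update."""
    def __init__(self, alphabet="abcdefghijklmnopqrstuvwxyz0123456789"):
        self.weights = Counter({c: 1 for c in alphabet})
    def sample(self, length=8):
        chars, wts = zip(*self.weights.items())
        return "".join(random.choices(chars, weights=wts, k=length))
    def reinforce(self, s):
        for c in s:
            self.weights[c] += 1

random.seed(0)
disc = Discriminator(REAL)
gen = Generator()
for step in range(500):                      # the adversarial refinement loop
    batch = [gen.sample() for _ in range(8)]
    best = max(batch, key=disc.score)        # guess the discriminator finds most "real"
    gen.reinforce(best)                      # generator drifts toward realistic guesses
```

After a few hundred rounds the generator’s samples lean toward the letters and digits that actually appear in the training passwords, which is the basic dynamic the real system exploits at far greater scale.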
For this study, the researchers took their machine-learning system and fed it more than 32 million plaintext passwords taken from the 2010 leak of the gaming site RockYou. They then let it determine the rules that people were using to generate passwords and attempted to use this information to crack a hashed list of passwords from the 2016 LinkedIn breach.
Initially, the AI on its own correctly guessed around 47% of the RockYou passwords and 12% of the LinkedIn passwords. It outperformed John the Ripper, which cracked just 7% of the LinkedIn passwords. But it still trailed hashCat, which cracked 23% of the RockYou passwords and 18% of the LinkedIn passwords, respectively.
However, when PassGAN and hashCat were combined, they were able to crack an impressive 27% of the passwords in the LinkedIn set.
“Passwords tend to follow rules,” says Paolo Gasti, a computer scientist at Stevens and paper co-author. “What we’re finding is that deep neural networks might be able to learn these rules implicitly. If you show them tens of millions of passwords, they’ll eventually realize very complicated functions that describe how different sets of users are generating passwords. We don’t tell the deep learning network what these rules are, they can look at the data and learn that themselves.”
However, Thomas Ristenpart, a computer scientist who studies computer security at Cornell Tech in New York City, wonders about the practicality of PassGAN. “It’s unclear to me if one needs the heavy machinery of GANs to achieve such gains.”
Giuseppe Ateniese, a co-author of the study and computer scientist at Stevens, disagrees. Since PassGAN makes its own rules, it can generate an unlimited number of password guesses, while hashCat and similar tools are limited to a fixed set.
He compares PassGAN to AlphaGo, the Google DeepMind program that defeated a human champion at a board game by using deep learning algorithms. “AlphaGo was devising new strategies that experts had never seen before,” Ateniese says. “So I personally believe that if you give enough data to PassGAN, it will be able to come up with rules that humans cannot think about.”
Regardless, it’s an interesting development that could eventually assist in beating bad guys at their own game.
Until then, it’s still recommended that you make sure your passwords are secure by frequently changing them, not allowing your computer to remember passwords, securely storing passwords, using two-factor authentication, being on the lookout for phishing attempts, and using complex passwords.
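On that last point, the one password humans reliably get wrong is the random one, and it’s easy to delegate. A minimal Python sketch using the standard library’s cryptographic random source (the 16-character default is an arbitrary choice, not a standard):

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Draw each character from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # different every run, and nothing for PassGAN to learn from
```

A password built this way follows no human rule at all, which is precisely what makes it resistant to tools that learn human rules.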