
Microsoft, Google Use Artificial Intelligence to Fight Hackers


(Bloomberg) -- Last year, Microsoft Corp.’s Azure security team detected suspicious activity in the cloud computing usage of a large retailer: One of the company’s administrators, who usually logs on from New York, was trying to gain entry from Romania. And no, the admin wasn’t on vacation. A hacker had broken in.

Microsoft quickly alerted its customer, and the attack was foiled before the intruder got too far.

Chalk one up to a new generation of artificially intelligent software that adapts to hackers’ constantly evolving tactics. Microsoft, Alphabet Inc.’s Google, Amazon.com Inc. and various startups are moving away from solely using older “rules-based” technology designed to respond to specific kinds of intrusion and deploying machine-learning algorithms that crunch massive amounts of data on logins, behavior and previous attacks to ferret out and stop hackers.

“Machine learning is a very powerful technique for security―it’s dynamic, while rules-based systems are very rigid,” says Dawn Song, a professor at the University of California at Berkeley’s Artificial Intelligence Research Lab. “It’s a very manual intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily.”

Hackers are themselves famously adaptable, of course, so they too could harness machine learning to create fresh mischief and overwhelm the new defenses. For example, they could figure out how companies train their systems and use the data to evade or corrupt the algorithms. The big cloud services companies are painfully aware that the foe is a moving target but argue that the new technology will help tilt the balance in favor of the good guys.

“We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state,” says Amazon Chief Information Security Officer Stephen Schmidt. He acknowledges that it’s impossible to stop all intrusions but says his industry will “get incrementally better at protecting systems and make it incrementally harder for attackers.”

Before machine learning, security teams used blunter instruments. For example, if someone based at headquarters tried to log in from an unfamiliar locale, they were barred entry. Or spam emails featuring various misspellings of the word “Viagra” were blocked. Such systems often work.

But they also flag lots of legitimate users―as anyone prevented from using their credit card while on vacation knows. A Microsoft system designed to protect customers from fake logins had a 2.8 percent rate of false positives, according to Azure Chief Technology Officer Mark Russinovich. That might not sound like much but was deemed unacceptable since Microsoft’s larger customers can generate billions of logins.

To do a better job of figuring out who is legit and who isn't, Microsoft technology learns from the data of each company using it, customizing security to that client’s typical online behavior and history. Since rolling out the service, the company has managed to bring down the false-positive rate to 0.001 percent. This is the system that outed the intruder in Romania.
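The contrast between a rigid rule and a per-customer learned profile can be sketched in a few lines of Python. Everything below (class name, counts, smoothing) is illustrative only, not Microsoft's actual system; the point is that "normal" is estimated from each user's own history rather than hard-coded:

```python
from collections import Counter

# Hypothetical per-user profile of login countries. Instead of a fixed rule
# ("block all foreign logins"), the profile learns what is typical for this
# user and scores new logins by how unusual they are.
class LoginProfile:
    def __init__(self):
        self.countries = Counter()
        self.total = 0

    def observe(self, country):
        self.countries[country] += 1
        self.total += 1

    def anomaly_score(self, country):
        # Laplace-smoothed estimate of how rare this country is for the user;
        # closer to 1.0 means more anomalous.
        p = (self.countries[country] + 1) / (self.total + len(self.countries) + 1)
        return 1 - p

profile = LoginProfile()
for _ in range(200):
    profile.observe("US")   # the admin usually logs in from New York
profile.observe("GB")       # one business trip

print(profile.anomaly_score("US"))  # low: familiar location
print(profile.anomaly_score("RO"))  # high: never-seen location -> alert
```

A real system would weigh many more signals (device, time of day, network), but the retraining Song describes amounts to re-running this kind of profile update on fresh data.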

Training these security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title of Data Cowboy. Siva Kumar joined Microsoft six years ago from Carnegie Mellon after accepting a second-round interview because his sister was a fan of “Grey’s Anatomy,” the medical drama set in Seattle. He manages a team of about 18 engineers who develop the machine-learning algorithms and then make sure they’re smart and fast enough to thwart hackers and work seamlessly with the software systems of companies paying big bucks for Microsoft cloud services.

Siva Kumar is one of the people who gets the call when the algorithms detect an attack. He has been woken in the middle of the night, only to discover that Microsoft’s in-house “red team” of hackers were responsible. (They bought him cake to compensate for lost sleep.)

The challenge is daunting. Millions of people log into Google’s Gmail each day alone. “The amount of data we need to look at to make sure whether this is you or an impostor keeps growing at a rate that is too large for humans to write rules one by one,” says Mark Risher, a product management director who helps prevent attacks on Google’s customers.

Google now checks for security breaches even after a user has logged in, which comes in handy to nab hackers who initially look like real users. With machine learning able to analyze many different pieces of data, catching unauthorized logins is no longer a matter of a single yes or no. Rather, Google monitors various aspects of behavior throughout a user’s session. Someone who looks legit initially may later exhibit signs they are not who they say they are, letting Google’s software boot them out with enough time to prevent further damage.
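The idea of replacing a single yes/no at login with continuous monitoring can be sketched as a running risk score. The event names, weights and threshold below are all invented for illustration; nothing here reflects Google's actual signals:

```python
# Hypothetical continuous session check: rather than deciding once at login,
# accumulate risk signals throughout the session and cut it off when the
# running score crosses a threshold.
RISK_WEIGHTS = {                 # assumed signal weights, illustration only
    "new_device": 0.3,
    "impossible_travel": 0.5,
    "bulk_download": 0.4,
    "normal_activity": -0.1,     # benign behavior slowly decays the score
}

def session_risk(events, threshold=0.7):
    score = 0.0
    for event in events:
        score = max(0.0, score + RISK_WEIGHTS.get(event, 0.0))
        if score >= threshold:
            return "terminate_session", score
    return "allow", score

# A user who looks legit at login but later behaves suspiciously gets booted
# mid-session, before further damage is done.
print(session_risk(["normal_activity", "normal_activity"]))
print(session_risk(["new_device", "impossible_travel", "bulk_download"]))
```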

Amazon’s Macie service uses machine learning to find sensitive data amid corporate info from customers like Netflix and then watches who is accessing it and when, alerting the company to suspicious activity.

Besides using machine learning to secure their own networks and cloud services, Amazon and Microsoft are providing the technology to customers. Amazon’s GuardDuty monitors customers’ systems for malicious or unauthorized activity. Many times the service discovers employees doing things they shouldn’t―such as putting Bitcoin mining software on their work PCs.

Dutch insurance company NN Group NV uses Microsoft’s Advanced Threat Protection to manage access for its 27,000 workers and close partners, while keeping everyone else out. Earlier this year, Wilco Jansen, the company’s manager of workplace services, showed employees a new feature in Microsoft’s Office cloud software that blocks so-called CxO spamming, whereby spammers pose as a senior executive and instruct the receiver to transfer funds or share personal information.

Ninety minutes after the demonstration, the security operations center called to report that someone had tried that exact attack on NN Group’s CEO. “We were like, ‘Oh, this feature could already have prevented this from happening,’” Jansen says. “We need to be on constant alert, and these tools help us see things that we cannot manually follow.”

Machine-learning security systems don’t work in all instances, particularly when there is insufficient data to train them. And researchers and companies worry constantly that they can be exploited by hackers.

For example, they could mimic users’ activity to foil algorithms that screen for typical behavior. Or hackers could tamper with the data used to train the algorithms and warp it for their own ends―so-called poisoning. That’s why it’s so important for companies to keep their algorithmic criteria secret and change the formulas regularly, says Battista Biggio, a professor at the University of Cagliari’s Pattern Recognition and Applications Lab in Sardinia, Italy.
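Poisoning can be illustrated with a toy detector. Suppose a system learns a "normal request rate" threshold from training samples; an attacker who can slip values into the training set gradually shifts that threshold until the real attack looks normal. All numbers here are made up for the sketch:

```python
# Toy illustration of training-data poisoning. The detector learns a
# threshold (mean + k standard deviations) from supposedly benign samples
# of requests per minute, then flags anything above it.
def train_threshold(samples, k=3.0):
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean + k * var ** 0.5

clean = [10, 12, 11, 9, 13, 10, 12]   # hypothetical benign traffic
attack_rate = 60

print(attack_rate > train_threshold(clean))      # attack stands out: flagged

# Poisoning: the attacker feeds high-but-plausible values into the training
# data, inflating both the mean and the spread the detector learns.
poisoned = clean + [40, 45, 50, 55]
print(attack_rate > train_threshold(poisoned))   # attack now under threshold
```

This is exactly why Biggio recommends keeping criteria secret and retraining on vetted data: the defense fails quietly, with no change visible in the code.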

So far, these threats feature more in research papers than in real life. But that’s likely to change. As Biggio wrote in a paper last year: “Security is an arms race, and the security of machine learning and pattern recognition systems is not an exception.”

Source: https://www.codesec.net/view/628460.html
