News

MIT releases artificial intelligence system to prevent cybercrime

Wednesday 20 April 2016 10:40 CET | News

Researchers from the Massachusetts Institute of Technology have released a new artificial intelligence system called AI2 to prevent cyberattacks.

The team from the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning startup PatternEx developed the platform, which can identify 85% of cyberattacks and reduce the number of false positives by a factor of five.

AI2 goes through data and spots suspicious activity using unsupervised machine learning. From there, human reviewers check the flagged events for signs of a security breach, an approach that predicts attacks more precisely and reduces the need to chase bogus intelligence leads.
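As a rough illustration of that workflow (the article does not describe MIT's actual implementation), the sketch below uses a generic unsupervised detector to score events and surfaces only the most anomalous ones to a human analyst. The model, the placeholder features, and the review budget are assumptions for the example, not the researchers' actual choices.

```python
# Hedged sketch: unsupervised scoring of logged events, with only the
# top-ranked anomalies handed to a human reviewer (not MIT's code).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Placeholder feature matrix: one row per logged event (e.g. login counts,
# bytes transferred, failed-auth rate). Real features would come from logs.
events = rng.normal(size=(10_000, 8))

detector = IsolationForest(random_state=0).fit(events)
scores = -detector.score_samples(events)      # higher = more anomalous

TOP_K = 200                                   # assumed daily review budget
flagged = np.argsort(scores)[-TOP_K:]         # events sent to human reviewers
print(f"{len(flagged)} events flagged for analyst review")
```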

AI2 uses three machine-learning algorithms to detect suspicious events, but like other AI systems it needs human feedback to verify its findings; the analysts' verdicts feed back into the models, so the system is constantly improving through what the team calls a ‘continuous active learning system’.
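The feedback loop could be sketched as below, continuing the assumptions of the previous example: each cycle, the analyst's verdicts on the flagged events are used to train a supervised ranker that re-scores future events. The choice of classifier and the labelling interface are illustrative placeholders, not the system described in the paper.

```python
# Hedged sketch of a continuous active learning loop: analyst labels on
# flagged events retrain a supervised model that refines later rankings.
import numpy as np
from sklearn.linear_model import LogisticRegression

labeled_X, labeled_y = [], []                      # grows as analysts confirm or dismiss alerts

def daily_cycle(events, scores, analyst, top_k=200):
    """One feedback round: flag top-k events, collect analyst verdicts,
    retrain a supervised ranker, and return refreshed scores."""
    flagged = np.argsort(scores)[-top_k:]          # most anomalous events this cycle
    for i in flagged:
        labeled_X.append(events[i])
        labeled_y.append(analyst(i))               # analyst returns 1 = attack, 0 = benign
    if len(set(labeled_y)) > 1:                    # need both classes before fitting
        model = LogisticRegression(max_iter=1000).fit(labeled_X, labeled_y)
        return model.predict_proba(events)[:, 1]   # supervised scores for the next cycle
    return scores
```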

For computer science professor Nitesh Chawla of the University of Notre Dame, the research is a potential ‘line of defense’ against fraud, account takeover, service abuse, and other attacks faced by consumer-oriented systems today.

The findings were presented in a research paper at the IEEE International Conference on Big Data Security, held in New York City in March.



Keywords: artificial intelligence, cybercrime, machine-learning, false positives, fraud, account takeover, MIT
Categories: Fraud & Financial Crime
Countries: World