More than half of organizations have adopted AI for security efforts, but a majority are more confident in results verified by humans, according to WhiteHat Security.
Security professionals need a varied bag of tricks to keep up with savvy and sophisticated cybercriminals. Artificial intelligence is one valuable weapon in the arsenal, as it can handle certain tasks faster and more efficiently than humans can. But AI being AI, it's far from perfect. That's why many security pros still want the human element to play a significant role in their security defense, according to a survey from WhiteHat Security.
SEE: The 10 most important cyberattacks of the decade (free PDF) (TechRepublic)
Based on a survey of 102 industry professionals conducted at the RSA Conference 2020, WhiteHat’s “AI and Human Element Security Sentiment Study” found that more than half of the respondents are using AI or machine learning (ML) in their security efforts. More than 20% said that AI-based tools have made their cybersecurity teams more efficient by eliminating a huge number of more mundane tasks.
Further, almost 40% of respondents said they feel their stress levels have dropped since adding AI tools to their security process. And among those, 65% said that AI tools let them focus more on mitigating and preventing cyberattacks than they could previously.
However, incorporating AI doesn’t take human beings out of the security equation; just the opposite. A majority of those polled agreed that the human element offers skills that AI and ML can’t match.
Almost 60% of the respondents said they remain more confident in cyberthreat findings verified by humans than by AI. When asked why they prefer the human touch, 30% pointed to intuition as the most important human element, 21% mentioned the role of creativity, and almost 20% cited previous experience and frame of reference as the most critical advantage of humans over AI.
On its end, WhiteHat described three reasons it supplements its own AI and machine learning systems with human verification:
- To ensure that vulnerabilities that can’t automatically be verified by the machine learning subsystem are verified by humans.
- To add new human curated vulnerabilities to the 150+ terabyte pool of attack vector data for future machine learning endeavors.
- To perform quality control on a sample of the automatically verified vulnerabilities and provide feedback to fine-tune machine learning models as needed.
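The workflow those three bullets describe is a standard human-in-the-loop triage loop. The sketch below is purely illustrative and is not based on WhiteHat's actual implementation; the threshold, sampling rate, and function names are all hypothetical assumptions.

```python
import random

# Hypothetical values, not WhiteHat's actual parameters:
AUTO_VERIFY_THRESHOLD = 0.9  # model confidence needed to skip human review
QC_SAMPLE_RATE = 0.1         # fraction of auto-verified findings spot-checked

def triage(findings, human_review, qc_sample_rate=QC_SAMPLE_RATE):
    """Route (finding, model_confidence) pairs through a human-in-the-loop.

    Findings the model can't verify confidently go to a human analyst;
    a random sample of auto-verified findings is also re-checked for
    quality control, and human-confirmed findings are collected as
    curated examples for future model training.
    """
    verified, training_pool = [], []
    for finding, confidence in findings:
        if confidence >= AUTO_VERIFY_THRESHOLD:
            verified.append(finding)
            if random.random() < qc_sample_rate:
                human_review(finding)  # spot-check; feedback tunes the model
        else:
            # Machine couldn't verify it, so a human does.
            if human_review(finding):  # returns True if confirmed real
                verified.append(finding)
                training_pool.append(finding)  # human-curated training data
    return verified, training_pool
```

A low-confidence finding that a human confirms does double duty here: it ships as a verified vulnerability and it enriches the training pool, which matches the second bullet above.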