The risk associated with artificial intelligence (AI) is a popular topic these days. Will AI take human jobs and become an enemy of the people? Will killer robots with no moral compass be built on AI? Will AI grow powerful enough to exert control over humans?
These are all important issues, but those tasked with cybersecurity currently have their own growing worries about AI. A Neustar report finds that 82 percent of 301 senior technology and security workers say they are concerned about the possibility of attackers using AI against their company, with 50 percent ranking stolen data as their greatest worry. The loss of customer trust is cited by 19 percent as the greatest concern, while 16 percent ranked business performance and cost implications as topping their list.
As a result of their concerns, nearly 60 percent of those surveyed are “apprehensive” about adopting AI technology within their organizations, the report finds.
“We’re at a crossroads,” says Rodney Joffe, head of NISC and Neustar senior vice president and fellow. “Organizations know the benefits, but they are also aware that today’s attackers have unique capabilities to cause destruction with that same technology. As a result, they’ve come to a point where they’re unsure if AI is a friend or foe.”
Among the fears cited in the report: DDoS attacks, system compromise and ransomware. For those concerned about such issues, there are some steps to take:
Finally, experts advise that preventing AI attacks must also involve ongoing employee training, so that staff understand the dangers and how to prevent them, whether that means avoiding phishing emails or insecure servers. As Neil Jacobstein, head of AI at Singularity University, puts it: “It’s not artificial intelligence I’m worried about, it’s human stupidity.”