Artificial intelligence (AI) is transforming many industries by providing, among other things, automation, insights and decision-making support. As with many other technologies, AI can be used both for good and for harm; it is a “dual-use” technology. In cybersecurity we have seen for several years now that AI is used on the attack side as well as on the defense side. On the defense side, at least, this is clearly visible in the growing number of vendors and products claiming AI capabilities.

Dr. Roman Yampolskiy, professor at the University of Louisville and Director of its CyberSecurity Lab, conducts research at the exciting intersection of AI and cybersecurity. AVANTEC was very happy to have him as a keynote speaker at this year’s 19th edition of our customer event IT-Security INSIDE.


Insights and thoughts about AI and cybersecurity

Here’s a quick summary of the insights and thoughts from Roman’s talk that I found most noteworthy:

  • Artificial intelligence is already present in our daily lives (Google, Siri, self-driving cars) – once AI is integrated and accepted by users, we no longer really talk about it explicitly – it simply becomes part of the user experience (the same way we no longer perceive glasses as technology).
  • The accelerating pace of change and the exponential growth in computing power will lead to AI surpassing human brainpower at some point in the near future (still in our lifetimes) and reaching “super intelligence” (see also futurist Ray Kurzweil’s concept of the singularity).
  • On the one hand, super intelligence will have positive impacts and solve some of humanity’s biggest problems (e.g. a cure for cancer), but it will most certainly also have negative impacts. Both upsides and downsides are hard to predict and partly lie in the realm of “unknown unknowns”.
  • Concerns about super intelligence and its potentially devastating impact on humanity have been raised by researchers and technology entrepreneurs such as Stephen Hawking, Bill Gates and Elon Musk.
  • Humanity currently has a very limited view of what forms of intelligence (or “minds”) are conceivable. We have a strong anthropomorphic bias regarding how we think about minds, yet the solution space of possible minds is very large and potentially contains minds of very different structures and capabilities.
  • There are several pathways to dangerous AI: deliberate actions by unethical actors (e.g. cybercriminals using AI in attacks), side effects of poor design (e.g. bad data, wrong goals) or runaway self-improvement processes (e.g. emergent phenomena).
  • Purposefully dangerous AI could be used, for example, to build cyber-weapons, design killer robots or control people (some countries are working on this). The list of potentially harmful applications is very long.
  • AI is already able to fake biometric features (faces, voices, etc.) and will challenge our current understanding of identification and authentication. Proof of identity might move towards features that are (currently) still harder to fake, such as DNA, or towards behaviour-based approaches (see the sketch after this list).
  • Research into how to mitigate the negative impact of AI dates back to the 1990s. The spectrum of proposed measures is very broad and includes prevention of AI development (“relinquish technology”), limiting deployment, formal verification, self-monitoring approaches and integration into society. A famous example of the latter is Asimov’s well-known “Three Laws of Robotics”.
  • The failures of AI systems will grow in frequency and severity in proportion to AI’s capability.
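
To make the behaviour-based idea a bit more concrete: keystroke dynamics verify a user by how they type rather than by what they know or look like. The following Python sketch is a deliberately minimal, hypothetical illustration – the feature choice, the z-score test and the threshold are my own assumptions, not anything Roman presented – of enrolling a typing-rhythm profile and checking new samples against it:

    import statistics

    # Toy behaviour-based check: compare the timing gaps between a user's
    # keystrokes against a previously enrolled profile. Illustrative only.

    def keystroke_profile(intervals_ms):
        """Summarise inter-keystroke intervals (ms) as (mean, stdev)."""
        return statistics.mean(intervals_ms), statistics.stdev(intervals_ms)

    def matches_profile(sample_ms, profile, tolerance=2.0):
        """Accept the sample if its mean lies within `tolerance` standard
        deviations of the enrolled mean (a simple z-score test)."""
        mean, stdev = profile
        z = abs(statistics.mean(sample_ms) - mean) / stdev
        return z <= tolerance

    # Enrolment: intervals (ms) recorded while the legitimate user types.
    enrolled = keystroke_profile([112, 98, 105, 120, 101, 95, 110, 108])

    # Verification: new typing samples are checked against the profile.
    print(matches_profile([109, 102, 115, 99, 107], enrolled))  # True
    print(matches_profile([45, 40, 52, 38, 48], enrolled))      # False

A real system would of course use many more features and a proper classifier; the point is simply that identity can be tied to behaviour, which is (currently) harder to fake than a static biometric image or recording.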

Roman’s talk was fast-paced, entertaining and touched upon many very interesting points – however, he left the audience with a pretty bleak (“we’re all going to die, but not just yet”) picture of the future and far more open questions than answers.


Links

Further information about IT-Security INSIDE #19 can be found at: www.avantec.ch/inside/