Interview with Anastassia Lauterbach



Anastassia Lauterbach is CEO and founder of 1AU-Ventures and a Board Director at Dun & Bradstreet (D&B) and Wirecard. We spoke with her about how cybersecurity and AI are related to each other.

First of all, could you briefly introduce the work and business model of your company 1AU-Ventures?

Lauterbach: "AU" stands for "Astronomical Unit", and 1 AU is the distance between our planet, the Earth, and the Sun. I was a hobby astronomer as a kid. I adopted the brand 1AU-Ventures to emphasize the long journey any seed idea, transformational project for a traditional company, or product travels before becoming big, recognizable, and influential.

I offer Corporate Advisory services, where my colleagues and I support the transformation of traditional businesses through digital technologies. We prepare Corporate Boards to ask good questions about new risks, such as adversarial attacks on Machine Learning systems, and about Cybersecurity Governance. On the start-up side, I work with Evolution Equity Partners, a boutique investment firm specialized entirely in cybersecurity, with headquarters in New York and Zurich. Besides that, I work with Analytics Ventures, a seed fund and incubator focused on Machine Learning and based in San Diego. We have an AI Lab with 17 PhDs. Given that there are only about ten thousand data scientists worldwide, seventeen is not that small a number.

You are an expert for AI and Cybersecurity. In which way are both topics related to each other?

Lauterbach: My first encounter with AI was at university, in computational linguistics. In business I came to Machine Learning through cybersecurity. More and more businesses are adopting Machine Learning to support their technology stack. One brilliant example is DFLabs, a specialist in forensics.

Should cybersecurity be considered right from the start when developing AI applications, even more intensively than with other smart applications and devices? Which scenarios are imaginable regarding cyber-attacks on AI applications?

Lauterbach: Cybersecurity should be part of the equation in any product development.

AI poses unique cybersecurity issues because machines are being used to train other machines, thus scaling the exposure of compromised pieces of code. AI algorithms can contain bugs, biases, or even malware that are hard to detect. One example is the Mirai malware behind the DDoS attack of October 2016, which hijacked several hundred thousand devices.

Like any technology, AI can also be used by criminal groups. Understanding their motives and techniques is important for preventing attacks and detecting them in a timely manner. As an example, a group of computer scientists from Cyxtera Technologies, a cybersecurity firm based in Florida, has built DeepPhish, a Machine Learning system that generates phishing URLs capable of evading detection by security algorithms. The system was trained on actual phishing websites.

The proliferation of Machine Learning solutions for cybersecurity comes with certain risks if AI practitioners rush to bring a system online. Some of the training data might not be thoroughly scrubbed of anomalies, causing an algorithm to miss an attack. Experienced hackers can also switch the labels on code that has been tagged as malware, so that the poisoned model learns to treat malicious code as benign. A diverse set of algorithms, rather than dependency on one single master algorithm, might be a way to mitigate this risk: if one algorithm is compromised, the results from the others can still reveal the anomaly.
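
As an illustration of that diversity principle, the minimal sketch below lets three independently trained anomaly detectors vote on a sample, so a single poisoned or compromised model cannot suppress an alert on its own. The synthetic traffic data, the choice of detectors, and the two-of-three threshold are assumptions made for the example, not a description of any particular vendor's system:

```python
# A minimal sketch, not production code: three diverse anomaly detectors
# vote on a sample, so one compromised model alone cannot hide an attack.
# The synthetic data and the two-of-three threshold are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
benign = rng.normal(0.0, 1.0, size=(500, 4))      # feature vectors of normal traffic
sample = np.array([[6.0, 6.0, 6.0, 6.0]])         # a clearly anomalous observation

detectors = [
    IsolationForest(random_state=0).fit(benign),
    OneClassSVM(nu=0.05).fit(benign),
    LocalOutlierFactor(novelty=True).fit(benign),  # novelty=True enables predict() on new data
]

# Each detector returns -1 for "anomaly" and +1 for "normal".
votes = [d.predict(sample)[0] for d in detectors]
flagged = votes.count(-1) >= 2                     # majority vote across diverse models
print(f"votes={votes} -> flagged as anomaly: {flagged}")
```

The point is not these particular models but the voting: an attacker who poisons the training data or labels of one detector still has to defeat the other two independently.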

As the amount of data increases, adversarial AI is being used to hack AI systems. Adversarial AI can fabricate images and data to trick a deep neural network (DNN) into producing a different response. Machine Learning and AI are being used both to scale attacks and to disrupt Machine Learning models. For example, a study at Harvard Medical School in Boston revealed that AI systems that analyse medical images are vulnerable to covert attacks. The study tested Deep Learning systems that analyse retina, chest, and skin images for diseases. The researchers presented "adversarial examples" and found it was possible to change the images in ways that altered the results yet were imperceptible to humans, which means the systems are vulnerable to fraud and attack.
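
A minimal sketch of such an attack, using the classic fast gradient sign method (FGSM) on a toy, untrained model rather than the actual systems from the study; the model, the random "image", and the perturbation budget are all illustrative assumptions:

```python
# A minimal FGSM sketch on a toy model: an imperceptibly small perturbation
# measurably shifts the classifier's output. Model, weights, and "image"
# are stand-ins, not the medical imaging systems from the study.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))  # toy two-class classifier
image = torch.rand(1, 1, 16, 16, requires_grad=True)        # stand-in for a medical scan

with torch.no_grad():
    label = model(image).argmax(dim=1)                      # the model's current prediction

# Fast gradient sign method: one step that maximally increases the loss
# under a tiny per-pixel budget (here ~3%, invisible to a human observer).
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

cls = label.item()
p_clean = model(image).softmax(dim=1)[0, cls].item()
p_adv = model(adversarial).softmax(dim=1)[0, cls].item()
print(f"confidence in original class: clean={p_clean:.3f}, adversarial={p_adv:.3f}")
```

On a trained production model, the same kind of step can flip the predicted class outright, which is exactly the vulnerability the study demonstrated.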

Is it theoretically possible that AI will one day be so advanced that AI applications can protect themselves, making security software redundant?

Lauterbach: At some point cybersecurity will be Machine Learning driven and embedded into every application and service. But this will not happen within the next ten years. The Internet was not built with security in mind.

Among other things, you train board members in cybersecurity. In a nutshell, what are the most important things corporate executives should know about cybersecurity?

Lauterbach: What are my crown jewels in terms of data assets? Do I have an inventory of my IT systems, including dormant ones? Does the company have a data governance framework? How much Open Source do we use? And do we understand whether our vendors and partners are good at cybersecurity?

Do you think that cybersecurity can also be a growth lever for companies?

Lauterbach: Just one word: yes. Cybersecurity can be an enormous differentiator. We live in the Reputation Age, not the Information Age. Cybersecurity and the reputation of a brand are closely linked.

Why are you looking forward to Command Control?

Lauterbach: To meet old friends, meet brilliant new people, listen to people better than I am, and learn!