AI for Peace, Justice and Security in Leiden, Delft and Rotterdam

AI research in the field of peace, justice and security at the three Zuid-Holland universities is complementary in interesting ways. Three researchers tell us their stories. This is part 1 of a five-part series on themes in which the three universities conduct AI-related research.

It began with a Delft research project with the General Intelligence and Security Service (AIVD). Positioned at the intersection of cybersecurity and governance, the project was led by Delft researchers Michel van Eeten and Bram Klievink. Together with the AIVD, they researched ways of identifying and understanding digital threats, for instance by developing new instruments to measure the threat level of critical networks and to recognise attacker behaviour.

A move that led to new collaborations

Shortly before the start of the project in 2019, Klievink moved to Leiden University (Campus The Hague), where he aimed to further connect Leiden's expertise on governance and policy with the socio-technical views on cybersecurity of the Delft researchers. The universities also collaborate in other areas, such as research into the use of algorithms in an administrative context.

Klievink points to the example of the LDE Center for BOLD Cities, where the three universities are researching how Big, Open and Linked Data (BOLD) can work in, with and for cities. Klievink: ‘The centre brings together, among others, a sociological perspective, urban studies, public administration, media studies and technology. We have developed a joint multidisciplinary minor, which started last autumn.’

Klievink believes the collaboration in his field between the three Zuid-Holland universities is successful because there are similarities and differences in their expertise and focal points. ‘Rotterdam is strong in law and corporate aspects. And of course, Delft is known for technical expertise, which means both fundamental work on AI and its applications. What people are less aware of is that Delft also knows a lot about governance and ethics, which is what Leiden has a name for. Alongside this expertise in governance, policy, law and normative aspects, Leiden is also strong in technology.’ He adds that such a general organisational outline fails to do justice to the knowledge possessed by all the specific groups and researchers at the universities.

World-leading 

Delft professor of cybersecurity Michel van Eeten explains how researchers at the different universities also bump into one another outside their universities. ‘I’m on the Cyber Security Board with Bibi van den Berg, professor of cybersecurity governance at Leiden, for instance.’

Erasmus University Professor of Law and Economics Klaus Heine sees his fundamental research, in which he compares data with nuclear energy, as complementary to that of the other universities. 'That too is valuable. Of course, we exchange techniques with TU Delft, for example. Management may wish for us to do more together, and we could become world-leading.'

Below, more about Heine’s work, and that of Michel van Eeten and Bram Klievink. They all conduct AI-related research in the field of peace, justice and security.

Encouraging companies to make the internet more secure

TU Delft - Michel van Eeten

The work of Professor of Cybersecurity Michel van Eeten can make companies reassess their security. 'Hosting companies, for example, often have a fairly positive self-image of the security of their networks, sometimes incorrectly. For the Ministry of Justice, we investigated the hosting of child pornographic material. How well do hosting companies monitor whether their customers are posting such material, and how do they deal with it if they do?'

No need for naming and shaming
Under immense political pressure, the name of the company that hosted more than 90 per cent of the material was made public. For Van Eeten, who specialises in cybersecurity governance, such naming and shaming is unnecessary. He believes there are more effective ways to solve such problems.

The recurring theme in Van Eeten’s work is examining the relationship between technology and behaviour within the scope of cyber risks. ‘We look for vulnerabilities in systems that are linked to the internet, and then look at how we can contact the owners of these systems in the event of any leaks and what will spur them or the internet provider on to solve these problems.’ Van Eeten and his team develop and use techniques to measure whether it is safe to use connected devices and whether data travels safely over the digital highway.

Two companies behind hacked devices
For example, Van Eeten's team carried out a large-scale analysis of cameras, digital video recorders, thermostats and other devices that are connected to the internet. 'The Internet of Things is sometimes very poorly secured and susceptible to hackers and malware.' Van Eeten made an important discovery. 'There are tens of thousands of manufacturers, but only two of those companies, both major players, were responsible for half of all hacked devices.'

In response, the first step is for the Ministry of Economic Affairs and Climate Policy to send a letter to such companies. Talking to manufacturers already solves a lot of problems, says Van Eeten: ‘The threat of the EU blocking these companies from the European market, which would cost them one of their biggest markets, should only be brought in later.’

What big data and nuclear power have in common

Erasmus Universiteit Rotterdam - Klaus Heine 

Europe lags behind the US and China in the tech field because we believe privacy is important here. Professor of Law and Economics Klaus Heine is hatching creative ways to turn this into a profitable unique selling point. 'The EU can actually be the safest and most attractive ecosystem for artificial intelligence by giving people a central position.'

When Facebook purchased WhatsApp, this posed no problem under competition law: WhatsApp was only a small company. But all that data... That was not a concern of this branch of law. Klaus Heine, who researches big data and privacy, believes this should have been viewed from the perspective of property law. ‘This related on the one hand to a technology and on the other to big data, which is in effect a kind of fuel.’

Exciting comparisons 

Heine's trademark: he sees fascinating commonalities and draws comparisons, for example with the past. 'Around 1950 we had just as much of a challenge with nuclear energy as we do today with big data. On the one hand, there was the question of how to deal with the knowledge of the techniques behind nuclear physics, and on the other hand, of ensuring safe access to nuclear fuel, the radioactive material. The question was: how do we make that technology useful for society in a safe way? The solution still works today: nuclear power plants are private, but the fuel belongs to the government.'

The European Atomic Energy Community (Euratom) coordinates research programmes for the development and peaceful use of nuclear energy among its member states. An organisation like Euratom is a source of inspiration for Heine. 'Facebook, Amazon and Google are allowed to use their data technology, as long as a kind of Euratom believes that what they are doing is in the interest of society. Using such a system, Europe can become the place to be for safe new technology.'

Heine: 'When I think of an issue in the field of cybersecurity, I ask where else you encounter such questions in society. When Maastricht University was held hostage by ransomware, I saw an analogy with a flood caused by a dike breach. In such a situation, the army comes to help. I can imagine that in such a case volunteers and commandos would come to restore the infrastructure. This can also be translated to cybersecurity. Frameworks for such a Cyber Militia are already being discussed, for example by the Department of Defense in the US.'

'A big AI challenge for the government: neutral systems do not exist'

Universiteit Leiden - Bram Klievink

The use of artificial intelligence is more complicated for the government than for companies. Professor of Public Administration Bram Klievink and his colleagues uncover the bottlenecks and seek solutions for digitization in public policy.

A striking example: in early 2020 a court declared that the Dutch government’s System Risk Indication (SyRI) was unlawful. This instrument had been in use since 2014, and its purpose was to prevent fraudulent benefit claims by means of data linking and pattern recognition. The system created risk profiles on the basis of data about fines, compliance and education, among other factors. Although personal data remained encrypted until someone emerged as a potential fraudster, the court ruled that the violation of the right to a private life was too great.

The dilemmas behind such trade-offs often remain hidden, because policymakers and the system's technical developers do not speak each other's language and lack mutual understanding. People from the different disciplines who work on public AI projects must collaborate more closely, says Klievink.

Five themes of AI research in Zuid-Holland 

This article is the first part in a series in which we delve into how research with or into artificial intelligence plays a role at Erasmus University Rotterdam, Leiden University and TU Delft. The articles cover the following five themes:

  • AI for Peace, Justice and Security
  • AI for Ports and Maritime
  • AI for Energy and Sustainability
  • AI for Welfare and Healthcare
  • AI for the Technical Industry