Episode 16: Nelleke Groen, Senior Legal Advisor AI - Rijkswaterstaat
May 16, 2024 · 10 min read
Hopsworks Team
Hopsworks Experts

TL;DR

"For western governments, the demands for transparency and lawfulness of decisions (with AI) are crucial."

Have you ever wondered what it’s like working on the legal side of AI? In this episode of 5-minute interviews, we get to learn about just that. Nelleke Groen, tech lawyer at the Dutch Rijkswaterstaat, tells us about her work on the legal aspects of AI systems and why it is so important to govern the development and use of AI.

Tell us a little bit about yourself

Yeah, my name is Nelleke Groen. I work for the Dutch government as a tech lawyer, at the Ministry of Infrastructure and Water Management, with the agency that builds the waterworks and road infrastructure. I work there mostly on tech topics and AI; that's my main focus at the moment. More particularly, I deal with the legality issues of AI systems and their deployment. For Western governments that is a particular concern, because the demands for transparency and lawfulness of decisions are crucial to our work. So deploying AI systems requires some special attention, and we're still figuring out what that actually means, because transparency can be quite an issue when using AI.

What is it that you actually do within AI?

So when you look at it on a grand scale: when using AI, even in private interactions, you will need to explain what you're doing. Because if your AI system, or the actions you take based on what your AI tells you, causes harm, you can be legally responsible for that. You can be sued in court. You need to explain why this was the right course of action, why you and your AI took this decision. You have to show the judge that this was a responsible thing to do in a normal, civil, private context. Now, for governments, this is even more important. We as a government can act because we are given that power by the people, to put it in grand terms. And we have to explain what we do with that power: that it is equitable, that nobody is being discriminated against, that what we do is compliant with what our politicians say. And because we have to explain that all the time, even when humans are making the decisions, we also have to look at how AI comes to a decision or a course of action and explain why that was the responsible way to go about it.

I had been looking at algorithms before I moved into this big AI hype. When you look at an algorithm, it's really just a means of making a decision. You give the computer some rules; you can make them as complicated as you like, but you can still break them down into actual decision points. For me, as a lawyer, that's pretty easy, because these decision points are the underlying motivation or story of your decision. That can be good or bad, but at least it's explainable to a judge. So we're going to have to find out what we call that in legal terms. Now, with machine learning and AI, that's a lot more difficult, or at least a lot different. I think that if we challenge our technicians and engineers, we can get to a certain amount of transparency, but we're going to have to find out what level of transparency we actually need.
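
To illustrate that contrast, here is a minimal, hypothetical Python sketch of a rule-based decision of the kind she describes: every decision point is an explicit rule, and the rules that fire can be read back as the motivation for the outcome. The permit scenario, field names, and thresholds are all invented for illustration; a trained machine learning model offers no such built-in trace.

```python
# Hypothetical sketch: a rule-based permit decision whose decision points
# double as the legal motivation for the outcome. All rules, thresholds,
# and field names are invented for illustration.

def assess_permit(application: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons); each reason is an explicit decision point."""
    reasons: list[str] = []
    approved = True

    if application["noise_level_db"] > 70:        # decision point 1
        approved = False
        reasons.append("Noise level exceeds the 70 dB limit.")
    if not application["environmental_report"]:   # decision point 2
        approved = False
        reasons.append("No environmental impact report was submitted.")
    if approved:
        reasons.append("All statutory conditions were met.")
    return approved, reasons

approved, reasons = assess_permit(
    {"noise_level_db": 75, "environmental_report": True}
)
print(approved)  # False
print(reasons)   # ['Noise level exceeds the 70 dB limit.']
```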

What attracted you to the field? 

When I was a teenager in the nineties, the Internet was coming up. My dad had an old computer and I said: cool, this is what I want to do. But there was nothing back then, so we had to build all of that ourselves. So I started doing that and found it very interesting. I started programming a little bit, and then I studied law, and the two kind of existed together. Then I had kids and work and no time left, so I set the programming and tech thing aside, but it has always attracted me.

But I've also always been a human rights activist. I've always worked in governmental and NGO contexts, so human rights have always been at the forefront of my work and my life. When I got back to doing a little more with that, I came into the privacy field. And you saw a change in what privacy was because of this scaling up of technology. So when I re-entered the privacy field, this immediately launched a lot of thinking about data, algorithms, AI, transparency, personal data and so on. This was a little before the GDPR was adopted and came into force. So I came back into the privacy field with the GDPR fresh off the press, and I thought: okay, let's go and see what we can do with this. I worked on that for about two years, but I have now largely left the privacy field and am mainly focusing on AI and algorithms.

What are your thoughts on AI governance and compliance?

Europe has its own particular project here, and it gives us a bit of guidance on what we want to do with all this. There's this whole new field that is scaling up so much. We can do things that we never imagined we could actually do, but that changes our world. It changes how our children perceive the world. It changes how we move around, because we're constantly monitored. I mean, even if we don't want it, we are carrying this smartphone; we are constantly monitored. The question is: by whom? What I mean is that it changes the way we live. It also changes the expectations you have of the government, of your school, of your family, of your political party, of everyone, of each other. We are humans, natural beings that need to interact in a social way, and we now do that via video chat as well. So the way we use technology to interact changes our whole society.

So, with the EU project, what I meant is that we're thinking about what is responsible, what is good, what is good for us. It also helps us to think that maybe we should make rules, or decide for ourselves, that we do not want to do certain things. Maybe we want to try something different, because we're overwhelmed, because it's too much, because there's information we cannot process anymore, because we can no longer make decisions as people in a way that lets us feel trust and genuineness in what we do. And we need that, too. We have to remember that we're not computers; we're people.

Do you have any interesting resources or topics to recommend?

What I would want people to go and look into is actually the EU project we're doing. I'm working on standardization with CEN/CENELEC and with ISO/IEC SC 42. It is interesting to see the dynamics in the European context as opposed to other contexts. Another thing I find very interesting: Amnesty International has a very good program, and there are other NGOs all over the world that work on the crossover between human rights and AI. They are very critical about armed AI as well as AI governing our borders. The most interesting thing I want to mention in that sense is that many people in this tech world come to me and say this is very activist, out of scope, outside what we do. And I feel it is not activism or out of scope. You want to be acquainted with what they're saying, because they are pointing us to the routes we do not want to go down (in terms of AI). So I'd advise everyone to look into that and see broader than just the technology, because the technology is really cool, but it has an impact in the real world, and we need to acknowledge that and consider what we do and do not want.

Links:

  • CEN/CENELEC JTC 21: standardisation at the European level, currently focusing on standardisation under the AI Act
  • Amnesty International Netherlands has made technology and human rights one of its priorities (in Dutch only). It is also part of the European coalition EDRi (European Digital Rights), where many NGOs working in this area have created a cooperative platform to increase human rights protection in digital society
  • In the European context, one of the liaisons working on standardisation in this area is Equinet
  • A podcast series (in Dutch) about the reality of the AI factory and the challenges that are regularly kept out of sight and ear

Listen to the full episode:
