Interview with: Rob Sumroy from Slaughter and May

Rob Sumroy co-heads the Emerging Technology and Data Privacy practices at Slaughter and May, and is a Partner in its IP and Technology Group. OneTrust DataGuidance spoke with Rob about the key risks that artificial intelligence ('AI') technologies pose to privacy, what privacy regulations currently exist in the UK, and how to remain privacy-compliant whilst working with AI technologies.

Future regulation

I think the key thing here is how people are implementing AI and what they are using it for. The exciting thing about AI, particularly in the area of data analytics, is this whole concept of using machine learning for Big Data analytics. You can find patterns in huge databases that the human eye or brain would not be able to spot. The key here is that it is all about data. When we ask, "What are the privacy risks in AI?", if people are going to use AI for Big Data analytics then they are in effect capturing significant amounts of data, and if we are talking about privacy, let's say it is personal data about individuals, then you need that data to train the algorithm.

So, an AI algorithm is only going to find those patterns if it has been trained, and you can only train an algorithm with huge amounts of data. Part of what makes AI work, and the reason why people are investing in it, is that you do not necessarily start out with a plan as to what you are going to do; you just fill it full of data and see what patterns come out. You do not necessarily stick to one purpose, but when you see those patterns it might give you ideas about ways in which you can develop your product, or your pricing, or target people with the results of that data analytics.

You do not necessarily know how the algorithm has got to where it has got to; you just know it is clever. And all of the things I am describing, such as not worrying about what the purpose was at the outset, not limiting the use of data but doing as much as you can with it, not really being able to be transparent and explain what has happened, these all run counter to some very key privacy compliance principles. So, we know from the General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR'), if you are thinking about Europe, that you have the concept of data minimisation, so you are only really supposed to process the minimum amount of data necessary to achieve the goal. Well, that is at odds with what I have just been saying. The GDPR also has the concept of transparency, so you are supposed to explain to the data subject what you are going to do with the data, what you have done with it, how, and why. But the AI algorithm is often so clever that it becomes opaque; we do not really know how it got there, we just know it is clever. And this concept that you get permission from the data subject to use the data for a limited purpose and then you should not use it any further, that is at odds with what I have just been explaining about the whole exciting part of data analytics, and the 'let's just see what comes out' aspect. There is just a natural tension between the abundant use of huge amounts of data for exciting output on the one hand, and the very real and sensitive limitations on the use of people's data which come out of privacy legislation on the other.

Privacy by Design

I think, actually, if you look back over the last 5 years of increasing adoption of AI, part of the difficulty is that the privacy practitioners, and I include myself in that, not just the lawyers but also the regulators and the advisors, understood what the privacy law was saying but did not really understand enough about the technology to apply it. So that is the challenge we are talking about. On the other side, in my experience working with data scientists and AI developers, the technology people are passionately keen to comply with privacy regulation. We should not fall into the trap of thinking that the people who are developing the algorithms do not care about privacy; that is not the case. They want to do Privacy by Design. The thing is, they know what an algorithm is capable of, but they also know there are various trade-offs. I think this is the challenge the developers have.

What do I mean by that? Well, one of the most important things if you are going to use an algorithm is that it is as accurate as possible. You do not want to use an algorithm to make predictions if it is only going to be, for example, 70% accurate. But you make it more accurate by giving it more and more data, and that is at odds with the idea of minimisation that we talked about. Also, there is some data you maybe should not be using because you might end up discriminating. For example, let's say you want to make sure the algorithm you are using is not going to discriminate against a certain gender. In that case, what you have to do is fill it full of data, which includes gender data, and check that it is sufficiently accurate that it will not discriminate. To gain that accuracy, you have to do a lot of processing of a protected data class, and this is the challenge for the AI developers. They are saying, "Well, do you want me to be accurate, or do you want me to comply with privacy?" It is the same with the transparency point we made: a good, clever algorithm may not have the gateways or check gates that you need in order to understand what the algorithm was doing. So, if you are saying, "No, actually at each step along the way we need to explain to individuals what is happening to their data," you almost need to turn down the cleverness of the algorithm. These are the challenges that the developers have, and the challenge for the regulators is to make sure that the regulation gives signposts to the developers as to where they can allow the technology to take the lead, and where, as I say, they need to tone it down in order to comply with the regulation.
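To make that trade-off concrete, here is a minimal, hypothetical sketch in Python of the kind of check described above: measuring whether decision rates differ by gender is only possible if the gender attribute is itself collected and processed. The data, group labels, and metric (a simple demographic parity gap) are illustrative assumptions, not something taken from the interview.

```python
# A hypothetical illustration of the trade-off: to test whether a model's
# decisions differ by gender, you have to process the gender attribute itself.
# All data and group labels below are invented for illustration only.

from collections import defaultdict


def selection_rates(decisions):
    """Return the positive-decision rate for each group.

    decisions: list of (group_label, outcome) pairs,
    where outcome is 1 (approve) or 0 (reject).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}


# Hypothetical model outputs joined back to a protected attribute.
decisions = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 1),
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                               # {'female': 0.5, 'male': 0.75}
print(f"demographic parity gap: {gap:.2f}")  # 0.25

# The point: this check cannot be run without holding and processing gender
# data, which is exactly the tension with data minimisation described above.
```

The same pattern applies to any protected characteristic: auditing for discrimination presupposes processing the very data that minimisation principles would otherwise have you avoid collecting.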

AI frameworks

I agree that the Information Commissioner's Office ('ICO') in the UK is certainly, at the moment, an example of a really proactive and progressive regulator in this area, and I think it is interesting to compare the ICO now with the ICO from 5 years ago. If you look at the Big Data analytics paper that the ICO published in 2014 ('Big data, artificial intelligence, machine learning and data protection'), which they then reissued in 2017, it was a very long, 100+ page explanation of how difficult it is, in effect, to comply with the GDPR, or the law at the time, and use algorithms to do Big Data analytics. So, it was a very complete analysis of the law, but it gave no real practical guide to those developers. Now what they have done, and I think Elizabeth Denham as the Information Commissioner has really driven this for the ICO, is take a very proactive and pragmatic approach. She is saying, "We are going to do this by consultation." One of the ways in which they have done this, for example, is the Auditing Framework for AI. They are the privacy experts, and they have taken on technology experts. They have Dr. Reuben Binns, who is helping with that framework, and they have data scientists and privacy people. And they are saying, "Here are the 8 areas where we think we are going to get friction between the technology and the regulation, now let's go out to the market with the blogs we publish and take people's views," and then they will build from that to give guidance. If you look at the blogs that they are publishing, each one says what an organisation should do. So, it is actually a practical assessment, and it talks about when those trade-offs need to be made.

To access the full video of Rob's interview, click here.

Rob was interviewed as part of the 'Privacy in Motion: Technology' video series. To watch other video interviews filmed by OneTrust DataGuidance, click here.