AI technology: a lawyer's guide

How can AI be legally defined?

The need to regulate AI is clear.

Citizens need to know who will be liable if a driverless car knocks them down; and businesses need to know who owns the IP in products designed by their in-house robots. But to regulate AI we must first define it. Even trickier: that definition must be future-proofed, so as to cover any changes in AI technology. The attempts so far have been mixed.

In the UK, the House of Lords’ Select Committee on AI recently released a report that used this definition:

‘Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.’

This is a problematic definition because it tries to define AI by reference to human intelligence, which is itself notoriously hard to define. Also, this definition omits a key feature of many of AI’s most useful advances: applying the huge processing power of computers to achieve tasks that humans can’t.

Meanwhile, the EU Commission has suggested this definition of AI:

‘[S]ystems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.’

And in the US, the Future of AI Act – which sets up a federal advisory committee on AI – defines AI as:

‘Any artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance… In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.’

The EU and US definitions share the same weakness as the UK one: each ultimately defines AI by reference to human-like intelligence. That said, the EU Commission's wording introduces the concept of 'autonomy', which may prove a more useful anchor for future legislation.

For now, we are still some way off an agreed legal definition, and the better approach is probably to focus on the context in which the law might intervene. If we ask how AI should be regulated, for example, our terminology will need to take into account the impact of the AI and the respective responsibilities of those who introduced it into the world. In particular, we can expect regulators to look beyond the system's autonomy to the accountability of its creators. It at least feels like the EU has the right mindset, though these legislative debates would probably have made Alan Turing smile. As he put it: 'We can only see a short distance ahead, but we can see plenty there that needs to be done.'