

Do businesses really need to be ethical with data?

Data ethics is on the minds of businesses and regulators. Rachael Annear examines why.

These are transformative times. New technologies are driving breakthroughs in humanity’s greatest common projects but we have also seen how technologies can magnify our worst human tendencies. We must ask ourselves a fundamental question: what kind of world do we want to live in?

Tim Cook, CEO, Apple

Data ethics is everywhere these days, from the pages of the newspapers to the UK Financial Conduct Authority’s business plan. The application of new technologies to data has the power to transform our lives – sophisticated algorithms could help develop novel cancer therapies while AI personal assistants could offer us more free time.

But there are also dystopian predictions about the advance of technology that raise complex questions for society. What kind of future do we want to build for ourselves, and how do we ensure people understand the risks associated with data so they can decide whether those risks are ones they wish to take? The answers can be found in the concept of data ethics, which is in essence the application of a value judgement to the use of data.

Data ethics: what should businesses be doing now?

  • Regulated entities should review whether authorities are looking to embed data ethics into their regulatory frameworks.
  • All organisations should assess whether the technology they use or are investing in is included in any government guidelines/rules around the ethical use of data, such as those that have emerged in the UK.
  • They should also consider setting up data ethics review boards for significant digital transformation or data-driven projects.
  • And they should reflect on whether they have built enough trust in their customer relationships to carry customers with them through forthcoming technological change.

What use is data ethics now?

Data ethics is such a hot topic right now because authorities are focused on businesses doing the right thing in an environment where consumer trust has broken down.

The UK Information Commissioner, for example, has defined ‘increasing public trust’ as one of its strategic goals. Data ethics moves the dial from ‘what can we do’ to ‘what should we do’, and in doing so provides a useful way for companies to maintain goodwill. Legislation is by its nature reactive, so as technology offers more powerful ways to manipulate data, it is up to businesses, regulators and society to work out where the limits should be.   

Regulating the future

The thinking that authorities are doing now will inform the way future regulatory frameworks are built.

It is therefore critical that companies understand the direction of travel so they can make informed choices about the opportunities that new technologies present.   

Privacy legislation, such as the EU’s General Data Protection Regulation (GDPR), is built on values such as transparency, fairness and accountability. At the end of 2018, the European Data Protection Supervisor, Giovanni Buttarelli, issued an invitation to privacy professionals to define the values on which our common understanding of data ethics should be based.

In his speech, he reflected on the power society has to choose how privacy should be respected. Nowhere is this more important than where technology finds its way into the most intimate areas of our lives: our relationships, our communications, the focus for our attention, our behaviours and our decision-making.  

Likewise Elizabeth Denham, the UK’s Information Commissioner, has said that ethics is ‘at the root of privacy and is the future of data protection’. She believes data protection should be part of a company’s cultural fabric, not simply a ‘bolt-on’ compliance issue. Her department recommends that companies develop ethical principles that reinforce the key tenets of data protection. It also suggests businesses set up ethics boards to scrutinise projects, assess issues arising from big data analytics and inform product development teams.

The debate about data ethics is not limited to Europe. US lawmakers have recently introduced a bill that would require large companies to audit machine learning-powered systems – like facial recognition or ad targeting algorithms – for bias. And in Asia, Hong Kong’s data privacy regulator has stressed the importance of data ethics and commissioned an ethical accountability framework to help address the challenges of advanced data-processing activities.
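The bill does not prescribe how such an audit should be performed, but one common starting point is to compare a system’s outcomes across demographic groups. The sketch below is a purely hypothetical illustration on synthetic data: it computes a ‘disparate impact’ ratio, a metric borrowed from US employment-discrimination guidance, where a ratio below roughly 0.8 is often treated as a warning sign. The data, threshold and framing are assumptions for illustration, not anything the bill specifies.

```python
# Hypothetical sketch of one simple bias-audit check on synthetic data:
# compare a model's positive-decision rate across two demographic groups.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic audit sample: a protected attribute (two groups) and model
# decisions deliberately skewed to favour group 0, for illustration.
group = rng.integers(0, 2, 500)
decisions = rng.binomial(1, np.where(group == 0, 0.7, 0.4))

rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate, group A: {rate_a:.2f}")
print(f"approval rate, group B: {rate_b:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")
# Under the rough 'four-fifths rule', a ratio below 0.8 would flag the
# system for closer human review.
```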

Beyond privacy

Although data ethics has natural roots in privacy law, its impact goes beyond personal data.

Not all forms of advanced data analytics – and not all cutting-edge technologies – are reliant on personal information. But their effects are felt by individuals. It is this dynamic that has broadened the focus of ethical debate from the input (the customer bargain, consent and control) to the output (trust in the decision).

Take the example of a machine-learning system being trained to recognise skin cancer using pictures of healthy and unhealthy skin. What the designers wanted was for the system to recognise potentially fatal tumours. What it actually learnt was to identify pictures containing rulers, because these were invariably present to show the size of suspected cancers. The point of this story is that, when we use data to develop AI algorithms, we must understand what that data contains, and accept that the machine might not interpret our instructions as we intended. Imagine this in the context of a complex calculation such as a credit assessment based on demographic information. Would we really understand what was in the data, and therefore on what basis the decision was being made?
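To make the ‘ruler’ problem concrete, here is a minimal sketch in Python, using numpy and scikit-learn on entirely synthetic data – the features and numbers are assumptions for illustration, not the actual study. A spurious feature that perfectly tracks the label in training (standing in for the ruler) dominates what the model learns; when that feature disappears at deployment, accuracy falls back towards chance.

```python
# Minimal sketch of 'shortcut learning' on synthetic data: the model
# latches onto a spurious feature (the ruler) rather than the weak
# genuine signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

y_train = rng.integers(0, 2, n)
genuine = 0.5 * y_train + rng.normal(0, 1, n)  # weak real signal
ruler = y_train.astype(float)                  # perfect shortcut
X_train = np.column_stack([genuine, ruler])

model = LogisticRegression().fit(X_train, y_train)
print("learned weights:", model.coef_)  # the ruler weight dominates

# At deployment there are no rulers, so the shortcut vanishes.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([0.5 * y_test + rng.normal(0, 1, n),
                          np.zeros(n)])
print("accuracy without rulers:", model.score(X_test, y_test))
```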

This potential for bias in algorithmic decision-making, as well as the power of data to influence our decisions, is something that has piqued the interest of the UK government. As part of its industrial strategy, it has set up the Centre for Data Ethics and Innovation, and tasked it with examining how to promote innovation in a way that builds trust.

Trust

The importance of trust to a strong digital economy is something Simon McDougall, executive director for technology and innovation at the UK Information Commissioner’s Office (ICO), has also reflected on.

‘People are losing trust in modern business and innovation,’ he said in early 2019. ‘Every time we create something cool, we are not bringing people with us.’

The notion that trust is important is not new, but the idea that an organisation’s approach to data is central to building trust is only now starting to gain traction. One way companies can build that trust is by clearly articulating where they draw the ethical lines on data use. Not only can this encourage consumers to adopt the company’s new technologies, it can also send clear messages to investors and employees about the company’s brand and its place in the future.

If an organisation loses trust around its technology or use of data, the fallout can spread to other businesses in its industry – and even global markets. This makes trust a key part of the discussion on data ethics and is why many regulators and industry bodies are trying to find common ground on data use.

Some academics argue that a trust-based system should replace the current common model of ‘notice and consent’. They believe that scrolling as fast as you can to the ‘I agree’ button – fuelled by excitement at the promises of the latest service – is simply acquiescence rather than consent and is therefore meaningless.

What might an ethical approach to transparency look like?

A business that takes a traditional approach to transparency would use a written privacy notice to inform customers how it was going to use their data. But an organisation that takes an ethical approach might consider a broader range of factors.

  • Do our customers really understand what we are trying to tell them in our privacy notice?
  • Do we really understand how we are using their data?
  • Are there better ways of communicating what we do with their data?
  • Even if we get their consent, is this the ‘right’ thing to do?

Most consumers want the benefits of technology but also some control over their data. The challenge is whether there is a competitive advantage in being a trustworthy custodian, or whether a product or service can be so good that users will leave their concerns at the door. If there is little discernible benefit for businesses that take a responsible approach, then the decision on what is permissible will rest with lawmakers, as is being discussed in the context of US federal privacy legislation.

National data protection regulators also face an interesting challenge in interpreting the GDPR. The regulation lays the groundwork for a move away from the current standard of predominantly consent-based processing by setting an extremely high bar for obtaining this consent. However, many organisations are reluctant to move away from consent-based processing because it leaves them without tangible evidence of the ‘customer bargain’.

Consent is one of six lawful bases for processing personal data under the GDPR, and organisations are called on to use the ‘most appropriate’ one. Consent, for example, must be freely given, meaning individuals must have a real choice and understand what they are consenting to. The problem is that, in many instances, new technologies are so complex that this is a very difficult standard to reach. Regulators will therefore need to look carefully at where the lines for ‘real choice’ and ‘what is understandable’ are drawn.

If consent is less available to organisations, they are likely to look to ‘legitimate interests’ to manage their risk. This puts a greater burden on the organisation to ensure that what is happening with data could be reasonably expected and that individual rights are protected. An ethical approach to data could help achieve the regulators’ goals – enabling businesses to take more innovative approaches to transparency, and better understand their customers’ expectations.

As our use of advanced technology grows, we might be required to decide whether to share our personal data dozens – or even hundreds – of times a day. Ensuring that we understand complex data policies in a meaningful way is therefore no easy task. With this in mind, it’s little surprise that businesses and regulators are thinking of ways to ensure customers are protected – regardless of what they consent to.