Look around and you cannot miss the newlywed couple of ethics and technology. Digital ethics and privacy was named one of Gartner’s Top 10 strategic technology trends for 2019; this year, the 40th International Conference of Data Protection and Privacy Commissioners was held under the title ‘Debating Ethics: Dignity and Respect in Data Driven Life’; and Finland’s Artificial Intelligence Programme challenges enterprises to create ethical principles for AI, to mention but a few examples.
Ethics in the context of AI
Ethics is a complex area, but so are the challenges that come with technological advancement. Who pays when a self-docking tanker ship’s manoeuvre goes badly wrong? How are companies able to profile us from multiple data sets? Do we have free choice inside our filter bubbles? Did a machine just decide whether I should be given a criminal sanction? We are now starting to answer these big questions with similarly sized answers, and that is what digital ethics is for.
We have formed an AI working group at Nixu to which every Nixuan can contribute, and as its latest focus task, we dived into what ethics means in the context of AI. As a consultancy, we come across various imaginative uses of AI through our clients. Let’s put one (imaginary) example under examination.
An imaginative use case: using AI to detect suspicious access to patient records
A hospital wants to use AI to audit the access logs of a patient medical information system and detect suspicious access to patient records.
Medical personnel typically have very expansive access rights to patient records, for patient safety reasons. However, expansive access makes it difficult to separate legitimate access (directly related to patient care) from illegitimate access (e.g., malicious tampering with patient records, or access out of simple curiosity). All users who access the patient records are typically permitted to do so.
The volume of data involved is usually so huge that no human operator can be expected to detect misuse except by accident. AI can be taught what kind of access can be considered legitimate, and which correlations (or the lack of them) should give rise to suspicion.
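As a rough illustration of that idea, the sketch below builds a simple behavioural baseline from past access events and scores new events by how unfamiliar they are. The log fields, staff identifiers and scoring rule are all hypothetical assumptions for this imaginary case; a real system would train a proper anomaly-detection model on far richer features.

```python
from collections import Counter

# Hypothetical, heavily simplified access-log events:
# (staff_id, ward of the accessed patient, staff member's own ward)
history = [
    ("nurse_a", "cardiology", "cardiology"),
    ("nurse_a", "cardiology", "cardiology"),
    ("doc_b", "oncology", "oncology"),
    ("doc_b", "cardiology", "oncology"),  # cross-ward access, but seen before
]

def train_baseline(events):
    """Count how often each staff member accesses each ward."""
    counts = Counter((staff, patient_ward) for staff, patient_ward, _ in events)
    totals = Counter(staff for staff, _, _ in events)
    return counts, totals

def suspicion_score(event, baseline):
    """Score in [0, 1]; higher means less like this person's past behaviour."""
    counts, totals = baseline
    staff, patient_ward, _ = event
    total = totals[staff]
    if total == 0:
        return 1.0  # staff member never seen before: maximally suspicious
    return 1.0 - counts[(staff, patient_ward)] / total

baseline = train_baseline(history)
# nurse_a has never accessed oncology records, so this scores 1.0
print(suspicion_score(("nurse_a", "oncology", "cardiology"), baseline))
```

The interesting design question is not the scoring itself but what happens above the threshold, which is exactly where the ethical considerations below come in.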
This case is an interesting mix of patient privacy and safety considerations on the one hand, and the rights and freedoms of medical personnel on the other. Let’s examine it through a vision of ethically designed AI: one that produces consistent and accurate output so that it can be relied upon; one that humans can lead, bypass and intervene with; one that offers visibility into its workings and can explain its decisions; one whose complexity we appreciate and whose uncertainty we prepare for; and one that has an assigned owner and clearly sorted-out accountabilities.
Ethically designed AI
Consistency and accuracy should be tested – we want to be sure enough. We will need to design a robust testing schedule and consider what testing means when the machine learns continuously. Someone should validate any new anomalies the algorithm discovers. We should seek feedback from the employees who are subject to its learning-based decisions. Humans should be built into the feedback loop.
We want to notice when the AI starts to lean one way and correct its stance. We want to be able to ask the AI to explain its decisions so that we can lead it right – it should have a built-in explanation provider. At the very least, we will engineer a maintenance hatch for looking inside its black box. We ensure that the AI can provide metrics on its workings and their consequences so that we can stay abreast of its processing and do our best to avoid uncertainties. AI is not immune to information security threats, which challenges us to discover new ways of managing and measuring security. We recognise that a compromised AI could be making decisions that affect patients in the real world.
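One simple way to give a system a ‘built-in explanation provider’ is to make every decision carry the reasons that produced it. The rule names, weights and event fields below are purely illustrative assumptions; a statistical model would report feature contributions instead of rule hits, but the principle of returning the ‘why’ alongside the ‘what’ is the same.

```python
def explain_access(event, rules):
    """Score an access event and return the reasons behind the score.

    `rules` maps a human-readable reason to a (predicate, weight) pair;
    every rule that fires contributes both its weight and its name.
    """
    reasons = [(name, weight)
               for name, (predicate, weight) in rules.items()
               if predicate(event)]
    score = sum(weight for _, weight in reasons)
    return score, reasons

# Illustrative rules only -- real criteria would come from clinical context.
rules = {
    "no treatment relationship": (lambda e: not e["treating"], 60),
    "outside working hours":     (lambda e: e["hour"] < 6 or e["hour"] > 22, 30),
    "bulk export":               (lambda e: e["records"] > 50, 40),
}

score, reasons = explain_access({"treating": False, "hour": 3, "records": 1}, rules)
print(score)                           # 90
print([name for name, _ in reasons])   # the two rules that fired
```

Because the output names the rules that fired, both the auditor and the flagged employee can see why the alert was raised, which is a precondition for the intervention rights discussed next.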
The need for human intervention at the receiving end should not be forgotten. When the AI flags an employee’s actions as illegitimate, is the employee told – or covertly monitored? How can they explain and defend the legitimacy of their actions? Will they suffer from the decision without being able to do anything about it? Can they bypass the AI’s decision and access the record despite the alert? They may need to, in the interest of patient safety.
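That bypass is often handled with a ‘break-glass’ pattern: an alert never blocks care, but an override must be justified and lands in a human review queue. The function names and fields below are a hypothetical sketch of that flow under those assumptions, not a description of any specific product.

```python
from datetime import datetime, timezone

review_queue = []  # overrides awaiting human review

def handle_alert(event, override_reason=None):
    """Break-glass flow: the AI's alert can always be overridden by a human.

    With a justification, access is granted immediately and the override
    is queued for later review; without one, the access waits for review.
    """
    if override_reason:
        review_queue.append({
            "event": event,
            "reason": override_reason,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        return "granted"
    return "pending-review"

print(handle_alert({"staff": "doc_b", "patient": "p42"}, "emergency admission"))
# -> granted, and the override is logged for the review team
```

The machine decides nothing final here: it only routes the case, and a human remains accountable for the outcome.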
Patient safety presumably overrides the rights and freedoms of the employees, but is that always so? We also want to examine how the AI fares against the basic measuring sticks of human security, privacy, safety and equality. We need to understand the impacts on all human stakeholders so that we can strike an ethically sound balance. The task will often be like comparing apples and oranges: a privacy impact assessment will reveal one side and a medical safety risk assessment another – a carefully constructed team of professionals across disciplines is a must.
What would you consider when implementing this kind of AI solution?
Watch this space for further updates on Nixu’s ethics-by-design work.