Updated: Apr 20, 2022
Women Leading in AI puts forth 10 principles for responsible AI
It was a pleasure to attend the launch of Women Leading in AI’s 10 Principles of Responsible AI at the Houses of Parliament. The packed venue was a testament to the current level of interest around the governance of AI and ensuring that historically underrepresented groups play an active role. The speakers were an excellent representation of the varied stakeholders involved and included Women Leading in AI co-founder Ivana Bartoletti, Joanna Bryson from the University of Bath, MP Jo Stevens, Lord Tim Clement-Jones, Sue Daley from techUK, Reema Patel from the Ada Lovelace Institute, and Roger Taylor from the Centre for Data Ethics and Innovation. While each speaker addressed the 10 Principles from their own perspective, three themes emerged for taking the calls for AI regulation from rhetoric to action.
AI Changes Nothing, and Everything
Women Leading in AI co-founder Ivana Bartoletti opened the evening with this reminder: responsible AI is about coming to terms with the inequalities ingrained in our society and deciding what the fairer world we want to live in looks like. That takes engagement with multiple stakeholders: government, business and civil society. Joanna Bryson’s observation that “data is computation done in advance” drives home the idea that all data contains implicit bias, because it is human in origin; it reflects our past decisions and historical discrimination. The question is how we choose to make that data actionable with machine learning and AI. Roger Taylor pointed out that the power to decide what is fair or unfair in society is not equally distributed amongst those most affected by its outcomes, and that the regulatory work being undertaken to address bias in AI is novel.
The Need for Accountability is Now
One of the event’s undercurrents was a sense of urgency about AI regulation. The speakers recognised that now is the time to move from acknowledging the need for ethics in the development and deployment of AI technologies to implementing something actionable. MP Jo Stevens, a member of the Digital, Culture, Media and Sport Select Committee, emphasised that time has run out for big tech companies to self-regulate, in light of recent failures to conduct themselves ethically. What form this regulation should take, however, was a point of discussion amongst speakers and attendees. Sue Daley urged consideration of whether a regulatory body might be built from an existing institution, to avoid creating unnecessary complexity and duplicating effort. Speakers also stressed that international co-operation on this matter would be immensely beneficial and would strengthen corporate accountability.
Regulation Should Not be Feared
Multiple comments stressed that, all too often, regulation in the EU and the UK is characterised as a hindrance to innovation and economic development. Lord Tim Clement-Jones countered that regulation can be used as a way of creating commercial opportunities, not stifling them. Leaving technological inventions unregulated for fear of losing a global competitive edge is unsustainable. Roger Taylor added that the UK shouldn’t be too disparaging about what it can accomplish with regards to enforcing tech company oversight.
The 10 recommendations set out by the Women Leading in AI network in their 2019 report are:
Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics — to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
Introduce a new ‘Certificate of Fairness for AI systems’ alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similar to food labelling regulations.
Introduce mandatory Algorithm Impact Assessments (AIAs) for organisations employing AI systems that have a significant effect on individuals.
Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
To compel companies and other organisations to bring their workforce with them, by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees.
To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend setting up a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.
A big thanks to Women Leading in AI for their excellent work on actionable guidelines and organising the event.
Best Practice AI is a London-based boutique executive advisory firm that helps corporates, SMEs, the public sector and private equity implement their AI plans. Their mission is to demystify AI and accelerate its adoption. The Best Practice AI library is a free resource containing the world’s largest collection of business use cases (600+) and case studies (1000+) organised across 40+ industries, 60+ functions and 60+ countries. The library is designed to help executives answer questions about what AI is, how it is being applied today, and how it can be deployed in your organisation.