Updated: Apr 20
The AI narrative is moving on from questions to action, as the focus shifts to practical solutions for managing the risks and ethics of AI
Some of the members of the World Economic Forum’s Global AI Council who met in Davos, Jan 22nd 2020
What do you say to a Nobel Prize winner when discussing how to make AI explainable in a deep neural network with over one billion parameters? This was my first trip to Davos, and it coincided with the World Economic Forum’s (WEF) celebration of its 50th annual meeting. The setting was picture perfect: an idyllic mountain town framed by snow-capped peaks under crystal-clear blue skies. The world’s elite were out in force in their designer sunglasses. I spotted senior government ministers, billionaires, tech titans and rock stars all within an hour. And here I was talking to the Nobel Prize winner Joseph Stiglitz and making sure that our boutique AI management consultancy, Best Practice AI, was represented at the highest level. We discussed AI explainability, the words on everyone’s lips. While Professor Stiglitz approaches the issue from an academic point of view, I deal with it from a different perspective: bringing practical tools to boards that are grappling with AI ethics and with how to evidence the management of AI risks.
Economics Nobel Prize winner Joseph Stiglitz with author
World Economic Forum’s Global AI Council
I am a member of the WEF’s Global AI Council. I attended a Council meeting chaired by Dr Kai-Fu Lee, former CEO of Google China, investor and author of AI Superpowers, and Brad Smith, the President of Microsoft. The Council is made up of senior government representatives, global institutions, industry bodies such as the IEEE, tech giants, leading AI academics, the brilliant Will.i.am, and a sprinkling of AI start-ups.
The theme this year at Davos was stakeholder capitalism: pushing corporations to look beyond a single metric of success (shareholder return) and to factor customers, employees, partners and society as a whole into the calculus. This comes at a time when we all face clear issues of wealth inequality, political instability, and the challenges of global sustainability. While US President Trump couldn’t resist taking a dig at 17-year-old Greta Thunberg in his Davos speech, there is no doubt that sustainability and climate change were top of the agenda for the world’s most powerful. It is an incredible gathering of those who truly control this planet. For all the griping about the hypocrisy of a record number of private jets parked at the airport, I couldn’t help but feel a sense of optimism that those who can change the world are coalescing around the right agenda.
Underlying much of the discussion was the role that technology can play in helping to address many of the United Nations’ Sustainable Development Goals (SDGs), including climate action, good health, quality education, gender equality, and reduced inequality. Sundar Pichai, CEO of Alphabet and Google, pointed to the importance of AI technology and said:
“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.”
While perhaps a touch hyperbolic, there is no doubt that AI will be woven into the fabric of society, affecting nations, governments, institutions, companies and people. Global spending on AI is expected to hit $52 billion in the next three years, and AI could help double the GDP growth rates of major economies over the next fifteen years, according to Accenture.
Balancing the opportunities of AI with the risk
But as Pichai also said, “there is no question” AI needs to be regulated. As with the introduction of any new technology, the opportunities need to be balanced against the risks. For example, facial recognition technology is now a commodity on the cusp of becoming ubiquitous. It offers great benefits for society. It can make us more secure by identifying known criminals and terrorists. It can streamline our busy lives by speeding up identification at the airport. But it can also put our right to privacy at risk. It can be used by bad actors to single out individuals or ethnic groups for persecution. Governments and non-profits have a role to play in identifying and managing these risks. Businesses have a role to play. Academics have a role to play. And civic leaders have a role to play. And this needs to be done on a multilateral basis. We need to empower AI leadership globally to help address these risks.
Much of the narrative arc of AI has been dominated by fear. The existential fear of AI. The fear for our jobs. The fear that AI is fundamentally unjust, lacking ethics and carrying intrinsic bias. The fear that AI will mean the loss of our privacy as facial recognition becomes ubiquitous. This narrative was repeated in Davos across numerous sessions. But instead of everyone simply asking questions, the discussion has finally turned to potential solutions.
The world has moved on in the past year. AI is also (finally) moving on, beyond experimentation into practical use. Barry O’Byrne, CEO of HSBC Global Commercial Banking, talked about how the bank has over 300 AI use cases, with a focus on improving the customer journey. The move to scale up AI is forcing us all to address the question of how best to manage AI in the real world.
Empowering AI Leadership Board Toolkit
To this end, the WEF launched its Empowering AI Leadership Board Toolkit here in Davos. As Kay Firth-Butterfield, Head of AI and Machine Learning at the WEF, said:
“our research found that many executives and investors do not understand the full scope of what AI can do for them and what parameters they can set to ensure the use of the technology is ethical and responsible.”
The Toolkit is designed to help corporate boards understand the value of AI and to ensure it is used responsibly, with practical tools for risk management in their governance and compliance practices. We, at Best Practice AI, were key contributors to this toolkit along with others such as IBM, Accenture and BBVA. The Toolkit is available here for free.
AI Governance Framework
The WEF also launched an updated AI Governance Framework that provides an implementation and self-assessment guide for organisations. The implementation of this framework is being led by the Singaporean government. Best Practice AI was privileged to have been invited to provide input into this work.
An AI Healthcheck and Compliance Framework
We also announced a partnership in Davos with the law firm Simmons & Simmons, and Jacob Turner, a barrister at Fountain Court Chambers and author of Robot Rules: Regulating Artificial Intelligence, to launch one of the most comprehensive AI healthcheck and compliance services. This will help organisations ensure the responsible and trustworthy use of AI. More information can be found here.