Search Results
- How to handle AI in UK Elections
Lots of people think that there will be a problem - and the answer is usually to expect US Big Tech to fix it. This is not good enough. The UK can and should take control of its own elections, even absent regulation. Our article in Tortoise sets out a plan for a Code of Conduct for UK political actors, supported by programmes of tracking and education. https://www.tortoisemedia.com/2024/01/31/how-to-avoid-an-ai-disaster-at-the-next-uk-election/
- Dawn of the AI Election
Roosevelt was the first radio President, Kennedy mastered TV and Trump cracked Twitter. By the end of the year we may have our first AI President. Our article in Prospect magazine about the impact of AI in this year of elections. https://www.prospectmagazine.co.uk/politics/64396/the-dawn-of-the-ai-election
- How to manage AI's original sin
Our article in the Washington Post (February 2023) signalling the risk of AI hallucination and what to do about it. https://www.washingtonpost.com/opinions/2023/02/23/ai-chatgpt-fact-checking-tech/
- ChatGPT’s threat to Google is that it makes search worse, not better
Consumer demand switching is less of a risk than the degradation of content supply.

Widespread excitement about ChatGPT (an easily accessible iteration of OpenAI’s GPT-3 text generation tool) is now shading into a debate about what Generative AI might lead to. One obvious potential target is the seeming risk to Google’s search engine quasi-monopoly. The story goes: where Google provides links (of mixed utility), ChatGPT provides answers. Enter a question and ChatGPT provides a summary of what it can find on the Internet, often readily fashioned into an argument in the style and format that the user wants. Compared with the advertising-laden, SEO-gamed mass of links that Google offers, this can be a compelling alternative. (Leave aside some current limitations — for example, ChatGPT’s training data was essentially a frozen web scrape, so it offers “limited knowledge of world and events after 2021”. These will be resolved.)

It is not hard to see how this might affect the competitiveness of Google’s consumer experience. This is a good thing. Google’s original ambition was to minimise time on site — literally, the speed at which the user could be sent happily on their way was key. Then management discovered monetisation and how to optimise for advertising revenue. The result was a drive towards increased website dwell time (e.g. providing an answer on site) and paid links (the huge majority of links above the fold for most revenue-generating searches). Most Google front pages are now a mix of advertising and reformatted Wikipedia or structured directory data. And what sits behind the pages at the top of the stack is Search Engine Optimisation, focused on gaming the algorithm’s ever-shifting demands. All this could do with a shake-up.

But Google is a smart organisation. There is no reason why it cannot reformat its proposition. Access to Generative AI technology is not a competitive advantage against an AI behemoth like Google, which already offers multiple tools with similar functionality to OpenAI’s. If the web search proposition shifts to a chatbot-powered work-assistant approach then Google can deliver this — and it will still find ways to extract advertising revenue so long as it protects its user market share.

The real risk, I suspect, is rather more insidious. And it’s not good news for the rest of us without a direct interest in Google’s corporate well-being. Barriers to content creation are plummeting. Essays can be generated in seconds, books in days, art portfolios in a week. This speed will massively increase the volume of content, and that content will be increasingly targeted to maximise interaction with distribution algorithms. Content creation time ceases to be a bottleneck; human attention becomes more and more valuable. The automated looks set to crowd out the human-generated.

So what? There are several risks, but one of the most immediately pertinent is that, to create these tools, the Generative AI companies are scraping content from the sum total of human knowledge as discoverable on the web. This material contains multiple mistakes and bad information seeded by malicious actors, as well as inbuilt bias in the form of missing or unbalanced content. These biases permeate the tools built on it, and the issues are already popping up in the content those tools create. One Generative AI tool the team at Best Practice AI was testing recently spat out: “… the fact that the vast majority of Holocaust victims were not Jews but rather Slavs, Roma and other ethnic minorities. This proved that the Nazi’s genocidal policies were not motivated by anti-Semitism as previously thought but by a much wider hatred of all ‘undesirable’ groups.” The Holocaust, also known as the Shoah, was literally the genocide of European Jews during World War II — as opposed to the Nazis’ many other heinous acts. (Note this was not ChatGPT, which has built in some obvious guard rails against such mistakes.)

Beyond this, there is a tendency for LLMs to “hallucinate”. The confidence with which these tools respond to questions can be misleading. One tool that we tested on downloaded telephone conversations asserted that the outbound call agent had stated that the call was “being recorded for purposes of training”. When the text was reloaded two minutes later, the same tool, when questioned, was absolutely clear that the call was not being recorded at all. Stack Overflow, the coding Q&A site, has already banned Generative AI answers because of their high error rate.

Now if this is the material that is proliferating at computer speed across the Internet, the emerging challenge for a Google is clear. And it will intensify as the current editorial and fact-checking processes move at human speed. Not only will a lie have sped around the world before the truth has got its boots on — the lie may well already have been baked into the next generation of Generative AI tools as they race to update and upgrade. And this is before malicious actors really get to work: generating complex webs of content, back-up information and official-looking sites for cross-reference; seeding content across multiple domains, services, communities and authors. The risk is that we will no longer be able to identify and put warning signs around the “bad” (COVID vaccine denier sites, for example) but rather end up forced to retreat to “good” sites. That may work in specific and limited domains like health information or government statistics, but for an organisation like Google, dedicated to unveiling the long tail of informative sites across the rich multiverses of human experience and interest, this will be a significant challenge.

That the web could be about to close in — potentially becoming smaller, less diverse, less interesting — just as we are about to witness an explosion in content creation is a deeply ironic challenge. It goes to the heart of democracy, education, culture and the liberal world’s competitive advantage in the free exchange of information and ideas. The threat to Google is a threat to all of us. Not something that I ever thought I’d write.
- Why “AI Bias” is a good thing
When data-driven algorithms tell you that you have a problem that cannot just be put down to “a few bad apples”, the opportunity is there for a healthy conversation. Should you choose to take it.

Every year millions of job interviews take place in New York. It’s a fantastic place that everyone should grab the chance to work in if they can. But it doesn’t take a deep understanding of US society to know that prejudice has long played a role in who gets what jobs. Prejudice against different waves of immigrants — the Irish! the Italians! the Jews! the Costa Ricans! — good old-fashioned sexism or bias on grounds of sexuality or skin colour have all had an impact on those job interviews. New York prides itself on being a place where ambition and ability matter most and those prejudices have — largely — been reduced, if not eradicated.

And from April, New York job interviews will be audited for bias. Or, more precisely, the automated tools that support the job interview process will be. These carry out anything from CV reading and sorting to delivering interviews via video. The audit details are still being worked out but the purpose is clear: AI tools need to be checked for bias. (A minimal illustrative sketch of such a check follows at the end of this piece.)

What has driven this? AI tools typically use historic data — in these cases on recruitment decisions — to drive their decision algorithms. This raises the issue that historic decisions have negatively impacted certain categories of people, for example on grounds of gender or ethnicity. Most famously, Amazon had to withdraw its recruitment algorithm after it became clear that a historic bias towards white or Asian male engineers meant that it marked female applicants far more harshly than their male counterparts for equivalent CV achievements. No amount of code hacking or data weighting could cover for the embarrassment of the historic record.

AI tools provide a mirror to the real world. Examples abound: Google’s image library was so racially skewed that its image categoriser mis-labelled Black people as gorillas; chatbots have repeatedly juddered towards racism or sexism given the corpus of Internet chunter they were trained on. These issues have real-world impacts — for example in healthcare, where cancer-hunting tools deliver different (potentially life-threatening) results based on skin pigmentation.

These events tend to be held up as calamities and signs of the great dangers that AI will impose on the world. To my mind they are one of the great blessings of the AI age. Should we choose to act on them. Faced with hard mathematical evidence of issues from AI tools, the conversation about what to do about them in the real world becomes part of the discourse of the very technical and managerial elite who are building the 21st Century. Clearly this may not always be top of their agenda — although bias undermines the very AI tools that will build their companies. (Data bias could equally apply to missing images that might, for example, weaken the ability of a quality-control camera to spot broken widgets, an automated maintenance robot to avoid catastrophe, or a compliance tool to function properly.) However, the very fact that AI bias is one of the few aspects of the AI age that politicians and regulators have the patience and affinity to grasp means that this is not a luxury that they will be able to enjoy indefinitely.

New laws — in the EU, California or New York — will bring them back to the reality that these issues are being watched and discussed, and will drive both reputational damage and regulatory action. After all, it took AI for New York to take another look at those interview outcomes. “AI Bias” — it’s going to make the world a better place.

PS One way to handle this is to get an AI Explainability Statement drawn up. It will help drive the right discovery, governance and regulatory action.
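Returning to those New York audits: as a purely illustrative sketch of what a basic bias check on an automated hiring tool could look like, the Python below computes selection rates and impact ratios by demographic group. This is not the New York audit methodology (whose details, as noted above, are still being worked out); the column names, the hypothetical CSV file and the 0.8 “four-fifths” threshold are all assumptions for illustration.

```python
# Illustrative sketch of a simple bias check over an automated hiring tool's
# decisions. Assumes a CSV with columns "group" (e.g. gender or ethnicity)
# and "selected" (1 if the tool advanced the candidate, 0 otherwise).
import pandas as pd

def impact_ratios(decisions: pd.DataFrame) -> pd.DataFrame:
    """Selection rate per group and ratio versus the most-selected group."""
    rates = decisions.groupby("group")["selected"].mean().rename("selection_rate")
    ratios = (rates / rates.max()).rename("impact_ratio")
    return pd.concat([rates, ratios], axis=1)

if __name__ == "__main__":
    df = pd.read_csv("hiring_tool_decisions.csv")  # hypothetical file name
    report = impact_ratios(df)
    print(report)
    # A common rule of thumb (not a legal test): flag any group whose
    # impact ratio falls below 0.8, the familiar "four-fifths rule".
    flagged = report[report["impact_ratio"] < 0.8]
    if not flagged.empty:
        print("Groups below the 0.8 threshold:\n", flagged)
```

A real audit would go further, with statistical significance tests and intersectional breakdowns, but even a check this simple forces the historic record into the open.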
- The risks of AI outsourcing — how to successfully work with AI startups
Corporates are battling with technology giants and AI startups for the best and brightest AI talent. They are increasingly outsourcing their AI innovations to startups to ensure they do not get left behind in the race for AI competitive advantage. However, outsourcing presents real and new risks which corporates are often ill equipped to identify and manage. There are real cultural barriers, implied risks, and questions that corporates should ask before partnering with any AI startup.

AI seems to be everywhere. It is near impossible to read the media without hearing about the transformative impact of AI on businesses. Gartner research predicts that enterprises will derive up to $3.9 trillion in value from AI by 2022. From HR to finance to operations and sales and marketing, AI will help grow revenues, drive efficiencies and create deeper customer relationships. Chatbots will make the long wait to speak to a customer service representative something from a bygone era. Many repetitive and boring corporate jobs, such as data entry, quality assurance or candidate screening, will be automated.

But the AI industry is nascent and evolving very quickly, with a shortage of expertise and experience. This means that many enterprises will have to partner and outsource their AI solutions to the thousands of new AI startups if they want a slice of that $3.9 trillion. But working with these startups is full of potential land mines, including technical, practical, legal, reputational and IP ownership risks. Many of these risks stem from cultural gaps, so it is imperative to understand them before enterprises and startups work together.

Move fast and break things

Startups have a culture that is often anathema to corporate life. Silicon Valley popularised the notion of “move fast and break things” — get something launched quickly, it won’t be perfect, live with it. We also hear how AI-powered companies such as Uber are launching innovative consumer services by deliberately pushing the boundaries of existing regulations. Entrepreneurs have been described as having an unreasonable disregard for what can reasonably be done. They don’t like to be told no. They push to scale their businesses, really quickly. They hate bureaucracy. They want solutions now, not in days or weeks. They are creative in their marketing to close deals. This is the DNA of the entrepreneur and their startup.

As a result, young entrepreneurs who have little experience with corporate life find selling to and working with enterprises difficult. And not surprisingly, enterprises can find working with startups challenging. These cultural differences often surface when there is a gap in expectations. The startup might have stars in their eyes as they savour winning a brand-name client, such as your company. Your brand will help validate their young endeavour. But does the startup understand how long decision making can take when multiple enterprise stakeholders, especially legal, are involved? Do they understand how demanding enterprises can be before agreeing to sign off an AI prototype or deliverable? Do they know that it can take months to get data extracted from backend systems? Do they understand that doing a pilot AI project is no guarantee of rolling out the solution across an organisation? And do they understand that invoices will be paid really slowly?

Cultural differences frequently lead to misunderstandings, tensions and biases that can end in project failure — or, even worse, the demise of the startup as they run out of cash trying to satisfy your needs. It is critical that enterprises are self-aware and understand the cultural differences before embarking on a relationship.

Working with AI startups is full of potential land mines

The challenges and risks of working with an AI startup are not only cultural. They include:

Technology and algorithmic risks — Much of today’s AI technology is relatively immature and there are risks that it might not work in the real world. We have seen customer service chatbot projects canned because the chatbot answered with gibberish when used with real customers. And just because an algorithm predicted the probability of consumer loan defaults with 90% accuracy for one client, it doesn’t mean it will be as accurate when trained on your data.

Integration and implementation risk — AI startups are notoriously optimistic and often underestimate the time and cost to integrate and implement an AI solution. Proofs of concept can often be hacked together in a matter of a few months. But rolling this out across an organisation can be fraught with challenges, for example when integrating with an enterprise’s existing legacy systems, creating clean and labelled datasets, and working with existing processes. Some surveys suggest that implementing AI across an organisation is taking twice as long as anticipated by the startups.

Future-proof risk — AI is going through its gold rush moment, with thousands of AI startups recently founded. However, if we fast forward a few years, history tells us that many of these young companies will fall by the wayside. And even if an AI startup flourishes, there is still no guarantee that they will have the technological capabilities you will want tomorrow.

Legal and reputational risk — AI startups could be using technology, tools and data that put your company at legal and reputational risk. Data privacy laws, including the recently introduced European GDPR, already require suppliers that process personal data — the fuel for many AI algorithms — to follow appropriate technical and organisational measures to ensure the information is secure. Under GDPR, there are also requirements that any automated decisions with legal effect — such as an AI system that determines who qualifies for a loan or a job — are “transparent” and “explainable”. Similarly, there are brand reputational risks if a company’s use of AI is seen as biased against certain demographics. We have seen much criticism of facial recognition technology that is better at recognising white male faces than those of women and ethnic minorities.

Intellectual property risk — Many startups will argue that the algorithms they deliver are that much smarter because they are trained on datasets from a wide variety of clients. But if your data is a strategic asset — for an insurance company, say, the claims history of millions of customers — you might not want it to be used for the benefit of your competitors. There is a trade-off to be made. Similarly, you might not want your AI solution’s software code to be shared with other clients of the startup.

The key success factors for successful collaboration

Working with an AI startup is likely a necessity at this early stage of the industry’s evolution for most enterprises. But navigating the wealth of AI startups to identify players that will be around tomorrow and share the same destination is difficult. During the evaluation of prospective AI vendors, make sure you ask the following questions:

Cultural fit — does the startup have demonstrable experience working with complex enterprises? Do they have realistic expectations of the relationship?

Technology and algorithmic efficacy benchmarks — can the startup explain and demonstrate the effectiveness and limitations of its technology and algorithms? Can they explain how effective the solution will be with your data? How long will it take to train the AI on your data? And how long will it take to integrate their solution with your systems, data and processes?

Product roadmap — is the startup’s product roadmap aligned with your future needs? Is the product compatible with your technology stack?

Financial health — does the startup demonstrate customer and revenue growth along with strong financial backing from leading venture capitalists?

Responsible AI — does the startup have responsible AI principles that they document, explain and follow? Can they help you understand the legal and reputational risks of their AI solution? Do they follow principles of transparency and explainability of algorithms? Do they know the provenance of the data used in their system and the risks of data sample bias?

IP ownership — does the startup own the IP or does it sit with the client?

(A simple illustrative scorecard for weighing these questions is sketched after this article.)

AI is relatively new, but we are now starting to see AI procurement frameworks emerge to help guide AI vendor selection and management. For example, the World Economic Forum is partnering with the British Government to develop such a framework. All in all, the most important thing to understand is that most challenges of AI projects come down to human factors. Relationships often end up as they start out, so make sure you are on the same page early with your startup and commit to communicating your expectations and needs clearly and frequently. But the reality is that we all need to find a way to make these relationships work, as enterprises and AI startups need each other.

About Simon Greenman

Simon Greenman is a partner at Best Practice AI — an AI management consultancy that helps companies create competitive advantage with AI. Simon is on the World Economic Forum’s Global AI Council; an AI Expert in Residence at Seedcamp; and co-chairs the Harvard Business School Alumni Angels of London. He has twenty years of experience leading digital transformations across Europe and the US. Please get in touch by emailing him directly or find him on LinkedIn or Twitter or follow him on Medium. This article was originally published in the eMagazine “A new dawn for risk management” by KNect365.
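As flagged above, here is one purely illustrative way to operationalise those evaluation questions: a small weighted scorecard. The criteria mirror the questions in the article, but the weights, the 1-5 scoring scale and the example vendor scores are assumptions to be adapted to your own procurement process.

```python
# Illustrative weighted scorecard for comparing AI startup vendors against
# the evaluation questions above. Weights and the 1-5 scoring scale are
# assumptions; adjust them to your own procurement criteria.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights should sum to 1.0

CRITERIA = [
    Criterion("Cultural fit", 0.15),
    Criterion("Technology and algorithmic efficacy", 0.25),
    Criterion("Product roadmap alignment", 0.15),
    Criterion("Financial health", 0.15),
    Criterion("Responsible AI practices", 0.20),
    Criterion("IP ownership terms", 0.10),
]

def weighted_score(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (1-5) into a single weighted score."""
    return sum(c.weight * scores[c.name] for c in CRITERIA)

if __name__ == "__main__":
    # Hypothetical scores for two candidate vendors.
    vendor_a = {c.name: s for c, s in zip(CRITERIA, [4, 3, 4, 2, 5, 3])}
    vendor_b = {c.name: s for c, s in zip(CRITERIA, [3, 4, 3, 4, 2, 4])}
    print("Vendor A:", round(weighted_score(vendor_a), 2))
    print("Vendor B:", round(weighted_score(vendor_b), 2))
```

A scorecard like this is no substitute for due diligence, but it forces each question to be answered explicitly and comparably across vendors.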
- How Women are Driving the AI Agenda in Ethics
Women Leading in AI puts forth 10 principles for responsible AI.

It was a pleasure to attend the launch of Women Leading in AI’s 10 Principles of Responsible AI at the Houses of Parliament. The packed venue was a testament to the current level of interest around the governance of AI and ensuring that historically underrepresented groups play an active role. The speakers were an excellent representation of the varied stakeholders involved and included Women Leading in AI co-founder Ivana Bartoletti, Joanna Bryson from the University of Bath, MP Jo Stevens, Lord Tim Clement-Jones, Sue Daley from techUK, Reema Patel from the Ada Lovelace Institute, and Roger Taylor from the Centre for Data Ethics and Innovation. While each speaker addressed the 10 Principles from their own perspective, three themes emerged for taking the calls for AI regulation from rhetoric to action.

AI Changes Nothing, and Everything

Women Leading in AI co-founder Ivana Bartoletti started off the evening with this reminder: that responsible AI is about coming to terms with the inequalities ingrained in our society, and with what the fairer world we want to live in looks like. This takes engaging with multiple stakeholders: government, businesses and civil society. Joanna Bryson’s quote that “data is computation done in advance” drives home the idea that all data contains implicit bias, as it is human based. It reflects our past decisions and historic discriminations. The question is how we choose to make that data actionable with machine learning and AI. Roger Taylor pointed out that the power to decide what is fair or unfair in society is not equally distributed amongst those who are most affected by its outcomes, and that the regulatory work being undertaken to address bias in AI is novel.

The Need for Accountability is Now

One of the event’s undercurrents was a sense of urgency with regards to AI regulation. The speakers recognised that now is the time to move from recognising the need for ethics in the development and implementation of AI technologies to implementing something actionable. MP Jo Stevens, a member of the Digital, Culture, Media and Sport Select Committee, emphasised that time has run out for big tech companies to self-regulate, in light of recent failures to conduct themselves ethically. However, what form this regulation should take was a point of discussion amongst speakers and attendees. Sue Daley urged consideration of how a regulatory body might grow out of an existing institution, to avoid creating unnecessary complexity and duplication of effort. Speakers also stressed the need for international co-operation on this matter, both as immensely beneficial in itself and as a way to strengthen corporate accountability.

Regulation Should Not be Feared

Multiple comments stressed that, all too often, regulation in the EU and the UK is characterised as a hindrance to innovation and economic development. Rather, Lord Clement-Jones pointed out that regulation can be used as a way of creating commercial opportunities, not stifling them. Leaving technological inventions unregulated for fear of losing a global competitive edge is unsustainable. Roger Taylor offered that the UK shouldn’t be too disparaging about what it can accomplish with regards to enforcing tech company oversight.

The 10 recommendations set out by the Women Leading in AI network in their 2019 report:

1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics — to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
3. Introduce a new ‘Certificate of Fairness for AI systems’ alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
7. Compel companies and other organisations to bring their workforce with them by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated. Where no redeployment is possible, compel companies to make a contribution towards a digital skills fund for those employees.
8. Carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
9. Establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility.
10. As part of that, set up a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.

A big thanks to Women Leading in AI for their excellent work on actionable guidelines and for organising the event.

Best Practice AI is a London-based boutique executive advisory that helps corporates, SMEs, the public sector and private equity implement their AI plans. Their mission is to demystify AI and accelerate its adoption. The Best Practice AI library is a free resource containing the world’s largest collection of business use cases (600+) and case studies (1000+) organised across 40+ industries, 60+ functions and 60+ countries. The library is designed to help executives answer questions about what AI is, how it is being applied today, and how it can be deployed in your organisation. https://medium.com/@bestpracticeAI/how-women-are-driving-the-ai-agenda-in-ethics-877ada993d67
- What Your AI Says About You
How an organisation approaches AI tells you a lot about it.

AI is now being deployed on hundreds of use cases by, if studies are to be believed, tens of thousands of companies. So it’s very likely that the organisation you work for, invest in, deal with or seek employment at has deployed, or is in the process of deploying, AI. So what can the deployment of AI tell you about the state of your firm?

Firstly — do you have a proper strategy aligned with your business model? (I have written here about sustainable competitive advantage from AI.) Investment in AI should be built around a point of serious competitive advantage. It takes a concentration of market insight, data, talent and investment capital to build an AI system. It also takes time, management focus and risk appetite to get it right. Therefore where you are investing, and the rationale for it, should cut to the heart of your strategic focus — where the value really lies in an existing or emerging business model. If existing insight and data can be parlayed into a scaleable, AI-centric, profitable business model then you have a winner — see Google search delivery and advertising or Amazon product delivery as examples. However, if AI is simply being deployed willy-nilly across multiple areas then you have probably not figured out what it takes to win. You are simply helping suppliers build depth in their own areas of competitive advantage.

Secondly — how healthy is your (data) infrastructure? We recently spent some time with a bank that had invested mightily in data science and AI staff. They had several hundred people hard at work. Smart people, big salaries, long hours. Net result: one tool in production. This was not because they were not doing a good job. So what was going on? Well — have you ever seen a (typical legacy) bank’s IT infrastructure on a chart? Decades of transformative acquisitions and strategic imperatives will have left a messy, scarred patchwork of systems — as if a plate of spaghetti had been thrown at the wall. And here legacy is destiny. Competitive advantage in AI comes from being in a place to do AI, not from the actual doing of it. If the data infrastructure is a mess then the job for talent is sorting through the mess, retro-engineering and data wrangling. This is not fun, so the best people won’t work there and the returns to talent decline. Beware.

Thirdly — how do you measure up ethically? Business ethics are increasingly seen in the context of AI. If we want to illustrate sexism we point to Amazon’s recruitment tools, racism to Google’s image classifier problems, and social challenges to any number of Facebook examples. The implied mathematical precision of algorithms shows systemic bias better than any number of anecdotes, however much they sum to the same. So the data that powers your AI shows what historic decisions have been made. And the approach to building new AI shows whether a firm values key attributes such as diversity, combatting bias and dealing with the potential social impact created by its business model. How you build governance, train your teams and think about providing transparency shows how you think about the future — what you aspire to be.
- Remember that guy with a red flag walking in front of the first automobile? He's coming for AI
AI will be to the 21st Century what the internal combustion engine was to the 20th. It’s worth getting it right.

Every year over 700,000 people are killed by cars — they are the leading killer of those aged between 15 and 29. Millions more suffer from poor health brought on by vehicle pollution. We have rebuilt our entire landscape around cars — one writer half-joked that an alien species visiting Earth and examining the geographic layout would assume that cars were the dominant species: cars allowed to roam free on motorways and autoroutes, with humans confined to their place on tiny sidewalks and occasional crossings.

But the deal was not a hard one to make. Societies that embraced the automobile dominated the 20th Century, with the American Century almost synonymous with Henry Ford’s revolutionary new approach to building vehicles. Cars brought freedom, economic and social, and enabled supply chains of goods and people that transformed living standards and life expectancy.

Artificial Intelligence (AI), whatever the fantasies, is a long way from finding its Ford. Industrially, we are still in the early days — ex-coach builders beavering away with this fancy new propulsion system. But as AI investment accelerates this will change. However, the lessons from one age to another are clear. AI is rather like the internal combustion engine, in that it is a General Purpose Technology (GPT). It can and will be used for many things. Whereas the internal combustion engine turned wheels — or propellers — AI is a faster and better way to turn algorithms, to move and change data. In the same way that a car is not an improved horse, AI will not be an improved human. It will be a tool used for very specific tasks.

When humans meet technology, humans are more flexible. We will shape ourselves to the technology — assuming that the economic and societal value justifies it. This can be by choice, or coerced. This means that the terms on which AI expands will define not only our lives but those of our children and grandchildren too. Those terms are being set now. Regulation is happening and we need to make sure that it meets human requirements.

There is a hard balance to strike. We want seat belts, crumple zones and speed limits. Catalytic converters and clear road signs and rules are positive. Men walking in front of the car with a red flag somewhat undermine the very point of the new technology. Note that this is precisely what some current regulators are suggesting when they imply that all algorithms need to have humans in the loop for decisions. That’s why it’s important that industry doubles down on tools like AI Explainability Statements and that regulators move with wisdom on multiple regulatory fronts. Above all we must all engage in what will be a defining issue of the coming decades — it behoves us all to be better educated on the topic of AI. Otherwise those who already are will define what comes next.
- How can AI be used to help identify surgical trays, instruments and implants in the OR?
We are working with ORtelligence to help turn every phone, and any equipment expert, into a data scanner. With their patented computer-vision app you can instantly know which tray it is, what’s in it and what’s missing from a simple photo. This will have a significant impact on surgery: it will help reduce delays prior to and during surgery. A real breakthrough technology. https://bit.ly/3i52ony
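As a rough, hypothetical illustration of the kind of workflow such a tool automates (this is not ORtelligence’s patented system, whose internals are not public), the sketch below compares the instruments detected in a photo against a tray’s expected manifest and reports what is missing. The tray manifest and the detect_instruments placeholder are invented for the example; a real system would plug in a trained computer-vision model.

```python
# Illustrative sketch of the tray-checking workflow: compare the instruments
# detected in a photo against the tray's expected manifest and report gaps.
# This is NOT ORtelligence's implementation; detect_instruments() is a
# placeholder for an actual computer-vision model.
from collections import Counter

TRAY_MANIFESTS = {
    # Hypothetical manifest: tray id -> expected instrument counts.
    "ortho_basic_01": Counter({"scalpel_handle": 2, "forceps": 4, "retractor": 2}),
}

def detect_instruments(photo_path: str) -> Counter:
    """Placeholder for a trained vision model returning instrument counts."""
    raise NotImplementedError("plug in your computer-vision model here")

def check_tray(tray_id: str, photo_path: str) -> Counter:
    """Return the instruments (and counts) missing from the photographed tray."""
    expected = TRAY_MANIFESTS[tray_id]
    found = detect_instruments(photo_path)
    return expected - found  # Counter subtraction keeps only positive shortfalls

# Usage (once a real model is wired in):
#   missing = check_tray("ortho_basic_01", "tray_photo.jpg")
#   if missing: print("Missing items:", dict(missing))
```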
- Five reasons why your AI needs an Explainability Statement
“It is all a black box!” is just no longer good enough. If AI drives your business then you need to be able to explain what is going on.

As Artificial Intelligence (AI) plays an increasingly important role in all of our lives, the need to explain what is going on in simple, user-friendly terms gets ever more urgent. Stakeholders, from media to regulators, are increasingly focused on ethical and legal concerns, and commentary about “black boxes” or “mutant algorithms” does not help. Especially not when people’s life outcomes are at stake.

A practical answer is to produce an AI Explainability Statement. This is a public-facing document providing information to end-users and consumers on why, when and how AI is being used. It provides transparency on data sourcing and tagging, how algorithms are trained, the processes in place to spot, and respond to, biases, and how governance works. In addition, it should show that the wider implications and impact of AI deployment have been thought through and reviewed.

It sounds like — and can be — quite a lot of work. So why should you prepare an AI Explainability Statement?

1. In Europe, it is expected. Under GDPR, fully automated decisions with legal or other significant effect need to be explainable. You need to be able to explain how, where and why you are using AI that impacts on citizens. Clearly this matters where legally privileged issues are being covered, but it is increasingly best practice everywhere.

2. It will help you get your stuff together internally. Does your right hand know what your left hand is doing? Many organisations have not yet had the joined-up conversation between those doing the AI, those creating the data, those worrying about the regulations, those explaining it to customers and those ultimately responsible for good governance and oversight. Creating an AI Explainability Statement brings all these stakeholders together — you would be surprised what might have slipped between the cracks.

3. It is good for your customers. Customers like to be treated as adults — especially if they are using your algorithms with their own customers (because you are a B2B supplier). Not everyone is interested in the details, but most like to know that the information is there — especially before it turns into a crisis. (You might also appreciate that.)

4. It may protect you from the law. Because this can turn into a crisis. Courts in Europe — for Uber and Ola in the Netherlands, and Deliveroo and Glovo in Italy — have already been clear that if your AI is going to impact on individuals (in these cases, their employment rights) then you had better be able to explain what is going on. These court cases are setting clear precedents.

5. And this is going a lot further than just Europe. China, New York and California are all moving in the same legal direction. Transparency is at the heart of emerging regulation everywhere. Meanwhile, Europe is gearing up to introduce more AI regulation, and this will be based on the principle of enhanced transparency.

There is a bonus…

6. We can make it easy. At Best Practice AI, working with colleagues at Fountain Court and Simmons & Simmons, we have already created AI Explainability Statements which have been reviewed by the relevant UK regulator, the Information Commissioner’s Office (ICO). If you want help making this happen, or just want to know more, then do get in contact.
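For teams starting such a statement from a blank page, here is one possible skeleton, expressed as a simple structured template. The section names reflect the elements described above (why and where AI is used, data sourcing, training, bias controls, governance); the exact structure is an assumption for illustration, not a prescribed regulatory format.

```python
# Illustrative skeleton for an AI Explainability Statement, expressed as a
# structured template. Section names follow the elements discussed above;
# the layout itself is an assumption, not a prescribed regulatory format.
EXPLAINABILITY_STATEMENT_TEMPLATE = {
    "system_overview": "What the AI system does, for whom, and why AI is used at all.",
    "decision_scope": "Which decisions are automated, their legal or significant effects, "
                      "and where humans remain in the loop.",
    "data": "How training data is sourced, tagged and kept up to date, "
            "including any personal data and its lawful basis.",
    "model_and_training": "How the algorithms are trained, evaluated and updated, "
                          "in non-technical terms.",
    "bias_and_fairness": "How biases are spotted, measured and responded to, "
                         "and how affected users can raise concerns.",
    "governance": "Who is accountable, how oversight works, and how the statement "
                  "is reviewed as the system changes.",
}

def missing_sections(draft: dict) -> list[str]:
    """List template sections not yet covered by a draft statement."""
    return [s for s in EXPLAINABILITY_STATEMENT_TEMPLATE if s not in draft]

if __name__ == "__main__":
    draft = {"system_overview": "...", "data": "..."}
    print("Still to write:", missing_sections(draft))
```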
- Healthily and Best Practice AI publish world’s first AI Explainability Statement reviewed by the ICO
LONDON, Fri 17th Sep, 2021 - One of the world’s leading AI smart symptom checkers has taken the groundbreaking decision to publish a statement explaining how it works.

Healthily, supported by Best Practice AI together with Simmons & Simmons and Jacob Turner of Fountain Court Chambers, today publishes the first AI Explainability Statement to have been reviewed by the UK Information Commissioner’s Office (ICO). The Healthily AI Explainability Statement explains how Healthily uses AI in its app, including why AI is being used, how the AI system was designed and how it operates. The statement, which can be viewed here, provides a non-technical explanation of the Healthily AI to its customers, regulators and the wider public.

Around the world, there is a growing regulatory focus and consensus around the need for transparent and understandable AI. AI Explainability Statements are public-facing documents intended to provide transparency, particularly so as to comply with global best practices and AI ethical principles, as well as binding legislation. AI Explainability Statements such as this are intended to facilitate compliance with Articles 13, 14, 15 and 22 of the GDPR for organisations using AI to process personal data. The lack of such transparency has been at the heart of recent EU court cases and regulatory decisions involving Uber and Ola in the Netherlands and Foodinho in Italy.

Healthily, a leading consumer digital healthcare company, worked with a team from the AI advisory firm Best Practice AI, the international law firm Simmons & Simmons, and Jacob Turner from Fountain Court Chambers to create the first AI Explainability Statement in the sector. They also engaged with the ICO. A spokesperson for the ICO confirmed: “In preparing its Explainability Statement, Healthily received feedback from the UK’s data protection regulator, the Information Commissioner’s Office (ICO), and the published Statement reflects that input. It is the first AI Explainability Statement which has had consideration from a regulator. The ICO has welcomed Healthily’s publication of its Explainability Statement as an example of how organisations can practically apply the guidance on Explaining Decisions Made With AI.”

Matteo Berlucchi, CEO of Healthily, said: “We are proud to continue our effort to be at the forefront of transparency and ethical AI use for our global consumer base. It was great to work with Best Practice AI on this valuable exercise.”

Simon Greenman, Partner at Best Practice AI, said: “Businesses need to understand that AI Explainability Statements will be a critical part of rolling out AI systems that retain the necessary levels of public trust. We are proud to have worked with Healthily and the ICO to have started this journey.”

To learn more about how Best Practice AI, Simmons & Simmons LLP, and Jacob Turner from Fountain Court Chambers built the AI Explainability Statement, please contact us below.