
Healthily and Best Practice AI publish world’s first AI Explainability Statement reviewed by the ICO

Updated: Apr 20, 2022

LONDON, Fri 17th Sep, 2021 - One of the world’s leading AI smart symptom checkers has taken the groundbreaking decision to publish a statement explaining how it works.

Healthily, supported by Best Practice AI together with Simmons & Simmons and Jacob Turner of Fountain Court Chambers, today publishes the first AI Explainability Statement to have been reviewed by the UK Information Commissioner’s Office (ICO).


The Healthily AI Explainability Statement explains how Healthily uses AI in its app, including why AI is being used, how the AI system was designed, and how it operates.


The statement, which can be viewed here, provides a non-technical explanation of the Healthily AI to its customers, regulators and the wider public.


Around the world, there is a growing regulatory focus on, and consensus around, the need for transparent and understandable AI. AI Explainability Statements are public-facing documents intended to provide transparency, in line with global best practices and AI ethical principles as well as binding legislation. Statements such as this one are intended to facilitate compliance with Articles 13, 14, 15 and 22 of the GDPR for organisations using AI to process personal data. A lack of such transparency has been at the heart of recent EU court cases and regulatory decisions involving Uber and Ola in the Netherlands and Foodinho in Italy.


Healthily, a leading consumer digital healthcare company, worked with a team from the AI advisory firm, Best Practice AI, the international law firm Simmons & Simmons, and Jacob Turner from Fountain Court Chambers to create the first AI Explainability Statement in the sector.


They also engaged with the ICO. A spokesperson for the ICO confirmed:

“In preparing its Explainability Statement, Healthily received feedback from the UK’s data protection regulator, the Information Commissioner’s Office (ICO), and the published Statement reflects that input.


It is the first AI Explainability Statement which has had consideration from a regulator.


The ICO has welcomed Healthily’s publication of its Explainability Statement as an example of how organisations can practically apply the guidance on Explaining Decisions Made With AI”.


Matteo Berlucchi, CEO of Healthily, said:

“We are proud to continue our effort to be at the forefront of transparency and ethical AI use for our global consumer base. It was great to work with Best Practice AI on this valuable exercise.”


Simon Greenman, Partner at Best Practice AI, said:

“Businesses need to understand that AI Explainability Statements will be a critical part of rolling out AI systems that retain the necessary levels of public trust. We are proud to have worked with Healthily and the ICO to have started this journey.”


Explaining decisions made with AI

To learn more about how Best Practice AI, Simmons & Simmons LLP, and Jacob Turner from Fountain Court Chambers built the AI Explainability Statement, please contact us below.


