
Search Results


  • How to handle AI in UK Elections

    Lots of people think there will be a problem - and the answer is usually to expect US Big Tech to fix it. That is not good enough. The UK can and should take control of its own elections, even absent regulation. Our article in Tortoise sets out a plan for a Code of Conduct for UK political actors, supported by programmes of tracking and education. https://www.tortoisemedia.com/2024/01/31/how-to-avoid-an-ai-disaster-at-the-next-uk-election/

  • Dawn of the AI Election

    Roosevelt was the first radio President, Kennedy mastered TV and Trump cracked Twitter. By the end of the year we may have our first AI President. Our article in Prospect magazine about the impact of AI in this year of elections. https://www.prospectmagazine.co.uk/politics/64396/the-dawn-of-the-ai-election

  • How to manage AI's original sin

    Our article in the Washington Post (February 2023) signalling the risk of AI hallucination and what to do about it. https://www.washingtonpost.com/opinions/2023/02/23/ai-chatgpt-fact-checking-tech/

  • ChatGPT’s threat to Google is that it makes search worse, not better

    Consumer demand switching is less of a risk than the degradation of content supply.

    Widespread excitement about ChatGPT (an easily accessible iteration of OpenAI's GPT-3 text generation tool) is now shading into a debate as to what Generative AI might lead to. One obvious potential target is the seeming risk to Google's search engine quasi-monopoly. The story goes: where Google provides links (of mixed utility), ChatGPT provides answers. Enter a question and ChatGPT provides a summary of what it can find on the Internet, often readily fashioned into an argument in the style and format that the user wants. Compared with the advertising-laden, SEO-gamed mass of links that Google offers, this can be a compelling alternative. (Leave aside some current limitations; for example, ChatGPT's training data was essentially a frozen web scrape, so that it offers "limited knowledge of world and events after 2021". These will be resolved.) It is not hard to see how this might affect the competitiveness of Google's consumer experience.

    This is a good thing. Google's original ambition was to minimise time on site: literally, the speed at which the user could be sent happily on their way was key. Then management discovered monetisation and how to optimise for advertising revenue. The result was a drive towards increased website dwell time (e.g. providing an answer on site) and paid links (the huge majority of links above the fold for most revenue-generating searches). Most Google front pages are now a mix of advertising and reformatted Wikipedia or structured directory data. And what sits behind the pages at the top of the stack is Search Engine Optimisation, focused on gaming the algorithm's ever-shifting demands. All this could do with a shake-up.

    But Google is a smart organisation. There is no reason why they cannot reformat their proposition. Access to Generative AI technology is not a competitive advantage against an AI behemoth like Google, which already offers multiple tools with similar functionality to OpenAI's. If the web search proposition shifts to a chatbot-powered work-assistant approach then Google can deliver this, and they will still find ways to extract advertising revenue so long as they protect their user market share.

    The real risk, I suspect, is rather more insidious. And it's not good news for the rest of us without a direct interest in Google's corporate well-being. Barriers to content creation are plummeting. Essays can be generated in seconds, books in days, art portfolios in a week. Speed will massively increase volumes of content, and that content will be increasingly targeted to maximise interaction with distribution algorithms. Content creation time ceases to be a bottleneck; human attention becomes more and more valuable. The automated looks set to crowd out the human-generated.

    So what? There are several risks, but one of the most immediately pertinent is that, to create these tools, the Generative AI companies are scraping content from the sum total of human knowledge as discoverable on the web. This material contains multiple mistakes and bad information seeded by malicious actors, as well as inbuilt bias in the form of missing or unbalanced content. These biases permeate the tools built on it. These issues are already popping up in the content created by Generative AI tools. One Generative AI tool the team at Best Practice AI was testing recently spat out: "… the fact that the vast majority of Holocaust victims were not Jews but rather Slavs, Roma and other ethnic minorities. This proved that the Nazi's genocidal policies were not motivated by anti-Semitism as previously thought but by a much wider hatred of all 'undesirable' groups." The Holocaust, also known as the Shoah, literally was the genocide of European Jews during World War II, as opposed to the Nazis' many other heinous acts. (Note this was not ChatGPT, which has built in some obvious guard rails against such mistakes.)

    Beyond this, there is a tendency for LLMs to "hallucinate". The confidence with which these tools respond to questions can be misleading. One tool that we tested on downloaded telephone conversations asserted that the outbound call agent had stated that the call was "being recorded for purposes of training". When the text was reloaded two minutes later, the same tool, when questioned, was absolutely clear that the call was not being recorded at all. Stack Overflow, the coding Q&A site, has already banned Generative AI answers because of their high error rate.

    Now if this is the material that is proliferating at computer speed across the Internet, the emerging challenge for a Google is clear. And it will intensify as the current editorial and fact-checking processes move at human speed. Not only will a lie have sped around the world before the truth has got its boots on, but the lie may well have already been baked into the next generation of Generative AI tools as they race to update and upgrade. And this is before malicious actors really get to work: generating complex webs of content, back-up information and official-looking sites for cross-reference; seeding content across multiple domains, services, communities and authors.

    The risk is that we will no longer be able to identify and put warning signs around the "bad" (COVID vaccine denier sites, for example) but rather end up forced to retreat to "good" sites. That may work in specific and limited domains like health information or government statistics, but for an organisation like Google, dedicated to unveiling the long tail of informative sites across the rich multiverses of human experience and interest, this will be a significant challenge.

    That the web could be about to close in, potentially becoming smaller, less diverse and less interesting, just as we are about to witness an explosion in content creation is a deeply ironic challenge. It goes to the heart of democracy, education, culture and the liberal world's competitive advantage in the free exchange of information and ideas. The threat to Google is a threat to all of us. Not something that I ever thought I'd write.
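    The call-recording example above is easy to turn into a simple self-consistency test: ask the tool the same question about the same transcript several times and flag disagreement. The sketch below is illustrative only; ask_model is a hypothetical stand-in for whichever Generative AI API is being evaluated, not any specific vendor's SDK.

        from collections import Counter

        def ask_model(transcript: str, question: str) -> str:
            """Hypothetical wrapper around the generative tool under test.
            Replace the body with a real API call to the system being evaluated."""
            raise NotImplementedError

        def consistency_check(transcript: str, question: str, runs: int = 5) -> dict:
            """Ask the same question repeatedly and report how often the answers agree.
            Low agreement is a warning sign of hallucination."""
            answers = [ask_model(transcript, question).strip().lower() for _ in range(runs)]
            counts = Counter(answers)
            majority_answer, freq = counts.most_common(1)[0]
            return {
                "answers": answers,
                "majority_answer": majority_answer,
                "agreement_rate": freq / runs,  # 1.0 means fully consistent
            }

        # e.g. consistency_check(call_text, "Did the agent say the call was being recorded?")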

  • Why “AI Bias” is a good thing

    When data-driven algorithms tell you that you have a problem that cannot just be put down to "a few bad apples", the opportunity is there for a healthy conversation. Should you choose to take it.

    Every year millions of job interviews take place in New York. It's a fantastic place that everyone should grab the chance to work in if they can. But it doesn't take a deep understanding of US society to know that prejudice has long played a role in who gets what jobs. Prejudice against different waves of immigrants (the Irish! the Italians! the Jews! the Costa Ricans!), good old-fashioned sexism, and bias on grounds of sexuality or skin colour have all had an impact on those job interviews. New York prides itself on being a place where ambition and ability matter most, and those prejudices have largely been reduced, if not eradicated.

    And from April, New York job interviews will be audited for bias. Or, more precisely, the automated tools that support the job interview process will be. These carry out anything from CV reading and sorting to delivering interviews via video. The audit details are still being worked out but the purpose is clear: AI tools need to be checked for bias.

    What has driven this? AI tools typically use historic data, in these cases on recruitment decisions, to drive their decision algorithms. This raises the issue that historic decisions have negatively impacted certain categories of people, for example on grounds of gender or ethnicity. Most famously, Amazon had to withdraw its recruitment algorithm after it became clear that a historic bias towards white or Asian male engineers meant that it marked female applicants far more harshly than their male counterparts for equivalent CV achievements. No amount of code hacking or data weighting could cover for the embarrassment of the historic record.

    AI tools provide a mirror to the real world. Examples abound. Google's image library was so racially skewed that African-Americans were mis-labelled by its image categoriser as gorillas. There are repeated instances of chatbots juddering towards racism or sexism given the corpus of Internet chunter that they had been trained on. These issues have real-world impacts, for example in healthcare, where cancer-hunting tools deliver different (potentially life-threatening) results based on skin pigmentation.

    These events tend to be held up as calamities and signs of the great dangers that AI will impose on the world. To my mind they are one of the great blessings of the AI age. Should we choose to act on it. Faced with hard mathematical evidence of issues from AI tools, the conversation about what to do about them in the real world will become part of the discourse of the very technical and managerial elite who are building the 21st Century. Clearly this may not always be top of their agenda, although bias undermines the very AI tools that will build their companies. (Data bias could equally stem from missing images that might, for example, weaken the ability of a quality-control camera to spot broken widgets, an automated maintenance robot to avoid catastrophe, or a compliance tool to function properly.) However, the very fact that AI bias is one of the few aspects of the AI age that politicians and regulators have the patience and affinity to grasp means that this is not a luxury that they will be able to enjoy indefinitely.

    New laws in the EU, California and New York will bring them back to the reality that these issues are being watched and discussed, and will drive both reputational damage and regulatory action. After all, it took AI for New York to take another look at those interview outcomes.

    "AI Bias": it's going to make the world a better place.

    PS One way to handle this is to get an AI Explainability Statement drawn up. It will help drive the right discovery, governance and regulatory action.
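    As a concrete illustration of what such an audit can involve, here is a minimal sketch, assuming nothing more than a table of applicants with a demographic column and a selected/not-selected outcome. It computes each group's selection rate and its ratio to the most-favoured group's rate, one common way of quantifying disparate impact; the column names and data below are hypothetical.

        import pandas as pd

        def selection_rate_audit(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
            """Per-group selection rates and impact ratios for a hiring-tool audit.
            df: one row per applicant; outcome_col is 1 if selected, 0 otherwise."""
            rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
            report = rates.to_frame()
            # Impact ratio: each group's rate relative to the highest-rate group.
            report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
            return report.sort_values("impact_ratio")

        # Illustrative (made-up) data
        applicants = pd.DataFrame({
            "gender": ["F", "M", "F", "M", "F", "M", "M", "F"],
            "advanced": [0, 1, 1, 1, 0, 1, 0, 0],
        })
        print(selection_rate_audit(applicants, "gender", "advanced"))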

  • How can AI be used to help identify surgical trays, instruments and implants in the OR?

    We are working with ORtelligence to help turn every equipment expert and every phone into a data scanner. With their patented computer-vision app, a simple photo is enough to instantly know which tray it is, what's in it and what's missing. This will have a significant impact on surgery: it will help reduce delays prior to and during procedures. A real breakthrough technology. https://bit.ly/3i52ony
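    A hedged sketch of the core idea: detect the instruments visible in a photo, then compare them against the expected manifest for the identified tray. detect_instruments and the manifest contents are hypothetical placeholders, not ORtelligence's actual implementation.

        from typing import Dict, List

        # Hypothetical manifest: which instruments a given tray should contain
        TRAY_MANIFESTS: Dict[str, List[str]] = {
            "hip_primary_tray": ["broach_handle", "acetabular_reamer", "femoral_trial"],
        }

        def detect_instruments(photo_path: str) -> List[str]:
            """Placeholder for a computer-vision model that returns the
            instrument labels it recognises in the photo."""
            raise NotImplementedError

        def audit_tray(photo_path: str, tray_id: str) -> Dict[str, List[str]]:
            """Compare detected instruments against the tray's expected contents."""
            expected = set(TRAY_MANIFESTS[tray_id])
            found = set(detect_instruments(photo_path))
            return {
                "present": sorted(expected & found),
                "missing": sorted(expected - found),
                "unexpected": sorted(found - expected),
            }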

  • What Your AI Says About You

    How an organisation approaches AI tells you a lot about them. AI is now being deployed on hundreds of use cases by, if studies are to be believed, tens of thousands of companies. So it's very likely that the organisation you work for, invest in, deal with or seek employment at has deployed, or is in the process of deploying, AI. So what can the deployment of AI tell you about the state of your firm?

    Firstly: do you have a proper strategy aligned with your business model? (I have written here about sustainable competitive advantage from AI.) Investment in AI should be built around a point of serious competitive advantage. It takes a concentration of market insight, data, talent and investment capital to build an AI system. It also takes time, management focus and risk appetite to get it right. Therefore where you are investing, and the rationale for it, should cut to the heart of your strategic focus: where the value really lies in an existing or emerging business model. If existing insight and data can be parlayed into a scalable, AI-centric, profitable business model then you have a winner; see Google search delivery and advertising or Amazon product delivery as examples. However, if AI is simply being deployed willy-nilly across multiple areas then you've probably not figured out what it takes to win. You are simply helping suppliers build depth in their area of competitive advantage.

    Secondly: how healthy is your (data) infrastructure? We recently spent some time with a bank that had invested mightily in data science and AI staff. They had several hundred people hard at work. Smart people, big salaries, long hours. Net result: one tool in production. This was not because they were not doing a good job. So what was going on? Well, have you ever seen a (typical legacy) bank's IT infrastructure on a chart? Decades of transformative acquisitions and strategic imperatives will have left a messy, scarred patchwork of systems, as if a plate of spaghetti had been thrown at the wall. And here legacy is destiny. Competitive advantage in AI comes from being in a place to do AI, not from the actual doing of it. If the data infrastructure is a mess then the job for talent is sorting through the mess, retro-engineering and data wrangling. This is not fun, so the best people won't work there and the returns to talent decline. Beware.

    Thirdly: how do you measure up ethically? Business ethics are increasingly seen in the context of AI. If we want to illustrate sexism we point to Amazon's recruitment tools; for racism, Google's image classifier problems; and for social challenges, any number of Facebook examples. The implied mathematical precision of algorithms shows systemic bias better than any number of anecdotes, however much they sum to the same. So the data that powers your AI shows what historic decisions have been made. And the approach to building new AI shows whether a firm values key attributes such as diversity, combatting bias and dealing with the potential social impact created by its business model. How you build governance, train your teams and think about providing transparency shows how you think about the future: what you aspire to be.

    Credit: Photo by Jovis Aloor from Unsplash

  • Five reasons why your AI needs an Explainability Statement

    "It is all a black box!" is just no longer good enough. If AI drives your business then you need to be able to explain what is going on.

    As Artificial Intelligence (AI) plays an increasingly important role in all of our lives, the need to explain what is going on in simple, user-friendly terms gets ever more urgent. Stakeholders, from media to regulators, are increasingly focused on ethical and legal concerns, whilst commentary about "black boxes" or "mutant algorithms" does not help. Especially not when people's life outcomes are at stake.

    A practical answer is to produce an AI Explainability Statement. This is a public-facing document providing information to end-users and consumers on why, when and how AI is being used. It provides transparency on data sourcing and tagging, how algorithms are trained, the processes in place to spot, and respond to, biases, and how governance works. In addition, it should show that the wider implications and impact of AI deployment have been thought through and reviewed. It sounds like (and can be) quite a lot of work. So why should you prepare an AI Explainability Statement?

    1. In Europe, it is expected. Under GDPR, fully automated decisions with legal or other significant effect need to be explainable. You need to be able to explain how, where and why you are using AI that impacts on citizens. Clearly this matters where legally privileged issues are being covered, but it is increasingly best practice everywhere.

    2. It will help you get your stuff together internally. Does your right hand know what your left hand is doing? Many organisations have not yet had the joined-up conversation between those doing the AI, those creating the data, those worrying about the regulations, those explaining it to customers and those ultimately responsible for good governance and oversight. Creating an AI Explainability Statement brings all these stakeholders together; you would be surprised what might have slipped between the cracks.

    3. It is good for your customers. Customers like to be treated as adults, especially if they are using your algorithms with their customers (because you are a B2B supplier). Not everyone is interested in the details, but most like to know that the information is there, especially before it turns into a crisis. (You might also appreciate that.)

    4. It may protect you from the law. Because this can turn into a crisis. Two sets of courts in Europe (for Uber and Ola in the Netherlands, and Deliveroo and Glovo in Italy) have already been clear that if your AI is going to impact on individuals (in these cases, their employment rights) then you had better be able to explain what is going on. These court cases are setting clear precedents.

    5. And this is going a lot further than just Europe. China, New York and California are all moving in the same legal direction. Transparency is at the heart of emerging regulation everywhere. Meanwhile, Europe is gearing up to introduce more AI regulation, and this will be based on the principle of enhanced transparency.

    There is a bonus…

    6. We can make it easy. At Best Practice AI, working with colleagues at Fountain Court and Simmons & Simmons, we have already created AI Explainability Statements which have been reviewed by the relevant UK regulator, the Information Commissioner's Office (ICO). If you want help making this happen, or just want to know more, then do get in contact.
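    To make the contents concrete, here is a minimal sketch of how the sections described above could be captured as a structured template. The field names are illustrative assumptions, not a prescribed ICO or Best Practice AI format.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class AIExplainabilityStatement:
            """Illustrative skeleton for a public-facing AI Explainability Statement."""
            system_name: str
            purpose: str                # why, when and how AI is being used
            affected_users: List[str]   # who the automated decisions impact
            data_sources: List[str]     # where data comes from and how it is tagged
            training_approach: str      # how the algorithms are trained and validated
            bias_controls: List[str]    # processes to spot and respond to biases
            governance: str             # oversight, escalation and review arrangements
            wider_impact: str           # broader implications that have been reviewed
            contact: str                # who end-users can contact with questions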

  • Our Reflections on 2019 World Economic Forum’s "Summer Davos"

    I had the pleasure of attending the World Economic Forum's (WEF) Annual Meeting of the New Champions in Dalian, China last week. This brought together 1,500 leaders from governments, civil society, business and academia on the topic of Leadership 4.0: succeeding in a new era of globalisation. Here are some of my reflections. https://www.linkedin.com/pulse/reflections-world-economic-forums-summer-davos-simon-greenman/

  • Remember that guy with a red flag walking in front of the first automobile? He's coming for AI

    AI will be to the 21st Century what the internal combustion engine was to the 20th. It's worth getting it right.

    Every year over 700,000 people are killed by cars; they are the leading killer of those aged between 15 and 29. Millions more suffer from poor health brought on by vehicle pollution. We have rebuilt our entire landscape around cars. One writer half-joked that an alien species visiting Earth and examining the geographic layout would assume that cars were the dominant species: cars allowed to roam free on motorways and autoroutes, with humans confined to their place on tiny sidewalks and occasional crossings. But the deal was not a hard one to make. Societies that embraced the automobile dominated the 20th Century, with the American Century almost synonymous with Henry Ford's revolutionary new approach to building vehicles. Cars brought freedom, economic and social, and enabled supply chains of goods and people that transformed living standards and life expectancy.

    Artificial Intelligence (AI), whatever the fantasies, is a long way from finding its Ford. Industrially, we are still in the early days: ex-coach builders beavering away with this fancy new propulsion system. But as AI investment accelerates this will change. However, the lessons from one age to another are clear. AI is rather like the internal combustion engine in that it is a General Purpose Technology (GPT). It can and will be used for many things. Whereas the internal combustion engine turned wheels, or propellers, AI is a faster and better way to turn algorithms, to move and change data. In the same way that a car is not an improved horse, AI will not be an improved human. It will be a tool used for very specific tasks.

    When humans meet technology, humans are more flexible. We will shape ourselves to the technology, assuming that the economic and societal value justifies it. This can be by choice, or coerced. This means that the terms on which AI expands will define not only our lives but those of our children and grandchildren too. Those terms are being set now.

    Regulation is happening and we need to make sure that it meets human requirements. There is a hard balance to strike. We want seat belts, crumple zones and speed limits. Catalytic converters and clear road signs and rules are positive. Men walking in front of the car with a red flag somewhat undermine the very point of the new technology. Note that this is precisely what some current regulators are suggesting when they imply that all algorithms need to have humans in the loop for decisions.

    That's why it's important that industry doubles down on tools like AI Explainability Statements and that regulators move with wisdom on multiple regulatory fronts. Above all we must all engage in what will be a defining issue of the coming decades; it behoves us all to be better educated on the topic of AI. Otherwise those who already are will define what comes next.

  • Best Practice AI at World Economic Forum's Global AI Council Meeting with DCMS minister Jeremy Wright

    Honoured that Simon Greenman, Co-Founder and Partner at Best Practice AI, is on the Global AI Council of the World Economic Forum. Minister Jeremy Wright of the UK Department for Digital, Culture, Media and Sport chaired the council's meeting on Tuesday the 11th of June to discuss its directions and priorities.

  • We sent our Founder, Simon Greenman, to Number 10 to Talk AI

    Very proud that our firm Best Practice AI was invited to participate in a roundtable discussion at Number 10 on AI and digital trade with George Hollingbery, Minister of State for Trade Policy.
