Search Results
Blog Posts (18)
- ChatGPT’s threat to Google is that it makes search worse, not better
Consumer demand switching is less of a risk than the degradation of content supply.

Widespread excitement about ChatGPT (an easily accessible iteration of OpenAI's GPT-3 text generation tool) is now shading into a debate about what Generative AI might lead to. One obvious potential target is the seeming risk to Google's search engine quasi-monopoly. The story goes: where Google provides links (of mixed utility), ChatGPT provides answers. Enter a question and ChatGPT provides a summary of what it can find on the Internet, often readily fashioned into an argument in the style and format that the user wants. Compared with the advertising-laden, SEO-gamed mass of links that Google offers, this can be a compelling alternative. (Leave aside some current limitations — for example, ChatGPT's training data was essentially a frozen web scrape, so it offers "limited knowledge of world and events after 2021". These will be resolved.)

It is not hard to see how this might affect the competitiveness of Google's consumer experience. This is a good thing. Google's original ambition was to minimise time on site — literally, the speed at which the user could be sent happily on their way was key. Then management discovered monetisation and how to optimise for advertising revenue. The result was a drive towards increased website dwell time (e.g. providing an answer on site) and paid links (the huge majority of links above the fold for most revenue-generating searches). Most Google front pages are now a mix of advertising and reformatted Wikipedia or structured directory data. And what sits behind the pages at the top of the stack is Search Engine Optimisation, focused on gaming the algorithm's ever-shifting demands. All this could do with a shake-up.

But Google is a smart organisation. There is no reason why it cannot reformat its proposition. Access to Generative AI technology is not a competitive advantage against an AI behemoth like Google, which already offers multiple tools with functionality similar to OpenAI's. If the web search proposition shifts to a chatbot-powered work-assistant approach, then Google can deliver this — and it will still find ways to extract advertising revenue so long as it protects its user market share.

The real risk, I suspect, is rather more insidious. And it's not good news for the rest of us without a direct interest in Google's corporate well-being.

Barriers to content creation are plummeting. Essays can be generated in seconds, books in days, art portfolios in a week. Speed will massively increase volumes of content, increasingly targeted to maximise interaction with distribution algorithms. Content creation time ceases to be a bottleneck — human attention becomes more and more valuable. The automated looks set to crowd out the human-generated.

So what? There are several risks — but one of the most immediately pertinent is that, to create these tools, generative AI companies are scraping content from the sum total of human knowledge as discoverable on the web. This material contains multiple mistakes and bad information seeded by malicious actors, as well as inbuilt bias in the form of missing or unbalanced content. These biases permeate the tools built on it. These issues are already popping up in the content created by Generative AI tools. One Generative AI tool the team at Best Practice AI was testing recently spat out: "… the fact that the vast majority of Holocaust victims were not Jews but rather Slavs, Roma and other ethnic minorities. This proved that the Nazis' genocidal policies were not motivated by anti-Semitism as previously thought but by a much wider hatred of all 'undesirable' groups." The Holocaust, also known as the Shoah, was literally the genocide of European Jews during World War II — as distinct from the Nazis' many other heinous acts. (Note this was not ChatGPT, which has some obvious guard rails against such mistakes built in.)

Beyond this, there is a tendency for LLMs to "hallucinate". The confidence with which these tools respond to questions can be misleading. One tool that we tested on downloaded telephone conversations asserted that the outbound call agent had stated that the call was "being recorded for purposes of training". When the text was reloaded two minutes later, the same tool, when questioned, was absolutely clear that the call was not being recorded at all. Stack Overflow, the coding Q&A site, has already banned Generative AI answers because of their high error rate.

Now, if this is the material that is proliferating at computer speed across the Internet, the emerging challenge for Google is clear. And it will intensify as current editorial and fact-checking processes move at human speed. Not only will a lie have sped around the world before the truth has got its boots on — the lie may well have already been baked into the next generation of Generative AI tools as they race to update and upgrade.

And this is before malicious actors really get to work: generating complex webs of content, back-up information and official-looking sites for cross-reference; seeding content across multiple domains, services, communities and authors. The risk is that we will no longer be able to identify and put warning signs around the "bad" (COVID vaccine denier sites, for example) but rather end up forced to retreat to "good" sites. That may work in specific and limited domains like health information or government statistics, but for an organisation like Google, dedicated to unveiling the long tail of informative sites across the rich multiverses of human experience and interest, this will be a significant challenge.

That the web could be about to close in — potentially becoming smaller, less diverse, less interesting — just as we are about to witness an explosion in content creation is a deeply ironic challenge. It goes to the heart of democracy, education, culture and the liberal world's competitive advantage in the free exchange of information and ideas. The threat to Google is a threat to all of us. Not something that I ever thought I'd write.
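To make the hallucination problem concrete, one cheap probe is to put the same factual question to a tool several times and see whether the answers agree. A minimal sketch in Python, where `query_model` is a hypothetical stand-in for whichever generative AI API is under test (no specific vendor's SDK is assumed):

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the generative AI tool under test."""
    raise NotImplementedError("wire this up to the vendor API being evaluated")

def consistency_check(prompt: str, runs: int = 5) -> Counter:
    """Ask the same question several times and tally the distinct answers.

    A tool that answers confidently but inconsistently (e.g. "the call was
    recorded" on one run, "it was not" on the next) produces more than one
    entry in the tally - a cheap red flag for hallucination.
    """
    return Counter(query_model(prompt).strip().lower() for _ in range(runs))

# Usage (illustrative): flag the prompt for human review if the runs disagree.
# tally = consistency_check("Did the agent say the call was being recorded?")
# if len(tally) > 1:
#     print("Inconsistent answers - review manually:", tally)
```

This only catches inconsistency, not confidently repeated falsehoods, but it is the kind of check that can run at the same computer speed as the content it polices.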
- Why “AI Bias” is a good thing
When data-driven algorithms tell you that you have a problem that cannot just be put down to "a few bad apples", the opportunity is there for a healthy conversation. Should you choose to take it.

Every year millions of job interviews take place in New York. It's a fantastic place that everyone should grab the chance to work in if they can. But it doesn't take a deep understanding of US society to know that prejudice has long played a role in who gets what jobs. Prejudice against different waves of immigrants — the Irish! the Italians! the Jews! the Costa Ricans! — good old-fashioned sexism, or bias on grounds of sexuality or skin colour have all had an impact on those job interviews. New York prides itself on being a place where ambition and ability matter most, and those prejudices have — largely — been reduced, if not eradicated.

And from April, New York job interviews will be audited for bias. Or, more precisely, the automated tools that support the job interview process, which carry out anything from CV reading and sorting to delivering interviews via video. The audit details are still being worked out, but the purpose is clear: AI tools need to be checked for bias.

What has driven this? AI tools typically use historic data — in these cases, on recruitment decisions — to drive their decision algorithms. This raises the issue that historic decisions have negatively impacted certain categories of people, for example on grounds of gender or ethnicity. Most famously, Amazon had to withdraw its recruitment algorithm after it became clear that a historic bias towards white or Asian male engineers meant that it marked female applicants far more harshly than their male counterparts for equivalent CV achievements. No amount of code hacking or data weighting could cover for the embarrassment of the historic record.

AI tools provide a mirror to the real world. Examples abound. Google's image library was so racially skewed that African-American politicians were mislabelled by its image categoriser as gorillas. Chatbots have repeatedly juddered towards racism or sexism, given the corpus of Internet chunter they were trained on. These issues have real-world impacts — for example in healthcare, where cancer-hunting tools deliver different (potentially life-threatening) results based on skin pigmentation.

These events tend to be held up as calamities and signs of the great dangers that AI will impose on the world. To my mind they are one of the great blessings of the AI age. Should we choose to act on them. Faced with hard mathematical evidence of issues from AI tools, the conversation about what to do about them in the real world will be part of the discourse of the very technical and managerial elite who are building the 21st Century. Clearly this may not always be top of their agenda — although bias undermines the very AI tools that will build their companies. (Data bias could equally apply to missing images that might, for example, weaken the ability of a quality-control camera to spot broken widgets, of an automated maintenance robot to avoid catastrophe, or of a compliance tool to function properly.) However, the very fact that AI bias is one of the few aspects of the AI age that politicians and regulators have the patience and affinity to grasp means that this is not a luxury they will be able to enjoy indefinitely.

New laws — in the EU, California or New York — will bring them back to the reality that these issues are being watched and discussed, and will drive both reputational damage and regulatory action. After all, it took AI for New York to take another look at those interview outcomes. "AI Bias" — it's going to make the world a better place.

PS One way to handle this is to get an AI Explainability Statement drawn up. It will help drive the right discovery, governance and regulatory action.
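For a flavour of what such an audit can involve: one widely used screen in US employment practice is the "four-fifths rule", which flags any group selected at less than 80% of the best-performing group's rate. A minimal sketch with invented numbers; the New York audits themselves are more involved than this:

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (number selected, number of applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups selected at below `threshold` of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()
            if rate / best < threshold}

# Illustrative numbers only, not real hiring data:
audit = {"group_a": (50, 200), "group_b": (20, 180)}
print(four_fifths_flags(audit))  # {'group_b': 0.444...}: well below the 0.8 screen
```

The arithmetic is trivial; the hard part, as the post argues, is what an organisation chooses to do once the ratio is printed.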
- Best Practice AI Founder Tim Gordon Visits Number 10
Our founder, Tim Gordon, was recently invited to Number 10 Downing Street to share his views on the UK's approach to Artificial Intelligence. We are honoured to have been asked for our views and look forward to helping the UK become a world leader in AI and ethical AI.
Other Pages (1820)
- AI Use Case | Forecast sales
Description: Forecast sales based on modelling of new customer conversions, existing customer revenues, etc.
Function: Sales / Sales Management
Benefits: Operational Support - Sales forecasting; Operational Support - Production forecasting
Case Studies: Gousto, a British meal kit retailer, grows customer base by 700% by using machine learning to forecast demand and personalise recommendations
Industry:
Data Sets: Structured / Semi-structured; Time series
AI Technologies: Machine Learning (ML); ML Task - Prediction - Regression
Potential Vendors:
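As a sketch of the regression task named above (ML Task - Prediction - Regression), the snippet below fits monthly sales against new customer conversions and existing customer revenue. scikit-learn is an assumed tool choice and all figures are invented; this is not the method from the Gousto case study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative monthly history (invented numbers):
# features = [new customer conversions, existing customer revenue]
X = np.array([[120, 50_000], [150, 52_000], [170, 55_000], [200, 58_000]])
y = np.array([61_000, 65_500, 69_000, 74_000])  # total sales per month

model = LinearRegression().fit(X, y)

# Forecast next month from expected conversions and existing-customer revenue.
print(model.predict(np.array([[220, 60_000]])))
```

A production forecaster would add seasonality and more history (hence the Time series data set listed above), but the input/output shape is the same.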
- AI Use Case | Deliver personalised, real time analytics feed according to individual and team requirements
Description: Personalised data feeds, potentially structured on a team / functional / hierarchical level, will help speed up processes and decision-making. Differing organisational cultures around information-sharing will be a key issue here.
Function: Strategy / Analytics
Benefits: Data - Data visualisation; Risk reduction - Real time awareness
Case Studies:
Industry:
Data Sets: Structured / Semi-structured
AI Technologies:
Potential Vendors:
- AI Use Case | Monitor customer sentiment through analysing social media
Description: Model, determine and monitor the sentiment of customers across touch-points to identify general trends of satisfaction.
Function: Marketing / Customer Management
Benefits: Revenue - Customer retention; Revenue - Churn risk reduction; Data - Data enhancement
Case Studies: Wunder2, a British cosmetics startup, analysed 500,000 customer Facebook posts with natural language processing capabilities to better understand customer sentiment and concerns; Adore Me generates product insights by determining customer sentiment with 92% accuracy based on natural language processing analysis of thousands of reviews; Facebook attempts to assess the suicide risk of its users based on their public posts on the platform using AI
Industry:
Data Sets: Structured / Semi-structured; Text; Audio; Images; Video
AI Technologies: Product Type - NLP - Text Sentiment Analysis; Product Type - NLP - Natural Language Generation; Product Type - Natural Language Processing (NLP)
Potential Vendors: IBM Watson; Yotpo
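As a sketch of the text sentiment analysis task named above, the snippet below classifies a handful of invented posts and tallies the overall trend. The open-source Hugging Face transformers pipeline is an assumed tool choice here, not one of the vendors listed:

```python
from collections import Counter
from transformers import pipeline

# Off-the-shelf sentiment classifier (downloads a default model on first use).
classifier = pipeline("sentiment-analysis")

# Invented sample posts standing in for scraped social media content.
posts = [
    "Love the new formula, my skin has never looked better!",
    "Order arrived two weeks late and support never replied.",
    "Decent product, but the packaging was damaged.",
]

results = classifier(posts)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}, ...]
print(Counter(r["label"] for r in results))  # rough satisfaction trend
```

Aggregating the labels over time, per product or per touch-point, is what turns the raw classifier output into the churn-risk signal this use case describes.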