Webinar Overview
Abstract
The ability of Large Language Models (LLMs) to understand text prompts, to generate rich and grammatically well-structured content, and to maintain the natural flow of machine-human conversation shows promise in many fields. Individual users and organisations can easily interact with recently released proprietary LLMs, or may build their own models by fine-tuning pre-trained open-source implementations on their own in-house data.
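As a rough illustration of the fine-tuning route mentioned above, the sketch below uses the open-source Hugging Face Transformers and Datasets libraries. The checkpoint name (gpt2), the corpus filename, and all hyperparameters are placeholder assumptions for illustration only, not recommendations from this webinar.

```python
# A minimal sketch of fine-tuning an open-source causal language model
# on in-house text data. All names and settings below are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder: any open-source causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical in-house corpus: one free-text document per line.
dataset = load_dataset("text", data_files={"train": "in_house_corpus.txt"})

def tokenize(batch):
    # Truncate/pad each document to a fixed length; for standard causal
    # LM fine-tuning the labels simply mirror the inputs.
    tokens = tokenizer(batch["text"], truncation=True,
                       padding="max_length", max_length=128)
    tokens["labels"] = [ids.copy() for ids in tokens["input_ids"]]
    return tokens

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
)
trainer.train()
```

In practice, the same workflow applies to larger open-source checkpoints; the sketch simply omits refinements such as masking padded positions out of the loss.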
However, despite the ongoing hype and the exponential growth in startups offering LLM-based AI products and services, LLMs are still far from being ideal AI solutions. They are known for their “hallucinations”, biases, and their tendency to assert “fake news” as fact. They are also blamed for a growing problem of plagiarism (or even cheating) in education, and it is feared that their widespread application may threaten the very existence of many jobs, e.g. teachers, software engineers, data analysts, and creative-industry workers such as Hollywood scriptwriters. Furthermore, many commentators and researchers argue that LLMs may be used to generate overwhelming amounts of conspiracy theories, which could flood the Internet and cause a global pandemic of misinformation and distrust.
During this open-to-public webinar, we will discuss the general trustworthiness of recently released LLMs. Can we trust them then? Or are they simply reproducing human prejudices and biases present in the data they are trained on? What is the future of LLMs and will they ever be able to tell us the “truth”?
The webinar will be run and moderated by Simon Walkowiak – director at Mind Project Limited and a Ph.D. researcher in Artificial Intelligence at the Bartlett Centre for Advanced Spatial Analysis (University College London) and the Alan Turing Institute in London.
Resources and further reading
This section lists selected online resources and further reading relevant to the topic of this webinar:
- (Editorial; Open Access) Prepare for truly useful large language models. Nature Biomedical Engineering, 7, 85–86 (2023).
- (Academic article; Open Access) Sobieszek, A. & Price, T. (2022). Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Minds & Machines, 32, 341–364.
- (Academic article; Open Access) De Angelis, L. et al. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health, 11.
- “ChatGPT and large language models: what’s the risk?” – a blog article by the National Cyber Security Centre (UK), published on 14th of March 2023.
- “Artificial Intelligence And Extremism: The Threat Of Language Models For Propaganda Purposes.” – a blog article by the Centre for Research and Evidence on Security Threats (UK), published on 25th of October 2022.
- “The criminal use of ChatGPT – a cautionary tale about large language models.” – a blog article by Europol, published on 27th of March 2023.
- “From Boring And Safe To Exciting And Dangerous: Why Large Language Models Need To Be Regulated.” – an article by Forbes, published on 22nd of March 2023.
- “‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence.” – an article briefing the interview with Sam Altman (CEO of OpenAI) by The Guardian, published on 17th of March 2023.
- (Limited access) “ChatGPT ‘hallucinates.’ Some researchers worry it isn’t fixable.” – an article by The Washington Post, published on 30th of May 2023.
- “How Large Language Models Reflect Human Judgment.” – an article in Harvard Business Review, published on 12th of June 2023.
- “Trust large language models at your own peril.” – a brief commentary by MIT Technology Review, published on 22nd of November 2022.
Who is this webinar for?
This webinar is a short, 1-hour online event recommended to those interested in learning about recent developments in Large Language Models, their applications in different fields, and their consequences and effects on various areas of day-to-day human life: social, business, political, economic, etc. The webinar will specifically address the trustworthiness of LLMs and will present an overview of the ongoing research on reducing the rate of (or completely eliminating) the ‘hallucinations’ commonly displayed by LLMs. The event will be of interest to those who wish to learn about new developments in data science, technology and AI, and those who explore the overlapping areas and intersections of AI, politics, and social sciences.
Webinar delivery
This event is part of the “AI-Friendly” series of live webinars open to the general public and organised by Mind Project Ltd. The webinar is completely free of charge to attend; however, prior registration is required. Once registered for the event, you will receive an email with Joining Instructions. You can register more than one person for this webinar – each registered attendee will receive a separate email explaining how to join the event. Our open-to-public webinars are run via the Microsoft Teams application.
The webinar will run for approximately 1 hour. It will be run live and moderated by Mind Project employees; however, external guests may be invited to take part in a panel session or as interviewees. As an attendee, you may ask questions, discuss the topic, and interact with other participants of this webinar. You can also message us during and after the webinar using the Chat functionality within the Microsoft Teams application.
This webinar will be recorded, but access to the recording will be restricted to registered attendees only. As the webinar will be recorded, you will enter the meeting with your camera and microphone switched off to protect your privacy; however, feel free to unmute yourself and turn your camera on when you ask questions or participate in the discussion.
Webinar date: Monday, 13th of November 2023, 15:30 – 16:30 (London, UK time)
Deadline for registrations: Monday, 13th of November 2023, 15:00 (London, UK time)