The biggest breakthrough in AI in recent months has been the rise of Large Language Models (LLMs), not least ChatGPT. These are the basis of generative AI, i.e. the creative, general question-answering form of AI which is taking off in a big way.
The beauty of generative AI is that from simple instructions, i.e. prompts, it can answer complex questions in the format you like, and ultimately it will become simple to create great-looking visual content from simple AI instructions.
There are some superb examples of users taking 30 minutes to use AI to generate a script, narrate it, create avatars, and then animate the avatars reciting the script. These are then refined with richer, more natural facial expressions and appealing audio-visual effects to produce effective, decent-quality videos. Something unimaginable this time a year ago.
In short order this will become the norm.
The great benefit of generative AI is that it is also able to answer complex questions and pitch the answers at the level of the audience, e.g. explain something to a five-year-old, or provide an expert's detailed evaluation of a complex matter.
The only downside here is that generative AI, by its very nature of being creative, will make things up. Ultimately, large language models work on quite simple principles of word association, predicting the best next word to write based on the billions or trillions of words of structured content they were trained on.
This has led to ChatGPT fabricating citations attributed to the Guardian and the New York Times.
AI has a useful concept called temperature; ChatGPT's own definition is:
In the context of AI, specifically in machine learning and natural language processing, "Temperature" is a parameter in algorithms used to control the randomness of predictions.
In models such as GPT-3 or ChatGPT, the Temperature parameter influences the model’s output when generating text. A high Temperature value (closer to 1) makes the output more diverse and random, while a lower Temperature (closer to 0) makes the model’s responses more deterministic and focused on the most likely outcome.
In other words, Temperature adjusts the probability distribution from which the AI model picks its next action. It’s a measure of creativity or randomness in the AI’s output.
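That adjustment of the probability distribution can be sketched in a few lines of code. The snippet below is a minimal illustration, not how any particular model is implemented: the word list and logit values are made up for the example, and real LLMs work over vocabularies of tens of thousands of tokens.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalise into probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    A temperature of exactly 0 is handled in practice as greedy
    decoding, i.e. always picking the single highest-scoring word."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word logits for the candidates ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.1]

cool = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 1.5)  # diverse, more random
```

With a low temperature almost all of the probability mass lands on "cat", the most likely word; with a high temperature the model still favours "cat" but will regularly sample "dog" or "fish" instead, which is exactly where the creativity (and the hallucination risk) comes from.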
This means that the way the average user interacts with AIs such as ChatGPT is all but guaranteed to produce hallucinated citations, so the output should never be treated as the absolute truth on any matter.
The flip side is that, having been trained on trillions of bits of data, it is going to be more accurate than most people on most subjects. It can act as a great reference point for nearly all subject matters, as long as you take everything with a pinch of salt and learn to sense-check the output.
There is an opportunity for media organisations to provide 'less creative' AI services, with the temperature set to zero, to minimise the chance of hallucinations.
The big danger with AI will come when we become heavily reliant on it for day-to-day activities, and it starts to stray, or is manipulated into working against us. We have to make sure we have safeguards in place to protect against this, and a whole plethora of services will likely evolve, with AIs sense-checking each other, safeguarding against manipulation, and providing agents to look out for us.
Just be aware that any time you're using AI, you're not always going to be seeing the truth, the whole truth and nothing but the truth. Unless the temperature is dialled down to zero, you're likely to be getting some original creations in there as well.
© Affino 2024