We have learned a lot from running a host of prototypes over the past year, both in-house and with clients. We also learn every day from using and getting feedback on Affino's Support AI, which actively helps both the team and the Affino community learn and figure things out.
Expert AIs are held to a much higher bar than article content, since they do not provide context in the same way. If you read an article you can see when it was published and who the author is, which gives you plenty of context on which to base your judgement of the content.
AIs don't have that; you expect them to have a much better understanding, and if they provide an answer you expect it to be more accurate and to have taken much of the context on board by itself, much like a human would.
To get the best results you therefore need to be able to refine the AI experience and increase the quality of content to the level your audience is happy with.
We have added a number of levers to help you tune your Expert AI Services, and over time we expect to add more as things become more nuanced and we learn how to refine them better.
These settings sit on the AI Profile, which determines how each Expert AI Service works and interacts with its audience.
Note that some of the terms used here are general AI terms and we are not looking to go into detail on all of them, so it is worth studying LLMs more broadly to build up your understanding of what is possible.
Here are the key levers we have in play right now, and we have a number lined up for future updates.
User Role
You can manage the User Role right from the live chat interface. This is by far the most useful setting: it is several paragraphs of instructions which define how the AI should interact with the audience, what role it plays, what content its answers should be based on and the tone it should use.
You can dramatically change the tone and focus of the service, from instructing all answers to be poetic and end in a haiku, to keeping them short and to the point. You can specify the age group or level of expertise expected of the audience. You can also determine how flexible the answers should be, from being based purely on the content you provide to allowing broader responses. You can instruct it to exclude profanities, to answer in different languages and much more.
It is hard to overstate the importance of the User Role, since it defines how your AI works and how it addresses every question.
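To make the idea concrete, here is a minimal, hypothetical sketch of how a User Role typically behaves behind the scenes: it acts as a system-style instruction that accompanies every question. The wording and function below are illustrative examples only, not Affino's actual implementation.

    # Illustrative only: a User Role behaves like a system-style instruction
    # that accompanies every question. The wording below is a made-up example.
    USER_ROLE = (
        "You are a friendly support expert for a media and events business. "
        "Base every answer only on the reference content provided. "
        "Keep answers concise, avoid profanity, and assume the reader is a "
        "busy professional rather than a developer. If the content does not "
        "cover the question, say so instead of guessing."
    )

    def build_messages(question: str, reference_content: str) -> list[dict]:
        """Combine the User Role, the retrieved content and the audience question."""
        return [
            {"role": "system", "content": USER_ROLE},
            {"role": "user", "content": f"Reference content:\n{reference_content}\n\nQuestion: {question}"},
        ]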
If you are not getting the best answers to your questions, and in particular if you can get an answer when a question is phrased one way but not another, then you need to do some tuning. The single biggest factor when it comes to tuning is Distance.
You will often find that adjusting the Distance alone takes your AI from being poor at the job to being excellent, so start with that once you have refined your User Role.
Distance
Distance is how close to or far from an exact match you allow results retrieved from the vector (i.e. the AI) database to be. If you have a low volume of content and are happy to be roughly right, use a higher number; if you have a higher volume of content and only want exact answers, lower the Distance. A good starting point is 1.2, which you can then increase or decrease as your content and the LLMs evolve.
Note that the Distance setting is particularly important to adjust over time as you gain more content, and as you want broader or tighter answers.
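As a rough illustration of what the Distance threshold does (the titles and scores below are invented), each result returned from the vector database carries a distance score, and only results within the threshold are passed on as context:

    # Illustrative sketch of a Distance threshold filtering vector search results.
    # Titles and scores are invented; Affino applies this for you via the setting.
    DISTANCE_THRESHOLD = 1.2  # lower = only very close matches, higher = broader matches

    results = [
        ("How to set up Ad Campaigns", 0.62),
        ("Creating Display Banners", 0.95),
        ("Annual events calendar", 1.38),
        ("Office relocation announcement", 1.71),
    ]

    # Only results within the threshold are used as context for the answer
    relevant = [(title, score) for title, score in results if score <= DISTANCE_THRESHOLD]
    print(relevant)  # keeps only the first two entries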
Number Of AI Data Results
This is the number of data points to reference when pulling data from the vector database. The more data you pull in, the better the accuracy but the slower the result. In some instances, such as a directory lookup, you might want to set it higher. The default is 20, the minimum is 10 and the maximum is 50, but we imagine this will change a great deal as use cases require a broader range and the context windows of LLMs increase.
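A rough sketch of the effect of this setting, assuming (as above) that each retrieved data point carries a distance score; the limits shown simply mirror the 10/20/50 values described:

    # Illustrative: cap how many of the closest data points are sent to the LLM.
    DEFAULT_RESULTS, MIN_RESULTS, MAX_RESULTS = 20, 10, 50

    def select_context(results: list[tuple[str, float]], num_results: int = DEFAULT_RESULTS):
        """Return the N closest matches (smallest distance), clamped to the allowed range."""
        n = max(MIN_RESULTS, min(num_results, MAX_RESULTS))
        return sorted(results, key=lambda r: r[1])[:n]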
Enable Recent Content Filter
This is a bit of a hack but, as LLMs are not currently good at making time-based distinctions between old content and new, we have implemented it as an option. If you are providing news AIs and want to give the audience the ability to search only within the latest content, enable this setting and set the right timeframe for your service.
Recent Content Time Frame
If you are a high-volume, fast-moving news service then recent content could mean the last few minutes or hours. If you are more of a business service, recent content could be anything within the last year or two.
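A minimal sketch of the idea, with invented field names and sample items; only content published inside the chosen window is considered:

    from datetime import datetime, timedelta, timezone

    # Illustrative: when the recent content filter is enabled, only items published
    # within the chosen timeframe are considered. Field names here are invented.
    all_items = [
        {"title": "Breaking market update", "published_at": datetime(2024, 11, 1, tzinfo=timezone.utc)},
        {"title": "Archive feature", "published_at": datetime(2019, 3, 12, tzinfo=timezone.utc)},
    ]

    def filter_recent(items: list[dict], time_frame: timedelta) -> list[dict]:
        cutoff = datetime.now(timezone.utc) - time_frame
        return [item for item in items if item["published_at"] >= cutoff]

    breaking = filter_recent(all_items, timedelta(hours=6))      # fast news service
    yearly = filter_recent(all_items, timedelta(days=365 * 2))   # business service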
Source Article Sections
Select the source sections for your article content. These can be existing content sections within Affino, e.g. news, directory, events, products, insight, analysis and much more.
Article Time Frame
Define the timeframe to pull content in from. If the AI is a historical archive you will want all of it; if old content is redundant or gets in the way of new content, set a shorter timeframe. Affino will then automatically remove old content from the AI.
Article Content Security Rights
If you have specific content you want included, perhaps in a focused AI or a premium service AI, simply select the appropriate content level for the AI. Note that you will want to secure the AI itself to the right level to make sure you are not making premium or confidential services available to the public.
Include Unsecured Content
If you want to include both secured and unsecured content then select this option. If you don't select any security filter, all article content is included.
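Purely as a sketch of the idea (Affino's actual security rights model is richer than a single numeric level), the combined effect of the security filter and this option looks roughly like this:

    # Illustrative only: a numeric "level" stands in for Affino's security rights.
    def is_included(item_level: int | None, ai_level: int, include_unsecured: bool) -> bool:
        if item_level is None:             # unsecured / public content
            return include_unsecured
        return item_level <= ai_level      # secured content up to the AI's own level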
Exclude Sponsored Articles
So much content these days is sponsored, and it is extremely jarring to have it in AIs, e.g. You MUST use this hotel or airline or buy this service. That might be appropriate for a promotional piece, but when it comes from your AI service it undermines the whole experience. Select this option to exclude any content tagged as promotional.
Max Topics Indexed
This is a crucial dial. The more topics you index against, the more diluted each piece of content becomes, so for insight and news bots you will want to keep this low. For directories, conversely, you will want it high: if a cruise operator travels to 150 countries and therefore has over 150 topics assigned to its directory entry, you will want all of those included in the directory content for the AI service to work.
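As a sketch (the function and numbers are invented), the dial simply caps how many of an item's topics make it into the index:

    # Illustrative: Max Topics Indexed caps how many topic tags are indexed per item.
    def topics_to_index(assigned_topics: list[str], max_topics: int) -> list[str]:
        return assigned_topics[:max_topics]

    # Keep news and insight content focused with a low cap...
    news_topics = topics_to_index(["AI", "Publishing", "Monetisation", "Events"], max_topics=3)

    # ...but let a directory entry keep all of its topics, e.g. 150 countries.
    cruise_countries = [f"Country {i}" for i in range(1, 151)]
    directory_topics = topics_to_index(cruise_countries, max_topics=200)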
Forum (FAQ Forum)
Associating an FAQ forum with each AI, so you can train it and create bridging content, is an absolute must. This is the easiest way to refine your AI services so they provide the best and most appropriate responses to the questions being asked.
There are many different ways that AIs can be caught out, and you will want to refine the answers they provide continuously to ensure the best overall audience experience.
Sometimes this means adding overview FAQs, introductions to a topic, glossary FAQs with useful definitions, or corrective FAQs which refine responses that might otherwise be incomplete or misleading.
That said, it is often good to go back to the original articles and, where possible, update them to provide more contemporary context.
Note that some AIs might be populated entirely from forum content. We have two in Affino which are 100% forum based, and most are influenced by forum content in addition to article content.
Forum Time Frame
If you are worried about some of the Forum content becoming redundant then set this to automatically remove old content from the AI.
Forum Content Security Rights
As with the Article Content Security Rights, this locks the forum content used by the AI down to the security level you want.
One area where we are seeing real issues arise is when the source content is not prepared in the right way to begin with. A key example is ingesting transcripts in raw format and expecting the AI to give the best answers based on them. This simply does not happen.
Always Prepare Content before Importing
You should always run transcripts through an AI first to prepare them for ingestion. Simply ask your preferred AI to rework the text so that it is in the best format for the type of AI you are creating. For example, you can have it strip out all the superfluous text, all the back-and-forth chat and introductions, and focus on the underlying subject matter in an information-dense way.
You can then import that in bulk into Affino, or take it a step further and have the AI generate the content for an article with Title, Teaser, Intro, Body text and so on; you could even generate an image for the article. You can then import these, or manually create the articles (or Forum Posts) you want to use as the basis of the Expert AI.
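As a sketch of the preparation step, here is one way to do it using the OpenAI Python client purely as an example; any capable LLM will do, and the model name and prompt wording below are illustrative rather than a recommendation:

    from openai import OpenAI  # example client; any capable LLM works for this step

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    CLEANUP_PROMPT = (
        "Rework the following raw transcript into information-dense reference text. "
        "Strip greetings, small talk and filler, keep every fact, figure and definition, "
        "and organise the result under clear headings so it is ready for ingestion."
    )

    def prepare_transcript(raw_transcript: str) -> str:
        """Ask the LLM to turn a raw transcript into clean, ingestible copy."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": CLEANUP_PROMPT},
                {"role": "user", "content": raw_transcript},
            ],
        )
        return response.choices[0].message.content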
Note that if you are still not getting the right results, it means your content is lacking in detail or accuracy. Quite often you just need a small amount of content to bring everything together, especially if you're using terms, jargon or nomenclature specific to your field.
This is easy to fix: our preferred approach is simply to create one or more articles to act as glossaries, or better yet to create relevant FAQs to fill the gaps.
An example in the case of Affino is that initially when asked about Ads, Affino’s support AI started to talk about how to use Google Ads with Affino, as opposed to Affino’s native Ad Services. This is because we use terms such as Accounts, Advertisers, Ad Campaigns, Creatives, Banners, Themes etc. when referring to Affino’s Ad serving. One short FAQ was all it took to resolve all this, highlight the benefits of Affino’s native ad service and tie everything together.
Make sure you set up regular reviews of the questions being asked and the answers being given, using the AI Report. We do this daily for some of our AIs and weekly for others. It is important to pick up on the types of questions people are asking and make sure that our AIs answer those questions effectively. We then regularly create fresh FAQs, new content and guides to address them.
Tip - Make sure you use the Debugging mode to see exactly which articles or FAQs are being used to generate the response.
You can now also send anyone a link to a question and its answer, complete with an updated AI response.
Gen 4 Affino AI brings a number of nice new capabilities here, further improving the reporting and speeding up the process of refining responses with the best content.
If you want to know more about this, or any other insight, event, or service from Affino, then either email us at engage@affino.com or call us on +44 (0)203 603 3155.
We can also contact you: simply click here and let us know your preference and when to contact you.
Or you can contact Markus Karlsson, our CEO, directly at markus.karlsson@affino.com.