So much is happening with AI that is groundbreaking these days that I feel it is a good time to share our current thinking. With our upcoming Affino Innovation Briefing, our thoughts are very much on the coming year and how Affino’s evolution is shaping up, and AI is a big part of that.
For the past couple of years there has been quite a lot of pushback within the Affino community against the time and effort we have spent on core AI R&D and on launching Affino’s Expert AI services. That is now changing as Affino users are starting to see just how powerful it is and the multitude of AI services they can launch for their audiences and in-house teams. Whilst we deliver hundreds of feature updates each year, not least major updates like the upcoming Commerce release, it is inevitable that AI will increasingly be leveraged across the board in the Affino SaaS.
We advise our clients that AI is now fundamental to all software development, and that in a world where a simple AI prompt can create value equivalent to a £100,000 investment in software development, there is not a single CEO of any company who can ignore it.
Just recently we have seen Anthropic’s Claude introduce the groundbreaking Artifacts feature alongside some seriously effective professional services, and OpenAI has introduced o1 [Strawberry], whose reasoning capabilities go beyond anything we have seen to date.
At Affino, we’ve already taken significant steps to incorporate AI into our product. We’ve integrated AI automation into our SaaS, and the results have been transformative; even before launching, we’ve witnessed how rapidly the landscape is evolving. Earlier this year we developed three versions of our AI chat software, and each time we completed a version, the pace of AI advancement had already rendered it outdated before release.
We are currently working on the fourth version of Affino’s Expert AI Service, which is slated for launch later this quarter. It includes a host of powerful updates that make the most of the leading-edge capabilities AIs offer today: Article Questions, generated by the AI on the fly or in the hundreds of thousands for existing content; our v1 AI Automations, which leverage Affino’s Conversion Event and Customer Ladder; the Guest AI services; and a greatly improved Q&A engine with reranking behind the scenes for even faster and better responses. For the first time there will also be the option for you to choose which LLM runs each of your Expert AIs.
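To illustrate the reranking idea in general terms, here is a minimal sketch of a retrieve-then-rerank Q&A step. It assumes a sentence-transformers cross-encoder and a pre-retrieved set of candidate passages; it is a generic illustration, not Affino’s actual pipeline.

```python
# Generic sketch of a retrieve-then-rerank Q&A step (illustrative, not Affino's implementation).
from sentence_transformers import CrossEncoder

def rerank_passages(query: str, candidate_passages: list[str], top_k: int = 3) -> list[str]:
    """Rerank already-retrieved passages so the answering LLM only sees the most relevant ones."""
    # A cross-encoder scores each (query, passage) pair jointly, which is slower
    # than pure embedding search but considerably more precise.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = reranker.predict([(query, p) for p in candidate_passages])

    # Keep the highest-scoring passages; these become the context handed to
    # whichever LLM the Expert AI is configured to use.
    ranked = sorted(zip(scores, candidate_passages), reverse=True)
    return [passage for _, passage in ranked[:top_k]]
```

Retrieving broadly first and only reranking a short list afterwards is the trade-off that keeps responses both fast and relevant, which is the kind of improvement the upgraded Q&A engine is aiming at.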
Even with these plans in place, I’m already preparing for the fifth version for next year. However, I fully expect that by the time we get there, AI technology will have advanced even further, and we’ll need to adapt again - not least with the full API launch of OpenAI o1. This constant cycle of innovation highlights just how fast the AI space is moving, and Affino is at the forefront of leveraging these changes to enhance our services.
I’ve spent a great deal of time working with AI over the past couple of years, and recent developments make it clear that we, i.e. humanity, have now invented all the components needed for functional AGI (Artificial General Intelligence). It might not be sentient, but we will be able to create fully functional AGI that can assist everyone and accelerate our civilisation at an ever-increasing pace (at least for some time). The advent of OpenAI o1, with its reasoning engine, and the elegant simplicity of the components that make up that engine, simply reinforce my convictions here.
I can’t say exactly what form bots will take in the next five years or so, but I do anticipate that we will be able to buy general-purpose AI bots at an affordable price, approximately $15,000, to assist us with chores around the house.
The fact that companies are investing billions and looking to raise trillions of dollars for this great leap forward in AI and robotics brings home just how close this all is to being realised.
Research has shown that AI significantly enhances the performance of workers, especially those new to their roles. When people start new jobs and use AI as an assistive tool, their performance on tasks can improve dramatically, with reported gains ranging from around 20% for experienced users to as much as 80% for those newest to a role.
However, there’s a flip side. While AI delivers an immediate performance boost, it can lead to a dependence on the technology. Workers who rely on AI don’t always develop the deeper skills needed for true mastery, because the AI is doing much of the work for them. This means that AI enables low-skilled operators to perform tasks that would traditionally have required more experience or education, such as technical or complex problem-solving tasks. Yet unless these workers are actively pushed to up-skill or motivated to learn, they may not progress in their knowledge.
While AI can enable workers to complete tasks that once required advanced skills and experience, it’s essential to recognize the limits of what AI can do. The phrase "low-skilled operators doing PhD-level work" might sound impressive, but it’s a bit of an overstatement. AI can certainly handle the technical aspects of many complex tasks, such as analyzing data or drafting documents. However, it cannot yet fully replace the critical thinking, creativity, and problem-solving that real PhD-level work entails.
In essence, AI allows workers to perform advanced tasks, but only in the sense of execution, not in terms of understanding or innovation. So while AI can currently boost productivity, it doesn’t yet necessarily foster the deeper learning and skills that would allow an individual to excel without AI.
Despite their incredible capabilities, AI models like Claude, Mistral, ChatGPT and o1 are far from perfect. They still make occasional errors, sometimes simple ones, which can be frustrating. However, the rapid pace at which these models are improving is nothing short of remarkable.
These AI systems don’t autonomously learn from their mistakes in real time. Instead, their improvement relies on feedback, human intervention, and retraining. Developers take the mistakes made by the AI and use new data to retrain the models, gradually refining their performance over time. This process is happening at a tremendous speed, with each iteration of these models becoming more accurate and capable.
As this cycle of AI improvements accelerates, it’s important to ensure that AI models, like the ones we use in Affino, are as reliable as possible. The constant feedback loop between human input and machine learning will keep driving these systems forward, but it will take ongoing effort to maintain their effectiveness.
The changes we are witnessing with AI are unlike anything before. AI is enabling individuals to perform tasks they never could have accomplished without years of training or education. This will undoubtedly reshape industries and redefine job roles across the board.
However, there are challenges to consider as well. As more people rely on AI, there’s a risk that individuals may lose essential skills or fail to advance because the AI is doing much of the thinking for them. Striking a balance between leveraging AI’s power and encouraging continuous learning will be crucial to navigating this new reality.
Here’s a bit of info on what OpenAI’s Chain of Thought enhancement is and why it is such a game changer. It is already out in preview, and OpenAI has promised that it will be available via its API soon, meaning that we will be bringing this to Affino in the near future.
ChatGPT o1’s chain of thought capability is a significant advancement in AI reasoning, allowing the model to "think" before responding to complex queries. The core components include reinforcement learning training, an internal chain of thought process, and hidden reasoning tokens. Crucially, when training the model, OpenAI focused on reinforcing the reasoning process rather than the outcome. This has been transformative and means that o1 is unique in the market today.
These elements work together to enable step-by-step reasoning, self-correction, strategy refinement, and extended compute time for more accurate outputs. Unlike previous models where chain of thought was a prompting technique, o1 is specifically trained to use this approach without explicit prompting. This built-in reasoning capability leads to significant improvements in areas requiring complex reasoning, such as competitive programming, mathematics, and scientific problem-solving.
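To make that distinction concrete, here is a hedged sketch using the OpenAI Python SDK: with an earlier model you would typically ask for step-by-step reasoning in the prompt, whereas with o1 you simply state the problem and the reasoning happens internally. The model names and the example question are illustrative.

```python
# Illustrative only: contrasting prompted chain of thought with o1's built-in reasoning.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A train leaves at 09:10 and arrives at 11:45. How long is the journey?"

# Earlier models: chain of thought is a prompting technique, so we ask for it explicitly.
gpt4_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Think step by step, then answer: {question}"}],
)

# o1: the reasoning happens internally via hidden reasoning tokens,
# so no "think step by step" instruction is needed.
o1_response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": question}],
)

print(o1_response.choices[0].message.content)
```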
The o1 model breaks down complex problems into smaller, manageable steps and can try different approaches if the initial one isn’t working. This process allows for better integration of safety policies and alignment with human values. While the full reasoning tokens are hidden, users can often see a summary of the model’s thought process, providing insight into how it reached its conclusions.
By combining these components, ChatGPT o1’s chain of thought capability enables more sophisticated reasoning, particularly for complex problems in STEM fields. However, this enhanced reasoning comes at the cost of increased computational time and resources, making it less suitable for simpler tasks where previous models may be more efficient.
At Affino, we are keenly aware of these dynamics. As we push forward with AI integration, we’ll be looking for ways to ensure that our users can make the most of AI without losing the ability to innovate and grow on their own.
We will work to support safe AI prompting and use, and to ensure, as far as possible, that Affino can still function independently of AIs to a degree. We will also ensure that Affino can work with multiple AIs to mitigate the risk of being tied into any one ecosystem.
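As a rough sketch of what working with multiple AIs can look like at the code level, here is a hypothetical provider-agnostic layer; the class and function names are illustrative and this is not a description of Affino’s internal architecture.

```python
# Hypothetical sketch of a provider-agnostic LLM layer; not Affino's actual design.
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Minimal interface each AI vendor adapter must implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        from openai import OpenAI
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class AnthropicProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        import anthropic
        client = anthropic.Anthropic()
        resp = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

def run_expert_ai(provider: LLMProvider, prompt: str) -> str:
    # Callers depend only on the interface, so swapping or mixing vendors
    # becomes a configuration decision rather than a rewrite.
    return provider.complete(prompt)
```

Because callers depend only on the shared interface, switching vendors is a configuration change rather than a rebuild, which is precisely the lock-in risk described above.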
Whilst the long-term vision for Affino is that you will ultimately be able to simply instruct Affino to automate as many aspects of your business as possible, we will also create as many tools as we can that leverage AIs to empower individuals and teams to produce their best work and get their message across more powerfully - whether through text, audio or video.
We are anticipating that the world of work will be completely different in media companies over the next couple of years and can’t wait to make the journey to empower everyone using Affino.