
The Next Phase of AI sees an arms race for Large Action Models (LAMs) and LLM Automation

The AI revolution enters its next universal phase - with fully transactional Large Action Models.

Nowadays it’s the mobile phone which acts primarily as your digital personal assistant - through a range of helpful bots like Alexa, Google Assistant and Siri. While these bots are to an extent vocally interactive - they are really just scratching the surface when it comes to handling our personal and professional tasks and routines.

 

Using mobile devices as digital personal assistants, alongside the current generation of AI apps, is a start on the journey towards having the digital equivalent of a real-life PA (Personal Assistant).

 

There is a new ecosystem arising, encompassing key techs including ChatGPT Agents alongside Zapier for automation, with tools such as Perplexity, Notion and ChatGPT for discovery and knowledge management, and enhanced browsing through Arc. These all point squarely in the direction of where we will be going with AIs over the coming year or two.

 

Picture a futuristic scenario of a holographic digital personal assistant that serendipitously organises your daily task schedule on your behalf - ordering stuff in, making bookings and reservations, and handling whole-family travel itineraries from start to finish - liaising with hotels, resorts and airlines, and scheduling a variety of leisure pursuits. These personal assistants will be able to do all manner of tasks - from ordering an Uber to arranging and overseeing quite involved processes - in an authorised and semi-autonomous manner.


Large Action Models (LAMs)

The latest area of focus is on LAMs where those 'bots' can marshal all aspects of being a proper assistant - which means carrying out all manner of real-world tasks and performing entire end-to-end routines. 

 

The AI ecosystem today resembles the mobile app ecosystem - consisting of lots of individual and separate 'apps' which each carry out individual tasks - but when you want to complete a whole routine and transact something, you need to manually string together a whole load of different apps in a convoluted and clunky process.

 

This is starting to become simpler with ChatGPT’s custom GPTs and tools such as Perplexity bringing many of these together, but they can still be clunky when they require three or more tasks to be joined up.
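To make the "joining up" concrete, here is a minimal sketch of the kind of glue a LAM aims to eliminate - three separate 'apps' chained into one travel routine. All function names and return values here are invented for illustration; no real service API is being shown.

```python
# Three stand-in "apps", each handling one isolated task.
def find_flight(dest):
    return {"flight": f"LHR->{dest}", "arrives": "14:00"}

def book_hotel(city, checkin):
    return {"hotel": f"Central {city}", "checkin": checkin}

def book_taxi(pickup_time):
    return {"taxi": pickup_time}

def plan_trip(dest):
    # Today a user performs each step in a different app by hand,
    # copying details between them; a LAM would perform the whole
    # chain from a single spoken instruction.
    flight = find_flight(dest)
    hotel = book_hotel(dest, checkin=flight["arrives"])
    taxi = book_taxi(pickup_time=flight["arrives"])
    return {**flight, **hotel, **taxi}

print(plan_trip("Oslo"))
```

The point is the data hand-off: the flight's arrival time feeds the hotel check-in and the taxi pickup, which is exactly the cross-app plumbing users currently do manually.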

 

The LAMs are about connecting the ‘action’ ecosystem - bringing everything into a single platform which can do everything - like a human personal assistant would - and can actively manage whole aspects of your life from the same conversational and gesture-based interactions.

 

A key aspect of LAMs is that there is a training component - where you can instruct the AI to complete newer and more complicated tasks. Once trained, the AI LAM can then repeat such a task automatically and autonomously, based on simple spoken commands.
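The record-once, replay-on-command idea can be sketched in a few lines. This is a hypothetical illustration of the teach-mode concept, not any vendor's actual API - the `Action` and `TaughtRoutine` names are invented here.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    app: str        # which service to act on, e.g. "opentable"
    operation: str  # e.g. "search", "book"
    params: dict    # arguments captured during the demonstration

@dataclass
class TaughtRoutine:
    name: str
    steps: list = field(default_factory=list)

    def record(self, action: Action):
        # Teach mode: the user demonstrates each step once.
        self.steps.append(action)

    def replay(self, overrides=None):
        # Later, a simple command re-runs the whole routine,
        # optionally with new parameters (e.g. a larger party).
        results = []
        for step in self.steps:
            params = {**step.params, **(overrides or {})}
            # A real LAM would drive the app's UI or API here.
            results.append(f"{step.app}.{step.operation}({params})")
        return results

routine = TaughtRoutine("book dinner")
routine.record(Action("opentable", "search", {"cuisine": "thai"}))
routine.record(Action("opentable", "book", {"party_size": 2}))
print(routine.replay(overrides={"party_size": 4}))
```

The important property is that the routine, once taught, is parameterised - the same demonstration can be replayed with different details.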


The Rabbit R1 Pocket Companion

Rabbit aims to take the digital personal assistant concept a great deal further - not only by providing the software and engine behind it, but also a low-cost device, the Rabbit R1, which can undertake those tasks wherever you are in the world.

 

www.rabbit.tech

 

Undoubtedly the big hit from the recent CES show was Rabbit's R1 Pocket Companion, which, at the remarkably low cost of $199 for a hand-held device, has already sold out of its fifth batch of preorders.

 

This personal assistant is the first of its kind - seeking to displace your smartphone in many areas by actually carrying out tasks on your behalf to completion - through a single universal interface which handles everything seamlessly.

 

And where there is not already automation present - you can train the device on a new routine - and it will quickly pick that up and automate it for next time.

 

The Rabbit R1 can set itineraries, make reservations, book tickets, and order in a variety of goods and services. 

 

Its Large Action Models are organised into 4 categories:

  • OPTIMAL - Search, Music, Rideshare, Food, Vision, Generative AI, Translation
  • EXPLORATORY - Travel, AI-enhanced Communication, Note-taking
  • PLANNED - Point-of-interest Research, Reservations, Ticketing, Navigation, Mobile & Desktop Teach Mode
  • EXPERIMENTAL - Web Teach Mode

In practice we do not imagine that this will all work on day one when the product ships, indeed Rabbit might never completely realise their vision, but it does feel like the genie is out of the bottle with this one, and that if not Rabbit then one or many of the major tech providers will realise this goal in the not too distant future.


Apple, Google, Microsoft and Samsung

It is clear that the major tech powerhouses are all leaning towards transactional LAMs, with Apple announcing Ferret, and Google Assistant being on a long journey with the same goals.

 

Obviously the two big tech giants and their Android and iOS ecosystems are playing catch-up with some of the highly pioneering AI companies - like Rabbit Technologies. Google has been honing its AI ecosystem for a while - tying together its main Search, Google / Android Assistant, and the Duplex elements within that.

 

Apple, meanwhile, has announced its own AI platform 'Ferret', which started off on the image identification and definition side - a huge part of the 'interpretation' needed by these LAM systems. Those platforms need a lot of artificial intelligence to be able to identify every object within a particular frame - and ascribe it physical interactive abilities.

 

We've already seen LG and Samsung bring such AI technology to refrigerators - which can scan and identify (mostly) the contents of your fridge, order in replacements, and advise on recipes and ingredients from what you've got!

 

It's kind of early days for those - and Rabbit has largely stolen a march on those behemoths - but like the VCR arms race, it's not necessarily the first mover that wins out, but rather whoever sees where the biggest benefits of that format lie, and is more readily able to take advantage of that (cf. VHS vs Betamax vs Video 2000).

 

The 'Ferret' project is a collaboration between Apple and Cornell University, first released on GitHub in October of 2023.

 

Ferret is being trained on 8 x A100 GPUs with 80GB memory - with image interaction and subject detection / identification as its early focus. Interestingly and new for Apple - Ferret is designated as being both Open Source & Non-Commercial! Most unusual for Apple not to be demanding its usual pound of flesh!


Samsung Galaxy S24 AI Focus

www.samsung.com/uk/smartphones/galaxy-s24-ultra/buy/?cid=UK_...

 

news.samsung.com/global/enter-the-new-era-of-mobile-ai-with-...

 

Samsung has been making a big AI play on its latest Galaxy S24 Smartphone edition - though if you delve below the surface, a lot of this is just an evolution of the more fragmented app ecosystem. The technology is still not universal in its access or control, but rather focuses on small subsections of the smartphone's core functionalities - such as Live Translation, camera automation and subsequent image manipulation.

 

Samsung cites the following Mobile AI Innovations:

  • Live Translate
  • Interpreter
  • Chat Assist
  • Android Auto / Navigation
  • Note Assist
  • Transcript Assist
  • Gesture-Driven Circle to Search
  • ProVisual Engine for Image Capture, Nightography
  • Generative Edit / Edit Suggestion
  • Super HDR

A lot of those, though, are really just baby-step evolutions of existing technologies - and not truly the new LAM paradigm exemplified by, say, the Rabbit R1.

 

The Rabbit R1 will undeniably be hugely influential and will help shape the direction of mobile phone operating systems and platforms - and a move away from fragmented app ecosystems - so it will be interesting to see how the big mobile phone giants tackle and induct the new LAM AI paradigm.


AI-infused Hardware and Plugins will become Ubiquitous

There is an AI revolution in Music Production with AI-augmented DAWs - like Hit'n'Mix's RipX DAW Pro. This shows how task specific plugins will increasingly be AI driven.

 

hitnmix.com

 

We're already well into the Generative AI Age of Music - where AI is increasingly being used in music creation nowadays, even if just in the mastering suite. Many artists, though, are already making use of AI technologies in music production, and as a result we have a new class of DAW (Digital Audio Workstation) engineered to further manipulate that format of music.

 

A lot of Generative AI music - in fact the vast majority - is never perfect upon creation, but needs to be significantly refined and edited. There also remains a long legacy of sample-based music - going back to the roots of Hip-Hop - where contemporary artists like Norman Cook / Fatboy Slim construct tracks from as many as a dozen different sample 'Stems' which are then re-worked, edited and quantised to form the structure of a hit song.

 

The new AI DAWs - a la RipX Pro - can take a Generative AI master file or a WAV / MP3 recording - even grabbing audio from YouTube videos - and isolate it into its individual component parts, stripping out the various layers.

 

The key layers:

  • Master (all layers)
  • Voice
  • Piano
  • Guitar
  • Bass
  • Kick Drum
  • Drum
  • Percussion
  • Strings
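The separation idea above can be sketched in miniature: a separator assigns each part of the mix a weighting per stem, and the stems sum back to the master. The "model" below is a stand-in (random soft masks, assumed names throughout); real tools like RipX use trained neural networks, but the reconstruction property shown is the same.

```python
import numpy as np

STEMS = ["voice", "piano", "guitar", "bass", "kick", "drums",
         "percussion", "strings"]

def separate(master, seed=0):
    """Toy stem separation: soft masks that sum to 1 per sample."""
    rng = np.random.default_rng(seed)
    weights = rng.random((len(STEMS), master.shape[0]))
    weights /= weights.sum(axis=0, keepdims=True)  # masks sum to 1
    return {name: master * w for name, w in zip(STEMS, weights)}

# A stand-in "mix" signal in place of a real WAV / MP3.
master = np.sin(np.linspace(0, 2 * np.pi, 1000))
stems = separate(master)
# Because the masks sum to 1, the stems reconstruct the master exactly.
print(np.allclose(sum(stems.values()), master))
```

This is why stem-level editing is lossless in principle: you can strip a layer out, rework it, and sum the layers back into a full mix.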

RipX is already endorsed by luminaries including Norman Cook / Fatboy Slim, and Kruder & Dorfmeister.

 

Its feature set consists of:

  • 6+ Stem AI Separation
  • Edit Stems Note by Note, even AI Generated Music
  • Unparalleled Remixing Capability - stripping elements back and building them up again
  • Instrument Replacement
  • Works with Video (macOS only)
  • Enhanced Stem clean-up
  • Top Tier Audio Repair and Effects
  • AudioShop Advanced Creative Tools

Going forward I expect LAM platforms to fully encroach on this too - and combine the AI Music Generation with the Editing, refinement and final mastering - taking a workflow of many weeks down to just a day or two!


How Affino fits into the AI Landscape

Affino shares many goals with Rabbit. The Unified Business Platform - in being a single-source conduit and platform for all actions, interactions and transactions for commerce, publishing, media and events - is similarly expansive and holistic. We are investing heavily in AI-enabling all aspects of Affino, along with integrating tightly with the leading AI and automation platforms including ChatGPT and Zapier.

 

Not only will we be leveraging LLMs, LAMs and the broader ecosystem, Affino itself is being evolved to deliver powerful commercial AI services which can ultimately be consumed by humans and AIs alike.

 

This is a journey, and we have a two-year roadmap for the evolution - one which we review continuously to adapt to the breakthroughs and innovations happening in the world of AI each month.

 

We can’t wait to be a part of the huge productivity and empowerment revolution which is underway - and are looking forward to launching the v1 Affino AI service release in the very near future.

Posted by Stefan Karlsson