Unlocking value in AI

2022 brought a wave of advances in the accessibility of artificial intelligence. Building on that, I believe 2023 will be the year AI finds much more mainstream adoption, as products are built on top of that underlying infrastructure for specific use cases.

We've already seen this in certain areas, but I believe this year's use cases will make the previous ones look small.

Here are some key capabilities that I believe will need to be built into existing models (or productised by other companies) to unlock some of the most exciting use cases:

Long-term memory

Most LLMs (large language models) don't have a long-term memory. Part of the reason ChatGPT has been so successful in such a short period of time, I believe, is that it can retain more context within a conversation than earlier models could (and the UI certainly helps with this). But once a conversation outgrows the model's context window, the earliest exchanges are simply forgotten.
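As a minimal sketch of one way to bolt long-term memory onto a stateless model: store past exchanges, then retrieve the most relevant ones into each new prompt. Here `call_llm` is a hypothetical stand-in for whatever completion API you use, and the keyword-overlap scoring is a toy stand-in for something like embedding similarity.

```python
# Sketch: long-term memory for a stateless LLM via retrieval.
# `call_llm` is a hypothetical stand-in for your completion API;
# keyword overlap is a toy stand-in for embedding similarity.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model/API of choice")

class MemoryStore:
    def __init__(self):
        self.entries: list[str] = []  # past exchanges, one string each

    def add(self, user_msg: str, reply: str) -> None:
        self.entries.append(f"User: {user_msg}\nAssistant: {reply}")

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Score each stored exchange by word overlap with the new query.
        q = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

def chat(memory: MemoryStore, user_msg: str) -> str:
    # Prepend relevant past exchanges so the model "remembers" them,
    # even though each call to it is stateless.
    context = "\n\n".join(memory.recall(user_msg))
    prompt = f"Relevant past conversation:\n{context}\n\nUser: {user_msg}\nAssistant:"
    reply = call_llm(prompt)
    memory.add(user_msg, reply)
    return reply
```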

Integration with other tools and services

Natural language is a much better medium for humans interacting with machines, but (at least for now) it is less suitable for machines talking to other machines - this is why APIs have become so important for integrating products.

Natural language will likely become an ever more important way for people to interact with technology in their work, but it will need to hook into other platforms and services. I can imagine this going in a number of directions: LLMs converting instructions into API requests, or, drawing on the work of Adept, models interacting with other products and platforms as a human would, in a virtual environment.
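A rough sketch of the first direction: ask the model to translate an instruction into a structured description of an HTTP call, validate it against an allow-list, then dispatch it. `call_llm` and the `api.example.com` endpoint are hypothetical placeholders.

```python
# Sketch: natural-language instruction -> validated API request.
# `call_llm` is a hypothetical stand-in for your completion API;
# the endpoint and allowed methods below are illustrative only.
import json
import urllib.request

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model/API of choice")

PROMPT_TEMPLATE = """Translate the instruction into a JSON object with keys
"method", "url", and "body". Respond with JSON only.

Instruction: {instruction}
JSON:"""

def instruction_to_request(instruction: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(instruction=instruction))
    request = json.loads(raw)  # raises if the model strays from JSON
    # Guard against the model inventing verbs or targets we don't allow.
    assert request["method"] in {"GET", "POST"}
    assert request["url"].startswith("https://api.example.com/")
    return request

def dispatch(request: dict) -> bytes:
    body = request.get("body")
    data = json.dumps(body).encode() if body and request["method"] == "POST" else None
    req = urllib.request.Request(
        request["url"],
        data=data,
        method=request["method"],
        headers={"Content-Type": "application/json"} if data else {},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The validation step matters: treating the model's output as untrusted input, rather than executing it blindly, is what makes this kind of integration safe enough to productise.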

Better feedback loops

There has been some improvement in this respect, but interfaces that let users give feedback on responses build consumer trust. Better still if the model reacts to that feedback instantly, improving results for similar queries going forward.
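One cheap way to close that loop without retraining, sketched below: record positively rated exchanges and reuse the most similar ones as few-shot examples in future prompts. Again, `call_llm` is a hypothetical stand-in, and the word-overlap similarity is a toy substitute for embeddings.

```python
# Sketch: a simple feedback loop. Positively rated exchanges are reused
# as few-shot examples for similar future queries, so feedback takes
# effect immediately without retraining the model.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model/API of choice")

approved: list[tuple[str, str]] = []  # (query, good_answer) pairs

def record_feedback(query: str, answer_text: str, thumbs_up: bool) -> None:
    if thumbs_up:
        approved.append((query, answer_text))

def similar(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(query: str) -> str:
    # Pull the most similar approved exchanges into the prompt so the
    # model benefits from past feedback on its very next call.
    examples = sorted(approved, key=lambda qa: similar(qa[0], query), reverse=True)[:2]
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return call_llm(f"{shots}\n\nQ: {query}\nA:")
```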

Reducing hallucinations

"Hallucination" is the term used when a model says something that sounds very plausible but is in fact not true. This might be as simple as an incorrect fact, but I've seen it go as far as making up URLs, and even people and companies that do not exist.

For some use cases, hallucinations matter less than for others. But it will be important to reduce their occurrence significantly for B2B use cases, or anywhere a person truly relies on the information being 100% accurate (like healthcare).
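One common mitigation is grounding: hand the model only vetted source material and instruct it to answer solely from that material, with an explicit escape hatch when the answer isn't there. A minimal sketch, where `call_llm` and `retrieve_documents` are hypothetical stand-ins for a completion API and a search over trusted documents:

```python
# Sketch: grounding answers in retrieved sources to curb hallucinations.
# `call_llm` and `retrieve_documents` are hypothetical stand-ins for a
# completion API and a document search over trusted material.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model/API of choice")

def retrieve_documents(query: str) -> list[str]:
    raise NotImplementedError("plug in your document search")

GROUNDED_PROMPT = """Answer the question using ONLY the sources below.
If the sources do not contain the answer, reply exactly: "I don't know."

Sources:
{sources}

Question: {question}
Answer:"""

def grounded_answer(question: str) -> str:
    sources = "\n---\n".join(retrieve_documents(question))
    return call_llm(GROUNDED_PROMPT.format(sources=sources, question=question))
```

This doesn't eliminate hallucinations, but it narrows the model's room to invent, and the "I don't know" path is often more valuable to a user than a confident fabrication.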


I'm working on a product in this space currently. Excited to share more in due course.
