Until recently, Big Tech heavily promoted AGI, promising that everyone would soon have a personal AI assistant to do all their heavy thought work. Today, that narrative has shifted towards vertical solutions for corporate customers.
Where did these grand visions go, and more importantly, why should companies care?
Where the relevant use cases lie
Let's first look at the most commonly used solutions that make use of the latest AI technologies.
- Coding assistants: writing software from requirements, identifying and fixing bugs, and making sense of large codebases all work remarkably well by now.
- Drafting articles: writing texts in a certain style and tone, or for specific audiences, is widely used. This ranges from private use, such as posting on social media, to generating more complex documents in corporate domain language.
- Analyzing articles: whether reviewing texts or extracting specific information from them, LLMs offer a way to automate this task to a high degree.
- Knowledge management: searching by the meaning of a text instead of by keywords delivers the relevant documents instantly. Adding chatbots on top lets users interact with corporate knowledge seamlessly.
- Images: beyond text, creating and analyzing images are the next most relevant use cases. Here, the focus lies more on supporting the creative process than with text AI. Optical Character Recognition (OCR) remains highly prominent.
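The meaning-based search behind the knowledge-management use case can be illustrated with a minimal sketch: documents and queries are mapped to vectors, and retrieval ranks documents by cosine similarity to the query vector. The document names and the three-dimensional vectors below are made-up toy values; in a real system, the vectors come from an embedding model, not a hand-written dictionary.

```python
import math

# Toy embeddings standing in for a real embedding model's output.
# The file names and vectors are illustrative assumptions only.
DOC_VECTORS = {
    "vacation_policy.txt": [0.9, 0.1, 0.2],
    "expense_guidelines.txt": [0.1, 0.8, 0.3],
    "onboarding_checklist.txt": [0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vector, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(DOC_VECTORS.items(),
                    key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A query about "time off" would embed close to the vacation-policy vector,
# even if the document never contains the literal words "time off".
print(search([0.85, 0.15, 0.25], k=1))  # ['vacation_policy.txt']
```

This is the core difference to keyword search: the match happens in vector space, so documents are found by meaning rather than by exact word overlap.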
As far as I can tell, these are the most relevant use case domains today where actual value is generated through generative AI.
Enterprise customers are fundamental for profitable business models
While raising money and releasing better AI models and new ways of using them were at the core of leading tech companies' activities over the last three years, the shift towards the use cases mentioned above has set in, driven by the need to generate actual revenue. While chatbots are popular for private use, many of the big players actually lose money on every customer, even the paying ones. As an extreme example, OpenAI halted their video generator Sora just a few weeks ago because the staggering computational workload made it a financial "black hole".
Furthermore, these companies are setting up and scaling their own platforms for corporate customers. This underlines where they see the future of their business model: corporate customers with vertically focused use cases.
Consumer scale without enterprise margins is a losing game.
In my experience, AI use cases that scale well are rare. For most of them, you need to dive deep into the actual business challenge and the data in order to customize the AI part and fit it into the existing IT landscape. The rise of foundation models has made things easier, and yet the use cases that generate value and scale well are still only those mentioned above. Each of them lies at the core of what LLMs were built to do: understanding and re-generating the patterns of language. This is what they are good at; this is where they generate value.
To make money, then, the tech companies need to expand their solutions into fields that reach the largest number of customers with as little need for customization as possible. And this is exactly where the vertical use cases in big corporates come in: due to their sheer company size, many of their niche use cases still have a large user base, which helps with scalability.
How this impacts consumers and small companies
However, smaller companies don't have this size and reach, so we cannot expect them to be the focus of Big Tech's future business activities. This means they have to either
- ramp up their own AI capabilities to realize value, which is hard and expensive,
- pay premium prices to major consultancies, which would kill most business cases,
- rely on smaller service providers, which often offer mere wrappers around Big Tech APIs, raising serious compliance risks and uncertainty about where confidential company data is actually processed,
- or look beyond Big Tech and the leading AI models.
The same applies to private usage of AI chatbots. We will see significant price increases in the near future, well beyond the usual $20 per month. This will happen as Big Tech loses its incentive to please private customers: as the recent past has shown, pure reach among private users does not translate into revenue. The only alternative to paying higher prices will be paying with personal data, and we already see advertisements starting to creep into AI chatbots.
What does this mean for SMEs? How can they make use of the latest AI, then?
Many people in smaller companies tell me that generative AI-based applications see only little usage so far. This means the bar for generating value with them is actually low; as for their fields of application, we have laid out the most important use cases above.
Providing a good way to consume these services will guide the way forward, specifically by:
- Creating, analyzing, and interacting with corporate knowledge.
- Ensuring an infrastructure that guarantees company data stays completely safe.
- Offering interfaces that make AI easy and genuinely fun for employees to use.
I'm working on such a platform right now. More updates to come. If this resonates with your challenges and needs and you are keen to learn more: let me know and let's have a chat.