Rudy Lai

Enterprise Generative AI Use Cases, May 2024

2024-05-24

I’ve been diving into real business-critical use cases of generative AI technology in enterprise companies, based on case studies published in May 2024.

These are all case studies of generative AI in large Fortune 500 companies.

What's striking is that most of these use cases are still quite basic and straightforward, rather than something new and surprising. Another notable pattern is that many of these implementations are done in partnership with major cloud providers or large consultancies.

Despite the potential of smaller startups to deliver these solutions, large companies often opt for the reliability and customization capabilities offered by established system integrators or public cloud services.

This trend raises an intriguing point: exploring enterprise case studies that involve startups might reveal different and potentially more innovative applications of generative AI. However, finding such examples may require a more challenging search.

Without further ado, here are the interesting use cases from the last month:

At Fortune, they've turned their iconic Fortune 500 rankings into an AI-driven platform that lets you ask questions in natural language about companies and industries and get relevant data visualizations and insights.

For example, “Show me a chart comparing 2023 revenue and earnings per share for pharmaceutical companies.” This makes all of Fortune's decades of collected industry data easily accessible and usable.

This is a simple way to improve UX for any data product, and I am seeing more and more examples of teams cracking the code on using LLMs/natural language user queries to work with both structured (here, revenue and EPS) and unstructured (e.g. industry classification) data.
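Fortune hasn't published its architecture, but the common pattern for this kind of product is text-to-SQL: an LLM translates the natural-language question into a query over the structured data, and the returned rows feed a chart or text answer. A minimal sketch with sqlite3, where a hard-coded query stands in for the LLM translation step (company names and figures below are invented, not Fortune's data):

```python
import sqlite3

# Toy stand-in for Fortune's structured rankings data (figures invented).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE companies (name TEXT, industry TEXT, revenue_2023 REAL, eps_2023 REAL)"
)
conn.executemany(
    "INSERT INTO companies VALUES (?, ?, ?, ?)",
    [
        ("PharmaCo A", "Pharmaceuticals", 58.5, 4.21),
        ("PharmaCo B", "Pharmaceuticals", 44.1, 3.10),
        ("RetailCo C", "Retail", 611.3, 6.43),
    ],
)

def answer(question: str) -> list[tuple]:
    # In a real system, an LLM is prompted with the schema and the
    # question, and emits the SQL. Hard-coded here for illustration.
    sql = (
        "SELECT name, revenue_2023, eps_2023 FROM companies "
        "WHERE industry = 'Pharmaceuticals'"
    )
    return conn.execute(sql).fetchall()

rows = answer("Show me 2023 revenue and EPS for pharmaceutical companies")
```

The rows would then be handed to a charting layer (or back to the LLM to narrate), which is where the "text-based responses or useful graphical data visualizations" come from.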

Of course, it’s increasingly table stakes to then output “text-based responses or useful graphical data visualizations” – essentially automating the work of a business analyst. I wonder why ThoughtSpot didn’t win this business.

Brazilian retail giant Gimba has built a new product catalog system that automates writing detailed descriptions for 30,000 products, drastically reducing the manual workload for their team. Registration time per product went from 13 minutes to 2 minutes across the 300 product updates that happen every month. Marketing was also concerned about brand tone of voice, pitching products correctly, and SEO.

While only 900 labeled examples were used to fine-tune the model, in my experience that’s a relatively large dataset. Everybody tends to overestimate how much clean data is floating around in large enterprises.
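The case study doesn't describe the training format, but for a fine-tune like this the 900 examples typically end up as JSONL chat transcripts: product attributes in, approved description out. A minimal sketch (field names and the sample product are my invention, not Gimba's schema):

```python
import json

# Hypothetical shape of the labeled data: raw product attributes paired
# with a marketing-approved description that serves as the label.
examples = [
    {
        "attributes": {"name": "A4 paper, 500 sheets", "weight": "75 g/m2"},
        "description": "Smooth 75 g/m2 A4 paper for reliable everyday office printing.",
    },
    # ... ~900 such pairs in the real dataset
]

records = []
for ex in examples:
    records.append({
        "messages": [
            {"role": "system",
             "content": "Write an on-brand, SEO-friendly product description."},
            {"role": "user", "content": json.dumps(ex["attributes"])},
            {"role": "assistant", "content": ex["description"]},
        ]
    })

# One JSON object per line: the usual fine-tuning upload format.
jsonl = "\n".join(json.dumps(r) for r in records)
```

Note how the brand-voice and SEO concerns from marketing live in the labels themselves: the model learns the house style from the approved descriptions, not from a rule book.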

Product cataloging is a very interesting space, and on the surface you wouldn’t look at an ecommerce/retail company and think that one of its biggest data problems is cataloging its products. Every outdated price, description, source, or supplier record creates friction for the business, losing revenue and increasing customer support costs (“This was cheaper on the shelf!”).

Coincidentally, the data science team of a travel marketplace I spoke to recently also told me that generating listings automatically has been their most impactful project. There is actually a whole category of vendors that addresses this pain point, under “master data management” or “product data management”; interestingly, there are not a lot of startups in that space.

For its global contact center operations, DoorDash created a voice-based self-service system that can handle routine queries using generative AI. This offloads huge volumes of calls from human agents. DoorDash said it's fielding hundreds of thousands of calls per day through this AI assistant.

Technically, the focus here was to “mitigate hallucinations, prompt injection events, and detect abusive language”. Because the interface is voice, the 2.5-second latency was also a big selling point in this case study.

VCs and founders have long questioned whether LLMs themselves can provide enough value to support a new class of startups. There is increasing excitement around workflow products that enable companies to build bespoke LLM systems, rather than products that simply offer LLM capabilities. Having said that, it seems like the default outcome is still for public cloud platforms to capture the first wave of straightforward workflow-related use cases.

This was a great example: DoorDash is only connecting three components – Anthropic’s Claude, a RAG system containing their help center docs, and a voice interface – and AWS seems to have delivered it well.
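The case study doesn't include code, but the shape of that three-component pipeline is easy to sketch: retrieve the most relevant help-center document, build a grounded prompt, and send it to the model. In this sketch, naive keyword overlap stands in for real embedding retrieval, and the Claude call (via Amazon Bedrock in DoorDash's case) is stubbed out; the doc snippets are invented:

```python
# Toy stand-in for the help-center corpus behind the RAG system.
HELP_DOCS = [
    "Refunds: customers can request a refund within 30 days of delivery.",
    "Delivery status: track your order in the app under Orders.",
    "Account: reset your password from the login screen.",
]

def retrieve(question: str, docs: list[str]) -> str:
    # Pick the doc sharing the most words with the question.
    # A production system would use embeddings and a vector store.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer_call(question: str) -> str:
    context = retrieve(question, HELP_DOCS)
    prompt = (
        f"Answer only from this context: {context}\n"
        f"Caller asked: {question}\nAnswer briefly."
    )
    # Real system: send `prompt` to Claude, then pass the reply to a
    # text-to-speech layer for the voice interface.
    return prompt

result = answer_call("How do I request a refund?")
```

Grounding the prompt in retrieved docs is also doing double duty here: it is the main lever for the hallucination mitigation mentioned above, since the model is told to answer only from the help-center text.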

Lawpath, an Australian online legal services firm, launched LawpathAI to automate Ask, Draft, and Document Review workflows. For fixed-price, variable-cost services like Lawpath (‘monthly design subscription’ agencies are another type of company that comes to mind with this business model), they were wrestling with growing customer support costs.

Overall, this reduced customer service inquiries by 25%, cut customer quote lead time from 3 days to half a day, and increased legal document creation by 15%.

While the headline reads “startup builds new AI product”, to me the biggest story here is another example of how generative AI alone can’t power a business. Lawpath already had the distribution and operations to provide on-demand access to critical legal services, without the need for an in-house legal team. Customers come to them for this value, and then AI accelerates the delivery.

In contrast, there has been a sea of startups that try to connect to a legal team’s document repository and charge for the search and drafting capabilities – and you’ve heard of none of them. Unfortunately, the money is in the un-sexy operational bits of recruiting a pool of legal experts, building brand and distribution, and embedding your product into your customers’ legal workflows.

In healthcare, revenue cycle management company Waystar has found one of the more creative applications of GenAI. They automated the extraction of prior authorization requirements from complex payer data sets. This reduced the time to generate an authoritative report of procedural preauthorization by 99.93%, while increasing accuracy by 13%.

Elsewhere

At Denso, Japan's largest global manufacturer of automotive components, they used GenAI to automatically structure technical data, so that they can identify on-site problems and improve processes. Previously this data structuring had to be done by hand. Link

Energy company Schneider Electric has deployed almost every simple GenAI use case: internal knowledge assistants, financial analytics automation, and bots to generate code and content for their product development. Link
