Analyze the Sentiment of a Customer Call Using LLM Gateway
In this guide, we’ll show you how to transcribe an audio file with AssemblyAI and then use LLM Gateway to classify the sentiment of a customer call as “positive”, “negative”, or “neutral”. We’ll also go beyond these three labels to glean additional insights and surface the reasoning behind each detected sentiment.
Quickstart
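The full workflow can be sketched end to end in Python. This is a minimal sketch using plain REST calls: the transcription endpoints follow AssemblyAI's public v2 API, but the LLM Gateway URL and model name are assumptions (an OpenAI-style chat completions route) — confirm both against the AssemblyAI documentation before running.

```python
import time

import requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": API_KEY}


def transcribe(filename: str) -> str:
    """Upload a local audio file and poll until the transcript is ready."""
    with open(filename, "rb") as f:
        upload = requests.post(f"{BASE_URL}/upload", headers=HEADERS, data=f)
    upload_url = upload.json()["upload_url"]

    job = requests.post(
        f"{BASE_URL}/transcript", headers=HEADERS, json={"audio_url": upload_url}
    )
    transcript_id = job.json()["id"]

    while True:
        result = requests.get(
            f"{BASE_URL}/transcript/{transcript_id}", headers=HEADERS
        ).json()
        if result["status"] == "completed":
            return result["text"]
        if result["status"] == "error":
            raise RuntimeError(f"Transcription failed: {result['error']}")
        time.sleep(3)


def analyze_sentiment(transcript_text: str) -> str:
    """Ask LLM Gateway to classify the call's sentiment.

    The endpoint URL and model name below are hypothetical; verify them
    against the AssemblyAI LLM Gateway docs.
    """
    prompt = (
        "Here is a transcript of a customer call:\n\n"
        f"{transcript_text}\n\n"
        "Classify the customer's overall sentiment as positive, negative, "
        "or neutral, and briefly explain your reasoning."
    )
    response = requests.post(
        "https://llm-gateway.assemblyai.com/v1/chat/completions",  # hypothetical
        headers=HEADERS,
        json={
            "model": "claude-3-5-sonnet",  # example model name; verify availability
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


# Example usage (requires a real API key and a local audio file):
# text = transcribe("./customer_call.mp3")
# print(analyze_sentiment(text))
```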
Get Started
Before we begin, make sure you have an AssemblyAI account and an API key. You can sign up for an AssemblyAI account and get your API key from your dashboard.
See our pricing page for LLM Gateway pricing rates.
Step-by-Step Instructions
In this guide, we will ask five questions to learn about the sentiment of the customer and agent. You can adjust the questions to suit your project’s needs.
Install dependencies
Install the required packages.
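If you follow along with plain REST calls as in this guide's sketches, the only third-party Python package you need is `requests`:

```shell
pip install requests
```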
Import the required libraries and set your AssemblyAI API key.
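A minimal setup might look like the following; replace the placeholder with the API key from your dashboard:

```python
import time  # used later when polling for the transcript

import requests

base_url = "https://api.assemblyai.com/v2"
headers = {"authorization": "<YOUR_API_KEY>"}  # your AssemblyAI API key
```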
Next, you’ll upload your audio file to AssemblyAI’s servers. Once the upload is complete, the API will return a temporary URL that can be used to start the transcription.
After submitting the transcription request, your script will poll the API until the transcription is finished.
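The upload, submit, and poll steps can be sketched as one helper. The endpoints here follow AssemblyAI's v2 REST API (`/upload`, `/transcript`, and `/transcript/{id}` for polling); the filename is just an example.

```python
import time

import requests

base_url = "https://api.assemblyai.com/v2"


def transcribe(filename: str, headers: dict) -> str:
    """Upload a local file, request a transcript, and poll until it's done."""
    # 1) Upload the raw audio; the API returns a temporary URL for it
    with open(filename, "rb") as f:
        upload = requests.post(f"{base_url}/upload", headers=headers, data=f)
    upload_url = upload.json()["upload_url"]

    # 2) Submit the transcription request using that URL
    job = requests.post(
        f"{base_url}/transcript", headers=headers, json={"audio_url": upload_url}
    )
    transcript_id = job.json()["id"]

    # 3) Poll until the transcript completes (or errors out)
    while True:
        result = requests.get(
            f"{base_url}/transcript/{transcript_id}", headers=headers
        ).json()
        if result["status"] == "completed":
            return result["text"]
        if result["status"] == "error":
            raise RuntimeError(f"Transcription failed: {result['error']}")
        time.sleep(3)


# Example usage (requires a real API key and a local audio file):
# transcript_text = transcribe("./customer_call.mp3", {"authorization": "<YOUR_API_KEY>"})
```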
Once you have the transcript, you’ll define short context strings for both the agent and the customer. These will help the model better understand the roles and perspectives in the conversation.
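For example, the role descriptions might look like this (the wording is illustrative — adjust it to match your own call scenario):

```python
# Short role descriptions that give the model perspective on each speaker
context_agent = (
    "The agent is a customer support representative whose goal is to "
    "resolve the caller's issue politely and efficiently."
)
context_customer = (
    "The customer is calling a support line about a problem with a "
    "product or service they purchased."
)
```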
You can now specify the exact questions you want the LLM Gateway to answer. Each question can include optional context and an answer format that tells the model how to structure its response.
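One way to structure the five questions is as a list of dictionaries, where `context` and `answer_format` are optional per question. The questions below are examples; swap in your own as needed.

```python
# Role descriptions from the previous step (repeated here for completeness)
context_agent = "The agent is a customer support representative."
context_customer = "The customer is calling about a problem with their purchase."

questions = [
    {
        "question": "What is the overall sentiment of the customer?",
        "context": context_customer,
        "answer_format": "positive, negative, or neutral",
    },
    {
        "question": "What is the overall sentiment of the agent?",
        "context": context_agent,
        "answer_format": "positive, negative, or neutral",
    },
    {
        "question": "Why did you classify the customer's sentiment that way?",
        "context": context_customer,
        "answer_format": "one or two sentences",
    },
    {
        "question": "Did the customer's sentiment change over the course of the call?",
        "answer_format": "yes or no, with a brief explanation",
    },
    {
        "question": "What is the main issue the customer called about?",
        "answer_format": "one sentence",
    },
]
```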
Now that the questions are defined, combine them into a single formatted prompt. This prompt includes both the call transcript and the questions you want the model to address. The model will use these details to generate accurate and concise responses.
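A simple prompt builder might number each question and append its optional context and answer format beneath it; the exact wording of the instructions is up to you.

```python
def build_prompt(transcript_text: str, questions: list) -> str:
    """Combine the transcript and questions into a single prompt string."""
    question_lines = []
    for i, q in enumerate(questions, start=1):
        line = f"{i}. {q['question']}"
        if q.get("context"):
            line += f"\n   Context: {q['context']}"
        if q.get("answer_format"):
            line += f"\n   Answer format: {q['answer_format']}"
        question_lines.append(line)

    return (
        "You are analyzing a transcript of a customer support call.\n\n"
        f"Transcript:\n{transcript_text}\n\n"
        "Answer each of the following questions accurately and concisely:\n"
        + "\n".join(question_lines)
    )


# Example usage:
# prompt = build_prompt(transcript_text, questions)
```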
With the prompt prepared, query LLM Gateway, then extract and print the answers it returns. This step displays the model’s assessment of each question, including the detected sentiments and the reasoning behind them.
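A sketch of that final call is below. The LLM Gateway endpoint URL and model name are assumptions (an OpenAI-style chat completions route); check the AssemblyAI documentation for the exact values before running.

```python
import requests


def ask_llm_gateway(prompt: str, api_key: str) -> str:
    """Send the prompt to LLM Gateway and return the model's reply."""
    response = requests.post(
        "https://llm-gateway.assemblyai.com/v1/chat/completions",  # hypothetical URL
        headers={"authorization": api_key},
        json={
            "model": "claude-3-5-sonnet",  # example model name; verify availability
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    response.raise_for_status()
    # Assumes an OpenAI-style response shape; adjust if the gateway differs
    return response.json()["choices"][0]["message"]["content"]


# Example usage (requires a real API key and the prompt built earlier):
# answers = ask_llm_gateway(prompt, "<YOUR_API_KEY>")
# print(answers)
```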