Automating AWS Support Ticket Classification and Resolution Recommendation: How does the code work?

Fine-tuning a language model such as GPT can turn a general-purpose AI into a specialized tool for a specific task. But before plunging in, we must ask a crucial question - does our use case actually warrant fine-tuning, or can it be addressed effectively with better prompting? It's a subtle yet important distinction.

In this blog, we unravel an intriguing use case where we have constructed a customer support tool for AWS environments. The tool uses GPT to classify support tickets and suggest resolutions, simulating a virtual AWS support team.

Building a Synthetic Data Set

I started by crafting a synthetic dataset of nearly 500 AWS issues, each pairing a potential support ticket with its corresponding resolution. This dataset provided a robust starting point for the demo. OpenAI suggests starting with a dataset of roughly 100 to 500 examples, but the emphasis is less on volume and more on quality: the prompts and their corresponding completions matter far more than sheer size.
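For illustration, here is a minimal sketch of how such a dataset could be assembled with Pandas. The two tickets, their resolutions, and the "Category:" prefix are hypothetical placeholders; the prompt/completion column names simply mirror the format OpenAI's data-preparation tool expects.

import pandas as pd

# Hypothetical examples; the real dataset holds ~500 prompt/completion pairs
rows = [
    {"prompt": "EC2 instance stuck in 'pending' state after launch",
     "completion": "Category: EC2. Verify service health in the region, check account limits, and try launching in another Availability Zone."},
    {"prompt": "S3 bucket returns 403 AccessDenied for an object that should be public",
     "completion": "Category: S3. Review the bucket policy, object ACL, and Block Public Access settings."},
]

pd.DataFrame(rows).to_csv("/content/customer_support_training.csv", index=False)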

Fine-tuning GPT: Benefits and Application

Fine-tuning a language model like GPT yields multiple benefits. It facilitates superior text completion, enables better classification, and even improves sentiment analysis capabilities. In this demo, I aimed to enhance classification, with resolution recommendation being the secondary goal.

Implementing Fine-tuning: A Closer Look at the Code

OpenAI's fine-tuning process is straightforward and user-friendly, which has made it something of a gold standard for other platforms. Following in OpenAI's footsteps, even Meta's open-source LLaMA2 release ships with detailed documentation for fine-tuning the model. Its compatibility with platforms like Google Colab is particularly noteworthy, allowing users to run the whole workflow from a default Colab machine.

Let's delve into the code. 

# Install dependencies (shell / Colab)
pip install openai
pip install pandas

import openai
import pandas as pd

openai.api_key = open_ai_key  # your OpenAI API key string

# Load the synthetic ticket dataset and write out a cleaned copy
file_path = '/content/customer_support_training.csv'
df = pd.read_csv(file_path)
df.to_csv('/content/customer_support_training_clean.csv', index=False)

# CLI step: analyze the CSV and convert it into the JSONL format used for fine-tuning
openai tools fine_tunes.prepare_data -f /content/customer_support_training_clean.csv

# Upload the JSONL produced by prepare_data (it names the output <input>_prepared.jsonl),
# then start the fine-tune with the legacy FineTune API
training_file = openai.File.create(
    file=open('/content/customer_support_training_clean_prepared.jsonl', 'rb'),
    purpose='fine-tune')
openai.FineTune.create(training_file=training_file['id'], model='curie', n_epochs=4)

We start by loading the CSV file into a Pandas DataFrame, cleaning the data, and writing out a clean copy. The cleaned dataset is then fed to OpenAI's data-preparation command.

That command scans the file and recommends improvements, such as appending a fixed separator to the end of each prompt or removing duplicate entries, and writes the result to a JSONL file. Once we accept these suggestions, we upload the prepared file and invoke the training command.
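To make that concrete, here is an illustrative record in the prepared JSONL format: a hypothetical ticket with a separator appended to the prompt and a whitespace-prefixed completion ending in a stop-friendly suffix. The exact separator and suffix that prepare_data proposes may differ.

import json

# Hypothetical record showing the general shape of a prepared training example
record = {
    "prompt": "S3 bucket returns 403 AccessDenied for an object that should be public\n\n###\n\n",
    "completion": " Category: S3. Review the bucket policy, object ACL, and Block Public Access settings. END"
}
print(json.dumps(record))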

At this stage, users can opt for several base models. Initially, I chose the 'ada' model but wasn't quite satisfied with the results, so I shifted to the Curie model, which, while roughly three times more expensive, was much better suited to my smaller dataset. Training took about an hour to complete, at the end of which I received a unique model ID representing my fine-tuned instance hosted on OpenAI.
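For reference, here is a minimal sketch of how the training job can be monitored with the legacy FineTune API; the job ID below is a placeholder for the one returned by FineTune.create.

import openai

# Placeholder job ID returned by openai.FineTune.create
job = openai.FineTune.retrieve(id="ft-XXXXXXXX")
print(job["status"])                 # e.g. "pending", "running", "succeeded"
print(job.get("fine_tuned_model"))   # the model ID, populated once training finishes

# The legacy CLI can also stream progress:
# openai api fine_tunes.follow -i ft-XXXXXXXX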

OpenAI has abundant resources, including detailed documentation and many tutorials from experts that thoroughly explain the fine-tuning process.

https://platform.openai.com/docs/guides/fine-tuning

The Output: Classifying and Resolving Tickets

Post training, the fine-tuned model is ready for action. Using the fine-tuned model ID (alongside my existing API key), I implemented a completion prompt that lets the model receive AWS issues as input. Leveraging its training on the issues and their corresponding fixes, the model was able to suggest resolutions for the problems presented.
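Here is a minimal sketch of that inference step, assuming the legacy Completion endpoint, a placeholder fine-tuned model ID, and the separator/stop suffix assumed during data preparation above.

import openai

openai.api_key = open_ai_key  # same API key as before

ticket = "Lambda function times out when reading from DynamoDB\n\n###\n\n"  # hypothetical ticket

response = openai.Completion.create(
    model="curie:ft-personal-2023-xx-xx",  # placeholder for the fine-tuned model ID
    prompt=ticket,
    max_tokens=150,
    temperature=0,
    stop=[" END"])                         # matches the suffix assumed during preparation
print(response["choices"][0]["text"])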

In summary, this demonstration presents a simple yet effective method of employing GPT fine-tuning to handle support tickets. It offers a glimpse into the remarkable potential of fine-tuning in automating and streamlining complex tasks.

Try demo here: https://finetunegpt.streamlit.app/

Stack - Python, GPT4, LLaMA2, Langchain, AWS, Streamlit, Framer
Tools Used - Github, Canva, Replit, Jupyter, Google Colab
