Revolutionize Customer Service with AI: Automate Delivery Issue Handling Using OpenAI GPT-4 and Python
In the world of e-commerce, ensuring that products are delivered in perfect condition is crucial. However, when things go wrong, customers expect swift and efficient support. This is where AI can play a significant role by automating the handling of delivery exceptions, such as processing refunds, initiating replacements, or escalating issues to a human agent based on images of received packages.
In this blog, we’ll walk you through a Python-based implementation that leverages OpenAI’s GPT-4, basic image-handling libraries, and Instructor, a library that pairs OpenAI models with Pydantic-defined tools, to automate decision-making in customer service. We’ll cover each part of the code in detail so that you can understand both the concept and how it’s implemented.
1. Setting Up the Environment
To begin, we need to install the necessary Python packages:
!pip install pymupdf --quiet
!pip install openai --quiet
!pip install matplotlib --quiet
!pip install instructor --quiet
Explanation:
- pymupdf: Although not directly used in this implementation, it is commonly used for working with PDF documents.
- openai: The client library that allows us to interact with OpenAI’s models, such as GPT-4.
- matplotlib: A plotting library used to display images within the notebook.
- instructor: A library that simplifies the interaction between OpenAI models and custom function calls, allowing us to define and manage tools that the AI can use.
2. Preparing the Images
The first step in analyzing package images is to encode them into a format that the AI can process.
import base64
import os
from io import BytesIO
from PIL import Image

# Function to encode an image as base64
def encode_image(image_path: str):
    if not os.path.exists(image_path):
        raise FileNotFoundError(f"Image file not found: {image_path}")
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# Directory containing images
image_dir = "images"

# Encode all images within the directory
image_files = os.listdir(image_dir)
image_data = {}
for image_file in image_files:
    try:
        image_path = os.path.join(image_dir, image_file)
        image_data[image_file.split('.')[0]] = encode_image(image_path)
        print(f"Encoded image: {image_file}")
    except (FileNotFoundError, OSError) as e:
        # Report unreadable files instead of silently swallowing all errors
        print(f"Skipping {image_file}: {e}")
Explanation:
- base64.b64encode(): Converts the binary image data into a base64-encoded string, which can be easily embedded in JSON or HTML.
- os.listdir(image_dir): Lists all the files in the specified directory.
- PIL.Image: Part of the Pillow library, used for image manipulation and processing.

This code reads images from the images directory, encodes them into base64 format, and stores them in a dictionary keyed by file name (without the extension) for later use with the AI model. A quick round-trip check is sketched below.
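As a sanity check (a minimal sketch, assuming at least one image was encoded by the loop above), we can decode an entry back into a PIL image and confirm it opens:

import base64
from io import BytesIO
from PIL import Image

# Round-trip check: decode one base64 string back into a PIL Image
first_name, first_b64 = next(iter(image_data.items()))
roundtrip = Image.open(BytesIO(base64.b64decode(first_b64)))
print(f"{first_name}: format={roundtrip.format}, size={roundtrip.size}")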
3. Displaying the Images
Before analyzing the images with AI, let’s display them to ensure they’ve been loaded and encoded correctly.
import matplotlib.pyplot as plt

def display_images(image_data: dict):
    # One subplot per encoded image (rather than a hardcoded count of three)
    n = len(image_data)
    fig, axs = plt.subplots(1, n, figsize=(6 * n, 6), squeeze=False)
    for ax, (key, value) in zip(axs[0], image_data.items()):
        # Decode the base64 string back into an image for display
        img = Image.open(BytesIO(base64.b64decode(value)))
        ax.imshow(img)
        ax.axis("off")
        ax.set_title(key)
    plt.tight_layout()
    plt.show()

display_images(image_data)
Explanation:
- plt.subplots(): Creates a figure and a grid of subplots for displaying multiple images.
- Image.open(BytesIO(base64.b64decode(value))): Decodes the base64 string back into an image that can be displayed.
- ax.imshow(img): Displays the image on its subplot.
- ax.set_title(key): Sets the title of the subplot to the name of the image file.
This function displays the images in a grid layout, allowing you to visually inspect the images before passing them to the AI model for analysis.
4. Defining the Order and Action Models
To simulate real-world scenarios, we define models for orders and the actions that can be taken based on the AI’s analysis.
from pydantic import BaseModel, Field
from typing import Literal, Optional

class Order(BaseModel):
    """Represents an order with details such as order ID, product name, price, status, and delivery date."""
    order_id: str = Field(..., description="The unique identifier of the order")
    product_name: str = Field(..., description="The name of the product")
    price: float = Field(..., description="The price of the product")
    status: str = Field(..., description="The status of the order")
    delivery_date: str = Field(..., description="The delivery date of the order")
Explanation:
- pydantic.BaseModel: The base class from Pydantic, a library that provides data validation and settings management using Python type hints.
- Field: Used to add metadata, such as descriptions, to the fields in the data model.

The Order class defines the structure of an order, including key details like order_id, product_name, price, status, and delivery_date. This class allows us to simulate different orders and their statuses within the system; a short instantiation example follows.
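For illustration (a minimal sketch with made-up values), Pydantic validates the data whenever an Order is constructed:

from pydantic import ValidationError

# A valid order: every field is present and has the right type
order = Order(
    order_id="12345",
    product_name="Product X",
    price=100.0,
    status="Delivered",
    delivery_date="2024-04-10",
)
print(order.product_name)  # Product X

# An invalid order: a non-numeric price raises a ValidationError
try:
    Order(order_id="12345", product_name="Product X", price="free",
          status="Delivered", delivery_date="2024-04-10")
except ValidationError as e:
    print(e)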
5. Simulating Order Actions
Next, we define functions to simulate different actions that the AI might take based on the image analysis.
def get_order_details(order_id):
    # Return a mock order for testing purposes
    return Order(
        order_id=order_id,
        product_name="Product X",
        price=100.0,
        status="Delivered",
        delivery_date="2024-04-10",
    )

def escalate_to_agent(order: Order, message: str):
    return f"Order {order.order_id} has been escalated to an agent with message: `{message}`"

def refund_order(order: Order):
    return f"Order {order.order_id} has been refunded successfully."

def replace_order(order: Order):
    return f"Order {order.order_id} has been replaced with a new order."
Explanation:
- get_order_details(): Returns an Order object with mock data for testing purposes.
- escalate_to_agent(): Simulates escalating the issue to a human agent, including a custom message.
- refund_order(): Simulates processing a refund for the given order.
- replace_order(): Simulates replacing the order with a new one.
These functions represent the possible actions that could be taken based on the condition of the package in the image; a quick standalone test of them follows.
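Before involving the AI at all, we can exercise these simulated actions directly (a minimal sketch using the mock order ID "12345" from earlier):

# Run the mock actions by hand to confirm their output format
order = get_order_details("12345")
print(refund_order(order))
print(replace_order(order))
print(escalate_to_agent(order, "Customer reports the package arrived damaged"))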
6. Defining AI-Powered Tools
The core of this system involves tools that the AI can use to handle different scenarios. These tools are defined as classes that inherit from a base class.
class FunctionCallBase(BaseModel):
    rationale: Optional[str] = Field(..., description="The reason for the action.")
    image_description: Optional[str] = Field(
        ..., description="The detailed description of the package image."
    )
    action: Literal["escalate_to_agent", "replace_order", "refund_order"]
    message: Optional[str] = Field(
        ...,
        description="The message to be escalated to the agent if action is escalate_to_agent",
    )

    def __call__(self, order_id):
        # Look up the order and dispatch to the matching simulated action
        order: Order = get_order_details(order_id=order_id)
        if self.action == "escalate_to_agent":
            return escalate_to_agent(order, self.message)
        if self.action == "replace_order":
            return replace_order(order)
        if self.action == "refund_order":
            return refund_order(order)

class EscalateToAgent(FunctionCallBase):
    """Escalate to an agent for further assistance."""
    pass

class ReplaceOrder(FunctionCallBase):
    """Tool call to replace an order."""
    pass

class RefundOrder(FunctionCallBase):
    """Tool call to refund an order."""
    pass
Explanation:
- FunctionCallBase: The base class for all tools. It includes attributes like rationale, image_description, action, and message. The __call__ method executes the appropriate action based on the action attribute.
- EscalateToAgent, ReplaceOrder, RefundOrder: These classes inherit from FunctionCallBase and represent the specific actions the AI might take: escalating to an agent, replacing an order, or processing a refund.
These tools are essentially commands that the AI can execute based on its analysis of the images; we can also run one by hand, as sketched below.
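To see the dispatch mechanism in isolation (a minimal sketch with hypothetical field values), we can construct a tool instance manually, exactly as the model would, and call it:

# Build a RefundOrder tool call by hand and execute it against the mock order
refund = RefundOrder(
    rationale="The food is completely spilled and cannot be salvaged.",
    image_description="A delivery bag with its contents spilled across the box.",
    action="refund_order",
    message=None,
)
print(refund("12345"))  # Order 12345 has been refunded successfully.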
7. Integrating the Tools with OpenAI
Now, we integrate these tools with the OpenAI model using the instructor library. This step involves configuring the AI model to recognize and use the defined tools.
from typing import Iterable
import instructor
from openai import OpenAI

ORDER_ID = "12345"
INSTRUCTION_PROMPT = "You are a customer service assistant for a delivery service..."

def delivery_exception_support_handler(test_image: str):
    payload = {
        "model": "gpt-4-turbo-2024-04-09",
        "response_model": Iterable[RefundOrder | ReplaceOrder | EscalateToAgent],
        "tool_choice": "auto",
        "temperature": 0.0,
        "seed": 123,
    }
    payload["messages"] = [
        {"role": "user", "content": INSTRUCTION_PROMPT},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_data[test_image]}"},
                },
            ],
        },
    ]
    function_calls = instructor.from_openai(
        OpenAI(), mode=instructor.Mode.PARALLEL_TOOLS
    ).chat.completions.create(**payload)
    for tool in function_calls:
        print(f"- Tool call: {tool.action} for provided img: {test_image}")
        print(f"- Parameters: {tool}")
        print(f">> Action result: {tool(ORDER_ID)}")
        return tool
Explanation:
- ORDER_ID: A placeholder order ID for testing purposes.
- INSTRUCTION_PROMPT: The instructions the AI model follows when analyzing the images (abbreviated above).
- payload dictionary: The configuration for the AI model call, including:
  - model: The specific OpenAI model to use.
  - response_model: The union of tool models (RefundOrder, ReplaceOrder, EscalateToAgent) that the AI may return.
  - tool_choice: Set to "auto", allowing the AI to choose the appropriate tool based on the image analysis.
  - temperature: Controls the randomness of the AI's responses; a value of 0.0 makes results as consistent as possible, though not strictly deterministic.
  - seed: Used to make the AI's responses more reproducible.
  - messages: The instruction prompt plus the base64-encoded image data that the AI will analyze.
- instructor.from_openai: Wraps the OpenAI client so the Pydantic tool models are attached to the request and the response is parsed back into tool objects (e.g., RefundOrder, ReplaceOrder, or EscalateToAgent).
The AI analyzes the image and, based on the instructions and the available tools, decides which action to take. Under the hood, Instructor derives the tool definition sent to the API from each Pydantic model's JSON schema, which we can inspect directly, as sketched below.
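For intuition about what the model is offered (a minimal sketch, assuming Pydantic v2, where the schema method is model_json_schema(); in Pydantic v1 the equivalent is .schema()):

import json

# Print the JSON schema of one tool model; Instructor builds the tool
# definition it sends to the API from schemas like this
print(json.dumps(RefundOrder.model_json_schema(), indent=2))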
8. Simulating Customer Service Scenarios
Finally, we simulate different customer service scenarios to see how the AI responds to various package conditions.
print("\nSimulating user message 1")
assert delivery_exception_support_handler("completely_spilt_food").action == "refund_order"
print("\nSimulating user message 2")
assert delivery_exception_support_handler("perfect_package").action == "escalate_to_agent"
print("\nSimulating user message 3")
assert delivery_exception_support_handler("spilt_food").action == "replace_order"
Explanation:
- Simulating user message 1: Tests how the AI handles a completely spilled food package. The expected action is a refund (refund_order), and the assert statement verifies this.
- Simulating user message 2: Tests how the AI handles a perfectly packaged item. The expected action is to escalate the issue to a human agent (escalate_to_agent), and the assert statement verifies this. Intuitively, a package that looks intact is escalated rather than auto-resolved, since the customer still reported a problem that a human should investigate.
- Simulating user message 3: Tests how the AI handles a partially spilled food package. The expected action is to replace the order (replace_order), and the assert statement verifies this.
These simulations demonstrate how the AI system can automatically decide the correct course of action based on visual analysis of package conditions. The assert statements ensure that the AI's decisions align with the expected outcomes, validating the system's functionality.
Conclusion
In this blog, we walked through the process of building an AI-powered system that automates the handling of delivery exceptions. By defining tools (RefundOrder, ReplaceOrder, EscalateToAgent), integrating them with OpenAI's GPT-4, and using the instructor library, we created a system capable of making intelligent decisions based on image analysis. This approach not only enhances customer service efficiency but also ensures that issues are handled promptly and accurately.
This project serves as a foundation that can be extended to more complex scenarios, including integrating with real-world APIs, adding more sophisticated image analysis techniques, and expanding the range of possible actions the AI can take. As AI technology continues to evolve, the potential for automating customer service tasks in e-commerce and beyond will only grow.
Feel free to reach out for assistance with any AI project:
kshitijkutumbe@gmail.com