Integrating ChatGPT with Lambda

Rushabh Trivedi
4 min read · Jun 9, 2023


It's been a while since I started reading about ChatGPT and its usefulness on tech blogs. I was curious about how to integrate ChatGPT with an application and expose an API that can be consumed.

So I set out to get started, and with very little effort I was able to create a small application using AWS Lambda that integrates with OpenAI.

In this blog, I will walk you through the step-by-step process I followed to create my sample application.

The tools I used are:
1. AWS SAM CLI
2. NPM package for OpenAI

Before we start, you need to create an API key for ChatGPT from here.

You can refer to the above link for installing the SAM CLI. Once the CLI is installed, you can follow along with me to create the application structure that we are going to deploy to AWS. You can refer here for the SAM CLI commands.
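
A quick way to confirm the CLI is installed:

sam --version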

Create a project using sam init --name chatGPT-POC and choose between creating the project from a quick start template or a custom template. I will go with the Quick Start template.

From the template selection, I picked the Serverless API template with nodejs16.x as the runtime.
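
For reference, the interactive choices above roughly map to a single non-interactive command. This is only a sketch, and the quick-start-web template id is my assumption based on the SAM app templates, so prefer the interactive flow if in doubt:

sam init --name chatGPT-POC \
    --runtime nodejs16.x \
    --dependency-manager npm \
    --app-template quick-start-web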

You can enable X-Ray tracing and CloudWatch Application Insights if you want.

This will create a basic application structure with a few sample Lambda functions and a template.yml file.

Before we add a new Lambda handler for the ChatGPT API, add the openai dependency to package.json.

npm install openai --save

Now your package.json should look like this.

{
    "name": "chatGPT-POC",
    "description": "chatGPT-POC-description",
    "version": "0.0.1",
    "private": true,
    "dependencies": {
        "aws-sdk": "^2.799.0",
        "openai": "^3.2.1"
    },
    "devDependencies": {
        "jest": "^26.6.3"
    },
    "scripts": {
        "test": "jest"
    }
}

As a best practice, I have used AWS Secrets Manager to store the API key. Right now I am setting the secret value manually from the AWS console. You can also set it dynamically using a script.
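
For example, once the secret exists, a one-line script can push the key into it using the AWS CLI. Here ChatGPTSecret stands for the secret's actual name or ARN in your account, and OPENAI_API_KEY is assumed to hold your key:

aws secretsmanager put-secret-value \
    --secret-id ChatGPTSecret \
    --secret-string "$OPENAI_API_KEY"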

Create a Lambda function definition in template.yaml as below. The function should have permission to read the secret from AWS Secrets Manager.

  GetChatGPTPromptFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-chat-gpt-prompt.lambdaHandler
      Runtime: nodejs16.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 300
      Description: A function that returns the response from the chat prompt
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Action: secretsmanager:GetSecretValue
              Effect: Allow
              Resource: !Ref ChatGPTSecret
      Environment:
        Variables:
          SECRET_NAME: !Ref ChatGPTSecret
      Events:
        Api:
          Type: Api
          Properties:
            Path: /getPrompt
            Method: POST
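
Note that both the policy and the environment variable reference ChatGPTSecret, so the template also needs a matching secret resource. A minimal sketch of what that could look like (adjust the properties to your needs):

  ChatGPTSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Description: Holds the OpenAI API key used by the Lambda function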

Add the Lambda function in the src/handlers folder. The Lambda contains lambdaHandler, which is called from API Gateway, and helper functions such as getSecretFromSecretsManager, getOpenAIObject and getChatGPTPrompt.

The Lambda function should look something like this:

const AWS = require('aws-sdk');
const { Configuration, OpenAIApi } = require('openai');

AWS.config.update({ region: 'us-east-1' });

exports.lambdaHandler = async (event) => {
    try {
        const input = JSON.parse(event.body);
        // Fetch the OpenAI API key from Secrets Manager
        const chatGPTSecret = await exports.getSecretFromSecretsManager();
        const openai = await exports.getOpenAIObject(chatGPTSecret);
        const gptResponse = await exports.getChatGPTPrompt(openai, input);
        console.log(gptResponse);
        return {
            statusCode: 200,
            body: JSON.stringify(gptResponse)
        };
    } catch (error) {
        console.log(error);
        return {
            statusCode: 500,
            body: JSON.stringify(error)
        };
    }
};

// Reads the OpenAI API key from AWS Secrets Manager
exports.getSecretFromSecretsManager = async () => {
    const secretsmanager = new AWS.SecretsManager();
    const params = {
        SecretId: process.env.SECRET_NAME
    };
    const data = await secretsmanager.getSecretValue(params).promise();
    return data.SecretString;
};

// Builds an OpenAI client from the API key
exports.getOpenAIObject = async (apiKey) => {
    const configuration = new Configuration({
        apiKey
    });
    return new OpenAIApi(configuration);
};

// Sends the prompt to the completions endpoint and returns the generated text
exports.getChatGPTPrompt = async (openai, input) => {
    const response = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: input.data,
        temperature: 1,
        max_tokens: 256,
        top_p: 1
    });
    return response.data.choices[0].text;
};

Now you are done with the Lambda function that connects to ChatGPT. You just need to build and deploy the application to AWS.

Run the sam build command to build the application.
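
Optionally, you can test the function locally before deploying. This requires Docker, and events/event.json is a hypothetical test payload. Note that body must be a JSON string, since the handler calls JSON.parse(event.body):

# events/event.json (hypothetical):
# { "body": "{\"data\": \"Tell me a joke about serverless\"}" }
sam local invoke GetChatGPTPromptFunction --event events/event.json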

The last step in this process is to deploy the stack to AWS using the SAM CLI.

Run the sam deploy --guided command and answer the prompts.
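
The guided deploy saves your answers to samconfig.toml, so subsequent deploys only need:

sam deploy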

You can now access this API from Postman or any similar tool.
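
For example, with curl (replace <api-id> with the endpoint from the stack outputs; Prod is the default stage name for SAM's implicit API):

curl -X POST \
    https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/getPrompt \
    -H 'Content-Type: application/json' \
    -d '{"data": "Write a haiku about AWS Lambda"}'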

Our sample application is working well 😍.

Our application still has plenty of room for improvement, such as:

  • Setting the secret value from a script at runtime
  • Writing test cases
  • Adding an OpenAPI spec and request validation
  • Accepting additional parameters in the API call

You can find the entire application in my GitHub repository.

Feel free to share your feedback on this blog; it will be the fuel for my upcoming blogs 😎.

Written by Rushabh Trivedi

AWS Certified Associate Architect, Cloud Solutions Lead, Angular Developer