Integrating OpenAI's API in Next.js 14 - A step-by-step guide
Today, we're diving into the exciting world of AI integration with web applications. Specifically, we'll learn how to seamlessly integrate OpenAI's powerful API into a Next.js application. This integration enables your Next.js app to leverage the cutting-edge capabilities of OpenAI, such as natural language processing, content generation, and more.
Prerequisites
Before we start, ensure you have the following:
- Basic knowledge of JavaScript and React
- Node.js and npm installed
- A Next.js application set up (if not, create one using npx create-next-app)
- An OpenAI API key (obtain it from OpenAI's website)
Step 1: Setting Up Environment Variables
Firstly, store your OpenAI API key securely using environment variables. Create a .env.local file in the root of your Next.js project and add your API key:
| .env.local
OPENAI_API_KEY="your_api_key_here"
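Next.js loads .env.local automatically, and because the variable is not prefixed with NEXT_PUBLIC_, it stays on the server and is never shipped to the browser. If you want the app to fail fast when the key is missing, you could add a tiny helper like the hypothetical one below (the file name and function are illustrative and not required by the rest of the guide):
| lib/openai-key.ts
// Hypothetical helper: read the key once and complain loudly if it is missing
export function getOpenAIKey(): string {
  const key = process.env.OPENAI_API_KEY
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set. Add it to .env.local")
  }
  return key
}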
Step 2: Installing the Required Packages
The route handler below uses the Vercel AI SDK (ai) together with the edge-friendly OpenAI client (openai-edge), so install both:
npm install ai openai-edge
or
yarn add ai openai-edge
Step 3: Creating an API Route in Next.js
Next.js 14 uses the App Router, where API endpoints live in Route Handlers. Create a new file at app/api/chat/route.ts and add the following code:
| app/api/chat/route.ts
// app/api/chat/route.ts
import { OpenAIStream, StreamingTextResponse } from "ai"
import { Configuration, OpenAIApi } from "openai-edge"

import { nanoid } from "@/lib/utils" // small local helper that returns a short unique id

// Run on the Edge runtime for low-latency streaming
export const runtime = "edge"

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})

const openai = new OpenAIApi(configuration)

export async function POST(req: Request) {
  const json = await req.json()
  const { messages, previewToken } = json

  // Optionally let the caller supply their own API key at request time
  if (previewToken) {
    configuration.apiKey = previewToken
  }

  const res = await openai.createChatCompletion({
    model: "gpt-3.5-turbo", // or "gpt-4"
    messages,
    temperature: 0.7,
    stream: true
  })

  const stream = OpenAIStream(res, {
    // Called once the model has finished streaming its full response
    async onCompletion(completion) {
      const title = json.messages[0].content.substring(0, 100)
      const id = json.id ?? nanoid()
      const createdAt = Date.now()
      const path = `/chat/${id}`
      const payload = {
        id,
        title,
        createdAt,
        path,
        messages: [
          ...messages,
          {
            content: completion,
            role: "assistant"
          }
        ]
      }
      console.log(payload)
      // Here you can store the chat in a database
      // ...
    }
  })

  return new StreamingTextResponse(stream)
}
This route handler initializes the edge OpenAI client and streams a chat completion back to the caller based on the messages array in the request body.
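The onCompletion callback is also the natural place to persist the conversation instead of just logging it. As a minimal sketch, assuming you happen to use Vercel KV (any database works, and @vercel/kv must be installed and configured separately), the payload could be saved like this:
| app/api/chat/route.ts (optional persistence sketch)
import { kv } from "@vercel/kv"

// ...inside onCompletion, replacing console.log(payload):
await kv.hmset(`chat:${id}`, payload)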
Step 4: Creating the Frontend
Now, let's create a simple UI to interact with our API. Because the route streams plain text back, the client reads the response body incrementally instead of calling res.json(). In app/page.tsx, add the following (note the 'use client' directive, since the component uses React state):
| app/page.tsx
'use client'

import { FormEvent, useState } from 'react'

export default function Home() {
  const [prompt, setPrompt] = useState('')
  const [response, setResponse] = useState('')

  const handleSubmit = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault()
    setResponse('')

    // The route expects a { messages } array, not a bare prompt
    const res = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        messages: [{ role: 'user', content: prompt }]
      })
    })

    // The route streams plain text, so read the body chunk by chunk
    const reader = res.body!.getReader()
    const decoder = new TextDecoder()
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      setResponse(prev => prev + decoder.decode(value, { stream: true }))
    }
  }

  return (
    <div>
      <div>{response}</div>
      <form onSubmit={handleSubmit}>
        <textarea
          value={prompt}
          onChange={e => setPrompt(e.target.value)}
          placeholder="Enter your prompt"
        />
        <button type="submit">Submit</button>
      </form>
    </div>
  )
}
This code creates a simple form that sends the prompt to our API as a chat message and renders the streamed response as it arrives.
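If you would rather not read the stream by hand, the same ai package also provides a useChat hook that manages the input, the message list, and the streaming for you. A minimal sketch, assuming the /api/chat route from Step 3 (useChat posts { messages } to /api/chat by default):
| app/page.tsx (alternative using useChat)
'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  // useChat handles the fetch, streaming, and message state internally
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Enter your prompt"
        />
        <button type="submit">Send</button>
      </form>
    </div>
  )
}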
Conclusion
And there you have it! You've successfully integrated OpenAI's API into a Next.js 14 application. This setup allows you to leverage the powerful AI capabilities of OpenAI in your web projects. Experiment with different models and prompts to see what amazing things you can create!
Remember, always use AI responsibly and adhere to OpenAI's usage policies.
Further Reading
Build AI-powered applications faster with Satria AI templates
If you want to build AI-powered applications faster, check out our pre-built Satria AI templates to get started.