How to build an Interactive AI Tutor with Llama 3.1
Learn how we built LlamaTutor from scratch — an open-source AI tutor with 90k users.
LlamaTutor is an app that creates an interactive tutoring session for a given topic using Together AI's open-source LLMs.
![](https://files.readme.io/4c422562d64dd68147df8d303228f5a596b72da5275ff0f7c87c64fb07cd81a6-image.png)
It pulls multiple sources from the web with either Bing's API or Serper's API, then uses the text from the sources to kick off an interactive tutoring session with the user.
![](https://files.readme.io/323bd02b1165751715dda5fbd4ca38a172380cd2d3fda4645fa62dae144dd21f-image.png)
In this post, you'll learn how to build the core parts of LlamaTutor. The app is open source and built with Next.js and Tailwind, but Together's API works great with any language or framework.
Building the input prompt and education dropdown
LlamaTutor's core interaction is a text field where the user can enter a topic, and a dropdown that lets the user choose which education level the material should be taught at:
![](https://files.readme.io/7a71a5250fc0e2c5d76b412cb885c0485f7fbfa10b0153205242e7ee63446c6e-image.png)
In the main page component, we'll render an `<input>` and a `<select>`, and control both using some new React state:
```tsx
// app/page.tsx
'use client';

import { useState } from 'react';

function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  return (
    <form>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Teach me about..."
      />

      <select value={grade} onChange={(e) => setGrade(e.target.value)}>
        <option>Elementary School</option>
        <option>Middle School</option>
        <option>High School</option>
        <option>College</option>
        <option>Undergrad</option>
        <option>Graduate</option>
      </select>
    </form>
  );
}
```
When the user submits our form, our submit handler ultimately needs to do three things:
- Use the Bing API to fetch six different websites related to the topic
- Parse the text from each website
- Pass all the parsed text, as well as the education level, to Together AI to kick off the tutoring session
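The three steps above can be sketched as a single async pipeline. The helper names here (`fetchSources`, `parseSources`, `startSession`) are hypothetical stand-ins for the API routes we'll build over the rest of this post:

```typescript
type Source = { name: string; url: string };

// A rough sketch of the submit pipeline. The three function arguments
// are placeholders for the endpoints implemented later in this post.
async function runPipeline(
  topic: string,
  grade: string,
  fetchSources: (topic: string) => Promise<Source[]>,
  parseSources: (sources: Source[]) => Promise<string[]>,
  startSession: (texts: string[], grade: string) => Promise<string>,
): Promise<string> {
  const sources = await fetchSources(topic); // 1. six websites from Bing
  const texts = await parseSources(sources); // 2. the text of each page
  return startSession(texts, grade); // 3. kick off the tutoring session
}
```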
Let's start by fetching the websites with Bing. We'll wire up a submit handler to our form that makes a POST request to a new `/api/getSources` endpoint:
```tsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  async function handleSubmit(e) {
    e.preventDefault();

    let response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    let sources = await response.json();

    // This fetch() will 404 for now
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Teach me about..."
      />

      <select value={grade} onChange={(e) => setGrade(e.target.value)}>
        <option>Elementary School</option>
        <option>Middle School</option>
        <option>High School</option>
        <option>College</option>
        <option>Undergrad</option>
        <option>Graduate</option>
      </select>
    </form>
  );
}
```
If we submit the form, we see our React app makes a request to `/api/getSources`:
![](https://files.readme.io/f1dd549bc58cf063c4719a61bdd5da4e3a2ac2c2e28fbf06fe72b5f7c490bb1d-image.png)
Let's go implement this API route.
Getting web sources with Bing
To create our API route, we'll make a new `app/api/getSources/route.js` file:
```js
// app/api/getSources/route.js
export async function POST(req) {
  let json = await req.json();

  // `json.topic` has the user's text
}
```
The Bing API lets you make a fetch request to get back search results, so weβll use it to build up our list of sources:
```js
// app/api/getSources/route.js
import { NextResponse } from 'next/server';

export async function POST(req) {
  const json = await req.json();

  const params = new URLSearchParams({
    q: json.topic,
    mkt: 'en-US',
    count: '6',
    safeSearch: 'Strict',
  });

  const response = await fetch(
    `https://api.bing.microsoft.com/v7.0/search?${params}`,
    {
      method: 'GET',
      headers: {
        'Ocp-Apim-Subscription-Key': process.env['BING_API_KEY'],
      },
    }
  );

  const { webPages } = await response.json();

  return NextResponse.json(
    webPages.value.map((result) => ({
      name: result.name,
      url: result.url,
    }))
  );
}
```
In order to make a request to Bing's API, you'll need to get an API key from Microsoft. Once you have it, set it in `.env.local`:
```bash
# .env.local
BING_API_KEY=xxxxxxxxxxxx
```
and our API handler should work.
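One optional hardening step (not in the tutorial code above): if the key is missing or the request fails, `webPages` won't exist on the response and the handler will crash. A small guard, sketched here with a hypothetical helper name, turns that into a readable error instead:

```typescript
// Hypothetical guard: throw a descriptive error on a non-2xx response
// instead of letting `webPages.value` blow up downstream.
async function jsonOrThrow(response: Response): Promise<any> {
  if (!response.ok) {
    throw new Error(`Bing API request failed with status ${response.status}`);
  }
  return response.json();
}
```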
Let's try it out from our React app! We'll log the sources in our submit handler:
```tsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();

    // log the response from our new endpoint
    console.log(sources);
  }

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={topic}
        onChange={(e) => setTopic(e.target.value)}
        placeholder="Teach me about..."
      />

      <select value={grade} onChange={(e) => setGrade(e.target.value)}>
        <option>Elementary School</option>
        <option>Middle School</option>
        <option>High School</option>
        <option>College</option>
        <option>Undergrad</option>
        <option>Graduate</option>
      </select>
    </form>
  );
}
```
and if we try submitting a topic, we'll see an array of pages logged in the console!
![](https://files.readme.io/0e463d66353512e1d45f7e0d6fb1b759cf3696a20d2dc58782f546a093fc21ee-image.png)
Let's create some new React state to store the responses and display them in our UI:
```tsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');
  const [sources, setSources] = useState([]);

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();

    // Update the sources with our API response
    setSources(sources);
  }

  return (
    <>
      <form onSubmit={handleSubmit}>{/* ... */}</form>

      {/* Display the sources */}
      {sources.length > 0 && (
        <div>
          <p>Sources</p>
          <ul>
            {sources.map((source) => (
              <li key={source.url}>
                <a href={source.url}>{source.name}</a>
              </li>
            ))}
          </ul>
        </div>
      )}
    </>
  );
}
```
If we try it out, our app is working great so far!
![](https://files.readme.io/67e5f39f6c77f61a96c9ef112e4d5b15ea9c9dbe750e003f7eedcb07280962d8-image.png)
We're taking the user's topic, fetching six relevant web sources from Bing, and displaying them in our UI.
Next, let's get the text content from each website so that our AI model has some context for its first response.
Fetching the content from each source
Let's make a request to a second endpoint called `/api/getParsedSources`, passing along the sources in the request body:
```tsx
// app/page.tsx
function Page() {
  // ...

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();
    setSources(sources);

    // Send the sources to a new endpoint
    const parsedSourcesRes = await fetch('/api/getParsedSources', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sources }),
    });

    // The second fetch() will 404 for now
  }

  // ...
}
```
We'll create a file at `app/api/getParsedSources/route.js` for our new route:
```js
// app/api/getParsedSources/route.js
export async function POST(req) {
  let json = await req.json();

  // `json.sources` has the websites from Bing
}
```
Now we're ready to actually get the text from each one of our sources.
Let's write a new `getTextFromURL` function and outline our general approach:
```js
async function getTextFromURL(url) {
  // 1. Use fetch() to get the HTML content
  // 2. Use the `jsdom` library to parse the HTML into a JavaScript object
  // 3. Use `@mozilla/readability` to clean the document and
  //    return only the main text of the page
}
```
Let's implement this new function. We'll start by installing the `jsdom` and `@mozilla/readability` libraries:
```bash
npm i jsdom @mozilla/readability
```
Next, let's implement the steps:
```js
import jsdom, { JSDOM } from 'jsdom';
import { Readability } from '@mozilla/readability';

async function getTextFromURL(url) {
  // 1. Use fetch() to get the HTML content
  const response = await fetch(url);
  const html = await response.text();

  // 2. Use the `jsdom` library to parse the HTML into a JavaScript object
  const virtualConsole = new jsdom.VirtualConsole();
  const dom = new JSDOM(html, { virtualConsole });

  // 3. Use `@mozilla/readability` to clean the document and
  //    return only the main text of the page
  const { textContent } = new Readability(dom.window.document).parse();

  return textContent;
}
```
Looks good — let's try it out!
We'll run the first source through `getTextFromURL`:
```js
// app/api/getParsedSources/route.js
export async function POST(req) {
  let json = await req.json();

  let textContent = await getTextFromURL(json.sources[0].url);
  console.log(textContent);
}
```
If we submit our form, we'll see the text from the first page show up in our server terminal!
![](https://files.readme.io/5699cd489a4225eb5e8c51b52025656d9f28fa949c3be41217a16ab4595d7a4e-image.png)
Let's update the code to get the text from all the sources.

Since each source is independent, we can use `Promise.all` to kick off our functions in parallel:
```js
// app/api/getParsedSources/route.js
import { NextResponse } from 'next/server';

export async function POST(req) {
  let json = await req.json();

  let results = await Promise.all(
    json.sources.map((source) => getTextFromURL(source.url))
  );
  console.log(results);

  // Return the parsed text so our frontend can use it
  return NextResponse.json(results);
}
```
If we try again, we'll now see an array of each web page's text logged to the console:
![](https://files.readme.io/4a43e433e225439d35c89fa3ce7103071634851e69953aed71d0b1fe5edd83ae-image.png)
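One thing to keep in mind: `Promise.all` rejects as soon as any single fetch fails, so one unreachable website would take down the whole request. If you'd rather degrade gracefully, `Promise.allSettled` keeps whatever succeeded — here's a sketch of that variant (not part of the tutorial code, and `getTextFromURL` is passed in for illustration):

```typescript
// Hypothetical resilient variant: parse every URL in parallel, but keep
// only the pages that were fetched and parsed successfully.
async function parseAllSettled(
  urls: string[],
  getTextFromURL: (url: string) => Promise<string>,
): Promise<string[]> {
  const settled = await Promise.allSettled(urls.map((u) => getTextFromURL(u)));

  return settled
    .filter((r): r is PromiseFulfilledResult<string> => r.status === 'fulfilled')
    .map((r) => r.value);
}
```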
We're ready to use the parsed sources in our React frontend!
Using the sources for the chatbot's initial messages
Back in our React app, we now have the text from each source in our submit handler:
```tsx
// app/page.tsx
function Page() {
  // ...

  async function handleSubmit(e) {
    e.preventDefault();

    const response = await fetch('/api/getSources', {
      method: 'POST',
      body: JSON.stringify({ topic }),
    });
    const sources = await response.json();
    setSources(sources);

    const parsedSourcesRes = await fetch('/api/getParsedSources', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ sources }),
    });

    // The text from each source
    const parsedSources = await parsedSourcesRes.json();
  }

  // ...
}
```
We're ready to kick off our chatbot. We'll use the selected grade level and the parsed sources to write a system prompt, and pass in the selected topic as the user's first message:
```tsx
// app/page.tsx
function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleSubmit(e) {
    // ...

    // The text from each source
    const parsedSources = await parsedSourcesRes.json();

    // Start our chatbot
    const systemPrompt = `
      You're an interactive personal tutor who is an expert at explaining topics. Given a topic and the information to teach, please educate the user about it at a ${grade} level.

      Here's the information to teach:

      <teaching_info>
      ${parsedSources
        .map((text, index) => `## Webpage #${index}:\n${text}\n\n`)
        .join('')}
      </teaching_info>
    `;

    const initialMessages = [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: topic },
    ];
    setMessages(initialMessages);

    // This will 404 for now
    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: initialMessages }),
    });
  }

  // ...
}
```
We also created some new React state to store all the messages so that we can display and update the chat history as the user sends new messages.
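Extracting the prompt assembly into a pure helper makes it easier to see (and test) exactly what the model receives. This is just an illustrative refactor of the template above, not code from the repo:

```typescript
// Assemble the tutor's system prompt from the parsed page texts and the
// selected education level.
function buildSystemPrompt(parsedSources: string[], grade: string): string {
  const teachingInfo = parsedSources
    .map((text, index) => `## Webpage #${index}:\n${text}\n\n`)
    .join('');

  return `You're an interactive personal tutor who is an expert at explaining topics. Given a topic and the information to teach, please educate the user about it at a ${grade} level.

Here's the information to teach:

<teaching_info>
${teachingInfo}</teaching_info>`;
}
```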
We're ready to implement our final API endpoint at `/api/chat`!
Implementing the chatbot endpoint with Together AI's SDK
Let's install Together AI's Node SDK:
```bash
npm i together-ai
```
and use it to query Llama 3.1 8B Turbo:
```js
// app/api/chat/route.js
import Together from 'together-ai';

const together = new Together();

export async function POST(req) {
  const json = await req.json();

  const res = await together.chat.completions.create({
    model: 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo',
    messages: json.messages,
    stream: true,
  });

  return new Response(res.toReadableStream());
}
```
Since we're passing the array of messages directly from our React app, and the format is the same as what Together's `chat.completions.create` method expects, our API handler is mostly acting as a simple passthrough.

We're also using the `stream: true` option so our frontend will be able to show partial updates as soon as the LLM starts its response.
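Under the hood, consuming that stream just means reading the response body chunk by chunk. If you wanted to read the raw body yourself instead of using an SDK helper, a minimal sketch (assuming the body arrives as UTF-8 text chunks) looks like this:

```typescript
// Read a streamed response body and invoke a callback for each decoded
// chunk of text as it arrives.
async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```

Note that Together's `res.toReadableStream()` actually emits JSON-encoded chunks, which is why the frontend section below uses the SDK's `ChatCompletionStream` helper to parse them rather than raw text reads.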
We're ready to display our chatbot's first message in our React app!
Displaying the chatbot's response in the UI
Back in our page, we'll use the `ChatCompletionStream` helper from Together's SDK to update our `messages` state as our API endpoint streams in text:
```tsx
// app/page.tsx
import { ChatCompletionStream } from 'together-ai/lib/ChatCompletionStream';

function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleSubmit(e) {
    // ...

    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: initialMessages }),
    });

    ChatCompletionStream.fromReadableStream(chatRes.body).on(
      'content',
      (delta) => {
        setMessages((prev) => {
          const lastMessage = prev[prev.length - 1];

          if (lastMessage.role === 'assistant') {
            return [
              ...prev.slice(0, -1),
              { ...lastMessage, content: lastMessage.content + delta },
            ];
          } else {
            return [...prev, { role: 'assistant', content: delta }];
          }
        });
      }
    );
  }

  // ...
}
```
Note that because we're storing the entire history of messages as an array, we check the last message's `role` to determine whether to append the streamed text to it, or push a new object with the assistant's initial text.
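That append-or-push logic is easy to reason about as a pure function. Here's the same update extracted into a standalone helper (an illustrative refactor, not code from the repo):

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Append a streamed delta to the last assistant message, or start a new
// assistant message if the last one came from the user.
function applyDelta(prev: Message[], delta: string): Message[] {
  const last = prev[prev.length - 1];

  if (last && last.role === 'assistant') {
    return [...prev.slice(0, -1), { ...last, content: last.content + delta }];
  }
  return [...prev, { role: 'assistant', content: delta }];
}
```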
Now that our `messages` React state is ready, let's update our UI to display it:
```tsx
// app/page.tsx
function Page() {
  const [topic, setTopic] = useState('');
  const [grade, setGrade] = useState('');
  const [sources, setSources] = useState([]);
  const [messages, setMessages] = useState([]);

  async function handleSubmit(e) {
    // ...
  }

  return (
    <>
      <form onSubmit={handleSubmit}>{/* ... */}</form>

      {/* Display the sources */}
      {sources.length > 0 && (
        <div>
          <p>Sources</p>
          <ul>
            {sources.map((source) => (
              <li key={source.url}>
                <a href={source.url}>{source.name}</a>
              </li>
            ))}
          </ul>
        </div>
      )}

      {/* Display the messages */}
      {messages.map((message, i) => (
        <p key={i}>{message.content}</p>
      ))}
    </>
  );
}
```
If we try it out, we'll see the sources come in, and once our `/api/chat` endpoint responds with the first chunk, we'll see the answer text start streaming into our UI!
![](https://files.readme.io/d31251591e0b45ab04d817a734872591ee9f56a0cca9f90710229bcdda4695a2-image.png)
Letting the user ask follow-up questions
To let the user ask our tutor follow-up questions, let's make a new form that only shows up once we have some messages in our React state:
```tsx
// app/page.tsx
function Page() {
  // ...
  const [newMessageText, setNewMessageText] = useState('');

  return (
    <>
      {/* Form for initial messages */}
      {messages.length === 0 && (
        <form onSubmit={handleSubmit}>{/* ... */}</form>
      )}

      {sources.length > 0 && <>{/* ... */}</>}

      {messages.map((message, i) => (
        <p key={i}>{message.content}</p>
      ))}

      {/* Form for follow-up messages */}
      {messages.length > 0 && (
        <form onSubmit={handleMessage}>
          <input
            value={newMessageText}
            onChange={(e) => setNewMessageText(e.target.value)}
            type="text"
          />
        </form>
      )}
    </>
  );
}
```
We'll make a new submit handler called `handleMessage` that will look a lot like the end of our first `handleSubmit` function:
```tsx
// app/page.tsx
function Page() {
  const [messages, setMessages] = useState([]);
  // ...

  async function handleMessage(e) {
    e.preventDefault();

    const newMessages = [
      ...messages,
      {
        role: 'user',
        content: newMessageText,
      },
    ];

    const chatRes = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: newMessages }),
    });
    setMessages(newMessages);

    ChatCompletionStream.fromReadableStream(chatRes.body).on(
      'content',
      (delta) => {
        setMessages((prev) => {
          const lastMessage = prev[prev.length - 1];

          if (lastMessage.role === 'assistant') {
            return [
              ...prev.slice(0, -1),
              { ...lastMessage, content: lastMessage.content + delta },
            ];
          } else {
            return [...prev, { role: 'assistant', content: delta }];
          }
        });
      }
    );
  }

  // ...
}
```
Because we have all the messages in React state, we can just create a new object for the user's latest message, send it over to our existing `/api/chat` endpoint, and reuse the same logic to update our app's state as the latest response streams in.
The core features of our app are working great!
Digging deeper
React and Together AI are a perfect match for building powerful chatbots like LlamaTutor.
The app is fully open-source, so if you want to keep working on the code from this tutorial, be sure to check it out on GitHub:
https://github.com/Nutlope/llamatutor
And if you're ready to start building your own chatbots, sign up for Together AI today and make your first query in minutes!