
chore: copy fixes (#1898)
Co-authored-by: Lars Grammel <[email protected]>
iteratetograceness and lgrammel committed Jun 10, 2024
1 parent 85f209a commit 64e32d9
Showing 13 changed files with 47 additions and 45 deletions.
2 changes: 1 addition & 1 deletion content/docs/07-reference/ai-sdk-core/index.mdx
@@ -6,7 +6,7 @@ description: Reference documentation for the AI SDK Core
# AI SDK Core

[AI SDK Core](/docs/ai-sdk-core) is a set of functions that allow you to interact with language models and other AI models.
-These functions are designed to be easy to use and flexible, allowing you to generate text, structured data,
+These functions are designed to be easy-to-use and flexible, allowing you to generate text, structured data,
and embeddings from language models and other AI models.

AI SDK Core contains the following main functions:
@@ -7,11 +7,11 @@ description: Learn to troubleshoot Azure OpenAI slow to stream issues.

## Issue

-- When using OpenAI hosted on Azure, streaming is slow and in big chunks
+When using OpenAI hosted on Azure, streaming is slow and in big chunks.

## Solution

This is a Microsoft issue. Some users have reported the following solutions:

-- Update Content Filtering Settings:
-  OAI Azure portal, within Content Filtering (Preview), there's an option to set Streaming mode from Default to Asynchronous Modified Filter
+- **Update Content Filtering Settings**:
+  OpenAI Azure portal, within Content Filtering (Preview), there's an option to set Streaming mode from Default to Asynchronous Modified Filter.
@@ -7,11 +7,11 @@ description: Troubleshooting client-side function calls not being invoked.

## Issue

-I upgraded the AI SDK to v3.0.20 or newer. I am using `OpenAIStream` Client-side function calls are no longer invoked.
+I upgraded the AI SDK to v3.0.20 or newer. I am using [`OpenAIStream`](/docs/reference/stream-helpers/openai-stream). Client-side function calls are no longer invoked.

## Solution

-You need to add a stub for `onFunctionCall` to `OpenAIStream` to enable the correct forwarding of the function calls to the client.
+You will need to add a stub for `experimental_onFunctionCall` to [`OpenAIStream`](/docs/reference/stream-helpers/openai-stream) to enable the correct forwarding of the function calls to the client.

```tsx
const stream = OpenAIStream(response, {
  // ...
```
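
For reference, a complete version of the stub (collapsed in the diff above) might look like the following sketch. The surrounding Route Handler is an assumption for illustration, not part of this commit; the key point is that returning `undefined` from the stub forwards function calls to the client.

```tsx
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages, functions } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
    functions,
  });

  const stream = OpenAIStream(response, {
    // Stub: returning undefined forwards the function call to the client.
    experimental_onFunctionCall: async () => undefined,
  });

  return new StreamingTextResponse(stream);
}
```
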
@@ -9,22 +9,26 @@ You may use Server Actions in client components, but sometimes you may encounter

## Issue

-- It is not allowed to define inline "use server" annotated Server Actions in Client Components.
+It is not allowed to define inline `"use server"` annotated Server Actions in Client Components.

## Solution

-To use Server Actions in a Client Component, you can either export them from a separate file with "use server" at the top, pass them down through props from a Server Component, or a combination of [`createAI`](/docs/reference/ai-sdk-rsc/create-ai) and [`useActions`](/docs/reference/ai-sdk-rsc/use-actions) hooks to access them.
+To use Server Actions in a Client Component, you can either:
+
+- Export them from a separate file with `"use server"` at the top.
+- Pass them down through props from a Server Component.
+- Implement a combination of [`createAI`](/docs/reference/ai-sdk-rsc/create-ai) and [`useActions`](/docs/reference/ai-sdk-rsc/use-actions) hooks to access them.

Learn more about [Server Actions and Mutations](https://nextjs.org/docs/app/api-reference/functions/server-actions#with-client-components).

```ts file='actions.ts'
-'use server'; // [!code ++]
+'use server';

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function getAnswer(question: string) {
-  'use server'; // [!code --]
+  'use server';

  const { text } = await generateText({
    model: openai.chat('gpt-3.5-turbo'),
    // ...
```
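
To round out the first option above, a hypothetical Client Component that imports this action could look like the sketch below. The component name and prompt are illustrative, and the action is assumed to return the generated text.

```tsx
'use client';

import { useState } from 'react';
import { getAnswer } from './actions';

export function AskButton() {
  const [answer, setAnswer] = useState('');

  async function handleClick() {
    // The Server Action is imported from actions.ts and called like a
    // regular async function; the string return shape is an assumption.
    const text = await getAnswer('Why is the sky blue?');
    setAnswer(text);
  }

  return (
    <div>
      <button onClick={handleClick}>Ask</button>
      <p>{answer}</p>
    </div>
  );
}
```
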
@@ -7,17 +7,17 @@ description: How to fix strange stream output in the UI

## Issue

-I am using custom client code to process a server response that is sent using `StreamingTextResponse`. I am using version `3.0.20` or newer of the AI SDK. When I send a query, the UI streams text such as `0: "Je" 0: " suis" 0: "des"...` instead of the text that I’m looking for.
+I am using custom client code to process a server response that is sent using [`StreamingTextResponse`](/docs/reference/stream-helpers/streaming-text-response). I am using version `3.0.20` or newer of the AI SDK. When I send a query, the UI streams text such as `0: "Je"`, `0: " suis"`, `0: "des"...` instead of the text that I’m looking for.

## Background

-The AI SDK has switched to the stream data protocol in version `3.0.20` . It sends different stream parts to support data, tool calls, etc. What you see is the raw stream data protocol response.
+The AI SDK has switched to the stream data protocol in version `3.0.20`. It sends different stream parts to support data, tool calls, etc. What you see is the raw stream data protocol response.

## Solution

You have several options:

-1. Use the AI Core `streamText` function to send a raw text stream:
+1. Use the AI Core [`streamText`](/docs/reference/ai-sdk-core/stream-text) function to send a raw text stream:

```tsx
export async function POST(req: Request) {
  // ...
```

@@ -34,7 +34,7 @@ You have several options:

2. Pin the AI SDK version to `3.0.19` . This will keep the raw text stream.
-3. Process the stream data stream using `readDataStream`
+3. Process the stream data stream using `readDataStream`.

```tsx
for await (const { type, value } of readDataStream(reader, {
  // ...
```
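
Completing the truncated snippets above: a sketch of option 1 as a full Route Handler, and of option 3's processing loop. The model choice, the `reader`/`aborted` variables, and the `'text'` part check are assumptions based on the stream data protocol, not code from this commit.

```tsx
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Option 1: send a raw text stream instead of the stream data protocol.
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai.chat('gpt-3.5-turbo'),
    prompt,
  });

  return result.toTextStreamResponse();
}
```

And the option 3 loop, continuing the fragment shown above:

```tsx
// Option 3: parse the stream data protocol on the client.
let text = '';

for await (const { type, value } of readDataStream(reader, {
  isAborted: () => aborted,
})) {
  if (type === 'text') {
    text += value; // accumulate the streamed text deltas
  }
}
```
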
@@ -8,8 +8,8 @@ description: Troubleshooting errors related to streamable UI.
## Issue

- Variable Not Found
-- Cannot find 'div'
-- "Component" refers to a value, but is being used as a type
+- Cannot find `div`
+- `Component` refers to a value, but is being used as a type

## Solution

@@ -7,13 +7,13 @@ description: Troubleshooting streaming issues when deploying to Vercel with the

## Issue

-I'm using the Next.js pages router. Streaming with the AI SDK works in my local development environment.
+I'm using the Next.js Pages Router. Streaming with the AI SDK works in my local development environment.
However, when deploying to Vercel, streaming does not work in the deployed app.
Instead of streaming, only the full response is returned after a while.

## Cause

-The Next.js Pages Router currently does not support streaming with it's own routes.
+The Next.js Pages Router currently does not support streaming with its own routes.

## Solution

@@ -7,7 +7,7 @@ description: Troubleshooting streaming issues when deploying to Vercel with the

## Issue

-I'm using the Next.js app router. Streaming with the AI SDK works in my local development environment.
+I'm using the Next.js App Router. Streaming with the AI SDK works in my local development environment.
However, when deploying to Vercel, streaming does not work in the deployed app.
Instead of streaming, only the full response is returned after a while.

@@ -7,7 +7,7 @@ description: Learn how to fix timeouts and cut off responses when deploying to V

## Issue

-Streaming with the AI SDK works in my local dev environment.
+Streaming with the AI SDK works in my local development environment.
However, when I'm deploying to Vercel, longer responses get chopped off in the UI and I'm seeing timeouts in the Vercel logs or I'm seeing the error: `Uncaught (in promise) Error: Connection closed`.

## Solution
@@ -9,7 +9,7 @@ Sometimes streams are not closed properly, which can lead to unexpected behavior

## Issue

-- The streamable UI has been slow to update.
+The streamable UI has been slow to update.

## Solution

@@ -29,6 +29,6 @@ const submitMessage = async () => {

```tsx
  // ...
  stream.append('3');
  stream.done('4'); // [!code ++]

-  return stream.value,
+  return stream.value;
};
```
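
For context, a sketch of the complete action, assuming the fragment comes from a `createStreamableUI` example in `ai/rsc` (only the tail is visible in the diff):

```tsx
import { createStreamableUI } from 'ai/rsc';

const submitMessage = async () => {
  'use server';

  const stream = createStreamableUI('1');
  stream.update('2');
  stream.append('3');
  stream.done('4'); // closing the stream unblocks the client

  return stream.value;
};
```
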
@@ -3,22 +3,22 @@ title: useChat Failed to Parse Stream
description: Troubleshooting errors related to the Use Chat Failed to Parse Stream error.
---

-# useChat "Failed to Parse Stream String" Error
+# `useChat` "Failed to Parse Stream String" Error

## Issue

-I am using `useChat` or `useCompletion` and I am getting a `"Failed to parse stream string. Invalid code"` error. I am using version `3.0.20` or newer of the AI SDK.
+I am using [`useChat`](/docs/reference/ai-sdk-ui/use-chat) or [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion), and I am getting a `"Failed to parse stream string. Invalid code"` error. I am using version `3.0.20` or newer of the AI SDK.

## Background

The AI SDK has switched to the stream data protocol in version `3.0.20`.
-`useChat` and `useCompletion` expect stream parts that support data, tool calls, etc.
+[`useChat`](/docs/reference/ai-sdk-ui/use-chat) and [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) expect stream parts that support data, tool calls, etc.
What you see is a failure to parse the stream.
This can be caused by using an older version of the AI SDK in the backend, by providing a text stream using a custom provider, or by using a raw LangChain stream result.

## Solution

-You can switch `useChat` and `useCompletion` to raw text stream processing with the `streamMode` parameter (introduced in `v3.0.23` of the AI SDK).
+You can switch [`useChat`](/docs/reference/ai-sdk-ui/use-chat) and [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) to raw text stream processing with the [`streamMode`](/docs/reference/ai-sdk-ui/use-completion#stream-mode) parameter (introduced in `v3.0.23` of the AI SDK).
Set it to `text` as follows:

```tsx
// ...
```
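
A sketch of that setting in context; everything apart from the `streamMode` option is a standard `useChat` setup:

```tsx
import { useChat } from 'ai/react';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    streamMode: 'text', // opt in to raw text stream processing
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => (
        <p key={m.id}>{m.content}</p>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```
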
@@ -3,12 +3,12 @@ title: NaN token counts when using streamText with OpenAI models
description: Troubleshooting errors related to NaN token counts in OpenAI streaming.
---

-# NaN token counts when using streamText with OpenAI models
+# `NaN` token counts when using `streamText` with OpenAI models

## Issue

I am using `streamText` with the [OpenAI provider for the AI SDK](/providers/ai-sdk-providers/openai) and OpenAI models.
-I use `createOpenAI` to create the provider instance.
+I use [`createOpenAI`](/providers/ai-sdk-providers/openai#provider-instance) to create the provider instance.
When I try to get the token counts, I get `NaN` values.

## Background
@@ -19,7 +19,7 @@ and we therefore made it opt-in.

## Solution

-When you use `createOpenAI`, you can enable a `strict` compatibility model:
+When you use [`createOpenAI`](/providers/ai-sdk-providers/openai#provider-instance), you can enable a `strict` compatibility model:

```tsx
import { createOpenAI } from '@ai-sdk/openai';
// ...
```
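
The collapsed snippet presumably continues along these lines; `compatibility: 'strict'` is the documented option, while the surrounding setup is an assumption:

```tsx
import { createOpenAI } from '@ai-sdk/openai';

const openai = createOpenAI({
  // Strict mode enables OpenAI-only features such as streamed token usage.
  compatibility: 'strict',
});
```
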
30 changes: 14 additions & 16 deletions content/docs/08-troubleshooting/migration-guide.mdx
@@ -8,14 +8,14 @@ description: Welcome to the Vercel AI SDK documentation!
This guide will help you:

- Upgrade to Vercel AI SDK 3.1
-- Migrate from Legacy Providers to Vercel AI SDK Core
-- Migrate from `render` to `streamUI`
+- Migrate from [Legacy Providers](/providers/legacy-providers) to Vercel AI SDK Core
+- Migrate from [`render`](/docs/reference/ai-sdk-rsc/render) to [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui)

-Upgrading to Vercel AI SDK 3.1 does not require using the newly released AI SDK Core API or `streamUI` function. You can continue to use Vercel AI SDK with existing (Legacy) providers.
+Upgrading to Vercel AI SDK 3.1 does not require using the newly released AI SDK Core API or [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui) function. You can continue to use Vercel AI SDK with [existing (Legacy) providers](/providers/legacy-providers).

-## **Upgrading**
+## Upgrading

-### **Vercel AI SDK**
+### Vercel AI SDK

To update to Vercel AI SDK version 3.1, run the following command using your preferred package manager:

@@ -25,14 +25,14 @@ To update to Vercel AI SDK version 3.1, run the following command using your pre

The release of Vercel AI SDK 3.1 introduces several new features that improve the way you build AI applications with the SDK:

-- AI SDK Core, a brand new unified API for interacting with large language models (LLMs)
-- `streamUI`, a new abstraction, built upon AI SDK Core functions that simplifies building streaming UIs.
+- AI SDK Core, a brand new unified API for interacting with large language models (LLMs).
+- [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui), a new abstraction, built upon AI SDK Core functions that simplifies building streaming UIs.

## Migrating from Legacy Providers to AI SDK Core

-Prior to AI SDK Core, you had to use a model providers SDK to query their models.
+Prior to AI SDK Core, you had to use a model provider's SDK to query their models.

-In the following Route Handler, you use the OpenAI SDK to query their model. You then pipe that response into the `OpenAIStream` function which returns a ReadableStream that you can pass to the client using a new `StreamingTextResponse`.
+In the following Route Handler, you use the OpenAI SDK to query their model. You then pipe that response into the [`OpenAIStream`](/docs/reference/stream-helpers/openai-stream) function which returns a [`ReadableStream`](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) that you can pass to the client using a new [`StreamingTextResponse`](/docs/reference/stream-helpers/streaming-text-response).

```tsx
import OpenAI from 'openai';
  // ...
```

@@ -48,7 +48,7 @@ export async function POST(req: Request) {

```tsx
  const response = await openai.chat.completions.create({
    model: 'gpt-4-turbo',
    stream: true,
-    messages: messages,
+    messages,
  });

  const stream = OpenAIStream(response);
  // ...
```

@@ -79,7 +79,7 @@ export async function POST(req: Request) {
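
The migrated version of this Route Handler is collapsed in the `@@ -79` hunk above. A sketch of what it looks like with AI SDK Core, assuming the `@ai-sdk/openai` provider package:

```tsx
import { openai } from '@ai-sdk/openai';
import { StreamingTextResponse, streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  // toAIStream() produces the same stream shape OpenAIStream did before.
  return new StreamingTextResponse(result.toAIStream());
}
```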

## Migrating from `render` to `streamUI`

-The AI SDK RSC API was launched as part of version 3.0. This API introduced the `render` function, a helper function to create streamable UIs with OpenAI models. With the new AI SDK Core API, it became possible to make streamable UIs possible with any compatible provider.
+The AI SDK RSC API was launched as part of version 3.0. This API introduced the [`render`](/docs/reference/ai-sdk-rsc/render) function, a helper function to create streamable UIs with OpenAI models. With the new AI SDK Core API, it became possible to make streamable UIs possible with any compatible provider.

The following example Server Action uses the `render` function using the model provider directly from OpenAI. You first create an OpenAI provider instance with the OpenAI SDK. Then, you pass it to the provider key of the render function alongside a tool that returns a React Server Component, defined in the `render` key of the tool.

@@ -92,8 +92,7 @@ import { getWeather } from '@/utils';

```tsx

const openai = new OpenAI();

-async function submitMessage(userInput) {
-  // 'What is the weather in SF?'
+async function submitMessage(userInput = 'What is the weather in SF?') {
  'use server';

  return render({
    // ...
```

@@ -123,7 +122,7 @@ async function submitMessage(userInput) {

```tsx
}
```

-With the new `streamUI` function, you can now use any compatible AI SDK providers. In this example, you import the AI SDK OpenAI provider. Then, you pass it to the model key of the new `streamUI` function. Finally you declare a tool and return a React Server Component, defined in the `generate` key of the tool.
+With the new [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui) function, you can now use any compatible AI SDK provider. In this example, you import the AI SDK OpenAI provider. Then, you pass it to the [`model`](/docs/reference/ai-sdk-rsc/stream-ui#model) key of the new [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui) function. Finally, you declare a tool and return a React Server Component, defined in the [`generate`](/docs/reference/ai-sdk-rsc/stream-ui#tools-generate) key of the tool.

```tsx
import { streamUI } from 'ai/rsc';
// ...
```

@@ -132,8 +131,7 @@ import { z } from 'zod';

```tsx
import { Spinner, Weather } from '@/components';
import { getWeather } from '@/utils';

-async function submitMessage(userInput) {
-  // 'What is the weather in SF?'
+async function submitMessage(userInput = 'What is the weather in SF?') {
  'use server';

  const result = await streamUI({
    // ...
```
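
The collapsed body of this `streamUI` call would declare the tool roughly as follows. The tool name, prompt wiring, and weather components are assumptions carried over from the `render` example above; the key point is that `generate` replaces the old `render` key:

```tsx
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    prompt: userInput,
    text: ({ content }) => <p>{content}</p>,
    tools: {
      get_city_weather: {
        description: 'Get the current weather for a city',
        parameters: z.object({ city: z.string() }),
        generate: async function* ({ city }) {
          // Yield a placeholder while fetching, then return the final UI.
          yield <Spinner />;
          const weather = await getWeather(city);
          return <Weather info={weather} />;
        },
      },
    },
  });

  return result.value;
```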
