Chatbot

An example of building a chatbot with AI Elements.

Tutorial

Let's walk through how to build a chatbot with AI Elements Vue and the AI SDK. Our example will include reasoning, web search with citations, and a model selector.

Setup

First, set up a new Nuxt project by running the following command:

Terminal
pnpm create nuxt@latest ai-chatbot

Navigate to the newly created directory:

Terminal
cd ai-chatbot

Make sure you complete the project setup by following the guide below.

Run the following command to install AI Elements:

Terminal
npx ai-elements-vue@latest

Now, install the AI SDK dependencies:

Terminal
npm i ai @ai-sdk/vue zod

Configure the Shadcn Module

To make sure the AI Elements components are registered correctly and to avoid console warnings from Nuxt's auto-imports, you need to modify nuxt.config.ts and add the AI Elements directory to the shadcn module configuration.

nuxt.config.ts
export default defineNuxtConfig({
  // ...
  modules: ['shadcn-nuxt'],
  shadcn: {
    /**
     * Prefix for all the imported component.
     * @default "Ui"
     */
    prefix: '',
    /**
     * Directory that the component lives in.
     * Will respect the Nuxt aliases.
     * @link https://nuxt.com/docs/api/nuxt-config#alias
     * @default "@/components/ui"
     */
    componentDir: [
      '@/components/ui',
      // AI elements
      {
        path: '@/components/ai-elements',
        prefix: '',
      },
    ],
  },
})

Configure the Vercel AI Gateway API Key

Create a .env file in the project root and add your Vercel AI Gateway API key. This key is used to authenticate your application with the Vercel AI Gateway service.

Terminal
touch .env

Edit the .env file:

.env
NUXT_AI_GATEWAY_API_KEY=xxxxxxxxx

Replace xxxxxxxxx with your actual Vercel AI Gateway API key, then expose the environment variable in nuxt.config.ts:

nuxt.config.ts
import process from 'node:process'

export default defineNuxtConfig({
  // rest of your nuxt config
  runtimeConfig: {
    aiGatewayApiKey: process.env.NUXT_AI_GATEWAY_API_KEY,
  },
})

We're now ready to start building our app!

Create the API Route

Create an API route at server/api/chat.ts and add the following code. We use perplexity/sonar for web search because that model returns search results by default. We also pass sendSources and sendReasoning to toUIMessageStreamResponse so they are received as parts on the frontend. The handler also accepts file attachments from the client.

server/api/chat.ts
import type { UIMessage } from 'ai'
import { convertToModelMessages, createGateway, streamText } from 'ai'
import { createError, readBody } from 'h3'

export const maxDuration = 30

const DEFAULT_SYSTEM_PROMPT = 'You are a helpful assistant that can answer questions and help with tasks'
const DEFAULT_MODEL = 'openai/gpt-4o'

interface ChatRequestBody {
  messages: UIMessage[]
  model?: string
  webSearch?: boolean
}

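// Lazy setup: validate the API key and create the gateway client once, then handle each request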
export default defineLazyEventHandler(async () => {
  const apiKey = useRuntimeConfig().aiGatewayApiKey

  if (!apiKey) {
    throw createError({
      statusCode: 500,
      statusMessage: 'Missing AI Gateway API key',
    })
  }

  const gateway = createGateway({
    apiKey,
  })

  return defineEventHandler(async (event) => {
    const { messages, model, webSearch = false } = await readBody<ChatRequestBody>(event)

    if (!Array.isArray(messages) || messages.length === 0) {
      throw createError({
        statusCode: 400,
        statusMessage: 'Missing messages payload',
      })
    }

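    // When web search is enabled, force the perplexity/sonar model, which returns sources by default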
    const selectedModel = webSearch ? 'perplexity/sonar' : (model || DEFAULT_MODEL)

    const result = streamText({
      model: gateway(selectedModel),
      messages: convertToModelMessages(messages),
      system: DEFAULT_SYSTEM_PROMPT,
    })

    return result.toUIMessageStreamResponse({
      sendSources: true,
      sendReasoning: true,
    })
  })
})
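
For reference, here is a minimal sketch of the request this route expects. This is illustrative only: the Chat client from @ai-sdk/vue builds and sends this payload for you, and the message id shown here is made up.

const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    // UIMessage array, consumed by convertToModelMessages on the server
    messages: [
      { id: 'example-1', role: 'user', parts: [{ type: 'text', text: 'Hello!' }] },
    ],
    model: 'openai/gpt-4o',
    webSearch: false,
  }),
})
// The route answers with a UI message stream, which the Chat client consumes for you.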

Wire Up the UI

In your app/app.vue, replace the code with the file below.

app/app.vue
<template>
  <div class="min-h-screen bg-background">
    <NuxtRouteAnnouncer />
    <NuxtPage />
  </div>
</template>

Create a new page at pages/index.vue and add the code below.

Here we use the PromptInput component and its compound components to build a rich input experience with file attachments, a model selector, and an action menu. The input component uses the new PromptInputMessage type to handle both text and file attachments.

The entire chat lives inside a Conversation. We switch on message.parts and render each part with the matching Message, Reasoning, or Sources component. We also use the chat status to stream reasoning tokens and to render the Loader.

pages/index.vue
<script setup lang="ts">
import type { ChatStatus, SourceUrlUIPart, UIMessage } from 'ai'
import type { PromptInputMessage } from '@/components/ai-elements/prompt-input'
import { Chat } from '@ai-sdk/vue'
import { CopyIcon, GlobeIcon, RefreshCcwIcon } from 'lucide-vue-next'
import { computed, ref } from 'vue'
import { Conversation, ConversationContent, ConversationScrollButton } from '@/components/ai-elements/conversation'
import { Loader } from '@/components/ai-elements/loader'
import { Message, MessageAction, MessageActions, MessageContent, MessageResponse } from '@/components/ai-elements/message'
import {
  PromptInput,
  PromptInputActionAddAttachments,
  PromptInputActionMenu,
  PromptInputActionMenuContent,
  PromptInputActionMenuTrigger,
  PromptInputAttachment,
  PromptInputAttachments,
  PromptInputBody,
  PromptInputButton,
  PromptInputFooter,
  PromptInputHeader,
  PromptInputSelect,
  PromptInputSelectContent,
  PromptInputSelectItem,
  PromptInputSelectTrigger,
  PromptInputSelectValue,
  PromptInputSubmit,
  PromptInputTextarea,
  PromptInputTools,
  usePromptInputProvider,
} from '@/components/ai-elements/prompt-input'
import { Reasoning, ReasoningContent, ReasoningTrigger } from '@/components/ai-elements/reasoning'
import { Source, Sources, SourcesContent, SourcesTrigger } from '@/components/ai-elements/sources'

const models = [
  { name: 'GPT 4o', value: 'openai/gpt-4o' },
  { name: 'Deepseek R1', value: 'deepseek/deepseek-r1' },
] as const

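// Chat client from the AI SDK; by default it posts to /api/chat and exposes reactive messages and status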
const chat = new Chat({})
const model = ref(models[0].value)
const webSearch = ref(false)

const status = computed<ChatStatus>(() => chat.status)
const messages = computed(() => chat.messages)
const lastMessageId = computed(() => messages.value.at(-1)?.id ?? null)
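// Id of the most recent assistant message, used to show the retry/copy actions only on that reply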
const lastAssistantMessageId = computed(() => {
  for (let index = messages.value.length - 1; index >= 0; index -= 1) {
    const current = messages.value[index]
    if (current && current.role === 'assistant')
      return current.id
  }
  return null
})

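// Send the prompt text and/or attachments, along with the selected model and the web search flag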
async function handleSubmit(message: PromptInputMessage) {
  const hasText = Boolean(message.text?.trim())
  const hasAttachments = Boolean(message.files?.length)

  if (!hasText && !hasAttachments)
    return

  try {
    await chat.sendMessage(
      {
        text: hasText ? message.text : 'Sent with attachments',
        files: hasAttachments ? message.files : undefined,
      },
      {
        body: {
          model: model.value,
          webSearch: webSearch.value,
        },
      },
    )
  }
  catch (error) {
    console.error('Failed to send message', error)
  }
}

function handlePromptError(error: { code: string, message: string }) {
  console.error(`Input error (${error.code})`, error.message)
}

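// Provide the prompt input state (text, files, submit and error handling) consumed by the PromptInput components below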
const promptInput = usePromptInputProvider({
  onSubmit: handleSubmit,
  onError: handlePromptError,
})

const hasPendingInput = computed(() => {
  return Boolean(promptInput.textInput.value.trim()) || promptInput.files.value.length > 0
})

const submitDisabled = computed(() => !hasPendingInput.value && !status.value)

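// Pull the source-url parts out of a message so they can be rendered in the Sources block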
function getSourceUrlParts(message: UIMessage) {
  return message.parts.filter((part): part is SourceUrlUIPart => part.type === 'source-url')
}

function shouldShowActions(message: UIMessage, partIndex: number) {
  if (message.role !== 'assistant')
    return false
  if (lastAssistantMessageId.value !== message.id)
    return false
  return isLastTextPart(message, partIndex)
}

function isLastTextPart(message: UIMessage, partIndex: number) {
  for (let index = partIndex + 1; index < message.parts.length; index += 1) {
    const nextPart = message.parts[index]
    if (nextPart && nextPart.type === 'text')
      return false
  }
  return true
}

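// A reasoning part counts as streaming while it is the last part of the newest message and the chat is still streaming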
function isReasoningStreaming(message: UIMessage, partIndex: number) {
  return status.value === 'streaming'
    && message.id === lastMessageId.value
    && partIndex === message.parts.length - 1
}

function toggleWebSearch() {
  webSearch.value = !webSearch.value
}

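// Copy assistant text to the clipboard; skipped when the Clipboard API is unavailable (e.g. during SSR)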
async function copyToClipboard(text: string) {
  if (!text)
    return

  if (typeof navigator === 'undefined' || !navigator.clipboard)
    return

  try {
    await navigator.clipboard.writeText(text)
  }
  catch (error) {
    console.error('Failed to copy to clipboard', error)
  }
}

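// Regenerate the latest assistant response using the current model and web search settings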
function handleRegenerate() {
  chat.regenerate({
    body: {
      model: model.value,
      webSearch: webSearch.value,
    },
  })
}
</script>

<template>
  <div class="relative mx-auto size-full h-screen max-w-4xl p-6">
    <div class="flex h-full flex-col">
      <Conversation class="h-full">
        <ConversationContent>
          <div
            v-for="message in messages"
            :key="message.id"
          >
            <Sources
              v-if="message.role === 'assistant' && getSourceUrlParts(message).length > 0"
            >
              <SourcesTrigger :count="getSourceUrlParts(message).length" />
              <SourcesContent
                v-for="(source, index) in getSourceUrlParts(message)"
                :key="`${message.id}-source-${index}`"
              >
                <Source
                  :href="source.url"
                  :title="source.title ?? source.url"
                />
              </SourcesContent>
            </Sources>

            <template
              v-for="(part, partIndex) in message.parts"
              :key="`${message.id}-${partIndex}`"
            >
              <Message
                v-if="part.type === 'text'"
                :from="message.role"
              >
                <div>
                  <MessageContent>
                    <MessageResponse :content="part.text" />
                  </MessageContent>

                  <MessageActions v-if="shouldShowActions(message, partIndex)">
                    <MessageAction
                      label="Retry"
                      @click="handleRegenerate"
                    >
                      <RefreshCcwIcon class="size-3" />
                    </MessageAction>
                    <MessageAction
                      label="Copy"
                      @click="copyToClipboard(part.text)"
                    >
                      <CopyIcon class="size-3" />
                    </MessageAction>
                  </MessageActions>
                </div>
              </Message>

              <Reasoning
                v-else-if="part.type === 'reasoning'"
                class="w-full"
                :is-streaming="isReasoningStreaming(message, partIndex)"
              >
                <ReasoningTrigger />
                <ReasoningContent :content="part.text" />
              </Reasoning>
            </template>
          </div>

          <Loader v-if="status === 'submitted'" class="mx-auto" />
        </ConversationContent>

        <ConversationScrollButton />
      </Conversation>

      <PromptInput class="mt-4" global-drop multiple>
        <PromptInputHeader>
          <PromptInputAttachments>
            <template #default="{ file }">
              <PromptInputAttachment :file="file" />
            </template>
          </PromptInputAttachments>
        </PromptInputHeader>

        <PromptInputBody>
          <PromptInputTextarea />
        </PromptInputBody>

        <PromptInputFooter>
          <PromptInputTools>
            <PromptInputActionMenu>
              <PromptInputActionMenuTrigger />
              <PromptInputActionMenuContent>
                <PromptInputActionAddAttachments />
              </PromptInputActionMenuContent>
            </PromptInputActionMenu>

            <PromptInputButton
              :variant="webSearch ? 'default' : 'ghost'"
              @click="toggleWebSearch"
            >
              <GlobeIcon class="size-4" />
              <span>Search</span>
            </PromptInputButton>

            <PromptInputSelect v-model="model">
              <PromptInputSelectTrigger>
                <PromptInputSelectValue />
              </PromptInputSelectTrigger>
              <PromptInputSelectContent>
                <PromptInputSelectItem
                  v-for="item in models"
                  :key="item.value"
                  :value="item.value"
                >
                  {{ item.name }}
                </PromptInputSelectItem>
              </PromptInputSelectContent>
            </PromptInputSelect>
          </PromptInputTools>

          <PromptInputSubmit
            :disabled="submitDisabled"
            :status="status"
          />
        </PromptInputFooter>
      </PromptInput>
    </div>
  </div>
</template>

Run Your Application

With that, you've built everything you need for the chatbot! To start your application, run the following command:

Terminal
pnpm run dev

Open http://localhost:3000 in your browser. You should see an input field. Type a message to test it and watch the AI chatbot respond in real time!

You now have a chatbot application with file attachment support! The chatbot can handle both text and file input through the action menu. Feel free to explore other components such as Tool and Task to extend your app, or check out the other examples.