Google AI offers a number of different chat models, including the powerful Gemini series. For information on the latest models, their features, context windows, and more, head to the Google AI docs. This page will help you get started with the ChatGoogleGenerativeAI chat model. For detailed documentation of all ChatGoogleGenerativeAI features and configurations, head to the API reference.
This library will be deprecated
This library is built on top of a deprecated Google library and will be replaced by the ChatGoogle library. New implementations should use the ChatGoogle library, and existing implementations should consider migrating.

Overview

Integration details

Class | Package | Serializable | PY support | Downloads | Version
ChatGoogleGenerativeAI | @langchain/google-genai | - | - | NPM - Downloads | NPM - Version

Model features

See the links in the table headers below for guides on how to use specific features.

Setup

You can access Google's gemini and gemini-vision models, as well as other generative models, in LangChain through the ChatGoogleGenerativeAI class in the @langchain/google-genai integration package.
You can also access Google's gemini family of models via LangChain's VertexAI and VertexAI-web integrations. See the Vertex AI integration docs.

Credentials

Get an API key here: https://ai.google.dev/tutorials/setup Then set the GOOGLE_API_KEY environment variable:
export GOOGLE_API_KEY="your-api-key"
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the following:
# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"
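If you load credentials programmatically, a small pre-flight check can surface a missing key before the first model call fails with an opaque auth error. The helper below is a hypothetical sketch (requireApiKey is not part of LangChain or the Google SDK):

```typescript
// Hypothetical helper: fail fast if GOOGLE_API_KEY is missing, instead of
// deferring the failure to the first API request.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env["GOOGLE_API_KEY"];
  if (!key) {
    throw new Error("GOOGLE_API_KEY is not set");
  }
  return key;
}

// Usage (Node): const apiKey = requireApiKey(process.env);
```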

Installation

The LangChain ChatGoogleGenerativeAI integration lives in the @langchain/google-genai package:
npm install @langchain/google-genai @langchain/core

Instantiation

Now we can instantiate our model object and generate chat completions:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai"

const llm = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-pro",
    temperature: 0,
    maxRetries: 2,
    // other params...
})

Invocation

const aiMsg = await llm.invoke([
    [
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ],
    ["human", "I love programming."],
])
aiMsg
AIMessage {
  "content": "J'adore programmer. \n",
  "additional_kwargs": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "response_metadata": {
    "finishReason": "STOP",
    "index": 0,
    "safetyRatings": [
      {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_HARASSMENT",
        "probability": "NEGLIGIBLE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "probability": "NEGLIGIBLE"
      }
    ]
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 21,
    "output_tokens": 5,
    "total_tokens": 26
  }
}
console.log(aiMsg.content)
J'adore programmer.

Safety settings

Gemini models have default safety settings that can be overridden. If you are receiving lots of "Safety Warnings" from your models, you can try tweaking the model's safetySettings attribute. For example, to turn off safety blocking for dangerous content, you can import the enums from the @google/generative-ai package, then construct your LLM as follows:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HarmBlockThreshold, HarmCategory } from "@google/generative-ai";

const llmWithSafetySettings = new ChatGoogleGenerativeAI({
  model: "gemini-2.5-pro",
  temperature: 0,
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
  ],
  // other params...
});

Tool calling

Tool calling with Google AI is mostly the same as tool calling with other models, but has a few restrictions on schema. The Google AI API does not allow tool schemas to contain an object with unknown properties. For example, the following Zod schemas will throw an error:

const invalidSchema = z.object({ properties: z.record(z.unknown()) });
const invalidSchema2 = z.record(z.unknown());

Instead, you should explicitly define the properties of the object field. Here's an example:
import { tool } from "@langchain/core/tools";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import * as z from "zod";

// Define your tool
const fakeBrowserTool = tool((_) => {
  return "The search result is xyz..."
}, {
  name: "browser_tool",
  description: "Useful for when you need to find something on the web or summarize a webpage.",
  schema: z.object({
    url: z.string().describe("The URL of the webpage to search."),
    query: z.string().optional().describe("An optional search query to use."),
  }),
})

const llmWithTool = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
}).bindTools([fakeBrowserTool]) // Bind your tool to the model

const toolRes = await llmWithTool.invoke([
  [
    "human",
    "Search the web and tell me what the weather will be like tonight in new york. use a popular weather website",
  ],
]);

console.log(toolRes.tool_calls);
[
  {
    name: 'browser_tool',
    args: {
      url: 'https://www.weather.com',
      query: 'weather tonight in new york'
    },
    type: 'tool_call'
  }
]
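Once the model returns tool_calls like the ones above, your application is responsible for actually running the named tool. A minimal dispatch sketch (the ToolCall shape below is narrowed to the two fields used; dispatchToolCall is a hypothetical helper, not a LangChain API):

```typescript
// Hypothetical sketch: route a returned tool_call to the matching local
// implementation by name, and pass along its args.
type ToolCall = { name: string; args: Record<string, unknown> };

function dispatchToolCall(
  call: ToolCall,
  tools: Record<string, (args: Record<string, unknown>) => string>
): string {
  const impl = tools[call.name];
  if (!impl) {
    throw new Error(`No tool registered under name: ${call.name}`);
  }
  return impl(call.args);
}
```

In a real agent loop you would wrap the returned string in a ToolMessage and send it back to the model.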

Built-in Google Search Retrieval

Google also offers a built-in search tool which you can use to ground content generation in real-world information. Here's an example of how to use it:
import { DynamicRetrievalMode, GoogleSearchRetrievalTool } from "@google/generative-ai";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const searchRetrievalTool: GoogleSearchRetrievalTool = {
  googleSearchRetrieval: {
    dynamicRetrievalConfig: {
      mode: DynamicRetrievalMode.MODE_DYNAMIC,
      dynamicThreshold: 0.7, // default is 0.7
    }
  }
};
const searchRetrievalModel = new ChatGoogleGenerativeAI({
  model: "gemini-2.5-pro",
  temperature: 0,
  maxRetries: 0,
}).bindTools([searchRetrievalTool]);

const searchRetrievalResult = await searchRetrievalModel.invoke("Who won the 2024 MLB World Series?");

console.log(searchRetrievalResult.content);
The Los Angeles Dodgers won the 2024 World Series, defeating the New York Yankees in Game 5 on October 30, 2024, by a score of 7-6. This victory marks the Dodgers' eighth World Series title and their first in a full season since 1988.  They achieved this win by overcoming a 5-0 deficit, making them the first team in World Series history to win a clinching game after being behind by such a margin.  The Dodgers also became the first team in MLB postseason history to overcome a five-run deficit, fall behind again, and still win.  Walker Buehler earned the save in the final game, securing the championship for the Dodgers.
The response also includes metadata about the search results:
console.dir(searchRetrievalResult.response_metadata?.groundingMetadata, { depth: null });
{
  searchEntryPoint: {
    renderedContent: '<style>\n' +
      '.container {\n' +
      '  align-items: center;\n' +
      '  border-radius: 8px;\n' +
      '  display: flex;\n' +
      '  font-family: Google Sans, Roboto, sans-serif;\n' +
      '  font-size: 14px;\n' +
      '  line-height: 20px;\n' +
      '  padding: 8px 12px;\n' +
      '}\n' +
      '.chip {\n' +
      '  display: inline-block;\n' +
      '  border: solid 1px;\n' +
      '  border-radius: 16px;\n' +
      '  min-width: 14px;\n' +
      '  padding: 5px 16px;\n' +
      '  text-align: center;\n' +
      '  user-select: none;\n' +
      '  margin: 0 8px;\n' +
      '  -webkit-tap-highlight-color: transparent;\n' +
      '}\n' +
      '.carousel {\n' +
      '  overflow: auto;\n' +
      '  scrollbar-width: none;\n' +
      '  white-space: nowrap;\n' +
      '  margin-right: -12px;\n' +
      '}\n' +
      '.headline {\n' +
      '  display: flex;\n' +
      '  margin-right: 4px;\n' +
      '}\n' +
      '.gradient-container {\n' +
      '  position: relative;\n' +
      '}\n' +
      '.gradient {\n' +
      '  position: absolute;\n' +
      '  transform: translate(3px, -9px);\n' +
      '  height: 36px;\n' +
      '  width: 9px;\n' +
      '}\n' +
      '@media (prefers-color-scheme: light) {\n' +
      '  .container {\n' +
      '    background-color: #fafafa;\n' +
      '    box-shadow: 0 0 0 1px #0000000f;\n' +
      '  }\n' +
      '  .headline-label {\n' +
      '    color: #1f1f1f;\n' +
      '  }\n' +
      '  .chip {\n' +
      '    background-color: #ffffff;\n' +
      '    border-color: #d2d2d2;\n' +
      '    color: #5e5e5e;\n' +
      '    text-decoration: none;\n' +
      '  }\n' +
      '  .chip:hover {\n' +
      '    background-color: #f2f2f2;\n' +
      '  }\n' +
      '  .chip:focus {\n' +
      '    background-color: #f2f2f2;\n' +
      '  }\n' +
      '  .chip:active {\n' +
      '    background-color: #d8d8d8;\n' +
      '    border-color: #b6b6b6;\n' +
      '  }\n' +
      '  .logo-dark {\n' +
      '    display: none;\n' +
      '  }\n' +
      '  .gradient {\n' +
      '    background: linear-gradient(90deg, #fafafa 15%, #fafafa00 100%);\n' +
      '  }\n' +
      '}\n' +
      '@media (prefers-color-scheme: dark) {\n' +
      '  .container {\n' +
      '    background-color: #1f1f1f;\n' +
      '    box-shadow: 0 0 0 1px #ffffff26;\n' +
      '  }\n' +
      '  .headline-label {\n' +
      '    color: #fff;\n' +
      '  }\n' +
      '  .chip {\n' +
      '    background-color: #2c2c2c;\n' +
      '    border-color: #3c4043;\n' +
      '    color: #fff;\n' +
      '    text-decoration: none;\n' +
      '  }\n' +
      '  .chip:hover {\n' +
      '    background-color: #353536;\n' +
      '  }\n' +
      '  .chip:focus {\n' +
      '    background-color: #353536;\n' +
      '  }\n' +
      '  .chip:active {\n' +
      '    background-color: #464849;\n' +
      '    border-color: #53575b;\n' +
      '  }\n' +
      '  .logo-light {\n' +
      '    display: none;\n' +
      '  }\n' +
      '  .gradient {\n' +
      '    background: linear-gradient(90deg, #1f1f1f 15%, #1f1f1f00 100%);\n' +
      '  }\n' +
      '}\n' +
      '</style>\n' +
      '<div class="container">\n' +
      '  <div class="headline">\n' +
      '    <svg class="logo-light" width="18" height="18" viewBox="9 9 35 35" fill="none" xmlns="http://www.w3.org/2000/svg">\n' +
      '      <path fill-rule="evenodd" clip-rule="evenodd" d="M42.8622 27.0064C42.8622 25.7839 42.7525 24.6084 42.5487 23.4799H26.3109V30.1568H35.5897C35.1821 32.3041 33.9596 34.1222 32.1258 35.3448V39.6864H37.7213C40.9814 36.677 42.8622 32.2571 42.8622 27.0064V27.0064Z" fill="#4285F4"/>\n' +
      '      <path fill-rule="evenodd" clip-rule="evenodd" d="M26.3109 43.8555C30.9659 43.8555 34.8687 42.3195 37.7213 39.6863L32.1258 35.3447C30.5898 36.3792 28.6306 37.0061 26.3109 37.0061C21.8282 37.0061 18.0195 33.9811 16.6559 29.906H10.9194V34.3573C13.7563 39.9841 19.5712 43.8555 26.3109 43.8555V43.8555Z" fill="#34A853"/>\n' +
      '      <path fill-rule="evenodd" clip-rule="evenodd" d="M16.6559 29.8904C16.3111 28.8559 16.1074 27.7588 16.1074 26.6146C16.1074 25.4704 16.3111 24.3733 16.6559 23.3388V18.8875H10.9194C9.74388 21.2072 9.06992 23.8247 9.06992 26.6146C9.06992 29.4045 9.74388 32.022 10.9194 34.3417L15.3864 30.8621L16.6559 29.8904V29.8904Z" fill="#FBBC05"/>\n' +
      '      <path fill-rule="evenodd" clip-rule="evenodd" d="M26.3109 16.2386C28.85 16.2386 31.107 17.1164 32.9095 18.8091L37.8466 13.8719C34.853 11.082 30.9659 9.3736 26.3109 9.3736C19.5712 9.3736 13.7563 13.245 10.9194 18.8875L16.6559 23.3388C18.0195 19.2636 21.8282 16.2386 26.3109 16.2386V16.2386Z" fill="#EA4335"/>\n' +
      '    </svg>\n' +
      '    <svg class="logo-dark" width="18" height="18" viewBox="0 0 48 48" xmlns="http://www.w3.org/2000/svg">\n' +
      '      <circle cx="24" cy="23" fill="#FFF" r="22"/>\n' +
      '      <path d="M33.76 34.26c2.75-2.56 4.49-6.37 4.49-11.26 0-.89-.08-1.84-.29-3H24.01v5.99h8.03c-.4 2.02-1.5 3.56-3.07 4.56v.75l3.91 2.97h.88z" fill="#4285F4"/>\n' +
      '      <path d="M15.58 25.77A8.845 8.845 0 0 0 24 31.86c1.92 0 3.62-.46 4.97-1.31l4.79 3.71C31.14 36.7 27.65 38 24 38c-5.93 0-11.01-3.4-13.45-8.36l.17-1.01 4.06-2.85h.8z" fill="#34A853"/>\n' +
      '      <path d="M15.59 20.21a8.864 8.864 0 0 0 0 5.58l-5.03 3.86c-.98-2-1.53-4.25-1.53-6.64 0-2.39.55-4.64 1.53-6.64l1-.22 3.81 2.98.22 1.08z" fill="#FBBC05"/>\n' +
      '      <path d="M24 14.14c2.11 0 4.02.75 5.52 1.98l4.36-4.36C31.22 9.43 27.81 8 24 8c-5.93 0-11.01 3.4-13.45 8.36l5.03 3.85A8.86 8.86 0 0 1 24 14.14z" fill="#EA4335"/>\n' +
      '    </svg>\n' +
      '    <div class="gradient-container"><div class="gradient"></div></div>\n' +
      '  </div>\n' +
      '  <div class="carousel">\n' +
      '    <a class="chip" href="https://vertexaisearch.cloud.google.com/grounding-api-redirect/AZnLMfyXqJN3K4FKueRIZDY2Owjs5Rw4dqgDOc6ZjYKsFo4GgENxLktR2sPHtNUuEBIUeqmUYc3jz9pLRq2cgSpc-4EoGBwQSTTpKk71CX7revnXUa54r9LxcxKgYxrUNBm5HpEm6JDNeJykc6NacPYv43M2wgkrhHCHCzHRyjEP2YR0Pxq4JQMUuOrLeTAYWB9oUb87FE5ksfuB6gimqO5-6uS3psR6">who won the 2024 mlb world series</a>\n' +
      '  </div>\n' +
      '</div>\n'
  },
  groundingChunks: [
    {
      web: {
        uri: 'https://vertexaisearch.cloud.google.com/grounding-api-redirect/AZnLMfwvs0gpiM4BbIcNXZnnp4d4ED_rLnIYz2ZwM-lwFnoUxXNlKzy7ZSbbs_E27yhARG6Gx2AuW7DsoqkWPfDFMqPdXfvG3n0qFOQxQ4MBQ9Ox9mTk3KH5KPRJ79m8V118RQRyhi6oK5qg5-fLQunXUVn_a42K7eMk7Kjb8VpZ4onl8Glv1lQQsAK7YWyYkQ7WkTHDHVGB-vrL2U2yRQ==',
        title: 'foxsports.com'
      }
    },
    {
      web: {
        uri: 'https://vertexaisearch.cloud.google.com/grounding-api-redirect/AZnLMfwxwBq8VYgKAhf3UC8U6U5D-i0lK4TwP-2Jf8ClqB-sI0iptm9GxgeaH1iHFbSi-j_C3UqYj8Ok0YDTyvg87S7JamU48pndrd467ZQbI2sI0yWxsCCZ_dosXHwemBHFL5TW2hbAqasq93CfJ09cp1jU',
        title: 'mlb.com'
      }
    }
  ],
  groundingSupports: [
    {
      segment: {
        endIndex: 131,
        text: 'The Los Angeles Dodgers won the 2024 World Series, defeating the New York Yankees in Game 5 on October 30, 2024, by a score of 7-6.'
      },
      groundingChunkIndices: [ 0, 1 ],
      confidenceScores: [ 0.7652759, 0.7652759 ]
    },
    {
      segment: {
        startIndex: 401,
        endIndex: 531,
        text: 'The Dodgers also became the first team in MLB postseason history to overcome a five-run deficit, fall behind again, and still win.'
      },
      groundingChunkIndices: [ 1 ],
      confidenceScores: [ 0.8487609 ]
    }
  ],
  retrievalMetadata: { googleSearchDynamicRetrievalScore: 0.93359375 },
  webSearchQueries: [ 'who won the 2024 mlb world series' ]
}
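The groundingChunks and groundingSupports arrays above can be joined into human-readable citations. A sketch under the assumption that the metadata always has the shape shown (the types below are narrowed to just the fields used; citeSupports is a hypothetical helper):

```typescript
// Sketch: attach source titles from groundingChunks to each grounded
// text segment in groundingSupports.
type GroundingChunk = { web: { uri: string; title: string } };
type GroundingSupport = {
  segment: { text: string };
  groundingChunkIndices: number[];
};

function citeSupports(
  chunks: GroundingChunk[],
  supports: GroundingSupport[]
): string[] {
  return supports.map((s) => {
    const titles = s.groundingChunkIndices.map((i) => chunks[i].web.title);
    return `${s.segment.text} [${titles.join(", ")}]`;
  });
}
```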

Code Execution

Google Generative AI also supports code execution. Using the built-in CodeExecutionTool, you can make the model generate code, execute it, and use the results in its final completion:
import { CodeExecutionTool } from "@google/generative-ai";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const codeExecutionTool: CodeExecutionTool = {
  codeExecution: {}, // Simply pass an empty object to enable it.
};
const codeExecutionModel = new ChatGoogleGenerativeAI({
  model: "gemini-2.5-pro",
  temperature: 0,
  maxRetries: 0,
}).bindTools([codeExecutionTool]);

const codeExecutionResult = await codeExecutionModel.invoke("Use code execution to find the sum of the first and last 3 numbers in the following list: [1, 2, 3, 72638, 8, 727, 4, 5, 6]");

console.dir(codeExecutionResult.content, { depth: null });
[
  {
    type: 'text',
    text: "Here's how to find the sum of the first and last three numbers in the given list using Python:\n" +
      '\n'
  },
  {
    type: 'executableCode',
    executableCode: {
      language: 'PYTHON',
      code: '\n' +
        'my_list = [1, 2, 3, 72638, 8, 727, 4, 5, 6]\n' +
        '\n' +
        'first_three_sum = sum(my_list[:3])\n' +
        'last_three_sum = sum(my_list[-3:])\n' +
        'total_sum = first_three_sum + last_three_sum\n' +
        '\n' +
        'print(f"{first_three_sum=}")\n' +
        'print(f"{last_three_sum=}")\n' +
        'print(f"{total_sum=}")\n' +
        '\n'
    }
  },
  {
    type: 'codeExecutionResult',
    codeExecutionResult: {
      outcome: 'OUTCOME_OK',
      output: 'first_three_sum=6\nlast_three_sum=15\ntotal_sum=21\n'
    }
  },
  {
    type: 'text',
    text: 'Therefore, the sum of the first three numbers (1, 2, 3) is 6, the sum of the last three numbers (4, 5, 6) is 15, and their total sum is 21.\n'
  }
]
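If you only need the generated code and its output, you can walk the content array shown above. A sketch assuming the part shapes printed in that output (extractExecution is a hypothetical helper, not part of the package):

```typescript
// Sketch: pull the generated code and its execution output out of a
// content array shaped like the code-execution response above.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "executableCode"; executableCode: { language: string; code: string } }
  | {
      type: "codeExecutionResult";
      codeExecutionResult: { outcome: string; output: string };
    };

function extractExecution(parts: ContentPart[]): {
  code?: string;
  output?: string;
} {
  const found: { code?: string; output?: string } = {};
  for (const part of parts) {
    if (part.type === "executableCode") {
      found.code = part.executableCode.code;
    }
    if (part.type === "codeExecutionResult") {
      found.output = part.codeExecutionResult.output;
    }
  }
  return found;
}
```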
You can also pass this generation back to the model as chat history:
const codeExecutionExplanation = await codeExecutionModel.invoke([
  codeExecutionResult,
  {
    role: "user",
    content: "Please explain the question I asked, the code you wrote, and the answer you got.",
  }
])

console.log(codeExecutionExplanation.content);
You asked for the sum of the first three and the last three numbers in the list `[1, 2, 3, 72638, 8, 727, 4, 5, 6]`.

Here's a breakdown of the code:

1. **`my_list = [1, 2, 3, 72638, 8, 727, 4, 5, 6]`**: This line defines the list of numbers you provided.

2. **`first_three_sum = sum(my_list[:3])`**: This calculates the sum of the first three numbers.  `my_list[:3]` is a slice of the list that takes elements from the beginning up to (but not including) the index 3.  So, it takes elements at indices 0, 1, and 2, which are 1, 2, and 3. The `sum()` function then adds these numbers together.

3. **`last_three_sum = sum(my_list[-3:])`**: This calculates the sum of the last three numbers. `my_list[-3:]` is a slice that takes elements starting from the third element from the end and goes to the end of the list. So it takes elements at indices -3, -2, and -1 which correspond to 4, 5, and 6. The `sum()` function adds these numbers.

4. **`total_sum = first_three_sum + last_three_sum`**: This adds the sum of the first three numbers and the sum of the last three numbers to get the final result.

5. **`print(f"{first_three_sum=}")`**, **`print(f"{last_three_sum=}")`**, and **`print(f"{total_sum=}")`**: These lines print the calculated sums in a clear and readable format.


The output of the code was:

* `first_three_sum=6`
* `last_three_sum=15`
* `total_sum=21`

Therefore, the answer to your question is 21.

Context Caching

Context caching allows you to pass some content to the model once, cache the input tokens, and then refer to the cached tokens for subsequent requests to reduce cost. You can create a CachedContent object using the GoogleAICacheManager class, then pass it to your ChatGoogleGenerativeAI model with the useCachedContent() method.
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import {
  GoogleAICacheManager,
  GoogleAIFileManager,
} from "@google/generative-ai/server";

const fileManager = new GoogleAIFileManager(process.env.GOOGLE_API_KEY);
const cacheManager = new GoogleAICacheManager(process.env.GOOGLE_API_KEY);

// Upload the file to cache
const pathToVideoFile = "/path/to/video/file";
const displayName = "example-video";
const fileResult = await fileManager.uploadFile(pathToVideoFile, {
    displayName,
    mimeType: "video/mp4",
});

// Create the cached content once the upload completes
const cachedContent = await cacheManager.create({
    model: "models/gemini-2.5-flash",
    displayName: displayName,
    systemInstruction: "You are an expert video analyzer, and your job is to answer " +
      "the user's query based on the video file you have access to.",
    contents: [
        {
            role: "user",
            parts: [
                {
                    fileData: {
                        mimeType: fileResult.file.mimeType,
                        fileUri: fileResult.file.uri,
                    },
                },
            ],
        },
    ],
    ttlSeconds: 300,
});

// Pass the cached video to the model
const model = new ChatGoogleGenerativeAI({});
model.useCachedContent(cachedContent);

// Invoke the model with the cached video
await model.invoke("Summarize the video");
Note
  • The minimum input token count for context caching is 32,768, and the maximum is the same as the maximum for the given model.

Gemini Prompting FAQs

As of the time this doc was written (2023/12/12), Gemini has some restrictions on the types and structure of prompts it accepts. Specifically:
  1. When providing multimodal (image) inputs, you are restricted to at most 1 message of "human" (user) type. You cannot pass multiple messages (though the single human message may have multiple content entries).
  2. System messages are not natively supported and will be merged with the first human message if present.
  3. For regular chat conversations, messages must follow the human/ai/human/ai alternating pattern. You may not provide 2 consecutive AI or human messages in a row.
  4. Messages may be blocked if they violate the safety checks of the LLM. In this case, the model will return an empty response.
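Rule 3 above can be checked locally before sending a history to the model. A hypothetical sketch (violatesAlternation is not a LangChain API; "system" is skipped because, per rule 2, it gets merged into the first human message):

```typescript
// Sketch: detect two consecutive human or ai messages, which Gemini rejects.
type Role = "system" | "human" | "ai";

function violatesAlternation(roles: Role[]): boolean {
  const chat = roles.filter((r) => r !== "system");
  return chat.some((role, i) => i > 0 && chat[i - 1] === role);
}
```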

API reference

For detailed documentation of all ChatGoogleGenerativeAI features and configurations, head to the API reference.