# LLM call examples

## Simple chat completion

This is an example of telemetry generated for a chat completion call with system and user messages.
```mermaid
%%{init:
{
"sequence": { "messageAlign": "left", "htmlLabels":true },
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
participant A as Application
participant I as Instrumented Client
participant M as Model
A->>+I: #U+200D
I->>M: input = [system: You are a helpful bot, user: Tell me a joke about OpenTelemetry]
Note left of I: GenAI Client span
I-->M: assistant: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!
I-->>-A: #U+200D
```

### GenAI client span with content capturing disabled
| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| Trace ID | "4bf92f3577b34da6a3ce929d0e0e4736" |
| Span ID | "00f067aa0ba902b7" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |
### GenAI client span with content capturing enabled on span attributes

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| Trace ID | "4bf92f3577b34da6a3ce929d0e0e4736" |
| Span ID | "00f067aa0ba902b7" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "system",
    "parts": [
      {
        "type": "text",
        "content": "You are a helpful bot"
      }
    ]
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Tell me a joke about OpenTelemetry"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"
      }
    ],
    "finish_reason": "stop"
  }
]
```
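The two message attributes above are JSON-serialized strings. A minimal sketch of how an instrumentation might assemble them (the helper names are hypothetical and there is no SDK dependency here):

```python
import json

def format_input_messages(system_prompt: str, user_prompt: str) -> list:
    # Build the gen_ai.input.messages structure: one message per role,
    # each carrying a list of typed parts.
    return [
        {"role": "system", "parts": [{"type": "text", "content": system_prompt}]},
        {"role": "user", "parts": [{"type": "text", "content": user_prompt}]},
    ]

def format_output_messages(text: str, finish_reason: str = "stop") -> list:
    # gen_ai.output.messages records the finish_reason on each message.
    return [{
        "role": "assistant",
        "parts": [{"type": "text", "content": text}],
        "finish_reason": finish_reason,
    }]

# Span attribute values are strings, so the structures are JSON-serialized
# before being set on the span.
attributes = {
    "gen_ai.input.messages": json.dumps(format_input_messages(
        "You are a helpful bot", "Tell me a joke about OpenTelemetry")),
    "gen_ai.output.messages": json.dumps(format_output_messages(
        "Why did the developer bring OpenTelemetry to the party? "
        "Because it always knows how to trace the fun!")),
}
```

Deserializing either attribute with `json.loads` yields exactly the structures shown above.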
### GenAI telemetry with content capturing enabled on event attributes

#### Span

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| Trace ID | "4bf92f3577b34da6a3ce929d0e0e4736" |
| Span ID | "00f067aa0ba902b7" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |
#### Event

| Attribute | Value |
|---|---|
| Trace ID | "4bf92f3577b34da6a3ce929d0e0e4736" |
| Span ID | "00f067aa0ba902b7" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "system",
    "parts": [
      {
        "type": "text",
        "content": "You are a helpful bot"
      }
    ]
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Tell me a joke about OpenTelemetry"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"
      }
    ],
    "finish_reason": "stop"
  }
]
```
## Multimodal chat completion

Multimodal chat completions follow the same sequence and telemetry structure as the simple chat completion above, but include additional part types in the gen_ai.input.messages and gen_ai.output.messages span/event attributes:

- Blob parts, representing inline data sent to or emitted from the model.
- URI parts, representing references to remote files by URI.
- File parts, representing references to pre-uploaded files by ID.

These parts include an optional modality field to capture the general category of the content, and an optional mime_type field to capture the specific IANA media type of the content when known. See the normative JSON schema for more details.
### Multimodal input example

```json
[
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "What is in the attached data?"
      },
      // An image with a URI
      {
        "type": "uri",
        "modality": "image",
        "mime_type": "image/png",
        "uri": "https://raw.githubusercontent.com/open-telemetry/opentelemetry.io/refs/heads/main/static/img/logos/opentelemetry-horizontal-color.png"
      },
      // A video with a vendor specific URI
      {
        "type": "uri",
        "modality": "video",
        "mime_type": "video/mp4",
        "uri": "gs://my-bucket/my-video.mp4"
      },
      // An image with an opaque file ID, e.g. from the OpenAI files API
      {
        "type": "file",
        "file_id": "provider_fileid_123"
      },
      // An image with unknown mime_type but known modality
      {
        "type": "file",
        "modality": "image",
        "file_id": "provider_fileid_123"
      },
      // An inline image
      {
        "type": "blob",
        "modality": "image",
        "mime_type": "image/png",
        "content": "aGVsbG8gd29ybGQgaW1hZ2luZSB0aGlzIGlzIGFuIGltYWdlCg=="
      },
      // Inline audio
      {
        "type": "blob",
        "modality": "audio",
        "mime_type": "audio/wav",
        "content": "aGVsbG8gd29ybGQgaW1hZ2luZSB0aGlzIGlzIGFuIGltYWdlCg=="
      }
    ]
  }
]
```
### Multimodal output example

```json
[
  {
    "role": "assistant",
    "finish_reason": "stop",
    "parts": [
      // Model generated an inline image
      {
        "type": "blob",
        "modality": "image",
        "mime_type": "image/jpeg",
        "content": "aGVsbG8gd29ybGQgaW1hZ2luZSB0aGlzIGlzIGFuIGltYWdlCg=="
      }
    ]
  }
]
```
## Tool calls (functions)

This is an example of telemetry generated for a chat completion call with a user message and a function definition, which results in the model requesting the application to call the provided function. The application executes the function and then requests a completion again, this time with the tool response attached.
```mermaid
%%{init:
{
"sequence": { "messageAlign": "left", "htmlLabels":true },
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
participant A as Application
participant I as Instrumented Client
participant M as Model
A->>+I: #U+200D
I->>M: input = [user: What's the weather in Paris?]
Note left of I: GenAI Client span 1
I-->M: assistant: Call to the get_weather tool with Paris as the location argument.
I-->>-A: #U+200D
A -->> A: parse tool parameters<br/>execute tool<br/>update chat history
A->>+I: #U+200D
I->>M: input = [user: What's the weather in Paris?, assistant: get_weather tool call, tool: rainy, 57°F]
Note left of I: GenAI Client span 2
I-->M: assistant: The weather in Paris is rainy and overcast, with temperatures around 57°F
I-->>-A: #U+200D
```

### GenAI client span with content capturing disabled

The relationship between the spans below depends on how the user application code is written. If an encompassing span exists, they are likely to be sibling spans.
#### GenAI client span 1

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 17 |
| gen_ai.usage.input_tokens | 47 |
| gen_ai.response.finish_reasons | ["tool_calls"] |
#### Tool call

If the tool invocation is instrumented following the execute-tool span definition, it may look like this:

| Attribute | Value |
|---|---|
| Span name | "execute_tool get_weather" |
| gen_ai.tool.call.id | "call_VSPygqKTWdrhaFErNvMV18Yl" |
| gen_ai.tool.name | "get_weather" |
| gen_ai.operation.name | "execute_tool" |
| gen_ai.tool.type | "function" |
#### GenAI client span 2

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 52 |
| gen_ai.usage.input_tokens | 97 |
| gen_ai.response.finish_reasons | ["stop"] |
### GenAI client span with content capturing enabled on span attributes

The relationship between the spans below depends on how the user application code is written. If an encompassing span exists, they are likely to be sibling spans.
#### GenAI client span 1

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 17 |
| gen_ai.usage.input_tokens | 47 |
| gen_ai.response.finish_reasons | ["tool_calls"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Weather in Paris?"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "tool_call",
        "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
        "name": "get_weather",
        "arguments": {
          "location": "Paris"
        }
      }
    ],
    "finish_reason": "tool_call"
  }
]
```
#### Tool call

If the tool invocation is instrumented following the execute-tool span definition, it may look like this:

| Attribute | Value |
|---|---|
| Span name | "execute_tool get_weather" |
| gen_ai.tool.call.id | "call_VSPygqKTWdrhaFErNvMV18Yl" |
| gen_ai.tool.name | "get_weather" |
| gen_ai.operation.name | "execute_tool" |
| gen_ai.tool.type | "function" |
#### GenAI client span 2

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-call_VSPygqKTWdrhaFErNvMV18Yl" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 52 |
| gen_ai.usage.input_tokens | 97 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Weather in Paris?"
      }
    ]
  },
  {
    "role": "assistant",
    "parts": [
      {
        "type": "tool_call",
        "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
        "name": "get_weather",
        "arguments": {
          "location": "Paris"
        }
      }
    ]
  },
  {
    "role": "tool",
    "parts": [
      {
        "type": "tool_call_response",
        "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
        "response": "rainy, 57°F"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "The weather in Paris is currently rainy with a temperature of 57°F."
      }
    ],
    "finish_reason": "stop"
  }
]
```
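The application-side loop implied by the two spans above can be sketched as follows. This is an illustration only: `get_weather` and the message dictionaries are stand-ins following the part shapes shown above, not a real SDK integration.

```python
def get_weather(location: str) -> str:
    # Stand-in for the real tool implementation.
    return "rainy, 57°F"

TOOLS = {"get_weather": get_weather}

def handle_tool_calls(history: list, output_message: dict) -> list:
    # Append the assistant's tool_call message to the chat history,
    # execute each requested tool, and append a tool-role message
    # carrying a tool_call_response part with the matching call id.
    history.append(output_message)
    for part in output_message["parts"]:
        if part["type"] == "tool_call":
            result = TOOLS[part["name"]](**part["arguments"])
            history.append({
                "role": "tool",
                "parts": [{
                    "type": "tool_call_response",
                    "id": part["id"],
                    "response": result,
                }],
            })
    return history

history = [{"role": "user",
            "parts": [{"type": "text", "content": "Weather in Paris?"}]}]
model_output = {"role": "assistant",
                "parts": [{"type": "tool_call",
                           "id": "call_VSPygqKTWdrhaFErNvMV18Yl",
                           "name": "get_weather",
                           "arguments": {"location": "Paris"}}],
                "finish_reason": "tool_call"}
history = handle_tool_calls(history, model_output)
```

After this loop, `history` matches the gen_ai.input.messages value recorded on client span 2, and the application requests the next completion with it.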
## Chat history containing system instructions (content enabled)

Some providers allow instructions to be provided separately from the chat history provided in the input, or in addition to system (developer, etc.) messages provided in the input.

This example demonstrates the edge case of providing conflicting instructions to the OpenAI Responses API. In this case, the instructions are recorded in the gen_ai.system_instructions attribute.
```python
response = openai.responses.create(
    model="gpt-4",
    instructions="You must never tell jokes",
    input=[{"role": "system", "content": "You are a helpful assistant"},
           {"role": "user", "content": "Tell me a joke"}])
```
#### Span

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 10 |
| gen_ai.usage.input_tokens | 28 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.system_instructions | see below |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.system_instructions value

```json
[
  {
    "type": "text",
    "content": "You must never tell jokes"
  }
]
```

#### gen_ai.input.messages value

```json
[
  {
    "role": "system",
    "parts": [
      {
        "type": "text",
        "content": "You are a helpful assistant"
      }
    ]
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Tell me a joke"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "I'm sorry, but I can't assist with that"
      }
    ],
    "finish_reason": "stop"
  }
]
```
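When instructions arrive out-of-band like this, an instrumentation records them separately from the chat history. A minimal sketch (the `build_attributes` helper is hypothetical, no SDK dependency):

```python
import json

def build_attributes(instructions: str, input_messages: list) -> dict:
    # Out-of-band instructions go to gen_ai.system_instructions as a list
    # of typed parts; in-band system/user messages stay in
    # gen_ai.input.messages. Both are JSON-serialized string attributes.
    return {
        "gen_ai.system_instructions": json.dumps(
            [{"type": "text", "content": instructions}]),
        "gen_ai.input.messages": json.dumps(input_messages),
    }

attrs = build_attributes(
    "You must never tell jokes",
    [{"role": "system",
      "parts": [{"type": "text", "content": "You are a helpful assistant"}]},
     {"role": "user",
      "parts": [{"type": "text", "content": "Tell me a joke"}]}])
```

This keeps the conflicting instruction and the in-band system message visible as two distinct attributes, as in the tables above.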
## Chat completion with reasoning (content enabled)

| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| Trace ID | "4bf92f3577b34da6a3ce929d0e0e4736" |
| Span ID | "00f067aa0ba902b7" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 47 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "system",
    "parts": [
      {
        "type": "text",
        "content": "You are a helpful bot"
      }
    ]
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Tell me a joke about OpenTelemetry"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "reasoning",
        "content": "Alright, the user wants a joke about OpenTelemetry… Hmm, OpenTelemetry is all about distributed tracing and metrics, right? So maybe I can play with the word \"trace.\" That's a core concept — tracing requests through systems. But how do I make that funny? What if I take \"trace\" literally and apply it to something unexpected, like a party? If I personify OpenTelemetry as a tool that \"knows where the fun is,\" I can make a pun out of tracing requests vs. tracing enjoyment. Yeah, that could work — let me put it all together."
      },
      {
        "type": "text",
        "content": "Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"
      }
    ],
    "finish_reason": "stop"
  }
]
```
## Tool calls (built-in)

The format of gen_ai.output.messages has not yet been specified for built-in tool calls (see #2585 for details).

This is an example of telemetry generated for a chat completion call with the code_interpreter tool, which results in the model provider executing the tool and returning the response along with the tool call details.
```mermaid
%%{init:
{
"sequence": { "messageAlign": "left", "htmlLabels":true },
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
participant A as Application
participant I as Instrumented Client
participant M as Model
A ->>+ I:
I ->> M: input = [system: You are a helpful bot, user: Write Python code that generates a random number, executes it, and returns the result.]
Note left of I: GenAI Client span
I --> M: tool:code='import random ....'<br>assistant: The generated random number is 95.
I -->>- A:
```

### GenAI client span
| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 44 |
| gen_ai.usage.input_tokens | 385 |
| gen_ai.response.finish_reasons | ["stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

TODO

#### gen_ai.output.messages value

TODO
## Chat completion with multiple choices

This example covers the scenario when a user requests the model to generate two completions for the same prompt.
```mermaid
%%{init:
{
"sequence": { "messageAlign": "left", "htmlLabels":true },
"themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
}
}%%
sequenceDiagram
participant A as Application
participant I as Instrumented Client
participant M as Model
A->>+I: #U+200D
I->>M: input = [system: You are a helpful bot, user: Tell me a joke about OpenTelemetry]
Note left of I: GenAI Client span
I-->M: assistant: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!<br/> assistant: Why did OpenTelemetry get promoted? It had great span of control!
I-->>-A: #U+200D
```

### GenAI client span with content capturing enabled on span attributes
| Attribute | Value |
|---|---|
| Span name | "chat gpt-4" |
| gen_ai.provider.name | "openai" |
| gen_ai.operation.name | "chat" |
| gen_ai.request.model | "gpt-4" |
| gen_ai.request.max_tokens | 200 |
| gen_ai.request.top_p | 1.0 |
| gen_ai.response.id | "chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l" |
| gen_ai.response.model | "gpt-4-0613" |
| gen_ai.usage.output_tokens | 77 |
| gen_ai.usage.input_tokens | 52 |
| gen_ai.response.finish_reasons | ["stop", "stop"] |
| gen_ai.input.messages | see below |
| gen_ai.output.messages | see below |
#### gen_ai.input.messages value

```json
[
  {
    "role": "system",
    "parts": [
      {
        "type": "text",
        "content": "You are a helpful bot"
      }
    ]
  },
  {
    "role": "user",
    "parts": [
      {
        "type": "text",
        "content": "Tell me a joke about OpenTelemetry"
      }
    ]
  }
]
```

#### gen_ai.output.messages value

```json
[
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"
      }
    ],
    "finish_reason": "stop"
  },
  {
    "role": "assistant",
    "parts": [
      {
        "type": "text",
        "content": "Why did OpenTelemetry get promoted? It had great span of control!"
      }
    ],
    "finish_reason": "stop"
  }
]
```