
Integration: SambaNova

Use open language models served by SambaNova

Authors:
SambaNova Team


Overview

SambaNova is an AI company that develops the SN40L Reconfigurable Dataflow Unit (RDU), a processor that provides native dataflow processing and high performance for fast inference of large language models.

To get started with SambaNova, sign up for an API key here. This gives you access to the SambaNova Cloud API, which offers fast inference for open language models such as Llama 3 and Qwen.

Usage

The SambaNova Cloud API is OpenAI-compatible, so it can be used easily through the OpenAI Generators in Haystack.
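Because the endpoint speaks the OpenAI chat-completions protocol, any OpenAI-style client can target it by swapping in the SambaNova base URL. As an illustrative sketch (the payload fields follow the standard OpenAI schema; the endpoint and model name are the ones used later in this guide), this is the shape of the raw request — it only builds the payload, since actually sending it requires a valid key:

```python
import json
import os

# Shape of an OpenAI-compatible chat-completions request to SambaNova Cloud.
# Building the request only -- sending it requires a valid SAMBANOVA_API_KEY.
url = "https://api.sambanova.ai/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {os.environ.get('SAMBANOVA_API_KEY', 'YOUR_SAMBANOVA_API_KEY')}",
    "Content-Type": "application/json",
}
payload = {
    "model": "Meta-Llama-3.3-70B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

This is the same request that Haystack's OpenAI Generators construct under the hood once you point them at the SambaNova base URL.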

Using Generator

Below is an example of question answering with RAG, using a SambaNova-served Llama model and PromptBuilder. You need to set the environment variable SAMBANOVA_API_KEY and choose a compatible model.

```python
import os

from haystack import Document, Pipeline
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.utils import Secret

os.environ["SAMBANOVA_API_KEY"] = "YOUR_SAMBANOVA_API_KEY"

document_store = InMemoryDocumentStore()
document_store.write_documents(
    [
        Document(
            content="The Function-Calling API enables dynamic, agentic workflows by allowing the model "
            "to suggest and select function calls based on user input. This feature facilitates "
            "flexible agentic workflows that adapt to varied needs."
        ),
        Document(
            content="Interact with multimodal models directly through the Inference API (OpenAI compatible) "
            "and Playground for seamless text and image processing."
        ),
        Document(
            content="New Python and Gradio code samples make it easier to build and deploy applications "
            "on SambaNova Cloud. These examples simplify integrating AI models, enabling faster "
            "prototyping and reducing setup time."
        ),
    ]
)

template = """
Given only the following information, answer the question.
Ignore your own knowledge.

Context:
{% for document in documents %}
    {{ document.content }}
{% endfor %}

Question: {{ query }}?
"""

llm = OpenAIGenerator(
    api_key=Secret.from_env_var("SAMBANOVA_API_KEY"),
    api_base_url="https://api.sambanova.ai/v1",
    model="Meta-Llama-3.3-70B-Instruct",
    generation_kwargs={"max_tokens": 512},
)

pipe = Pipeline()

pipe.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", llm)
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

query = "Functionalities of Sambanova API?"

response = pipe.run({"prompt_builder": {"query": query}, "retriever": {"query": query}})

print(response["llm"]["replies"])
```
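To see what the LLM actually receives, it can help to render the prompt by hand. The snippet below is a plain-Python approximation of what PromptBuilder does with the Jinja template above (PromptBuilder itself uses Jinja2; this sketch only mimics the `for` loop and variable substitution, with shortened document texts for brevity):

```python
# Stand-ins for the retrieved documents' content fields.
documents = [
    "The Function-Calling API enables dynamic, agentic workflows.",
    "Interact with multimodal models directly through the Inference API.",
]
query = "Functionalities of Sambanova API"

# Mimic the Jinja template: list each document, then append the question.
context = "\n".join(f"    {content}" for content in documents)
prompt = (
    "Given only the following information, answer the question.\n"
    "Ignore your own knowledge.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}?"
)
print(prompt)
```

The rendered string is what the `prompt_builder` component passes to the `llm` component over the `pipe.connect("prompt_builder", "llm")` edge.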

Using ChatGenerator

Here is an example of a multi-turn conversation with Llama 3.3. You need to set the environment variable SAMBANOVA_API_KEY and choose a compatible model.

```python
import os

from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

os.environ["SAMBANOVA_API_KEY"] = "YOUR_SAMBANOVA_API_KEY"

generator = OpenAIChatGenerator(
    api_key=Secret.from_env_var("SAMBANOVA_API_KEY"),
    api_base_url="https://api.sambanova.ai/v1",
    model="Meta-Llama-3.3-70B-Instruct",
    generation_kwargs={"max_tokens": 512},
)

messages = []

while True:
    msg = input("Enter your message or Q to exit\n🧑 ")
    if msg == "Q":
        break
    messages.append(ChatMessage.from_user(msg))
    response = generator.run(messages=messages)
    assistant_resp = response["replies"][0]
    print("🤖 " + assistant_resp.text)
    messages.append(assistant_resp)
```
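The loop above appends every turn to `messages`, so a long session will eventually exceed the model's context window. One common remedy is a sliding-window trim that keeps only the most recent turns. A minimal, Haystack-independent sketch (the cap of 10 messages is an arbitrary choice for illustration; plain dicts stand in for ChatMessage objects):

```python
def trim_history(messages, max_messages=10):
    """Keep only the most recent messages so the prompt stays bounded."""
    if len(messages) <= max_messages:
        return messages
    return messages[-max_messages:]

# Usage: 25 accumulated turns are trimmed down to the latest 10.
history = [{"role": "user", "content": f"turn {i}"} for i in range(25)]
history = trim_history(history)
print(len(history))  # 10
```

In practice you might also pin a system message at the front before trimming, so standing instructions survive even when older turns are dropped.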