SDK Reference

Everything you need to instrument your AI agents and get quality monitoring in minutes.

Quick Start

Get from zero to quality monitoring in under 5 minutes.

1 Install the SDK

terminal
$ npm install @sentygent/sdk

2 Create an agent in the dashboard

Go to app.sentygent.com, create a new agent, and copy its slug. You will also find your API key under Settings.

3 Set environment variables

.env
SENTYGENT_API_KEY=sk-...
SENTYGENT_AGENT_SLUG=my-agent
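
Note that Node does not load .env files automatically. You can start your process with Node's built-in flag (Node 20.6+), or load the file yourself at the top of your entry file, for example with the widely used dotenv package:

```typescript
// Option 1: no code needed — start with: node --env-file=.env agent.js  (Node 20.6+)
// Option 2: with the dotenv package installed, put this before any process.env reads:
import 'dotenv/config';
```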

4 Instrument your agent

agent.ts
import Anthropic from '@anthropic-ai/sdk';
import { SentygentClient, instrumentAnthropic } from '@sentygent/sdk';

const sentygent = new SentygentClient({
  apiKey: process.env.SENTYGENT_API_KEY,
  agent: process.env.SENTYGENT_AGENT_SLUG,
});
const anthropic = instrumentAnthropic(new Anthropic(), sentygent);

// Use anthropic as normal — all calls are traced automatically.

// Flush pending events before the process exits
await sentygent.shutdown();

Anthropic

Use instrumentAnthropic to wrap your Anthropic client. All messages.create() calls are captured automatically — model, tokens, cost, and latency.

anthropic-chatbot.ts
import Anthropic from '@anthropic-ai/sdk';
import { SentygentClient, instrumentAnthropic } from '@sentygent/sdk';

const sentygent = new SentygentClient({
  apiKey: process.env.SENTYGENT_API_KEY,
  agent: 'my-chatbot',
  debug: true,
});

// Wrap once — all subsequent calls are captured
const anthropic = instrumentAnthropic(new Anthropic(), sentygent);

async function chat(userMessage: string) {
  await sentygent.trace(`chat-${Date.now()}`, async (span) => {
    span.captureLifecycle('message_received', { content: userMessage });
    const response = await anthropic.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: userMessage }],
    });
    const block = response.content[0];
    const text = block.type === 'text' ? block.text : '';
    span.captureLifecycle('message_sent', { content: text });
  });
}

await sentygent.shutdown();

Note: Always call await sentygent.shutdown() before your process exits. This flushes any pending events to the Sentygent backend.
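
For long-lived services, one way to guarantee that flush is to hook process termination signals. A minimal sketch — the inline `sentygent` object below is a stand-in for your real client instance:

```typescript
// Stand-in for your real SentygentClient instance, so this sketch runs on its own
const sentygent = { shutdown: async () => { /* flushes pending events */ } };

let flushed = false;
async function flushAndExit(code: number) {
  if (flushed) return; // only flush once, even if multiple signals arrive
  flushed = true;
  await sentygent.shutdown(); // send any buffered events to the backend
  process.exit(code);
}

// Conventional exit codes: 128 + signal number
process.once('SIGINT', () => void flushAndExit(130));
process.once('SIGTERM', () => void flushAndExit(143));
```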

AWS Bedrock

Use instrumentBedrock to wrap your BedrockRuntimeClient. Supports ConverseCommand with automatic token tracking. Use sentygent.request() for request-response patterns.

bedrock-agent.ts
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime';
import { SentygentClient, instrumentBedrock } from '@sentygent/sdk';

const sentygent = new SentygentClient({
  apiKey: process.env.SENTYGENT_API_KEY,
  agent: 'my-bedrock-agent',
});

const bedrock = instrumentBedrock(
  new BedrockRuntimeClient({ region: process.env.AWS_REGION }),
  sentygent,
);

// userQuestion and retrieveContext are placeholders for your own request input and lookup logic
await sentygent.request(`session-${Date.now()}`, async (span) => {
  span.captureLifecycle('message_received', { content: userQuestion });

  // Optional: capture RAG retrieval step
  await span.captureRetrieval({
    provider: 'pinecone',
    query: userQuestion,
    execute: () => retrieveContext(userQuestion),
    extractResults: (r) => ({ resultsCount: r.chunks.length }),
    searchType: 'semantic',
  });

  const response = await bedrock.send(new ConverseCommand({
    modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
    messages: [{ role: 'user', content: [{ text: userQuestion }] }],
    inferenceConfig: { maxTokens: 512 },
  }));

  span.captureLifecycle('message_sent', {
    content: response.output?.message?.content?.[0]?.text,
  });
});

await sentygent.shutdown();

Other Providers

For OpenAI, Cohere, or any other provider without auto-instrumentation, use span.captureLLM() to manually capture LLM calls with full metadata.

openai-agent.ts
import OpenAI from 'openai';
import { SentygentClient } from '@sentygent/sdk';

const sentygent = new SentygentClient({ apiKey: process.env.SENTYGENT_API_KEY, agent: 'my-agent' });
const openai = new OpenAI();

// userMessage is a placeholder for your own user input
await sentygent.trace(`chat-${Date.now()}`, async (span) => {
  // captureLLM wraps your call and extracts usage automatically
  const result = await span.captureLLM({
    provider: 'openai',
    model: 'gpt-4o',
    execute: () => openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: userMessage }],
    }),
    extractUsage: (r) => ({
      promptTokens: r.usage.prompt_tokens,
      completionTokens: r.usage.completion_tokens,
    }),
  });
  console.log(result.choices[0].message.content);
});

await sentygent.shutdown();

Multi-agent

Use span.child(name, { agent: 'slug' }) to create child spans for sub-agents. Each sub-agent appears separately in the dashboard with its own cost breakdown.

Prerequisites: Register each agent slug (e.g. orchestrator, research-agent, writer-agent) in the Sentygent dashboard before running.

multi-agent.ts
import { SentygentClient } from '@sentygent/sdk';

const sentygent = new SentygentClient({
  apiKey: process.env.SENTYGENT_API_KEY,
  agent: 'orchestrator',
});

// userMessage and callLLM are placeholders for your own input handling and provider calls
await sentygent.trace(`multi-agent-${Date.now()}`, async (span) => {
  span.captureLifecycle('message_received', { content: userMessage });

  // Research sub-agent — appears as separate agent in dashboard
  const researchSpan = span.child('research', { agent: 'research-agent' });
  const research = await researchSpan.captureLLM({
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
    execute: () => callLLM('Summarize AI safety research findings'),
    extractUsage: (r) => r.usage,
  });

  // Writer sub-agent — per-agent cost visible in dashboard
  const writerSpan = span.child('write', { agent: 'writer-agent' });
  await writerSpan.captureLLM({
    provider: 'anthropic',
    model: 'claude-sonnet-4-20250514',
    execute: () => callLLM('Write polished summary from research notes'),
    extractUsage: (r) => r.usage,
  });

  span.captureLifecycle('message_sent', { content: research.text });
});

await sentygent.shutdown();

RAG / Retrieval

Use span.captureRetrieval() to instrument knowledge base lookups, vector searches, or any retrieval step. This surfaces retrieval quality metrics in your dashboard.

rag-agent.ts
await span.captureRetrieval({
  provider: 'pinecone',         // your KB / vector DB name
  query: userQuestion,
  execute: () => vectorSearch(userQuestion),
  extractResults: (r) => ({
    resultsCount: r.chunks.length,
    relevantCount: r.chunks.filter((c) => c.score >= 0.5).length,
    meanScore: r.chunks.reduce((s, c) => s + c.score, 0) / r.chunks.length,
    topScore: Math.max(...r.chunks.map((c) => c.score)),
  }),
  searchType: 'semantic',
  tags: { step: 'retrieve' },
});
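
The metric math in extractResults can be factored into a small pure helper and reused across retrieval calls. A sketch — the Chunk shape is an assumption about what your vector search returns:

```typescript
// Hypothetical shape of a vector-search result chunk
interface Chunk { text: string; score: number }

// Computes the same metrics as the extractResults callback above
function computeRetrievalMetrics(chunks: Chunk[], relevanceThreshold = 0.5) {
  const scores = chunks.map((c) => c.score);
  return {
    resultsCount: chunks.length,
    relevantCount: scores.filter((s) => s >= relevanceThreshold).length,
    meanScore: scores.length ? scores.reduce((sum, s) => sum + s, 0) / scores.length : 0,
    topScore: scores.length ? Math.max(...scores) : 0,
  };
}
```

With this in place, extractResults becomes `(r) => computeRetrievalMetrics(r.chunks)`, and empty result sets no longer divide by zero.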

captureRetrieval options

provider (string): Knowledge base or vector DB name
query (string): The search query
execute (() => Promise<T>): Function that performs the retrieval
extractResults ((r: T) => object): Extract metrics from the result
searchType ('semantic' | 'keyword' | string): Type of search performed
tags (Record<string, string>): Dimensional metadata for filtering

API Reference

Key methods of the Sentygent SDK.

SentygentClient

new SentygentClient({ apiKey, agent }): Initialize the client. agent is the slug from your dashboard.
sentygent.trace(id, fn): Start a trace for a full conversation or workflow.
sentygent.request(id, fn): Start a trace for a single request-response cycle.
sentygent.shutdown(): Flush pending events and shut down. Call before process exit.

Instrumentation helpers

instrumentAnthropic(client, sentygent): Wraps the Anthropic SDK. All messages.create() calls are auto-traced.
instrumentBedrock(client, sentygent): Wraps BedrockRuntimeClient. Supports ConverseCommand.

Span methods

span.captureLifecycle(event, data?): Record a lifecycle event (e.g. message_received, message_sent).
span.captureLLM(options): Manually capture an LLM call with provider, model, execute, extractUsage.
span.captureTool(options): Capture a tool call (name, input, execute).
span.captureRetrieval(options): Capture a RAG/retrieval step with metrics.
span.child(name, { agent }): Create a child span for a sub-agent with its own slug.
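
span.captureTool is listed above but not demonstrated elsewhere in this guide. Here is a sketch of how a call might look, assuming its options mirror the captureLLM/captureRetrieval shape; the inline span stub and the getWeather tool are stand-ins, not part of the SDK:

```typescript
// Stand-in for the SDK span so this sketch runs on its own;
// in real code, `span` comes from sentygent.trace() or sentygent.request()
const span = {
  captureTool: async <T>(opts: { name: string; input: unknown; execute: () => Promise<T> }): Promise<T> =>
    opts.execute(),
};

// Hypothetical tool implementation: substitute your own
async function getWeather(city: string): Promise<{ tempC: number }> {
  return { tempC: 21 };
}

// Record the tool call on the span; the tool's result passes through unchanged
const forecast = await span.captureTool({
  name: 'get_weather',
  input: { city: 'Berlin' },
  execute: () => getWeather('Berlin'),
});
```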