askill
domino-genai-tracing


Trace and evaluate GenAI applications including LLM calls, agents, RAG pipelines, and multi-step AI systems in Domino. Uses the Domino SDK (@add_tracing decorator, DominoRun context) with MLflow 3.2.0. Captures token usage, latency, cost, tool calls, and errors. Supports LLM-as-judge evaluators and custom metrics. Use when building agents, debugging LLM applications, or needing audit trails for GenAI systems.

1 star · 1.2k downloads · Updated 1/3/2026

Package Files

SKILL.md

Domino GenAI Tracing Skill

This skill provides comprehensive knowledge for tracing and evaluating GenAI applications in Domino Data Lab, including LLM calls, agents, RAG pipelines, and multi-step AI systems.

Key Concepts

What GenAI Tracing Captures

The Domino SDK automatically captures:

  • Token usage - Input and output tokens per call
  • Latency - Time for each operation
  • Cost - Estimated cost per call
  • Tool calls - Function/tool invocations
  • Errors - Exceptions and failure modes
  • Model parameters - Temperature, max_tokens, etc.
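
For illustration, one captured span can be thought of as a record mirroring the bullet list above. The field names below are hypothetical, chosen to match the list, and are not the Domino SDK's actual trace schema:

```python
# Hypothetical shape of one captured span, mirroring the fields listed above.
# Illustration only -- not the Domino SDK's actual trace schema.
span = {
    "name": "my_agent",
    "input_tokens": 152,        # token usage: prompt side
    "output_tokens": 487,       # token usage: completion side
    "latency_ms": 1240.5,       # wall-clock time for the call
    "estimated_cost_usd": 0.0031,
    "tool_calls": ["search_docs"],
    "error": None,              # exception info if the call failed
    "params": {"temperature": 0.2, "max_tokens": 1024},
}

total_tokens = span["input_tokens"] + span["output_tokens"]
```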

Core Components

  1. @add_tracing decorator - Wraps functions to capture traces
  2. DominoRun context manager - Groups traces into runs with aggregation
  3. Evaluators - Custom functions to score outputs
  4. MLflow integration - View traces in Experiment Manager
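
To make the decorator's role concrete, here is a minimal stand-in (not the Domino implementation) showing how a tracing wrapper can record latency and errors around a function call; the real `@add_tracing` additionally captures tokens, cost, and MLflow spans:

```python
import functools
import time

def add_tracing_sketch(name):
    """Toy stand-in for a tracing decorator: records latency and errors.

    Illustrative only; @add_tracing in the Domino SDK does much more
    (token counts, cost, MLflow spans, framework autologging).
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {"name": name, "error": None}
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                record["error"] = repr(exc)  # capture the failure mode
                raise
            finally:
                record["latency_ms"] = (time.perf_counter() - start) * 1000
                wrapper.last_trace = record  # stash the trace for inspection
        return wrapper
    return decorator

@add_tracing_sketch(name="echo")
def echo(text):
    return text.upper()
```

Calling `echo("hi")` returns the result as usual while `echo.last_trace` holds the recorded name, latency, and error status.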

Quick Start

1. Environment Setup

Requires MLflow 3.2.0 and Domino SDK with AI systems support:

```dockerfile
RUN pip install mlflow==3.2.0
RUN pip install --no-cache-dir "git+https://github.com/dominodatalab/python-domino.git@master#egg=dominodatalab[data,aisystems]"
```

2. Basic Tracing

```python
import mlflow
from domino.agents.tracing import add_tracing
from domino.agents.logging import DominoRun

# `llm` stands for an already-initialized LLM client (e.g. a LangChain model)
@add_tracing(name="my_agent", autolog_frameworks=["openai"])
def my_agent(query: str) -> str:
    response = llm.invoke(query)
    return response

# Run with tracing
with DominoRun() as run:
    result = my_agent("What is machine learning?")
```

3. With Evaluators

```python
def quality_evaluator(inputs, output):
    """Evaluate response quality."""
    return {"quality_score": assess_quality(output)}

@add_tracing(name="my_agent", evaluator=quality_evaluator)
def my_agent(query: str) -> str:
    return llm.invoke(query)
```
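
Evaluators can return several metrics at once. The self-contained sketch below assumes `inputs` is dict-like and uses an invented keyword-matching heuristic for illustration; in practice the scoring step might call an LLM-as-judge instead:

```python
def coverage_evaluator(inputs, output):
    """Score an answer on length and coverage of query keywords.

    Heuristic stand-in for illustration; a real evaluator might call an
    LLM-as-judge rather than keyword matching.
    """
    query = inputs.get("query", "")
    # Treat words longer than 3 characters as the query's keywords
    keywords = {w.lower() for w in query.split() if len(w) > 3}
    answer = output.lower()
    hits = sum(1 for w in keywords if w in answer)
    return {
        "keyword_coverage": hits / len(keywords) if keywords else 0.0,
        "answer_length": len(output.split()),
    }
```

Returning a flat dict of numeric scores keeps each metric individually visible when runs are compared.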

Framework Support

| Framework | Auto-log Command             |
|-----------|------------------------------|
| OpenAI    | `mlflow.openai.autolog()`    |
| Anthropic | `mlflow.anthropic.autolog()` |
| LangChain | `mlflow.langchain.autolog()` |

Viewing Traces

  1. Navigate to Experiments in your Domino project
  2. Select the experiment (format: tracing-{username})
  3. Select a run
  4. View the Traces tab for span tree visualization

Blueprint Reference

Official GenAI Tracing Tutorial: https://github.com/dominodatalab/GenAI-Tracing-Tutorial

Install

Requires askill CLI v1.0+

Metadata

License: unknown
Version: -
Updated: 1/3/2026
Publisher: jvdomino

Tags

github · llm · observability