👋 Hi, I'm Waqas — a Software Architect and Technical Consultant specializing in .NET, Azure, microservices, and API-first system design.
I help companies build reliable, maintainable, and high-performance backend platforms that scale.
OpenTelemetry in .NET: traces, metrics, and export to Azure or Jaeger.
June 19, 2025 · Waqas Ahmad
Introduction
This guidance applies when you are adding or standardising observability in .NET services; it breaks down when your constraints differ (for example, there is no backend to query). I’ve applied it in real projects and refined the takeaways over time.
Distributed systems need vendor-neutral observability—OpenTelemetry (OTel) is the standard for traces, metrics, and logs, and in .NET you use the OTel SDK to instrument and export to Azure Monitor, Jaeger, or any OTLP backend. This article explains how OTel works in .NET, how to add it at application level, and how to query telemetry so support and platform teams can find failures and latency quickly. For architects and tech leads, standardising on OTel avoids vendor lock-in and keeps one request visible as one trace across services when W3C Trace Context is propagated.
System scale: One service to many; microservices or monoliths that need observability (traces, metrics, logs). Applies when you’re adding or standardising on OpenTelemetry in .NET.
Team size: Dev and ops; someone must own instrumentation, sampling, and export (OTLP, Azure Monitor). Works when the team can add ActivitySource/Meter and configure exporters.
Time / budget pressure: Fits when you need to debug cross-service flows or meet SLOs; breaks down when there’s no backend to query (then instrument but plan where data goes). Cost of full tracing can be high—sampling matters.
Technical constraints: .NET (ASP.NET Core, HttpClient, EF Core); OpenTelemetry SDK; OTLP or Azure Monitor (or Jaeger, Zipkin). Assumes you can run an exporter and query traces.
Non-goals: This article does not optimise for a specific vendor only; it focuses on vendor-neutral OTel and distributed tracing in .NET.
What is OpenTelemetry and what is distributed tracing?
OTel is an open standard for observability: you instrument once with its APIs and SDKs, then export to whatever backend you use (Azure Monitor, Jaeger, Zipkin, Prometheus). Swap backends without changing code.
Distributed tracing means following a single request as it crosses services. Each service creates spans—e.g. “HTTP request”, “DB call”, “ProcessOrder”. Trace ID and span ID are sent in HTTP (or message) headers so every span links into one trace. You get one timeline: API Gateway → Order Service → SQL → Billing. In .NET the SDK sits on top of System.Diagnostics.Activity; you plug in instrumentation (ASP.NET Core, HttpClient, EF Core) and an exporter, and data flows out.
The three (and a half) types of telemetry in OpenTelemetry
OpenTelemetry defines three main signals (types of telemetry), plus baggage and span events that support them. Understanding each type helps you choose what to emit and how to query it.
1. Traces
Traces answer: What path did this request take, and how long did each step take? A trace is a tree of spans. Each span represents one unit of work: an HTTP request, a database call, a call to another service, or a business operation like “ValidateOrder”. A span has:
Name (e.g. GET /orders/123, ProcessOrder)
Start and end time (duration is derived)
Trace ID (same for all spans in one request)
Span ID (unique for this span)
Parent span ID (links to the parent so the tree can be built)
Events (timestamped points in time within the span, e.g. “cache miss”, “validation failed”)
Traces are what you use for distributed tracing: following a request across services and seeing the full timeline. In .NET you create spans via ActivitySource and Activity (from System.Diagnostics).
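As a minimal sketch of those relationships using only the BCL (the `ActivityListener` stands in for what the OpenTelemetry SDK does when you call `AddSource`; the source name `MyApp.Orders` is illustrative):

```csharp
using System;
using System.Diagnostics;

// Register a listener so StartActivity returns a real Activity —
// the OpenTelemetry SDK does this for you when the source is registered.
var source = new ActivitySource("MyApp.Orders");
using var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "MyApp.Orders",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

using var parent = source.StartActivity("ProcessOrder")!;
using var child = source.StartActivity("ValidateOrder")!;

// Both spans share one trace ID; the child records its parent's span ID.
Console.WriteLine(parent.TraceId == child.TraceId);     // True
Console.WriteLine(child.ParentSpanId == parent.SpanId); // True
```

Because `ValidateOrder` is started while `ProcessOrder` is the current activity, the parent link is set automatically — that is how the span tree is built.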
2. Metrics
Metrics answer: How many? How fast? What is the current value? They are aggregated numerical data: request count, error count, latency histogram, queue length, CPU usage. OpenTelemetry defines four metric instruments:
| Instrument | Description | Example use |
| --- | --- | --- |
| Counter | Monotonically increasing value (only goes up) | Request count, errors total |
| UpDownCounter | Value that can go up or down | Queue length, active connections |
| Histogram | Distribution of values (e.g. latency) | Request duration, payload size |
| ObservableGauge | Current value, sampled when read | CPU %, memory, current queue depth |
In .NET you create and record metrics via Meter and instruments (e.g. CreateCounter, CreateHistogram). Metrics are exported separately from traces (same OTLP pipeline, but different signal).
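A minimal sketch of creating and recording each instrument kind with the BCL `Meter` API (the meter and instrument names are illustrative, not from the article):

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

// One Meter per bounded context; register its name via AddMeter("MyApp.Orders").
var meter = new Meter("MyApp.Orders", "1.0.0");

Counter<long> ordersProcessed = meter.CreateCounter<long>("orders.processed");
Histogram<double> orderDuration = meter.CreateHistogram<double>("order.duration", unit: "ms");
UpDownCounter<long> activeOrders = meter.CreateUpDownCounter<long>("orders.active");

// Record measurements with tags (attributes) so you can filter in the backend.
ordersProcessed.Add(1, new KeyValuePair<string, object?>("order.status", "completed"));
orderDuration.Record(42.5, new KeyValuePair<string, object?>("operation", "ProcessOrder"));
activeOrders.Add(1);   // connection opened / order started
activeOrders.Add(-1);  // connection closed / order finished
```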
3. Logs
Logs answer: What happened, when, and in what context? A log record has a timestamp, a severity (e.g. Info, Error), a body (message), and optional attributes. OpenTelemetry does not replace your logger; it integrates with ILogger so that logs are exported in OTLP format and can be correlated with trace ID and span ID. That way you can search logs by trace ID and see all log lines for a single request. In .NET you use your normal ILogger; the OpenTelemetry logging provider adds trace/span context and exports to your backend.
4. Baggage and span events
Baggage is key-value data that is propagated with the trace context (e.g. in tracestate or a baggage header). Use it for data that downstream services need (e.g. tenant ID, feature flags). Span events are timestamped events inside a span (e.g. “cache hit”, “retry attempt 2”). They are part of the trace, not a separate log stream; useful for debugging the internal steps of one operation.
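A small sketch of both concepts using the BCL `Activity` API (the listener simulates SDK registration; names like `tenant.id` are illustrative — note that OpenTelemetry also has its own `Baggage` type in the SDK):

```csharp
using System;
using System.Diagnostics;

var source = new ActivitySource("MyApp.Orders");
using var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "MyApp.Orders",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

using var activity = source.StartActivity("ProcessOrder")!;

// Baggage: key-value data carried with the trace context to downstream services.
activity.AddBaggage("tenant.id", "acme");

// Span event: a timestamped marker inside this span, exported with the trace.
activity.AddEvent(new ActivityEvent("cache miss"));

Console.WriteLine(activity.GetBaggageItem("tenant.id")); // acme
```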
You need a few NuGet packages: OpenTelemetry, OpenTelemetry.Instrumentation.AspNetCore, OpenTelemetry.Instrumentation.Http, OpenTelemetry.Exporter.OpenTelemetryProtocol (or Azure.Monitor.OpenTelemetry.Exporter for Azure). Optionally add OpenTelemetry.Instrumentation.EntityFrameworkCore for EF Core and OpenTelemetry.Extensions.Hosting for the host integration.
Below is a full Program.cs that enables Tracing, Metrics, and Logs, with ASP.NET Core and HttpClient instrumentation and OTLP export. You can swap the exporter for Azure Monitor or Jaeger by changing the exporter configuration.
// Program.cs – OpenTelemetry with Traces, Metrics, and Logs
using OpenTelemetry.Logs;
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// ---- 1. Tracing and metrics ----
builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource
        .AddService(serviceName: builder.Environment.ApplicationName))
    .WithTracing(tracing =>
    {
        tracing.AddAspNetCoreInstrumentation(options =>
        {
            options.RecordException = true; // capture exceptions on spans
        });
        tracing.AddHttpClientInstrumentation(options =>
        {
            options.RecordException = true;
        });
        // Add your custom ActivitySource so spans are exported
        tracing.AddSource("MyApp.Orders");
        tracing.AddOtlpExporter(options =>
        {
            // gRPC OTLP uses the base endpoint (no signal path);
            // OTLP over HTTP would be http://localhost:4318/v1/traces
            options.Endpoint = new Uri(builder.Configuration["Otlp:Endpoint"] ?? "http://localhost:4317");
        });
    })
    .WithMetrics(metrics =>
    {
        metrics.AddAspNetCoreInstrumentation();
        metrics.AddHttpClientInstrumentation();
        metrics.AddMeter("MyApp.Orders"); // your custom meter
        metrics.AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri(builder.Configuration["Otlp:Endpoint"] ?? "http://localhost:4317");
        });
    });

// ---- 2. Logs (OpenTelemetry logger provider) ----
builder.Logging.AddOpenTelemetry(logging =>
{
    logging.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(builder.Environment.ApplicationName));
    logging.AddOtlpExporter(options =>
    {
        options.Endpoint = new Uri(builder.Configuration["Otlp:Endpoint"] ?? "http://localhost:4317");
    });
});

var app = builder.Build();
app.MapControllers();
app.Run();
What each part does:
ConfigureResource: Sets the service name (and optional version/environment) so that all telemetry is tagged with your app name.
AddAspNetCoreInstrumentation: Creates a span for each incoming HTTP request (method, path, status code).
AddHttpClientInstrumentation: Creates a child span for each outgoing HTTP call and injects W3C Trace Context headers so the downstream service continues the same trace.
AddSource("MyApp.Orders"): Registers your custom ActivitySource name so that spans you create from that source are exported.
AddMeter("MyApp.Orders"): Registers your custom Meter name so that metrics you record from that meter are exported.
AddOtlpExporter: Sends traces/metrics/logs to an OTLP endpoint (e.g. collector, Jaeger, or Azure Monitor OTLP). For Azure Monitor you would use UseAzureMonitor() or the Azure Monitor exporter package and connection string instead.
Tracing API in .NET: ActivitySource and Activity (spans)
In .NET, spans are represented by System.Diagnostics.Activity. You create them from an ActivitySource. The OpenTelemetry SDK listens to activities from sources you register (e.g. AddSource("MyApp.Orders")).
// Usage: start a span for a business operation
public class OrderService
{
    public async Task<Order> ProcessOrderAsync(int orderId, CancellationToken ct = default)
    {
        using var activity = OrderActivitySource.Source.StartActivity("ProcessOrder");
        if (activity is null) return await GetOrderFromDbAsync(orderId, ct);
        activity.SetTag("order.id", orderId);
        try
        {
            var order = await GetOrderFromDbAsync(orderId, ct);
            activity.SetTag("order.status", order?.Status ?? "unknown");
            return order;
        }
        catch (Exception ex)
        {
            activity.SetStatus(ActivityStatusCode.Error, ex.Message);
            activity.RecordException(ex);
            throw;
        }
    }
}
Important: If no listener is attached (e.g. OpenTelemetry not configured or the source not registered), StartActivity can return null. The if (activity is null) pattern avoids extra tracing work when tracing is disabled. The using declaration disposes the activity, which sets its end time and ends the span.
Attribute types: SetTag accepts string, int, bool, and similar; they are serialised as strings in the backend. Use consistent key names (e.g. order.id, http.status_code) so that you can query by them later.
Metrics API in .NET: Meter and instruments
Metrics are created from a Meter. You register the meter name in AddMeter("MyApp.Orders") so that the SDK exports your metrics. The main functions are CreateCounter, CreateHistogram, CreateUpDownCounter, and CreateObservableGauge.
// Inject OrderMetrics via DI; in ProcessOrderAsync:
var sw = Stopwatch.StartNew();
try
{
var order = await ProcessOrderAsync(orderId, ct);
_orderMetrics.RecordOrderProcessed(order.Status);
return order;
}
catch (Exception ex)
{
_orderMetrics.RecordOrderFailed(ex.Message);
throw;
}
finally
{
_orderMetrics.RecordOrderDuration(sw.ElapsedMilliseconds, "ProcessOrder");
}
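The `_orderMetrics` helper used above isn't defined in the article; a minimal sketch of such a class (names like `orders.processed` are illustrative) could look like this — register it as a singleton in DI and register the meter name via `AddMeter("MyApp.Orders")`:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

// Hypothetical metrics helper backing the usage snippet above.
public sealed class OrderMetrics
{
    private readonly Counter<long> _processed;
    private readonly Counter<long> _failed;
    private readonly Histogram<double> _duration;

    public OrderMetrics()
    {
        var meter = new Meter("MyApp.Orders", "1.0.0");
        _processed = meter.CreateCounter<long>("orders.processed");
        _failed = meter.CreateCounter<long>("orders.failed");
        _duration = meter.CreateHistogram<double>("order.duration", unit: "ms");
    }

    public void RecordOrderProcessed(string status) =>
        _processed.Add(1, new KeyValuePair<string, object?>("order.status", status));

    // Prefer a low-cardinality reason code over raw exception messages as a tag.
    public void RecordOrderFailed(string reason) =>
        _failed.Add(1, new KeyValuePair<string, object?>("error.reason", reason));

    public void RecordOrderDuration(double milliseconds, string operation) =>
        _duration.Record(milliseconds, new KeyValuePair<string, object?>("operation", operation));
}
```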
| Instrument | Create method | When to use |
| --- | --- | --- |
| Counter | `CreateCounter<long>(name)` | Things that only increase: request count, errors total |
| UpDownCounter | `CreateUpDownCounter<long>(name)` | Values that go up and down: queue length, active connections |
| Histogram | `CreateHistogram<double>(name, unit)` | Distributions: latency (ms), payload size (bytes) |
| ObservableGauge | `CreateObservableGauge(name, callback)` | Current value when scraped: CPU %, memory, queue depth |
Tags (attributes): Pass KeyValuePair<string, object?> to Add or Record so you can filter and group in the backend (e.g. by order.status, operation).
Logs: ILogger and trace correlation
When you use ILogger and add the OpenTelemetry logging provider (builder.Logging.AddOpenTelemetry(...)), your log records are exported in OTLP format. The SDK adds trace ID and span ID to each log record (when there is an active span). That way, in your backend you can search logs by trace ID and see every log line for a single request.
// In a controller or service – just use ILogger as usual
_logger.LogInformation("Processing order {OrderId}", orderId);
_logger.LogWarning("Retry attempt {Attempt} for order {OrderId}", attempt, orderId);
_logger.LogError(ex, "Order {OrderId} failed", orderId);
No extra code is needed for correlation: the OpenTelemetry logger provider attaches the current activity’s trace ID and span ID to the log record. Ensure your backend (e.g. Application Insights, Jaeger with a log backend) indexes these fields so you can query by trace_id or traceId.
W3C Trace Context and propagation
W3C Trace Context is the standard for propagating trace ID and span ID in HTTP headers. The header traceparent has the form version-traceId-spanId-flags (e.g. 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01). When your app calls another service with HttpClient, the HTTP client instrumentation automatically injects traceparent (and optionally tracestate) so that the downstream service creates child spans with the same trace ID. One request → one trace across all services.
For message-based calls (e.g. Azure Service Bus), you must inject and extract trace context in the message headers yourself (or use an instrumentation library that does it). The current context lives in Activity.Current: read the W3C traceparent string from Activity.Current?.Id and add it to the message; on the consumer side, restore the context and start a new span as a child.
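A BCL-only sketch of that inject/extract round trip, using a plain dictionary to stand in for message headers (the listener simulates SDK registration; all names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var source = new ActivitySource("MyApp.Orders");
using var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "MyApp.Orders",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

// Producer: copy the current W3C traceparent into the message headers.
var headers = new Dictionary<string, string>();
using (var send = source.StartActivity("SendOrderMessage")!)
{
    headers["traceparent"] = send.Id!; // Activity.Id is the W3C traceparent string
}

// Consumer: restore the context and start the consumer span as its child.
ActivityContext.TryParse(headers["traceparent"], traceState: null, out var parentContext);
using var consume = source.StartActivity("ProcessOrderMessage", ActivityKind.Consumer, parentContext)!;

// The consumer span continues the producer's trace.
Console.WriteLine(consume.TraceId.ToString() == headers["traceparent"].Split('-')[1]); // True
```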
Custom spans and sampling
Custom spans: Use ActivitySource and StartActivity for business operations (e.g. “ProcessOrder”, “ValidateOrder”) so that each step appears in the trace. Sampling: In high-throughput scenarios you may not want to export every trace. Use head-based sampling (decide at the start of the trace) or tail-based (decide after the request, e.g. always export errors). Configure a custom Sampler in WithTracing:
tracing.SetSampler(new ParentBasedSampler(new TraceIdRatioBasedSampler(0.1))); // 10% of traces
Always sample errors (e.g. status code Error or exception) so that failures are visible; many backends support “sample 100% of errors, 10% of success”.
Export: OTLP, Azure Monitor, Jaeger
OTLP: The default. Point AddOtlpExporter at your collector or backend. Over gRPC the endpoint is the base address (e.g. http://localhost:4317, no signal path); over HTTP/protobuf each signal has its own path on port 4318 (/v1/traces, /v1/metrics, /v1/logs).
Azure Monitor: Use the package Azure.Monitor.OpenTelemetry.Exporter and UseAzureMonitor() (or configure the exporter with connection string). Traces appear in Application Insights as “Transaction search” and “Dependencies”; metrics and logs also flow there.
Jaeger: Run Jaeger with OTLP ingestion (default in recent versions) and point your app’s OTLP exporter to the Jaeger endpoint (e.g. http://localhost:4317). No extra exporter needed.
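For Azure Monitor specifically, a sketch of the distro-based setup — assuming the Azure.Monitor.OpenTelemetry.AspNetCore package; the configuration key is illustrative, and the connection string can also come from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable:

```csharp
// Program.cs sketch – Azure Monitor via the OpenTelemetry distro package
using Azure.Monitor.OpenTelemetry.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

// UseAzureMonitor wires tracing, metrics, and logs to Application Insights.
builder.Services.AddOpenTelemetry().UseAzureMonitor(options =>
{
    options.ConnectionString = builder.Configuration["AzureMonitor:ConnectionString"];
});

var app = builder.Build();
app.Run();
```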
How to query your telemetry
In enterprise apps the main consumers are support (find why a specific request failed), platform/SRE (find slow or failing operations, trends), and developers (debug a trace from a log line). You need: (1) find by trace ID, (2) find slow/failed requests, (3) correlate logs to a trace, (4) dashboards and alerts. Below: concrete queries in Azure Monitor and Jaeger, then how to wire this into runbooks and dashboards.
Querying in Azure Monitor / Application Insights
Application Insights uses KQL (Kusto Query Language). You query traces (and dependencies) in the dependencies and requests tables; logs in traces (or customEvents); metrics in customMetrics or built-in metrics.
1. Find a trace by trace ID
When you have a trace ID (e.g. from a log line or an error report), search for all telemetry with that trace ID:
union requests, dependencies
| where operation_Id == "0af7651916cd43dd8448eb211c80319c"
| order by timestamp asc
operation_Id in Application Insights is the trace ID. This returns all requests and dependencies (spans) that belong to that trace, in time order.
2. Find slow requests or failed requests
requests
| where duration > 1000 or success == false
| project timestamp, name, duration, success, operation_Id, resultCode
| order by timestamp desc
3. Find all spans (dependencies) for a given operation
dependencies
| where operation_Id == "0af7651916cd43dd8448eb211c80319c"
| project timestamp, name, data, duration, success
| order by timestamp asc
4. Search logs by trace ID
traces
| where operation_Id == "0af7651916cd43dd8448eb211c80319c"
| project timestamp, message, severityLevel
| order by timestamp asc
5. Query custom metrics
If you exported a counter or histogram (e.g. orders.processed, order.duration), they appear in customMetrics:
customMetrics
| where name == "order.duration" or name == "orders.processed"
| summarize avg(value), sum(value) by name, bin(timestamp, 1h)
6. Application Insights “Transaction search” and “Performance”
In the Azure portal, use Transaction search (under Investigation) to search by operation name, trace ID, or time range. Use Performance to see slow operations and drill into the trace. The Application map shows dependencies between services when trace context is propagated.
Querying in Jaeger
In the Jaeger UI (e.g. http://localhost:16686):
Search by Service: Select your service name (the one you set in ConfigureResource or AddService).
Search by Operation: e.g. ProcessOrder, GET /orders/{id}.
Search by Trace ID: If you have the trace ID, paste it in the “Trace ID” field and click Find. You get the full trace tree.
Tags: Add tags (e.g. order.id=123, error=true) to filter traces that have those span attributes.
Min / Max Duration: Filter by duration to find slow traces.
Clicking a trace shows the timeline (waterfall) of all spans and their attributes and events.
Querying logs by trace ID (any backend)
Search logs where trace_id (or Application Insights operation_Id) equals the trace ID. You get every log line for that request in one place.
Enterprise querying: runbooks and dashboards
Runbooks: Document “when user reports error X, get trace ID from log/error report → run this KQL (or Jaeger search) → interpret by looking at failed span and its parent”. Keep the KQL in the runbook so support or on-call can paste and run.
Dashboards: Pin the queries that matter: failed requests last 24h, p95 latency by operation, error rate by service. Use Application Insights workbooks or Grafana; alert when thresholds break so platform teams act before users complain.
SLOs: Use metrics (e.g. order.duration histogram, orders.failed counter) to define SLOs and burn-rate alerts. Traces then explain why a specific request failed when the alert fires.
Enterprise best practices
Application-level setup: Add OTel once in Program.cs (or Startup.cs) with a shared resource name (service name, version). Use a single AddOpenTelemetry() call for tracing and metrics; add the logging provider so logs get trace/span IDs. Keep exporter endpoint and sampling in config, not code.
Naming: Use one ActivitySource and one Meter per bounded context or team (e.g. MyCompany.Orders, MyCompany.Billing). Register every source/meter name in AddSource / AddMeter or nothing from that code path is exported.
Attributes: Stick to a small set of attribute names (e.g. order.id, http.status_code) so queries and dashboards are consistent. Avoid PII in span attributes; include it in logs only where policy allows.
Propagation: Enable HTTP client instrumentation everywhere you call another service. For queues or buses, add context to message headers and restore it in the consumer; otherwise you get broken traces across services.
Sampling: In high-throughput apps use head-based sampling (e.g. 10%) and always sample errors. Tune by cost vs. need; document the choice in your runbook.
Queryability: Design so support can find a trace from a trace ID (from error page, log, or support ticket). Standardise on “trace ID” in user-facing error messages or logs so runbooks work.
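The queryability point can be as simple as reading the current trace ID when building an error message — a BCL-only sketch (the listener simulates SDK registration; the message wording is illustrative):

```csharp
using System;
using System.Diagnostics;

var source = new ActivitySource("MyApp.Orders");
using var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "MyApp.Orders",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

using var activity = source.StartActivity("ProcessOrder")!;

// Surface the trace ID so support can paste it into Transaction search or Jaeger.
string traceId = Activity.Current?.TraceId.ToString() ?? "unknown";
string userFacingError = $"Something went wrong. Support reference: {traceId}";

Console.WriteLine(traceId.Length); // 32
```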
Common Issues and Challenges
Missing propagation: Downstream HTTP or message calls without trace context create orphan spans (different trace ID). Fix: add HttpClient instrumentation and, for messaging, inject/extract context in message headers.
Custom spans or metrics not showing: You must register your ActivitySource name in AddSource("MyApp.Orders") and your Meter name in AddMeter("MyApp.Orders"). Otherwise the SDK ignores them.
High volume / cost: Export every trace only if you can afford it. Use sampling (e.g. 10% of traces, 100% of errors) and batch export. In Application Insights, use sampling and retention policies to control cost.
Wrong OTLP endpoint: Ensure the exporter endpoint matches the protocol (e.g. http://localhost:4317 for a local collector over gRPC, or http://localhost:4318/v1/traces over HTTP). For Azure Monitor, use the Azure Monitor exporter with the correct connection string.
Logs not correlated: Ensure the OpenTelemetry logging provider is added and that your backend indexes trace_id / operation_Id so you can search logs by trace.
OTel gives vendor-neutral traces, metrics, and logs. In .NET, use ActivitySource/Activity for spans, Meter for metrics, and ILogger with the OTel provider; instrument ASP.NET Core and HttpClient, and export via OTLP or Azure Monitor. Skipping W3C propagation leads to unlinked traces, and skipping sampling to runaway cost; putting the trace ID in runbooks and error messages lets support find the full request path. Next steps: add the OTel SDK and ASP.NET Core/HttpClient instrumentation, set sampling (e.g. 100% errors, 10% success), then add trace IDs to logs and runbooks. FAQs below for quick reference.
Position & Rationale
I use OpenTelemetry for new .NET observability so we’re not locked to one vendor; we can switch from Jaeger to Azure Monitor by changing the exporter. I add ASP.NET Core and HttpClient instrumentation first so every request and downstream call gets a span; then EF Core if we need DB visibility. I set sampling (e.g. 100% of errors, 10% of success) so we don’t blow cost; I avoid “trace everything” in high-volume production without a sampling strategy. I propagate W3C Trace Context so cross-service calls link into one trace. I put trace ID in error messages and runbooks so support can find the full path. I don’t add custom spans for every method—only for meaningful operations (e.g. “ProcessOrder”, “CallBilling”).
Trade-Offs & Failure Modes
OpenTelemetry adds dependency on SDK and exporter; you gain vendor-neutral instrumentation. Full tracing can be expensive; sampling and retention policies are necessary. Log correlation (trace_id in logs) requires the logging provider and a backend that indexes it. Failure modes: no sampling and high export cost; wrong OTLP endpoint so no data appears; logs not correlated so you can’t jump from trace to logs; too many custom spans (noise).
What Most Guides Miss
Most guides show “add OTel and export” but don’t stress sampling—in production you often can’t afford 100% of traces; use head-based or tail-based sampling and always sample errors. Another gap: querying—having traces is useless if no one uses trace ID to find failures or slow paths; runbooks and error pages should include trace ID. Logs + trace_id correlation is often missing; add the OTel logging provider and ensure the backend indexes trace_id so you can search logs by trace.
Decision Framework
If adding observability to .NET → Add OpenTelemetry SDK; instrument ASP.NET Core and HttpClient; export to OTLP or Azure Monitor.
For sampling → Use a sampling strategy (e.g. 100% errors, 10% success) to control cost.
For cross-service → Ensure W3C Trace Context is propagated (HttpClient instrumentation does this).
For debugging → Put trace ID in error responses and runbooks; query by trace ID in your backend.
For logs → Add OTel logging provider and index trace_id so logs and traces link.
Key Takeaways
OpenTelemetry = vendor-neutral traces, metrics, logs; instrument once, export to any backend.
Sampling is essential in production to control cost; sample errors fully, sample success proportionally.
Trace ID in error messages and runbooks so you can find the full request path.
Correlate logs with traces (trace_id); use the OTel logging provider and index trace_id.
For production-grade Azure systems, I offer consulting on cloud architecture, scalability, and cloud-native platform design.
When I Would Use This Again — and When I Wouldn’t
I’d use OpenTelemetry again for any new .NET service that needs tracing or metrics—standard instrumentation and exporter, with sampling. I’d use W3C propagation so cross-service calls form one trace. I wouldn’t enable 100% trace sampling in high-throughput production without a cost plan. I also wouldn’t add observability without a way to query it (backend, dashboards, runbooks that use trace ID); otherwise the data is unused.
Frequently Asked Questions
What is OpenTelemetry?
OpenTelemetry (OTel) is an open standard for observability: traces, metrics, and logs. It provides SDKs and APIs so that you instrument once and export to any backend (Azure Monitor, Jaeger, Zipkin, Prometheus, etc.).
What is a span?
A span represents one unit of work (e.g. HTTP request, DB call, ProcessOrder). It has a name, start/end time, trace ID, span ID, optional attributes, status, and events. Spans are linked by trace ID to form a trace.
What is a trace?
A trace is the full path of a request across services; it is a tree of spans. The trace ID ties all spans together; use it in your backend to search and visualise the request flow.
What are the types of telemetry in OpenTelemetry?
Traces (spans), metrics (counters, histograms, gauges, up/down counters), and logs (log records with severity and body). Baggage and span events support traces. Each type has its own API in .NET: ActivitySource/Activity for traces, Meter for metrics, ILogger for logs.
What is W3C Trace Context?
W3C Trace Context is a standard for propagating trace ID and span ID in HTTP headers (traceparent, tracestate). Downstream services read these headers and create child spans with the same trace ID so that one request produces one trace across all services.
How do I add OpenTelemetry to .NET?
Add packages OpenTelemetry.Instrumentation.AspNetCore, OpenTelemetry.Instrumentation.Http, and OpenTelemetry.Exporter.OpenTelemetryProtocol (or Azure.Monitor.OpenTelemetry.Exporter). In Program.cs call AddOpenTelemetry().WithTracing(...).WithMetrics(...), add AddAspNetCoreInstrumentation(), AddHttpClientInstrumentation(), AddSource and AddMeter for your custom names, and AddOtlpExporter(...). For logs, use builder.Logging.AddOpenTelemetry(...).
How do I create custom spans in .NET?
Use ActivitySource (e.g. new ActivitySource("MyApp.Orders", "1.0.0")) and register the name in AddSource("MyApp.Orders"). Start a span with source.StartActivity("OperationName"); use SetTag, AddEvent, SetStatus, RecordException. Dispose or end the activity when the operation completes.
What functions can I use on a span (Activity)?
SetTag(key, value) for attributes, AddEvent(name) or AddEvent(name, timestamp, attributes) for events, SetStatus(ActivityStatusCode.Error, description) for errors, RecordException(exception) to attach an exception. Use consistent key names for querying.
How do I create custom metrics in .NET?
Create a Meter (e.g. new Meter("MyApp.Orders", "1.0.0")) and register the name in AddMeter("MyApp.Orders"). Use CreateCounter, CreateHistogram, CreateUpDownCounter, or CreateObservableGauge. Call Add(amount, tags) on counters or Record(value, tags) on histograms. Pass KeyValuePair for tags.
How do I export to Azure Monitor?
Use the package Azure.Monitor.OpenTelemetry.Exporter and call UseAzureMonitor() (or configure the exporter with your connection string). Traces appear in Application Insights as requests and dependencies; metrics and logs also flow there.
How do I query traces in Azure Monitor?
Use KQL: union requests, dependencies | where operation_Id == "your-trace-id" | order by timestamp asc. operation_Id is the trace ID. Use Transaction search in the portal or Performance to drill into slow or failed requests.
How do I query traces in Jaeger?
In the Jaeger UI, search by Service, Operation, or paste the Trace ID. Add tags (e.g. order.id=123) to filter. Use Min / Max Duration to find slow traces. Click a trace to see the full span tree and timeline.
How do I correlate logs with traces?
Add the OpenTelemetry logging provider (builder.Logging.AddOpenTelemetry(...)). The SDK attaches the current trace ID and span ID to each log record. In your backend, search logs by trace_id (or operation_Id in Application Insights) to see all log lines for a request.
What is sampling and when should I use it?
Sampling means exporting only a fraction of traces (e.g. 10%) to reduce cost and volume. Use head-based sampling (TraceIdRatioBasedSampler) or tail-based (decide after the request). Always sample errors so that failures are visible.
Why are my custom spans or metrics not showing?
Register your ActivitySource name in AddSource("YourSourceName") and your Meter name in AddMeter("YourMeterName"). Without that, the SDK does not export them.
What is the difference between Counter and Histogram?
Counter is for values that only increase (e.g. request count, errors total); you call Add(amount). Histogram is for distributions (e.g. latency, payload size); you call Record(value) and the backend computes percentiles (p50, p95, p99).
What are best practices for OpenTelemetry in enterprise apps?
Add OTel once in Program.cs with a single AddOpenTelemetry() for tracing and metrics; add the logging provider so logs get trace/span IDs. Use one ActivitySource and one Meter per bounded context (e.g. MyCompany.Orders) and register every name in AddSource/AddMeter. Keep attribute names consistent (e.g. order.id) and avoid PII in span attributes. Put trace ID in runbooks and user-facing error messages so support can find a trace quickly.
How do I propagate trace context for message queues (e.g. Service Bus)?
HTTP client instrumentation injects traceparent automatically. For queues or buses you must inject and extract context in message headers yourself: read the current activity’s trace ID/span ID (or the W3C traceparent string) and add it to the outbound message; in the consumer, restore the context and start a new span as a child. Some libraries (e.g. Azure Service Bus instrumentation) do this for you.
How do I use OpenTelemetry for runbooks and dashboards in enterprise?
Document in runbooks: when user reports error X, get trace ID from log or error page → run this KQL (or Jaeger search) → interpret by failed span and parent. Pin the KQL or Jaeger query so support can paste and run. For dashboards use metrics (e.g. order.duration histogram, orders.failed counter) and alert when thresholds break; use traces to explain why a specific request failed when the alert fires.