# Core Concepts

## Architecture
```mermaid
graph TB
    subgraph "Your Applications"
        A1[App 1<br/>+SDK]
        A2[App 2<br/>+SDK]
        A3[Test Suite<br/>+SDK]
    end
    subgraph "Zelos Agent"
        B[gRPC Server]
        C[In-Memory Buffer]
        D[Stream Publisher]
    end
    subgraph "Consumers"
        E[Zelos App]
        F[Cloud]
    end
    A1 --> B
    A2 --> B
    A3 --> B
    B --> C
    C --> D
    D --> E
    D --> F
```
## Data Flow

1. Your code calls an SDK logging function
2. The SDK validates types and adds timestamps
3. The Agent stores the data in memory and broadcasts it to the UI
4. The visualization displays the data in real time
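In practice, step 1 is a single call; everything downstream happens automatically. A minimal example, using the `TraceSource` API shown in the Data Model section below:

```python
import zelos_sdk

# One log call drives the whole pipeline: the SDK validates and
# timestamps the event, then the Agent buffers and broadcasts it.
source = zelos_sdk.TraceSource("demo")
source.log("heartbeat", {"alive": True})
```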
## Data Model

### Hierarchy
```
Source / Event . Field
└──┬─┘   └─┬─┘   └─┬─┘
   │       │       │
   │       │       └── Data point within event
   │       └────────── Collection of related fields
   └────────────────── Logical component/subsystem

Example: motor_controller/status.rpm
         └──────┬───────┘ └──┬─┘ └┬┘
                │            │    └── Field (RPM value)
                │            └─────── Event (status event)
                └──────────────────── Source (motor controller)
```
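In code, that signal comes from a single log call (the `TraceSource` API is covered in the sections below):

```python
import zelos_sdk

# Produces the signal motor_controller/status.rpm
motor = zelos_sdk.TraceSource("motor_controller")
motor.log("status", {"rpm": 1200.0})
```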
### Sources

A source identifies a logical component or subsystem. Sources are lightweight; create as many as you need to organize your data.
```python
import zelos_sdk

# Sources provide namespace isolation
motor = zelos_sdk.TraceSource("motor_controller")
battery = zelos_sdk.TraceSource("battery_management")
sensors = zelos_sdk.TraceSource("sensor_array")
```
Properties:
- Unique identifier (UUID) per instance
- Automatic segment start/end events
- Independent event schemas
- No performance penalty for multiple sources
### Events
An event groups related fields that are logged together. All fields in an event share the same timestamp, ensuring temporal correlation.
```python
# All three values are logged atomically with the same timestamp
source.log("telemetry", {
    "voltage": 48.2,  # ┐
    "current": 12.5,  # ├── Same timestamp
    "power": 602.5,   # ┘
})
```
Properties:
- Named collection of fields
- Single timestamp for all fields
- Schema definition (optional but recommended)
- Support for nested events (submessages); see the sketch below
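The nesting syntax for submessages isn't shown on this page; one plausible shape, assuming a nested dict maps to a submessage, would be:

```python
# A sketch only: whether a nested dict becomes a submessage is an
# assumption, not a documented behavior of this API.
source.log("drive", {
    "speed": 3.2,
    "motor": {            # nested event (submessage), syntax assumed
        "rpm": 1200,
        "current": 12.5,
    },
})
```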
### Fields
A field is an individual data point within an event. When combined with source and event names, it becomes a signal - a unique time series.
```python
# Define a field with a type and an optional unit
field = TraceEventFieldMetadata(
    name="temperature",
    data_type=DataType.Float64,
    unit="°C",  # Optional
)
```
Properties:
- Strong typing (14 data types)
- Optional units metadata
- Value tables for enums (see the sketch after this list)
- Becomes a signal when qualified with source/event
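The exact API for value tables isn't shown here; a hypothetical sketch (the `value_table` parameter name and the `DataType.UInt8` member name are invented for illustration):

```python
# Hypothetical: value_table and the UInt8 member name are illustrative
# only and not confirmed by this page.
mode = TraceEventFieldMetadata(
    name="mode",
    data_type=DataType.UInt8,
    value_table={0: "idle", 1: "running", 2: "fault"},
)
```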
## Data Types

| Type | Bytes | Range | Use Case |
|------|-------|-------|----------|
| int8 | 1 | -128 to 127 | Small signed values |
| int16 | 2 | -32,768 to 32,767 | Sensor readings |
| int32 | 4 | ±2.1 billion | Counters |
| int64 | 8 | ±9.2 quintillion | Timestamps |
| uint8 | 1 | 0 to 255 | Byte values, IDs |
| uint16 | 2 | 0 to 65,535 | Analog values |
| uint32 | 4 | 0 to 4.2 billion | Large counters |
| uint64 | 8 | 0 to 18 quintillion | Unique IDs |
| float32 | 4 | ±3.4E38 (7 digits) | Sensor data |
| float64 | 8 | ±1.7E308 (15 digits) | Precise measurements |
| bool | 1 | true/false | States, flags |
| string | Variable | UTF-8 text | Text, states |
| binary | Variable | Raw bytes | Payloads, images |
| timestamp_ns | 8 | Nanoseconds since epoch | Time points |
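To illustrate a few of these types in a schema definition, reusing `TraceEventFieldMetadata` from above (note: `DataType` member names other than `Float64` are assumed to follow the same casing):

```python
# DataType member names besides Float64 are assumptions based on the
# table above, not confirmed API.
fields = [
    TraceEventFieldMetadata(name="rpm", data_type=DataType.UInt16, unit="rpm"),
    TraceEventFieldMetadata(name="fault_count", data_type=DataType.UInt32),
    TraceEventFieldMetadata(name="temperature", data_type=DataType.Float64, unit="°C"),
    TraceEventFieldMetadata(name="state", data_type=DataType.String),
]
```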
## Timing

### Timestamps
Every event gets a timestamp automatically:
```python
import time

# Automatic timestamp (current time)
source.log("data", {"value": 42})

# Explicit timestamp (nanoseconds since epoch)
source.log_at(1699564234567890123, "data", {"value": 42})

# Using the time module
timestamp_ns = time.time_ns()
source.log_at(timestamp_ns, "data", {"value": 42})
```
- Resolution: nanosecond (10⁻⁹ seconds)
- Epoch: Unix epoch (1970-01-01 00:00:00 UTC)
- Range: ±292 years from the epoch (a signed 64-bit count of nanoseconds: 2⁶³ ns ≈ 292 years)
### Segments
A segment represents the lifetime of a source, from creation to destruction:
```python
source = TraceSource("test")  # Sends segment start
# ... your logging ...
del source  # Sends segment end (or when garbage collected)
```
Segments enable:
- Tracking source lifecycle
- Grouping related events
- Metadata association
- Clean data boundaries
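Because segment boundaries follow the source's lifetime, ordinary Python scoping is enough to bound a segment. A sketch, relying on the garbage-collection behavior noted above:

```python
import zelos_sdk

def run_test():
    # Creating the source sends the segment start
    source = zelos_sdk.TraceSource("hil_test")
    source.log("status", {"state": "running"})
    # The source goes out of scope on return; the segment end is
    # sent once the object is garbage collected

run_test()
```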
## Buffering & Batching

### How It Works
The SDK uses intelligent batching to optimize throughput:
- Events accumulate in a buffer
- A batch is sent when any of the following occurs:
    - The buffer reaches `batch_size` (default: 1000)
    - The timer reaches `flush_interval` (default: 100 ms)
    - An explicit flush is called
- Backpressure is applied if the Agent can't keep up
### Configuration
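The configuration surface isn't documented on this page. As a sketch, assuming the batching parameters above are exposed at initialization (the `init` signature is an assumption; only the `batch_size` and `flush_interval` names and defaults come from the list above):

```python
import zelos_sdk

# Assumed initialization API: only the parameter names and defaults
# (batch_size=1000, flush_interval=100 ms) come from this page.
zelos_sdk.init(
    batch_size=1000,     # send a batch after this many buffered events
    flush_interval=0.1,  # or after this many seconds, whichever comes first
)
```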
### Network Protocol
- Transport: gRPC over HTTP/2
- Encoding: Protocol Buffers v3
- Stream: Bidirectional for publish, server-stream for subscribe
## Schema Evolution

### Adding Fields

It is safe to add new fields; old consumers ignore them:
```python
# Version 1
source.log("data", {"temperature": 25.0})

# Version 2 - safe to add a field
source.log("data", {"temperature": 25.0, "humidity": 60.0})
```
## FAQ

**How much data can I send?**

The Agent can handle millions of points per second. The SDK batches efficiently, but consider your network bandwidth and Agent resources.

**What happens if the Agent is down?**

The SDK buffers data locally (up to configured limits) and reconnects automatically. Data is preserved within buffer capacity.

**Can I use multiple sources in one file?**

Yes! Create as many sources as you need. They're lightweight and help organize your data.