Quick Start¶
Connect to a running Zelos Agent and run your first query. By the end you'll have browsed the live catalog, pulled a chart-ready frame, watched values stream in, and saved a snapshot to disk.
Prerequisites¶
- Zelos App or Agent running (default: localhost:2300)
- Python 3.10+
pip install zelos-sdk
No live data yet?
Run the demo data generator in another terminal so the agent has signals to serve:
The examples below use paths from this generator. Substitute your own if your catalog differs.
1. Connect¶
from zelos_sdk.agent import connect
with connect("localhost:2300") as agent:
print("connected to", agent.target) # "http://localhost:2300"
print("health:", agent.health())
The Agent is a context manager — exiting the with block closes the connection. You can also call agent.close() explicitly.
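If a with block is awkward (say, in a long-lived script or a REPL session), a minimal sketch of the explicit lifecycle using the same API:

```python
from zelos_sdk.agent import connect

# Explicit lifecycle: the same connection without a `with` block.
agent = connect("localhost:2300")
try:
    print("health:", agent.health())
finally:
    agent.close()  # always release the connection, even on error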
Connection options
- Bare host — connect("myhost") → http://myhost:2300
- Host + port — connect("myhost:2400") → http://myhost:2400
- Full URL — connect("http://myhost:2300")
- Default — connect() reads ZELOS_AGENT_URL, falling back to http://localhost:2300
Catch zelos_sdk.agent.AgentError (or subclasses like AgentUnavailable) for connection failures.
For a richer snapshot, agent.info() returns health, log/config dirs, memory, and settings. Anything that can't be loaded is None and named in info.failures.
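Putting error handling and the richer snapshot together, a defensive startup sketch (only the fields named above are assumed):

```python
from zelos_sdk.agent import connect, AgentError

try:
    with connect() as agent:  # honors ZELOS_AGENT_URL, else localhost:2300
        info = agent.info()
        # Anything that couldn't be loaded is None and named in info.failures.
        if info.failures:
            print("partial info; could not load:", info.failures)
except AgentError as exc:
    print("agent unreachable:", exc)
```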
2. Browse Signals¶
Every signal has a path of the form source/message.name, a data type, and usually a unit.
catalog = agent.signals(lookback=60.0) # last 60s of activity
print(len(catalog), "signal(s) live")
for s in catalog.search("voltage")[:5]: # case-insensitive substring
print(s.path, s.data_type, s.unit)
Slicing a SignalCatalog returns a SignalCatalog, so calls chain. Look up by exact path with catalog.by_path("bus0/BMS_message/status.cell_voltage").
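For example, chaining a search with a slice, then an exact-path lookup (paths from the demo generator; the entry attributes match those shown above):

```python
catalog = agent.signals(lookback=60.0)

# search() returns a SignalCatalog, so slicing and further calls chain.
volts = catalog.search("voltage")[:10]
print(len(volts), "voltage-like signal(s)")

# Exact lookup when you already know the full path.
sig = catalog.by_path("bus0/BMS_message/status.cell_voltage")
print(sig.path, sig.data_type, sig.unit)
```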
3. Query Time-Series¶
Pull a window of samples into a SignalFrame. Pass a single path, a list, or wildcards — they all expand against the live catalog.
paths = [
"bus0/BMS_message/status.cell_voltage",
"bus0/BMS_message/status.pack_current",
]
frame = agent.query(paths, start="-30s", end="now", downsample=400)
print(frame) # rows, columns, query duration
df = frame.to_pandas() # pandas DataFrame, UTC index
arrow = frame.to_arrow() # pyarrow Table
series = frame[paths[0]] # one column as a SignalSeries
print(series.legend, series.unit)
print("mean:", series.mean(), "max:", series.max())
start / end accept a datetime, "now", an ISO 8601 string, or a relative offset ("-30s", "-2m", "-1.5h", "-1d").
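To make the relative-offset grammar concrete, here is a small standalone helper — not part of the SDK, purely an illustration of how "-30s"-style strings map onto timedeltas:

```python
from datetime import timedelta

_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def offset_to_timedelta(text: str) -> timedelta:
    """Parse a relative offset like "-30s", "-2m", "-1.5h", or "-1d"."""
    sign = -1 if text.startswith("-") else 1
    value, unit = float(text.lstrip("+-")[:-1]), text[-1]
    return sign * timedelta(**{_UNITS[unit]: value})

print(offset_to_timedelta("-90m"))  # same span as "-1.5h"
```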
Downsampling
downsample=N returns a plot-friendly view of about N points per signal — perfect for charting long windows. Omit it for full-resolution samples; pair with max_rows=N to cap large windows.
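For a full-resolution pull over a longer window, a capped sketch:

```python
# Full resolution (no downsample), but never more than 100k rows.
raw = agent.query(paths, start="-10m", end="now", max_rows=100_000)
print(len(raw), "rows at full resolution")
```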
Series compose with arithmetic. Units flow through:
v = frame["bus0/BMS_message/status.cell_voltage"]
i = frame["bus0/BMS_message/status.pack_current"]
power = v * i # SignalSeries with unit "V·A"
energy = power.integrate() # cumulative trapezoidal, unit "V·A·s"
print(power.legend, "->", float(power.mean()))
Iterate frame.signals for catalog metadata — each entry has a clean .path, plus .unit and .producer.
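A sketch of that loop, assuming the frame from the query above:

```python
# Each entry in frame.signals carries catalog metadata for one column.
for sig in frame.signals:
    print(sig.path, sig.unit, sig.producer)
```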
4. Latest Values¶
For point-in-time reads, use agent.latest():
# Single string -> single LatestValue
lv = agent.latest("bus0/BMS_message/status.cell_voltage")
print(lv.value, lv.signal.unit, lv.formatted())
# List or tuple -> dict[str, LatestValue]
for path, lv in agent.latest(paths).items():
print(path, "=", lv.formatted(), "@", lv.time)
LatestValue.formatted() renders for display: enum label > numeric with unit > bool > hex > string. lv.value is the typed scalar; lv.signal carries the metadata.
Wildcards and ambiguity
A list always returns a dict, even if a path expands to many signals. A single string that expands to multiple signals raises AmbiguousSignal — wrap it in a list to get a dict instead.
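A defensive sketch of that distinction (this assumes AmbiguousSignal is importable from zelos_sdk.agent, like the other errors above; adjust the import if your install differs):

```python
from zelos_sdk.agent import AmbiguousSignal

try:
    lv = agent.latest("bus0/BMS_message/status.*")  # may match many signals
    print(lv.formatted())
except AmbiguousSignal:
    # Wrap the same pattern in a list to get a dict of all matches instead.
    for path, lv in agent.latest(["bus0/BMS_message/status.*"]).items():
        print(path, "=", lv.formatted())
```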
To read at a specific instant, use agent.at(paths, time):
from datetime import datetime, timedelta, timezone
cursor = datetime.now(timezone.utc).replace(microsecond=0)
values = agent.at(paths, time=cursor, min_time=cursor - timedelta(seconds=10))
5. Watch Live Changes¶
agent.watch() is a polling iterator that yields dict[str, LatestValue] per tick. Bound the loop with until= (seconds, or a datetime deadline).
for tick in agent.watch(paths, interval=1.0, until=5.0):
line = ", ".join(f"{p.split('.')[-1]}={lv.formatted()}" for p, lv in tick.items())
print(line)
Watch everything
paths=None (the default) snapshots the catalog once at entry. on_error="skip" keeps the loop alive on transient errors with a warning, but still re-raises after max_consecutive_errors consecutive failures.
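For example, watching the whole catalog with error tolerance:

```python
# paths=None snapshots the catalog once at entry; on_error="skip" keeps the
# loop alive through transient failures (up to max_consecutive_errors in a row).
for tick in agent.watch(interval=2.0, until=30.0, on_error="skip"):
    print(len(tick), "signal(s) updated")
```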
6. Save and Re-Open¶
agent.export() writes a .trz file from live data. The file lives on the agent's host.
result = agent.export("/tmp/run.trz", start="-1m", end="now", overwrite=True)
print("wrote", result.path, "ok:", result.ok)
Re-open it with the same query API, plus relative-time bounds:
trace = agent.open_trace("/tmp/run.trz")
print("trace covers", trace.time_range.duration.total_seconds(), "seconds")
offline = trace.query(paths, relative_start=0, relative_end=30)
print(len(offline), "rows over the first 30s of the file")
To overlay multiple traces on a shared t=0, use agent.open_traces([...]). TraceSet adds a time_mode='absolute'|'relative' kwarg (default "relative") on every query method.
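A sketch of overlaying two files (the file paths are hypothetical):

```python
traces = agent.open_traces(["/tmp/run_a.trz", "/tmp/run_b.trz"])

# time_mode="relative" (the default) puts every trace on a shared t=0,
# so one window means "the first 30 seconds of each file".
overlay = traces.query(paths, relative_start=0, relative_end=30)
print(len(overlay), "rows over the shared window")
```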
7. Inspect Actions¶
Actions are typed operations the agent (or an extension) exposes. Each one has a JSON Schema for its parameters.
for action in agent.actions.list():
print(action.name)
# Inspect one action's parameter schema
schema = agent.actions.schema("action/name")
print(schema.action_schema) # JSON Schema dict
print(schema.ui_schema) # rendering hints
print(schema.default_timeout_ms)
Execute by name with a params dict:
result = agent.actions.execute("action/name", {"param": "value"}, timeout=5.0)
print("status:", result.status, "ok:", result.ok)
print("value:", result.value)
result.ok is True for "pass" and "done". See Run Actions for concrete extension examples.
What's Next¶
- Query Live Data — full surface of query(), latest(), at(), window(), watch()
- Open Trace Files — Trace and TraceSet for offline analysis
- Run Actions — schemas, validation, dynamic choices, error handling
- Manage Extensions — start, stop, restart, configs, and READMEs
- Streaming SDK — produce data the agent can serve