Emphasize that the talk is practical: attendees should leave with a copy/paste workflow and a sense of the roadmap.
Underline the “stay in the terminal” theme to set expectations for the demos.
Ask the audience to raise hands if they touch dashboards daily—highlight empathy for people who live in terminals.
Frame the tools as lowering friction for both developers and lab owners.
Share quick anecdote about juggling Gentoo, CIP, and upstream trees.
Point out that CI fragmentation leads to context switching and delays.
Remind audience that KernelCI is already rich; the gap is ergonomics, not coverage.
Explain that the CLI layers on top of existing dashboards and APIs.
Data point: in the last 7 days, KernelCI ran 7,108 kernel builds.
Source: https://api.kernelci.org/stats
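A quick way to sanity-check that number live, assuming the stats endpoint returns JSON:

```sh
# Query the public stats endpoint; jq is optional pretty-printing.
curl -s https://api.kernelci.org/stats | jq .
```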
Describe how buildbot-try shaped expectations: one command to submit, readable output.
Position kci-dev as bringing that comfort to KernelCI for more trees.
Highlight the “single binary” value: fewer bespoke scripts.
Stress that defaults are tuned for developers, not dashboard operators.
Explain verbally: CIP = Civil Infrastructure Platform, industrial-grade Linux for long-lived systems. CIP maintains SLTS kernels that extend upstream LTS support to ~10 years, so continuous testing backed by KernelCI data is critical to keep them safe over their whole lifetime.
- **Civil Infrastructure Platform (CIP)**
- Linux Foundation project aiming to provide an *industrial-grade* Linux base layer for things like power grids, trains, factories, and other critical infrastructure.
- CIP maintains **Super Long-Term Support (SLTS)** kernels: LTS-based branches (e.g. 4.4-cip, 4.19-cip, 5.10-cip, 6.1-cip, 6.12-cip) with **~10 years of security and bug-fix maintenance**, beyond normal LTS lifetimes.
Explain that CIP also has a graph-view report system.
If anyone is interested in what kernelci-pipeline is doing, point them to the link.
Walk through the arrows briefly: which commands talk to the dashboard and which talk to Maestro.
Mention that the same CLI can be pointed at different instances via config.
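A minimal config sketch for pointing the CLI at different instances; the layout below (per-instance tables with `pipeline`, `api`, and `token`) is from memory, so verify the path and field names against the kci-dev docs:

```toml
# ~/.config/kci-dev/kci-dev.toml (path and field names assumed)
default_instance = "production"

[production]
pipeline = "https://<production-pipeline-url>/"
api = "https://<production-api-url>/"
token = "<your token>"

[staging]
pipeline = "https://<staging-pipeline-url>/"
api = "https://<staging-api-url>/"
token = "<your token>"
```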
Call out that commands are grouped by task: results, Maestro control, validation.
Note that defaults aim to be readable tables with JSON available for scripts.
Use this slide to narrate a “morning check” story from left to right.
Emphasize how little configuration is needed for read-only results.
Clarify naming: kci-dev is for daily devs, kci-deploy is for lab owners.
Invite folks to try PyPI first, then explore Maestro commands when they have tokens.
Reassure that installation is lightweight: venv + pip is enough.
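A minimal install sketch to show on the slide (assuming the PyPI package is named `kci-dev`, as the project site suggests):

```sh
# Lightweight setup: a virtualenv plus pip is all you need.
python3 -m venv .venv
. .venv/bin/activate
pip install kci-dev
kci-dev --help
```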
Mention that config is only needed when hitting Maestro; results commands are open.
Encourage audience to enable completions for discoverability.
Show quick demo of pressing tab to list subcommands if time permits.
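If the demo runs long, this is the completion one-liner instead, assuming kci-dev is a Click-based CLI (Click's standard completion hook):

```sh
# bash; add to ~/.bashrc. Assumption: kci-dev uses Click, whose shell
# completion is driven by the _KCI_DEV_COMPLETE environment variable.
eval "$(_KCI_DEV_COMPLETE=bash_source kci-dev)"
```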
Explain that results commands need no tokens; Maestro ones do.
Point out that `compare` and `hardware summary` are often the first useful entry points.
These commands line up with the first two real questions developers ask:
1) "What changed between these two versions?" → `results compare`
   "This release looks broken. What changed between this and the last good one?"
2) "What's happening on this board/platform I care about?" → `results hardware summary`
   "Why is this SoC/lab/board red again?"
   "Is this regression just my board, or across everything?"
`compare` diffs results between two commits, reporting a summary and any regressions.
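A sketch of both entry points; the flags are illustrative, so check `kci-dev results --help` for the real set:

```sh
# 1) What changed between two versions of a tree/branch?
kci-dev results compare \
  --giturl https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git \
  --branch master

# 2) What is happening on a specific board/platform?
kci-dev results hardware summary <hardware-name>
```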
Tell the story quickly: morning summary, chase failures, decide if it’s infra, then automate.
Encourage saving commands to a script or chat message for team visibility.
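One possible "morning check" script to save and share; the subcommands and flags shown are assumptions to verify against `--help`:

```sh
#!/bin/sh
# Morning check sketch: summary first, then drill into boots if needed.
TREE=https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
BRANCH=master

kci-dev results summary --giturl "$TREE" --branch "$BRANCH" --latest
kci-dev results boots --giturl "$TREE" --branch "$BRANCH" --latest
```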
Be transparent: I discovered yesterday that `results compare` is currently broken because of recent API changes.
Share that quiet/JSON modes make it easy to integrate with `jq` and CI scripts.
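A jq sketch for scripts; the `--json` switch and the output schema are assumptions, so adapt to the real output:

```sh
# Emit JSON and post-process with jq (flag and schema assumed).
TREE=https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kci-dev results summary --giturl "$TREE" --branch master --latest --json | jq .
```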
Mention color-coded history view and how it surfaces regressions quickly.
Be transparent about current latency and that caching work is underway.
Ask for patience and feedback—this is an area where contributors can help.
Position kci-deploy as lowering setup time for new labs.
Invite early adopters to share network/storage pain points.
Give one concrete example for each pattern (e.g., nightly Matrix digest, pre-merge hook).
Highlight that CLI output formats make these automations simple.
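For the nightly-digest pattern, a cron-driven sketch; `matrix-send` is a hypothetical helper standing in for whatever bridge posts to your team room:

```sh
#!/bin/sh
# Nightly digest sketch; run from cron, e.g.:
#   0 7 * * * /usr/local/bin/kci-nightly-digest
TREE=https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kci-dev results summary --giturl "$TREE" --branch master --latest > /tmp/kci-digest.txt
matrix-send --room "#kernel-team:example.org" --file /tmp/kci-digest.txt  # hypothetical helper
```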
_note:
example of what we can do with kci-dev maestro
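One possible Maestro demo, heavily hedged: these commands need a token in the config, and the exact flags are from memory, so dry-run them before the talk:

```sh
# Trigger a checkout/build on Maestro (flags assumed; verify with --help).
kci-dev checkout --giturl <giturl> --branch <branch> --commit <sha>

# Query a node's results directly from Maestro.
kci-dev maestro results --nodeid <node-id>
```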
_note:
example of what we can do with kci-dev kcidb
Here I explain what the three result states in kci-dev really mean.
“pass” means we have strong evidence things are OK: the build/boot/tests actually ran to completion and all relevant checks are green. These are our known-good reference points when we look for regressions or do bisects.
“fail” means we have strong evidence something is broken in the kernel/config/test combo: builds that really fail, boots that don’t come up, or tests with clear assertion failures. These are the ones we treat as real bugs and worth debugging or bisecting.
“inconclusive” means CI didn’t give us a trustworthy answer: jobs errored out, timed out, got cancelled, or data is missing/partial. It’s not a confirmed regression — it just tells us we need to re-run or fix infra before we can call it pass or fail.
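Practical consequence for the demo: when a result is inconclusive, the first move is a re-run, not a bisect. A sketch, assuming `testretry` takes a node id:

```sh
# Re-run an inconclusive job before treating it as a real failure
# (flag name from memory; check `kci-dev testretry --help`).
kci-dev testretry --nodeid <node-id>
```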
Reinforce that the goal is faster iteration with less context switching.
Prompt the audience to try one results command this week.
Ask for collaborators on diffing and git integration features.
Invite labs to pilot kci-deploy and provide feedback on installers.
Mention caching and trigger features as active research areas.
Encourage distro maintainers to chime in on packaging needs.
Point attendees to kci.dev for guides and announce that contributions are welcome.
Suggest filing issues for missing boards or data fields.
Give a simple call to action: run one results command, then file feedback.
Mention community calls as a good venue for follow-up questions.
Thank the audience and invite questions about specific workflows.
Encourage them to ping on Matrix or GitHub after the session.
Narrate commands live if possible; otherwise explain what each does and why the options matter.
Stress that the same pattern works for any git tree and branch.
Explain how `--download-logs` keeps you out of the browser.
Mention that `validate` helps catch mismatches between data sources.
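Closing demo sketch; which command accepts `--download-logs` and what arguments `validate` takes are assumptions to verify before going live:

```sh
# Pull logs locally instead of clicking through the dashboard.
kci-dev results tests --giturl <giturl> --branch <branch> --latest --download-logs

# Cross-check that data sources agree (arguments illustrative).
kci-dev validate --nodeid <node-id>
```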