A customer-facing observability builder for TAM-led Puppet Enterprise demos, live dashboard storytelling, screenshot-ready deck prep, and exportable starter packs. Discovery notes stay local. Fake data stays fake.
Keep a live dashboard one click away for discovery calls and demos.
Switch between live demo, screenshot deck, and customer export motions.
Each dashboard can have its own refresh rate and fake data profile.
Generation happens in stages during the workflow and is bundled into a single export at the end.
Nothing is stored automatically. Use Save Session at the end to download a local JSON handoff.
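The Save Session handoff could look like the sketch below. This is a minimal illustration only: the field names (`dashboards`, `refresh`, `fake_data_profile`, `discovery_notes`) and the output filename are assumptions, not the builder's actual schema.

```python
import json

# Hypothetical shape of the Save Session download; every field name here is
# illustrative, not the builder's actual schema.
session = {
    "dashboards": [
        {"title": "Platform Health", "refresh": "30s", "fake_data_profile": "healthy"},
        {"title": "Incident Drilldown", "refresh": "10s", "fake_data_profile": "degraded"},
    ],
    "discovery_notes": "kept local; written only when the operator downloads this file",
}

# Nothing is persisted automatically: writing the handoff is an explicit,
# operator-chosen step at the end of the session.
with open("session-handoff.json", "w") as fh:
    json.dump(session, fh, indent=2)
```

Because the handoff is plain JSON, a TAM can diff it between sessions or hand it to a teammate without touching any remote store.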
Compresses platform posture into a first-screen demo that makes the rest of the story legible.
Separates broad platform health from localized trouble so follow-up dashboards stay focused.
Lets a TAM connect observability to risk reduction and platform confidence without drowning in raw metrics.
Infrastructure saturation is often where PE complaints begin, well before anyone recognizes them as platform issues.
Builds confidence that the demo is not just executive paint but operationally useful telemetry.
Turns alert noise into actionable narrowing, which is usually where PE teams lose time.
This is where the demo proves the platform can go beyond health lights into workflow detail.
Patch visibility is one of the easiest TAM wins because it turns routine work into measurable confidence.
Stable service timings, clean runs, modest queue depth, and steady node compliance.
Higher failure counts, slower compile times, queue growth, and visible infrastructure strain.
Larger fleet sizes, wider environment spread, and heavier backlog pressure without collapsing health.
Use this after discovery, not during it. All profiles remain synthetic.
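The profiles described above can be thought of as parameter sets feeding a synthetic metric generator. The sketch below is an assumption about how such a generator might work: the multipliers, field names, and the `scaled` profile name are all illustrative, not the shipped profiles.

```python
import random

# Illustrative parameter sets; the numbers and the "scaled" name are assumptions.
PROFILES = {
    "healthy":  {"nodes": 200,  "fail_rate": 0.01, "queue_depth": 2,  "compile_ms": 900},
    "degraded": {"nodes": 200,  "fail_rate": 0.12, "queue_depth": 25, "compile_ms": 2400},
    "scaled":   {"nodes": 2000, "fail_rate": 0.02, "queue_depth": 40, "compile_ms": 1100},
}

def fake_run_report(profile_name: str, rng: random.Random) -> dict:
    """Generate one synthetic puppet_reports-style sample. Fake data stays fake."""
    p = PROFILES[profile_name]
    # Each node fails independently with the profile's failure rate.
    failed = sum(rng.random() < p["fail_rate"] for _ in range(p["nodes"]))
    return {
        "node_count": p["nodes"],
        "failed_count": failed,
        "succeeded_count": p["nodes"] - failed,
        "deploy_queue_length": p["queue_depth"],
        # Jitter keeps the live dashboard from looking frozen between refreshes.
        "compile_time_ms": p["compile_ms"] * rng.uniform(0.9, 1.1),
    }
```

Seeding the generator per dashboard would keep screenshots reproducible between capture runs.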
```shell
PROFILE=healthy ./scripts/start-demo.sh healthy
PROFILE=degraded ./scripts/start-demo.sh degraded
```
Finalize the dashboard mix and mark which ones should receive fake data.
Run the capture flow once the data profile is loaded and the refresh cadence is stable.
Export slide notes so each screenshot can drop directly into PowerPoint with operator guidance.
```shell
npm install
npx playwright install chromium
./scripts/capture-demo.sh
```
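The slide-notes export could pair each captured screenshot with its operator guidance. The sketch below is a hypothetical generator: the capture filenames, the `talk_track` field, and the note layout are assumptions for illustration.

```python
# Hypothetical slide-note generator: pairs each captured screenshot with
# operator guidance so it can drop into PowerPoint speaker notes.
captures = [
    {"image": "01-platform-health.png",
     "talk_track": "Open on overall posture before drilling in."},
    {"image": "02-incident-drilldown.png",
     "talk_track": "Show how alert noise narrows to one resource type."},
]

def slide_notes(captures: list) -> str:
    """Render one note block per screenshot, in capture order."""
    lines = []
    for i, cap in enumerate(captures, start=1):
        lines.append(f"Slide {i}: {cap['image']}")
        lines.append(f"  Speaker notes: {cap['talk_track']}")
    return "\n".join(lines)

notes = slide_notes(captures)
```

Keeping the talk track next to the image name means the deck can be rebuilt later without re-deriving the narrative.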
| Metric | Why it matters |
| --- | --- |
| `puppet_reports.average_run_duration` | Longer runs usually precede trust erosion and missed windows. |
| `puppet_reports.succeeded_count / total_runs` | Converts technical execution into a trust metric for change processes. |
| `puppetserver.compile_time_p95` | Explains why runs feel worse even before broad failure rates climb. |
| `puppet_reports.corrective_count` | Measures how much of the platform is reacting to drift instead of planned change. |
| `puppet_reports.corrective_count` by node or environment | Reveals where declared state is least trustworthy. |
| `system_cpu.usage` | CPU pressure often explains compile or orchestrator slowdowns. |
| `puppet_data_connector` top `run_duration` by certname | Makes cross-node tuning conversations concrete. |
| `puppet_data_connector.event_count` / `change_count` / `failure_count` | Turns run detail into a tunable workload story. |
| `system_disk.percent_used` | Storage bottlenecks affect reports, DB performance, and stability. |
| `puppet_events.failure_count` by resource type | Shows whether failures cluster around one class of managed object. |
| `puppet_reports.failed_count / total_runs` | Highlights customer-facing risk faster than raw fail counts alone. |
| Service failure proxy from status and error counters | Creates a service-facing signal customers immediately understand. |
| Composite of failures, queue depth, and 5xx proxies | Gives an at-a-glance signal for whether operators should stay in incident mode. |
| `system_memory.percent_used` | Memory pressure can hide behind intermittent failures and restarts. |
| `puppet_inventory.node_count` by environment | Shows whether the dashboard scope matches the actual managed estate. |
| `orchestrator.deploy_queue_length` | Explains whether slow outcomes come from demand piling up. |
| `puppet_data_connector.patch_job_status` | Measures whether patch jobs reach the finish line reliably. |
| `puppet_data_connector.patch_node_error_count` | Prioritizes remediation for patching friction. |
| Derived `patch_job_status` over time | Connects patching to governance and change trust. |
| orchestrator / puppetdb / puppetserver service status | Separates service degradation from workload-driven slowdowns. |
| `puppet_data_connector.run_duration` | Useful for node-level comparison in larger estates. |
| `postgresql.connections` | Connection stress exposes database saturation early. |
| `orchestrator.deploy_queue_length` and throughput | Shows if operational demand is outrunning system capacity. |
| `puppet_data_connector.resource_evaluation_time` | Highlights catalog evaluation cost beyond total run time. |
| `puppet_reports.run_duration` top values | Outliers point to issues averages flatten away. |
| `puppet_reports.failed_count` | Fastest way to show that policy execution is no longer normal. |
| `puppet_reports.succeeded_count` | Shows whether configuration management is completing as expected. |
| `puppet_data_connector.patch_job_duration` | Duration matters as much as success during maintenance windows. |
| `puppet_reports.failed_count` by certname | Prioritizes follow-up by highest operational pain. |
| `puppet_inventory.node_count` | Establishes scale before discussing health or efficiency. |
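Several of the signals above are derived rather than raw: the success-rate trust metric divides two report counters, and the incident composite folds failures, queue depth, and 5xx proxies into one status. A minimal sketch of how those derivations might be computed; the thresholds are illustrative assumptions, not product defaults.

```python
def success_rate(succeeded_count: int, total_runs: int) -> float:
    """puppet_reports.succeeded_count / total_runs as a change-trust metric."""
    return succeeded_count / total_runs if total_runs else 0.0

def incident_signal(failed_count: int, deploy_queue_length: int, http_5xx: int) -> str:
    """Composite of failures, queue depth, and 5xx proxies.

    Thresholds below are illustrative assumptions chosen for the demo,
    not product defaults.
    """
    score = 0
    score += 2 if failed_count > 20 else (1 if failed_count > 5 else 0)
    score += 2 if deploy_queue_length > 50 else (1 if deploy_queue_length > 10 else 0)
    score += 2 if http_5xx > 0 else 0
    if score >= 4:
        return "incident"   # operators should stay in incident mode
    if score >= 2:
        return "watch"      # localized trouble, worth a drill-down
    return "ok"
```

Expressing the composite as a tiny scoring function makes it easy to show a customer why the light changed, instead of presenting an opaque health color.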
Agents should only work with the provided fake data profiles, dashboard JSON, and exported local notes. Never paste real customer telemetry into the builder.
Use the discovery prompts first, finalize the dashboard plan second, and only then ask the agent to run capture or export commands.
Have the agent export slide notes, fake data commands, and customer handoff artifacts instead of relying on implicit state.
Treat the generated dashboard pack as a draft. Review metric naming, refresh cadence, and persona mapping before customer handoff.
No automatic saves. No remote persistence. No customer data retention. Notes export only when the operator chooses to download them.
Use for live command demos or incident walk-throughs.
Use for TAM-led discovery when change is visible but calmer pacing helps.
Use for leadership scorecards and broader platform reviews.
Use for static screenshots and exported customer starter packs.
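The four motions above imply a refresh preset each. The mapping below is a sketch: the interval values and preset keys are assumptions, not the builder's shipped defaults.

```python
# Illustrative mapping of demo motion to refresh cadence; the interval values
# and key names are assumptions, not the builder's shipped presets.
REFRESH_PRESETS = {
    "live": "5s",        # live command demos or incident walk-throughs
    "discovery": "30s",  # TAM-led discovery with calmer pacing
    "leadership": "5m",  # leadership scorecards and broader platform reviews
    "static": "off",     # static screenshots and exported starter packs
}

def refresh_for(motion: str) -> str:
    # Assume discovery pacing as the fallback for unknown motions.
    return REFRESH_PRESETS.get(motion, "30s")
```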
```mermaid
flowchart LR
    A[Discovery prompts] --> B[Select personas and dashboards]
    B --> C[Choose fake data profiles]
    C --> D[Set refresh cadence per dashboard]
    D --> E[Open live Grafana demo]
    D --> F[Run screenshot capture flow]
    D --> G[Export customer dashboard pack]
    A --> H[Local-only session export]
    F --> I[PowerPoint slide notes]
    G --> J[Customer installation handoff]
```