What
Struct is an AI on-call agent that automates the investigation steps in an engineering on-call runbook. It cross-references logs, metrics, traces, and your codebase to root-cause engineering alerts and bugs, responding with a suspected root cause, an impact analysis, and a suggested fix.
It is positioned for fast-moving software teams that rely on observability, alerting, and work-tracking tools. The emphasized workflow: connect key data sources, let Struct auto-investigate new alerts as they occur, then review the evidence and act, either from Slack or from deeper investigation views (timelines, commit history, log queries) backed by AI investigation reports.
Features
- Broad stack context ingestion: Pulls context from across observability/alerting, cloud logs, and work tools (examples listed include Sentry, Datadog, Slack, Linear, Asana, GitHub) to reduce time spent switching systems during incidents.
- Automatic alert investigation: Automatically investigates engineering alerts as they occur and replies with a root cause, impact analysis, and suggested fix to speed initial triage.
- On-demand investigations via Slack mentions: Supports triggering an investigation by @mentioning Struct, enabling quick checks without leaving team chat.
- Evidence review and deeper exploration: Lets engineers review collected evidence and test hypotheses in Slack or via incident timelines, commit histories, and log queries backed by AI investigation reports.
- PR creation and handoff support: Provides one-click creation of PRs (with a claim that they “always build cleanly”) and the ability to hand off tasks to a coding agent with full context included.
- Security and data handling claims: States data is logically isolated, not used for training, encrypted, and that the product is SOC2 Type II and HIPAA compliant (with more details referenced at trust.struct.ai).
Helpful Tips
- Validate source coverage early: Before rollout, confirm your primary alerting/observability tools and log sources are supported in practice for your stack (the site lists examples and “all leading” platforms, but you should verify your exact setup).
- Define what “good” looks like for investigations: Establish internal expectations for what a useful auto-investigation should include (suspected cause, impacted services/users, relevant links, and a concrete next step) so outputs are consistently actionable.
- Start with high-signal alert classes: Begin with recurring, well-instrumented alerts where logs/metrics/traces and deploy/commit context are reliable; expand to noisier categories after tuning.
- Plan for human review and escalation paths: Treat AI-generated root cause and fixes as suggestions; ensure on-call owners have a clear process for confirming, escalating, and documenting outcomes.
- Align security review to your requirements: If compliance is a factor, map Struct’s stated SOC2/HIPAA posture and data-training claims to your vendor assessment checklist and required controls.
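The "define what good looks like" tip above can be made concrete as a lightweight internal schema your team checks investigation outputs against. This is a minimal sketch; the class and field names are illustrative conventions, not part of Struct's API.

```python
from dataclasses import dataclass


@dataclass
class InvestigationReport:
    """Illustrative record of what a useful auto-investigation should include."""
    alert_id: str
    suspected_cause: str           # a hypothesis, not a confirmed diagnosis
    impacted_services: list[str]   # services (or user segments) affected
    evidence_links: list[str]      # logs, traces, commits backing the hypothesis
    next_step: str                 # one concrete action for the on-call owner

    def is_actionable(self) -> bool:
        # A report is only consistently useful if every field is populated.
        return all([self.suspected_cause, self.impacted_services,
                    self.evidence_links, self.next_step])
```

Gating on `is_actionable()` during rollout gives reviewers an objective bar for deciding when an auto-investigation is ready to act on versus when it needs human follow-up.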
OpenClaw Skills
Struct could be a strong upstream signal source for OpenClaw-style operational workflows because its core output is structured incident context (root cause hypotheses, impact analysis, suggested fixes, and evidence links) produced from cross-referenced telemetry and code/work data. A likely use case (not a confirmed native integration) is an OpenClaw incident-coordinator skill that listens for Struct investigation summaries in Slack, normalizes them into a standard incident record, and automatically updates ticketing/runbooks with the evidence Struct collected.
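The normalization step described above could look roughly like the following. This is a hypothetical sketch: the incoming message shape and every field name are assumptions, since neither Struct nor OpenClaw documents such a payload in the source.

```python
# Hypothetical sketch: normalize a Struct-style investigation summary
# (e.g. captured from a Slack message) into a flat incident record that
# a ticketing/runbook updater could consume. Field names are assumed.

def normalize_summary(message: dict) -> dict:
    """Map an investigation-summary payload into a standard incident record."""
    return {
        "source": "struct",
        "alert": message.get("alert_name", "unknown"),
        "root_cause": message.get("root_cause", ""),
        "impact": message.get("impact_analysis", ""),
        "suggested_fix": message.get("suggested_fix", ""),
        "evidence": message.get("evidence_links", []),
    }


record = normalize_summary({
    "alert_name": "checkout-5xx-spike",
    "root_cause": "Null config value introduced by recent deploy",
    "evidence_links": ["https://example.com/trace/1"],
})
```

Keeping the record flat and defaulting missing fields to empty values means downstream ticketing updates never fail on a partial summary; they just surface the gaps for a human to fill in.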
Additional likely OpenClaw skills could include: (1) a “fix orchestration” skill that takes Struct’s suggested fix and routes it to the right owner or coding agent while enforcing internal guardrails (branching strategy, required approvals, rollback notes), and (2) a “post-incident synthesis” skill that combines Struct’s timeline and commit history with your internal templates to draft incident reports and create follow-up tasks. If implemented, this combination could reduce manual triage and make the path from alert to diagnosis, remediation, and documentation more consistent, while keeping human ownership of verification and decision-making.
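The "fix orchestration" guardrails described above can be sketched as a simple policy gate that runs before any suggested fix is handed to a coding agent. Everything here is an assumption for illustration: the required fields, the approval threshold, and the function name are not from Struct or OpenClaw.

```python
# Hypothetical guardrail sketch: a suggested fix is only dispatched to a
# coding agent if internal policy is satisfied (an owner is assigned, a
# rollback note exists, and enough human approvals were recorded).

REQUIRED_FIELDS = ("owner", "rollback_note")  # assumed internal policy


def route_fix(fix: dict, approvals: int, required_approvals: int = 2) -> str:
    """Decide whether a suggested fix may be handed off, and to whom."""
    missing = [f for f in REQUIRED_FIELDS if not fix.get(f)]
    if missing:
        return f"blocked: missing {', '.join(missing)}"
    if approvals < required_approvals:
        return "pending: needs human approval"
    return f"dispatched to {fix['owner']}"
```

The point of the sketch is the ordering: completeness checks run before approval checks, so a fix is never queued for review while it is still missing the information a human would need to approve it.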