
Lifecycle hooks are currently in beta. Contact support to get on the waitlist.
Lifecycle hooks let admins run automated checks at key points in the Superblocks development workflow — on commit, before deployment, after a build, or any combination. When a check fails, the results are sent back to Clark to fix automatically, and admins control which checks are blocking versus advisory.

Hooks are the foundation for enforcing standards on AI-generated code. Instead of building one-off guardrails, admins configure reusable checks that run consistently across every application and every builder in the organization.

Hooks are complementary to Knowledge. Knowledge tells Clark what to do while it writes code, but instructions alone aren’t enough — you need multiple layers of defense, and hooks verify that Clark actually followed them. Custom AI agents that run as part of a hook can also access your organization’s Knowledge, so the policies and standards you’ve already written become the evaluation criteria for automated checks.

How hooks work

A hook is a check that runs automatically when a specific event occurs in the application lifecycle. Each hook has three parts:
  • Trigger: The event that starts the check — on commit, on edit, before deployment, after build
  • Check: What runs — a static analysis tool, a custom AI agent, an API call, etc.
  • Action: What happens with the results — block the next step, warn the builder, or report to Clark for auto-remediation
Admins configure hooks at the organization level. Builders don’t need to set anything up — hooks run automatically in the background.
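As a concrete illustration, here is what a minimal bash-script check might look like. Everything in it is hypothetical: the script scans the files it is given for a hardcoded AWS access key pattern, and it assumes a check signals pass or fail through its exit code (0 = pass, nonzero = fail), which is an illustrative convention rather than documented Superblocks behavior.

```shell
# Hypothetical commit-time check: scan the files passed as arguments
# for hardcoded AWS access key IDs. The exit-code convention
# (0 = pass, nonzero = fail) is an assumption for illustration.
scan_for_keys() {
  status=0
  for f in "$@"; do
    if grep -Eq 'AKIA[0-9A-Z]{16}' "$f"; then
      echo "FAIL: possible AWS access key in $f"
      status=1
    fi
  done
  return $status
}
```

A check like this maps cleanly onto the three parts above: the trigger decides when it runs, the script is the check, and the configured action decides what a nonzero exit does.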

What you can run

Hooks support four types of checks:
  • Bash scripts: Run any CLI tool or script — Semgrep, SonarQube, Wiz, or your own internal tooling. If it runs in a shell, you can use it as a check.
  • Custom AI agents: Dedicated agents with their own instructions, context window, and access to your organization’s knowledge. These handle multi-step checks that require deep analysis — mapping architecture, evaluating exploitability, or filtering false positives.
  • Prompt hooks: A single-turn LLM call that evaluates a yes/no question — “does this query modify production data?”, “does this match the design system?” Fast and cheap for checks that need judgment but not deep analysis.
  • API requests: Call an external service — a webhook, an internal compliance endpoint, or any HTTP API — and use the response to pass or fail the check.
You can combine multiple check types in a single hook. For example, run Semgrep first for fast deterministic scanning, then pass the results to an AI agent that filters false positives and assesses severity in context.
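A minimal sketch of that layered shape, with stand-ins for both stages. In a real hook the first stage might be Semgrep and the second a custom AI agent; the function names and the grep-based stages here are hypothetical placeholders:

```shell
# Sketch of a layered check: a fast deterministic scan produces
# candidate findings, then a second stage filters them before the
# check decides pass/fail. Both stages are stand-ins.
run_scan() {
  # Stand-in for a deterministic scanner (e.g. Semgrep):
  # emit one candidate finding per line.
  grep -HnE 'eval\(' "$@" || true
}
filter_findings() {
  # Stand-in for the filtering stage: drop findings in test
  # files, where the flagged pattern is considered acceptable.
  grep -v '_test\.' || true
}
layered_check() {
  findings=$(run_scan "$@" | filter_findings)
  if [ -n "$findings" ]; then
    echo "$findings"
    return 1   # findings that survive filtering fail the check
  fi
  return 0
}
```

The design point is that the expensive or judgment-based stage only sees what the cheap deterministic stage surfaced, rather than the whole codebase.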

Use cases

Lifecycle hooks power several built-in capabilities, and you can create your own.

Security reviews

Scan code for vulnerabilities, secrets, insecure patterns, and dependency issues. High-severity findings block deployment by default; Clark automatically remediates issues before they reach production. Learn more about security reviews

Performance optimization

Run performance checks against changed code — flagging expensive queries, N+1 patterns, unnecessary data fetching, and rendering bottlenecks.

Design enforcement

Validate that generated UI code follows your organization’s design system — correct component usage, spacing and color tokens, typography, and layout patterns.
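As a rough sketch, a bash-script check for one slice of this could flag raw hex colors where named tokens are expected. The token convention (`var(--color-*)`) and the function name are assumptions, not part of any particular design system:

```shell
# Hypothetical design-system check: flag raw hex colors in the
# given stylesheets, where the design system expects named color
# tokens (e.g. var(--color-primary)). Convention is assumed.
check_color_tokens() {
  offenders=$(grep -HnE '#[0-9a-fA-F]{6}' "$@" || true)
  if [ -n "$offenders" ]; then
    echo "$offenders"
    echo "Use design-system color tokens instead of raw hex values."
    return 1
  fi
  return 0
}
```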

Coding standards

Enforce naming conventions, architectural patterns, and code quality rules across all applications — ensuring consistency regardless of which builder or AI model generated the code.