Documentation Index

Fetch the complete documentation index at: https://docs.superblocks.com/llms.txt

Use this file to discover all available pages before exploring further.

Superblocks on Databricks means your builders create apps while your data, inference, and compute stay within your existing Databricks security boundary. Every prompt, every query, and every file operation runs on Databricks infrastructure — governed by Unity Catalog and your existing workspace policies.

Platform mapping

| Superblocks feature | Databricks service | Description |
| --- | --- | --- |
| Database | LakeBase | Each app gets its own managed LakeBase database with automatic schema migrations on deploy |
| Inference | AI Gateway | Clark AI runs inference through your Databricks AI Gateway, burning down your existing commitment and reducing Clark credit usage |
| App AI | AI Gateway | Runtime AI features in deployed apps route through your AI Gateway automatically |
| File Store | Unity Catalog Volumes | Each app gets its own managed file storage governed through Unity Catalog |
| Data queries | SQL Warehouse | APIs that query your Lakehouse data run compute on your SQL Warehouse |

How it works in practice

A builder describes the app they want to Clark. From that point forward, every interaction runs on your Databricks infrastructure:
  • Building the app: Every prompt the builder sends to Clark runs inference through your AI Gateway
  • Querying data: Clark generates SQL queries that execute on your SQL Warehouse
  • Storing data: When the app needs to persist data, Clark provisions a LakeBase database - isolated per app, with clean separation between development and production
  • Running in production: End users interact with apps backed by your SQL Warehouse, AI Gateway, and LakeBase - all running on your infrastructure
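The inference routing above can be pictured with a short sketch. Databricks serving endpoints expose an OpenAI-compatible API under `/serving-endpoints` on the workspace host; the hostname, endpoint name, and environment variable below are placeholders, not values from this document:

```python
def gateway_base_url(workspace_host: str) -> str:
    # Databricks serving endpoints speak the OpenAI chat-completions
    # protocol under /serving-endpoints on the workspace host.
    return f"https://{workspace_host}/serving-endpoints"

# Hypothetical usage with the OpenAI client (illustrative only):
#
#   import os
#   from openai import OpenAI
#   client = OpenAI(
#       base_url=gateway_base_url("my-workspace.cloud.databricks.com"),
#       api_key=os.environ["DATABRICKS_TOKEN"],
#   )
#   client.chat.completions.create(
#       model="my-gateway-endpoint",
#       messages=[{"role": "user", "content": "Build me a dashboard"}],
#   )

print(gateway_base_url("my-workspace.cloud.databricks.com"))
```

Because every prompt takes this route, inference traffic is metered and governed the same way as any other workload hitting your AI Gateway.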

Database branching

LakeBase gives every app separate development and production databases. Builders work against a development instance while editing, and Superblocks automatically provisions and migrates the production database when they publish. Test data never reaches production, and the builder never needs to think about it.
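The dev/prod separation can be illustrated with a small, self-contained analogy. Here sqlite stands in for LakeBase, and the `provision`, `migrate`, and `publish` names are hypothetical, not Superblocks APIs:

```python
import sqlite3

def provision(app: str, env: str) -> sqlite3.Connection:
    # Each app gets an isolated database per environment
    # (named in-memory databases here, LakeBase instances in reality).
    return sqlite3.connect(f"file:{app}-{env}?mode=memory&cache=shared", uri=True)

MIGRATIONS = [
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)",
]

def migrate(conn: sqlite3.Connection) -> None:
    for stmt in MIGRATIONS:
        conn.execute(stmt)

def publish(app: str) -> sqlite3.Connection:
    # On publish, a fresh production database is provisioned and the
    # same schema migrations are applied; dev data never comes along.
    prod = provision(app, "prod")
    migrate(prod)
    return prod

dev = provision("inventory-app", "dev")
migrate(dev)
dev.execute("INSERT INTO orders (total) VALUES (9.99)")  # test data

prod = publish("inventory-app")
dev_rows = dev.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
prod_rows = prod.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(dev_rows, prod_rows)  # dev holds test data; prod starts empty
```

The point of the sketch: schema travels from development to production through migrations, while data does not.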

What this means for your organization

  • Reuse your Unity Catalog governance. The permissions and policies you have already built in Unity Catalog apply automatically to every Superblocks app. Data access, file storage, and compute are all governed by Unity Catalog - no separate access control layer to maintain. Superblocks integrates with Databricks OAuth Token Federation so every query runs under the user's identity.
  • Data stays in your security boundary. All application data, AI inference, and file storage remain on Databricks infrastructure you already manage. No data leaves your approved security boundary for processing.
  • One platform to observe everything. All app activity - queries, inference calls, file operations - runs on Databricks services, so it shows up in your existing Databricks observability and audit tooling. No blind spots from external services.
  • Burn down your existing commitment. Inference, compute, and storage all run on Databricks infrastructure, contributing to your existing Databricks commitment and reducing Clark credit usage.
  • Deploy as Databricks Apps. Apps built with Superblocks can be deployed directly as Databricks Apps, giving users a native experience inside their Databricks workspace.

Get started

  1. Configure your database backend to use LakeBase
  2. Point Clark inference at your Databricks AI Gateway
  3. Set App AI to route runtime AI through your AI Gateway
  4. Configure file storage to use Unity Catalog Volumes
  5. Set up a Databricks SQL integration for Lakehouse queries
  6. Optionally, deploy apps to Databricks Apps
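Step 5 can be sketched with the `databricks-sql-connector` package. The hostname, warehouse ID, token variable, and table name below are placeholders, and the connection snippet is illustrative rather than a prescribed setup:

```python
def warehouse_http_path(warehouse_id: str) -> str:
    # Databricks SQL Warehouses are addressed by an HTTP path of this form.
    return f"/sql/1.0/warehouses/{warehouse_id}"

# Hypothetical usage with the databricks-sql-connector package:
#
#   import os
#   from databricks import sql
#   with sql.connect(
#       server_hostname="my-workspace.cloud.databricks.com",
#       http_path=warehouse_http_path("abc123def456"),
#       access_token=os.environ["DATABRICKS_TOKEN"],
#   ) as conn:
#       with conn.cursor() as cur:
#           cur.execute("SELECT * FROM main.sales.orders LIMIT 10")
#           rows = cur.fetchall()

print(warehouse_http_path("abc123def456"))
```

Queries issued through this integration run compute on the SQL Warehouse, so Unity Catalog permissions on the underlying tables apply as usual.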