

Overview

Builders use AI across every type of app they create. For example, drafting justifications for approval workflows, summarizing support tickets, generating images for product catalogs, transcribing call recordings, classifying inbound requests, and more. App AI lets administrators set the provider and model that powers all of these features across the entire organization, so builders get instant access without configuring anything themselves.
This is separate from Bring Your Own Inference, which controls where Clark runs inference during development. App AI controls where apps run inference when end users interact with AI-powered features in production.
| Setting | Controls | Who is affected |
| --- | --- | --- |
| Bring Your Own Inference | Clark’s inference during app development | Builders prompting Clark |
| App AI | Runtime AI inside deployed apps | End users of published apps |

How it works

1. Admin sets the default provider

In Organization Settings, an administrator configures the AI integration and model that all apps should use by default. This includes the provider endpoint and authentication credentials.
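As a rough sketch, the org-level default boils down to a provider, a model, an endpoint, and a credential reference. The field names and values below are illustrative assumptions, not Superblocks' actual configuration schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIProviderConfig:
    """Org-wide default AI provider (illustrative fields only)."""
    provider: str     # e.g. "openai" or an AI gateway
    model: str        # model identifier at that provider
    endpoint: str     # provider endpoint URL
    api_key_ref: str  # reference to a stored credential, not the secret itself

# An administrator sets this once for the whole organization.
org_default = AIProviderConfig(
    provider="openai",
    model="gpt-4o",
    endpoint="https://api.openai.com/v1",
    api_key_ref="secrets/org/openai",
)
```

Keeping a credential *reference* rather than the raw key in the config mirrors the common pattern of resolving secrets from a vault at request time.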

2. Configure per modality

Administrators can select a different integration and model for each AI modality:
| Modality | Example use cases |
| --- | --- |
| Text generation | Drafting content, summarization, classification, chat |
| Voice | Transcription, text-to-speech |
| Image generation | Creating visuals, diagrams, thumbnails |
This lets you optimize for cost and capability. For example, you might route text generation through a cost-efficient model on your AI gateway while using a specialized provider for image generation. You can also connect to your own custom AI gateway for centralized policy and model management.
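Conceptually, per-modality configuration is a lookup from modality to integration. A minimal sketch, assuming hypothetical provider and model names (none of these identifiers come from Superblocks):

```python
# Hypothetical per-modality defaults; provider and model names are
# illustrative placeholders, not real Superblocks identifiers.
MODALITY_DEFAULTS = {
    "text": {"provider": "ai-gateway", "model": "cost-efficient-llm"},
    "voice": {"provider": "speech-vendor", "model": "transcribe-v2"},
    "image": {"provider": "image-vendor", "model": "diffusion-xl"},
}

def resolve_default(modality: str) -> dict:
    """Look up the org-wide default integration for an AI modality."""
    if modality not in MODALITY_DEFAULTS:
        raise ValueError(f"no default configured for modality: {modality!r}")
    return MODALITY_DEFAULTS[modality]
```

The point of the split is visible in the data: "text" can point at a cheap gateway-hosted model while "image" points at a specialized vendor, and neither choice leaks into the app itself.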

3. Builders get AI capabilities automatically

When a builder adds an AI-powered feature to their app — for example, a “Generate with AI” button that drafts a rationale or summarizes data — the feature automatically uses the organization’s default for that modality. The builder does not need to select a model or configure credentials.

4. End users trigger inference at runtime

When an end user clicks a button or triggers a feature that calls the AI provider, the request goes directly to the configured provider. For example, if the text generation default is set to a Databricks AI Gateway running GPT-4, every text AI interaction in every published app routes through that gateway.
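Continuing the GPT-4-on-Databricks example, runtime resolution might look like the sketch below. This is an illustration of the routing decision, not Superblocks' actual implementation:

```python
# Illustrative runtime routing. The deployed app resolves the org-wide
# default for the request's modality and forwards the end user's
# request to that provider.
DEFAULTS = {
    "text": ("databricks-ai-gateway", "gpt-4"),
}

def route_request(modality: str, prompt: str) -> str:
    provider, model = DEFAULTS[modality]
    # A real implementation would make an authenticated HTTP call to the
    # provider's endpoint here; this returns the routing decision instead.
    return f"{provider}/{model}: {prompt}"
```

Because the lookup happens at request time, an administrator can swap the gateway or model and every published app picks up the change without redeployment, under the assumptions of this sketch.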

What this means for your organization

  • One configuration, every app: Set the provider once per modality and every AI-powered feature across all apps inherits it
  • Cost control per modality: Choose cost-efficient models for high-volume text generation and specialized models for image or voice
  • Builders stay focused: No need for individual builders to manage API keys or select models
  • Inference on your terms: Runtime AI calls go to your provider, drawing down your existing spend commitments
  • Consistent governance: All AI features in all apps are routed through governed endpoints you control