
Overview

By default, Clark AI runs inference through models hosted in the Superblocks Cloud. With Bring Your Own Inference, administrators can redirect Clark to run against their organization’s own AI infrastructure instead.
Bring Your Own Inference is available on Cloud-Prem deployments only.
Once configured, every prompt that every builder sends to Clark runs inference on your provider. Tokens burn down your existing cloud commitment, whether that is an AWS Bedrock reservation, a Databricks AI Gateway commitment, or a Snowflake Cortex allocation, and they reduce Clark AI credit usage. Builders do not need to configure anything; they use Clark exactly as before.

Supported providers

Provider              | Description
--------------------- | -----------
Anthropic API         | Direct API access to Anthropic models
AWS Bedrock           | Managed inference through Amazon Bedrock
Custom AI Gateway     | Route through your own AI gateway for centralized policy, logging, and model management (see the sketch after this table)
Databricks AI Gateway | Inference routed through a Databricks-managed AI Gateway
Google Vertex AI      | Managed inference through Google Cloud Vertex AI
Snowflake Cortex      | Inference through Snowflake's built-in Cortex AI service
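
With the Custom AI Gateway option, Clark's requests are routed through an endpoint you operate. As a rough sketch of what that routing looks like at the wire level, the snippet below POSTs a chat request to an OpenAI-compatible gateway. The URL, token, and model alias are placeholders rather than Superblocks configuration values, and your gateway's actual API shape may differ.

```python
import requests

# Hypothetical OpenAI-compatible gateway endpoint and token; both are
# placeholders, not values Superblocks provides or requires.
GATEWAY_URL = "https://ai-gateway.example.com/v1/chat/completions"
GATEWAY_TOKEN = "your-gateway-token"

resp = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {GATEWAY_TOKEN}"},
    json={
        "model": "claude-sonnet",  # whatever alias your gateway maps to a model
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

A gateway in this position can enforce policy, log every prompt, and swap models without any change on the Superblocks side, which is the appeal of the centralized-gateway option.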

How it works

1. Admin configures the inference provider

In Organization Settings, an administrator selects their inference provider and supplies the necessary credentials: for example, an API endpoint and authentication token for a gateway, or IAM credentials for AWS Bedrock.
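
Before entering credentials in Organization Settings, it can be worth confirming they actually reach the provider. Below is a minimal smoke test for AWS Bedrock, assuming boto3 and AWS credentials in the environment; the region and model ID are placeholders, so substitute one your account has enabled.

```python
import json
import boto3

# Assumes AWS IAM credentials are available in the environment
# (env vars, a profile, or an instance role). Region and model ID
# are placeholders; use whatever your account has access to.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "ping"}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

If this call succeeds, the same credentials should work when supplied to Clark.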

2. All builders inherit the configuration

Once saved, every builder in the organization automatically uses the configured provider for all Clark interactions. There is no per-user or per-app setup required.

3. Inference burns down your commitment and reduces Clark credit usage

Every prompt a builder sends — whether exploring a database schema, generating a plan, or writing code — runs inference on your provider. Token consumption counts toward your existing cloud commitment and reduces Clark AI credit usage.
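
As a back-of-the-envelope illustration of burn-down, the sketch below prices one prompt's tokens against a commitment. The per-token prices are made up for the example and are not Superblocks or provider pricing; your provider reports the actual token counts and rates.

```python
# Hypothetical per-1K-token prices; real rates depend on your provider
# and negotiated commitment, and are not Superblocks values.
INPUT_PRICE_PER_1K = 0.003
OUTPUT_PRICE_PER_1K = 0.015

def commitment_burn(input_tokens: int, output_tokens: int) -> float:
    """Dollars credited against the cloud commitment for one prompt."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# e.g. one schema-exploration prompt: 2,400 tokens in, 600 tokens out
print(f"${commitment_burn(2400, 600):.4f}")  # $0.0162
```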

What this means for your organization

  • Burn down your existing commitment: Inference runs on your provider, contributing to your cloud commitment and reducing Clark credit consumption
  • Inference runs in your network: For providers like Bedrock and Vertex, all model inference executes within your cloud account
  • No builder friction: Builders use Clark identically regardless of the underlying provider