diff --git a/.github/pages/docs/assets/step1-install.png b/.github/pages/docs/assets/step1-install.png
new file mode 100644
index 00000000..ebd5f7b7
Binary files /dev/null and b/.github/pages/docs/assets/step1-install.png differ
diff --git a/.github/pages/docs/assets/step2-explore.png b/.github/pages/docs/assets/step2-explore.png
new file mode 100644
index 00000000..1e1fc78f
Binary files /dev/null and b/.github/pages/docs/assets/step2-explore.png differ
diff --git a/.github/pages/docs/assets/step4-pipeline.png b/.github/pages/docs/assets/step4-pipeline.png
new file mode 100644
index 00000000..a242e341
Binary files /dev/null and b/.github/pages/docs/assets/step4-pipeline.png differ
diff --git a/.github/pages/docs/assets/step5-dashboard.png b/.github/pages/docs/assets/step5-dashboard.png
new file mode 100644
index 00000000..ad942fcc
Binary files /dev/null and b/.github/pages/docs/assets/step5-dashboard.png differ
diff --git a/.github/pages/docs/assets/step6-agent.png b/.github/pages/docs/assets/step6-agent.png
new file mode 100644
index 00000000..f63c05b5
Binary files /dev/null and b/.github/pages/docs/assets/step6-agent.png differ
diff --git a/.github/pages/docs/assets/step7-app.png b/.github/pages/docs/assets/step7-app.png
new file mode 100644
index 00000000..eb3b3009
Binary files /dev/null and b/.github/pages/docs/assets/step7-app.png differ
diff --git a/.github/pages/docs/index.md b/.github/pages/docs/index.md
new file mode 100644
index 00000000..af154527
--- /dev/null
+++ b/.github/pages/docs/index.md
@@ -0,0 +1,42 @@
+---
+hide:
+ - navigation
+ - toc
+---
+
+
+
+
+
+# AI Dev Kit
+
+Give your AI coding assistant superpowers on Databricks.
+{.tagline}
+
+[Start the Quickstart :material-arrow-right:](quickstart.md){.cta-primary}
+
+
+
+
+<div class="stats-bar">
+<div class="stat"><span class="num">27</span><span class="label">Skills</span></div>
+<div class="stat"><span class="num">50+</span><span class="label">MCP Tools</span></div>
+<div class="stat"><span class="num">~60 min</span><span class="label">Quickstart</span></div>
+</div>
+
+
+<p class="landing-desc">The AI Dev Kit gives AI coding assistants the knowledge and tools to build on Databricks. Skills teach patterns. MCP Tools execute them. You talk, it builds.</p>
+
+| Step | What you build | Time |
+|:----:|---------------|-----:|
+| **1** | Install the AI Dev Kit | 2 min |
+| **2** | Explore and profile your data | 5 min |
+| **3** | Generate a sample dataset | 5 min |
+| **4** | Build a medallion data pipeline (SDP) | 15 min |
+| **5** | Create an AI/BI dashboard | 10 min |
+| **6** | Deploy a Knowledge Assistant (RAG agent) | 15 min |
+| **7** | Build a full-stack Databricks App | 10 min |
+
+
+<p class="landing-prereqs">Prerequisites: uv, Databricks CLI (authenticated), and an AI assistant (Claude Code, Cursor, or Gemini CLI).</p>
+
+[Start the Quickstart :material-arrow-right:](quickstart.md){.cta-primary}
+
+
diff --git a/.github/pages/docs/overrides/main.html b/.github/pages/docs/overrides/main.html
new file mode 100644
index 00000000..94d9808c
--- /dev/null
+++ b/.github/pages/docs/overrides/main.html
@@ -0,0 +1 @@
+{% extends "base.html" %}
diff --git a/.github/pages/docs/quickstart.md b/.github/pages/docs/quickstart.md
new file mode 100644
index 00000000..33369d50
--- /dev/null
+++ b/.github/pages/docs/quickstart.md
@@ -0,0 +1,253 @@
+---
+hide:
+ - navigation
+---
+
+
+
+# Quickstart
+
+## <span class="num">1</span> Install the AI Dev Kit <span class="dur">2 min</span>
+
+You need **uv**, the **Databricks CLI** (authenticated), and an AI coding assistant.
+
+=== "Mac / Linux"
+
+ ```bash
+ bash <(curl -sL https://raw.githubusercontent.com/databricks-solutions/ai-dev-kit/main/install.sh)
+ ```
+
+=== "Windows"
+
+ ```powershell
+ irm https://raw.githubusercontent.com/databricks-solutions/ai-dev-kit/main/install.ps1 | iex
+ ```
+
+Follow the interactive prompts, then open your AI assistant from the same directory.
+
+!!! success "What you'll see"
+ Your project now has `.claude/skills/` (27 Databricks skills) and `.claude/mcp.json` (MCP server config). Your AI assistant can talk to Databricks.
+
+
+
+<div class="step-screenshot"><img src="assets/step1-install.png" alt="Install the AI Dev Kit"></div>
+
+
+---
+
+## <span class="num">2</span> Explore your data <span class="dur">5 min</span>
+
+Discover what's in your workspace. Open your AI assistant and paste:
+
+!!! example "Prompt"
+ ```
+ What catalogs and schemas are available in my Databricks workspace?
+ Show me the tables in each schema with their column names and row counts.
+ ```
+
+Now pick a table and profile it:
+
+!!! example "Prompt"
+ ```
+ Profile the table `main.default.my_table`. Show total rows, null counts
+ per column, value distributions for categoricals (top 10), and
+ min/max/mean for numerics. Include 5 sample rows.
+ ```
+
+!!! tip
+ Replace `main.default.my_table` with a real table. If your workspace is empty, the next step generates sample data.
+
+
+
+<div class="step-screenshot"><img src="assets/step2-explore.png" alt="Explore your data"></div>
+
+
+---
+
+## <span class="num">3</span> Generate sample data <span class="dur">5 min</span>
+
+If you need data for the rest of this quickstart:
+
+!!! example "Prompt"
+ ```
+ Generate a realistic e-commerce dataset in my workspace:
+ - main.quickstart.customers — 10,000 rows (name, email, signup_date, segment)
+ - main.quickstart.orders — 50,000 rows (order_id, customer_id, order_date, total_amount, status)
+ - main.quickstart.products — 500 rows (product_id, name, category, price)
+
+ Use realistic distributions, not uniform random. Make sure foreign keys are valid.
+ ```
+
+!!! success "What you'll see"
+ Three tables created in Unity Catalog with realistic distributions, immediately queryable.
+
+Try asking a question:
+
+!!! example "Prompt"
+ ```
+ What are the top 10 product categories by total revenue?
+ Break it down by month for the last 6 months.
+ ```
+
+---
+
+## <span class="num">4</span> Build a data pipeline <span class="dur">15 min</span>
+
+Create a production-ready Spark Declarative Pipeline with the medallion architecture.
+
+!!! example "Prompt"
+ ```
+ Create a new Spark Declarative Pipeline using Databricks Asset Bundles:
+
+ - Python (not SQL)
+ - Medallion architecture: bronze → silver → gold
+ - Serverless compute
+ - Target: main.quickstart schema
+
+ Bronze: ingest from orders and customers tables
+ Silver: clean nulls, join orders with customers, add order_year/month columns
+ Gold: materialized views for monthly_revenue (by month + segment)
+ and customer_lifetime_value
+
+ Initialize with `databricks pipelines init`, then deploy and run it.
+ ```
+
+!!! success "What you'll see"
+ The assistant scaffolds a DAB project with bronze/silver/gold Python files, deploys it, triggers a run, and shows pipeline status as each table processes.
+
+!!! info "How skills help"
+ The assistant loaded `databricks-spark-declarative-pipelines` and `databricks-bundles` skills, which taught it correct SDP patterns, serverless defaults, and Asset Bundle structure. Without these skills, it would guess — and often get it wrong.
+
+
+
+<div class="step-screenshot"><img src="assets/step4-pipeline.png" alt="Build a data pipeline"></div>
+
+
+---
+
+## <span class="num">5</span> Create a dashboard <span class="dur">10 min</span>
+
+Build an AI/BI dashboard from the gold tables your pipeline just created.
+
+!!! example "Prompt"
+ ```
+ Create an AI/BI dashboard called "Quickstart: Sales Overview" using main.quickstart:
+
+ 1. Counter: total revenue
+ 2. Counter: total orders
+ 3. Line chart: monthly revenue trend (last 12 months)
+ 4. Bar chart: revenue by customer segment
+ 5. Table: top 20 customers by lifetime value
+ 6. Date range filter on all charts
+
+ Test all SQL queries before deploying.
+ ```
+
+!!! success "What you'll see"
+ The assistant follows the mandatory validation workflow: get schemas → write SQL → **test every query** → build dashboard JSON → deploy. Returns a URL to the live dashboard.
+
+Iterate by asking:
+
+!!! example "Prompt"
+ ```
+ Update the dashboard: change the monthly chart to a stacked area by segment,
+ and add a second page "Customers" with a scatter plot of order frequency
+ vs average order value.
+ ```
+
+!!! info "Why validation matters"
+ The `databricks-aibi-dashboards` skill enforces SQL testing before deployment. Without it, widgets show "Invalid widget definition" errors. Skills encode hard-won best practices.
+
+
+
+<div class="step-screenshot"><img src="assets/step5-dashboard.png" alt="Create a dashboard"></div>
+
+
+---
+
+## <span class="num">6</span> Deploy an AI agent <span class="dur">15 min</span>
+
+Create a Knowledge Assistant — a RAG-based agent that answers questions from documents.
+
+!!! example "Prompt"
+ ```
+ Create a Knowledge Assistant called "Quickstart FAQ Bot":
+
+ 1. Generate 20 sample FAQ documents (pricing, features, returns, shipping, support)
+    2. Upload to UC volume main.quickstart.faq_docs
+ 3. Create a Vector Search endpoint and index for the documents
+ 4. Create the Knowledge Assistant using Foundation Model APIs
+ 5. Deploy to a serving endpoint
+ 6. System prompt: "Answer questions based only on the FAQ documents. If unsure, say so."
+
+ Test it with: "What is your return policy?" and "How much does enterprise cost?"
+ ```
+
+!!! success "What you'll see"
+ The assistant creates the knowledge base, vector index, and agent, deploys it, then runs test queries showing responses with source attribution.
+
+
+
+<div class="step-screenshot"><img src="assets/step6-agent.png" alt="Deploy an AI agent"></div>
+
+
+**Alternative** — for SQL-based data Q&A, try a Genie Space instead:
+
+!!! example "Prompt"
+ ```
+ Create a Genie Space called "Sales Genie" that lets users ask natural
+ language questions about the quickstart tables (orders, customers, products).
+ Add sample questions and curation instructions.
+ ```
+
+---
+
+## <span class="num">7</span> Build a full-stack app <span class="dur">10 min</span>
+
+Bring everything together in a Databricks App.
+
+!!! example "Prompt"
+ ```
+ Create a Databricks App called "quickstart-explorer" with FastAPI + React (APX pattern):
+
+ - Page 1 "Explorer": catalog/schema browser + SQL query editor with results table
+ - Page 2 "Dashboard": line chart of monthly_revenue from the gold table,
+ filterable by segment, auto-refreshes every 30s
+ - Page 3 "Chat": chat interface connected to the FAQ Bot serving endpoint,
+ with streaming responses and source document cards
+
+ Set up app.yaml with SQL warehouse and serving endpoint resources,
+ then deploy to Databricks Apps.
+ ```
+
+!!! success "What you'll see"
+ The assistant scaffolds a complete project (FastAPI backend + React frontend), configures app.yaml with resource permissions, deploys it, and returns the live app URL.
+
+
+
+<div class="step-screenshot"><img src="assets/step7-app.png" alt="Build a full-stack app"></div>
+
+
+---
+
+
+
+## You're done.
+
+**Data exploration** → **Pipeline** → **Dashboard** → **AI Agent** → **Full-stack App** — all through conversation.
+
+The AI Dev Kit has [27 skills](reference/skills.md) and [50+ MCP tools](reference/mcp-tools.md). Just ask your assistant to build something — skills activate automatically.
+
+!!! example "Ideas to try next"
+ ```
+ Create a scheduled Databricks job that runs my pipeline every hour
+ and sends a Slack notification on failure.
+ ```
+
+ ```
+ Set up MLflow evaluation for my FAQ Bot. Create 10 test questions and
+ measure correctness, retrieval relevance, and faithfulness.
+ ```
+
+ ```
+ Add a Lakebase PostgreSQL database to my app for storing user preferences
+ and query history.
+ ```
+
+
+
+
diff --git a/.github/pages/docs/reference/builder-app.md b/.github/pages/docs/reference/builder-app.md
new file mode 100644
index 00000000..62b5aaa0
--- /dev/null
+++ b/.github/pages/docs/reference/builder-app.md
@@ -0,0 +1,36 @@
+# Visual Builder App
+
+A web-based chat UI for building on Databricks — powered by Claude Code under the hood. Same skills and tools as the CLI experience, but through a browser.
+
+---
+
+## What it does
+
+- Chat with an AI assistant to build Databricks resources
+- Manage projects with file editing and preview
+- Back up and restore project state to PostgreSQL
+
+## Architecture
+
+| Layer | Technology |
+|-------|-----------|
+| Frontend | React + Vite |
+| Backend | FastAPI + Uvicorn |
+| Database | PostgreSQL (Lakebase) |
+| AI Agent | Claude Code via claude-agent-sdk |
+
+## Setup
+
+```bash
+cd ai-dev-kit/databricks-builder-app
+./scripts/setup.sh
+```
+
+## Deploy to Databricks Apps
+
+```bash
+databricks apps create my-builder
+databricks apps deploy my-builder
+```
+
+See the full [Builder App README](https://github.com/databricks-solutions/ai-dev-kit/tree/main/databricks-builder-app) for detailed setup, configuration, and troubleshooting.
diff --git a/.github/pages/docs/reference/mcp-tools.md b/.github/pages/docs/reference/mcp-tools.md
new file mode 100644
index 00000000..f09e1056
--- /dev/null
+++ b/.github/pages/docs/reference/mcp-tools.md
@@ -0,0 +1,74 @@
+# MCP Tools
+
+The MCP server exposes 50+ tools that let your AI assistant **execute actions** against your Databricks workspace. These are the functions it calls when you ask it to run SQL, create a dashboard, or deploy a job.
+
+---
+
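+Your assistant discovers these tools through the MCP server registered in `.claude/mcp.json`. The installer writes the actual entry for you; the sketch below only illustrates the general shape of such a registration — the server name and command here are invented, not the literal values the installer uses:

```json
{
  "mcpServers": {
    "databricks": {
      "command": "uvx",
      "args": ["databricks-mcp-server"]
    }
  }
}
```
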
+## SQL
+
+| Tool | What it does |
+|------|-------------|
+| `execute_sql` | Run a SQL query, return results |
+| `execute_sql_multi` | Run multiple queries in parallel |
+| `list_warehouses` | List available SQL warehouses |
+| `get_table_details` | Get table schemas for a catalog/schema |
+| `get_best_warehouse` | Auto-select the best available warehouse |
+
+## Compute
+
+| Tool | What it does |
+|------|-------------|
+| `list_clusters` | List all clusters |
+| `execute_databricks_command` | Run code on a cluster |
+| `run_python_file_on_databricks` | Upload and execute a Python file |
+
+## Files
+
+| Tool | What it does |
+|------|-------------|
+| `upload_file` | Upload a file to a UC volume or workspace |
+| `upload_folder` | Upload an entire folder |
+
+## Jobs & Workflows
+
+| Tool | What it does |
+|------|-------------|
+| `create_job` | Create a new job |
+| `run_job_now` | Trigger a job run |
+| `get_job` | Get job details |
+| `list_jobs` | List all jobs |
+| `wait_for_run` | Wait for a run to complete |
+| `get_run` | Get run status and results |
+
+## Pipelines (SDP/DLT)
+
+| Tool | What it does |
+|------|-------------|
+| `create_or_update_pipeline` | Create or update a pipeline |
+| `run_pipeline` | Trigger a pipeline update |
+| `get_pipeline` | Get pipeline status |
+
+## AI/BI Dashboards
+
+| Tool | What it does |
+|------|-------------|
+| `create_or_update_dashboard` | Deploy a dashboard |
+| `get_dashboard` | Get details or list all dashboards |
+| `publish_dashboard` | Publish or unpublish |
+| `delete_dashboard` | Move to trash |
+
+## AI Agents
+
+| Tool | What it does |
+|------|-------------|
+| `manage_ka` | Create/manage Knowledge Assistants |
+| `manage_mas` | Create/manage Supervisor Agents |
+| `create_or_update_genie` | Create/manage Genie Spaces |
+| `find_genie_by_name` | Find a Genie Space by name |
+
+## Model Serving
+
+| Tool | What it does |
+|------|-------------|
+| `query_serving_endpoint` | Query a model serving endpoint |
+| `get_serving_endpoint_status` | Check endpoint status |
diff --git a/.github/pages/docs/reference/skills.md b/.github/pages/docs/reference/skills.md
new file mode 100644
index 00000000..0ba19db5
--- /dev/null
+++ b/.github/pages/docs/reference/skills.md
@@ -0,0 +1,70 @@
+# Skills Catalog
+
+Skills are markdown files that teach your AI assistant Databricks patterns. They load automatically — you never need to reference them directly. When you ask "create a dashboard," the assistant reads the dashboard skill and follows the right workflow.
+
+---
+
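+Concretely, a skill is a folder containing a `SKILL.md` whose frontmatter tells the assistant when to load it. The sketch below follows the common agent-skill convention; the field values are illustrative, not copied from this repo's files:

```markdown
---
name: databricks-example-skill
description: Use when the user asks to build or debug X on Databricks.
---

# Example skill

Step-by-step patterns, commands, and pitfalls the assistant should follow.
```
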
+## AI & Agents
+
+| Skill | What it teaches |
+|-------|----------------|
+| `databricks-ai-functions` | Built-in AI Functions (ai_classify, ai_extract, ai_summarize, ai_query, ai_forecast) |
+| `databricks-agent-bricks` | Knowledge Assistants, Genie Spaces, Supervisor Agents |
+| `databricks-genie` | Genie Space creation, curation, and Conversation API |
+| `databricks-model-serving` | Deploy MLflow models and AI agents to serving endpoints |
+| `databricks-unstructured-pdf-generation` | Generate synthetic PDFs for RAG testing |
+| `databricks-vector-search` | Vector index configuration, querying, and RAG patterns |
+
+## Analytics & Dashboards
+
+| Skill | What it teaches |
+|-------|----------------|
+| `databricks-aibi-dashboards` | AI/BI dashboard creation with mandatory SQL validation workflow |
+| `databricks-unity-catalog` | System tables for lineage, audit, and billing |
+| `databricks-metric-views` | Governed business metrics in YAML |
+| `databricks-dbsql` | Advanced SQL warehouse features |
+
+## Data Engineering
+
+| Skill | What it teaches |
+|-------|----------------|
+| `databricks-spark-declarative-pipelines` | SDP/DLT — streaming tables, materialized views, CDC, SCD Type 2 |
+| `databricks-spark-structured-streaming` | Spark Structured Streaming patterns |
+| `databricks-jobs` | Multi-task workflows, triggers, schedules |
+| `databricks-synthetic-data-gen` | Realistic test data generation with Faker |
+| `databricks-iceberg` | Apache Iceberg tables, UniForm, REST Catalog |
+| `databricks-zerobus-ingest` | Near real-time ingestion via gRPC |
+
+## Development & Deployment
+
+| Skill | What it teaches |
+|-------|----------------|
+| `databricks-bundles` | Databricks Asset Bundles for multi-environment CI/CD |
+| `databricks-app-apx` | Full-stack apps (FastAPI + React) |
+| `databricks-app-python` | Python web apps (Dash, Streamlit, Flask, Gradio) |
+| `databricks-python-sdk` | Python SDK, Databricks Connect, CLI, REST API |
+| `databricks-config` | Profile authentication setup |
+| `databricks-lakebase-provisioned` | Managed PostgreSQL for OLTP workloads |
+| `databricks-lakebase-autoscale` | Autoscaling managed PostgreSQL |
+| `databricks-execution-compute` | Cluster and serverless compute management |
+
+## MLflow
+
+From [mlflow/skills](https://github.com/mlflow/skills):
+
+| Skill | What it teaches |
+|-------|----------------|
+| `agent-evaluation` | End-to-end agent evaluation workflow |
+| `analyze-mlflow-chat-session` | Debug multi-turn conversations |
+| `analyze-mlflow-trace` | Debug traces, spans, and assessments |
+| `instrumenting-with-mlflow-tracing` | Add MLflow tracing to Python/TypeScript |
+| `mlflow-onboarding` | MLflow setup for new users |
+| `querying-mlflow-metrics` | Aggregated metrics and time-series analysis |
+| `retrieving-mlflow-traces` | Trace search and filtering |
+| `searching-mlflow-docs` | Search MLflow documentation |
+
+## Reference
+
+| Skill | What it teaches |
+|-------|----------------|
+| `databricks-docs` | Documentation index via llms.txt |
diff --git a/.github/pages/docs/reference/tools-core.md b/.github/pages/docs/reference/tools-core.md
new file mode 100644
index 00000000..d9d318e1
--- /dev/null
+++ b/.github/pages/docs/reference/tools-core.md
@@ -0,0 +1,86 @@
+# Core Library
+
+`databricks-tools-core` is the Python library that powers the MCP server. You can also use it directly in your own projects — with LangChain, OpenAI Agents, FastAPI, or any Python framework.
+
+---
+
+## Install
+
+```bash
+pip install databricks-ai-dev-kit
+```
+
+## Quick example
+
+```python
+from databricks_tools_core.sql import execute_sql
+from databricks_tools_core.auth import set_auth
+
+set_auth(profile="DEFAULT")
+results = execute_sql("SELECT * FROM main.default.my_table LIMIT 10")
+```
+
+## Modules
+
+| Module | Key functions |
+|--------|--------------|
+| `sql` | `execute_sql`, `execute_sql_multi`, `list_warehouses`, `get_table_details` |
+| `jobs` | `create_job`, `run_job_now`, `get_job`, `list_jobs`, `wait_for_run` |
+| `unity_catalog` | `list_catalogs`, `list_schemas`, `list_tables` |
+| `compute` | `list_clusters`, `execute_command` |
+| `spark_declarative_pipelines` | `create_or_update_pipeline`, `run_pipeline`, `get_pipeline` |
+
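+The modules return plain Python data, so they compose with ordinary code. As a small illustration — assuming, for this sketch only, that `execute_sql` yields rows as a list of dicts (check the API docs for the actual return shape) — a helper that renders a result set as a markdown table:

```python
def rows_to_markdown(rows):
    """Render a list of row dicts as a GitHub-flavored markdown table."""
    if not rows:
        return "(no rows)"
    headers = list(rows[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

# e.g. print(rows_to_markdown(execute_sql("SELECT * FROM main.default.my_table LIMIT 10")))
print(rows_to_markdown([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]))
```
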
+## Authentication
+
+```python
+from databricks_tools_core.auth import set_auth
+
+# CLI profile (default)
+set_auth(profile="DEFAULT")
+
+# Explicit credentials
+set_auth(host="https://my-workspace.cloud.databricks.com", token="dapi...")
+```
+
+Multi-user safe: auth state is stored with Python `contextvars`, so concurrent requests and multi-user servers each see their own credentials.
+
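+The isolation claim can be seen in a standalone sketch — this is just the `contextvars` pattern, not the library's actual implementation, and `set_auth_sketch`/`current_auth_sketch` are hypothetical stand-ins:

```python
import contextvars

# Hypothetical stand-in for the library's internal auth state.
_auth = contextvars.ContextVar("auth", default=None)

def set_auth_sketch(profile):
    _auth.set(profile)

def current_auth_sketch():
    return _auth.get()

# Each copied context gets an isolated view of the variable.
ctx_a = contextvars.copy_context()
ctx_b = contextvars.copy_context()
ctx_a.run(set_auth_sketch, "DEFAULT")
ctx_b.run(set_auth_sketch, "PROD")

print(ctx_a.run(current_auth_sketch))  # DEFAULT
print(ctx_b.run(current_auth_sketch))  # PROD
print(current_auth_sketch())           # None — the outer context is untouched
```
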
+## Framework integration
+
+=== "LangChain"
+
+ ```python
+ from langchain.tools import tool
+ from databricks_tools_core.sql import execute_sql
+
+ @tool
+ def query_databricks(query: str) -> str:
+ """Execute a SQL query on Databricks."""
+ return execute_sql(query)
+ ```
+
+=== "OpenAI Agents"
+
+ ```python
+ from agents import function_tool
+ from databricks_tools_core.sql import execute_sql
+
+ @function_tool
+ def query_databricks(query: str) -> str:
+ """Execute a SQL query on Databricks."""
+ return execute_sql(query)
+ ```
+
+=== "FastAPI"
+
+ ```python
+ from fastapi import FastAPI
+ from databricks_tools_core.sql import execute_sql
+
+ app = FastAPI()
+
+ @app.get("/query")
+ def run_query(sql: str):
+ return execute_sql(sql)
+ ```
+
+See the full [API documentation](https://github.com/databricks-solutions/ai-dev-kit/tree/main/databricks-tools-core) for details.
diff --git a/.github/pages/docs/stylesheets/extra.css b/.github/pages/docs/stylesheets/extra.css
new file mode 100644
index 00000000..23c99d04
--- /dev/null
+++ b/.github/pages/docs/stylesheets/extra.css
@@ -0,0 +1,144 @@
+/* ─── Databricks Brand ─── */
+:root {
+ --md-primary-fg-color: #FF3621;
+ --md-primary-fg-color--light: #FF6B59;
+ --md-primary-fg-color--dark: #CC2B1A;
+ --md-accent-fg-color: #FF3621;
+ --db-red: #FF3621;
+}
+[data-md-color-scheme="slate"] {
+ --md-primary-fg-color: #FF3621;
+ --md-primary-fg-color--light: #FF6B59;
+ --md-primary-fg-color--dark: #CC2B1A;
+ --md-accent-fg-color: #FF6B59;
+}
+
+/* ─── Global ─── */
+.md-content__inner { max-width: 820px; margin: 0 auto; }
+.md-typeset h1 { font-weight: 800; letter-spacing: -0.02em; }
+.md-typeset h2 { font-weight: 700; margin-top: 1.8em; }
+.md-typeset hr { margin: 1.5em 0; }
+
+/* ─── CTA Button ─── */
+.cta-primary {
+ display: inline-block;
+ background: var(--db-red);
+ color: #fff !important;
+ padding: 0.7rem 1.8rem;
+ border-radius: 8px;
+ font-weight: 700;
+ font-size: 0.95rem;
+ text-decoration: none !important;
+ transition: transform 0.15s, box-shadow 0.15s;
+}
+.cta-primary:hover {
+ transform: translateY(-2px);
+ box-shadow: 0 6px 20px rgba(255, 54, 33, 0.3);
+}
+
+/* ─── Landing: whole page centered ─── */
+.landing {
+ text-align: center;
+ max-width: 680px;
+ margin: 0 auto;
+}
+.landing .md-typeset__table, .landing table { margin: 0 auto; }
+.landing-hero {
+ padding: 2.5rem 0 1.5rem;
+}
+.landing-hero h1 {
+ font-size: 2.6rem;
+ font-weight: 800;
+ letter-spacing: -0.03em;
+ line-height: 1.1;
+ margin-bottom: 0.5rem;
+}
+.landing-hero .tagline {
+ font-size: 1.05rem;
+ opacity: 0.6;
+ margin-bottom: 1.5rem;
+}
+
+/* ─── Landing: Stats ─── */
+.stats-bar {
+ display: flex;
+ justify-content: center;
+ gap: 2.5rem;
+ padding: 1rem 0;
+ margin: 0 auto 1rem;
+ max-width: 420px;
+ border-top: 1px solid var(--md-default-fg-color--lightest);
+ border-bottom: 1px solid var(--md-default-fg-color--lightest);
+}
+.stat { text-align: center; }
+.stat .num { display: block; font-size: 1.5rem; font-weight: 800; color: var(--db-red); }
+.stat .label { font-size: 0.7rem; opacity: 0.5; text-transform: uppercase; letter-spacing: 0.06em; }
+
+/* ─── Landing: description + prereqs ─── */
+.landing-desc {
+ font-size: 0.95rem;
+ line-height: 1.6;
+ opacity: 0.8;
+ margin: 1rem auto 1.2rem;
+ max-width: 560px;
+}
+.landing-prereqs {
+ font-size: 0.85rem;
+ opacity: 0.7;
+ margin: 1rem auto 1.2rem;
+}
+
+/* ─── Quickstart: step headings ─── */
+.md-typeset .qs h2 {
+ display: flex;
+ align-items: center;
+ gap: 0.6rem;
+ font-size: 1.3rem;
+ margin-top: 0;
+ padding-bottom: 0.5rem;
+ border-bottom: 2px solid var(--md-default-fg-color--lightest);
+}
+.md-typeset .qs h2 .num {
+ display: inline-flex;
+ align-items: center;
+ justify-content: center;
+ width: 32px;
+ height: 32px;
+ border-radius: 50%;
+ background: var(--db-red);
+ color: #fff;
+ font-size: 0.8rem;
+ font-weight: 800;
+ flex-shrink: 0;
+}
+.md-typeset .qs h2 .dur {
+ margin-left: auto;
+ font-size: 0.75rem;
+ font-weight: 400;
+ opacity: 0.45;
+}
+
+/* ─── Quickstart: tighter admonition spacing ─── */
+.md-typeset .qs .admonition { margin-top: 0.5em; margin-bottom: 0.5em; }
+
+/* ─── Quickstart: step screenshots ─── */
+.step-screenshot {
+ margin: 0.8em 0;
+ border-radius: 6px;
+ overflow: hidden;
+ border: 1px solid var(--md-default-fg-color--lightest);
+}
+.step-screenshot img {
+ display: block;
+ width: 100%;
+ height: auto;
+}
+
+/* ─── Completion banner ─── */
+.completion-banner {
+ text-align: center;
+ padding: 1.5rem 1rem;
+ max-width: 600px;
+ margin: 0 auto;
+}
+.completion-banner h2 { border: none !important; font-size: 1.6rem; }
diff --git a/.github/pages/mkdocs.yml b/.github/pages/mkdocs.yml
new file mode 100644
index 00000000..8cbd607f
--- /dev/null
+++ b/.github/pages/mkdocs.yml
@@ -0,0 +1,79 @@
+site_name: Databricks AI Dev Kit
+site_description: Quickstart — AI-Driven Development on Databricks
+site_url: https://databricks-solutions.github.io/ai-dev-kit/
+repo_url: https://github.com/databricks-solutions/ai-dev-kit
+repo_name: databricks-solutions/ai-dev-kit
+
+theme:
+ name: material
+ custom_dir: docs/overrides
+ palette:
+ - media: "(prefers-color-scheme: light)"
+ scheme: default
+ primary: custom
+ accent: red
+ toggle:
+ icon: material/brightness-7
+ name: Switch to dark mode
+ - media: "(prefers-color-scheme: dark)"
+ scheme: slate
+ primary: custom
+ accent: red
+ toggle:
+ icon: material/brightness-4
+ name: Switch to light mode
+ font:
+ text: DM Sans
+ code: JetBrains Mono
+ features:
+ - navigation.instant
+ - navigation.tracking
+ - navigation.top
+ - content.code.copy
+ - content.code.annotate
+ - content.tabs.link
+ - search.highlight
+ - toc.follow
+ icon:
+ repo: fontawesome/brands/github
+
+extra_css:
+ - stylesheets/extra.css
+
+markdown_extensions:
+ - admonition
+ - pymdownx.details
+ - pymdownx.superfences
+ - pymdownx.tabbed:
+ alternate_style: true
+ - pymdownx.highlight:
+ anchor_linenums: true
+ line_spans: __span
+ pygments_lang_class: true
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - pymdownx.emoji:
+ emoji_index: !!python/name:material.extensions.emoji.twemoji
+ emoji_generator: !!python/name:material.extensions.emoji.to_svg
+ - attr_list
+ - md_in_html
+ - tables
+ - toc:
+ permalink: true
+
+plugins:
+ - search
+
+nav:
+ - Home: index.md
+ - Quickstart: quickstart.md
+ - Reference:
+ - Skills Catalog: reference/skills.md
+ - MCP Tools: reference/mcp-tools.md
+ - Core Library: reference/tools-core.md
+ - Visual Builder: reference/builder-app.md
+
+extra:
+ social:
+ - icon: fontawesome/brands/github
+ link: https://github.com/databricks-solutions/ai-dev-kit
diff --git a/.github/workflows/pages.yml b/.github/workflows/pages.yml
new file mode 100644
index 00000000..eea56c70
--- /dev/null
+++ b/.github/workflows/pages.yml
@@ -0,0 +1,40 @@
+name: Deploy docs to GitHub Pages
+
+on:
+ push:
+ branches: [main]
+ paths: [".github/pages/**"]
+ workflow_dispatch:
+
+permissions:
+ contents: read
+ pages: write
+ id-token: write
+
+concurrency:
+ group: pages
+ cancel-in-progress: false
+
+jobs:
+ build:
+    runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+ - run: pip install mkdocs-material
+ - run: mkdocs build -f .github/pages/mkdocs.yml -d ../../site
+ - uses: actions/upload-pages-artifact@v3
+ with:
+ path: site
+
+ deploy:
+ environment:
+ name: github-pages
+ url: ${{ steps.deployment.outputs.page_url }}
+ runs-on: ubuntu-latest
+ needs: build
+ steps:
+ - id: deployment
+ uses: actions/deploy-pages@v4