Merged
50 changes: 50 additions & 0 deletions .github/README.md
@@ -0,0 +1,50 @@
# CI/CD

## GitHub Actions Workflow

The CI workflow (`.github/workflows/ci.yml`) runs on every push to `main` and on pull requests targeting `main`. It consists of three parallel jobs:

### Jobs

| Job | What it does |
|-----|-------------|
| **Lint** | Runs `ruff check` and `ruff format --check` against `src/` |
| **Type Check** | Runs `mypy` against `src/` (installs the `dev` extra for type stubs) |
| **Integration Tests** | Runs the test suite against a live Teradata database |

All jobs install dependencies with `uv sync --frozen`, which fails instead of silently regenerating the lock file when `uv.lock` is out of date with `pyproject.toml`. If a job fails this way, run `uv lock` locally and commit the updated `uv.lock`.

### Running Checks Locally

```bash
# Lint
uv run ruff check src/
uv run ruff format --check src/

# Type check
uv sync --extra dev
uv run mypy src/

# Integration tests (requires a live Teradata connection)
export DATABASE_URI="teradata://user:pass@host:1025/database"
uv run python tests/run_mcp_tests.py "uv run teradata-mcp-server"
```

### Configuring the `DATABASE_URI` Secret

The integration test job requires a `DATABASE_URI` repository secret to connect to a Teradata instance. Without it, the test job logs a warning and skips.

To configure:

1. Go to **Settings > Secrets and variables > Actions** in the GitHub repository
2. Click **New repository secret**
3. Name: `DATABASE_URI`
4. Value: a Teradata connection URI, e.g. `teradata://user:pass@host:1025/database`
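When debugging a connection locally, it can help to sanity-check the URI before exporting it. A minimal sketch using only the standard library; the `parse_database_uri` helper is illustrative, not part of the server:

```python
from urllib.parse import urlsplit

def parse_database_uri(uri: str) -> dict:
    # Hypothetical helper: split a teradata:// URI into its components.
    # urlsplit parses the netloc (user:pass@host:port) for any scheme
    # as long as the URI contains "//".
    parts = urlsplit(uri)
    return {
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }

print(parse_database_uri("teradata://user:pass@host:1025/database"))
```

A missing port or database path shows up as `None` or an empty string here, which is usually the first thing to check when the integration tests cannot connect.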

The test database is only reachable from the Teradata VPN, so this secret should be added only by authorized Teradata personnel. If the secret is absent, the tests are simply skipped.

The test job is automatically skipped on fork PRs (where secrets are unavailable) to avoid failures.

### Concurrency

The workflow uses concurrency groups scoped to the branch/PR ref. If a new commit is pushed while a previous run is still in progress, the older run is cancelled to save CI minutes and avoid database contention.
65 changes: 65 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,65 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    name: Lint (Ruff)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
        with:
          enable-cache: true
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: uv sync --frozen --extra dev
      - run: uv run ruff check src/
      - run: uv run ruff format --check src/

  typecheck:
    name: Type Check (Mypy)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
        with:
          enable-cache: true
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: uv sync --frozen --extra dev
      - run: uv run mypy src/

  test:
    name: Integration Tests
    runs-on: ubuntu-latest
    if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
    # Job-level env so the secret is visible to the step-level `if:` checks
    # below; a step's own `env:` block is not available in that same step's
    # `if:` expression.
    env:
      DATABASE_URI: ${{ secrets.DATABASE_URI }}
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
        with:
          enable-cache: true
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: uv sync --frozen
      - name: Run integration tests
        if: ${{ env.DATABASE_URI != '' }}
        run: uv run python tests/run_mcp_tests.py "uv run teradata-mcp-server"
      - name: Warn if DATABASE_URI not configured
        if: ${{ env.DATABASE_URI == '' }}
        run: echo "::warning::DATABASE_URI secret is not configured — integration tests were skipped. See .github/README.md for setup instructions."
4 changes: 3 additions & 1 deletion src/teradata_mcp_server/app.py
@@ -554,7 +554,9 @@ def executor(**kwargs):
mcp.tool(name=full_func_name, description=doc_string)(func)

# Load YAML-defined tools/resources/prompts from config directory
custom_object_files: list[Any] = [config_dir / file for file in os.listdir(config_dir) if file.endswith("_objects.yml")]
custom_object_files: list[Any] = [
config_dir / file for file in os.listdir(config_dir) if file.endswith("_objects.yml")
]
if custom_object_files:
logger.info(
f"Found {len(custom_object_files)} custom object files in config directory: {[f.name for f in custom_object_files]}"
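The reformatted comprehension above can also be written with `pathlib` globbing, which avoids the `os.listdir` string filtering entirely. A sketch under the assumption that `config_dir` is a `pathlib.Path`; the `find_custom_object_files` name is hypothetical:

```python
from pathlib import Path

def find_custom_object_files(config_dir: Path) -> list[Path]:
    # Collect *_objects.yml files; sorting gives a deterministic load
    # order, whereas os.listdir order is filesystem-dependent.
    return sorted(config_dir.glob("*_objects.yml"))
```

Deterministic ordering matters here because later object files could override earlier definitions, so loads should not vary between machines.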
8 changes: 6 additions & 2 deletions src/teradata_mcp_server/tools/sql_opt/sql_opt_tools.py
@@ -100,7 +100,9 @@ def load_sql_clustering_config():
SQL_CLUSTERING_CONFIG = load_sql_clustering_config()


def handle_sql_Execute_Full_Pipeline(conn, optimal_k: int | None = None, max_queries: int | None = None, *args, **kwargs):
def handle_sql_Execute_Full_Pipeline(
conn, optimal_k: int | None = None, max_queries: int | None = None, *args, **kwargs
):
"""
**COMPLETE SQL QUERY CLUSTERING PIPELINE FOR HIGH-USAGE QUERY OPTIMIZATION**

@@ -503,7 +505,9 @@ def handle_sql_Execute_Full_Pipeline(conn, optimal_k: int | None = None, max_que
return create_response({"status": "success", "pipeline_completed": True}, metadata)


def handle_sql_Analyze_Cluster_Stats(conn, sort_by_metric: str = "avg_cpu", limit_results: int | None = None, *args, **kwargs):
def handle_sql_Analyze_Cluster_Stats(
conn, sort_by_metric: str = "avg_cpu", limit_results: int | None = None, *args, **kwargs
):
"""
**ANALYZE SQL QUERY CLUSTER PERFORMANCE STATISTICS**
