1 change: 1 addition & 0 deletions .vale/styles/Infrahub/sentence-case.yml
@@ -52,6 +52,7 @@ exceptions:
- Jinja
- Jinja2
- JWT
- MDX
- Namespace
- NATS
- Node
1 change: 1 addition & 0 deletions .vale/styles/spelling-exceptions.txt
@@ -79,6 +79,7 @@ kbps
Keycloak
Loopbacks
markdownlint
MDX
max_count
memgraph
menu_placement
2 changes: 1 addition & 1 deletion AGENTS.md
@@ -7,7 +7,7 @@ Infrahub Python SDK - async/sync client for Infrahub infrastructure management.
```bash
uv sync --all-groups --all-extras # Install all deps
uv run invoke format # Format code
uv run invoke lint # All linters (code + yamllint + documentation)
uv run invoke lint # Full pipeline: ruff, yamllint, ty, mypy, markdownlint, vale
uv run invoke lint-code # All linters for Python code
uv run pytest tests/unit/ # Unit tests
uv run pytest tests/integration/ # Integration tests
1 change: 1 addition & 0 deletions changelog/497.fixed.md
@@ -0,0 +1 @@
Fixed Python SDK query generation for attribute values generated via `from_pool`
92 changes: 92 additions & 0 deletions dev/commands/feedback.md
@@ -0,0 +1,92 @@
# Session Feedback

Analyze this conversation and identify what documentation or context was missing, incomplete, or incorrect. The goal is to continuously improve the project's knowledge base so future conversations are more efficient.

## Step 1: Session Analysis

Reflect on the work done in this conversation. For each area, identify friction points:

1. **Exploration overhead**: What parts of the codebase did you have to discover by searching that should have been documented? (e.g., patterns, conventions, module responsibilities)
2. **Wrong assumptions**: Did you make incorrect assumptions due to missing or misleading documentation?
3. **Repeated patterns**: Did you discover recurring patterns or conventions that aren't documented anywhere?
4. **Missing context**: What background knowledge would have helped you start faster? (e.g., architecture decisions, data flow, naming conventions)
5. **Tooling gaps**: Were there commands, scripts, or workflows that you had to figure out?

## Step 2: Documentation Audit

For each friction point identified, determine the appropriate fix. Check the existing documentation to avoid duplicating what's already there:

- `AGENTS.md` — Top-level project instructions and component map
- `CLAUDE.md` — Entry point referencing AGENTS.md
- `docs/AGENTS.md` — Documentation site guide
- `infrahub_sdk/ctl/AGENTS.md` — CLI development guide
- `infrahub_sdk/pytest_plugin/AGENTS.md` — Pytest plugin guide
- `tests/AGENTS.md` — Testing guide

Read the relevant existing files to understand what's already documented before proposing changes.

## Step 3: Generate Report

Present the feedback as a structured report with the following sections. Only include sections that have content — skip empty sections.

### Format

```markdown
## Session Feedback Report

### What I Was Working On
<!-- Brief summary of the task(s) performed in this conversation -->

### Documentation Gaps
<!-- Things that should be documented but aren't -->

For each gap:

- **Topic**: What's missing
- **Where**: Which file should contain this (existing file to update, or new file to create)
- **Why**: How this would have helped during this conversation
- **Suggested content**: A draft of what should be added (be specific and actionable)

### Documentation Corrections
<!-- Things that are documented but wrong or misleading -->

For each correction:

- **File**: Path to the file
- **Issue**: What's wrong or misleading
- **Fix**: What it should say instead

### Discovered Patterns
<!-- Conventions or patterns found in the code that aren't documented -->

For each pattern:

- **Pattern**: Description of the convention
- **Evidence**: Where in the code this pattern is used (file paths)
- **Where to document**: Which AGENTS.md or guide file should capture this

### Memory Updates
<!-- Propose additions/changes to MEMORY.md for cross-session persistence -->

For each update:

- **Action**: Add / Update / Remove
- **Content**: What to write
- **Reason**: Why this is worth remembering across sessions
```

## Step 4: Apply Changes

After presenting the report, ask the user which changes they want to apply. Present the options:

1. **Apply all** — Create/update all proposed documentation files and memory
2. **Cherry-pick** — Let the user select which changes to apply
3. **None** — Just keep the report as reference, don't modify any files


For approved changes:

- Edit existing files when updating documentation
- Create new files only when no appropriate existing file exists
- Update `MEMORY.md` with approved memory changes
- Keep all changes minimal and focused — don't over-document
36 changes: 36 additions & 0 deletions dev/commands/pre-ci.md
@@ -0,0 +1,36 @@
Run a subset of fast CI checks locally. These are lightweight validations that catch common issues before pushing. Run all steps and report a summary at the end.

## Steps

1. **Format** Python code:
```bash
uv run invoke format
```

2. **Lint** (YAML, Ruff, ty, mypy, markdownlint, vale):
```bash
uv run invoke lint
```

3. **Python unit tests**:
```bash
uv run pytest tests/unit/
```

4. **Docs unit tests** (vitest):
```bash
(cd docs && npx --no-install vitest run)
```

5. **Validate generated documentation** (regenerate and check for drift):
```bash
uv run invoke docs-validate
```

## Instructions

- Run each step in order using the Bash tool.
- If a step fails, continue with the remaining steps.
- At the end, print a summary table of all steps with pass/fail status.
- Do NOT commit or push anything.

6 changes: 3 additions & 3 deletions docs/AGENTS.md
@@ -1,4 +1,4 @@
# docs/AGENTS.md
# Documentation agents

Docusaurus documentation following Diataxis framework.

@@ -34,12 +34,12 @@ Sidebar navigation is dynamic: `sidebars-*.ts` files read the filesystem at buil

No manual sidebar update is needed when adding a new `.mdx` file. However, to control the display order of a new page, add its doc ID to the ordered list in the corresponding `sidebars-*.ts` file.

## Adding Documentation
## Adding documentation

1. Create MDX file in appropriate directory
2. Add frontmatter with `title`

## MDX Pattern
## MDX pattern

Use Tabs for async/sync examples, callouts for notes:

2 changes: 1 addition & 1 deletion docs/docs/python-sdk/guides/client.mdx
@@ -251,7 +251,7 @@ Your client is now configured to use the specified default branch instead of `ma

## Hello world example

Let's create a simple "Hello World" example to verify your client configuration works correctly. This example will connect to your Infrahub instance and query the available accounts.
Let's create a "Hello World" example to verify your client configuration works correctly. This example will connect to your Infrahub instance and query the available accounts.

1. Create a new file called `hello_world.py`:

2 changes: 1 addition & 1 deletion docs/docs/python-sdk/guides/python-typing.mdx
@@ -131,7 +131,7 @@ infrahubctl graphql generate-return-types queries/get_tags.gql

### Example workflow

1. **Create your GraphQL queries** in `.gql` files preferably in a directory (e.g., `queries/`):
1. **Create your GraphQL queries** in `.gql` files preferably in a directory (for example, `queries/`):

```graphql
# queries/get_tags.gql
12 changes: 12 additions & 0 deletions docs/docs/python-sdk/sdk_ref/infrahub_sdk/node/attribute.mdx
@@ -24,3 +24,15 @@ value(self) -> Any
```python
value(self, value: Any) -> None
```

#### `is_from_pool_attribute`

```python
is_from_pool_attribute(self) -> bool
```

Check whether this attribute's value is sourced from a resource pool.

**Returns:**

- True if the attribute value is a resource pool node or was explicitly allocated from a pool.
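A minimal sketch of how such a check might behave, using a stand-in attribute class (the real SDK class and its internals differ; the `from_pool` flag here is purely illustrative):

```python
class StubAttribute:
    """Stand-in for the SDK's attribute object; for illustration only."""

    def __init__(self, value, from_pool: bool = False):
        self.value = value
        self._from_pool = from_pool

    def is_from_pool_attribute(self) -> bool:
        # True when the value was allocated from a resource pool
        # rather than set directly.
        return self._from_pool


asn = StubAttribute(65001, from_pool=True)
name = StubAttribute("atl1")
print(asn.is_from_pool_attribute())   # True
print(name.is_from_pool_attribute())  # False
```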
20 changes: 10 additions & 10 deletions docs/docs/python-sdk/topics/object_file.mdx
@@ -68,13 +68,13 @@ spec:

> Multiple documents in a single YAML file are also supported; each document will be loaded separately. Documents are separated by `---`

### Data Processing Parameters
### Data processing parameters

The `parameters` field controls how the data in the object file is processed before loading into Infrahub:

| Parameter | Description | Default |
| -------------- | ------------------------------------------------------------------------------------------------------- | ------- |
| `expand_range` | When set to `true`, range patterns (e.g., `[1-5]`) in string fields are expanded into multiple objects. | `false` |
| Parameter | Description | Default |
| -------------- | -------------------------------------------------------------------------------------------------------------- | ------- |
| `expand_range` | When set to `true`, range patterns (for example, `[1-5]`) in string fields are expanded into multiple objects. | `false` |

When `expand_range` is not specified, it defaults to `false`.

@@ -208,9 +208,9 @@ Metadata support is planned for future releases. Currently, the Object file does
3. Validate object files before loading them into production environments.
4. Use comments in your YAML files to document complex relationships or dependencies.

## Range Expansion in Object Files
## Range expansion in object files

The Infrahub Python SDK supports **range expansion** for string fields in object files when the `parameters > expand_range` is set to `true`. This feature allows you to specify a range pattern (e.g., `[1-5]`) in any string value, and the SDK will automatically expand it into multiple objects during validation and processing.
The Infrahub Python SDK supports **range expansion** for string fields in object files when the `parameters > expand_range` is set to `true`. This feature allows you to specify a range pattern (for example, `[1-5]`) in any string value, and the SDK will automatically expand it into multiple objects during validation and processing.

```yaml
---
@@ -225,15 +225,15 @@ spec:
type: Country
```

### How Range Expansion Works
### How range expansion works

- Any string field containing a pattern like `[1-5]`, `[10-15]`, or `[1,3,5]` will be expanded into multiple objects.
- If multiple fields in the same object use range expansion, **all expanded lists must have the same length**. If not, validation will fail.
- The expansion is performed before validation and processing, so all downstream logic works on the expanded data.
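The expansion rules above can be sketched in plain Python. This is an illustrative approximation only, not the SDK's actual implementation:

```python
import re

def expand_ranges(value: str) -> list[str]:
    """Expand a single [1-5] or [1,3,5] pattern in a string into a list of strings.

    Illustrative sketch; the SDK's real expansion logic may differ.
    """
    match = re.search(r"\[([0-9,\-]+)\]", value)
    if not match:
        # No range pattern: the value expands to itself.
        return [value]
    items: list[str] = []
    for part in match.group(1).split(","):
        if "-" in part:
            # A span like "1-5" expands to every integer in the range.
            start, end = part.split("-")
            items.extend(str(i) for i in range(int(start), int(end) + 1))
        else:
            items.append(part)
    # Substitute each expanded item back into the original string.
    return [value[: match.start()] + item + value[match.end():] for item in items]


print(expand_ranges("atl[1-3]"))   # ['atl1', 'atl2', 'atl3']
print(expand_ranges("dc[1,3,5]")) # ['dc1', 'dc3', 'dc5']

# All expanded lists within one object must have the same length,
# otherwise validation fails.
assert len(expand_ranges("atl[1-3]")) == len(expand_ranges("ATL[1-3]"))
```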

### Examples

#### Single Field Expansion
#### Single field expansion

```yaml
spec:
@@ -256,7 +256,7 @@ This will expand to:
type: Country
```

#### Multiple Field Expansion (Matching Lengths)
#### Multiple field expansion (matching lengths)

```yaml
spec:
@@ -283,7 +283,7 @@ This will expand to:
type: Country
```

#### Error: Mismatched Range Lengths
#### Error: mismatched range lengths

If you use ranges of different lengths in multiple fields:
