diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md
index 0aa41178..e548daa1 100644
--- a/.claude/CLAUDE.md
+++ b/.claude/CLAUDE.md
@@ -9,6 +9,7 @@ These directories are **shared tooling and templates**. Do not modify them for g
 - `retools/` — static analysis toolkit (shared tooling)
 - `livetools/` — Frida-based dynamic analysis (shared tooling)
 - `graphics/` — DX9 tracer framework (shared tooling)
+- `renderdoctools/` — RenderDoc capture analysis toolkit (shared tooling)

 **Per-game work goes in `patches/<game>/`.** When starting a new game, copy `rtx_remix_tools/dx/remix-comp-proxy/` (excluding `build/`) to `patches/<game>/` and edit the copy. If the user says "edit remix-comp-proxy code" without specifying, ask whether they mean the template or a game copy.
@@ -105,3 +106,4 @@ When working on any of the following — invoke the **`dx9-ffp-port` skill** imm
 - **Subagent workflow and delegation rules**: @.claude/rules/subagent-workflow.md
 - **DX9 FFP proxy porting for RTX Remix**: `.claude/skills/dx9-ffp-port/SKILL.md` (invoke `dx9-ffp-port` skill, not auto-loaded)
 - **Frida-based dynamic analysis**: `/dynamic-analysis` skill
+- **RenderDoc capture analysis**: `/renderdoc-analysis` skill
diff --git a/.claude/rules/tool-dispatch.md b/.claude/rules/tool-dispatch.md
index 4f9ea296..97411beb 100644
--- a/.claude/rules/tool-dispatch.md
+++ b/.claude/rules/tool-dispatch.md
@@ -60,6 +60,17 @@ Under `rtx_remix_tools/dx/scripts/`. Use BEFORE retools for D3D9 questions.
Run
- `find_blend_states.py $B` — D3DRS_VERTEXBLEND / INDEXEDVERTEXBLENDENABLE + WORLDMATRIX transforms
- `scan_d3d_region.py $B 0xSTART 0xEND` — D3D calls in code region
+## RenderDoc capture analysis (main agent)
+
+- `python -m renderdoctools events <capture.rdc>` — list events/draw calls
+- `python -m renderdoctools analyze <capture.rdc> --summary` — capture overview
+- `python -m renderdoctools pipeline <capture.rdc> --event EID` — pipeline state
+- `python -m renderdoctools textures <capture.rdc> --event EID` — bound textures
+- `python -m renderdoctools shaders <capture.rdc> --event EID` — shader disassembly
+- `python -m renderdoctools mesh <capture.rdc> --event EID` — vertex data
+- `python -m renderdoctools counters <capture.rdc>` — GPU counters
+- `python -m renderdoctools open <capture.rdc>` — launch RenderDoc GUI
+
## dx9tracer

- Capture (main agent): `python -m graphics.directx.dx9.tracer trigger --game-dir <game-dir>`
diff --git a/.claude/skills/dynamic-analysis/SKILL.md b/.claude/skills/dynamic-analysis/SKILL.md
index 4e87f5e6..acf512be 100644
--- a/.claude/skills/dynamic-analysis/SKILL.md
+++ b/.claude/skills/dynamic-analysis/SKILL.md
@@ -394,3 +394,15 @@ python -m livetools analyze scene.jsonl --export-csv scene.csv
 7. **Use modules to find DLL bases.** Before hooking a DLL function (e.g. D3D9 vtable), use `modules` to find the actual loaded base address.
 8. **Composable pipeline.** `trace` captures raw records. `collect` streams them to disk. `analyze` aggregates offline. Chain them for any investigation.
+
+9. **Hook the game's CALL instruction, not the DLL function.** To trace a D3D9 method (or any API call), find the `call [reg+offset]` or `call <imm>` instruction *in the game's code* via `xrefs.py` or `vtable.py calls`. Hook THAT address. Do NOT compute the target address inside d3d9.dll and hook there — the arguments are arranged at the caller, and the DLL entry point is shared across all callers.
+
+10. **Zero hits means something is wrong — diagnose, don't give up.** If trace/collect returns 0 samples: (a) Ask the user: is the game window focused and actively rendering? (b) Verify the address: `disasm <addr>` in livetools — confirm real code exists there. (c) Try a known-hot address: `dipcnt callers 10` finds confirmed active call sites; trace one to prove the hook pipeline works. (d) Only after all three pass should you reconsider whether the original address is actually called during gameplay.
+
+---
+
+## Anti-Patterns
+
+**Do NOT chase the "real" device pointer.** When working with D3D9, do NOT: read a device pointer from a global, dereference its vtable, compute `d3d9.dll_base + slot_offset`, and hook that address. This hooks inside the DLL where arguments are not in the expected layout and proxy/wrapper DLLs break the vtable chain. Hook the game's CALL instruction instead (pattern #9).
+
+**Do NOT explain away zero data.** If a trace returns 0 samples, the answer is "I got no data and need to troubleshoot" — not "the game doesn't appear to use this code path." Follow the escalation in pattern #10.
diff --git a/.claude/skills/renderdoc-analysis/SKILL.md b/.claude/skills/renderdoc-analysis/SKILL.md
new file mode 100644
index 00000000..30a47334
--- /dev/null
+++ b/.claude/skills/renderdoc-analysis/SKILL.md
@@ -0,0 +1,251 @@
+---
+name: renderdoc-analysis
+description: RenderDoc-based GPU capture analysis toolkit for reverse engineering. Use when loading .rdc capture files, inspecting draw calls, examining pipeline state, viewing textures/shaders, decoding mesh data, analyzing GPU counters, or performing any graphics debugging task on a captured frame.
+---
+
+# RenderDoc Analysis with renderdoctools
+
+Programmatic GPU capture analysis. Analyze .rdc files headlessly or launch the RenderDoc GUI for manual inspection.
+
+All commands: `python -m renderdoctools <command> [args]`
+All commands support `--json` for raw JSON and `--output FILE`.
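Because every subcommand honors `--json`, results can be post-processed in scripts rather than read by eye. A minimal sketch of that pattern — the payload shape (`eventId`, `name`, `numIndices` keys) is an assumption about the tool's JSON schema, not a documented contract, so verify it against real output first:

```python
import json

def filter_draws(events_json: str, min_indices: int = 0) -> list[dict]:
    """Filter draw events from `renderdoctools events <capture> --json` output.

    ASSUMED schema: a list of event dicts with 'eventId', 'name', and
    'numIndices' keys. Check the real --json output before relying on this.
    """
    events = json.loads(events_json)
    return [e for e in events
            if "Draw" in e.get("name", "") and e.get("numIndices", 0) >= min_indices]

# Hypothetical payload shaped like `events --json` output:
sample = json.dumps([
    {"eventId": 12, "name": "Clear"},
    {"eventId": 57, "name": "DrawIndexed", "numIndices": 36},
    {"eventId": 90, "name": "DrawIndexed", "numIndices": 4200},
])
print([e["eventId"] for e in filter_draws(sample, min_indices=1000)])  # [90]
```

The same pattern applies to any subcommand: capture stdout, `json.loads` it, filter in Python.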
+
+## Capturing
+
+A capture (`.rdc`) is a snapshot of one frame's entire GPU command stream: every draw call, state change, resource binding, and buffer/texture content at that point in time.
+
+### CLI capture (preferred)
+
+Launch a game with RenderDoc injection and capture — no GUI required:
+```
+python -m renderdoctools capture <game.exe>                    # launch + capture
+python -m renderdoctools capture <game.exe> --output out.rdc   # specify output filename
+python -m renderdoctools capture <game.exe> -- --arg1 --arg2   # pass args to the game
+```
+
+This calls `renderdoccmd capture -w …` under the hood. The game launches with RenderDoc hooked in. Press **F12** or **Print Screen** in-game to trigger the capture. The `.rdc` file is written to the working directory (or the `--output` path).
+
+**Use this as the default capture method.** The agent can run this directly — no need to walk the user through the GUI.
+
+### GUI capture (fallback)
+
+If CLI capture fails (e.g. the game needs specific launch options the CLI doesn't support):
+```
+python -m renderdoctools open <capture.rdc>   # open existing capture in GUI
+```
+Or launch the RenderDoc GUI manually: File > Launch Application. Set executable + working dir. Hit Launch, press F12 in-game.
+
+### Capture tips
+- Navigate to the exact game state first, then capture. The captured frame is whatever's rendering at trigger time.
+- For games with launchers, use `--opt-hook-children` to capture the child game process.
+- D3D11/D3D12/Vulkan/OpenGL/DX9 supported. DX9 requires a custom RenderDoc build with the D3D9 driver.
+- If the game crashes on inject, try `--opt-ref-all-resources` (slower but more compatible).
+- Capture files can be 100MB-1GB+. Each contains full texture/buffer data for that frame.
+
+## DX9 Captures
+
+DX9 support requires a custom RenderDoc build with the D3D9 replay driver. When working with DX9 captures:
+
+- **Texture export must use DDS format.** PNG export hangs or corrupts on DX9 BC/DXT textures.
Always pass `--format dds`:
+  ```
+  python -m renderdoctools textures capture.rdc --event <EID> --save-all ./dump --format dds
+  ```
+- **Supported commands:** `events`, `analyze`, `pipeline`, `textures` (list + DDS export), `mesh`, `tex-data`, `api-calls`, `usage`, `frame-info`
+- **Limited/unsupported:** `shaders` (disassembly not available for SM1-3 bytecode), `debug-shader`, `custom-shader`, `counters`, `pixel-history`, `tex-stats`, `pick-pixel`
+- **Pipeline state** reports vertex and pixel shader stages with constant buffers and sampler bindings. Hull/domain/geometry/compute stages don't exist in DX9.
+- **Mesh data** decodes vertex attributes (POSITION, NORMAL, TEXCOORD, etc.) from the vertex declaration.
+
+## Quick Reference
+
+| Command | Description |
+|---------|-------------|
+| `events <capture>` | List all events/draw calls |
+| `events <capture> --draws-only` | Draw calls only |
+| `pipeline <capture> --event EID` | Pipeline state at event |
+| `pipeline <capture> --event EID --stage pixel` | Single stage |
+| `textures <capture> --event EID` | List bound textures |
+| `textures <capture> --event EID --save-all DIR` | Export all textures (PNG) |
+| `textures <capture> --event EID --save-all DIR --format dds` | Export all textures (DDS, required for DX9) |
+| `shaders <capture> --event EID` | Disassemble bound shaders |
+| `shaders <capture> --event EID --cbuffers` | Include constant buffer values |
+| `mesh <capture> --event EID` | Vertex input data |
+| `mesh <capture> --event EID --post-vs` | Post-VS output |
+| `descriptors <capture> --event EID` | All descriptors accessed at event |
+| `descriptors <capture> --event EID --type srv` | Filter: sampler/cbuffer/srv/uav |
+| `api-calls <capture>` | List all API calls with inline params |
+| `api-calls <capture> --event EID` | Detailed params for one event |
+| `api-calls <capture> --filter "Map"` | Filter calls by function name |
+| `api-calls <capture> --range 100 200` | Calls in event ID range |
+| `counters <capture>` | List GPU counters |
+| `counters <capture> --zero-samples` | Find wasted draws |
+| `analyze <capture> --summary` | Capture overview stats |
+| `analyze <capture> --biggest-draws 10` | Top N draws by vertex count |
+| `analyze <capture> --render-targets` | Unique render targets |
+| `pixel-history <capture> --event EID --resource RID --x X --y Y` | What drew to this pixel? |
+| `pick-pixel <capture> --resource RID --x X --y Y` | Read pixel value at (x,y) |
+| `pick-pixel <capture> --resource RID --x X --y Y --comp-type float` | Pick with type override |
+| `messages <capture>` | All API debug/validation messages |
+| `messages <capture> --severity high` | Only high+ severity messages |
+| `tex-stats <capture> --resource RID` | Min/max RGBA values of a texture |
+| `tex-stats <capture> --resource RID --histogram` | Value distribution histogram |
+| `custom-shader <capture> --event EID --source FILE --output FILE` | Apply custom viz shader |
+| `tex-data <capture> --resource RID` | Raw bytes + hex preview |
+| `tex-data <capture> --resource RID --output-file out.bin` | Save raw texture bytes |
+| `usage <capture> --resource RID` | Which events read/write a resource? |
+| `usage <capture> --resource RID --filter read` | Only read usages |
+| `usage <capture> --resource RID --filter write` | Only write usages |
+| `frame-info <capture>` | Frame stats: draws, dispatches, binds, state changes |
+| `debug-shader <capture> --event EID --mode vertex --vertex-index N` | Debug vertex shader |
+| `debug-shader <capture> --event EID --mode pixel --x X --y Y` | Debug pixel shader |
+| `debug-shader <capture> --event EID --mode compute --group 0,0,0 --thread 0,0,0` | Debug compute |
+| `open <capture>` | Launch RenderDoc GUI |
+| `capture <game.exe> [--output FILE] [-- EXE_ARGS]` | Launch game with RenderDoc injection, capture on F12 |
+
+## Verification: Confirm Before You Dig
+
+Always verify you're looking at the right thing before deep analysis.
+
+**Dump and check render targets:**
+```
+python -m renderdoctools textures capture.rdc --event <EID> --save-all ./verify
+python -m renderdoctools textures capture.rdc --event <EID> --save-all ./verify --format dds   # DX9 captures
+```
+Open the images. Confirm the render target matches expected game output. If it's a depth buffer, GBuffer, or intermediate pass you don't recognize — wrong draw.
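When `--save-all` dumps dozens of files, sorting by size is a quick triage step: full-resolution render targets are usually the largest files, while lookup textures and UI atlases are tiny. A hypothetical helper for that (not part of renderdoctools):

```python
import tempfile
from pathlib import Path

def biggest_dumps(dump_dir: str, n: int = 5) -> list[str]:
    """Names of the n largest files in a texture dump -- the likeliest render targets."""
    files = sorted(Path(dump_dir).iterdir(),
                   key=lambda p: p.stat().st_size, reverse=True)
    return [p.name for p in files[:n]]

# Demo on a synthetic dump directory (file names are made up):
with tempfile.TemporaryDirectory() as d:
    Path(d, "rt_backbuffer.dds").write_bytes(b"\0" * 4096)
    Path(d, "lut.dds").write_bytes(b"\0" * 16)
    print(biggest_dumps(d, n=1))  # ['rt_backbuffer.dds']
```

Open the top few results first before bothering with the rest of the dump.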
+**Spot-check pixel values:**
+```
+python -m renderdoctools pick-pixel capture.rdc --resource <RID> --x 100 --y 100
+```
+Normal map: expect RGB near (0.5, 0.5, 1.0). HDR color: expect values > 1.0. All zeros: resource uninitialized or cleared at that EID.
+
+**Compare before/after:** Dump textures at two EIDs to confirm a draw changes what you expect.
+
+## Finding the Right Draw Call
+
+Core RE question: "which draw call renders X?"
+
+### Strategy 1: Work backwards from render targets
+```
+python -m renderdoctools analyze capture.rdc --render-targets
+```
+Dump the most-written RTs, visually identify which contains your target, then:
+```
+python -m renderdoctools usage capture.rdc --resource <RID> --filter write
+```
+Lists every draw writing to it. Narrow by EID range.
+
+### Strategy 2: Binary search by EID
+`events --draws-only`, pick the midpoint EID, dump its RTs. Content there? Search earlier. Not there? Search later. Converge on the exact draw.
+
+### Strategy 3: Filter by name
+Many engines annotate draws with debug markers:
+```
+python -m renderdoctools events capture.rdc --filter "shadow"
+python -m renderdoctools events capture.rdc --filter "GBuffer"
+```
+
+### Strategy 4: Filter by geometry size
+```
+python -m renderdoctools analyze capture.rdc --biggest-draws 20
+```
+
+## Multi-Pass Analysis
+
+Reconstruct a render pipeline — who writes what, who reads it:
+
+1. `analyze --render-targets` — list all unique RTs
+2. For each RT: `usage --resource <RID>` — all reads and writes
+3. Write events = pass boundaries. Reads between writes = consumers of that pass.
+4. Dump textures at key EIDs to label each pass (shadow, GBuffer, lighting, post, final)
+
+Example: an RT written at EIDs 100, 300, 500 and read at EIDs 200, 400 means Pass A (100) produces, Pass B (200) consumes, Pass C (300) overwrites, and so on.
+
+## Interpreting Shader Debug Output
+
+`debug-shader` produces: inputs, constant blocks, per-step variable changes, source locations.
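The failure modes listed below can be checked mechanically rather than by reading the trace line by line. A sketch that scans a `debug-shader --json` trace for the first NaN/Inf — the step/`changes` layout is an assumed schema, so adapt the key names to what the tool actually emits:

```python
import json
import math

def first_bad_value(trace_json: str):
    """Find the first NaN/Inf in a debug-shader trace.

    ASSUMED schema: a list of steps, each with a 'changes' mapping of
    variable name -> list of float components. Returns (step, var, value)
    or None if the trace is clean.
    """
    for step_idx, step in enumerate(json.loads(trace_json)):
        for var, components in step.get("changes", {}).items():
            for v in components:
                if isinstance(v, float) and (math.isnan(v) or math.isinf(v)):
                    return (step_idx, var, v)
    return None

# Hypothetical trace: r2 goes to inf at step 1 (division by zero upstream)
trace = json.dumps([
    {"changes": {"r0": [1.0, 0.5, 0.0, 1.0]}},
    {"changes": {"r2": [float("inf"), 0.0, 0.0, 0.0]}},
])
print(first_bad_value(trace))  # (1, 'r2', inf)
```

Once the first bad step is known, cross-reference its source location and inputs in the full trace.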
+ +**What to look for:** +- **NaN/Inf:** float values becoming NaN mid-shader = division by zero or bad input. Trace the step that introduced it. +- **Unexpected zeros:** input that should be nonzero reads as 0 = wrong binding or uninitialized resource. +- **Matrix transforms:** check `finalState` output position in vertex shaders. Offscreen or degenerate = bad matrices in constant blocks. +- **Shader discards:** `shaderDiscarded: true` in pixel history. Debug the pixel shader to find the discard condition. + +**Combine with cbuffer inspection:** +``` +python -m renderdoctools shaders capture.rdc --event --cbuffers +python -m renderdoctools debug-shader capture.rdc --event --mode pixel --x X --y Y +``` + +## When Data Looks Wrong + +Checklist: + +1. **Correct EID?** All pipeline/texture/resource queries reflect state at the queried EID. Wrong EID = wrong data. +2. **Correct resource?** Resource IDs are per-capture. Re-discover with `textures` or `analyze --render-targets`. +3. **Correct format?** `pick-pixel` with wrong `--comp-type` reads garbage. Check `textures` output for actual format. +4. **Initialized?** All zeros = resource not yet written at that EID. `usage --resource --filter write` finds the first write. +5. **Mip/slice?** Querying mip 0 of a texture only written at mip 1+ returns stale data. Use `--sub-mip`. + +## Falling Back to the GUI + +When programmatic analysis can't get you there: + +``` +python -m renderdoctools open capture.rdc +``` + +**GUI strengths over CLI:** +- **Texture viewer scrubbing:** step through events watching render targets update live. Fastest way to find "which draw renders X." +- **Mesh viewer:** 3D vertex visualization with rotation/zoom. Essential for understanding vertex transforms. +- **Shader debugger with source:** step through HLSL/GLSL with variable watch, breakpoints, source highlighting. Far richer than JSON trace. +- **Overlay modes:** wireframe, depth, stencil, overdraw heat map. 
+- **Resource inspector:** browse all textures/buffers with format decoding, mip/slice selection. + +**Workflow:** Open in GUI, visually locate the draw/resource, note the EID and resource ID, return to CLI for scripted/batch operations. + +## Workflow Recipes + +### Quick capture overview +``` +python -m renderdoctools analyze capture.rdc --summary +python -m renderdoctools events capture.rdc --draws-only +python -m renderdoctools analyze capture.rdc --biggest-draws 10 +``` + +### Investigate a specific draw +``` +python -m renderdoctools pipeline capture.rdc --event +python -m renderdoctools textures capture.rdc --event --save-all ./dump +python -m renderdoctools shaders capture.rdc --event --cbuffers +``` + +### Shader debugging +``` +python -m renderdoctools debug-shader capture.rdc --event --mode vertex --vertex-index 0 +python -m renderdoctools debug-shader capture.rdc --event --mode pixel --x 512 --y 384 +python -m renderdoctools debug-shader capture.rdc --event --mode compute --group 0,0,0 --thread 0,0,0 +``` + +### Resource tracking +``` +python -m renderdoctools usage capture.rdc --resource +python -m renderdoctools usage capture.rdc --resource --filter write +``` + +### Full frame audit +``` +python -m renderdoctools analyze capture.rdc --summary +python -m renderdoctools analyze capture.rdc --render-targets +python -m renderdoctools messages capture.rdc --severity high +python -m renderdoctools counters capture.rdc --zero-samples +``` + +## Thinking Patterns + +1. **Broad to narrow.** `analyze --summary` > `events --draws-only` > `pipeline`/`shaders`/`textures` on the target draw. +2. **Verify with texture dumps.** Dump render targets and visually confirm before deep analysis. +3. **Cross-reference with livetools.** Match draw call patterns with function traces from dynamic analysis. +4. **Track dependencies with usage.** `--filter write` = producers. `--filter read` = consumers. +5. 
**Debug shaders for transforms.** `debug-shader` + `shaders --cbuffers` traces inputs through shader code. +6. **GUI when stuck.** Scrub through events in the texture viewer to visually identify draws. +7. **JSON for automation.** `--json` on any command for scripted pipelines. diff --git a/.cursor/rules/project-workspace.mdc b/.cursor/rules/project-workspace.mdc index 9e38b27e..6d8e1041 100644 --- a/.cursor/rules/project-workspace.mdc +++ b/.cursor/rules/project-workspace.mdc @@ -5,6 +5,17 @@ alwaysApply: true # Project Workspace +## Read-Only Templates + +Do not modify for game-specific work — per-game changes go in `patches//`. + +- `rtx_remix_tools/dx/remix-comp-proxy/` — proxy framework **template** (copy per-game) +- `rtx_remix_tools/dx/scripts/`, `retools/`, `livetools/`, `graphics/`, `renderdoctools/` — shared tooling + +Shared tooling can be modified to improve tools themselves — not for game-specific customization. + +## Per-Game Work + Use `patches//` (git-ignored) for all project-specific artifacts: - Knowledge base files (`kb.h`) - One-off analysis scripts diff --git a/.cursor/skills/dx9-ffp-port/SKILL.md b/.cursor/skills/dx9-ffp-port/SKILL.md index 6d602b72..559d0d14 100644 --- a/.cursor/skills/dx9-ffp-port/SKILL.md +++ b/.cursor/skills/dx9-ffp-port/SKILL.md @@ -16,6 +16,8 @@ Port a DX9 shader-based game to fixed-function pipeline (FFP) for RTX Remix comp ## What remix-comp-proxy Does +**NEVER MODIFY the template at `rtx_remix_tools/dx/remix-comp-proxy/`.** Always copy to `patches//` first and edit the copy. + The codebase (`rtx_remix_tools/dx/remix-comp-proxy/`) is a C++20 compatibility mod based on remix-comp-base that: 1. 
Captures VS constants (View, Projection, World matrices) from `SetVertexShaderConstantF` @@ -37,8 +39,12 @@ The codebase (`rtx_remix_tools/dx/remix-comp-proxy/`) is a C++20 compatibility m | `src/comp/modules/renderer.cpp` | Draw call routing -- `on_draw_indexed_prim()` and `on_draw_primitive()` | | `src/comp/modules/d3d9ex.cpp` | `IDirect3DDevice9` hook layer -- intercepts all 119 methods | | `src/comp/modules/skinning.cpp` | Skinning module (vertex expansion, bone upload, FFP blending) | -| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `ffp_proxy.log` | +| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `rtx_comp/diagnostics.log` | | `src/comp/modules/imgui.cpp` | ImGui debug overlay (F4 toggle) | +| `src/comp/comp.cpp` | Module registration order | +| `src/comp/d3d9_proxy.cpp` | Loads real d3d9 chain, DLL pre/post-load | +| `src/comp/game/game.cpp` | Per-game address init (patterns, hooks) | +| `src/comp/game/game.hpp` | Per-game variables and function typedefs | | `src/shared/common/ffp_state.cpp` | FFP state tracker -- engage/disengage, matrix transforms, texture stages | | `src/shared/common/ffp_state.hpp` | `ffp_state` class with all state accessors | | `src/shared/common/config.hpp` | Config structures: `ffp_settings`, `skinning_settings`, etc. | @@ -47,6 +53,8 @@ The codebase (`rtx_remix_tools/dx/remix-comp-proxy/`) is a C++20 compatibility m Per-game setup: copy the entire `rtx_remix_tools/dx/remix-comp-proxy/` folder to `patches//`, then edit `src/comp/` directly. +**Before reading remix-comp-proxy source files**, read `references/remix-comp-context.md` for a skip-list of boilerplate files you should never open. + --- ## Porting Workflow @@ -58,8 +66,15 @@ Run ALL of the analysis scripts on the game binary. 
These are purpose-built for ```bash python rtx_remix_tools/dx/scripts/find_d3d_calls.py "" python rtx_remix_tools/dx/scripts/find_vs_constants.py "" +python rtx_remix_tools/dx/scripts/find_ps_constants.py "" python rtx_remix_tools/dx/scripts/decode_vtx_decls.py "" --scan +python rtx_remix_tools/dx/scripts/decode_fvf.py "" python rtx_remix_tools/dx/scripts/find_device_calls.py "" +python rtx_remix_tools/dx/scripts/find_render_states.py "" +python rtx_remix_tools/dx/scripts/find_texture_ops.py "" +python rtx_remix_tools/dx/scripts/find_transforms.py "" +python rtx_remix_tools/dx/scripts/classify_draws.py "" +python rtx_remix_tools/dx/scripts/find_matrix_registers.py "" python rtx_remix_tools/dx/scripts/find_skinning.py "" python rtx_remix_tools/dx/scripts/find_blend_states.py "" ``` @@ -96,6 +111,8 @@ If the tracer gives you the register layout, skip the dynamic approach in Step 2 This is the **most critical** step. Determine which VS constant registers hold View, Projection, and World matrices. +**Remix REQUIRES separate World, View, and Projection matrices.** A concatenated WVP will NOT work. If the game uploads a pre-multiplied WorldViewProj, the proxy must intercept individual matrices before concatenation. Start with `find_matrix_registers.py` to detect this. + **Static approach:** Decompile call sites: ```bash python -m retools.decompiler --types patches//kb.h @@ -135,7 +152,7 @@ Deploy to game directory: `d3d9.dll` + `remix-comp-proxy.ini`. 
Place `d3d9_remix ### Step 5: Diagnose with Log and ImGui -The proxy writes `ffp_proxy.log` in the game directory after a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), then logs frames of detailed draw call data: +The proxy writes `rtx_comp/diagnostics.log` in the game directory after a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), then logs frames of detailed draw call data: - **VS regs written**: which constant registers the game actually fills - **Vertex declarations**: what vertex elements each draw uses @@ -172,6 +189,34 @@ Other game-specific INI settings: --- +## INI Config (`remix-comp-proxy.ini`) + +Runtime settings (no recompile): + +```ini +[FFP] +Enabled=1 +AlbedoStage=0 ; Diffuse texture stage (0-7) + +[Skinning] +Enabled=0 ; Only after rigid FFP works + +[Diagnostics] +Enabled=1 +DelayMs=50000 +LogFrames=3 + +[Remix] +Enabled=1 +DLLName=d3d9_remix.dll + +[Chain] +PreLoad= ; Semicolon-separated DLLs to load before d3d9 chain +PostLoad= +``` + +--- + ## Architecture: What to Edit vs What to Leave Alone | File / Section | Edit Per-Game? | @@ -181,6 +226,8 @@ Other game-specific INI settings: | `remix-comp-proxy.ini` `[Skinning] Enabled` | **YES** (after rigid works) | | `renderer.cpp` `on_draw_indexed_prim()` | **YES** -- main draw routing | | `renderer.cpp` `on_draw_primitive()` | **YES** -- draw routing | +| `src/comp/main.cpp` WINDOW_CLASS_NAME | **YES** | +| `src/comp/game/game.cpp` address init | **YES** -- per-game hooks | | `ffp_state.cpp` `setup_lighting()`, `setup_texture_stages()`, `apply_transforms()` | MAYBE | | `ffp_state.cpp` `on_set_vs_const_f()` | MAYBE -- dirty tracking | | `ffp_state.cpp` `on_set_vertex_declaration()` | MAYBE -- element parsing | @@ -193,15 +240,17 @@ Other game-specific INI settings: ### DrawIndexedPrimitive Decision Tree ``` -viewProjValid? -+-- NO -> shader passthrough +ffp.is_enabled() AND ffp.view_proj_valid()? 
++-- NO -> passthrough with shaders +-- YES - +-- curDeclIsSkinned? + +-- ffp.cur_decl_is_skinned()? | +-- YES + skinning module -> skinning::draw_skinned_dip() - | +-- YES + no skinning -> shader passthrough - +-- NOT skinned - +-- !curDeclHasNormal -> shader passthrough (HUD/UI) - +-- hasNormal -> ffp_state::engage + rigid FFP draw + | +-- YES + no skinning -> passthrough with shaders + +-- !ffp.cur_decl_has_normal()? + | +-- passthrough (HUD/UI) + | GAME-SPECIFIC: remove this filter if world geometry lacks NORMAL + +-- else (rigid 3D mesh) + +-- ffp.engage() + draw + restore ``` **Common per-game changes:** @@ -212,9 +261,10 @@ viewProjValid? ### DrawPrimitive Decision Tree ``` -viewProjValid AND lastDecl AND !curDeclHasPosT AND !curDeclIsSkinned? -+-- YES -> ffp_state::engage (world-space particles / non-indexed geometry) -+-- NO -> shader passthrough (screen-space UI, POSITIONT, no decl, skinned) +ffp.is_enabled() AND ffp.view_proj_valid() AND ffp.last_decl() +AND !ffp.cur_decl_has_pos_t() AND !ffp.cur_decl_is_skinned()? ++-- YES -> ffp.engage() (world-space particles / non-indexed geometry) ++-- NO -> passthrough (screen-space UI, POSITIONT, no decl, skinned) ``` --- @@ -225,9 +275,16 @@ viewProjValid AND lastDecl AND !curDeclHasPosT AND !curDeclIsSkinned? 
|--------|-----------------| | `python rtx_remix_tools/dx/scripts/find_d3d_calls.py ` | D3D9/D3DX imports and call sites | | `python rtx_remix_tools/dx/scripts/find_vs_constants.py ` | `SetVertexShaderConstantF` call sites and register/count args | +| `python rtx_remix_tools/dx/scripts/find_ps_constants.py ` | `SetPixelShaderConstantF/I/B` call sites and register/count args | | `python rtx_remix_tools/dx/scripts/find_device_calls.py ` | Device vtable call patterns and device pointer refs | | `python rtx_remix_tools/dx/scripts/find_vtable_calls.py ` | D3DX constant table usage and D3D9 vtable calls | | `python rtx_remix_tools/dx/scripts/decode_vtx_decls.py --scan` | Vertex declaration formats (BLENDWEIGHT/BLENDINDICES -> skinning) | +| `python rtx_remix_tools/dx/scripts/decode_fvf.py ` | FVF bitfield decode from SetFVF calls | +| `python rtx_remix_tools/dx/scripts/find_render_states.py ` | SetRenderState args decoded by category | +| `python rtx_remix_tools/dx/scripts/find_texture_ops.py ` | Texture pipeline: stages, TSS ops, sampler states | +| `python rtx_remix_tools/dx/scripts/find_transforms.py ` | SetTransform types (World, View, Projection, Texture) | +| `python rtx_remix_tools/dx/scripts/classify_draws.py ` | Draw call classification (FFP/shader/hybrid %) | +| `python rtx_remix_tools/dx/scripts/find_matrix_registers.py ` | Identify View/Proj/World registers (CTAB + frequency + layout suggestion) | | `python rtx_remix_tools/dx/scripts/find_skinning.py ` | Consolidated skinning analysis: skinned decls, bone palettes, blend states, suggested INI | | `python rtx_remix_tools/dx/scripts/find_blend_states.py ` | D3DRS_VERTEXBLEND + INDEXEDVERTEXBLENDENABLE + WORLDMATRIX transforms | | `python rtx_remix_tools/dx/scripts/scan_d3d_region.py 0xSTART 0xEND` | Map all D3D9 vtable calls in a code region | @@ -281,14 +338,16 @@ python -m retools.search strings -f "vertex,decl,shader" --xrefs ## Common Pitfalls -- **Matrices look wrong**: D3D9 FFP `SetTransform` expects 
row-major. The proxy transposes. If the game stores matrices row-major in VS constants (uncommon), remove the transpose in `ffp_state::apply_transforms()`. -- **Everything is white/black**: Albedo texture is on stage 1+, not stage 0. Set `AlbedoStage` in `remix-comp-proxy.ini` `[FFP]` section, or trace `SetTexture` calls to find the correct stage. -- **Some objects render, others don't**: Check whether missing geometry has NORMAL in its vertex decl. Check `view_proj_valid()` is true at draw time. `on_draw_primitive()` routes on decl presence + no POSITIONT + not skinned. -- **Skinned meshes invisible**: Enable `[Skinning] Enabled=1` in `remix-comp-proxy.ini`. Verify bone count is non-zero in diagnostic log entries. -- **Bones mixed up between NPCs**: Stale WORLDMATRIX slots from a previous object. If broken, the game may need a game-specific reset hook. -- **Game crashes on startup**: Set `Enabled=0` in `remix-comp-proxy.ini` `[Remix]` to test without Remix. -- **Geometry at origin / piled up**: World matrix register mapping wrong. Re-examine VS constant writes via `livetools trace`. -- **World geometry shifts after skinned draws**: `WORLDMATRIX(0)` clobbered by bone[0]. The proxy re-applies via world dirty tracking. If still broken, check for bone register overlap with world matrix range. +1. **Concatenated WVP instead of separate matrices**: Remix REQUIRES separate World, View, Projection. If the game uploads a pre-multiplied WVP to a single VS constant, the proxy must intercept individual matrices before concatenation. Run `find_matrix_registers.py` first — if only one register appears with high frequency, it's likely WVP. +2. **Everything white/black**: Albedo on wrong stage. Set `[FFP] AlbedoStage` in INI, trace `SetTexture` to find correct stage. +3. **Some objects missing**: Check NORMAL in vertex decl, `view_proj_valid()` at draw time. +4. **Matrices look wrong**: FFP `SetTransform` expects row-major; proxy transposes. 
If game stores row-major in VS constants (uncommon), remove transpose in `apply_transforms()`. +5. **Skinned meshes invisible**: `[Skinning] Enabled=1` in INI. Check bone count > 0 in diagnostics. +6. **Bones mixed between NPCs**: Stale WORLDMATRIX slots. May need game-specific reset hook. +7. **Geometry at origin**: World matrix register mapping wrong. Trace VS constant writes. +8. **World shifts after skinned draws**: WORLDMATRIX(0) clobbered by bone[0]. Proxy re-applies via dirty tracking. +9. **ImGui overlay not appearing**: Check WINDOW_CLASS_NAME in `main.cpp` matches the game's window class. Use Spy++ or `FindWindow` to verify. +10. **Game crashes on startup**: Set `[Remix] Enabled=0` to test without Remix. ### Skinning Stability: Finding Game-Specific Hook Points diff --git a/.cursor/skills/dx9-ffp-port/references/remix-comp-context.md b/.cursor/skills/dx9-ffp-port/references/remix-comp-context.md new file mode 100644 index 00000000..63be154a --- /dev/null +++ b/.cursor/skills/dx9-ffp-port/references/remix-comp-context.md @@ -0,0 +1,58 @@ +# remix-comp-proxy: Context Guide + +**`rtx_remix_tools/dx/remix-comp-proxy/` is the READ-ONLY TEMPLATE.** Never modify it unless explicitly asked. Per-game work goes in `patches//`. + +Each game folder under `patches//` is a self-contained remix-comp-proxy project. All paths below are relative to the game folder root (e.g. `patches/FNV/`) or to the framework template at `rtx_remix_tools/dx/remix-comp-proxy/`. + +## Do Not Read + +These are infrastructure files. Use the summaries below instead of opening them. + +### Auto-generated +- `src/comp/modules/tracer_dispatch.inc` — 119 inline `trace_XXX()` functions generated by `graphics/directx/dx9/tracer/d3d9_methods.py`. Each wraps `tracer::record_begin/arg/end`. + +### D3D9 passthrough proxy +- `src/comp/modules/drawcall_mod_context.cpp` — `drawcall_mod_context` save/restore method implementations. Never needs to be read; the API is in `renderer.hpp`. 
+- `src/comp/modules/d3d9ex.cpp` — `IDirect3DDevice9` wrapper + exported `Direct3DCreate9`/`Direct3DCreate9Ex`. ~80 of 119 methods are trivial 1-line forwards. The ~12 methods with logic dispatch to renderer/ffp_state/imgui/tracer modules. +- `src/comp/modules/d3d9ex.hpp` — D3D9Device, _d3d9, and _d3d9ex wrapper class declarations. +- `src/comp/d3d9_proxy.cpp` + `d3d9_proxy.hpp` — d3d9.dll proxy: loads real d3d9 chain (Remix bridge or system), DLL pre/post-load chains, forwarded D3DPERF/debug exports. + +### Shared library (framework, never edited per-game) +- `src/shared/utils/hooking.cpp/hpp` — MinHook wrapper + vtable patching. +- `src/shared/utils/vector.hpp` — SIMD float2/float3/float4 math. +- `src/shared/common/remix_api.cpp/hpp` — RTX Remix runtime bridge (load d3d9_remix.dll, init API, submit lights). +- `src/shared/common/console.hpp` — Debug console allocation/output. +- `src/shared/common/imgui_helper.cpp/hpp` — ImGui setup/teardown utilities. +- `src/shared/utils/memory.cpp/hpp` — Pattern scanning and memory allocation. +- `src/shared/utils/utils.cpp/hpp` — String/path/module utilities. +- `src/shared/common/dinput_hook_v1.cpp/hpp` — DirectInput8 hook (input capture). +- `src/shared/common/dinput_hook_v2.cpp/hpp` — Alternate DirectInput hook. +- `src/shared/common/loader.cpp/hpp` — Module loader/registry. +- `src/shared/common/flags.cpp/hpp` — Command-line flag parsing. +- `src/shared/common/shader_cache.hpp` — Shader handle cache. +- `src/shared/globals.cpp/hpp` — Global state (device pointer, module handle). +- `src/shared/structs.hpp`, `src/shared/std_include.*`, `src/comp/std_include.*` — Precompiled headers, forward declarations. 
+ +## Read These Instead + +The files that matter for per-game FFP porting work: + +| File | Role | +|------|------| +| `src/comp/modules/renderer.cpp` | **Draw call routing — THE main edit target** | +| `src/comp/modules/renderer.hpp` | `drawcall_mod_context` API + `renderer` class declaration | +| `src/shared/common/ffp_state.cpp` | FFP state tracker: engage/disengage, transforms, textures | +| `src/shared/common/ffp_state.hpp` | ffp_state class with all accessors | +| `src/shared/common/config.cpp` | INI config loader | +| `src/shared/common/config.hpp` | Config structures | +| `src/comp/main.cpp` | DLL entry, WINDOW_CLASS_NAME, module init | +| `src/comp/comp.cpp` | Module registration order | +| `src/comp/game/game.cpp` | Per-game address patterns and hooks | +| `src/comp/game/game.hpp` | Per-game variables and typedefs | +| `src/comp/game/structs.hpp` | Per-game data structures | + +### Read only when relevant +- `src/comp/modules/diagnostics.cpp/hpp` — Frame logger to `rtx_comp/diagnostics.log`. Read when debugging log output. +- `src/comp/modules/skinning.cpp/hpp` — Skinned mesh FFP conversion. Read only if user asks about skinning. +- `src/comp/modules/imgui.cpp/hpp` — Debug overlay (F4). Read only if debugging ImGui. +- `src/comp/modules/tracer.cpp/hpp` — JSONL D3D9 call recorder. Read only if working on tracer features. diff --git a/.cursor/skills/renderdoc-analysis/SKILL.md b/.cursor/skills/renderdoc-analysis/SKILL.md new file mode 100644 index 00000000..49ca199e --- /dev/null +++ b/.cursor/skills/renderdoc-analysis/SKILL.md @@ -0,0 +1,99 @@ +--- +name: 'renderdoc-analysis' +description: 'RenderDoc GPU capture analysis. Use for .rdc files, draw call inspection, pipeline state, textures/shaders, mesh data, GPU counters.' +user-invocable: true +--- + +# RenderDoc Analysis + +All commands: `python -m renderdoctools [args]`. All support `--json` and `--output FILE`. 
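Because every subcommand takes `--json`, a capture can be dumped once and post-processed offline in Python. A minimal sketch — the event field names (`isDraw`, `numIndices`, `eventId`) and the `CAPTURE.rdc` path are hypothetical placeholders, not confirmed schema; inspect the real `--json` output before relying on them:

```python
import json

def biggest_draws(events, n=10):
    """Return the n largest draw events by index count.
    NOTE: 'isDraw', 'numIndices', and 'eventId' are assumed field
    names, not confirmed renderdoctools --json schema."""
    draws = [e for e in events if e.get("isDraw")]
    return sorted(draws, key=lambda e: e.get("numIndices", 0), reverse=True)[:n]

# Typical use (CAPTURE.rdc is a placeholder path):
#   python -m renderdoctools events CAPTURE.rdc --json --output events.json
#   events = json.load(open("events.json"))
```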
+ +## Capturing + +Launch a game with RenderDoc injection and capture — no GUI required: +``` +python -m renderdoctools capture # launch + capture +python -m renderdoctools capture --output out.rdc # specify output filename +python -m renderdoctools capture -- --arg1 --arg2 # pass args to the game +``` + +The game launches with RenderDoc hooked in. Press **F12** / **Print Screen** in-game to trigger the capture. The `.rdc` file is written to the working directory (or the `--output` path). + +**Use this as the default capture method.** The agent can run this directly — no need to walk the user through the GUI. + +D3D11/D3D12/Vulkan/OpenGL/DX9 supported. DX9 requires the custom RenderDoc build with D3D9 driver. Use `--opt-hook-children` for games with launchers, `--opt-ref-all-resources` if game crashes on inject. + +## Commands + +| Command | Purpose | +|---------|---------| +| `events [--draws-only]` | List events/draw calls | +| `analyze --summary` | Capture overview stats | +| `analyze --biggest-draws N` | Top N draws by vertex count | +| `analyze --render-targets` | Unique render targets | +| `pipeline --event EID` | Pipeline state (optional `--stage pixel`) | +| `textures --event EID [--save-all DIR]` | Bound textures, export | +| `shaders --event EID [--cbuffers]` | Shader disassembly + constants | +| `mesh --event EID [--post-vs]` | Vertex data | +| `descriptors --event EID [--type srv]` | Descriptor bindings | +| `api-calls [--event EID] [--filter NAME]` | API call params | +| `counters [--zero-samples]` | GPU counters, find wasted draws | +| `pixel-history --event EID --resource RID --x X --y Y` | What drew to pixel? 
| +| `pick-pixel --resource RID --x X --y Y` | Read pixel value | +| `usage --resource RID [--filter read\|write]` | Resource usage across events | +| `debug-shader --event EID --mode vertex\|pixel --vertex-index N` | Debug shader execution | +| `messages [--severity high]` | API debug/validation messages | +| `tex-stats --resource RID [--histogram]` | Min/max RGBA, value distribution | +| `tex-data --resource RID [--output-file F]` | Raw texture bytes | +| `frame-info ` | Frame stats: draws, dispatches, binds | +| `custom-shader --event EID --source F --output F` | Apply custom viz shader | +| `capture [--output FILE] [-- EXE_ARGS]` | Launch game with RenderDoc injection, capture on F12 | +| `open ` | Launch RenderDoc GUI | + +## Finding the Right Draw Call + +1. **Render targets**: `analyze --render-targets` → dump RTs → `usage --resource --filter write` → narrow by EID +2. **Binary search**: `events --draws-only`, pick midpoint, dump RT, converge +3. **By name**: `events --filter "shadow"` (if engine uses debug markers) +4. **By size**: `analyze --biggest-draws 20` + +## Verification + +Always confirm you're looking at the right thing: +- Dump and visually check RTs: `textures --event --save-all ./verify` +- Spot-check pixels: `pick-pixel --resource --x 100 --y 100` +- Compare before/after EIDs + +## Multi-Pass Reconstruction + +1. `analyze --render-targets` — list all RTs +2. Per RT: `usage --resource ` — writes = pass boundaries, reads = consumers +3. Dump textures at key EIDs to label passes (shadow, GBuffer, lighting, post) + +## Interpreting Shader Debug Output + +`debug-shader` produces inputs, constant blocks, per-step variable changes. 
Look for: +- **NaN/Inf**: division by zero or bad input — trace the step +- **Unexpected zeros**: wrong binding or uninitialized resource +- **Bad transforms**: check `finalState` output position + constant blocks +- **Shader discards**: `shaderDiscarded: true` in pixel history — debug the discard condition + +## When Data Looks Wrong + +1. **Correct EID?** All queries reflect state at the queried EID +2. **Correct resource?** Resource IDs are per-capture — re-discover with `textures` +3. **Correct format?** `pick-pixel` with wrong `--comp-type` reads garbage +4. **Initialized?** All zeros = not yet written. `usage --filter write` finds first write +5. **Mip/slice?** Querying mip 0 of a texture only written at mip 1+ returns stale data + +## Workflow Recipes + +**Quick overview**: `analyze --summary` > `events --draws-only` > `analyze --biggest-draws 10` + +**Investigate draw**: `pipeline --event EID` > `textures --event EID --save-all ./dump` > `shaders --event EID --cbuffers` + +**Full frame audit**: `analyze --render-targets` > `messages --severity high` > `counters --zero-samples` + +## When to Use GUI + +`python -m renderdoctools open ` — better for texture scrubbing, 3D mesh viewer, shader debugger with source, overlay modes (wireframe, depth, overdraw). diff --git a/.github/agents/static-analyzer.agent.md b/.github/agents/static-analyzer.agent.md index 95d55770..41dbba3f 100644 --- a/.github/agents/static-analyzer.agent.md +++ b/.github/agents/static-analyzer.agent.md @@ -15,12 +15,20 @@ On first invocation, read the full tool catalog at `.claude/rules/tool-catalog.m Before any analysis, run these checks in order: -**1. Signature DB**: If `retools/data/signatures.db` does not exist, pull it first: +**1. Verify install**: Run `python verify_install.py` on first invocation. If pyghidra/Ghidra/Java show WARN, run `python verify_install.py --setup`. + +**2. 
Signature DB**: If `retools/data/signatures.db` does not exist, pull it first: ```bash test -f retools/data/signatures.db || python retools/sigdb.py pull ``` -**2. Bootstrap**: Check if the project KB needs bootstrapping: +**3. Ghidra project**: Check if a Ghidra project exists: +```bash +python retools/pyghidra_backend.py status --project patches/ +``` +If "Not analyzed", run `python retools/pyghidra_backend.py analyze --project patches/` (2-15 min, then near-instant decompilations). + +**4. Bootstrap**: Check if the project KB needs bootstrapping: ```bash grep -cE '^[@$]|^struct |^enum ' patches//kb.h 2>/dev/null || echo 0 ``` @@ -28,10 +36,19 @@ If the count is under 50 (or the file doesn't exist), run `python -m retools.boo ## Running Tools -Run all tools from the repo root using `python -m retools.` syntax: +Run all tools from the repo root using `python -m retools.` syntax. +### Decompilation (two backends) + +- **pyghidra (preferred)** — better type propagation, library resolution. Needs Ghidra project: + `python retools/pyghidra_backend.py decompile binary.exe 0x401000 --project patches/proj` +- **r2ghidra (fallback)** — better `__thiscall`, no JVM: + `python -m retools.decompiler binary.exe 0x401000 --types patches/proj/kb.h --backend pdg` +- **Auto mode** (tries pyghidra first): + `python -m retools.decompiler binary.exe 0x401000 --types patches/proj/kb.h --project patches/proj` + +### Other tools ``` -python -m retools.decompiler binary.exe 0x401000 python -m retools.decompiler binary.exe 0x401000 --types patches/proj/kb.h python -m retools.search binary.exe strings -f "error" --xrefs python -m retools.xrefs binary.exe 0x401000 -t call @@ -80,7 +97,12 @@ Update KB when you: identify a function's purpose, reconstruct a struct, identif ## Output -Write all findings to `patches//findings.md`, creating the file if it doesn't exist. Append to it if it already exists — do not overwrite previous findings. 
Use clear headings per analysis task so the main agent can read specific sections. +Write findings to the appropriate file, creating if needed. Append — do not overwrite. + +- **Default**: `patches//findings.md` +- **If told to use r2ghidra for dual-backend comparison**: `patches//findings_r2.md` + +Use clear headings per analysis task so the main agent can read specific sections. Format: ```markdown diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 165cfed1..2db9d552 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -3,6 +3,18 @@ This is a reverse engineering toolkit for PE binaries (`.exe` / `.dll`) combining static analysis (`retools`), dynamic analysis (`livetools` via Frida), and D3D9 frame tracing. Work is organized around knowledge base files (`kb.h`) that accumulate discoveries and feed back into richer decompilation. +## Read-Only Templates + +These directories are **shared tooling and templates** — do not modify for game-specific work: +- `rtx_remix_tools/dx/remix-comp-proxy/` — proxy framework template (copy per-game) +- `rtx_remix_tools/dx/scripts/`, `retools/`, `livetools/`, `graphics/`, `renderdoctools/` — shared tooling + +**Per-game work goes in `patches//`.** Copy the template there and edit the copy. + +## DX9 FFP Porting + +When editing `renderer.cpp`, `ffp_state.cpp`, `remix-comp-proxy.ini`, draw routing, VS constants, vertex declarations, matrix mapping, or skinning — read the **dx9-ffp-port** skill (`skills/dx9-ffp-port/SKILL.md`) before starting. + ## Bootstrap On first use, or if `kb.h` is missing or contains only minimal content, automatically run: @@ -61,8 +73,10 @@ Each file reads as if it was always designed this way. 
Comments guide the next d Detailed guidance lives in scoped instruction files — consult these before acting: -- **Tool catalog** (all retools / livetools / dx9tracer commands and caveats): `.github/instructions/tool-catalog.instructions.md` -- **Knowledge base format** (kb.h conventions, when to update, struct/function/global notation): `.github/instructions/kb-format.instructions.md` -- **FFP proxy porting for RTX Remix** (DX9 fixed-function pipeline port workflow): `.github/instructions/ffp-proxy.instructions.md` -- **Static analyzer agent** (delegation workflow, what to tell it, output file conventions): `.github/agents/static-analyzer.agent.md` -- **Web researcher agent** (when and how to delegate web research): `.github/agents/web-researcher.agent.md` +- **Tool dispatch** (quick decision guide — which tool, run vs delegate): `.github/instructions/tool-dispatch.instructions.md` +- **Subagent workflow** (delegation, parallel work, anti-patterns): `.github/instructions/subagent-workflow.instructions.md` +- **Tool catalog** (full syntax tables and caveats): `.github/instructions/tool-catalog.instructions.md` +- **Knowledge base format** (kb.h conventions): `.github/instructions/kb-format.instructions.md` +- **FFP proxy porting** (DX9 FFP port workflow): `.github/instructions/ffp-proxy.instructions.md` +- **Static analyzer agent**: `.github/agents/static-analyzer.agent.md` +- **Web researcher agent**: `.github/agents/web-researcher.agent.md` diff --git a/.github/instructions/subagent-workflow.instructions.md b/.github/instructions/subagent-workflow.instructions.md new file mode 100644 index 00000000..d6cae5d4 --- /dev/null +++ b/.github/instructions/subagent-workflow.instructions.md @@ -0,0 +1,57 @@ +--- +applyTo: "**" +--- + +# Subagent Workflow + +Main agent: live tools, user interaction, synthesis. Heavy static analysis → `static-analyzer` subagent. + +## Pre-flight + +Run `python verify_install.py` before first pyghidra use. 
If WARN, run `python verify_install.py --setup` (one-time ~600MB). + +## Bootstrap First — New Binaries + +When a binary has no or sparse `patches//kb.h` (fewer than 50 `@`/`$`/`struct`/`enum` lines): + +1. Spawn `static-analyzer`: `bootstrap.py --project ` (2-5 min, seeds kb.h) +2. **In parallel**, spawn second `static-analyzer`: `pyghidra_backend.py analyze --project patches/` (5-15 min, reusable Ghidra project) +3. After both: all `decompiler.py` calls use `--types patches//kb.h --project patches/` + +**Detect needs:** `grep -cE '^[@$]|^struct |^enum ' patches//kb.h` — under 50 = bootstrap. Check `patches//ghidra/.gpr` exists for Ghidra. + +## Delegation Table + +| Task | Where | +|------|-------| +| Decompile / callgraph / xrefs / strings / datarefs / structrefs / RTTI / throwmap / dumpinfo | `static-analyzer` | +| Web research, API docs, specs | `web-researcher` | +| Bootstrap / pyghidra analyze / bulk sigdb scan | `static-analyzer` | +| `dataflow.py`, `readmem.py`, `sigdb identify/fingerprint`, `context.py` | Main agent (fast, <5s) | +| `livetools` (trace, bp, mem, scan) | Main agent | +| dx9tracer capture | Main agent | +| dx9tracer analysis | `static-analyzer` | + +## Parallel Work + +1. Spawn `static-analyzer` in background +2. **Immediately** engage user — ask to launch game, prepare livetools, discuss approach +3. Do NOT silently wait for subagents +4. When subagent returns, read `patches//findings.md` for full details +5. Follow up with livetools to verify/act on findings + +Multiple `static-analyzer` instances can run in parallel for independent questions. + +## Dual-Backend Deep Analysis + +For complex exploratory tasks, spawn two parallel agents: +- **r2ghidra**: `--backend pdg --types kb.h` → writes `findings_r2.md` +- **pyghidra**: `pyghidra_backend.py decompile` → writes `findings.md` + +Merge both for complete picture. Not needed for single-function work — use `--backend auto`. 
+ +## Anti-Patterns + +- **Cascade Trap**: "One quick xref" chains into full static analysis while user waits. Second retools command = should have delegated. +- **Duplicating subagent work**: Don't grep for what you delegated. +- **Silent waiting**: Always talk to user or run livetools while subagents work. diff --git a/.github/instructions/tool-catalog.instructions.md b/.github/instructions/tool-catalog.instructions.md index a9257db2..c58824af 100644 --- a/.github/instructions/tool-catalog.instructions.md +++ b/.github/instructions/tool-catalog.instructions.md @@ -23,6 +23,8 @@ These are fast and can be run inline without delegation: - "Get full context before reasoning about a function" → `python -m retools.context assemble $B $VA --project $P` - "Clean up decompiler output with known names" → pipe through `python -m retools.context postprocess` - "Read a typed value from the PE file" → `python -m retools.readmem $B $VA $TYPE` +- "What constant flows into this register?" → `python -m retools.dataflow $B $VA --constants` +- "Trace where a value comes from" → `python -m retools.dataflow $B $VA --slice TARGET_VA:REG` - "Build an ASI patch DLL" → `python -m retools.asi_patcher build spec.json` ### Delegate to the static-analyzer agent @@ -33,7 +35,7 @@ Everything else. Specify WHAT you need, not HOW to run it — the agent has the - "What does this function do?" → decompile + callgraph + xrefs - "Who calls this function?" → xrefs or callgraph --up -- "What does this function call?" → callgraph --down +- "What does this function call?" → callgraph --down (add `--indirect` for vtable calls) - "Find a string and who uses it" → string search with xrefs - "Where is this global read/written?" → datarefs - "Where is struct field +0x54 used?" → structrefs @@ -92,10 +94,15 @@ These are fast first-pass scanners — they surface candidate addresses. 
Follow |------|---------|---------| | `disasm.py $B $VA` | Disassemble N instructions at VA | `disasm.py binary.exe 0x401000 -n 50` | | `decompiler.py $B $VA --types` | **Ghidra-quality C decompilation** with knowledge base | `python -m retools.decompiler binary.exe 0x401000 --types patches/proj/kb.h` | +| `decompiler.py $B $VA --types --project` | **Auto-backend** (pyghidra if project exists, else r2ghidra) | `python -m retools.decompiler binary.exe 0x401000 --types patches/proj/kb.h --project patches/proj` | +| `pyghidra_backend.py analyze $B --project $P` | Full Ghidra analysis -- one-time, reusable project (5-15 min) | `pyghidra_backend.py analyze game.exe --project patches/MyGame` | +| `pyghidra_backend.py decompile $B $VA --project $P` | Decompile via saved Ghidra project | `pyghidra_backend.py decompile game.exe 0x401000 --project patches/MyGame` | +| `pyghidra_backend.py status $B --project $P` | Check if Ghidra project exists (<1s) | `pyghidra_backend.py status game.exe --project patches/MyGame` | | `funcinfo.py $B $VA` | Find function start/end, rets, calling convention, callees | `funcinfo.py binary.exe 0x401000` | -| `cfg.py $B $VA` | Control flow graph (basic blocks + edges, text or mermaid) | `cfg.py binary.exe 0x401000 --format mermaid` | -| `callgraph.py $B $VA` | Caller/callee tree (multi-level, --up/--down N) | `callgraph.py binary.exe 0x401000 --up 3` | -| `xrefs.py $B $VA` | Find all calls/jumps TO an address | `xrefs.py binary.exe 0x401000 -t call` | +| `cfg.py $B $VA` | Control flow graph (resolves MSVC switch/jump tables). `--switch-details` shows table info | `cfg.py binary.exe 0x401000 --format mermaid` | +| `callgraph.py $B $VA` | Caller/callee tree (--up/--down N). `--indirect` adds vtable/fptr calls | `callgraph.py binary.exe 0x401000 --down 2 --indirect` | +| `xrefs.py $B $VA` | Find all calls/jumps TO an address. 
`--indirect` scans for indirect calls too | `xrefs.py binary.exe 0x401000 --indirect` | +| `dataflow.py $B $VA` | Forward constant propagation (`--constants`) or backward register slice (`--slice VA:REG`) | `dataflow.py binary.exe 0x401000 --constants` | | `datarefs.py $B $VA` | Find instructions that reference a global address (mem deref + `--imm` for push/mov constants) | `datarefs.py binary.exe 0x7A0000 --imm` | | `structrefs.py $B $OFF` | Find all `[reg+offset]` accesses (struct field usage) | `structrefs.py binary.exe 0x54 --base esi` | | `structrefs.py $B --aggregate` | Reconstruct C struct from all field accesses in a function | `structrefs.py binary.exe --aggregate --fn 0x401000 --base esi` | @@ -151,9 +158,10 @@ These are fast first-pass scanners — they surface candidate addresses. Follow ## Dynamic Analysis (`livetools/`) -- Frida-based, attaches to running process ``` -python -m livetools attach # start session -python -m livetools detach # end session -python -m livetools status # check connection +python -m livetools attach # attach to running process by name or PID +python -m livetools attach "C:/Games/game.exe" --spawn # launch + instrument before init code runs +python -m livetools detach # end session +python -m livetools status # check connection ``` | Command | Purpose | diff --git a/.github/instructions/tool-dispatch.instructions.md b/.github/instructions/tool-dispatch.instructions.md new file mode 100644 index 00000000..1493ff71 --- /dev/null +++ b/.github/instructions/tool-dispatch.instructions.md @@ -0,0 +1,31 @@ +--- +applyTo: "**" +--- + +# Tool Dispatch — Quick Reference + +Run all tools from repo root via `python -m `. **ALWAYS pass `--types patches//kb.h`** to `decompiler.py`. 
+ +## Run Directly (<5s) + +- `sigdb fingerprint/identify` — compiler ID, single function lookup +- `context assemble/postprocess` — full context gathering, annotation +- `dataflow --constants/--slice` — forward propagation, backward trace +- `readmem` — typed PE read +- `asi_patcher build` — build step +- `pyghidra_backend status` — project check + +## DX Scripts First (for D3D9 questions) + +Run BEFORE retools. Under `rtx_remix_tools/dx/scripts/`: +`find_d3d_calls`, `find_vs_constants`, `find_ps_constants`, `find_device_calls`, `find_render_states`, `find_texture_ops`, `find_transforms`, `find_surface_formats`, `find_stateblocks`, `decode_fvf`, `decode_vtx_decls`, `find_shader_bytecode`, `classify_draws`, `find_matrix_registers`, `find_vtable_calls`, `find_skinning`, `find_blend_states`, `scan_d3d_region` + +## Delegate to `static-analyzer` + +Everything else: decompile, callgraph, xrefs, strings, datarefs, structrefs, RTTI, throwmap, dumpinfo, bootstrap, pyghidra analyze, bulk sigdb scan, dx9tracer analysis. + +If you're about to run a second `retools` command, stop and delegate. + +## Live Tools (main agent, attached process) + +`attach`, `trace`, `collect`, `bp`, `watch`, `regs`, `stack`, `bt`, `mem read/write`, `scan`, `dipcnt`, `memwatch`, `modules`, `steptrace` diff --git a/.github/skills/dx9-ffp-port/SKILL.md b/.github/skills/dx9-ffp-port/SKILL.md index 01082070..07d39071 100644 --- a/.github/skills/dx9-ffp-port/SKILL.md +++ b/.github/skills/dx9-ffp-port/SKILL.md @@ -8,6 +8,8 @@ argument-hint: '' You are helping a user port a DX9 shader-based game to the fixed-function pipeline using the `rtx_remix_tools/dx/remix-comp-proxy/` codebase in this workspace. The goal is RTX Remix compatibility: Remix requires FFP geometry to inject path-traced lighting and replaceable assets. Also use the Vibe RE tools (retools, livetools) for static and dynamic analysis to assist with developing this wrapper. They are meant to be used together. 
+**NEVER MODIFY TEMPLATE CODE.** `rtx_remix_tools/dx/remix-comp-proxy/` is read-only. Copy to `patches//` and edit the copy. If the user says "edit remix-comp-proxy code", confirm whether they mean the template or a game copy. + **SKINNING IS OFF BY DEFAULT.** Do NOT enable skinning, modify skinning code, or discuss skinning infrastructure unless the user explicitly asks for character model / bone / skeletal animation support. Until then, treat skinning as non-existent. When the user does request it, read `src/comp/modules/skinning.hpp` and `src/comp/modules/skinning.cpp` for the full implementation. **SKINNING APPROACH: FFP indexed vertex blending, NOT CPU matrix math.** When skinning is enabled, keep BLENDINDICES and BLENDWEIGHT in the vertex declaration and buffer, upload bone matrices via `SetTransform(D3DTS_WORLDMATRIX(n), &boneMatrix[n])`, enable `D3DRS_INDEXEDVERTEXBLENDENABLE = TRUE`, and set `D3DRS_VERTEXBLEND` to the weight count. CPU-side vertex skinning is a **last resort** -- it is extremely expensive and tanks frame rate. Always prefer the hardware path. 
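As a mental model of what the hardware path computes (illustrative Python, not proxy code): with `D3DRS_VERTEXBLEND` set to N weights, the FFP transforms the position by each world matrix selected through BLENDINDICES and sums the weighted results, with the final weight implied as 1 minus the explicit ones:

```python
def mat_vec_row(p, m):
    # Row-vector * 4x4 matrix, D3D convention (translation in the last row).
    return [sum(p[k] * m[k][j] for k in range(4)) for j in range(4)]

def blend_position(pos, weights, indices, world_matrices):
    """Sketch of FFP indexed vertex blending for a single position.
    weights: the N explicit BLENDWEIGHT values; the implied final
    weight is 1 - sum(weights). indices: the N+1 BLENDINDICES values
    selecting WORLDMATRIX(n) slots."""
    w = list(weights) + [1.0 - sum(weights)]
    p = [pos[0], pos[1], pos[2], 1.0]
    out = [0.0] * 4
    for wi, bi in zip(w, indices):
        out = [o + wi * c for o, c in zip(out, mat_vec_row(p, world_matrices[bi]))]
    return out[:3]
```

The proxy's only job is to keep this data reaching the FFP intact: the blend elements in the vertex declaration, bone matrices in the `WORLDMATRIX(n)` slots, and the two render states set.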
@@ -37,8 +39,9 @@ The codebase is a d3d9.dll proxy based on remix-comp-base that intercepts `IDire | `src/comp/modules/renderer.cpp` | Draw call routing -- `on_draw_indexed_prim()` and `on_draw_primitive()` | | `src/comp/modules/d3d9ex.cpp` | `IDirect3DDevice9` hook layer -- intercepts all 119 methods | | `src/comp/modules/skinning.cpp` | Skinning module (vertex expansion, bone upload, FFP blending) | -| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `ffp_proxy.log` | +| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `rtx_comp/diagnostics.log` | | `src/comp/modules/imgui.cpp` | ImGui debug overlay (F4 toggle) | +| `src/comp/game/game.cpp` | Per-game address init (patterns, hooks) | | `src/shared/common/ffp_state.cpp` | FFP state tracker -- engage/disengage, matrix transforms, texture stages | | `src/shared/common/ffp_state.hpp` | `ffp_state` class with all state accessors | | `src/shared/common/config.hpp` | Config structures: `ffp_settings`, `skinning_settings`, etc. | @@ -109,7 +112,7 @@ Key things to find: This is the **most critical** step. You must determine which VS constant registers hold View, Projection, and World matrices. -**Remix REQUIRES separate World, View, and Projection matrices.** A concatenated WorldViewProj (WVP) or ViewProj (VP) will NOT work -- Remix needs individual matrices for its own camera and per-object transforms. If the game uploads a pre-multiplied WVP, the proxy must intercept the individual matrices *before* concatenation. This is the #1 source of broken Remix ports. Use `find_matrix_registers.py` to detect this pattern. +**Remix REQUIRES separate World, View, and Projection matrices.** A concatenated WorldViewProj (WVP) or ViewProj (VP) will NOT work -- Remix needs individual matrices for its own camera and per-object transforms. If the game uploads a pre-multiplied WVP, the proxy must intercept the individual matrices *before* concatenation. 
**This is the #1 source of broken Remix ports.** Use `find_matrix_registers.py` to detect this pattern -- if CTAB shows "WorldViewProj" or only one matrix range per draw, you have this problem. **Static approach:** Decompile functions that call `SetVertexShaderConstantF`: ```bash @@ -123,6 +126,13 @@ python -m livetools trace --count 50 \ ``` This captures: startRegister, Vector4fCount, and the actual float data (first 4 vec4 constants, dereferenced from `pConstantData`). +**DX9 Tracer approach:** Capture a frame and analyze: +```bash +python -m graphics.directx.dx9.tracer analyze --const-provenance +python -m graphics.directx.dx9.tracer analyze --matrix-flow +python -m graphics.directx.dx9.tracer analyze --shader-map +``` + **How to identify matrices:** - View matrix: changes with camera movement, contains camera orientation - Projection matrix: contains aspect ratio and FOV, rarely changes @@ -131,11 +141,13 @@ This captures: startRegister, Vector4fCount, and the actual float data (first 4 ### Step 3: Copy comp/ and Configure -1. Copy `rtx_remix_tools/dx/remix-comp-proxy/src/comp/` to `patches//proxy/comp/` -2. Copy `remix-comp-proxy.ini` from `assets/` to `patches//proxy/` +1. Copy the entire `rtx_remix_tools/dx/remix-comp-proxy/` folder to `patches//` (excluding `build/`) +2. Edit `src/comp/` directly in the game's copy 3. Edit register layout defaults in `src/shared/common/ffp_state.hpp` -4. Use `build.bat` for the game-specific build -5. Update `kb.h` with any function signatures, structs, or globals discovered +4. Edit `src/comp/main.cpp`: set `WINDOW_CLASS_NAME` +5. Customize `src/comp/modules/renderer.cpp` draw routing if needed +6. Customize `src/comp/game/game.cpp` with game-specific hooks +7. Update `kb.h` with discovered function signatures, structs, globals ### Step 4: Build and Deploy @@ -144,11 +156,11 @@ cd patches/ build.bat release --name ``` -Deploy: `d3d9.dll` + `remix-comp-proxy.ini` to game directory. 
Place `d3d9_remix.dll` there if using Remix. +Build output: `patches//build/bin/release/d3d9.dll`. Deploy `d3d9.dll` + `remix-comp-proxy.ini` to game directory. Place `d3d9_remix.dll` there if using Remix. ### Step 5: Diagnose with Log and ImGui -The proxy writes `ffp_proxy.log` in the game directory. After a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), it logs frames of detailed draw call data: +The proxy writes `rtx_comp/diagnostics.log` in the game directory. After a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), it logs frames of detailed draw call data: - **VS regs written**: shows which constant registers the game actually fills - **Vertex declarations**: what vertex elements each draw uses (POSITION, NORMAL, TEXCOORD, BLENDWEIGHT, etc.) diff --git a/.github/skills/renderdoc-analysis/SKILL.md b/.github/skills/renderdoc-analysis/SKILL.md new file mode 100644 index 00000000..3a00a808 --- /dev/null +++ b/.github/skills/renderdoc-analysis/SKILL.md @@ -0,0 +1,69 @@ +--- +name: 'renderdoc-analysis' +description: 'RenderDoc GPU capture analysis. Use for .rdc files, draw call inspection, pipeline state, textures/shaders, mesh data, GPU counters.' +--- + +# RenderDoc Analysis + +All commands: `python -m renderdoctools [args]`. All support `--json` and `--output FILE`. + +## Capturing + +Launch a game with RenderDoc injection and capture — no GUI required: +``` +python -m renderdoctools capture # launch + capture +python -m renderdoctools capture --output out.rdc # specify output filename +python -m renderdoctools capture -- --arg1 --arg2 # pass args to the game +``` + +The game launches with RenderDoc hooked in. Press **F12** / **Print Screen** in-game to trigger the capture. The `.rdc` file is written to the working directory (or the `--output` path). + +**Use this as the default capture method.** The agent can run this directly — no need to walk the user through the GUI. + +D3D11/D3D12/Vulkan/OpenGL/DX9 supported. 
DX9 requires the custom RenderDoc build with D3D9 driver. Use `--opt-hook-children` for games with launchers, `--opt-ref-all-resources` if game crashes on inject. + +## Commands + +| Command | Purpose | +|---------|---------| +| `events [--draws-only]` | List events/draw calls | +| `analyze --summary` | Capture overview stats | +| `analyze --biggest-draws N` | Top N draws by vertex count | +| `analyze --render-targets` | Unique render targets | +| `pipeline --event EID` | Pipeline state (optional `--stage pixel`) | +| `textures --event EID [--save-all DIR]` | Bound textures, export | +| `shaders --event EID [--cbuffers]` | Shader disassembly + constants | +| `mesh --event EID [--post-vs]` | Vertex data | +| `descriptors --event EID [--type srv]` | Descriptor bindings | +| `api-calls [--event EID] [--filter NAME]` | API call params | +| `counters [--zero-samples]` | GPU counters, find wasted draws | +| `pixel-history --event EID --resource RID --x X --y Y` | What drew to pixel? | +| `pick-pixel --resource RID --x X --y Y` | Read pixel value | +| `usage --resource RID [--filter read\|write]` | Resource usage across events | +| `debug-shader --event EID --mode vertex\|pixel --vertex-index N` | Debug shader execution | +| `capture [--output FILE] [-- EXE_ARGS]` | Launch game with RenderDoc injection, capture on F12 | +| `open ` | Launch RenderDoc GUI | + +## Finding the Right Draw Call + +1. **Render targets**: `analyze --render-targets` → dump RTs → `usage --resource --filter write` → narrow by EID +2. **Binary search**: `events --draws-only`, pick midpoint, dump RT, converge +3. **By name**: `events --filter "shadow"` (if engine uses debug markers) +4. **By size**: `analyze --biggest-draws 20` + +## Verification + +Always confirm you're looking at the right thing: +- Dump and visually check RTs: `textures --event --save-all ./verify` +- Spot-check pixels: `pick-pixel --resource --x 100 --y 100` +- Compare before/after EIDs + +## Multi-Pass Reconstruction + +1. 
`analyze --render-targets` — list all RTs +2. Per RT: `usage --resource ` — writes = pass boundaries, reads = consumers +3. Dump textures at key EIDs to label passes (shadow, GBuffer, lighting, post) + +## When to Use GUI + +`python -m renderdoctools open ` — better for texture scrubbing, 3D mesh viewer, shader debugger with source, overlay modes (wireframe, depth, overdraw). diff --git a/.gitignore b/.gitignore index 0ecdde58..77e540c1 100644 --- a/.gitignore +++ b/.gitignore @@ -64,3 +64,6 @@ compass_artifact_* # Superpowers specs/plans (session artifacts) docs/ + +# RenderDoc (bundled distribution) +tools/renderdoc*/ diff --git a/.kiro/powers/dx9-ffp-port/POWER.md b/.kiro/powers/dx9-ffp-port/POWER.md index 47674fe9..71fab400 100644 --- a/.kiro/powers/dx9-ffp-port/POWER.md +++ b/.kiro/powers/dx9-ffp-port/POWER.md @@ -18,6 +18,8 @@ Port a DX9 shader-based game to fixed-function pipeline (FFP) for RTX Remix comp ## What remix-comp-proxy Does +**NEVER MODIFY the template at `rtx_remix_tools/dx/remix-comp-proxy/`.** Always copy to `patches//` first and edit the copy. + Each game folder under `patches//` is a self-contained remix-comp-proxy project (copied from `rtx_remix_tools/dx/remix-comp-proxy/`). It is a C++20 compatibility mod that: 1. 
Captures VS constants (View, Projection, World matrices) from `SetVertexShaderConstantF` @@ -39,8 +41,12 @@ Each game folder under `patches//` is a self-contained remix-comp-prox | `src/comp/modules/renderer.cpp` | Draw call routing -- `on_draw_indexed_prim()` and `on_draw_primitive()` | | `src/comp/modules/d3d9ex.cpp` | `IDirect3DDevice9` hook layer -- intercepts all 119 methods | | `src/comp/modules/skinning.cpp` | Skinning module (vertex expansion, bone upload, FFP blending) | -| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `ffp_proxy.log` | +| `src/comp/modules/diagnostics.cpp` | Diagnostic logging to `rtx_comp/diagnostics.log` | | `src/comp/modules/imgui.cpp` | ImGui debug overlay (F4 toggle) | +| `src/comp/comp.cpp` | Module registration order | +| `src/comp/d3d9_proxy.cpp` | Loads real d3d9 chain, DLL pre/post-load | +| `src/comp/game/game.cpp` | Per-game address init (patterns, hooks) | +| `src/comp/game/game.hpp` | Per-game variables and function typedefs | | `src/shared/common/ffp_state.cpp` | FFP state tracker -- engage/disengage, matrix transforms, texture stages | | `src/shared/common/ffp_state.hpp` | `ffp_state` class with all state accessors | | `src/shared/common/config.hpp` | Config structures: `ffp_settings`, `skinning_settings`, etc. | @@ -49,6 +55,8 @@ Each game folder under `patches//` is a self-contained remix-comp-prox Per-game setup: copy the entire `rtx_remix_tools/dx/remix-comp-proxy/` folder to `patches//`, then edit `src/comp/` directly. +**Before reading remix-comp-proxy source files**, read `references/remix-comp-context.md` for a skip-list of boilerplate files you should never open. + --- ## Porting Workflow @@ -60,8 +68,15 @@ Run ALL of the analysis scripts on the game binary. 
These are purpose-built for ```bash python rtx_remix_tools/dx/scripts/find_d3d_calls.py "" python rtx_remix_tools/dx/scripts/find_vs_constants.py "" +python rtx_remix_tools/dx/scripts/find_ps_constants.py "" python rtx_remix_tools/dx/scripts/decode_vtx_decls.py "" --scan +python rtx_remix_tools/dx/scripts/decode_fvf.py "" python rtx_remix_tools/dx/scripts/find_device_calls.py "" +python rtx_remix_tools/dx/scripts/find_render_states.py "" +python rtx_remix_tools/dx/scripts/find_texture_ops.py "" +python rtx_remix_tools/dx/scripts/find_transforms.py "" +python rtx_remix_tools/dx/scripts/classify_draws.py "" +python rtx_remix_tools/dx/scripts/find_matrix_registers.py "" python rtx_remix_tools/dx/scripts/find_skinning.py "" python rtx_remix_tools/dx/scripts/find_blend_states.py "" ``` @@ -98,6 +113,8 @@ If the tracer gives you the register layout, skip the dynamic approach in Step 2 This is the **most critical** step. Determine which VS constant registers hold View, Projection, and World matrices. +**Remix REQUIRES separate World, View, and Projection matrices.** A concatenated WVP will NOT work. If the game uploads a pre-multiplied WorldViewProj, the proxy must intercept individual matrices before concatenation. Start with `find_matrix_registers.py` to detect this. + **Static approach:** Decompile call sites: ```bash python -m retools.decompiler --types patches//kb.h @@ -118,7 +135,7 @@ Captures: startRegister, Vector4fCount, and the first 4 vec4 constants of actual ### Step 3: Copy comp/ and Configure -Copy the entire `rtx_remix_tools/dx/remix-comp/` folder to `patches//` (excluding `build/`). Edit files directly: +Copy the entire `rtx_remix_tools/dx/remix-comp-proxy/` folder to `patches//` (excluding `build/`). Edit files directly: 1. Edit register layout defaults in `src/shared/common/ffp_state.hpp` 2. Edit `src/comp/main.cpp`: set `WINDOW_CLASS_NAME` @@ -136,7 +153,7 @@ Deploy to game directory: `d3d9.dll` + `remix-comp-proxy.ini`. 
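Step 2's concatenated-WVP check can be sketched as a register-frequency pass over traced `SetVertexShaderConstantF` writes. This is a hypothetical sketch: the record shape (a list of `(start_register, vec4_count)` tuples per draw) and the half-of-draws threshold are illustrative assumptions, not the real tracer schema.

```python
# Hypothetical sketch: decide whether a game uploads separate W/V/P
# matrices or one concatenated WVP, from traced SetVertexShaderConstantF
# writes. Record shape and threshold are assumptions for illustration.
from collections import Counter

def classify_matrix_uploads(writes_per_draw):
    """writes_per_draw: one list of (start_register, vec4_count) per draw."""
    freq = Counter()
    for draw in writes_per_draw:
        for start, count in draw:
            if count >= 4:  # a 4x4 matrix is a 4-vec4 write
                freq[start] += 1
    # Registers written matrix-sized in at least half the draws.
    hot = sorted(reg for reg, n in freq.items() if n >= len(writes_per_draw) // 2)
    if len(hot) == 1:
        # Only one matrix-sized upload: likely pre-multiplied WorldViewProj.
        return "likely concatenated WVP at c%d -- hook pre-concat matrices" % hot[0]
    return "candidate matrix registers: %s" % hot

# Every draw writes a single 4-vec4 block at c0 -> concatenated WVP.
print(classify_matrix_uploads([[(0, 4), (8, 1)], [(0, 4)], [(0, 4), (12, 2)]]))
```

With separate matrices the same pass yields several hot registers (e.g. `[0, 4, 8]`), which can then seed the register layout defaults in `ffp_state.hpp`.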
Place `d3d9_remix ### Step 5: Diagnose with Log and ImGui -The proxy writes `ffp_proxy.log` in the game directory after a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), then logs frames of detailed draw call data: +The proxy writes `rtx_comp/diagnostics.log` in the game directory after a configurable delay (default 50 seconds via `[Diagnostics] DelayMs`), then logs frames of detailed draw call data: - **VS regs written**: which constant registers the game actually fills - **Vertex declarations**: what vertex elements each draw uses @@ -173,6 +190,34 @@ Other game-specific INI settings: --- +## INI Config (`remix-comp-proxy.ini`) + +Runtime settings (no recompile): + +```ini +[FFP] +Enabled=1 +AlbedoStage=0 ; Diffuse texture stage (0-7) + +[Skinning] +Enabled=0 ; Only after rigid FFP works + +[Diagnostics] +Enabled=1 +DelayMs=50000 +LogFrames=3 + +[Remix] +Enabled=1 +DLLName=d3d9_remix.dll + +[Chain] +PreLoad= ; Semicolon-separated DLLs to load before d3d9 chain +PostLoad= +``` + +--- + ## Architecture: What to Edit vs What to Leave Alone | File / Section | Edit Per-Game? | @@ -182,6 +227,8 @@ Other game-specific INI settings: | `remix-comp-proxy.ini` `[Skinning] Enabled` | **YES** (after rigid works) | | `renderer.cpp` `on_draw_indexed_prim()` | **YES** -- main draw routing | | `renderer.cpp` `on_draw_primitive()` | **YES** -- draw routing | +| `src/comp/main.cpp` WINDOW_CLASS_NAME | **YES** | +| `src/comp/game/game.cpp` address init | **YES** -- per-game hooks | | `ffp_state.cpp` `setup_lighting()`, `setup_texture_stages()`, `apply_transforms()` | MAYBE | | `ffp_state.cpp` `on_set_vs_const_f()` | MAYBE -- dirty tracking | | `ffp_state.cpp` `on_set_vertex_declaration()` | MAYBE -- element parsing | @@ -194,15 +241,17 @@ Other game-specific INI settings: ### DrawIndexedPrimitive Decision Tree ``` -viewProjValid? -+-- NO -> shader passthrough +ffp.is_enabled() AND ffp.view_proj_valid()? 
++-- NO -> passthrough with shaders +-- YES - +-- curDeclIsSkinned? + +-- ffp.cur_decl_is_skinned()? | +-- YES + skinning module -> skinning::draw_skinned_dip() - | +-- YES + no skinning -> shader passthrough - +-- NOT skinned - +-- !curDeclHasNormal -> shader passthrough (HUD/UI) - +-- hasNormal -> ffp_state::engage + rigid FFP draw + | +-- YES + no skinning -> passthrough with shaders + +-- !ffp.cur_decl_has_normal()? + | +-- passthrough (HUD/UI) + | GAME-SPECIFIC: remove this filter if world geometry lacks NORMAL + +-- else (rigid 3D mesh) + +-- ffp.engage() + draw + restore ``` **Common per-game changes:** @@ -213,9 +262,10 @@ viewProjValid? ### DrawPrimitive Decision Tree ``` -viewProjValid AND lastDecl AND !curDeclHasPosT AND !curDeclIsSkinned? -+-- YES -> ffp_state::engage (world-space particles / non-indexed geometry) -+-- NO -> shader passthrough (screen-space UI, POSITIONT, no decl, skinned) +ffp.is_enabled() AND ffp.view_proj_valid() AND ffp.last_decl() +AND !ffp.cur_decl_has_pos_t() AND !ffp.cur_decl_is_skinned()? ++-- YES -> ffp.engage() (world-space particles / non-indexed geometry) ++-- NO -> passthrough (screen-space UI, POSITIONT, no decl, skinned) ``` --- @@ -226,9 +276,16 @@ viewProjValid AND lastDecl AND !curDeclHasPosT AND !curDeclIsSkinned? 
|--------|-----------------| | `python rtx_remix_tools/dx/scripts/find_d3d_calls.py ` | D3D9/D3DX imports and call sites | | `python rtx_remix_tools/dx/scripts/find_vs_constants.py ` | `SetVertexShaderConstantF` call sites and register/count args | +| `python rtx_remix_tools/dx/scripts/find_ps_constants.py ` | `SetPixelShaderConstantF/I/B` call sites and register/count args | | `python rtx_remix_tools/dx/scripts/find_device_calls.py ` | Device vtable call patterns and device pointer refs | | `python rtx_remix_tools/dx/scripts/find_vtable_calls.py ` | D3DX constant table usage and D3D9 vtable calls | | `python rtx_remix_tools/dx/scripts/decode_vtx_decls.py --scan` | Vertex declaration formats (BLENDWEIGHT/BLENDINDICES -> skinning) | +| `python rtx_remix_tools/dx/scripts/decode_fvf.py ` | FVF bitfield decode from SetFVF calls | +| `python rtx_remix_tools/dx/scripts/find_render_states.py ` | SetRenderState args decoded by category | +| `python rtx_remix_tools/dx/scripts/find_texture_ops.py ` | Texture pipeline: stages, TSS ops, sampler states | +| `python rtx_remix_tools/dx/scripts/find_transforms.py ` | SetTransform types (World, View, Projection, Texture) | +| `python rtx_remix_tools/dx/scripts/classify_draws.py ` | Draw call classification (FFP/shader/hybrid %) | +| `python rtx_remix_tools/dx/scripts/find_matrix_registers.py ` | Identify View/Proj/World registers (CTAB + frequency + layout suggestion) | | `python rtx_remix_tools/dx/scripts/find_skinning.py ` | Consolidated skinning analysis: skinned decls, bone palettes, blend states, suggested INI | | `python rtx_remix_tools/dx/scripts/find_blend_states.py ` | D3DRS_VERTEXBLEND + INDEXEDVERTEXBLENDENABLE + WORLDMATRIX transforms | | `python rtx_remix_tools/dx/scripts/scan_d3d_region.py 0xSTART 0xEND` | Map all D3D9 vtable calls in a code region | @@ -282,14 +339,15 @@ python -m retools.search strings -f "vertex,decl,shader" --xrefs ## Common Pitfalls -- **Matrices look wrong**: D3D9 FFP `SetTransform` expects 
row-major. The proxy transposes. If the game stores matrices row-major in VS constants (uncommon), remove the transpose in `ffp_state::apply_transforms()`. -- **Everything is white/black**: Albedo texture is on stage 1+, not stage 0. Set `AlbedoStage` in `remix-comp-proxy.ini` `[FFP]` section, or trace `SetTexture` calls to find the correct stage. -- **Some objects render, others don't**: Check whether missing geometry has NORMAL in its vertex decl. Check `view_proj_valid()` is true at draw time. `on_draw_primitive()` routes on decl presence + no POSITIONT + not skinned. -- **Skinned meshes invisible**: Enable `[Skinning] Enabled=1` in `remix-comp-proxy.ini`. Verify bone count is non-zero in diagnostic log entries. -- **Bones mixed up between NPCs**: Stale WORLDMATRIX slots from a previous object. If still broken, the game may need a game-specific reset hook. -- **Game crashes on startup**: Set `Enabled=0` in `remix-comp-proxy.ini` `[Remix]` to test without Remix. -- **Geometry at origin / piled up**: World matrix register mapping wrong. Re-examine VS constant writes via `livetools trace`. -- **World geometry shifts after skinned draws**: `WORLDMATRIX(0)` clobbered by bone[0]. The proxy re-applies via world dirty tracking. If still broken, check for bone register overlap with world matrix range. +- **Concatenated WVP/VP instead of separate matrices**: **#1 Remix porting mistake.** If CTAB shows "WorldViewProj" or only one matrix register per draw, you have this problem. Hook individual W/V/P before concatenation. +- **Matrices look wrong**: FFP `SetTransform` expects row-major. Proxy transposes. If game stores row-major VS constants (uncommon), remove transpose in `apply_transforms()`. +- **Everything is white/black**: Albedo on stage 1+, not 0. Set `AlbedoStage` in INI. +- **Some objects missing**: Check NORMAL in vertex decl. Check `ffp.view_proj_valid()` at draw time. +- **Skinned meshes invisible**: Enable `[Skinning] Enabled=1`. Check log for bone count. 
+- **Game crashes on startup**: Set `[Remix] Enabled=0` to test without Remix. Check `WINDOW_CLASS_NAME`. +- **Geometry at origin**: World matrix register mapping wrong. Trace VS constant writes. +- **World shifts after skinned draws**: `WORLDMATRIX(0)` clobbered by bone[0]. Proxy tracks `world_dirty_`. Check bone register overlap. +- **ImGui overlay not appearing**: Press F4. Check that `WINDOW_CLASS_NAME` is correct and that no DirectInput hook conflicts with the overlay. ### Skinning Stability: Finding Game-Specific Hook Points diff --git a/.kiro/powers/dx9-ffp-port/references/remix-comp-context.md b/.kiro/powers/dx9-ffp-port/references/remix-comp-context.md new file mode 100644 index 00000000..63be154a --- /dev/null +++ b/.kiro/powers/dx9-ffp-port/references/remix-comp-context.md @@ -0,0 +1,58 @@ +# remix-comp-proxy: Context Guide + +**`rtx_remix_tools/dx/remix-comp-proxy/` is the READ-ONLY TEMPLATE.** Never modify it unless explicitly asked. Per-game work goes in `patches/<game>/`. + +Each game folder under `patches/<game>/` is a self-contained remix-comp-proxy project. All paths below are relative to the game folder root (e.g. `patches/FNV/`) or to the framework template at `rtx_remix_tools/dx/remix-comp-proxy/`. + +## Do Not Read + +These are infrastructure files. Use the summaries below instead of opening them. + +### Auto-generated +- `src/comp/modules/tracer_dispatch.inc` — 119 inline `trace_XXX()` functions generated by `graphics/directx/dx9/tracer/d3d9_methods.py`. Each wraps `tracer::record_begin/arg/end`. + +### D3D9 passthrough proxy +- `src/comp/modules/drawcall_mod_context.cpp` — `drawcall_mod_context` save/restore method implementations. Never needs to be read; the API is in `renderer.hpp`. +- `src/comp/modules/d3d9ex.cpp` — `IDirect3DDevice9` wrapper + exported `Direct3DCreate9`/`Direct3DCreate9Ex`. ~80 of 119 methods are trivial 1-line forwards. The ~12 methods with logic dispatch to renderer/ffp_state/imgui/tracer modules.
+- `src/comp/modules/d3d9ex.hpp` — D3D9Device, _d3d9, and _d3d9ex wrapper class declarations. +- `src/comp/d3d9_proxy.cpp` + `d3d9_proxy.hpp` — d3d9.dll proxy: loads real d3d9 chain (Remix bridge or system), DLL pre/post-load chains, forwarded D3DPERF/debug exports. + +### Shared library (framework, never edited per-game) +- `src/shared/utils/hooking.cpp/hpp` — MinHook wrapper + vtable patching. +- `src/shared/utils/vector.hpp` — SIMD float2/float3/float4 math. +- `src/shared/common/remix_api.cpp/hpp` — RTX Remix runtime bridge (load d3d9_remix.dll, init API, submit lights). +- `src/shared/common/console.hpp` — Debug console allocation/output. +- `src/shared/common/imgui_helper.cpp/hpp` — ImGui setup/teardown utilities. +- `src/shared/utils/memory.cpp/hpp` — Pattern scanning and memory allocation. +- `src/shared/utils/utils.cpp/hpp` — String/path/module utilities. +- `src/shared/common/dinput_hook_v1.cpp/hpp` — DirectInput8 hook (input capture). +- `src/shared/common/dinput_hook_v2.cpp/hpp` — Alternate DirectInput hook. +- `src/shared/common/loader.cpp/hpp` — Module loader/registry. +- `src/shared/common/flags.cpp/hpp` — Command-line flag parsing. +- `src/shared/common/shader_cache.hpp` — Shader handle cache. +- `src/shared/globals.cpp/hpp` — Global state (device pointer, module handle). +- `src/shared/structs.hpp`, `src/shared/std_include.*`, `src/comp/std_include.*` — Precompiled headers, forward declarations. 
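As a mental model for the Auto-generated entry above: a toy generator in the spirit of `d3d9_methods.py`, emitting one trace wrapper per method. The method table and the C++ it emits are illustrative assumptions; the real generator and its schema differ.

```python
# Toy generator sketch: emit one inline trace wrapper per D3D9 method,
# in the spirit of tracer_dispatch.inc. Signatures are simplified
# placeholders, not the real generated code.
METHODS = [("DrawIndexedPrimitive", 6), ("SetTexture", 2), ("SetRenderState", 2)]

def emit_wrapper(name, argc):
    params = ", ".join("uint64_t a%d" % i for i in range(argc))
    args = "\n".join("    tracer::record_arg(a%d);" % i for i in range(argc))
    return (
        "inline void trace_%s(%s) {\n" % (name, params)
        + '    tracer::record_begin("%s");\n' % name
        + args + "\n"
        + "    tracer::record_end();\n}\n"
    )

# Concatenating all wrappers yields a tracer_dispatch.inc-style file.
inc = "\n".join(emit_wrapper(n, c) for n, c in METHODS)
print(inc)
```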
+ +## Read These Instead + +The files that matter for per-game FFP porting work: + +| File | Role | +|------|------| +| `src/comp/modules/renderer.cpp` | **Draw call routing — THE main edit target** | +| `src/comp/modules/renderer.hpp` | `drawcall_mod_context` API + `renderer` class declaration | +| `src/shared/common/ffp_state.cpp` | FFP state tracker: engage/disengage, transforms, textures | +| `src/shared/common/ffp_state.hpp` | ffp_state class with all accessors | +| `src/shared/common/config.cpp` | INI config loader | +| `src/shared/common/config.hpp` | Config structures | +| `src/comp/main.cpp` | DLL entry, WINDOW_CLASS_NAME, module init | +| `src/comp/comp.cpp` | Module registration order | +| `src/comp/game/game.cpp` | Per-game address patterns and hooks | +| `src/comp/game/game.hpp` | Per-game variables and typedefs | +| `src/comp/game/structs.hpp` | Per-game data structures | + +### Read only when relevant +- `src/comp/modules/diagnostics.cpp/hpp` — Frame logger to `rtx_comp/diagnostics.log`. Read when debugging log output. +- `src/comp/modules/skinning.cpp/hpp` — Skinned mesh FFP conversion. Read only if user asks about skinning. +- `src/comp/modules/imgui.cpp/hpp` — Debug overlay (F4). Read only if debugging ImGui. +- `src/comp/modules/tracer.cpp/hpp` — JSONL D3D9 call recorder. Read only if working on tracer features. diff --git a/.kiro/powers/renderdoc-analysis/POWER.md b/.kiro/powers/renderdoc-analysis/POWER.md new file mode 100644 index 00000000..58f1e16e --- /dev/null +++ b/.kiro/powers/renderdoc-analysis/POWER.md @@ -0,0 +1,101 @@ +--- +name: "renderdoc-analysis" +displayName: "RenderDoc Analysis" +description: "RenderDoc GPU capture analysis. Use for .rdc files, draw call inspection, pipeline state, textures/shaders, mesh data, GPU counters." +keywords: ["renderdoc", "rdc", "gpu", "capture", "draw-call", "pipeline", "textures", "shaders"] +author: "workspace" +--- + +# RenderDoc Analysis + +All commands: `python -m renderdoctools <command> <capture.rdc> [args]`.
All support `--json` and `--output FILE`. + +## Capturing + +Launch a game with RenderDoc injection and capture — no GUI required: +``` +python -m renderdoctools capture <game.exe> # launch + capture +python -m renderdoctools capture <game.exe> --output out.rdc # specify output filename +python -m renderdoctools capture <game.exe> -- --arg1 --arg2 # pass args to the game +``` + +The game launches with RenderDoc hooked in. Press **F12** / **Print Screen** in-game to trigger the capture. The `.rdc` file is written to the working directory (or the `--output` path). + +**Use this as the default capture method.** The agent can run this directly — no need to walk the user through the GUI. + +D3D11/D3D12/Vulkan/OpenGL/DX9 supported. DX9 requires the custom RenderDoc build with D3D9 driver. Use `--opt-hook-children` for games with launchers, `--opt-ref-all-resources` if game crashes on inject. + +## Commands + +| Command | Purpose | +|---------|---------| +| `events [--draws-only]` | List events/draw calls | +| `analyze --summary` | Capture overview stats | +| `analyze --biggest-draws N` | Top N draws by vertex count | +| `analyze --render-targets` | Unique render targets | +| `pipeline --event EID` | Pipeline state (optional `--stage pixel`) | +| `textures --event EID [--save-all DIR]` | Bound textures, export | +| `shaders --event EID [--cbuffers]` | Shader disassembly + constants | +| `mesh --event EID [--post-vs]` | Vertex data | +| `descriptors --event EID [--type srv]` | Descriptor bindings | +| `api-calls [--event EID] [--filter NAME]` | API call params | +| `counters [--zero-samples]` | GPU counters, find wasted draws | +| `pixel-history --event EID --resource RID --x X --y Y` | What drew to pixel?
| +| `pick-pixel --resource RID --x X --y Y` | Read pixel value | +| `usage --resource RID [--filter read\|write]` | Resource usage across events | +| `debug-shader --event EID --mode vertex\|pixel --vertex-index N` | Debug shader execution | +| `messages [--severity high]` | API debug/validation messages | +| `tex-stats --resource RID [--histogram]` | Min/max RGBA, value distribution | +| `tex-data --resource RID [--output-file F]` | Raw texture bytes | +| `frame-info <capture.rdc>` | Frame stats: draws, dispatches, binds | +| `custom-shader --event EID --source F --output F` | Apply custom viz shader | +| `capture <game.exe> [--output FILE] [-- EXE_ARGS]` | Launch game with RenderDoc injection, capture on F12 | +| `open <capture.rdc>` | Launch RenderDoc GUI | + +## Finding the Right Draw Call + +1. **Render targets**: `analyze --render-targets` → dump RTs → `usage --resource <RID> --filter write` → narrow by EID +2. **Binary search**: `events --draws-only`, pick midpoint, dump RT, converge +3. **By name**: `events --filter "shadow"` (if engine uses debug markers) +4. **By size**: `analyze --biggest-draws 20` + +## Verification + +Always confirm you're looking at the right thing: +- Dump and visually check RTs: `textures --event <EID> --save-all ./verify` +- Spot-check pixels: `pick-pixel --resource <RID> --x 100 --y 100` +- Compare before/after EIDs + +## Multi-Pass Reconstruction + +1. `analyze --render-targets` — list all RTs +2. Per RT: `usage --resource <RID>` — writes = pass boundaries, reads = consumers +3. Dump textures at key EIDs to label passes (shadow, GBuffer, lighting, post) + +## Interpreting Shader Debug Output + +`debug-shader` produces inputs, constant blocks, per-step variable changes.
Look for: +- **NaN/Inf**: division by zero or bad input — trace the step +- **Unexpected zeros**: wrong binding or uninitialized resource +- **Bad transforms**: check `finalState` output position + constant blocks +- **Shader discards**: `shaderDiscarded: true` in pixel history — debug the discard condition + +## When Data Looks Wrong + +1. **Correct EID?** All queries reflect state at the queried EID +2. **Correct resource?** Resource IDs are per-capture — re-discover with `textures` +3. **Correct format?** `pick-pixel` with wrong `--comp-type` reads garbage +4. **Initialized?** All zeros = not yet written. `usage --filter write` finds first write +5. **Mip/slice?** Querying mip 0 of a texture only written at mip 1+ returns stale data + +## Workflow Recipes + +**Quick overview**: `analyze --summary` > `events --draws-only` > `analyze --biggest-draws 10` + +**Investigate draw**: `pipeline --event EID` > `textures --event EID --save-all ./dump` > `shaders --event EID --cbuffers` + +**Full frame audit**: `analyze --render-targets` > `messages --severity high` > `counters --zero-samples` + +## When to Use GUI + +`python -m renderdoctools open <capture.rdc>` — better for texture scrubbing, 3D mesh viewer, shader debugger with source, overlay modes (wireframe, depth, overdraw). diff --git a/.kiro/steering/project-workspace.md b/.kiro/steering/project-workspace.md index 22ab1ced..e00e928e 100644 --- a/.kiro/steering/project-workspace.md +++ b/.kiro/steering/project-workspace.md @@ -6,6 +6,17 @@ name: project-workspace # Project Workspace +## Read-Only Templates + +Do not modify for game-specific work — per-game changes go in `patches/<game>/`. + +- `rtx_remix_tools/dx/remix-comp-proxy/` — proxy framework **template** (copy per-game) +- `rtx_remix_tools/dx/scripts/`, `retools/`, `livetools/`, `graphics/`, `renderdoctools/` — shared tooling + +Shared tooling can be modified to improve the tools themselves — not for game-specific customization.
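The copy-per-game rule above can be sketched with `shutil`. Paths are assumed from the layout described in this workspace; adjust to the actual tree.

```python
# Sketch of the per-game setup step: copy the remix-comp-proxy template
# into patches/<game>/, skipping build/ output. Paths are assumptions
# based on the workspace layout described above.
import shutil
from pathlib import Path

def copy_template(template: Path, game: str, patches: Path) -> Path:
    dest = patches / game
    shutil.copytree(
        template,
        dest,
        ignore=shutil.ignore_patterns("build"),  # never copy build artifacts
        dirs_exist_ok=False,  # refuse to clobber an existing game folder
    )
    return dest

# Example (hypothetical game name):
# copy_template(Path("rtx_remix_tools/dx/remix-comp-proxy"), "FNV", Path("patches"))
```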
+ +## Per-Game Work + Use `patches//` (git-ignored) for all project-specific artifacts: - Knowledge base files (`kb.h`) - One-off analysis scripts diff --git a/renderdoctools/__init__.py b/renderdoctools/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/renderdoctools/__main__.py b/renderdoctools/__main__.py new file mode 100644 index 00000000..825f1bb8 --- /dev/null +++ b/renderdoctools/__main__.py @@ -0,0 +1,889 @@ +"""CLI entry point for renderdoctools -- RenderDoc capture analysis toolkit. + +Usage: python -m renderdoctools [args] + +Event browser: + python -m renderdoctools events [--draws-only] [--filter TEXT] + +Pipeline state: + python -m renderdoctools pipeline --event [--stage STAGE] + +Textures: + python -m renderdoctools textures --event [--save-all DIR] + +Shaders: + python -m renderdoctools shaders --event [--stage STAGE] + +Mesh data: + python -m renderdoctools mesh --event [--post-vs] + +GPU counters: + python -m renderdoctools counters [--fetch NAME] [--zero-samples] + +Analysis: + python -m renderdoctools analyze [--summary] [--biggest-draws N] + +Capture info: + python -m renderdoctools info + +Pixel history: + python -m renderdoctools pixel-history --event --resource --x 100 --y 200 + +Debug shader: + python -m renderdoctools debug-shader --event --mode pixel --x 100 --y 200 + +Resource usage: + python -m renderdoctools usage --resource [--filter read] + +Pick pixel: + python -m renderdoctools pick-pixel --resource --x 100 --y 200 + +Texture stats: + python -m renderdoctools tex-stats --resource [--histogram] + +Debug messages: + python -m renderdoctools messages [--severity high] + +Frame info: + python -m renderdoctools frame-info + +Descriptors: + python -m renderdoctools descriptors --event [--type sampler] + +Custom shader: + python -m renderdoctools custom-shader --event --source shader.hlsl --output out.dds + +Texture data: + python -m renderdoctools tex-data --resource [--output-file dump.bin] + +API calls: + python 
-m renderdoctools api-calls [--event ] [--filter DrawIndexed] + +Utilities: + python -m renderdoctools open + python -m renderdoctools capture [--output FILE] +""" +from __future__ import annotations + +import argparse +import json +import subprocess +import sys +from pathlib import Path + +from . import core + + +def cmd_events(args: argparse.Namespace) -> None: + config = { + "draws_only": args.draws_only, + "filter": args.filter or "", + } + result = core.run_script("events", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + events = result["events"] + print("=== %d events ===" % result["total"]) + + for ev in events: + indent = " " * ev["depth"] + tag = "" + if ev["draw"]: + tag = " [DRAW idx=%d inst=%d]" % (ev["numIndices"], ev["numInstances"]) + elif ev["clear"]: + tag = " [CLEAR]" + print("%s%d: %s%s" % (indent, ev["eid"], ev["name"], tag)) + + +def cmd_open(args: argparse.Namespace) -> None: + qrd = core.find_renderdoc() + capture = str(Path(args.capture).resolve()) + subprocess.Popen([str(qrd), capture]) + print("Opened %s in RenderDoc." 
% capture) + + +def cmd_pipeline(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "stage": args.stage or "", + } + result = core.run_script("pipeline", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Pipeline State @ EID %d ===" % result["event_id"]) + for stage_name, info in result["stages"].items(): + print("\n[%s]" % stage_name.upper()) + print(" entry: %s" % info["entryPoint"]) + if info["constantBuffers"]: + print(" cbuffers: %d" % len(info["constantBuffers"])) + for cb in info["constantBuffers"]: + print(" %d: %s (%d bytes)" % (cb["index"], cb["name"], cb["byteSize"])) + if info["readOnlyResources"]: + print(" SRVs: %d" % len(info["readOnlyResources"])) + for r in info["readOnlyResources"]: + print(" %d: %s (%s)" % (r["index"], r["name"], r["type"])) + if info["readWriteResources"]: + print(" UAVs: %d" % len(info["readWriteResources"])) + for r in info["readWriteResources"]: + print(" %d: %s (%s)" % (r["index"], r["name"], r["type"])) + + if result.get("renderTargets"): + print("\nRender Targets: %s" % ", ".join(result["renderTargets"])) + if result.get("depthTarget"): + print("Depth Target: %s" % result["depthTarget"]) + + +def cmd_textures(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "save_all": args.save_all or "", + "save_rid": args.save or "", + "format": args.format or "png", + "save_output": args.save_output or "", + } + result = core.run_script("textures", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== %d textures @ EID %d ===" % (result["total"], args.event)) + for tex in result["textures"]: + dim = "%dx%d" % (tex["width"], tex["height"]) + if tex["depth"] > 1: + dim += "x%d" % tex["depth"] + 
print(" %s %s %s [%s] %s" % ( + tex["resourceId"].rjust(10), + dim.ljust(12), + tex["format"][:24].ljust(24), + tex["binding"], + tex.get("name", ""), + )) + + if result.get("saved"): + print("\nSaved %d textures:" % len(result["saved"])) + for f in result["saved"]: + print(" %s" % f) + + +def cmd_shaders(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "stage": args.stage or "", + "cbuffers": args.cbuffers, + } + result = core.run_script("shaders", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Shaders @ EID %d [%s] ===" % (result["event_id"], result["disasmTarget"])) + for stage_name, info in result["shaders"].items(): + print("\n-- %s -- (entry: %s)" % (stage_name.upper(), info["entryPoint"])) + print(info["disassembly"][:2000]) + if len(info["disassembly"]) > 2000: + print("... (truncated, use --json for full output)") + + if "constantBuffers" in info: + for cb in info["constantBuffers"]: + print("\n cbuffer %s [%d]:" % (cb["name"], cb["index"])) + if "error" in cb: + print(" (error: %s)" % cb["error"]) + continue + for v in cb.get("variables", []): + if "values" in v: + vals = ", ".join("%.4f" % x for x in v["values"]) + print(" %s: [%s]" % (v["name"], vals)) + + +def cmd_mesh(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "post_vs": args.post_vs, + "indices": args.indices or "", + } + result = core.run_script("mesh", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + mode = "Post-VS" if result["post_vs"] else "Input" + print("=== Mesh %s @ EID %d ===" % (mode, result["event_id"])) + print("Attributes: %s" % ", ".join(a["name"] for a in result["attributes"])) + print("") + + for vert in result.get("vertices", []): + parts 
= ["idx=%d" % vert["index"]] + for a in result["attributes"]: + val = vert.get(a["name"]) + if val: + parts.append("%s=%s" % (a["name"], val)) + print(" ".join(parts)) + + +def cmd_counters(args: argparse.Namespace) -> None: + config = { + "fetch": args.fetch or "", + "zero_samples": args.zero_samples, + } + result = core.run_script("counters", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + mode = result["mode"] + if mode == "list": + print("=== %d GPU Counters ===" % len(result["counters"])) + for c in result["counters"]: + print(" %s (%s) -- %s" % (c["name"], c["unit"], c["description"][:60])) + elif mode == "zero_samples": + print("=== %d draws with 0 samples passed ===" % result["total"]) + for d in result["draws"]: + print(" EID %d: %s (indices=%d)" % (d["eid"], d["name"], d["numIndices"])) + elif mode == "fetch": + print("=== %s (%s) ===" % (result["counter"], result["unit"])) + for r in result["results"]: + print(" EID %d: %s = %d" % (r["eid"], r["name"][:40], r["value"])) + + +def cmd_analyze(args: argparse.Namespace) -> None: + config = { + "summary": args.summary, + "biggest_draws": args.biggest_draws or 0, + "render_targets": args.render_targets, + } + result = core.run_script("analyze", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + if "summary" in result: + s = result["summary"] + print("=== Capture Summary ===") + print(" Events: %d" % s["totalEvents"]) + print(" Draws: %d" % s["totalDraws"]) + print(" Clears: %d" % s["totalClears"]) + print(" Indices: %d" % s["totalIndices"]) + print(" Instances: %d" % s["totalInstances"]) + + if "biggestDraws" in result: + print("\n=== Top %d Draws by Index Count ===" % len(result["biggestDraws"])) + for d in result["biggestDraws"]: + 
print(" EID %d: %s (indices=%d, instances=%d)" % ( + d["eid"], d["name"], d["numIndices"], d["numInstances"])) + + if "renderTargets" in result: + print("\n=== Render Targets ===") + for rt in result["renderTargets"]: + print(" RID %s: %d draws (EID %d-%d)" % ( + rt["resourceId"], rt["drawCount"], rt["firstEid"], rt["lastEid"])) + + +def cmd_info(args: argparse.Namespace) -> None: + result = core.run_script("info", args.capture) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Capture Info ===") + for k, v in result.items(): + print(" %s: %s" % (k, v)) + + +def cmd_capture(args: argparse.Namespace) -> None: + # renderdoccmd lives next to qrenderdoc + qrd = core.find_renderdoc() + renderdoccmd = qrd.parent / "renderdoccmd.exe" + if not renderdoccmd.is_file(): + print("[error] renderdoccmd not found at %s" % renderdoccmd, file=sys.stderr) + sys.exit(1) + + cmd = [str(renderdoccmd), "capture"] + if args.output_file: + cmd.extend(["-c", args.output_file]) + cmd.extend(["-w", args.exe]) + cmd.extend(args.exe_args) + + print("Launching capture: %s" % " ".join(cmd)) + subprocess.run(cmd) + + +def _parse_xyz(val): + parts = val.split(",") + if len(parts) != 3: + raise argparse.ArgumentTypeError("expected X,Y,Z (three comma-separated ints)") + return [int(p) for p in parts] + + +def cmd_pixel_history(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "resource_id": args.resource, + "x": args.x, + "y": args.y, + "sub_mip": args.sub_mip or 0, + "sub_slice": args.sub_slice or 0, + "sub_sample": args.sub_sample or 0, + } + result = core.run_script("pixel_history", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Pixel History @ (%d, %d) ===" % (args.x, args.y)) + for mod in 
result.get("modifications", []): + status = "PASS" if mod.get("passed") else "FAIL" + pre = mod.get("preRGBA", [0, 0, 0, 0]) + post = mod.get("postRGBA", [0, 0, 0, 0]) + print(" EID %d [%s] pre=(%.3f, %.3f, %.3f, %.3f) post=(%.3f, %.3f, %.3f, %.3f)" % ( + mod["eid"], status, pre[0], pre[1], pre[2], pre[3], + post[0], post[1], post[2], post[3])) + + +def cmd_debug_shader(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "mode": args.mode, + "vertex_index": args.vertex_index or 0, + "instance": args.instance or 0, + "view": args.view or 0, + "x": args.x or 0, + "y": args.y or 0, + "sample": args.sample or 0, + "primitive": args.primitive or 0, + "group": args.group or [0, 0, 0], + "thread": args.thread or [0, 0, 0], + "max_steps": args.max_steps or 10000, + } + result = core.run_script("debug_shader", args.capture, config, timeout=300) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Shader Debug Trace @ EID %d [%s] ===" % (args.event, args.mode)) + for step in result.get("steps", []): + line = " step %d: " % step["index"] + if step.get("source"): + line += "%s " % step["source"] + for var in step.get("variables", []): + line += "%s=%s " % (var["name"], var["value"]) + print(line.rstrip()) + + +def cmd_usage(args: argparse.Namespace) -> None: + config = { + "resource_id": args.resource, + "usage_filter": args.filter or "all", + } + result = core.run_script("usage", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Resource Usage for %s ===" % args.resource) + for entry in result.get("usages", []): + print(" EID %d: %s (%s)" % (entry["eventId"], entry["eventName"], entry["usage"])) + + +def cmd_pick_pixel(args: argparse.Namespace) -> None: + config = { + "resource_id": 
args.resource, + "x": args.x, + "y": args.y, + "sub_mip": args.sub_mip or 0, + "sub_slice": args.sub_slice or 0, + "sub_sample": args.sub_sample or 0, + "comp_type": args.comp_type or "", + } + result = core.run_script("pick_pixel", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Pixel @ (%d, %d) RID %s ===" % (args.x, args.y, args.resource)) + if "floatValue" in result: + v = result["floatValue"] + print(" float: (%.6f, %.6f, %.6f, %.6f)" % (v[0], v[1], v[2], v[3])) + if "intValue" in result: + v = result["intValue"] + print(" int: (%d, %d, %d, %d)" % (v[0], v[1], v[2], v[3])) + if "uintValue" in result: + v = result["uintValue"] + print(" uint: (%d, %d, %d, %d)" % (v[0], v[1], v[2], v[3])) + + +def cmd_tex_stats(args: argparse.Namespace) -> None: + config = { + "resource_id": args.resource, + "sub_mip": args.mip or 0, + "sub_slice": args.slice or 0, + "sub_sample": args.sample or 0, + "histogram": args.histogram, + "histogram_min": args.hist_min if args.hist_min is not None else 0.0, + "histogram_max": args.hist_max if args.hist_max is not None else 1.0, + } + result = core.run_script("tex_stats", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Texture Stats for %s ===" % args.resource) + if "minRGBA" in result: + v = result["minRGBA"] + print(" min: (%.4f, %.4f, %.4f, %.4f)" % (v[0], v[1], v[2], v[3])) + if "maxRGBA" in result: + v = result["maxRGBA"] + print(" max: (%.4f, %.4f, %.4f, %.4f)" % (v[0], v[1], v[2], v[3])) + if "histogram" in result: + print(" histogram: %d buckets" % len(result["histogram"])) + for i, count in enumerate(result["histogram"]): + print(" [%d] %d" % (i, count)) + + +def cmd_messages(args: argparse.Namespace) -> None: + config = { + 
"severity_filter": args.severity or "all", + } + result = core.run_script("messages", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + msgs = result.get("messages", []) + print("=== %d Debug Messages ===" % len(msgs)) + for m in msgs: + print(" [%s] %s: %s" % (m.get("severity", "?"), m.get("category", "?"), m.get("description", ""))) + + +def cmd_frame_info(args: argparse.Namespace) -> None: + result = core.run_script("frame_info", args.capture) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Frame Info ===") + for k, v in result.items(): + print(" %s: %s" % (k, v)) + + +def cmd_descriptors(args: argparse.Namespace) -> None: + config = { + "event_id": args.event, + "type_filter": args.type or "all", + } + result = core.run_script("descriptors", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + descs = result.get("descriptors", []) + print("=== %d Descriptors @ EID %d ===" % (len(descs), args.event)) + for d in descs: + print(" [%s] %s: %s (%s)" % (d.get("stage", "?"), d.get("type", "?"), + d.get("resource", "?"), d.get("format", "?"))) + + +def cmd_custom_shader(args: argparse.Namespace) -> None: + source_path = Path(args.source) + if not source_path.is_file(): + print("[error] shader source not found: %s" % args.source, file=sys.stderr) + sys.exit(1) + shader_source = source_path.read_text() + + config = { + "event_id": args.event, + "shader_source": shader_source, + "output_path": str(Path(args.output_path).resolve()), + "encoding": args.encoding or "", + "entry_point": args.entry_point or "main", + } + result = core.run_script("custom_shader", args.capture, 
config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + if result.get("success"): + print("Custom shader applied successfully.") + if result.get("saved"): + print("Saved to: %s" % result["saved"]) + else: + print("Custom shader failed.") + + +def cmd_tex_data(args: argparse.Namespace) -> None: + config = { + "resource_id": args.resource, + "sub_mip": args.sub_mip or 0, + "sub_slice": args.sub_slice or 0, + "sub_sample": args.sub_sample or 0, + "output_path": str(Path(args.output_file).resolve()) if args.output_file else "", + "hex_preview_bytes": args.preview_bytes or 256, + } + result = core.run_script("tex_data", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + print("=== Texture Data for %s ===" % args.resource) + if result.get("width"): + print(" %dx%d %s" % (result["width"], result["height"], result.get("format", ""))) + if result.get("saved"): + print(" Saved to: %s" % result["saved"]) + if result.get("hexPreview"): + print(" Hex preview:\n %s" % result["hexPreview"]) + + +def cmd_api_calls(args: argparse.Namespace) -> None: + config = { + "event_id": args.event or 0, + "filter": args.filter or "", + "range_start": args.range[0] if args.range else 0, + "range_end": args.range[1] if args.range else 0, + } + result = core.run_script("api_calls", args.capture, config) + + if args.json: + print(json.dumps(result, indent=2)) + return + + if "error" in result: + print("[error] %s" % result["error"], file=sys.stderr) + sys.exit(1) + + calls = result.get("calls", []) + print("=== %d API Calls ===" % len(calls)) + for c in calls: + print(" EID %d: %s(%s)" % (c.get("eid", 0), c.get("function", ""), c.get("params_inline", ""))) + + +def main() -> None: + parser = argparse.ArgumentParser( + prog="renderdoctools", + 
description="RenderDoc capture analysis toolkit", + ) + sub = parser.add_subparsers(dest="command") + sub.required = True + + # events + p_events = sub.add_parser("events", help="List events and draw calls") + p_events.add_argument("capture", help="Path to .rdc capture file") + p_events.add_argument("--draws-only", action="store_true", help="Only show draw calls") + p_events.add_argument("--filter", type=str, default="", help="Filter events by name") + p_events.add_argument("--json", action="store_true", help="Output raw JSON") + p_events.add_argument("--output", type=str, help="Write output to file") + p_events.set_defaults(func=cmd_events) + + # open + p_open = sub.add_parser("open", help="Launch RenderDoc GUI with capture") + p_open.add_argument("capture", help="Path to .rdc capture file") + p_open.set_defaults(func=cmd_open) + + # pipeline + p_pipe = sub.add_parser("pipeline", help="Inspect pipeline state at an event") + p_pipe.add_argument("capture", help="Path to .rdc capture file") + p_pipe.add_argument("--event", type=int, required=True, help="Event ID") + p_pipe.add_argument("--stage", type=str, default="", help="Filter to stage: vertex, pixel, geometry, hull, domain, compute") + p_pipe.add_argument("--json", action="store_true", help="Output raw JSON") + p_pipe.add_argument("--output", type=str, help="Write output to file") + p_pipe.set_defaults(func=cmd_pipeline) + + # textures + p_tex = sub.add_parser("textures", help="List and export textures at an event") + p_tex.add_argument("capture", help="Path to .rdc capture file") + p_tex.add_argument("--event", type=int, required=True, help="Event ID") + p_tex.add_argument("--save-all", type=str, metavar="DIR", help="Export all textures to directory") + p_tex.add_argument("--save", type=str, metavar="RID", help="Export specific texture by resource ID") + p_tex.add_argument("--save-output", type=str, metavar="FILE", help="Output path for --save") + p_tex.add_argument("--format", type=str, default="png", 
choices=["png", "jpg", "dds", "hdr", "bmp", "tga"]) + p_tex.add_argument("--json", action="store_true", help="Output raw JSON") + p_tex.add_argument("--output", type=str, help="Write output to file") + p_tex.set_defaults(func=cmd_textures) + + # shaders + p_shd = sub.add_parser("shaders", help="Disassemble shaders and inspect cbuffers") + p_shd.add_argument("capture", help="Path to .rdc capture file") + p_shd.add_argument("--event", type=int, required=True, help="Event ID") + p_shd.add_argument("--stage", type=str, default="", help="Filter to stage") + p_shd.add_argument("--cbuffers", action="store_true", help="Include constant buffer contents") + p_shd.add_argument("--json", action="store_true", help="Output raw JSON") + p_shd.add_argument("--output", type=str, help="Write output to file") + p_shd.set_defaults(func=cmd_shaders) + + # mesh + p_mesh = sub.add_parser("mesh", help="Decode vertex/mesh data at a draw call") + p_mesh.add_argument("capture", help="Path to .rdc capture file") + p_mesh.add_argument("--event", type=int, required=True, help="Event ID") + p_mesh.add_argument("--post-vs", action="store_true", help="Show post-VS output instead of inputs") + p_mesh.add_argument("--indices", type=str, help="Vertex index range, e.g. 
0-10") + p_mesh.add_argument("--json", action="store_true", help="Output raw JSON") + p_mesh.add_argument("--output", type=str, help="Write output to file") + p_mesh.set_defaults(func=cmd_mesh) + + # counters + p_cnt = sub.add_parser("counters", help="GPU performance counters") + p_cnt.add_argument("capture", help="Path to .rdc capture file") + p_cnt.add_argument("--fetch", type=str, help="Fetch specific counter by name") + p_cnt.add_argument("--zero-samples", action="store_true", help="Find draws with 0 samples passed") + p_cnt.add_argument("--json", action="store_true", help="Output raw JSON") + p_cnt.add_argument("--output", type=str, help="Write output to file") + p_cnt.set_defaults(func=cmd_counters) + + # analyze + p_ana = sub.add_parser("analyze", help="Capture-wide analysis and statistics") + p_ana.add_argument("capture", help="Path to .rdc capture file") + p_ana.add_argument("--summary", action="store_true", help="Overview statistics") + p_ana.add_argument("--biggest-draws", type=int, metavar="N", help="Top N draws by vertex count") + p_ana.add_argument("--render-targets", action="store_true", help="List unique render targets") + p_ana.add_argument("--json", action="store_true", help="Output raw JSON") + p_ana.add_argument("--output", type=str, help="Write output to file") + p_ana.set_defaults(func=cmd_analyze) + + # info + p_info = sub.add_parser("info", help="Show capture metadata") + p_info.add_argument("capture", help="Path to .rdc capture file") + p_info.add_argument("--json", action="store_true", help="Output raw JSON") + p_info.set_defaults(func=cmd_info) + + # capture + p_cap = sub.add_parser("capture", help="Capture a running or launched application") + p_cap.add_argument("exe", help="Executable to capture") + p_cap.add_argument("exe_args", nargs="*", help="Arguments to pass to executable") + p_cap.add_argument("--output", "-o", type=str, dest="output_file", help="Output capture filename template") + p_cap.set_defaults(func=cmd_capture) + + # 
pixel-history + p_ph = sub.add_parser("pixel-history", help="Trace pixel modification history") + p_ph.add_argument("capture", help="Path to .rdc capture file") + p_ph.add_argument("--event", type=int, required=True, help="Event ID") + p_ph.add_argument("--resource", type=str, required=True, help="Resource ID") + p_ph.add_argument("--x", type=int, required=True, help="Pixel X coordinate") + p_ph.add_argument("--y", type=int, required=True, help="Pixel Y coordinate") + p_ph.add_argument("--sub-mip", type=int, default=0, help="Sub-resource mip level") + p_ph.add_argument("--sub-slice", type=int, default=0, help="Sub-resource slice") + p_ph.add_argument("--sub-sample", type=int, default=0, help="Sub-resource sample") + p_ph.add_argument("--json", action="store_true", help="Output raw JSON") + p_ph.add_argument("--output", type=str, help="Write output to file") + p_ph.set_defaults(func=cmd_pixel_history) + + # debug-shader + p_ds = sub.add_parser("debug-shader", help="Debug a shader invocation step by step") + p_ds.add_argument("capture", help="Path to .rdc capture file") + p_ds.add_argument("--event", type=int, required=True, help="Event ID") + p_ds.add_argument("--mode", type=str, required=True, choices=["vertex", "pixel", "compute"], help="Shader stage to debug") + p_ds.add_argument("--vertex-index", type=int, default=0, help="Vertex index for vertex debug") + p_ds.add_argument("--instance", type=int, default=0, help="Instance index") + p_ds.add_argument("--view", type=int, default=0, help="Multiview index") + p_ds.add_argument("--x", type=int, default=0, help="Pixel X for pixel debug") + p_ds.add_argument("--y", type=int, default=0, help="Pixel Y for pixel debug") + p_ds.add_argument("--sample", type=int, default=0, help="MSAA sample index") + p_ds.add_argument("--primitive", type=int, default=0, help="Primitive index") + p_ds.add_argument("--group", type=_parse_xyz, default=None, help="Compute group X,Y,Z") + p_ds.add_argument("--thread", type=_parse_xyz, 
default=None, help="Compute thread X,Y,Z") + p_ds.add_argument("--max-steps", type=int, default=10000, help="Maximum debug steps") + p_ds.add_argument("--json", action="store_true", help="Output raw JSON") + p_ds.add_argument("--output", type=str, help="Write output to file") + p_ds.set_defaults(func=cmd_debug_shader) + + # usage + p_usg = sub.add_parser("usage", help="Show which events use a resource") + p_usg.add_argument("capture", help="Path to .rdc capture file") + p_usg.add_argument("--resource", type=str, required=True, help="Resource ID") + p_usg.add_argument("--filter", type=str, default="all", choices=["read", "write", "all"], help="Filter by usage type") + p_usg.add_argument("--json", action="store_true", help="Output raw JSON") + p_usg.add_argument("--output", type=str, help="Write output to file") + p_usg.set_defaults(func=cmd_usage) + + # pick-pixel + p_pp = sub.add_parser("pick-pixel", help="Read pixel value from a texture") + p_pp.add_argument("capture", help="Path to .rdc capture file") + p_pp.add_argument("--resource", type=str, required=True, help="Resource ID") + p_pp.add_argument("--x", type=int, required=True, help="Pixel X coordinate") + p_pp.add_argument("--y", type=int, required=True, help="Pixel Y coordinate") + p_pp.add_argument("--sub-mip", type=int, default=0, help="Sub-resource mip level") + p_pp.add_argument("--sub-slice", type=int, default=0, help="Sub-resource slice") + p_pp.add_argument("--sub-sample", type=int, default=0, help="Sub-resource sample") + p_pp.add_argument("--comp-type", type=str, default="", help="Component type override") + p_pp.add_argument("--json", action="store_true", help="Output raw JSON") + p_pp.add_argument("--output", type=str, help="Write output to file") + p_pp.set_defaults(func=cmd_pick_pixel) + + # tex-stats + p_ts = sub.add_parser("tex-stats", help="Get texture min/max stats and histogram") + p_ts.add_argument("capture", help="Path to .rdc capture file") + p_ts.add_argument("--resource", type=str, 
required=True, help="Resource ID") + p_ts.add_argument("--mip", type=int, default=0, help="Mip level") + p_ts.add_argument("--slice", type=int, default=0, help="Array slice") + p_ts.add_argument("--sample", type=int, default=0, help="MSAA sample") + p_ts.add_argument("--histogram", action="store_true", help="Include histogram data") + p_ts.add_argument("--hist-min", type=float, default=None, help="Histogram range minimum") + p_ts.add_argument("--hist-max", type=float, default=None, help="Histogram range maximum") + p_ts.add_argument("--json", action="store_true", help="Output raw JSON") + p_ts.add_argument("--output", type=str, help="Write output to file") + p_ts.set_defaults(func=cmd_tex_stats) + + # messages + p_msg = sub.add_parser("messages", help="List debug messages from the capture") + p_msg.add_argument("capture", help="Path to .rdc capture file") + p_msg.add_argument("--severity", type=str, default="all", choices=["high", "medium", "low", "info", "all"], help="Filter by severity") + p_msg.add_argument("--json", action="store_true", help="Output raw JSON") + p_msg.add_argument("--output", type=str, help="Write output to file") + p_msg.set_defaults(func=cmd_messages) + + # frame-info + p_fi = sub.add_parser("frame-info", help="Show frame statistics") + p_fi.add_argument("capture", help="Path to .rdc capture file") + p_fi.add_argument("--json", action="store_true", help="Output raw JSON") + p_fi.add_argument("--output", type=str, help="Write output to file") + p_fi.set_defaults(func=cmd_frame_info) + + # descriptors + p_desc = sub.add_parser("descriptors", help="List descriptors accessed at an event") + p_desc.add_argument("capture", help="Path to .rdc capture file") + p_desc.add_argument("--event", type=int, required=True, help="Event ID") + p_desc.add_argument("--type", type=str, default="all", choices=["sampler", "cbuffer", "srv", "uav", "all"], help="Filter by descriptor type") + p_desc.add_argument("--json", action="store_true", help="Output raw JSON") + 
p_desc.add_argument("--output", type=str, help="Write output to file") + p_desc.set_defaults(func=cmd_descriptors) + + # custom-shader + p_cs = sub.add_parser("custom-shader", help="Apply a custom shader to a texture output") + p_cs.add_argument("capture", help="Path to .rdc capture file") + p_cs.add_argument("--event", type=int, required=True, help="Event ID") + p_cs.add_argument("--source", type=str, required=True, help="Path to shader source file") + p_cs.add_argument("--output", type=str, dest="output_path", required=True, help="Output file path") + p_cs.add_argument("--encoding", type=str, default="", help="Texture encoding") + p_cs.add_argument("--entry-point", type=str, default="main", help="Shader entry point name") + p_cs.add_argument("--json", action="store_true", help="Output raw JSON") + p_cs.set_defaults(func=cmd_custom_shader) + + # tex-data + p_td = sub.add_parser("tex-data", help="Export raw texture data") + p_td.add_argument("capture", help="Path to .rdc capture file") + p_td.add_argument("--resource", type=str, required=True, help="Resource ID") + p_td.add_argument("--sub-mip", type=int, default=0, help="Sub-resource mip level") + p_td.add_argument("--sub-slice", type=int, default=0, help="Sub-resource slice") + p_td.add_argument("--sub-sample", type=int, default=0, help="Sub-resource sample") + p_td.add_argument("--output-file", type=str, default="", help="Save raw data to file") + p_td.add_argument("--preview-bytes", type=int, default=256, help="Number of bytes for hex preview") + p_td.add_argument("--json", action="store_true", help="Output raw JSON") + p_td.add_argument("--output", type=str, help="Write output to file") + p_td.set_defaults(func=cmd_tex_data) + + # api-calls + p_api = sub.add_parser("api-calls", help="List API calls in the capture") + p_api.add_argument("capture", help="Path to .rdc capture file") + p_api.add_argument("--event", type=int, default=0, help="Show detail for specific event ID") + p_api.add_argument("--filter", 
type=str, default="", help="Filter calls by name") + p_api.add_argument("--range", type=int, nargs=2, metavar=("START", "END"), help="Event ID range") + p_api.add_argument("--json", action="store_true", help="Output raw JSON") + p_api.add_argument("--output", type=str, help="Write output to file") + p_api.set_defaults(func=cmd_api_calls) + + args = parser.parse_args() + + # Handle --output redirect; restore stdout even if the command exits early + if hasattr(args, "output") and args.output: + with open(args.output, "w") as f: + old_stdout = sys.stdout + sys.stdout = f + try: + args.func(args) + finally: + sys.stdout = old_stdout + else: + args.func(args) + + +if __name__ == "__main__": + main() diff --git a/renderdoctools/core.py b/renderdoctools/core.py new file mode 100644 index 00000000..c58ae69a --- /dev/null +++ b/renderdoctools/core.py @@ -0,0 +1,132 @@ +# renderdoctools/core.py +"""RenderDoc script execution engine. + +Locates the bundled RenderDoc, generates temp analysis scripts, +executes them via qrenderdoc --python, and parses JSON output. +""" +from __future__ import annotations + +import json +import os +import subprocess +import sys +import tempfile +from pathlib import Path + +# Workspace root: parent of the renderdoctools/ package +WORKSPACE_ROOT = Path(__file__).resolve().parent.parent + +# Directory containing template scripts +SCRIPTS_DIR = Path(__file__).resolve().parent / "scripts" + + +def find_renderdoc() -> Path: + """Locate bundled qrenderdoc.exe.
Raises FileNotFoundError if missing.""" + # Search for RenderDoc in tools/ — supports both "renderdoc" and versioned names + tools_dir = WORKSPACE_ROOT / "tools" + for name in ["renderdoc", "RenderDoc_DX9", "RenderDoc_1.43_64"]: + qrd = tools_dir / name / "qrenderdoc.exe" + if qrd.is_file(): + return qrd + # Also try any RenderDoc_* directory + if tools_dir.is_dir(): + for d in sorted(tools_dir.iterdir(), reverse=True): + if d.name.lower().startswith("renderdoc") and d.is_dir(): + qrd = d / "qrenderdoc.exe" + if qrd.is_file(): + return qrd + raise FileNotFoundError( + "RenderDoc not found in %s\n" + "Run 'python verify_install.py' to auto-install, or download from " + "https://github.com/Kim2091/renderdoc/releases and extract to tools/" % tools_dir + ) + + +def run_script( + script_name: str, + capture_path: str, + config: dict | None = None, + timeout: int = 120, +) -> dict: + """Execute a RenderDoc analysis script and return parsed JSON output.""" + qrd = find_renderdoc() + config = config or {} + + local_tmp = WORKSPACE_ROOT / "tmp" + local_tmp.mkdir(exist_ok=True) + with tempfile.TemporaryDirectory(prefix="rdtools_", dir=str(local_tmp)) as tmpdir: + tmpdir = Path(tmpdir) + output_path = tmpdir / "output.json" + config_path = tmpdir / "config.json" + script_path = tmpdir / "script.py" + + full_config = { + "capture": str(Path(capture_path).resolve()), + "output": str(output_path), + **config, + } + config_path.write_text(json.dumps(full_config)) + + template_path = SCRIPTS_DIR / (script_name + ".py") + if not template_path.is_file(): + raise FileNotFoundError("Script template not found: %s" % template_path) + + base_header = (SCRIPTS_DIR / "_base_header.py").read_text() + template_body = template_path.read_text() + + # Indent template body for try/except wrapper + indented_body = "\n".join( + (" " + line if line.strip() else line) + for line in template_body.splitlines() + ) + + full_script = ( + "# Generated by renderdoctools -- do not edit\n" + "_CONFIG_PATH = 
%r\n\n" % str(config_path.as_posix()) + + base_header + + "\ntry:\n" + + indented_body + + "\nexcept Exception as _e:\n" + + " _shutdown()\n" + + " _write_error('Script error: ' + str(_e))\n" + + "\nimport os; os._exit(0)\n" + ) + script_path.write_text(full_script) + + result = subprocess.run( + [str(qrd), "--python", str(script_path)], + capture_output=True, + text=True, + timeout=timeout, + ) + + if not output_path.is_file(): + err = result.stderr.strip() if result.stderr else "No output produced" + raise RuntimeError( + "Script '%s' failed (exit %d): %s" + % (script_name, result.returncode, err) + ) + + return json.loads(output_path.read_text()) + + +def format_table(rows: list[dict], columns: list[str]) -> str: + """Format a list of dicts as an aligned text table.""" + if not rows: + return "(no results)" + + widths = {c: len(c) for c in columns} + for row in rows: + for c in columns: + val = str(row.get(c, "")) + widths[c] = max(widths[c], len(val)) + + header = " ".join(c.ljust(widths[c]) for c in columns) + sep = " ".join("-" * widths[c] for c in columns) + lines = [header, sep] + + for row in rows: + line = " ".join(str(row.get(c, "")).ljust(widths[c]) for c in columns) + lines.append(line) + + return "\n".join(lines) diff --git a/renderdoctools/scripts/_base_header.py b/renderdoctools/scripts/_base_header.py new file mode 100644 index 00000000..8577768f --- /dev/null +++ b/renderdoctools/scripts/_base_header.py @@ -0,0 +1,50 @@ +# ── Base header for renderdoctools analysis scripts ── +# Runs inside RenderDoc's embedded Python 3.6. +# _CONFIG_PATH is injected above this header by core.py. 
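The script assembly in `run_script` above (inject `_CONFIG_PATH`, prepend the base header, indent the template body into a `try`/`except` that reports failures as JSON) can be exercised in isolation. A minimal sketch, with illustrative names and a four-space indent assumed; the real code also appends the `os._exit(0)` footer:

```python
def wrap_template(config_path: str, header: str, body: str) -> str:
    """Sketch of core.run_script's assembly step: inject the config
    path, prepend the shared base header, then indent the template
    body into a try/except so any error is reported as JSON."""
    # Blank lines are left unindented so the wrapped body stays
    # valid under Python's indentation rules.
    indented = "\n".join(
        ("    " + line if line.strip() else line)
        for line in body.splitlines()
    )
    return (
        "# Generated by renderdoctools -- do not edit\n"
        "_CONFIG_PATH = %r\n\n" % config_path
        + header
        + "\ntry:\n"
        + indented
        + "\nexcept Exception as _e:\n"
        + "    _shutdown()\n"
        + "    _write_error('Script error: ' + str(_e))\n"
    )


script = wrap_template("/tmp/config.json", "# base header\n", "x = 1\n\nprint(x)\n")
```

Because the wrapper owns the outermost `try`, a template only needs straight-line code; it never has to handle its own cleanup on error.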
+ +import json +import sys +import os + +# Load config +with open(_CONFIG_PATH, "r") as _f: + _cfg = json.load(_f) + +_CAPTURE = _cfg["capture"] +_OUTPUT = _cfg["output"] + + +def _write_output(data): + """Write result JSON and exit cleanly.""" + with open(_OUTPUT, "w") as f: + json.dump(data, f) + + +def _write_error(msg): + """Write error JSON and exit.""" + _write_output({"error": str(msg)}) + sys.exit(1) + + +# ── Load capture ── +import renderdoc as rd + +rd.InitialiseReplay(rd.GlobalEnvironment(), []) + +_cap = rd.OpenCaptureFile() +_result = _cap.OpenFile(_CAPTURE, "", None) +if _result != rd.ResultCode.Succeeded: + _write_error("Failed to open capture: " + str(_result)) + +_result, _controller = _cap.OpenCapture(rd.ReplayOptions(), None) +if _result != rd.ResultCode.Succeeded: + _cap.Shutdown() + rd.ShutdownReplay() + _write_error("Failed to replay capture: " + str(_result)) + + +def _shutdown(): + """Clean shutdown of replay.""" + _controller.Shutdown() + _cap.Shutdown() + rd.ShutdownReplay() diff --git a/renderdoctools/scripts/analyze.py b/renderdoctools/scripts/analyze.py new file mode 100644 index 00000000..8c907d98 --- /dev/null +++ b/renderdoctools/scripts/analyze.py @@ -0,0 +1,63 @@ +# renderdoctools/scripts/analyze.py +# Capture-wide analysis and statistics. +# Runs inside RenderDoc Python 3.6. 
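The analyze script flattens RenderDoc's nested action tree (`GetRootActions()` plus each action's `children`) into a depth-annotated list before filtering for draws and clears. The traversal pattern, sketched on plain dicts (the dict shape is illustrative, not the real `ActionDescription`):

```python
def flatten_actions(roots):
    """Depth-first walk mirroring analyze.py's collect(): emit every
    node as a (node, depth) pair, parents before their children."""
    out = []

    def walk(node, depth=0):
        out.append((node, depth))
        for child in node.get("children", []):
            walk(child, depth + 1)

    for root in roots:
        walk(root)
    return out


tree = [
    {"eid": 1, "children": [{"eid": 2, "children": []}]},
    {"eid": 3, "children": []},
]
flat = flatten_actions(tree)
# → eids in order 1, 2, 3 with depths 0, 1, 0
```

The depth value is what lets callers reconstruct marker-region nesting later without re-walking the tree.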
+ +show_summary = _cfg.get("summary", False) +biggest_n = _cfg.get("biggest_draws", 0) +show_render_targets = _cfg.get("render_targets", False) + +sf = _controller.GetStructuredFile() + +all_actions = [] +def collect(d, depth=0): + all_actions.append((d, depth)) + for c in d.children: + collect(c, depth + 1) +for a in _controller.GetRootActions(): + collect(a) + +draws = [(a, dep) for a, dep in all_actions if a.flags & rd.ActionFlags.Drawcall] +clears = [(a, dep) for a, dep in all_actions if a.flags & rd.ActionFlags.Clear] + +result = {} + +if show_summary or (not biggest_n and not show_render_targets): + total_indices = sum(a.numIndices for a, _ in draws) + total_instances = sum(a.numInstances for a, _ in draws) + result["summary"] = { + "totalEvents": len(all_actions), + "totalDraws": len(draws), + "totalClears": len(clears), + "totalIndices": total_indices, + "totalInstances": total_instances, + } + +if biggest_n: + sorted_draws = sorted(draws, key=lambda x: x[0].numIndices, reverse=True) + top = sorted_draws[:biggest_n] + result["biggestDraws"] = [{ + "eid": a.eventId, + "name": a.GetName(sf), + "numIndices": a.numIndices, + "numInstances": a.numInstances, + } for a, _ in top] + +if show_render_targets: + rt_map = {} + for a, _ in draws: + for o in a.outputs: + if o != rd.ResourceId.Null(): + key = str(int(o)) + if key not in rt_map: + rt_map[key] = [] + rt_map[key].append(a.eventId) + result["renderTargets"] = [{ + "resourceId": k, + "drawCount": len(v), + "firstEid": v[0], + "lastEid": v[-1], + } for k, v in sorted(rt_map.items(), key=lambda x: -len(x[1]))] + +_write_output(result) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/api_calls.py b/renderdoctools/scripts/api_calls.py new file mode 100644 index 00000000..9afb15d5 --- /dev/null +++ b/renderdoctools/scripts/api_calls.py @@ -0,0 +1,236 @@ +# renderdoctools/scripts/api_calls.py +# Structured API call log -- enumerate every API call with parameters. +# Runs inside RenderDoc Python 3.6. 
_cfg, _controller, _cap, rd available from base header. + +event_id = _cfg.get("event_id", 0) +name_filter = _cfg.get("filter", "") +range_start = _cfg.get("range_start", 0) +range_end = _cfg.get("range_end", 0) + +sf = _controller.GetStructuredFile() + + +def format_value(obj): + """Extract a human-readable value from an SDObject.""" + bt = obj.type.basetype + + if bt == rd.SDBasic.Null: + return None + + if bt == rd.SDBasic.String: + return obj.data.string + + if bt == rd.SDBasic.Boolean: + return obj.data.basic.b + + if bt == rd.SDBasic.UnsignedInteger: + return obj.data.basic.u + + if bt == rd.SDBasic.SignedInteger: + return obj.data.basic.i + + if bt == rd.SDBasic.Float: + return obj.data.basic.d + + if bt == rd.SDBasic.Enum: + # Enums have a custom string and integer storage + s = obj.data.string + if s: + return s + return obj.data.basic.u + + if bt == rd.SDBasic.Resource: + return "RID:%d" % obj.data.basic.id + + if bt == rd.SDBasic.Character: + return obj.data.basic.c + + if bt == rd.SDBasic.Buffer: + return "<%d bytes>" % obj.type.byteSize + + if bt == rd.SDBasic.GPUAddress: + return "0x%x" % obj.data.basic.u + + if bt == rd.SDBasic.Array: + items = [] + for i in range(obj.NumChildren()): + child = obj.GetChild(i) + items.append(format_value(child)) + return items + + if bt == rd.SDBasic.Struct or bt == rd.SDBasic.Chunk: + members = {} + for i in range(obj.NumChildren()): + child = obj.GetChild(i) + members[child.name] = format_value(child) + return members + + return "<%s>" % str(bt) + + +def format_value_inline(obj): + """Format a value as a compact inline string for one-line display.""" + bt = obj.type.basetype + + if bt == rd.SDBasic.Null: + return "NULL" + + if bt == rd.SDBasic.String: + s = obj.data.string + if len(s) > 40: + return '"%s..."' % s[:37] + return '"%s"' % s + + if bt == rd.SDBasic.Boolean: + return "true" if obj.data.basic.b else "false" + + if bt == rd.SDBasic.UnsignedInteger: + v = obj.data.basic.u + if v > 0xFFFF: + return "0x%x" % v + return
str(v) + + if bt == rd.SDBasic.SignedInteger: + return str(obj.data.basic.i) + + if bt == rd.SDBasic.Float: + return "%.4g" % obj.data.basic.d + + if bt == rd.SDBasic.Enum: + s = obj.data.string + if s: + return s + return str(obj.data.basic.u) + + if bt == rd.SDBasic.Resource: + return "RID:%d" % obj.data.basic.id + + if bt == rd.SDBasic.GPUAddress: + return "0x%x" % obj.data.basic.u + + if bt == rd.SDBasic.Buffer: + return "<%d bytes>" % obj.type.byteSize + + if bt == rd.SDBasic.Array: + n = obj.NumChildren() + if n == 0: + return "[]" + if n <= 4: + parts = [] + for i in range(n): + parts.append(format_value_inline(obj.GetChild(i))) + return "[%s]" % ", ".join(parts) + return "[%d items]" % n + + if bt == rd.SDBasic.Struct or bt == rd.SDBasic.Chunk: + n = obj.NumChildren() + if n == 0: + return "{}" + # Show up to 3 members inline + parts = [] + for i in range(min(n, 3)): + child = obj.GetChild(i) + parts.append("%s=%s" % (child.name, format_value_inline(child))) + s = "{%s}" % ", ".join(parts) + if n > 3: + s = s[:-1] + ", ...}" + return s + + return "?"
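The inline formatter above compacts parameter values for one-line display, clipping long strings at 40 visible characters. That clipping rule in isolation (helper name is illustrative):

```python
def inline_str(s: str, limit: int = 40) -> str:
    """Mirror format_value_inline's string case: quote the value,
    truncating to `limit` visible characters with a '...' marker."""
    if len(s) > limit:
        # Reserve three characters for the ellipsis marker.
        return '"%s..."' % s[:limit - 3]
    return '"%s"' % s


print(inline_str("hi"))        # → "hi"
print(inline_str("x" * 50))    # clipped to 37 chars plus "..."
```

Keeping every value on one line is what makes the `params_inline` column in the listing mode greppable.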
+ + +def param_detail(obj): + """Build a detailed parameter dict for JSON output.""" + return { + "name": obj.name, + "type": obj.type.name, + "basetype": str(obj.type.basetype), + "value": format_value(obj), + } + + +# Build event_id -> chunk_index mapping by walking actions +eid_to_chunk = {} + + +def map_events(action): + for ev in action.events: + if ev.chunkIndex != 0xFFFFFFFF: # APIEvent.NoChunk + eid_to_chunk[ev.eventId] = ev.chunkIndex + for child in action.children: + map_events(child) + + +for root_action in _controller.GetRootActions(): + map_events(root_action) + + +# --- Single event detail mode --- +if event_id: + chunk_idx = eid_to_chunk.get(event_id, -1) + if chunk_idx < 0 or chunk_idx >= len(sf.chunks): + _write_error("Event ID %d not found or has no chunk" % event_id) + + chunk = sf.chunks[chunk_idx] + params = [] + for i in range(chunk.NumChildren()): + child = chunk.GetChild(i) + params.append(param_detail(child)) + + meta = chunk.metadata + result = { + "event_id": event_id, + "chunk_index": chunk_idx, + "function": chunk.name, + "parameters": params, + "metadata": { + "chunkID": meta.chunkID, + "length": meta.length, + "threadID": meta.threadID, + "durationMicro": meta.durationMicro, + "timestampMicro": meta.timestampMicro, + }, + } + _write_output(result) + _shutdown() + sys.exit(0) + + +# --- Listing mode --- +# If range is specified, filter to that range +if range_start or range_end: + eid_list = sorted(eid_to_chunk.keys()) + if range_start: + eid_list = [e for e in eid_list if e >= range_start] + if range_end: + eid_list = [e for e in eid_list if e <= range_end] +else: + eid_list = sorted(eid_to_chunk.keys()) + +calls = [] +for eid in eid_list: + chunk_idx = eid_to_chunk[eid] + if chunk_idx >= len(sf.chunks): + continue + chunk = sf.chunks[chunk_idx] + fn_name = chunk.name + + if name_filter and name_filter.lower() not in fn_name.lower(): + continue + + # Build compact parameter summary + param_parts = [] + for i in 
range(chunk.NumChildren()): + child = chunk.GetChild(i) + param_parts.append("%s=%s" % (child.name, format_value_inline(child))) + + calls.append({ + "eid": eid, + "function": fn_name, + "params_inline": ", ".join(param_parts), + "num_params": chunk.NumChildren(), + }) + +_write_output({"calls": calls, "total": len(calls)}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/counters.py b/renderdoctools/scripts/counters.py new file mode 100644 index 00000000..1c468cc1 --- /dev/null +++ b/renderdoctools/scripts/counters.py @@ -0,0 +1,98 @@ +# renderdoctools/scripts/counters.py +# GPU performance counter queries. +# Runs inside RenderDoc Python 3.6. + +fetch_name = _cfg.get("fetch", "") +zero_samples = _cfg.get("zero_samples", False) + +counters = _controller.EnumerateCounters() +sf = _controller.GetStructuredFile() + +def _collect_all_actions(): + result = {} + def _walk(d): + result[d.eventId] = d + for c in d.children: + _walk(c) + for a in _controller.GetRootActions(): + _walk(a) + return result + +if not fetch_name and not zero_samples: + counter_list = [] + for c in counters: + desc = _controller.DescribeCounter(c) + counter_list.append({ + "id": int(c), + "name": desc.name, + "description": desc.description, + "unit": str(desc.unit), + "resultType": str(desc.resultType), + "resultByteWidth": desc.resultByteWidth, + }) + _write_output({"mode": "list", "counters": counter_list}) + +elif zero_samples: + if not (rd.GPUCounter.SamplesPassed in counters): + _write_error("SamplesPassed counter not supported") + + results = _controller.FetchCounters([rd.GPUCounter.SamplesPassed]) + desc = _controller.DescribeCounter(rd.GPUCounter.SamplesPassed) + + actions = _collect_all_actions() + + zero_draws = [] + for r in results: + if r.eventId not in actions: + continue + draw = actions[r.eventId] + if not (draw.flags & rd.ActionFlags.Drawcall): + continue + if desc.resultType == rd.CompType.Float: + val = r.value.f if desc.resultByteWidth == 4 else r.value.d + else: 
+ val = r.value.u32 if desc.resultByteWidth == 4 else r.value.u64 + if val == 0: + zero_draws.append({ + "eid": r.eventId, + "name": draw.GetName(sf), + "numIndices": draw.numIndices, + }) + + _write_output({"mode": "zero_samples", "draws": zero_draws, "total": len(zero_draws)}) + +else: + target_counter = None + for c in counters: + desc = _controller.DescribeCounter(c) + if desc.name.lower() == fetch_name.lower(): + target_counter = c + break + + if target_counter is None: + _write_error("Counter '%s' not found" % fetch_name) + + results = _controller.FetchCounters([target_counter]) + desc = _controller.DescribeCounter(target_counter) + + actions = _collect_all_actions() + + entries = [] + for r in results: + if r.eventId not in actions: + continue + draw = actions[r.eventId] + if desc.resultType == rd.CompType.Float: + val = r.value.f if desc.resultByteWidth == 4 else r.value.d + else: + val = r.value.u32 if desc.resultByteWidth == 4 else r.value.u64 + entries.append({ + "eid": r.eventId, + "name": draw.GetName(sf), + "value": val, + }) + + _write_output({"mode": "fetch", "counter": desc.name, "unit": str(desc.unit), "results": entries}) + +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/custom_shader.py b/renderdoctools/scripts/custom_shader.py new file mode 100644 index 00000000..a5d22af6 --- /dev/null +++ b/renderdoctools/scripts/custom_shader.py @@ -0,0 +1,182 @@ +# renderdoctools/scripts/custom_shader.py +# Build and apply a custom visualization shader, then save the result. +# Runs inside RenderDoc Python 3.6. 
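counters.py decodes each counter result by consulting the counter description: the result type selects which union field to read, and the byte width selects its size. The same branch in isolation, with a plain dict standing in for the `rd.CounterValue` union (an assumption of this sketch, not the real type):

```python
def decode_counter_value(value, result_type, byte_width):
    # value: dict stand-in for the CounterValue union (f/d/u32/u64).
    # Float counters read f or d; everything else reads u32 or u64.
    if result_type == "Float":
        return value["f"] if byte_width == 4 else value["d"]
    return value["u32"] if byte_width == 4 else value["u64"]
```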
+ +event_id = _cfg.get("event_id") +shader_source = _cfg.get("shader_source", "") +output_path = _cfg.get("output_path", "") +encoding_name = _cfg.get("encoding", "") +entry_point = _cfg.get("entry_point", "main") + +if event_id is None: + _write_error("--event is required") +if not shader_source: + _write_error("--source is required (shader source code)") +if not output_path: + _write_error("--output is required (path to save result)") + +_controller.SetFrameEvent(event_id, True) + +# --- Determine shader encoding --- +supported = _controller.GetCustomShaderEncodings() +if not supported: + _write_error("No custom shader encodings supported by this capture's API") + +ENCODING_NAMES = { + "hlsl": rd.ShaderEncoding.HLSL, + "glsl": rd.ShaderEncoding.GLSL, + "spirv": rd.ShaderEncoding.SPIRV, + "dxbc": rd.ShaderEncoding.DXBC, + "dxil": rd.ShaderEncoding.DXIL, + "spirvasm": rd.ShaderEncoding.SPIRVAsm, +} + +if encoding_name: + key = encoding_name.lower().strip() + if key not in ENCODING_NAMES: + _write_error("Unknown encoding '%s'. Supported: %s" % ( + encoding_name, ", ".join(ENCODING_NAMES.keys()))) + source_encoding = ENCODING_NAMES[key] + if source_encoding not in supported: + _write_error("Encoding '%s' not supported for this capture. 
Supported: %s" % ( + encoding_name, ", ".join(str(e) for e in supported))) +else: + # Auto-detect: prefer HLSL, then GLSL, then first available + source_encoding = supported[0] + for preferred in [rd.ShaderEncoding.HLSL, rd.ShaderEncoding.GLSL]: + if preferred in supported: + source_encoding = preferred + break + +# --- Build the custom shader --- +compile_flags = rd.ShaderCompileFlags() +source_bytes = shader_source.encode("utf-8") if isinstance(shader_source, str) else shader_source + +shader_id, errors = _controller.BuildCustomShader( + entry_point, + source_encoding, + source_bytes, + compile_flags, + rd.ShaderStage.Pixel, +) + +if shader_id == rd.ResourceId.Null(): + _write_error("Shader compilation failed: %s" % errors) + +# --- Find the target texture (first color output of the draw) --- +def _find_action(eid): + """Find action by event ID, searching children recursively.""" + def _search(action): + cur = action + while cur is not None: + if cur.eventId == eid: + return cur + for child in cur.children: + found = _search(child) + if found is not None: + return found + cur = cur.next + return None + for root in _controller.GetRootActions(): + found = _search(root) + if found is not None: + return found + return None + +action = _find_action(event_id) +if action is None: + _controller.FreeCustomShader(shader_id) + _write_error("Event %d not found" % event_id) + +# Pick the first valid color output as the texture to apply the custom shader to +target_tex = rd.ResourceId.Null() +if action.outputs: + for o in action.outputs: + if o != rd.ResourceId.Null(): + target_tex = o + break + +if target_tex == rd.ResourceId.Null(): + _controller.FreeCustomShader(shader_id) + _write_error("No render target found at event %d to apply custom shader to" % event_id) + +# --- Create a headless texture output and render with the custom shader --- +# Look up target texture dimensions +_all_textures = {} +for t in _controller.GetTextures(): + _all_textures[int(t.resourceId)] = t + 
+tex_desc = _all_textures.get(int(target_tex)) +tex_w = tex_desc.width if tex_desc else 1920 +tex_h = tex_desc.height if tex_desc else 1080 + +wdata = rd.CreateHeadlessWindowingData(tex_w, tex_h) +output = _controller.CreateOutput(wdata, rd.ReplayOutputType.Texture) + +tex_display = rd.TextureDisplay() +tex_display.resourceId = target_tex +tex_display.customShaderId = shader_id +tex_display.rangeMin = 0.0 +tex_display.rangeMax = 1.0 +tex_display.scale = 1.0 +tex_display.red = True +tex_display.green = True +tex_display.blue = True +tex_display.alpha = True +tex_display.linearDisplayAsGamma = True +tex_display.rawOutput = False + +output.SetTextureDisplay(tex_display) +output.Display() + +# --- Save the custom shader result texture --- +custom_tex_id = output.GetCustomShaderTexID() + +if custom_tex_id == rd.ResourceId.Null(): + _controller.FreeCustomShader(shader_id) + _write_error("Custom shader produced no output texture") + +# Determine file format from extension +ext = output_path.rsplit(".", 1)[-1].lower() if "." 
in output_path else "png" +FORMAT_MAP = { + "png": rd.FileType.PNG, + "jpg": rd.FileType.JPG, + "dds": rd.FileType.DDS, + "hdr": rd.FileType.HDR, + "bmp": rd.FileType.BMP, + "tga": rd.FileType.TGA, +} +file_type = FORMAT_MAP.get(ext, rd.FileType.PNG) + +os.makedirs(os.path.dirname(os.path.abspath(output_path)) or ".", exist_ok=True) + +texsave = rd.TextureSave() +texsave.resourceId = custom_tex_id +texsave.alpha = rd.AlphaMapping.Preserve +texsave.mip = 0 +texsave.slice.sliceIndex = 0 +texsave.destType = file_type +_controller.SaveTexture(texsave, output_path) + +# --- Build encoding name for output --- +ENC_TO_NAME = {} +for name, enc in ENCODING_NAMES.items(): + ENC_TO_NAME[enc] = name +encoding_used = ENC_TO_NAME.get(source_encoding, str(source_encoding)) + +# --- Clean up --- +_controller.FreeCustomShader(shader_id) + +_write_output({ + "success": True, + "event_id": event_id, + "shader_id": str(int(shader_id)), + "encoding": encoding_used, + "entry_point": entry_point, + "target_texture": str(int(target_tex)), + "custom_texture": str(int(custom_tex_id)), + "saved": output_path, + "compile_warnings": errors if errors else "", +}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/debug_shader.py b/renderdoctools/scripts/debug_shader.py new file mode 100644 index 00000000..3ca18f42 --- /dev/null +++ b/renderdoctools/scripts/debug_shader.py @@ -0,0 +1,226 @@ +# renderdoctools/scripts/debug_shader.py +# Shader debugging via DebugVertex / DebugPixel / DebugThread. +# Runs inside RenderDoc Python 3.6. 
+ +event_id = _cfg.get("event_id") +mode = _cfg.get("mode", "") + +if event_id is None: + _write_error("--event is required") +if mode not in ("vertex", "pixel", "compute"): + _write_error("--mode must be vertex, pixel, or compute") + +_controller.SetFrameEvent(event_id, True) +state = _controller.GetPipelineState() + +# ── Determine which stage we are debugging ── +if mode == "vertex": + stage_enum = rd.ShaderStage.Vertex +elif mode == "pixel": + stage_enum = rd.ShaderStage.Pixel +elif mode == "compute": + stage_enum = rd.ShaderStage.Compute + +refl = state.GetShaderReflection(stage_enum) +if refl is None: + _write_error("No %s shader bound at EID %d" % (mode, event_id)) + +if not refl.debugInfo.debuggable: + _write_error("Shader at EID %d (%s stage) is not debuggable" % (event_id, mode)) + + +# ── Helper: extract values from a ShaderVariable ── +def _extract_var(var): + """Serialise a ShaderVariable to a JSON-friendly dict.""" + info = { + "name": var.name, + "type": str(var.type), + "rows": var.rows, + "columns": var.columns, + } + if len(var.members) > 0: + info["members"] = [_extract_var(m) for m in var.members] + else: + count = max(var.rows, 1) * max(var.columns, 1) + t = var.type + if t == rd.VarType.Float: + info["float"] = [var.value.f32v[i] for i in range(count)] + elif t == rd.VarType.Double: + info["float"] = [var.value.f64v[i] for i in range(count)] + elif t == rd.VarType.Half: + # f16v stores raw halfs; expose as float via f32v reinterpret is + # unreliable, so just pass the raw u16 bits and the float array. 
+ info["float"] = [float(var.value.f16v[i]) for i in range(count)] + elif t in (rd.VarType.SInt, rd.VarType.SShort, rd.VarType.SLong, rd.VarType.SByte): + info["int"] = [var.value.s32v[i] for i in range(count)] + elif t in (rd.VarType.UInt, rd.VarType.UShort, rd.VarType.ULong, rd.VarType.UByte, rd.VarType.Bool): + info["uint"] = [var.value.u32v[i] for i in range(count)] + else: + # Fallback: expose both float and uint interpretations + info["float"] = [var.value.f32v[i] for i in range(count)] + info["uint"] = [var.value.u32v[i] for i in range(count)] + return info + + +# ── Helper: extract source variable mapping at an instruction ── +def _extract_source_var(svm): + return { + "name": svm.name, + "type": str(svm.type), + "rows": svm.rows, + "columns": svm.columns, + "signatureIndex": svm.signatureIndex, + } + + +# ── Start the debug trace ── +trace = None + +if mode == "vertex": + vertex_index = _cfg.get("vertex_index") + if vertex_index is None: + _write_error("--vertex-index is required for vertex mode") + instance = _cfg.get("instance", 0) + view = _cfg.get("view", 0) + raw_index = _cfg.get("raw_index", vertex_index) + trace = _controller.DebugVertex(vertex_index, instance, raw_index, view) + +elif mode == "pixel": + x = _cfg.get("x") + y = _cfg.get("y") + if x is None or y is None: + _write_error("--x and --y are required for pixel mode") + inputs = rd.DebugPixelInputs() + sample = _cfg.get("sample", None) + primitive = _cfg.get("primitive", None) + if sample is not None: + inputs.sample = sample + if primitive is not None: + inputs.primitive = primitive + trace = _controller.DebugPixel(x, y, inputs) + +elif mode == "compute": + group = _cfg.get("group") + thread = _cfg.get("thread") + if group is None or thread is None: + _write_error("--group and --thread are required for compute mode") + trace = _controller.DebugThread(tuple(group), tuple(thread)) + +if trace is None or trace.debugger is None: + if trace is not None: + _controller.FreeTrace(trace) + 
_write_error("Failed to start shader debug at EID %d (mode=%s). " + "The shader may not be debuggable or the invocation is invalid." % (event_id, mode)) + + +# ── Collect trace inputs ── +trace_inputs = [_extract_var(v) for v in trace.inputs] + +# ── Collect constant blocks snapshot ── +trace_cbuffers = [_extract_var(v) for v in trace.constantBlocks] + +# ── Collect source variable mappings from trace level ── +trace_source_vars = [_extract_source_var(sv) for sv in trace.sourceVars] + +# ── Step through the shader ── +max_steps = _cfg.get("max_steps", 10000) +steps = [] +variables = {} # accumulated variable state by name + +step_count = 0 +while True: + states = _controller.ContinueDebug(trace.debugger) + if len(states) == 0: + break + + for s in states: + step_info = { + "stepIndex": s.stepIndex, + "nextInstruction": s.nextInstruction, + "flags": int(s.flags), + } + + # Record variable changes + changes = [] + for change in s.changes: + ch = {} + if change.before.name: + ch["before"] = _extract_var(change.before) + if change.after.name: + ch["after"] = _extract_var(change.after) + variables[change.after.name] = _extract_var(change.after) + changes.append(ch) + step_info["changes"] = changes + + # Source location from instInfo (binary search the sparse array) + inst = s.nextInstruction + src_info = None + lo, hi = 0, len(trace.instInfo) - 1 + while lo <= hi: + mid = (lo + hi) // 2 + if trace.instInfo[mid].instruction == inst: + src_info = trace.instInfo[mid] + break + elif trace.instInfo[mid].instruction < inst: + lo = mid + 1 + else: + hi = mid - 1 + # Lower-bound fallback + if src_info is None and len(trace.instInfo) > 0: + idx = lo - 1 if lo > 0 else 0 + src_info = trace.instInfo[idx] + + if src_info is not None: + li = src_info.lineInfo + step_info["source"] = { + "fileIndex": li.fileIndex, + "lineStart": li.lineStart, + "lineEnd": li.lineEnd, + "colStart": li.colStart, + "colEnd": li.colEnd, + "disassemblyLine": li.disassemblyLine, + } + # Per-instruction 
source variable mappings + if len(src_info.sourceVars) > 0: + step_info["sourceVars"] = [_extract_source_var(sv) for sv in src_info.sourceVars] + + steps.append(step_info) + step_count += 1 + + if step_count >= max_steps: + break + + +# ── Collect final variable state ── +final_vars = variables + +# ── Source files from debug info ── +source_files = [] +try: + if refl.debugInfo and len(refl.debugInfo.files) > 0: + for f in refl.debugInfo.files: + source_files.append({ + "index": len(source_files), + "filename": f.filename, + }) +except Exception: + pass + +# ── Build output ── +output = { + "event_id": event_id, + "mode": mode, + "stage": str(trace.stage), + "totalSteps": step_count, + "inputs": trace_inputs, + "constantBlocks": trace_cbuffers, + "sourceVars": trace_source_vars, + "sourceFiles": source_files, + "steps": steps, + "finalState": final_vars, +} + +_controller.FreeTrace(trace) +_write_output(output) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/descriptors.py b/renderdoctools/scripts/descriptors.py new file mode 100644 index 00000000..ec62ae1b --- /dev/null +++ b/renderdoctools/scripts/descriptors.py @@ -0,0 +1,170 @@ +# renderdoctools/scripts/descriptors.py +# Low-level descriptor access auditing at a specific event. +# Lists all descriptors accessed via GetDescriptorAccess() and fetches their contents. +# Runs inside RenderDoc Python 3.6. 
+ +event_id = _cfg.get("event_id") +type_filter = _cfg.get("type_filter", "all") + +if event_id is None: + _write_error("--event is required") + +_controller.SetFrameEvent(event_id, True) + +# Build resource name lookup +_resource_names = {} +for r in _controller.GetResources(): + _resource_names[int(r.resourceId)] = r.name + +# Build texture metadata lookup +_all_textures = {} +for t in _controller.GetTextures(): + _all_textures[int(t.resourceId)] = t + +STAGES = [ + ("vertex", rd.ShaderStage.Vertex), + ("hull", rd.ShaderStage.Hull), + ("domain", rd.ShaderStage.Domain), + ("geometry", rd.ShaderStage.Geometry), + ("pixel", rd.ShaderStage.Pixel), + ("compute", rd.ShaderStage.Compute), +] + +STAGE_MAP = {v: k for k, v in STAGES} + +# Map type_filter strings to category check functions +FILTER_CATEGORIES = { + "sampler": lambda t: rd.IsSamplerDescriptor(t), + "cbuffer": lambda t: rd.IsConstantBlockDescriptor(t), + "srv": lambda t: rd.IsReadOnlyDescriptor(t), + "uav": lambda t: rd.IsReadWriteDescriptor(t), + "all": lambda t: True, +} + +filter_fn = FILTER_CATEGORIES.get(type_filter, FILTER_CATEGORIES["all"]) + +# Get all descriptor accesses at this event +accesses = _controller.GetDescriptorAccess() + +# Get shader reflection for binding name lookups +state = _controller.GetPipelineState() + + +def _get_binding_name(access): + """Look up the shader reflection name for a descriptor access.""" + if access.index == 0xFFFF: # NoShaderBinding + return "(direct heap access)" + refl = state.GetShaderReflection(access.stage) + if refl is None: + return "" + cat = rd.CategoryForDescriptorType(access.type) + try: + if cat == rd.DescriptorCategory.ConstantBlock: + if access.index < len(refl.constantBlocks): + return refl.constantBlocks[access.index].name + elif cat == rd.DescriptorCategory.Sampler: + if access.index < len(refl.samplers): + return refl.samplers[access.index].name + elif cat == rd.DescriptorCategory.ReadOnlyResource: + if access.index < len(refl.readOnlyResources): 
+ return refl.readOnlyResources[access.index].name + elif cat == rd.DescriptorCategory.ReadWriteResource: + if access.index < len(refl.readWriteResources): + return refl.readWriteResources[access.index].name + except Exception: + pass + return "" + + +def _format_name(fmt): + """Safely format a ResourceFormat to string.""" + try: + return fmt.Name() + except Exception: + try: + return "%s_%s%d" % (str(fmt.type), str(fmt.compType), fmt.compByteWidth * 8) + except Exception: + return "unknown" + + +descriptors_out = [] + +for access in accesses: + # Apply type filter + if not filter_fn(access.type): + continue + + # Skip statically unused if flagged + stage_name = STAGE_MAP.get(access.stage, str(access.stage)) + desc_type = str(access.type) + binding_name = _get_binding_name(access) + + entry = { + "stage": stage_name, + "descriptorType": desc_type, + "index": access.index, + "arrayElement": access.arrayElement, + "bindingName": binding_name, + "descriptorStore": str(int(access.descriptorStore)), + "byteOffset": access.byteOffset, + "byteSize": access.byteSize, + "staticallyUnused": access.staticallyUnused, + } + + # Fetch descriptor contents + desc_range = rd.DescriptorRange() + desc_range.offset = access.byteOffset + desc_range.descriptorSize = access.byteSize + desc_range.count = 1 + desc_range.type = access.type + + # Fetch normal descriptor contents + try: + descs = _controller.GetDescriptors(access.descriptorStore, [desc_range]) + if descs and len(descs) > 0: + desc = descs[0] + rid = int(desc.resource) + entry["resourceId"] = str(rid) + entry["resourceName"] = _resource_names.get(rid, "") + entry["viewFormat"] = _format_name(desc.format) + entry["descriptorByteOffset"] = desc.byteOffset + entry["descriptorByteSize"] = desc.byteSize + entry["textureType"] = str(desc.textureType) + entry["elementByteSize"] = desc.elementByteSize + + # Add texture-specific info if this is a texture resource + if rid in _all_textures: + tex = _all_textures[rid] + 
entry["textureWidth"] = tex.width + entry["textureHeight"] = tex.height + entry["textureDepth"] = tex.depth + entry["textureMips"] = tex.mips + entry["textureArraySize"] = tex.arraysize + entry["textureFormat"] = _format_name(tex.format) + except Exception: + pass + + # Fetch sampler descriptor contents for sampler types + if rd.IsSamplerDescriptor(access.type) or access.type == rd.DescriptorType.ImageSampler: + try: + samplers = _controller.GetSamplerDescriptors(access.descriptorStore, [desc_range]) + if samplers and len(samplers) > 0: + samp = samplers[0] + entry["samplerObject"] = str(int(samp.object)) + entry["samplerAddressU"] = str(samp.addressU) + entry["samplerAddressV"] = str(samp.addressV) + entry["samplerAddressW"] = str(samp.addressW) + entry["samplerFilter"] = str(samp.filter) + except Exception: + pass + + descriptors_out.append(entry) + +_write_output({ + "event_id": event_id, + "type_filter": type_filter, + "total": len(descriptors_out), + "descriptors": descriptors_out, +}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/events.py b/renderdoctools/scripts/events.py new file mode 100644 index 00000000..94b47902 --- /dev/null +++ b/renderdoctools/scripts/events.py @@ -0,0 +1,50 @@ +# renderdoctools/scripts/events.py +# Event browser -- enumerate draw calls and events from a capture. +# Runs inside RenderDoc Python 3.6. _cfg, _controller, _cap, rd available from base header. 
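descriptors.py dispatches the `--type` filter through a table of predicates, defaulting to "all". The same table-dispatch pattern with string stand-ins for descriptor types (hypothetical names, not the `rd.DescriptorType` enum):

```python
FILTERS = {
    "sampler": lambda t: t in ("Sampler", "ImageSampler"),
    "cbuffer": lambda t: t == "ConstantBuffer",
    "all": lambda t: True,
}


def filter_accesses(types, kind):
    # Unknown filter names fall back to "all", as in the script.
    fn = FILTERS.get(kind, FILTERS["all"])
    return [t for t in types if fn(t)]
```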
+ +draws_only = _cfg.get("draws_only", False) +name_filter = _cfg.get("filter", "") + + +def walk_actions(action, depth=0): + """Recursively walk the action tree, collecting event info.""" + sf = _controller.GetStructuredFile() + name = action.GetName(sf) + + include = True + if draws_only and not (action.flags & rd.ActionFlags.Drawcall): + include = False + if name_filter and name_filter.lower() not in name.lower(): + include = False + + entry = None + if include: + entry = { + "eid": action.eventId, + "name": name, + "depth": depth, + "flags": int(action.flags), + "draw": bool(action.flags & rd.ActionFlags.Drawcall), + "clear": bool(action.flags & rd.ActionFlags.Clear), + "numIndices": action.numIndices, + "numInstances": action.numInstances, + } + + children = [] + for child in action.children: + children.extend(walk_actions(child, depth + 1)) + + results = [] + if entry is not None: + results.append(entry) + results.extend(children) + return results + + +events = [] +for root_action in _controller.GetRootActions(): + events.extend(walk_actions(root_action)) + +_write_output({"events": events, "total": len(events)}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/frame_info.py b/renderdoctools/scripts/frame_info.py new file mode 100644 index 00000000..e114cde2 --- /dev/null +++ b/renderdoctools/scripts/frame_info.py @@ -0,0 +1,142 @@ +# renderdoctools/scripts/frame_info.py +# Detailed frame metadata and statistics via GetFrameInfo(). +# Runs inside RenderDoc Python 3.6. 
+ +frame = _controller.GetFrameInfo() +stats = frame.stats + +# -- Shader stage names for per-stage stats -- +stage_names = ["Vertex", "Hull", "Domain", "Geometry", "Pixel", "Compute"] + +# -- Per-stage shader change stats -- +shader_changes = [] +for i, name in enumerate(stage_names): + if i < len(stats.shaders): + s = stats.shaders[i] + shader_changes.append({ + "stage": name, + "calls": s.calls, + "sets": s.sets, + "nulls": s.nulls, + "redundants": s.redundants, + }) + +# -- Per-stage constant buffer bind stats -- +cbuffer_binds = [] +for i, name in enumerate(stage_names): + if i < len(stats.constants): + c = stats.constants[i] + cbuffer_binds.append({ + "stage": name, + "calls": c.calls, + "sets": c.sets, + "nulls": c.nulls, + }) + +# -- Per-stage sampler bind stats -- +sampler_binds = [] +for i, name in enumerate(stage_names): + if i < len(stats.samplers): + s = stats.samplers[i] + sampler_binds.append({ + "stage": name, + "calls": s.calls, + "sets": s.sets, + "nulls": s.nulls, + }) + +# -- Per-stage resource bind stats -- +resource_binds = [] +for i, name in enumerate(stage_names): + if i < len(stats.resources): + r = stats.resources[i] + resource_binds.append({ + "stage": name, + "calls": r.calls, + "sets": r.sets, + "nulls": r.nulls, + }) + +# -- Debug messages -- +debug_msgs = [] +for msg in frame.debugMessages: + debug_msgs.append({ + "category": str(msg.category), + "severity": str(msg.severity), + "messageID": msg.messageID, + "description": msg.description, + }) + +result = { + "frameNumber": frame.frameNumber, + "captureTime": frame.captureTime, + "fileOffset": frame.fileOffset, + "uncompressedFileSize": frame.uncompressedFileSize, + "compressedFileSize": frame.compressedFileSize, + "persistentSize": frame.persistentSize, + "initDataSize": frame.initDataSize, + "containsAnnotations": frame.containsAnnotations, + "api": str(_cap.DriverName()), + "statsRecorded": stats.recorded, + "draws": { + "calls": stats.draws.calls, + "instanced": 
stats.draws.instanced, + "indirect": stats.draws.indirect, + }, + "dispatches": { + "calls": stats.dispatches.calls, + "indirect": stats.dispatches.indirect, + }, + "indexBinds": { + "calls": stats.indices.calls, + "sets": stats.indices.sets, + "nulls": stats.indices.nulls, + }, + "vertexBinds": { + "calls": stats.vertices.calls, + "sets": stats.vertices.sets, + "nulls": stats.vertices.nulls, + }, + "layoutBinds": { + "calls": stats.layouts.calls, + "sets": stats.layouts.sets, + "nulls": stats.layouts.nulls, + }, + "resourceUpdates": { + "calls": stats.updates.calls, + "clients": stats.updates.clients, + "servers": stats.updates.servers, + }, + "blendState": { + "calls": stats.blends.calls, + "sets": stats.blends.sets, + "nulls": stats.blends.nulls, + "redundants": stats.blends.redundants, + }, + "depthStencilState": { + "calls": stats.depths.calls, + "sets": stats.depths.sets, + "nulls": stats.depths.nulls, + "redundants": stats.depths.redundants, + }, + "rasterizerState": { + "calls": stats.rasters.calls, + "sets": stats.rasters.sets, + "nulls": stats.rasters.nulls, + "redundants": stats.rasters.redundants, + }, + "outputTargets": { + "calls": stats.outputs.calls, + "sets": stats.outputs.sets, + "nulls": stats.outputs.nulls, + }, + "shaderChanges": shader_changes, + "constantBufferBinds": cbuffer_binds, + "samplerBinds": sampler_binds, + "resourceBinds": resource_binds, + "debugMessages": debug_msgs, +} + +_write_output(result) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/info.py b/renderdoctools/scripts/info.py new file mode 100644 index 00000000..dc86939a --- /dev/null +++ b/renderdoctools/scripts/info.py @@ -0,0 +1,11 @@ +# renderdoctools/scripts/info.py +# Capture metadata. +# Runs inside RenderDoc Python 3.6. 
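mesh.py normalizes UNorm/SNorm components after struct-unpacking the raw integers. Per the D3D/Vulkan fixed-point conversion rules, UNorm divides by 2^bits - 1, and SNorm divides by 2^(bits-1) - 1 with the most-negative code clamped to -1.0. Sketched standalone:

```python
def unorm(raw, byte_width):
    # raw / (2^bits - 1), so 0 -> 0.0 and the max code -> 1.0.
    return raw / float((1 << (byte_width * 8)) - 1)


def snorm(raw, byte_width):
    # raw / (2^(bits-1) - 1); the most-negative code clamps to -1.0,
    # so both -128 and -127 map to -1.0 for 8-bit components.
    max_neg = -(1 << (byte_width * 8 - 1))
    if raw == max_neg:
        return -1.0
    return raw / float(-(max_neg + 1))
```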
+ +_write_output({ + "api": str(_cap.DriverName()), + "machineIdent": _cap.RecordedMachineIdent(), + "timestamp": _cap.TimestampBase(), +}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/mesh.py b/renderdoctools/scripts/mesh.py new file mode 100644 index 00000000..6aa84702 --- /dev/null +++ b/renderdoctools/scripts/mesh.py @@ -0,0 +1,152 @@ +# renderdoctools/scripts/mesh.py +# Vertex/mesh data decode at a draw call. +# Runs inside RenderDoc Python 3.6. + +import struct + +event_id = _cfg.get("event_id") +post_vs = _cfg.get("post_vs", False) +index_range = _cfg.get("indices", "") +max_verts = 64 + +if event_id is None: + _write_error("--event is required") + +start_idx, end_idx = 0, max_verts +if index_range: + parts = index_range.split("-") + start_idx = int(parts[0]) + end_idx = int(parts[1]) if len(parts) > 1 else start_idx + 1 + +_controller.SetFrameEvent(event_id, True) +state = _controller.GetPipelineState() + +FORMAT_CHARS = { + int(rd.CompType.UInt): "xBHxIxxxL", + int(rd.CompType.SInt): "xbhxixxxl", + int(rd.CompType.Float): "xxexfxxxd", +} +FORMAT_CHARS[int(rd.CompType.UNorm)] = FORMAT_CHARS[int(rd.CompType.UInt)] +FORMAT_CHARS[int(rd.CompType.SNorm)] = FORMAT_CHARS[int(rd.CompType.SInt)] +FORMAT_CHARS[int(rd.CompType.UScaled)] = FORMAT_CHARS[int(rd.CompType.UInt)] +FORMAT_CHARS[int(rd.CompType.SScaled)] = FORMAT_CHARS[int(rd.CompType.SInt)] + + +def unpack_data(fmt, data): + char = FORMAT_CHARS.get(int(fmt.compType), "") + if not char or fmt.compByteWidth >= len(char): + return None + c = char[fmt.compByteWidth] + if c == "x": + return None + vert_fmt = str(fmt.compCount) + c + try: + value = struct.unpack_from(vert_fmt, data, 0) + except struct.error: + return None + if fmt.compType == rd.CompType.UNorm: + divisor = float((2 ** (fmt.compByteWidth * 8)) - 1) + value = tuple(float(i) / divisor for i in value) + elif fmt.compType == rd.CompType.SNorm: + max_neg = -float(2 ** (fmt.compByteWidth * 8)) / 2 + divisor = float(-(max_neg - 1)) + 
value = tuple((float(i) if i == max_neg else float(i) / divisor) for i in value) + return list(value) + + +action = None +for root_action in _controller.GetRootActions(): + cur = root_action + while cur is not None: + if cur.eventId == event_id: + action = cur + break + cur = cur.next + if action: + break + +if action is None: + _write_error("Event %d not found" % event_id) + +mesh_data = {"event_id": event_id, "post_vs": post_vs, "attributes": [], "vertices": []} + +if post_vs: + postvs = _controller.GetPostVSData(0, 0, rd.MeshDataStage.VSOut) + vs_refl = state.GetShaderReflection(rd.ShaderStage.Vertex) + if vs_refl: + attrs = [] + for attr in vs_refl.outputSignature: + name = attr.semanticIdxName if attr.varName == "" else attr.varName + attrs.append({ + "name": name, + "compCount": attr.compCount, + }) + mesh_data["attributes"] = attrs + + if postvs.numIndices > 0: + data = _controller.GetBufferData(postvs.vertexResourceId, postvs.vertexByteOffset, 0) + stride = postvs.vertexByteStride + num_verts = min(postvs.numIndices, end_idx) + + for i in range(start_idx, num_verts): + vert = {"index": i} + offset = i * stride + attr_offset = 0 + for attr in vs_refl.outputSignature: + name = attr.semanticIdxName if attr.varName == "" else attr.varName + comp_count = attr.compCount + byte_size = comp_count * 4 + try: + vals = struct.unpack_from("%df" % comp_count, data, offset + attr_offset) + vert[name] = list(vals) + except struct.error: + pass + attr_offset += byte_size + mesh_data["vertices"].append(vert) + if len(mesh_data["vertices"]) >= max_verts: + break +else: + ib = state.GetIBuffer() + vbs = state.GetVBuffers() + attrs = state.GetVertexInputs() + + for attr in attrs: + mesh_data["attributes"].append({ + "name": attr.name, + "format": str(attr.format), + "buffer": attr.vertexBuffer, + "offset": attr.byteOffset, + }) + + if ib.resourceId != rd.ResourceId.Null() and (action.flags & rd.ActionFlags.Indexed): + idx_fmt = "H" if ib.byteStride == 2 else "I" + ibdata = 
_controller.GetBufferData(ib.resourceId, ib.byteOffset, 0) + num = min(action.numIndices, end_idx) + indices = [] + for i in range(start_idx, num): + offset = (action.indexOffset + i) * ib.byteStride + try: + val = struct.unpack_from(idx_fmt, ibdata, offset)[0] + indices.append(val + action.baseVertex) + except struct.error: + break + else: + indices = list(range(start_idx, min(action.numIndices, end_idx))) + + for idx in indices: + vert = {"index": idx} + for attr in attrs: + if attr.perInstance: + continue + vb = vbs[attr.vertexBuffer] + offset = attr.byteOffset + vb.byteOffset + idx * vb.byteStride + data = _controller.GetBufferData(vb.resourceId, offset, attr.format.compByteWidth * attr.format.compCount) + vals = unpack_data(attr.format, data) + vert[attr.name] = vals + mesh_data["vertices"].append(vert) + if len(mesh_data["vertices"]) >= max_verts: + break + +_write_output(mesh_data) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/messages.py b/renderdoctools/scripts/messages.py new file mode 100644 index 00000000..d208b6a2 --- /dev/null +++ b/renderdoctools/scripts/messages.py @@ -0,0 +1,99 @@ +# renderdoctools/scripts/messages.py +# Retrieve API debug/validation messages from the capture. +# Runs inside RenderDoc Python 3.6. _cfg, _controller, _cap, rd available from base header. 
+
+severity_filter = _cfg.get("severity_filter", "all").lower()
+
+# Map severity enum values to readable names
+severity_names = {
+    rd.MessageSeverity.High: "high",
+    rd.MessageSeverity.Medium: "medium",
+    rd.MessageSeverity.Low: "low",
+    rd.MessageSeverity.Info: "info",
+}
+
+# Map category enum values to readable names
+category_names = {
+    rd.MessageCategory.Application_Defined: "Application_Defined",
+    rd.MessageCategory.Miscellaneous: "Miscellaneous",
+    rd.MessageCategory.Initialization: "Initialization",
+    rd.MessageCategory.Cleanup: "Cleanup",
+    rd.MessageCategory.Compilation: "Compilation",
+    rd.MessageCategory.State_Creation: "State_Creation",
+    rd.MessageCategory.State_Setting: "State_Setting",
+    rd.MessageCategory.State_Getting: "State_Getting",
+    rd.MessageCategory.Resource_Manipulation: "Resource_Manipulation",
+    rd.MessageCategory.Execution: "Execution",
+    rd.MessageCategory.Shaders: "Shaders",
+    rd.MessageCategory.Deprecated: "Deprecated",
+    rd.MessageCategory.Undefined: "Undefined",
+    rd.MessageCategory.Portability: "Portability",
+    rd.MessageCategory.Performance: "Performance",
+}
+
+# Map source enum values to readable names
+source_names = {
+    rd.MessageSource.API: "API",
+    rd.MessageSource.RedundantAPIUse: "RedundantAPIUse",
+    rd.MessageSource.IncorrectAPIUse: "IncorrectAPIUse",
+    rd.MessageSource.GeneralPerformance: "GeneralPerformance",
+    rd.MessageSource.GCNPerformance: "GCNPerformance",
+    rd.MessageSource.RuntimeWarning: "RuntimeWarning",
+    rd.MessageSource.UnsupportedConfiguration: "UnsupportedConfiguration",
+}
+
+# Severity priority for filtering: high is most severe
+severity_priority = {"high": 0, "medium": 1, "low": 2, "info": 3}
+
+# Replay to the last event to ensure all messages are generated
+actions = _controller.GetRootActions()
+if actions:
+    last = actions[-1]
+    while last.children:
+        last = last.children[-1]
+    _controller.SetFrameEvent(last.eventId, True)
+
+# Retrieve all debug messages
+msgs = _controller.GetDebugMessages()
+
+messages = []
+for m in msgs:
+    sev = severity_names.get(m.severity, str(m.severity))
+
+    # Apply severity filter
+    if severity_filter != "all":
+        if severity_filter in severity_priority:
+            msg_priority = severity_priority.get(sev, 99)
+            filter_priority = severity_priority[severity_filter]
+            if msg_priority > filter_priority:
+                continue
+        elif sev != severity_filter:
+            continue
+
+    cat = category_names.get(m.category, str(m.category))
+    src = source_names.get(m.source, str(m.source))
+
+    messages.append({
+        "eventId": m.eventId,
+        "category": cat,
+        "severity": sev,
+        "source": src,
+        "messageID": m.messageID,
+        "description": m.description,
+    })
+
+# Summary counts by severity
+counts = {"high": 0, "medium": 0, "low": 0, "info": 0}
+for m in messages:
+    sev = m["severity"]
+    if sev in counts:
+        counts[sev] += 1
+
+_write_output({
+    "messages": messages,
+    "total": len(messages),
+    "counts": counts,
+    "severity_filter": severity_filter,
+})
+_shutdown()
+sys.exit(0)
diff --git a/renderdoctools/scripts/pick_pixel.py b/renderdoctools/scripts/pick_pixel.py
new file mode 100644
index 00000000..b93f46a4
--- /dev/null
+++ b/renderdoctools/scripts/pick_pixel.py
@@ -0,0 +1,114 @@
+# renderdoctools/scripts/pick_pixel.py
+# Pick a single pixel value from a texture or render target.
+# Runs inside RenderDoc Python 3.6.
+
+resource_id = _cfg.get("resource_id")
+x = _cfg.get("x")
+y = _cfg.get("y")
+sub_mip = _cfg.get("sub_mip", 0)
+sub_slice = _cfg.get("sub_slice", 0)
+sub_sample = _cfg.get("sub_sample", 0)
+comp_type = _cfg.get("comp_type", "")
+
+if resource_id is None:
+    _write_error("--resource is required")
+if x is None or y is None:
+    _write_error("--x and --y are required")
+
+# Resolve resource ID (a ResourceId cannot be built from an int directly;
+# scan the capture's resource list for a matching entry instead)
+rid_int = int(resource_id)
+
+# Find the matching ResourceId object from the capture's resource list
+found_rid = None
+for r in _controller.GetResources():
+    if int(r.resourceId) == rid_int:
+        found_rid = r.resourceId
+        break
+
+if found_rid is None:
+    # Try textures list as fallback
+    for t in _controller.GetTextures():
+        if int(t.resourceId) == rid_int:
+            found_rid = t.resourceId
+            break
+
+if found_rid is None:
+    _write_error("Resource ID %s not found in capture" % resource_id)
+
+# Build Subresource
+sub = rd.Subresource(int(sub_mip), int(sub_slice), int(sub_sample))
+
+# Resolve CompType
+COMP_TYPE_MAP = {
+    "": rd.CompType.Typeless,
+    "typeless": rd.CompType.Typeless,
+    "float": rd.CompType.Float,
+    "unorm": rd.CompType.UNorm,
+    "snorm": rd.CompType.SNorm,
+    "uint": rd.CompType.UInt,
+    "sint": rd.CompType.SInt,
+    "uscaled": rd.CompType.UScaled,
+    "sscaled": rd.CompType.SScaled,
+    "depth": rd.CompType.Depth,
+    "unormsrgb": rd.CompType.UNormSRGB,
+}
+
+type_cast = COMP_TYPE_MAP.get(comp_type.lower(), rd.CompType.Typeless)
+
+# Pick the pixel
+pixel = _controller.PickPixel(found_rid, int(x), int(y), sub, type_cast)
+
+# Look up texture metadata for context
+tex_info = None
+for t in _controller.GetTextures():
+    if int(t.resourceId) == rid_int:
+        try:
+            fmt_str = t.format.Name()
+        except Exception:
+            try:
+                fmt_str = "%s_%s%d" % (str(t.format.type), str(t.format.compType), t.format.compByteWidth * 8)
+            except Exception:
+                fmt_str = "unknown"
+        tex_info = {
+            "width": t.width,
+            "height": t.height,
+            "depth": t.depth,
+            "mips": t.mips,
+            "arraysize": t.arraysize,
+            "format": fmt_str,
+        }
+        break
+
+# Look up resource name
+res_name = ""
+for r in _controller.GetResources():
+    if int(r.resourceId) == rid_int:
+        res_name = r.name
+        break
+
+# Extract all union interpretations
+result = {
+    "resourceId": str(rid_int),
+    "name": res_name,
+    "x": int(x),
+    "y": int(y),
+    "subresource": {
+        "mip": int(sub_mip),
+        "slice": int(sub_slice),
+        "sample": int(sub_sample),
+    },
+    "compType": comp_type if comp_type else "Typeless",
+    "value": {
+        "float": [pixel.floatValue[i] for i in range(4)],
+        "uint": [pixel.uintValue[i] for i in range(4)],
+        "int": [pixel.intValue[i] for i in range(4)],
+    },
+}
+
+if tex_info:
+    result["texture"] = tex_info
+
+_write_output(result)
+_shutdown()
+sys.exit(0)
diff --git a/renderdoctools/scripts/pipeline.py b/renderdoctools/scripts/pipeline.py
new file mode 100644
index 00000000..2597200f
--- /dev/null
+++ b/renderdoctools/scripts/pipeline.py
@@ -0,0 +1,107 @@
+# renderdoctools/scripts/pipeline.py
+# Pipeline state inspection at a specific event.
+# Runs inside RenderDoc Python 3.6.
+ +event_id = _cfg.get("event_id") +stage_filter = _cfg.get("stage", "") + +if event_id is None: + _write_error("--event is required") + +_controller.SetFrameEvent(event_id, True) +state = _controller.GetPipelineState() + +STAGES = [ + ("vertex", rd.ShaderStage.Vertex), + ("hull", rd.ShaderStage.Hull), + ("domain", rd.ShaderStage.Domain), + ("geometry", rd.ShaderStage.Geometry), + ("pixel", rd.ShaderStage.Pixel), + ("compute", rd.ShaderStage.Compute), +] + +pipeline = {"event_id": event_id, "stages": {}} + +for stage_name, stage_enum in STAGES: + if stage_filter and stage_filter != stage_name: + continue + + refl = state.GetShaderReflection(stage_enum) + if refl is None: + continue + + stage_info = { + "bound": True, + "entryPoint": refl.entryPoint, + "debugInfo": "", + "constantBuffers": [], + "readOnlyResources": [], + "readWriteResources": [], + } + + # Safe debugInfo access + try: + if refl.debugInfo and len(refl.debugInfo.files) > 0: + stage_info["debugInfo"] = refl.debugInfo.files[0].filename + except Exception: + pass + + for i, cb in enumerate(refl.constantBlocks): + stage_info["constantBuffers"].append({ + "index": i, + "name": cb.name, + "byteSize": cb.byteSize, + }) + + for i, res in enumerate(refl.readOnlyResources): + stage_info["readOnlyResources"].append({ + "index": i, + "name": res.name, + "type": str(res.textureType), + }) + + for i, res in enumerate(refl.readWriteResources): + stage_info["readWriteResources"].append({ + "index": i, + "name": res.name, + "type": str(res.textureType), + }) + + pipeline["stages"][stage_name] = stage_info + +def _find_action(eid): + """Find action by event ID, searching children recursively.""" + def _search(action): + cur = action + while cur is not None: + if cur.eventId == eid: + return cur + for child in cur.children: + found = _search(child) + if found is not None: + return found + cur = cur.next + return None + for root in _controller.GetRootActions(): + found = _search(root) + if found is not None: + return 
found + return None + +# Render targets +action = _find_action(event_id) +if action is None: + action = _controller.GetRootActions()[0] + +outputs = [] +for o in action.outputs: + if o != rd.ResourceId.Null(): + outputs.append(str(int(o))) +pipeline["renderTargets"] = outputs + +depth_id = action.depthOut +pipeline["depthTarget"] = str(int(depth_id)) if depth_id != rd.ResourceId.Null() else None + +_write_output(pipeline) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/pixel_history.py b/renderdoctools/scripts/pixel_history.py new file mode 100644 index 00000000..0fad7564 --- /dev/null +++ b/renderdoctools/scripts/pixel_history.py @@ -0,0 +1,129 @@ +# renderdoctools/scripts/pixel_history.py +# Pixel history -- "what drew to this pixel?" for a given render target at (x, y). +# Runs inside RenderDoc Python 3.6. _cfg, _controller, _cap, rd available from base header. + +event_id = _cfg.get("event_id") +resource_id = _cfg.get("resource_id") +px = _cfg.get("x") +py = _cfg.get("y") +sub_mip = _cfg.get("sub_mip", 0) +sub_slice = _cfg.get("sub_slice", 0) +sub_sample = _cfg.get("sub_sample", 0) + +if event_id is None: + _write_error("--event is required") +if resource_id is None: + _write_error("--resource is required") +if px is None or py is None: + _write_error("--x and --y are required") + +# Move to the requested event so the replay state is correct +_controller.SetFrameEvent(event_id, True) + +# Resolve the resource ID -- accept int or string +rid = rd.ResourceId() +rid_int = int(resource_id) + +# Walk resources to find the matching one +found = False +for r in _controller.GetResources(): + if int(r.resourceId) == rid_int: + rid = r.resourceId + found = True + break + +if not found: + _write_error("Resource ID %s not found in capture" % resource_id) + +sub = rd.Subresource(sub_mip, sub_slice, sub_sample) + +# PixelHistory(ResourceId texture, uint32_t x, uint32_t y, Subresource sub, CompType typeCast) +modifications = _controller.PixelHistory(rid, 
int(px), int(py), sub, rd.CompType.Typeless) + +# Build action lookup for event names +sf = _controller.GetStructuredFile() +_action_map = {} + +def _walk(action): + _action_map[action.eventId] = action + for child in action.children: + _walk(child) + +for root in _controller.GetRootActions(): + _walk(root) + + +def _extract_color(mod_value): + """Extract RGBA floats from a ModificationValue.""" + col = mod_value.col + return { + "r": col.floatValue[0], + "g": col.floatValue[1], + "b": col.floatValue[2], + "a": col.floatValue[3], + } + + +results = [] +for mod in modifications: + action = _action_map.get(mod.eventId) + is_clear = False + is_draw = False + action_name = "" + if action is not None: + action_name = action.GetName(sf) + is_draw = bool(action.flags & rd.ActionFlags.Drawcall) + is_clear = bool(action.flags & rd.ActionFlags.Clear) + + passed = mod.Passed() + + # Collect all test results + tests = { + "sampleMasked": mod.sampleMasked, + "backfaceCulled": mod.backfaceCulled, + "depthClipped": mod.depthClipped, + "depthBoundsFailed": mod.depthBoundsFailed, + "viewClipped": mod.viewClipped, + "scissorClipped": mod.scissorClipped, + "shaderDiscarded": mod.shaderDiscarded, + "depthTestFailed": mod.depthTestFailed, + "stencilTestFailed": mod.stencilTestFailed, + "predicationSkipped": mod.predicationSkipped, + } + + # Gather failed tests as a list for convenience + failed_tests = [name for name, val in tests.items() if val] + + entry = { + "eventId": mod.eventId, + "name": action_name, + "passed": passed, + "isDraw": is_draw, + "isClear": is_clear, + "directShaderWrite": mod.directShaderWrite, + "unboundPS": mod.unboundPS, + "fragIndex": mod.fragIndex, + "primitiveID": mod.primitiveID, + "preMod": _extract_color(mod.preMod), + "preModDepth": mod.preMod.depth, + "preModStencil": mod.preMod.stencil, + "shaderOut": _extract_color(mod.shaderOut), + "shaderOutDepth": mod.shaderOut.depth, + "shaderOutStencil": mod.shaderOut.stencil, + "postMod": 
_extract_color(mod.postMod), + "postModDepth": mod.postMod.depth, + "postModStencil": mod.postMod.stencil, + "tests": tests, + "failedTests": failed_tests, + } + + results.append(entry) + +_write_output({ + "pixel": {"x": int(px), "y": int(py)}, + "resourceId": str(rid_int), + "modifications": results, + "total": len(results), +}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/shaders.py b/renderdoctools/scripts/shaders.py new file mode 100644 index 00000000..55019400 --- /dev/null +++ b/renderdoctools/scripts/shaders.py @@ -0,0 +1,76 @@ +# renderdoctools/scripts/shaders.py +# Shader disassembly and constant buffer inspection. +# Runs inside RenderDoc Python 3.6. + +event_id = _cfg.get("event_id") +stage_filter = _cfg.get("stage", "") +show_cbuffers = _cfg.get("cbuffers", False) + +if event_id is None: + _write_error("--event is required") + +_controller.SetFrameEvent(event_id, True) +state = _controller.GetPipelineState() + +targets = _controller.GetDisassemblyTargets(True) +target = targets[0] if targets else "" + +pipe = state.GetGraphicsPipelineObject() + +STAGES = [ + ("vertex", rd.ShaderStage.Vertex), + ("hull", rd.ShaderStage.Hull), + ("domain", rd.ShaderStage.Domain), + ("geometry", rd.ShaderStage.Geometry), + ("pixel", rd.ShaderStage.Pixel), + ("compute", rd.ShaderStage.Compute), +] + +shaders = {} + +for stage_name, stage_enum in STAGES: + if stage_filter and stage_filter != stage_name: + continue + + refl = state.GetShaderReflection(stage_enum) + if refl is None: + continue + + entry = state.GetShaderEntryPoint(stage_enum) + disasm = _controller.DisassembleShader(pipe, refl, target) + + stage_data = { + "entryPoint": entry, + "disassembly": disasm, + } + + if show_cbuffers: + cbuffers = [] + for i, cb in enumerate(refl.constantBlocks): + cb_bind = state.GetConstantBlock(stage_enum, i, 0) + try: + variables = _controller.GetCBufferVariableContents( + pipe, refl.resourceId, stage_enum, entry, i, + cb_bind.descriptor.resource, 
cb_bind.descriptor.byteOffset, + cb_bind.descriptor.byteSize + ) + vars_data = [] + for v in variables: + var_entry = {"name": v.name, "rows": v.rows, "columns": v.columns} + if len(v.members) == 0: + vals = [] + for r in range(v.rows): + for c in range(v.columns): + vals.append(v.value.f32v[r * v.columns + c]) + var_entry["values"] = vals + vars_data.append(var_entry) + cbuffers.append({"name": cb.name, "index": i, "variables": vars_data}) + except Exception as e: + cbuffers.append({"name": cb.name, "index": i, "error": str(e)}) + stage_data["constantBuffers"] = cbuffers + + shaders[stage_name] = stage_data + +_write_output({"event_id": event_id, "disasmTarget": target, "shaders": shaders}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/tex_data.py b/renderdoctools/scripts/tex_data.py new file mode 100644 index 00000000..58eccba4 --- /dev/null +++ b/renderdoctools/scripts/tex_data.py @@ -0,0 +1,87 @@ +# renderdoctools/scripts/tex_data.py +# Extract raw texture pixel data via GetTextureData(). +# Runs inside RenderDoc Python 3.6. 
+ +resource_id = _cfg.get("resource_id") +sub_mip = _cfg.get("sub_mip", 0) +sub_slice = _cfg.get("sub_slice", 0) +sub_sample = _cfg.get("sub_sample", 0) +output_path = _cfg.get("output_path", "") +hex_preview_bytes = _cfg.get("hex_preview_bytes", 256) + +if resource_id is None: + _write_error("--resource is required") + +rid_int = int(resource_id) + +# Find the texture descriptor +tex_desc = None +for t in _controller.GetTextures(): + if int(t.resourceId) == rid_int: + tex_desc = t + break + +if tex_desc is None: + _write_error("Resource ID %d not found among textures in this capture" % rid_int) + +# Get resource name +res_name = "" +for r in _controller.GetResources(): + if int(r.resourceId) == rid_int: + res_name = r.name + break + +# Build format string +fmt = tex_desc.format +try: + fmt_str = fmt.Name() +except Exception: + try: + fmt_str = "%s_%s%d" % (str(fmt.type), str(fmt.compType), fmt.compByteWidth * 8) + except Exception: + fmt_str = "unknown" + +# Build Subresource and fetch raw data +sub = rd.Subresource(sub_mip, sub_slice, sub_sample) +raw_bytes = _controller.GetTextureData(tex_desc.resourceId, sub) + +result = { + "resourceId": str(rid_int), + "name": res_name, + "width": tex_desc.width, + "height": tex_desc.height, + "depth": tex_desc.depth, + "mips": tex_desc.mips, + "arraysize": tex_desc.arraysize, + "format": fmt_str, + "type": str(tex_desc.type), + "subresource": { + "mip": sub_mip, + "slice": sub_slice, + "sample": sub_sample, + }, + "byteSize": len(raw_bytes), +} + +if output_path: + # Write raw bytes to file + out_dir = os.path.dirname(output_path) + if out_dir: + os.makedirs(out_dir, exist_ok=True) + with open(output_path, "wb") as f: + f.write(raw_bytes) + result["savedTo"] = output_path +else: + # Hex dump of first N bytes + preview_len = min(hex_preview_bytes, len(raw_bytes)) + hex_lines = [] + for offset in range(0, preview_len, 16): + chunk = raw_bytes[offset:offset + 16] + hex_part = " ".join("%02x" % (b if isinstance(b, int) else 
ord(b)) for b in chunk) + hex_lines.append("%08x %s" % (offset, hex_part)) + result["hexPreview"] = hex_lines + result["hexPreviewBytes"] = preview_len + +_write_output(result) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/tex_stats.py b/renderdoctools/scripts/tex_stats.py new file mode 100644 index 00000000..4215fe1f --- /dev/null +++ b/renderdoctools/scripts/tex_stats.py @@ -0,0 +1,90 @@ +# renderdoctools/scripts/tex_stats.py +# Texture min/max/histogram analysis for HDR range debugging. +# Runs inside RenderDoc Python 3.6. + +resource_id = _cfg.get("resource_id") +sub_mip = _cfg.get("sub_mip", 0) +sub_slice = _cfg.get("sub_slice", 0) +sub_sample = _cfg.get("sub_sample", 0) +do_histogram = _cfg.get("histogram", False) +hist_min = _cfg.get("histogram_min", 0.0) +hist_max = _cfg.get("histogram_max", 1.0) + +if resource_id is None: + _write_error("--resource is required") + +# Resolve the resource ID +rid_int = int(resource_id) + +# Look up texture metadata +_all_textures = {} +for t in _controller.GetTextures(): + _all_textures[int(t.resourceId)] = t + +if rid_int not in _all_textures: + _write_error("Resource ID %d not found in capture" % rid_int) + +tex_desc = _all_textures[rid_int] +tex_rid = tex_desc.resourceId + +# Build resource name lookup +_resource_names = {} +for r in _controller.GetResources(): + _resource_names[int(r.resourceId)] = r.name + +# Format string +try: + fmt_str = tex_desc.format.Name() +except Exception: + try: + fmt = tex_desc.format + fmt_str = "%s_%s%d" % (str(fmt.type), str(fmt.compType), fmt.compByteWidth * 8) + except Exception: + fmt_str = "unknown" + +# Build Subresource +sub = rd.Subresource(sub_mip, sub_slice, sub_sample) + +# GetMinMax: returns (PixelValue min, PixelValue max) +minmax = _controller.GetMinMax(tex_rid, sub, rd.CompType.Typeless) +min_val = minmax[0] +max_val = minmax[1] + +result = { + "resourceId": str(rid_int), + "name": _resource_names.get(rid_int, ""), + "width": tex_desc.width, + "height": 
tex_desc.height, + "depth": tex_desc.depth, + "mips": tex_desc.mips, + "format": fmt_str, + "subresource": {"mip": sub_mip, "slice": sub_slice, "sample": sub_sample}, + "min": { + "r": float(min_val.floatValue[0]), + "g": float(min_val.floatValue[1]), + "b": float(min_val.floatValue[2]), + "a": float(min_val.floatValue[3]), + }, + "max": { + "r": float(max_val.floatValue[0]), + "g": float(max_val.floatValue[1]), + "b": float(max_val.floatValue[2]), + "a": float(max_val.floatValue[3]), + }, +} + +# Optional histogram +if do_histogram: + channels = (True, True, True, True) + buckets = _controller.GetHistogram(tex_rid, sub, rd.CompType.Typeless, + float(hist_min), float(hist_max), channels) + result["histogram"] = { + "min_range": hist_min, + "max_range": hist_max, + "bucket_count": len(buckets), + "buckets": [int(b) for b in buckets], + } + +_write_output(result) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/textures.py b/renderdoctools/scripts/textures.py new file mode 100644 index 00000000..8af92be1 --- /dev/null +++ b/renderdoctools/scripts/textures.py @@ -0,0 +1,166 @@ +# renderdoctools/scripts/textures.py +# Texture listing and export at a specific event. +# Runs inside RenderDoc Python 3.6. 
+ +event_id = _cfg.get("event_id") +save_all_dir = _cfg.get("save_all", "") +save_rid = _cfg.get("save_rid", "") +save_format = _cfg.get("format", "png") +save_output = _cfg.get("save_output", "") + +if event_id is None: + _write_error("--event is required") + +_controller.SetFrameEvent(event_id, True) +state = _controller.GetPipelineState() + +FORMAT_MAP = { + "png": rd.FileType.PNG, + "jpg": rd.FileType.JPG, + "dds": rd.FileType.DDS, + "hdr": rd.FileType.HDR, + "bmp": rd.FileType.BMP, + "tga": rd.FileType.TGA, +} + +# Build a lookup of all textures in the capture +_all_textures = {} +for t in _controller.GetTextures(): + _all_textures[int(t.resourceId)] = t + +# Build a lookup of resource names (TextureDescription has no .name) +_resource_names = {} +for r in _controller.GetResources(): + _resource_names[int(r.resourceId)] = r.name + + +def get_texture_info(rid): + """Get texture metadata for a resource ID.""" + rid_int = int(rid) + if rid_int not in _all_textures: + return None + tex_desc = _all_textures[rid_int] + # Build format string safely + fmt = tex_desc.format + try: + fmt_str = fmt.Name() + except Exception: + try: + fmt_str = "%s_%s%d" % (str(fmt.type), str(fmt.compType), fmt.compByteWidth * 8) + except Exception: + fmt_str = "unknown" + + return { + "resourceId": str(rid_int), + "name": _resource_names.get(rid_int, ""), + "width": tex_desc.width, + "height": tex_desc.height, + "depth": tex_desc.depth, + "mips": tex_desc.mips, + "arraysize": tex_desc.arraysize, + "format": fmt_str, + "type": str(tex_desc.type), + } + + +def save_texture(rid, filepath, fmt="png"): + """Save a texture to disk.""" + texsave = rd.TextureSave() + texsave.resourceId = rid + texsave.alpha = rd.AlphaMapping.Preserve + texsave.mip = 0 + texsave.slice.sliceIndex = 0 + texsave.destType = FORMAT_MAP.get(fmt, rd.FileType.PNG) + _controller.SaveTexture(texsave, filepath) + + +# Collect all bound textures at this event +textures = [] +seen = set() + +def _find_action(eid): + """Find 
action by event ID, searching children recursively.""" + def _search(action): + cur = action + while cur is not None: + if cur.eventId == eid: + return cur + for child in cur.children: + found = _search(child) + if found is not None: + return found + cur = cur.next + return None + for root in _controller.GetRootActions(): + found = _search(root) + if found is not None: + return found + return None + +# From render targets +action = _find_action(event_id) +if action is not None: + for o in action.outputs: + if o != rd.ResourceId.Null() and int(o) not in seen: + seen.add(int(o)) + info = get_texture_info(o) + if info: + info["binding"] = "renderTarget" + textures.append(info) + if action.depthOut != rd.ResourceId.Null() and int(action.depthOut) not in seen: + seen.add(int(action.depthOut)) + info = get_texture_info(action.depthOut) + if info: + info["binding"] = "depthTarget" + textures.append(info) + +# From shader SRVs +for stage_name, stage_enum in [("vertex", rd.ShaderStage.Vertex), ("hull", rd.ShaderStage.Hull), + ("domain", rd.ShaderStage.Domain), ("pixel", rd.ShaderStage.Pixel), + ("geometry", rd.ShaderStage.Geometry), ("compute", rd.ShaderStage.Compute)]: + refl = state.GetShaderReflection(stage_enum) + if refl is None: + continue + try: + ro_binds = state.GetReadOnlyResources(stage_enum) + except Exception: + continue + # Build index -> UsedDescriptor lookup via access.index + bind_map = {} + for used in ro_binds: + bind_map[used.access.index] = used + for i, res in enumerate(refl.readOnlyResources): + used = bind_map.get(i) + if used is None: + continue + rid = used.descriptor.resource + if rid != rd.ResourceId.Null() and int(rid) not in seen: + seen.add(int(rid)) + info = get_texture_info(rid) + if info: + info["binding"] = "%s:SRV[%d] %s" % (stage_name, i, res.name) + textures.append(info) + +# Handle save operations +saved = [] +if save_all_dir: + os.makedirs(save_all_dir, exist_ok=True) + for tex in textures: + rid_int = int(tex["resourceId"]) + if 
rid_int in _all_textures: + t = _all_textures[rid_int] + fname = "%s_%s.%s" % (tex["resourceId"], tex.get("name", "").replace("/", "_").replace("\\", "_")[:32], save_format) + fpath = os.path.join(save_all_dir, fname) + save_texture(t.resourceId, fpath, save_format) + saved.append(fpath) + +elif save_rid and save_output: + target_rid = int(save_rid) + if target_rid in _all_textures: + t = _all_textures[target_rid] + save_texture(t.resourceId, save_output, save_format) + saved.append(save_output) + +_write_output({"textures": textures, "total": len(textures), "saved": saved}) +_shutdown() +sys.exit(0) diff --git a/renderdoctools/scripts/usage.py b/renderdoctools/scripts/usage.py new file mode 100644 index 00000000..bb0d5f99 --- /dev/null +++ b/renderdoctools/scripts/usage.py @@ -0,0 +1,105 @@ +# renderdoctools/scripts/usage.py +# Resource usage tracking -- find which events read/write a given resource. +# Runs inside RenderDoc Python 3.6. _cfg, _controller, _cap, rd available from base header. 
+
+resource_id_str = _cfg.get("resource_id", "")
+usage_filter = _cfg.get("usage_filter", "all").lower()
+
+if not resource_id_str:
+    _write_error("--resource is required")
+
+# Parse resource ID (accept integer or string representation)
+try:
+    rid_int = int(resource_id_str)
+except ValueError:
+    _write_error("Invalid resource ID: %s (must be an integer)" % resource_id_str)
+
+# A ResourceId cannot be constructed from an integer in the RenderDoc Python
+# API -- int(ResourceId) works, but not the reverse. The only reliable way to
+# recover the ResourceId object is to scan GetResources() for an entry whose
+# integer value matches, which also gives us the resource's name and type in
+# the same pass.
+resources = _controller.GetResources()
+found_rid = None
+resource_name = ""
+resource_type = ""
+for res in resources:
+    if int(res.resourceId) == rid_int:
+        found_rid = res.resourceId
+        resource_name = res.name
+        resource_type = str(res.type)
+        break
+
+if found_rid is None:
+    _write_error("Resource ID %d not found in capture" % rid_int)
+
+# Get all usages for this resource
+usages = _controller.GetUsage(found_rid)
+
+# Build event name lookup from the action tree
+sf = _controller.GetStructuredFile()
+event_names = {}
+def _collect_event_names(action):
+    event_names[action.eventId] = action.GetName(sf)
+    for child in action.children:
+        _collect_event_names(child)
+for root_action in _controller.GetRootActions():
+    _collect_event_names(root_action)
+
+# Classify usage types for filtering
+# "read" usages: constants, resources (SRVs), vertex/index buffers, copy source, resolve source, indirect
+# "write" usages: RW resources (UAVs), color targets, depth targets, copy dest, resolve dest, clear, stream out
+READ_USAGES = {
+    "VertexBuffer", "IndexBuffer",
+    "VS_Constants", "HS_Constants", "DS_Constants", "GS_Constants",
+    "PS_Constants", "CS_Constants", "TS_Constants", "MS_Constants", "All_Constants",
+    "VS_Resource", "HS_Resource", "DS_Resource", "GS_Resource",
+    "PS_Resource", "CS_Resource", "TS_Resource", "MS_Resource", "All_Resource",
+    "InputTarget", "Indirect", "CopySrc", "ResolveSrc",
+}
+WRITE_USAGES = {
+    "VS_RWResource", "HS_RWResource", "DS_RWResource", "GS_RWResource",
+    "PS_RWResource", "CS_RWResource", "TS_RWResource", "MS_RWResource", "All_RWResource",
+    "ColorTarget", "DepthStencilTarget",
+    "StreamOut", "CopyDst", "ResolveDst",
+    "Clear", "Discard", "GenMips", "CPUWrite",
+}
+
+entries = []
+for u in usages:
+    usage_str = str(u.usage)
+    # Strip enum prefix if present (e.g. "ResourceUsage.ColorTarget" -> "ColorTarget")
+    short_usage = usage_str.split(".")[-1] if "." in usage_str else usage_str
+
+    # Determine read/write category
+    if short_usage in READ_USAGES:
+        category = "read"
+    elif short_usage in WRITE_USAGES:
+        category = "write"
+    else:
+        # Copy, Resolve, Barrier -- treat as read+write
+        category = "readwrite"
+
+    # Apply filter
+    if usage_filter == "read" and category not in ("read", "readwrite"):
+        continue
+    if usage_filter == "write" and category not in ("write", "readwrite"):
+        continue
+
+    entries.append({
+        "eventId": u.eventId,
+        "eventName": event_names.get(u.eventId, "(unknown)"),
+        "usage": short_usage,
+        "category": category,
+    })
+
+_write_output({
+    "resourceId": rid_int,
+    "resourceName": resource_name,
+    "resourceType": resource_type,
+    "filter": usage_filter,
+    "total": len(entries),
+    "usages": entries,
+})
+_shutdown()
+sys.exit(0)
diff --git a/tests/test_rdc/README.md b/tests/test_rdc/README.md
new file mode 100644
index 00000000..e15bbaa0
--- /dev/null
+++ b/tests/test_rdc/README.md
@@ -0,0 +1 @@
+Place .rdc capture files here for integration tests.
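Several of the scripts in this patch (usage.py, pipeline.py, pixel_history.py) share the same depth-first walk over RenderDoc's action tree: start from `GetRootActions()` and recurse into each action's `children`. A minimal standalone sketch of that traversal, using a plain stand-in class instead of RenderDoc's `ActionDescription` (the `Action` class and `collect_event_ids` name are illustrative, not part of the renderdoctools API):

```python
class Action:
    """Stand-in for RenderDoc's ActionDescription (eventId + children)."""
    def __init__(self, event_id, children=None):
        self.eventId = event_id
        self.children = children or []


def collect_event_ids(roots):
    """Depth-first walk, mirroring how usage.py builds its event-name lookup."""
    ids = []

    def _walk(action):
        ids.append(action.eventId)
        for child in action.children:
            _walk(child)

    for root in roots:
        _walk(root)
    return ids


# A tiny mock frame: two root actions, one with nested children.
roots = [
    Action(1, [Action(2), Action(3, [Action(4)])]),
    Action(5),
]
print(collect_event_ids(roots))  # -> [1, 2, 3, 4, 5]
```

The same shape works for building any eventId-keyed lookup; only the per-action callback changes between scripts.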
diff --git a/tests/test_renderdoctools/__init__.py b/tests/test_renderdoctools/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/tests/test_renderdoctools/test_core.py b/tests/test_renderdoctools/test_core.py new file mode 100644 index 00000000..9e1fcfd2 --- /dev/null +++ b/tests/test_renderdoctools/test_core.py @@ -0,0 +1,93 @@ +# tests/test_renderdoctools/test_core.py +"""Unit tests for renderdoctools.core.""" +from __future__ import annotations + +import json +import os +import tempfile +from pathlib import Path +from unittest.mock import patch, MagicMock + +import pytest + +from renderdoctools import core + + +class TestFindRenderdoc: + def test_finds_bundled_renderdoc(self, tmp_path): + """find_renderdoc() returns path to bundled qrenderdoc.exe.""" + rd_dir = tmp_path / "tools" / "renderdoc" + rd_dir.mkdir(parents=True) + (rd_dir / "qrenderdoc.exe").touch() + + with patch.object(core, "WORKSPACE_ROOT", tmp_path): + result = core.find_renderdoc() + assert result == rd_dir / "qrenderdoc.exe" + + def test_raises_if_not_found(self, tmp_path): + """find_renderdoc() raises FileNotFoundError when RenderDoc is missing.""" + with patch.object(core, "WORKSPACE_ROOT", tmp_path): + with pytest.raises(FileNotFoundError, match="RenderDoc not found"): + core.find_renderdoc() + + def test_finds_versioned_renderdoc(self, tmp_path): + """find_renderdoc() finds RenderDoc in versioned directory names.""" + rd_dir = tmp_path / "tools" / "RenderDoc_1.43_64" + rd_dir.mkdir(parents=True) + (rd_dir / "qrenderdoc.exe").touch() + + with patch.object(core, "WORKSPACE_ROOT", tmp_path): + result = core.find_renderdoc() + assert result == rd_dir / "qrenderdoc.exe" + + def test_finds_any_renderdoc_prefix(self, tmp_path): + """find_renderdoc() falls back to any renderdoc* directory.""" + rd_dir = tmp_path / "tools" / "RenderDoc_2.0_64" + rd_dir.mkdir(parents=True) + (rd_dir / "qrenderdoc.exe").touch() + + with patch.object(core, "WORKSPACE_ROOT", tmp_path): + result = 
core.find_renderdoc() + assert result == rd_dir / "qrenderdoc.exe" + + +class TestRunScript: + def test_generates_script_and_parses_json(self, tmp_path): + """run_script() writes temp script, executes qrenderdoc, reads JSON output.""" + output_data = {"events": [{"eid": 1, "name": "Draw"}]} + + def fake_run(cmd, **kwargs): + script_path = cmd[2] + script_text = Path(script_path).read_text() + for line in script_text.splitlines(): + if "_CONFIG_PATH" in line: + config_path = line.split("= ")[1].strip().strip("'\"") + break + cfg = json.loads(Path(config_path).read_text()) + Path(cfg["output"]).write_text(json.dumps(output_data)) + return MagicMock(returncode=0, stderr="") + + with patch.object(core, "find_renderdoc", return_value=tmp_path / "qrenderdoc.exe"): + with patch("subprocess.run", side_effect=fake_run): + result = core.run_script( + script_name="events", + capture_path=str(tmp_path / "test.rdc"), + config={"draws_only": False}, + ) + assert result == output_data + + def test_missing_script_raises(self, tmp_path): + """run_script() raises FileNotFoundError for unknown script names.""" + with patch.object(core, "find_renderdoc", return_value=tmp_path / "qrenderdoc.exe"): + with pytest.raises(FileNotFoundError, match="Script template not found"): + core.run_script("nonexistent_script", str(tmp_path / "test.rdc")) + + def test_no_output_raises_runtime_error(self, tmp_path): + """run_script() raises RuntimeError when script produces no output.""" + def fake_run(cmd, **kwargs): + return MagicMock(returncode=1, stderr="something went wrong") + + with patch.object(core, "find_renderdoc", return_value=tmp_path / "qrenderdoc.exe"): + with patch("subprocess.run", side_effect=fake_run): + with pytest.raises(RuntimeError, match="failed"): + core.run_script("events", str(tmp_path / "test.rdc")) diff --git a/tests/test_renderdoctools/test_integration.py b/tests/test_renderdoctools/test_integration.py new file mode 100644 index 00000000..eb734cc6 --- /dev/null +++ 
b/tests/test_renderdoctools/test_integration.py @@ -0,0 +1,395 @@ +# tests/test_renderdoctools/test_integration.py +"""Integration tests for renderdoctools against real RenderDoc captures. + +Requires: +- RenderDoc extracted to tools/RenderDoc_1.43_64/ (or tools/renderdoc/) +- At least one .rdc capture file + +Skip automatically if RenderDoc or captures are not available. +""" +from __future__ import annotations + +import json +import os +import tempfile +from pathlib import Path + +import pytest + +from renderdoctools import core + +# ── Fixtures ────────────────────────────────────────────────────────────── + +CAPTURE_DIR = Path(__file__).resolve().parent.parent / "test_rdc" + + +def _find_capture() -> Path | None: + """Find an .rdc capture in tests/test_rdc/.""" + if not CAPTURE_DIR.is_dir(): + return None + for f in CAPTURE_DIR.iterdir(): + if f.suffix == ".rdc": + return f + return None + + +def _renderdoc_available() -> bool: + """Check if bundled RenderDoc is available.""" + try: + core.find_renderdoc() + return True + except FileNotFoundError: + return False + + +_capture = _find_capture() +_has_renderdoc = _renderdoc_available() + +skip_no_renderdoc = pytest.mark.skipif( + not _has_renderdoc, reason="RenderDoc not found in tools/" +) +skip_no_capture = pytest.mark.skipif( + _capture is None, reason="No .rdc capture found in %s" % CAPTURE_DIR +) +requires_integration = pytest.mark.skipif( + not (_has_renderdoc and _capture), + reason="Integration test requires RenderDoc + capture file", +) + + +# ── Tests ───────────────────────────────────────────────────────────────── + + +@requires_integration +class TestEvents: + def test_events_returns_list(self): + result = core.run_script("events", str(_capture), {"draws_only": False, "filter": ""}) + assert isinstance(result["events"], list) + assert isinstance(result["total"], int) and result["total"] > 0 + assert len(result["events"]) == result["total"] + + def test_events_have_required_fields(self): + result = 
core.run_script("events", str(_capture), {"draws_only": False, "filter": ""}) + ev = result["events"][0] + assert isinstance(ev["eid"], int) and ev["eid"] > 0 + assert isinstance(ev["name"], str) and len(ev["name"]) > 0 + assert isinstance(ev["depth"], int) and ev["depth"] >= 0 + assert isinstance(ev["flags"], int) and ev["flags"] >= 0 + assert isinstance(ev["draw"], bool) + assert isinstance(ev["numIndices"], int) and ev["numIndices"] >= 0 + + def test_draws_only_filters(self): + all_result = core.run_script("events", str(_capture), {"draws_only": False, "filter": ""}) + draws_result = core.run_script("events", str(_capture), {"draws_only": True, "filter": ""}) + # draws-only should return fewer or equal events + assert draws_result["total"] <= all_result["total"] + assert draws_result["total"] > 0 + # every event should be a draw with valid fields + for ev in draws_result["events"]: + assert ev["draw"] is True + assert isinstance(ev["eid"], int) and ev["eid"] > 0 + assert isinstance(ev["name"], str) and len(ev["name"]) > 0 + assert isinstance(ev["numIndices"], int) and ev["numIndices"] >= 0 + + def test_filter_narrows_results(self): + all_result = core.run_script("events", str(_capture), {"draws_only": False, "filter": ""}) + filtered = core.run_script("events", str(_capture), {"draws_only": False, "filter": "DrawIndexed"}) + assert filtered["total"] <= all_result["total"] + for ev in filtered["events"]: + assert isinstance(ev["name"], str) and len(ev["name"]) > 0 + assert "DrawIndexed" in ev["name"] + assert isinstance(ev["eid"], int) and ev["eid"] > 0 + + +@requires_integration +class TestAnalyze: + def test_summary(self): + result = core.run_script("analyze", str(_capture), { + "summary": True, "biggest_draws": 0, "render_targets": False, + }) + s = result["summary"] + assert isinstance(s["totalEvents"], int) and s["totalEvents"] > 0 + assert isinstance(s["totalDraws"], int) and s["totalDraws"] > 0 + assert isinstance(s["totalClears"], int) and 
s["totalClears"] >= 0 + assert isinstance(s["totalIndices"], int) and s["totalIndices"] > 0 + assert isinstance(s["totalInstances"], int) and s["totalInstances"] > 0 + assert s["totalDraws"] <= s["totalEvents"] + + def test_biggest_draws(self): + result = core.run_script("analyze", str(_capture), { + "summary": False, "biggest_draws": 5, "render_targets": False, + }) + draws = result["biggestDraws"] + assert isinstance(draws, list) + assert len(draws) <= 5 + assert len(draws) > 0 + # should be sorted descending by numIndices + for i in range(len(draws) - 1): + assert draws[i]["numIndices"] >= draws[i + 1]["numIndices"] + # validate each draw entry has meaningful values + for d in draws: + assert isinstance(d["eid"], int) and d["eid"] > 0 + assert isinstance(d["name"], str) and len(d["name"]) > 0 + assert isinstance(d["numIndices"], int) and d["numIndices"] > 0 + assert isinstance(d["numInstances"], int) and d["numInstances"] > 0 + + def test_render_targets(self): + result = core.run_script("analyze", str(_capture), { + "summary": False, "biggest_draws": 0, "render_targets": True, + }) + rts = result["renderTargets"] + assert isinstance(rts, list) and len(rts) > 0 + for rt in rts: + assert isinstance(rt["resourceId"], str) and int(rt["resourceId"]) > 0 + assert isinstance(rt["drawCount"], int) and rt["drawCount"] > 0 + assert isinstance(rt["firstEid"], int) and rt["firstEid"] > 0 + assert isinstance(rt["lastEid"], int) and rt["lastEid"] > 0 + assert rt["lastEid"] >= rt["firstEid"] + + +@requires_integration +class TestPipeline: + def _get_first_draw_eid(self): + result = core.run_script("events", str(_capture), {"draws_only": True, "filter": ""}) + return result["events"][0]["eid"] + + def test_pipeline_returns_stages(self): + eid = self._get_first_draw_eid() + result = core.run_script("pipeline", str(_capture), {"event_id": eid, "stage": ""}) + assert isinstance(result["event_id"], int) and result["event_id"] == eid + assert isinstance(result["stages"], dict) and 
len(result["stages"]) > 0 + for stage_name, info in result["stages"].items(): + assert stage_name in ("vertex", "hull", "domain", "geometry", "pixel", "compute") + assert isinstance(info["entryPoint"], str) and len(info["entryPoint"]) > 0 + assert info["bound"] is True + assert isinstance(info["constantBuffers"], list) + assert isinstance(info["readOnlyResources"], list) + assert isinstance(info["readWriteResources"], list) + for cb in info["constantBuffers"]: + assert isinstance(cb["index"], int) and cb["index"] >= 0 + assert isinstance(cb["name"], str) + assert isinstance(cb["byteSize"], int) and cb["byteSize"] >= 0 + for res in info["readOnlyResources"] + info["readWriteResources"]: + assert isinstance(res["type"], str) and len(res["type"]) > 0 + assert isinstance(res["name"], str) + assert isinstance(res["index"], int) and res["index"] >= 0 + + def test_pipeline_has_render_targets(self): + eid = self._get_first_draw_eid() + result = core.run_script("pipeline", str(_capture), {"event_id": eid, "stage": ""}) + assert isinstance(result["renderTargets"], list) + for rt in result["renderTargets"]: + assert isinstance(rt, str) and int(rt) > 0 + # depthTarget should be present (may be None if no depth bound) + assert "depthTarget" in result + if result["depthTarget"] is not None: + assert isinstance(result["depthTarget"], str) and int(result["depthTarget"]) > 0 + + def test_pipeline_stage_filter(self): + eid = self._get_first_draw_eid() + result = core.run_script("pipeline", str(_capture), {"event_id": eid, "stage": "vertex"}) + # should only contain vertex stage (if bound) + assert isinstance(result["stages"], dict) + for stage_name, info in result["stages"].items(): + assert stage_name == "vertex" + assert isinstance(info["entryPoint"], str) and len(info["entryPoint"]) > 0 + assert info["bound"] is True + + +@requires_integration +class TestTextures: + def _get_first_draw_eid(self): + result = core.run_script("events", str(_capture), {"draws_only": True, "filter": 
""}) + return result["events"][0]["eid"] + + def test_textures_list(self): + eid = self._get_first_draw_eid() + result = core.run_script("textures", str(_capture), { + "event_id": eid, "save_all": "", "save_rid": "", + "format": "png", "save_output": "", + }) + assert isinstance(result["textures"], list) + assert isinstance(result["total"], int) and result["total"] > 0 + assert len(result["textures"]) == result["total"] + tex = result["textures"][0] + assert isinstance(tex["resourceId"], str) and int(tex["resourceId"]) > 0 + assert isinstance(tex["width"], int) and tex["width"] > 0 + assert isinstance(tex["height"], int) and tex["height"] > 0 + assert isinstance(tex["depth"], int) and tex["depth"] >= 1 + assert isinstance(tex["mips"], int) and tex["mips"] >= 1 + assert isinstance(tex["arraysize"], int) and tex["arraysize"] >= 1 + assert isinstance(tex["format"], str) and tex["format"] not in ("", "unknown") + assert isinstance(tex["binding"], str) and len(tex["binding"]) > 0 + assert isinstance(tex["name"], str) + assert isinstance(tex["type"], str) and len(tex["type"]) > 0 + + def test_texture_values_are_valid(self): + """Texture entries should have non-degenerate dimensions and real format strings.""" + eid = self._get_first_draw_eid() + result = core.run_script("textures", str(_capture), { + "event_id": eid, "save_all": "", "save_rid": "", + "format": "png", "save_output": "", + }) + for tex in result["textures"]: + assert isinstance(tex["resourceId"], str) and int(tex["resourceId"]) > 0 + assert isinstance(tex["width"], int) and tex["width"] > 0 + assert isinstance(tex["height"], int) and tex["height"] > 0 + assert isinstance(tex["depth"], int) and tex["depth"] >= 1 + assert isinstance(tex["mips"], int) and tex["mips"] >= 1 + assert isinstance(tex["arraysize"], int) and tex["arraysize"] >= 1 + assert isinstance(tex["format"], str) and tex["format"] not in ("", "unknown") + assert isinstance(tex["binding"], str) and len(tex["binding"]) > 0 + assert 
isinstance(tex["type"], str) and len(tex["type"]) > 0 + assert isinstance(tex["name"], str) + + def test_save_single_texture(self): + eid = self._get_first_draw_eid() + # First get the texture list to find a valid RID + result = core.run_script("textures", str(_capture), { + "event_id": eid, "save_all": "", "save_rid": "", + "format": "png", "save_output": "", + }) + rid = result["textures"][0]["resourceId"] + assert isinstance(rid, str) and int(rid) > 0 + + with tempfile.TemporaryDirectory(prefix="rdtools_test_") as tmpdir: + out_path = os.path.join(tmpdir, "test_texture.png") + result = core.run_script("textures", str(_capture), { + "event_id": eid, "save_all": "", "save_rid": rid, + "format": "png", "save_output": out_path, + }) + assert isinstance(result["saved"], list) and len(result["saved"]) == 1 + assert isinstance(result["saved"][0], str) and len(result["saved"][0]) > 0 + assert os.path.isfile(out_path) + assert os.path.getsize(out_path) > 0 + + def test_save_all_textures(self): + eid = self._get_first_draw_eid() + with tempfile.TemporaryDirectory(prefix="rdtools_test_") as tmpdir: + out_dir = os.path.join(tmpdir, "texdump") + result = core.run_script("textures", str(_capture), { + "event_id": eid, "save_all": out_dir, "save_rid": "", + "format": "png", "save_output": "", + }) + assert isinstance(result["saved"], list) + assert isinstance(result["total"], int) and result["total"] > 0 + assert len(result["saved"]) == result["total"] + for f in result["saved"]: + assert isinstance(f, str) and f.endswith(".png") + assert os.path.isfile(f) + assert os.path.getsize(f) > 0 + + +@requires_integration +class TestShaders: + def _get_first_draw_eid(self): + result = core.run_script("events", str(_capture), {"draws_only": True, "filter": ""}) + return result["events"][0]["eid"] + + def test_shaders_disassembly(self): + eid = self._get_first_draw_eid() + result = core.run_script("shaders", str(_capture), { + "event_id": eid, "stage": "", "cbuffers": False, + }) + 
assert isinstance(result["event_id"], int) and result["event_id"] == eid + assert isinstance(result["disasmTarget"], str) and len(result["disasmTarget"]) > 0 + assert isinstance(result["shaders"], dict) and len(result["shaders"]) > 0 + # At least one stage should have disassembly + for stage_name, info in result["shaders"].items(): + assert stage_name in ("vertex", "hull", "domain", "geometry", "pixel", "compute") + assert isinstance(info["disassembly"], str) and len(info["disassembly"]) > 0 + assert isinstance(info["entryPoint"], str) and len(info["entryPoint"]) > 0 + + def test_shaders_stage_filter(self): + eid = self._get_first_draw_eid() + result = core.run_script("shaders", str(_capture), { + "event_id": eid, "stage": "vertex", "cbuffers": False, + }) + assert isinstance(result["shaders"], dict) + for stage_name, info in result["shaders"].items(): + assert stage_name == "vertex" + assert isinstance(info["entryPoint"], str) and len(info["entryPoint"]) > 0 + assert isinstance(info["disassembly"], str) and len(info["disassembly"]) > 0 + + def test_shaders_cbuffers(self): + eid = self._get_first_draw_eid() + result = core.run_script("shaders", str(_capture), { + "event_id": eid, "stage": "vertex", "cbuffers": True, + }) + if "vertex" in result["shaders"]: + info = result["shaders"]["vertex"] + assert isinstance(info["constantBuffers"], list) + # FO4 vertex shaders typically have cbuffers + if len(info["constantBuffers"]) > 0: + cb = info["constantBuffers"][0] + assert isinstance(cb["name"], str) + assert isinstance(cb["index"], int) and cb["index"] >= 0 + # Each cbuffer should have either variables or an error + if "variables" in cb: + assert isinstance(cb["variables"], list) + for v in cb["variables"]: + assert isinstance(v["name"], str) and len(v["name"]) > 0 + assert isinstance(v["rows"], int) and v["rows"] >= 1 + assert isinstance(v["columns"], int) and v["columns"] >= 1 + elif "error" in cb: + assert isinstance(cb["error"], str) and len(cb["error"]) > 0 + + 
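The payload shape these shader tests assert can be summarized with a small consumer sketch. This assumes only the JSON layout the assertions above check for (`shaders` keyed by stage, each with `entryPoint`, `disassembly`, and `constantBuffers`); the helper name is hypothetical:

```python
def summarize_shaders(payload: dict) -> list[str]:
    """Flatten a `shaders` script payload into one line per bound stage.

    Assumes the layout the integration tests assert:
    {"event_id": int, "shaders": {stage: {"entryPoint": str,
     "disassembly": str, "constantBuffers": [...]}}}.
    """
    lines = []
    for stage, info in payload.get("shaders", {}).items():
        n_cbs = len(info.get("constantBuffers", []))
        lines.append(f"{stage}: entry={info['entryPoint']} cbuffers={n_cbs}")
    return lines


# Illustrative payload in the asserted shape
payload = {
    "event_id": 42,
    "shaders": {
        "vertex": {"entryPoint": "main", "disassembly": "...", "constantBuffers": [{}]},
        "pixel": {"entryPoint": "main", "disassembly": "...", "constantBuffers": []},
    },
}
print(summarize_shaders(payload))
```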
+@requires_integration +class TestMesh: + def _get_first_draw_eid(self): + result = core.run_script("events", str(_capture), {"draws_only": True, "filter": ""}) + return result["events"][0]["eid"] + + def test_mesh_input(self): + eid = self._get_first_draw_eid() + result = core.run_script("mesh", str(_capture), { + "event_id": eid, "post_vs": False, "indices": "", + }) + assert isinstance(result["event_id"], int) and result["event_id"] == eid + assert result["post_vs"] is False + assert isinstance(result["attributes"], list) and len(result["attributes"]) > 0 + assert isinstance(result["vertices"], list) and len(result["vertices"]) > 0 + # Validate attribute entries + for attr in result["attributes"]: + assert isinstance(attr["name"], str) and len(attr["name"]) > 0 + assert isinstance(attr["format"], str) and len(attr["format"]) > 0 + assert isinstance(attr["buffer"], int) and attr["buffer"] >= 0 + assert isinstance(attr["offset"], int) and attr["offset"] >= 0 + # Each vertex should have a non-negative index and attribute data + for v in result["vertices"]: + assert isinstance(v["index"], int) and v["index"] >= 0 + + def test_mesh_index_range(self): + eid = self._get_first_draw_eid() + result = core.run_script("mesh", str(_capture), { + "event_id": eid, "post_vs": False, "indices": "0-3", + }) + assert isinstance(result["vertices"], list) and len(result["vertices"]) <= 3 + assert len(result["vertices"]) > 0 + for v in result["vertices"]: + assert isinstance(v["index"], int) and v["index"] >= 0 + + +@requires_integration +class TestInfo: + def test_info_returns_metadata(self): + result = core.run_script("info", str(_capture)) + assert isinstance(result["api"], str) and len(result["api"]) > 0 + assert isinstance(result["timestamp"], int) and result["timestamp"] > 0 + assert isinstance(result["machineIdent"], int) and result["machineIdent"] >= 0 + + +@requires_integration +class TestCounters: + def test_list_counters(self): + result = core.run_script("counters", 
str(_capture), {"fetch": "", "zero_samples": False}) + assert result["mode"] == "list" + assert isinstance(result["counters"], list) and len(result["counters"]) > 0 + for c in result["counters"]: + assert isinstance(c["id"], int) and c["id"] > 0 + assert isinstance(c["name"], str) and len(c["name"]) > 0 + assert isinstance(c["unit"], str) and len(c["unit"]) > 0 + assert isinstance(c["description"], str) and len(c["description"]) > 0 + assert isinstance(c["resultType"], str) and len(c["resultType"]) > 0 + assert isinstance(c["resultByteWidth"], int) and c["resultByteWidth"] > 0 diff --git a/verify_install.py b/verify_install.py index 857bdcaf..b46fbccc 100644 --- a/verify_install.py +++ b/verify_install.py @@ -7,13 +7,16 @@ import importlib import io +import json import os import shutil import struct import subprocess import sys +import tempfile import zipfile from pathlib import Path +from urllib.parse import urlparse from urllib.request import urlopen ROOT = Path(__file__).resolve().parent @@ -30,6 +33,12 @@ JDK_VERSION = "21" JDK_ADOPTIUM_URL = "https://api.adoptium.net/v3/binary/latest/21/ga/windows/x64/jdk/hotspot/normal/eclipse" +RENDERDOC_VERSION = "1.43" +RENDERDOC_URL = "https://github.com/Kim2091/renderdoc/releases/download/v1.43_dx9_v2/RenderDoc_DX9_Windows_x86_x64.zip" +RENDERDOC_ZIP = Path(urlparse(RENDERDOC_URL).path).name +RENDERDOC_DIR_NAME = "RenderDoc_DX9" +RENDERDOC_METADATA_FILE = ".vibe-renderdoc-install.json" + def record(name: str, status: str, detail: str = ""): results.append((name, status, detail)) @@ -167,6 +176,106 @@ def check_pyghidra(): "pyghidra not installed -- run: python verify_install.py --setup") +def _find_renderdoc_dir() -> Path | None: + """Find RenderDoc installation in tools/.""" + for name in ["renderdoc", RENDERDOC_DIR_NAME]: + candidate = TOOLS_DIR / name + if candidate.is_dir() and (candidate / "qrenderdoc.exe").exists(): + return candidate + if TOOLS_DIR.is_dir(): + for d in sorted(TOOLS_DIR.iterdir(), reverse=True): + 
if d.name.lower().startswith("renderdoc") and d.is_dir(): + if (d / "qrenderdoc.exe").exists(): + return d + return None + + +def _renderdoc_expected_metadata() -> dict[str, str]: + return { + "version": RENDERDOC_VERSION, + "url": RENDERDOC_URL, + "archive": RENDERDOC_ZIP, + } + + +def _renderdoc_metadata_path(renderdoc_dir: Path) -> Path: + return renderdoc_dir / RENDERDOC_METADATA_FILE + + +def _read_renderdoc_metadata(renderdoc_dir: Path) -> dict[str, str] | None: + metadata_path = _renderdoc_metadata_path(renderdoc_dir) + if not metadata_path.is_file(): + return None + + try: + data = json.loads(metadata_path.read_text(encoding="utf-8")) + except (OSError, json.JSONDecodeError): + return None + + return data if isinstance(data, dict) else None + + +def _write_renderdoc_metadata(renderdoc_dir: Path): + metadata = _renderdoc_expected_metadata() + metadata_path = _renderdoc_metadata_path(renderdoc_dir) + metadata_path.write_text(json.dumps(metadata, indent=2) + "\n", encoding="utf-8") + + +def _get_renderdoc_update_reason(renderdoc_dir: Path) -> str | None: + metadata = _read_renderdoc_metadata(renderdoc_dir) + if metadata is None: + return "installed package version is unknown" + + expected = _renderdoc_expected_metadata() + installed_version = metadata.get("version") + if installed_version != expected["version"]: + return f"installed version {installed_version or 'unknown'} != required {expected['version']}" + + if metadata.get("url") != expected["url"]: + return "pinned download URL changed" + + return None + + +def _next_backup_path(path: Path) -> Path: + backup_path = path.with_name(f"{path.name}.backup") + index = 1 + while backup_path.exists(): + backup_path = path.with_name(f"{path.name}.backup.{index}") + index += 1 + return backup_path + + +def _backup_directory(path: Path) -> Path: + backup_path = _next_backup_path(path) + shutil.copytree(path, backup_path) + return backup_path + + +def _overlay_tree(src: Path, dst: Path): + shutil.copytree(src, dst, 
dirs_exist_ok=True) + + +def check_renderdoc() -> bool: + """Check for RenderDoc installation. Returns True if found.""" + rd = _find_renderdoc_dir() + if rd: + update_reason = _get_renderdoc_update_reason(rd) + if update_reason is None: + record("renderdoc", PASS, f"{rd} (v{RENDERDOC_VERSION})") + return True + record( + "renderdoc", + WARN, + f"{rd} needs reinstall ({update_reason}) -- run: python verify_install.py --setup", + ) + return False + record("renderdoc", WARN, + "RenderDoc not found in tools/ -- " + "run: python verify_install.py --setup") + return False + + def check_retools_import(): modules = [ "retools.common", "retools.decompiler", "retools.disasm", @@ -306,6 +415,58 @@ def setup_pyghidra(): return False +def setup_renderdoc(): + """Download and extract RenderDoc to tools/.""" + existing = _find_renderdoc_dir() + if existing: + update_reason = _get_renderdoc_update_reason(existing) + if update_reason is None: + print(f" RenderDoc already at {existing} (v{RENDERDOC_VERSION})") + return True + print(f" RenderDoc at {existing} needs reinstall: {update_reason}") + + print(f"\n Downloading RenderDoc {RENDERDOC_VERSION} (~23 MB)...") + try: + data = _download(RENDERDOC_URL, f"RenderDoc {RENDERDOC_VERSION}") + TOOLS_DIR.mkdir(parents=True, exist_ok=True) + zip_path = TOOLS_DIR / RENDERDOC_ZIP + zip_path.write_bytes(data) + print(" Extracting...") + rd_dir = TOOLS_DIR / RENDERDOC_DIR_NAME + with tempfile.TemporaryDirectory(prefix="renderdoc-install-", dir=TOOLS_DIR) as temp_dir_str: + temp_dir = Path(temp_dir_str) + with zipfile.ZipFile(zip_path) as zf: + zf.extractall(temp_dir) + members = [Path(name) for name in zf.namelist() if name and not name.endswith("/")] + top_levels = {member.parts[0] for member in members if member.parts} + + # Support both archive layouts: + # 1. A wrapped folder like RenderDoc_DX9/qrenderdoc.exe + # 2. 
A flat runtime zip with qrenderdoc.exe and x86/ at archive root + if "qrenderdoc.exe" in top_levels or len(top_levels) != 1: + extracted_root = temp_dir + else: + extracted_root = temp_dir / next(iter(top_levels)) + + if existing and existing.is_dir(): + backup_dir = _backup_directory(existing) + print(f" Backed up existing RenderDoc to {backup_dir}") + + _overlay_tree(extracted_root, rd_dir) + + zip_path.unlink() + _write_renderdoc_metadata(rd_dir) + rd_dir = _find_renderdoc_dir() or rd_dir + if rd_dir.is_dir() and (rd_dir / "qrenderdoc.exe").exists(): + record("setup:renderdoc", PASS, f"Installed to {rd_dir}") + return True + record("setup:renderdoc", FAIL, "Extraction succeeded but qrenderdoc.exe not found") + return False + except Exception as e: + record("setup:renderdoc", FAIL, f"Download failed: {e}") + return False + + def run_setup(): """Auto-install optional pyghidra dependencies: JDK, Ghidra, pyghidra.""" print("=" * 60) @@ -341,6 +502,7 @@ def main(): check_java() check_ghidra() check_pyghidra() + check_renderdoc() check_r2_runs() check_retools_import() check_sigdb() @@ -359,17 +521,27 @@ def main(): n in ("java", "ghidra-install", "pip:pyghidra") for n, s, _ in results if s == WARN ) + renderdoc_warn = any( + n == "renderdoc" for n, s, _ in results if s == WARN + ) + if renderdoc_warn: + print("=" * 60) + print("RenderDoc Setup (GPU capture analysis)") + print("=" * 60) + setup_renderdoc() + print() if ghidra_warns: run_setup() - # Re-check after setup - results.clear() - print("Re-checking after setup...\n") - check_java() - check_ghidra() - check_pyghidra() + # Re-check after setup + results.clear() + print("Re-checking after setup...\n") + check_java() + check_ghidra() + check_pyghidra() + check_renderdoc() elif warns: print(f"ALL REQUIRED CHECKS PASSED ({warns} optional warning(s)).") - print("Run 'python verify_install.py --setup' to auto-install optional Ghidra backend.") + print("Run 'python verify_install.py --setup' to auto-install optional 
dependencies.") else: print("ALL CHECKS PASSED.")