From 7225b3f5fbf112dd862ab19120e18dc72b6ee0a0 Mon Sep 17 00:00:00 2001
From: hyeokjun32
Date: Fri, 15 May 2026 01:09:33 +0900
Subject: [PATCH] Sync ecosystem layer positioning

---
 README.ko.md | 10 +++++-----
 README.md    | 10 +++++-----
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/README.ko.md b/README.ko.md
index 76c4ef1..bdfad63 100644
--- a/README.ko.md
+++ b/README.ko.md
@@ -17,8 +17,8 @@ ONNX model
 -> optional InferEdgeAIGuard deterministic diagnosis evidence
 -> deploy / review / blocked decision

-Supporting sidecar:
-InferEdgeEnv -> local-first run evidence registry / comparability checker
+Experiment hygiene / comparability layer:
+InferEdgeEnv -> v0.1.5 v1-complete local-first run evidence registry / comparability checker
 ```

 ## Summary
@@ -27,7 +27,7 @@ InferEdgeEnv -> local-first run evidence registry / comparability checker
 - Real device execution: Jetson TensorRT + ONNX Runtime CPU
 - Structured comparison: latency, accuracy, validation evidence
 - Deployment decision: deployable / review / blocked
-- Sidecar evidence registry: InferEdgeEnv는 Lab decision과 분리된 local benchmark evidence와 comparability를 기록
+- Comparability layer: InferEdgeEnv `v0.1.5`는 Lab decision과 분리된 local benchmark evidence와 comparability를 기록
 - Local Studio: inference validation을 브라우저에서 확인하는 local-first workflow UI

 ## What Makes InferEdge Different?
@@ -113,9 +113,9 @@ bash scripts/demo_pipeline_full.sh --run-jetson-command-print
 - **InferEdge-Runtime:** Forge artifact 또는 Lab worker request를 받아 C++ 실행/검증 결과 JSON을 생성합니다.
 - **InferEdgeLab:** 결과를 비교/리포트/API/job/deployment decision으로 정리하는 owner입니다.
 - **InferEdgeAIGuard:** provenance mismatch나 suspicious result를 rule/evidence 기반으로 진단하는 optional evidence layer입니다.
-- **InferEdgeEnv:** Edge AI inference benchmark result를 local artifact와 SQLite registry로 고정하고 비교 가능성을 판정하는 local-first run evidence registry입니다.
+- **InferEdgeEnv:** `v0.1.5` v1-complete experiment hygiene / comparability layer로, Edge AI inference benchmark result를 local artifact와 SQLite registry로 고정하고 비교 가능성을 판정합니다.

-포트폴리오 경계: InferEdgeLab은 validation / decision layer이고, InferEdgeEnv는 run evidence registry / comparability layer입니다. InferEdge는 모델이 배포 가능한지 검증하고, InferEdgeEnv는 benchmark evidence가 신뢰 가능하고 비교 가능한 형태로 기록됐는지 관리합니다.
+포트폴리오 경계: InferEdgeLab은 validation / decision layer이고, InferEdgeEnv는 `v0.1.5` v1-complete experiment hygiene / comparability layer입니다. InferEdge는 모델이 배포 가능한지 검증하고, InferEdgeEnv는 benchmark evidence가 신뢰 가능하고 비교 가능한 형태로 기록됐는지 관리합니다.

 ## 현재 범위와 future work
diff --git a/README.md b/README.md
index 53567a7..458fc4c 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ Language: English | [한국어](README.ko.md)
 - Real device execution: Jetson TensorRT + ONNX Runtime CPU
 - Structured comparison: latency, accuracy, and validation evidence
 - Deployment decision: deployable / review / blocked
-- Sidecar evidence registry: InferEdgeEnv records local benchmark evidence and comparability separately from Lab decisions
+- Comparability layer: InferEdgeEnv v0.1.5 records local benchmark evidence and comparability separately from Lab decisions
 - Local Studio: interactive workflow UI for inference validation

 ## What Makes InferEdge Different?
@@ -46,8 +46,8 @@ ONNX model
 -> optional InferEdgeAIGuard provenance diagnosis
 -> deploy / review / blocked decision

-Supporting sidecar:
-InferEdgeEnv -> local-first run evidence registry / comparability checker
+Experiment hygiene / comparability layer:
+InferEdgeEnv -> v0.1.5 v1-complete local-first run evidence registry / comparability checker
 ```

 Repository roles are deliberately split:

@@ -56,9 +56,9 @@ Repository roles are deliberately split:
 - **InferEdgeForge:** ONNX artifact generation/packaging boundary.
 - **InferEdgeRuntime:** C++ execution, profiling, result export, and worker response boundary.
 - **InferEdgeLab:** compare/report/API/job workflow and final deployment decision ownership.
 - **InferEdgeAIGuard:** optional rule + evidence based failure and provenance diagnosis.
-- **InferEdgeEnv:** local-first run evidence registry and comparability checker for Edge AI inference benchmark results.
+- **InferEdgeEnv:** v0.1.5 v1-complete experiment hygiene / comparability layer; local-first run evidence registry and comparability checker for Edge AI inference benchmark results.

-Portfolio boundary: InferEdgeLab is the validation / decision layer. InferEdgeEnv is the run evidence registry / comparability layer. InferEdge validates whether a model is deployable; InferEdgeEnv records whether benchmark evidence can be trusted and compared.
+Portfolio boundary: InferEdgeLab is the validation / decision layer. InferEdgeEnv is the v0.1.5 v1-complete experiment hygiene / comparability layer. InferEdge validates whether a model is deployable; InferEdgeEnv records whether benchmark evidence can be trusted and compared.

 Implemented today: Lab API response contract, `/api/compare`, `/api/analyze` in-memory jobs, worker request/response mappings, Runtime dry-run validation/export, Forge worker/runtime summary, AIGuard provenance mismatch diagnosis, Lab decision/report evidence smoke coverage, dev-only Lab -> Runtime ONNX Runtime smoke using `yolov8n.onnx`, manual Jetson TensorRT Runtime smoke using a Forge manifest plus TensorRT engine artifact, and Runtime source-model identity preservation for compare-ready TensorRT engine results.
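The InferEdgeEnv role the patch describes (pinning benchmark results into a SQLite registry and judging whether two runs may be compared) can be sketched as below. This is a minimal illustrative sketch, not InferEdgeEnv's actual implementation: the schema, function names, and comparability rule (same model identity, device, and runtime) are all assumptions.

```python
import json
import sqlite3

# Sketch of a local-first run evidence registry with a comparability check,
# loosely following the InferEdgeEnv description above. Hypothetical schema.

def open_registry(path=":memory:"):
    """Create (or open) the SQLite registry that pins benchmark run evidence."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS runs (
               run_id     TEXT PRIMARY KEY,
               model_sha  TEXT NOT NULL,   -- source model identity
               device     TEXT NOT NULL,   -- e.g. jetson / cpu
               runtime    TEXT NOT NULL,   -- e.g. tensorrt / onnxruntime
               latency_ms REAL,
               evidence   TEXT             -- raw evidence blob, JSON-encoded
           )"""
    )
    return con

def record_run(con, run_id, model_sha, device, runtime, latency_ms, evidence):
    """Pin one benchmark run as local evidence."""
    con.execute(
        "INSERT INTO runs VALUES (?, ?, ?, ?, ?, ?)",
        (run_id, model_sha, device, runtime, latency_ms, json.dumps(evidence)),
    )
    con.commit()

def comparable(con, run_a, run_b):
    """Two runs are comparable only if model identity, device, and runtime match."""
    rows = {
        row[0]: row
        for row in con.execute(
            "SELECT run_id, model_sha, device, runtime FROM runs"
            " WHERE run_id IN (?, ?)",
            (run_a, run_b),
        )
    }
    a, b = rows[run_a], rows[run_b]
    return a[1:] == b[1:]
```

Note the boundary the patch insists on: nothing here issues a deploy / review / blocked decision (that stays with InferEdgeLab); the registry only answers whether two pieces of benchmark evidence are recorded in a trustworthy, comparable form.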