Transform any website into a structured design system (DESIGN.md) simply by entering a URL.
DESIGNMD is a VS Code extension that directly analyzes a website's CSSOM (CSS Object Model) and leverages local LLMs (via llama.cpp) to generate comprehensive design guidelines ready for implementation.
- 🚀 Instant Extraction: Extract colors, typography, spacing, and components from any URL in seconds.
- 🧠 AI-Powered Insights: Uses `llama.cpp` to automatically generate design philosophy, Do's & Don'ts, and component usage guidance.
- 📂 Multi-Format Export:
  - `DESIGN.md`: Structured documentation optimized for AI agents (Cursor, Windsurf, etc.).
  - Tailwind v4: Modern `@theme` block format for seamless integration.
  - CSS Variables: Ready-to-use `:root` custom properties.
  - DTCG JSON: Standard format for design tools like Figma.
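To make the export formats concrete, here is a minimal sketch (not the extension's actual code) of how an extracted token map could be rendered as `:root` custom properties and as a Tailwind v4 `@theme` block. The function names and token names are hypothetical.

```typescript
// Hypothetical token map: name -> resolved value.
type Tokens = Record<string, string>;

// CSS Variables export: plain custom properties under :root.
function toCssVariables(tokens: Tokens): string {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name}: ${value};`,
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

// Tailwind v4 export: tokens declared inside an @theme block,
// using the --color-* namespace Tailwind expects for colors.
function toTailwindTheme(tokens: Tokens): string {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --color-${name}: ${value};`,
  );
  return `@theme {\n${lines.join("\n")}\n}`;
}

const tokens: Tokens = { "primary-1": "#08131a", surface: "#ffffff" };
console.log(toCssVariables(tokens));
console.log(toTailwindTheme(tokens));
```

Both exports are just different serializations of the same token map, which is why the extension can offer several formats from one extraction pass.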
- Launch Command: Press `Ctrl+Shift+D` or select `Design Extractor: Extract Design System from URL` from the Command Palette.
- Enter URL: Provide the website URL you wish to analyze.
- Preview: Review the extracted design system in the interactive Webview panel.
- Save: Download your preferred format and drop it into your project.
A snippet from a real DESIGN.md extracted from https://note.com:
```markdown
## Overview
Design tokens extracted via CSSOM frequency analysis.

## Colors
- **Primary-1** (#08131a): Deep, professional main color
- **Surface** (#ffffff): Clean base white
- **Accent** (#1e7b65): Vibrant accent for interactive elements

## Typography
Clean sans-serif based on Helvetica Neue for high readability.

## Do's and Don'ts
✅ Do: Maintain consistent white space using the base grid.
✅ Do: Use the accent color only for the primary call-to-action.
❌ Don't: Mix different border-radii within a single view.
```

```mermaid
graph TD
    A[URL Input] --> B[HTML/CSS Fetcher]
    B --> C[CSSOM Parser]
    C --> D{Token Extractor}
    D -->|Frequency Analysis| E[Raw Tokens]
    E --> F[Local LLM - llama.cpp]
    F --> G[Prose & Rationale]
    G --> H[Final Outputs]
    H --> I[DESIGN.md]
    H --> J[Tailwind v4]
    H --> K[CSS Vars]
    H --> L[DTCG JSON]
```
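The "Frequency Analysis" step in the pipeline above can be sketched as follows, under the assumption that the extractor counts how often each color value is declared across the CSSOM and promotes the most frequent values to candidate tokens. The function name and property filter are illustrative, not the extension's actual implementation.

```typescript
// A CSS declaration as it might come out of the CSSOM parser.
interface Declaration {
  property: string;
  value: string;
}

// Rank color values by declaration frequency, most frequent first.
function rankColors(declarations: Declaration[]): string[] {
  const counts = new Map<string, number>();
  for (const d of declarations) {
    // This sketch only considers color-bearing properties.
    if (!/color|background/.test(d.property)) continue;
    const v = d.value.trim().toLowerCase();
    counts.set(v, (counts.get(v) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([value]) => value);
}

const sample: Declaration[] = [
  { property: "color", value: "#08131a" },
  { property: "background-color", value: "#ffffff" },
  { property: "color", value: "#08131a" },
  { property: "margin", value: "8px" }, // ignored: not a color property
];
console.log(rankColors(sample)); // → ["#08131a", "#ffffff"]
```

Ranking by frequency is what lets the extractor distinguish a site's primary surface and text colors from one-off decorative values.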
```bash
git clone https://github.com/msandroid/DESIGNMD.git
cd DESIGNMD
npm install
npm run compile
```

Run llama.cpp as a backend to enable high-quality prose generation:

```bash
./llama-server -m model.gguf --port 8000
```

By default, the extension connects to `http://localhost:8000`.
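As a sketch of how a client can talk to that backend: llama.cpp's built-in server exposes a `POST /completion` endpoint that accepts a JSON body with a `prompt` and returns the generated text in a `content` field. The `buildPrompt` helper below is hypothetical, and the exact request shape may vary by llama.cpp version.

```typescript
// Hypothetical prompt builder: summarize extracted colors for the model.
function buildPrompt(colors: Record<string, string>): string {
  const list = Object.entries(colors)
    .map(([name, hex]) => `- ${name}: ${hex}`)
    .join("\n");
  return `Describe the design intent of this palette:\n${list}`;
}

// Ask the local llama.cpp server for prose (sketch; requires a running
// ./llama-server on port 8000, and Node 18+ for global fetch).
async function generateRationale(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8000/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, n_predict: 256 }),
  });
  const data = (await res.json()) as { content: string };
  return data.content;
}

console.log(buildPrompt({ surface: "#ffffff" }));
```

Usage would be `await generateRationale(buildPrompt(tokens))`; because the model runs locally, no design data ever leaves your machine.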
If you find this project helpful, please give us a star! You can also support the developer on Ko-fi.
Built with ❤️ for AI-native developers.
Optimized for AI agents like Cursor, Windsurf, and Claude Code.
