Assertify is an intelligent testing assistant that analyzes your project description, asks clarifying questions, and generates comprehensive test suites across multiple testing styles and frameworks.
- AI project analysis that classifies your work and requests extra context before any generation starts.
- Dynamic question generation with a resilient fallback list so you never get stuck when the API is unavailable.
- Configurable settings to persist a default context, disable unwanted test types, and tailor boilerplate sample sizes.
- Comprehensive test coverage spanning unit, integration, feature, performance, and manual scenarios, each tagged with prioritization metadata (see the sketch after this list).
- Boilerplate code generation for eight popular frameworks, respecting the frameworks you disable in settings.
- Flexible export and saving options: JSON/CSV downloads plus a generation history kept in local storage.
- Responsive, themable UI that works on desktop, tablet, and mobile with light/dark theme support.
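To make the pipeline concrete, the generated scenarios can be pictured as small typed records. The shape below is a hypothetical sketch; field names such as `priority` and `steps` are assumptions, not Assertify's actual schema:

```typescript
// Hypothetical shape of a generated scenario; the real schema may differ.
type TestType = "unit" | "integration" | "feature" | "performance" | "manual";
type Priority = "high" | "medium" | "low";

interface TestScenario {
  id: string;
  type: TestType;     // one of the five supported test categories
  priority: Priority; // prioritization metadata used for filtering
  title: string;      // short human-readable name
  steps: string[];    // ordered steps, mainly for manual scenarios
  expected: string;   // expected outcome
}

const example: TestScenario = {
  id: "ts-001",
  type: "unit",
  priority: "high",
  title: "Rejects an empty project description",
  steps: ["Submit the form with an empty description"],
  expected: "A validation error is shown and no API call is made",
};
```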
Supported test frameworks:

- Vitest (JavaScript/TypeScript)
- Jest (JavaScript/TypeScript)
- Pytest (Python)
- Unittest (Python)
- JUnit (Java)
- PHPUnit (PHP)
- RSpec (Ruby)
- Mocha (JavaScript)
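As a rough illustration of the boilerplate output, a generated Vitest file might resemble the snippet below. This is a hand-written sketch, not verbatim Assertify output, and the inline `isValid` helper is a stand-in for your own project code:

```typescript
import { describe, expect, it } from "vitest";

// Stand-in for the project code a real generated suite would import.
const isValid = (text: string): boolean => text.trim().length > 0;

describe("project description validation", () => {
  it("rejects an empty description", () => {
    expect(isValid("")).toBe(false);
  });

  it("accepts a non-empty description", () => {
    expect(isValid("A REST API for bookings")).toBe(true);
  });
});
```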
Built with:

- Next.js 14
- TypeScript
- Tailwind CSS
- Multi-LLM Support (OpenAI, Anthropic Claude, Google Gemini)
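For a sense of what multi-LLM support involves, here is a minimal dispatch sketch assuming direct browser calls to each vendor's REST API. The endpoints and response shapes follow the vendors' public APIs, but this is illustrative code, not Assertify's implementation:

```typescript
type Provider = "openai" | "anthropic" | "gemini";

// Sends one prompt to the selected provider and returns the completion text.
async function complete(
  provider: Provider,
  apiKey: string,
  model: string,
  prompt: string,
): Promise<string> {
  if (provider === "openai") {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
    });
    return (await res.json()).choices[0].message.content;
  }
  if (provider === "anthropic") {
    const res = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({
        model,
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    return (await res.json()).content[0].text;
  }
  // Gemini takes the key as a query parameter rather than a header.
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  );
  return (await res.json()).candidates[0].content.parts[0].text;
}
```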
Getting started:

- Install dependencies: `npm install`
- Start the dev server: `npm run dev`
- Open http://localhost:3000 in your browser
- Enter your API key in the UI. Keys are stored in your browser's `sessionStorage` in plaintext for convenience; they are not encrypted or transmitted to our servers. Be aware that browser extensions or any cross-site scripting (XSS) vulnerability in this tab could potentially access this session data.
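In code terms, that storage pattern amounts to something like the following; the storage key name is an assumption for illustration:

```typescript
// Illustrative only: the actual storage key used by the app may differ.
const STORAGE_KEY = "assertify:apiKey";

function saveApiKey(key: string): void {
  // sessionStorage survives reloads but is cleared when the tab closes.
  sessionStorage.setItem(STORAGE_KEY, key);
}

function loadApiKey(): string | null {
  // Any script running in this page (including an injected XSS payload or
  // an extension with page access) can read the key back the same way.
  return sessionStorage.getItem(STORAGE_KEY);
}
```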
Usage:

- Select your preferred LLM provider (OpenAI, Anthropic, or Gemini) and model from the landing page or settings.
- Provide your project description from the landing page; the app automatically classifies the category.
- Answer the context questions (or skip) so the generator can tailor scenarios to your needs.
- Review generated tests on the results page, filter by type or priority, and inspect the suggested testing strategy and risk areas.
- Generate boilerplate code for your preferred frameworks, or export the dataset as JSON/CSV (a download sketch follows this list).
- Manage settings at `/settings` to define default context, disable frameworks, and control boilerplate sample sizes.
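The JSON/CSV export boils down to a client-side file download. A minimal sketch, reusing the hypothetical `TestScenario` shape from the earlier example (Assertify's real export code may differ):

```typescript
// Triggers a browser download of `content` under the given filename.
function download(filename: string, mime: string, content: string): void {
  const url = URL.createObjectURL(new Blob([content], { type: mime }));
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}

// Quote a field and double any embedded quotes, per RFC 4180.
const csvField = (value: string): string => `"${value.replace(/"/g, '""')}"`;

function exportJson(scenarios: TestScenario[]): void {
  download("assertify-tests.json", "application/json", JSON.stringify(scenarios, null, 2));
}

function exportCsv(scenarios: TestScenario[]): void {
  const header = "id,type,priority,title";
  const rows = scenarios.map((s) => [s.id, s.type, s.priority, csvField(s.title)].join(","));
  download("assertify-tests.csv", "text/csv", [header, ...rows].join("\n"));
}
```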
Roadmap:

- Offer more granular configuration for question generation (e.g., required question count, tone, or domain presets).
- Optimize large test suites by streaming responses and deduplicating similar scenarios before persistence.
- Provide deeper integrations with CI/CD by exporting ready-to-run suites or syncing with test management tools.
License: MIT