An open-source toolkit for creating presentations using a spec-driven approach. Design "what to communicate" first, then let AI build "how to present it."
Traditional slide creation follows an "open a blank slide and figure it out as you go" approach. Without a clear structure, time is spent tweaking visuals while the core message gets diluted.
Spec-driven presentation applies the concept of Spec-Driven Development from software engineering to presentation creation.
| | Traditional | Spec-Driven |
|---|---|---|
| Starting point | Blank slide | Source materials and requirements |
| Design | Think while building | Define logical structure as a spec first |
| Build | Manual layout | AI builds automatically following the template |
| Quality | Ad hoc | Reviewable process based on the spec |
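To make the contrast concrete, a presentation spec might look like the sketch below. This is purely illustrative; the toolkit's actual spec format may differ:

```yaml
# Illustrative only; the toolkit's actual spec format may differ.
title: Q3 Engineering Review
audience: Leadership team
key_message: Migration finished ahead of schedule and under budget
sections:
  - heading: Context
    points:
      - Why we migrated
  - heading: Results
    points:
      - Timeline vs. plan
      - Cost vs. budget
```

The point is that the logical structure is reviewable before any slide exists; visual layout is delegated to the AI and the template.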
Want to try it quickly? Deploy the full stack from CloudShell in minutes; no local CDK or Docker required. See the CloudShell Deploy Guide.
Copy skill/ to your Kiro CLI skills directory. The engine, references, and sample templates are all included.
You can also install the engine as a Python package:
```bash
# Latest
pip install git+https://github.com/aws-samples/sample-spec-driven-presentation-maker.git#subdirectory=skill

# Specific version
pip install git+https://github.com/aws-samples/sample-spec-driven-presentation-maker.git@v0.1.0#subdirectory=skill
```

Check the installed version:

```python
import sdpm
print(sdpm.__version__)
```

```bash
cd mcp-local && uv sync
```

Add to your MCP client config:
```json
{
  "mcpServers": {
    "spec-driven-presentation-maker": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/mcp-local", "python", "server.py"]
    }
  }
}
```

```bash
cd infra
cp config.example.yaml config.yaml  # Enable/disable stacks
npm install && npx cdk deploy --all
```

For detailed setup instructions for each layer, see Getting Started.
Built on a 4-layer architecture. Each layer is a thin wrapper around the previous one. Use only the layers you need.
| Use Case | Layer | AWS |
|---|---|---|
| Personal use with Kiro CLI | Layer 1: skill/ | Not required |
| Local MCP (Claude Desktop, VS Code, Kiro) | Layer 2: skill/ + mcp-local/ | Not required |
| Team deployment | Layer 3: + mcp-server/ + infra/ | Required |
| Full stack | Layer 4: + agent/ + api/ + web-ui/ | Required |
See Architecture for details.
- Authentication: Cognito User Pool with JWT tokens (Layer 4)
- Authorization: Resource-level RBAC enforced at API and storage layers
- Encryption: S3 server-side encryption (SSE-S3), DynamoDB encryption at rest
- Network: CloudFront with OAI for static assets, API Gateway with Cognito authorizer
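The resource-level RBAC check described above can be sketched roughly as follows. The role names, permission sets, and function are hypothetical illustrations, not the project's actual authorization module:

```python
# Hypothetical sketch of a resource-level RBAC check; role names and
# structure are illustrative, not this project's authorization code.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "owner": {"read", "write", "delete"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Enforcing the same check at both the API layer and the storage layer means that even a request that bypasses the API cannot reach data its role does not permit.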
| Document | Description |
|---|---|
| Architecture | 4-layer design, data flow, auth model, MCP tool reference |
| Getting Started | Setup and deployment for Layers 1–4 |
| CloudShell Deploy | One-command deploy from CloudShell (no local CDK/Docker) |
| Connecting Agents | Amazon Bedrock AgentCore Gateway and MCP client configuration |
| Teams & Slack Integration | Chat platform integration |
| Custom Templates & Assets | Adding custom templates and icons |
```
spec-driven-presentation-maker/
├── skill/        Layer 1 – Engine, references, templates
├── mcp-local/    Layer 2 – Local stdio MCP server
├── mcp-server/   Layer 3 – Streamable HTTP MCP server (LibreOffice built-in)
├── infra/        Layer 3-4 – CDK stacks
├── agent/        Layer 4 – Strands Agent
├── api/          Layer 4 – Unified REST API Lambda
├── web-ui/       Layer 4 – React Web UI
├── shared/       Shared modules (authorization, schema)
├── scripts/      Deployment and operations helpers
├── tests/        Unit tests
└── docs/         Documentation
```
```bash
make all   # Lint + unit tests
make test  # Unit tests only
make lint  # ruff lint only
```

Contributions are welcome.
See CONTRIBUTING.md for details.
This project has adopted the Amazon Open Source Code of Conduct.
This is sample code for demonstration and educational purposes only, not for production use. Work with your security and legal teams to meet your organization's security, regulatory, and compliance requirements before deployment.
- All S3 buckets use server-side encryption (SSE-S3)
- DynamoDB tables use AWS managed encryption
- All data in transit is encrypted via TLS
- Block Public Access is enabled on all S3 buckets
- S3 Buckets: Public access blocked, server-side encryption (SSE-S3), versioning enabled
- DynamoDB: Encryption at rest enabled, point-in-time recovery enabled
- IAM: Least-privilege roles scoped per service; no wildcard resource permissions
- API Gateway: Cognito JWT authorizer on all endpoints
- CloudFront: Origin Access Identity (OAI), HTTPS-only, security headers
- Secrets: No hardcoded credentials; all secrets via environment variables or IAM roles
- AI/GenAI: Model outputs labeled as AI-generated; dataset compliance documented
- Logging: CloudWatch Logs with configurable retention; Bedrock invocation logging optional
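The "no hardcoded credentials" rule above can be illustrated with a small helper that reads settings from the environment and fails fast when one is missing. This is a generic sketch, not code from this repository:

```python
import os

def require_env(name: str) -> str:
    """Read a required setting from the environment, failing fast if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```

Failing at startup with a clear error is preferable to discovering a missing credential mid-request, and it keeps secrets out of source control entirely.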
- Enable AWS CloudTrail for audit logging
- Configure VPC endpoints for S3 and DynamoDB if running in a VPC
- Set up AWS WAF rules on CloudFront and API Gateway
- Review and tighten CORS configuration for your domain
- Enable S3 access logging on all buckets
- Configure Cognito advanced security features (MFA, compromised credentials)
- Review Amazon Bedrock model access and region settings β avoid cross-region inference profiles if data sovereignty is a concern
See CONTRIBUTING.md for more information.
This project is licensed under the MIT-0 License.
