42 changes: 41 additions & 1 deletion docs/docs/index.md
@@ -1 +1,41 @@
# Coming soon
# What is streams-bootstrap?

`streams-bootstrap` is a Java library that standardizes the development and operation of Kafka-based applications (Kafka
Streams and plain Kafka clients).

The framework supports Apache Kafka 4.1 and Java 17. Its modules are published to Maven Central for straightforward
integration into existing projects.

## Why use it?

Kafka Streams and the core Kafka clients provide strong primitives for stream processing and messaging, but they do not
prescribe:

- How to structure a full application around those primitives
- How to configure applications consistently
- How to deploy and operate these services on Kubernetes
- How to perform repeatable reprocessing and cleanup
- How to handle errors and large messages uniformly

`streams-bootstrap` addresses these aspects by supplying:

1. **Standardized base classes** for Kafka Streams and client applications.
2. **A common CLI/configuration contract** for all Kafka applications.
3. **Helm-based deployment templates** and conventions for Kubernetes.
4. **Built-in reset/clean workflows** for reprocessing and state management.
5. **Consistent error-handling** and dead-letter integration.
6. **Testing infrastructure** for local development and CI environments.
7. **Optional blob-storage-backed serialization** for large messages.

## Architecture

The framework uses a modular architecture with a clear separation of concerns.

### Core Modules

- `streams-bootstrap-core`: Core abstractions for application lifecycle, execution, and cleanup
- `streams-bootstrap-cli`: CLI framework based on `picocli`
- `streams-bootstrap-test`: Utilities for testing streams-bootstrap applications
- `streams-bootstrap-large-messages`: Support for handling large Kafka messages
- `streams-bootstrap-cli-test`: Test support for CLI-based applications
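
Since the modules are published to Maven Central, depending on one is a single dependency declaration. A sketch using Gradle, assuming the `com.bakdata.kafka` group id (verify the coordinates and current version on Maven Central before use):

```gradle
dependencies {
    // core abstractions; swap in streams-bootstrap-cli, -test, etc. as needed
    implementation 'com.bakdata.kafka:streams-bootstrap-core:<version>'
}
```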

179 changes: 179 additions & 0 deletions docs/docs/user/concepts/common.md
@@ -0,0 +1,179 @@
# Common concepts

## Application types

In streams-bootstrap, every application is built from three layered abstractions:

- **App**
- **ConfiguredApp**
- **ExecutableApp**

---

### App

The **App** represents your application logic. Each application type has its own `App` interface:

- **StreamsApp** – for Kafka Streams applications
- **ProducerApp** – for producer applications
- **ConsumerApp** – for consumer applications
- **ConsumerProducerApp** – for consumer–producer applications

You implement the appropriate interface to define your application's behavior.
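
As an illustration, a minimal `StreamsApp` might look like the following sketch. The method names (`buildTopology`, `getUniqueAppId`, `defaultSerializationConfig`) reflect the streams-bootstrap API, but treat the exact signatures as indicative and check them against the version you use:

```java
public class MyStreamsApp implements StreamsApp {

    @Override
    public void buildTopology(final TopologyBuilder builder) {
        // Read from the configured input topics and forward to the output topic
        final KStream<String, String> input = builder.streamInput();
        input.to(builder.getTopics().getOutputTopic());
    }

    @Override
    public String getUniqueAppId(final StreamsTopicConfig topics) {
        // Used as the Kafka Streams application.id; must be unique per deployment
        return "my-streams-app-" + topics.getOutputTopic();
    }

    @Override
    public SerdeConfig defaultSerializationConfig() {
        return new SerdeConfig(StringSerde.class, StringSerde.class);
    }
}
```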

---

### ConfiguredApp

A **ConfiguredApp** pairs an `App` with its configuration. Examples include:

- `ConfiguredStreamsApp<T extends StreamsApp>`
- `ConfiguredProducerApp<T extends ProducerApp>`
- `ConfiguredConsumerApp<T extends ConsumerApp>`
- `ConfiguredConsumerProducerApp<T extends ConsumerProducerApp>`

This layer handles Kafka property creation, combining:

- base configuration
- app-specific configuration
- user configuration
- runtime configuration, e.g., brokers and schema registry

---

### ExecutableApp

An **ExecutableApp** is a `ConfiguredApp` with runtime configuration applied, making it ready to execute.
It can create:

- a **Runner** for running the application
- a **CleanUpRunner** for cleanup operations

---

### Usage Pattern

1. You implement an **App**.
2. The system wraps it in a **ConfiguredApp**, applying the configuration.
3. Runtime configuration is then applied to create an **ExecutableApp**, which can be:

- **run**, or
- **cleaned up**.

---

## Application lifecycle

Applications built with streams-bootstrap follow a defined lifecycle with specific states and transitions.

The lifecycle is managed through the `KafkaApplication` base class and provides several extension points for
customization.

| Phase | Description | Entry Point |
|----------------|--------------------------------------------------------------------------|----------------------------------------------------------|
| Initialization | Parse CLI arguments, inject environment variables, configure application | `startApplication()` or `startApplicationWithoutExit()` |
| Preparation | Execute pre-run/pre-clean hooks | `onApplicationStart()`, `prepareRun()`, `prepareClean()` |
| Execution | Run main application logic or cleanup operations | `run()`, `clean()`, `reset()` |
| Shutdown | Stop runners, close resources, cleanup | `stop()`, `close()` |
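
A typical `main` method delegates to the initialization entry point from the table above. The wrapper class name here is an illustrative assumption; check the API of your streams-bootstrap version:

```java
public class Main {

    public static void main(final String[] args) {
        // Parses CLI arguments and environment variables, then runs the app;
        // startApplication() terminates the JVM with the resulting exit code
        KafkaApplication.startApplication(
                new SimpleKafkaStreamsApplication<>(MyStreamsApp::new), args);
    }
}
```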

### Running an application

Applications built with streams-bootstrap can be started in two primary ways:

- **Via Command Line Interface**: When packaged as a runnable JAR (for example, in a container),
the `run` command is the default entrypoint. An example invocation:

```bash
java -jar example-app.jar \
run \
--bootstrap-servers kafka:9092 \
--input-topics input-topic \
--output-topic output-topic \
--schema-registry-url http://schema-registry:8081
```

- **Programmatically**: You can create a `Runner` from an `ExecutableApp` to run it directly.

```java
// For streams applications
try (StreamsRunner runner = streamsApp.createRunner()) {
runner.run();
}

// For producer applications
try (Runner runner = producerApp.createRunner()) {
runner.run();
}
```

### Cleaning an application

A built-in mechanism is provided to clean up all resources associated with an application.

When the cleanup operation is triggered, the following resources are removed:

| Resource Type | Description | Streams Apps | Producer Apps | Consumer Apps | Consumer-Producer Apps |
|---------------------|-----------------------------------------------------------|--------------|---------------|---------------|------------------------|
| Output Topics | Topics the application produces to | ✓ | ✓ | N/A | ✓ |
| Intermediate Topics | Topics the application produces to and consumes from      | ✓            | N/A           | N/A           | N/A                    |
| Internal Topics | Topics for state stores or repartitioning (Kafka Streams) | ✓ | N/A | N/A | N/A |
| Consumer Groups | Consumer group metadata | ✓ | N/A | ✓ | ✓ |

Cleanup can be triggered:

- **Via Command Line**: When packaged as a runnable JAR, the `clean` command can be used.

```bash
java -jar example-app.jar \
clean \
--bootstrap-servers kafka:9092 \
--output-topic output-topic
```
- **Programmatically**:

```java
// For streams applications
try (StreamsCleanUpRunner cleanUpRunner = streamsApp.createCleanUpRunner()) {
    cleanUpRunner.clean();
}

// For producer applications
try (CleanUpRunner cleanUpRunner = producerApp.createCleanUpRunner()) {
    cleanUpRunner.clean();
}
```

Cleanup operations are idempotent, so they can be safely retried.

## Configuration

Kafka properties are applied in the following order (later values override earlier ones):

1. Base configuration
2. App-specific config from `createKafkaProperties()`
3. Kafka-specific environment variables with the `KAFKA_` prefix
4. Runtime args (`--bootstrap-servers`, `--schema-registry-url`, `--kafka-config`)
5. Serialization config
6. Group ID configuration
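
The "later overrides earlier" semantics correspond to a simple ordered map merge, sketched here in plain Java (independent of streams-bootstrap itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigPrecedence {

    /** Merges configuration layers in order; later layers override earlier ones. */
    @SafeVarargs
    public static <K, V> Map<K, V> merge(final Map<? extends K, ? extends V>... layers) {
        final Map<K, V> merged = new LinkedHashMap<>();
        for (final Map<? extends K, ? extends V> layer : layers) {
            merged.putAll(layer);
        }
        return merged;
    }

    /** Demo: a runtime value overrides the same key from an earlier layer. */
    public static Object resolvedAcks() {
        return merge(Map.of("acks", "1"), Map.of("acks", "all")).get("acks");
    }
}
```

For example, merging a base layer with `acks=1` and a runtime layer with `acks=all` resolves to `all`.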

Environment variables with the `APP_` prefix (configurable via `ENV_PREFIX`) are automatically parsed and converted
to CLI arguments:

```text
APP_BOOTSTRAP_SERVERS → --bootstrap-servers
APP_SCHEMA_REGISTRY_URL → --schema-registry-url
APP_OUTPUT_TOPIC → --output-topic
```
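
The mapping above amounts to stripping the prefix, lowercasing, and replacing underscores with dashes. A self-contained sketch of that rule in plain Java (illustrative only; the actual parsing is done by streams-bootstrap):

```java
import java.util.Locale;

public class EnvToCli {

    /** Converts an APP_-prefixed environment variable name to its CLI option. */
    public static String toCliOption(final String envVar) {
        final String stripped = envVar.substring("APP_".length());
        return "--" + stripped.toLowerCase(Locale.ROOT).replace('_', '-');
    }
}
```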

### Common CLI Configuration Options

- `--bootstrap-servers`: Kafka bootstrap servers (required)
- `--schema-registry-url`: URL for the Schema Registry. When this option is provided, schema cleanup is handled as part
  of the `clean` command
- `--kafka-config`: Key-value Kafka configuration
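
For example, arbitrary client properties can be passed via `--kafka-config`. The invocation below is a sketch; the property names are standard Kafka client settings, but verify the exact option syntax against your streams-bootstrap version:

```bash
java -jar example-app.jar \
    run \
    --bootstrap-servers kafka:9092 \
    --input-topics input-topic \
    --output-topic output-topic \
    --kafka-config max.poll.records=250,fetch.max.bytes=52428800
```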
150 changes: 150 additions & 0 deletions docs/docs/user/concepts/producer.md
@@ -0,0 +1,150 @@
# Producer applications

Producer applications generate data and send it to Kafka topics. They can be used to produce messages from various
sources, such as databases, files, or real-time events.

streams-bootstrap provides a structured way to build producer applications with consistent configuration handling,
command-line support, and lifecycle management.

---

## Application lifecycle

### Running an application

Producer applications are executed using the `ProducerRunner`, which runs the producer logic defined by the application.

Unlike Kafka Streams applications, producer applications typically:

- Run to completion and terminate automatically, or
- Run continuously when implemented as long-lived services

The execution model is fully controlled by the producer implementation and its runnable logic.
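
A run-to-completion producer might look like the following sketch. The interface and builder methods (`buildRunnable`, `createProducer`, `defaultSerializationConfig`) reflect the streams-bootstrap `ProducerApp` API, but treat the exact signatures as assumptions to verify against your version:

```java
public class MyProducerApp implements ProducerApp {

    @Override
    public ProducerRunnable buildRunnable(final ProducerBuilder builder) {
        return () -> {
            // Run-to-completion: send a single record, then terminate
            try (final Producer<String, String> producer = builder.createProducer()) {
                producer.send(new ProducerRecord<>(
                        builder.getTopics().getOutputTopic(), "key", "value"));
            }
        };
    }

    @Override
    public SerializerConfig defaultSerializationConfig() {
        return new SerializerConfig(StringSerializer.class, StringSerializer.class);
    }
}
```

A long-lived service would instead loop (or poll an external source) inside the returned runnable.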

---

### Cleaning an application

Producer applications support a dedicated `clean` command.

```bash
java -jar my-producer-app.jar \
    clean \
    --bootstrap-servers localhost:9092 \
    --output-topic my-topic
```

The clean process can perform the following operations:

- Delete output topics
- Delete registered schemas from Schema Registry
- Execute custom cleanup hooks defined by the application

Applications can register custom cleanup logic by overriding `setupCleanUp`.

---

## Configuration

### Topics

Producer applications support output topic configuration:

- `--output-topic`: Default output topic for produced messages
- `--labeled-output-topics`: Named output topics with different message types
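
The `--labeled-output-topics` value follows a `label1=topic1,label2=topic2` shape. A self-contained sketch of that parsing rule in plain Java (illustrative only; the real option parsing is handled by picocli inside streams-bootstrap):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LabeledTopics {

    /** Parses a "label1=topic1,label2=topic2" spec into an ordered map. */
    public static Map<String, String> parse(final String spec) {
        final Map<String, String> topics = new LinkedHashMap<>();
        for (final String entry : spec.split(",")) {
            final String[] labelAndTopic = entry.split("=", 2);
            topics.put(labelAndTopic[0].trim(), labelAndTopic[1].trim());
        }
        return topics;
    }
}
```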

### Kafka properties

#### Base configuration

The following Kafka properties are configured by default for Producer applications in streams-bootstrap:

- `max.in.flight.requests.per.connection = 1`
- `acks = all`
- `compression.type = gzip`

#### Custom Kafka properties

Kafka configuration can be customized by overriding `createKafkaProperties()`:

```java
@Override
public Map<String, Object> createKafkaProperties() {
return Map.of(
ProducerConfig.RETRIES_CONFIG, 3,
ProducerConfig.BATCH_SIZE_CONFIG, 16384,
ProducerConfig.LINGER_MS_CONFIG, 5
);
}
```

---

### Lifecycle hooks

Producer applications can register cleanup logic via `setupCleanUp`. This method allows you to attach:

- **Cleanup hooks** – for general cleanup logic not tied to Kafka topics
- **Topic hooks** – for reacting to topic lifecycle events (e.g. deletion)

#### Clean up

Custom cleanup logic that is not tied to Kafka topics can be registered via cleanup hooks:

```java
@Override
public ProducerCleanUpConfiguration setupCleanUp(
final AppConfiguration<ProducerTopicConfig> configuration) {

return ProducerApp.super.setupCleanUp(configuration)
.registerCleanHook(() -> {
// Custom cleanup logic
});
}
```

#### Topic hooks

Topic hooks should be used for topic-related cleanup or side effects, such as releasing external
resources associated with a topic or logging topic deletions:

```java
@Override
public ProducerCleanUpConfiguration setupCleanUp(
final AppConfiguration<ProducerTopicConfig> configuration) {

return ProducerApp.super.setupCleanUp(configuration)
.registerTopicHook(new TopicHook() {

@Override
public void deleted(final String topic) {
// Called when a managed topic is deleted
System.out.println("Deleted topic: " + topic);
}

@Override
public void close() {
// Optional closing of connections/resources
}
});
}
```

## Command line interface

Producer applications inherit standard CLI options from `KafkaApplication`. The following CLI options are
producer-specific:

| Option | Description | Default |
|---------------------------|-------------------------------------------|---------|
| `--output-topic` | Default output topic | - |
| `--labeled-output-topics` | Named output topics (`label1=topic1,...`) | - |

---

## Deployment

TODO