Releases
1 - Release v0.37.0
Introducing Jikkou 0.37.0
We’re pleased to announce Jikkou 0.37.0.
This release tackles real pain points reported by the community, from managing multiple Kafka environments in a single config to fixing broken Schema Registry workflows. Here’s the overview:
- Multiple provider instances: manage prod, staging, and dev from one configuration
- New replace command for full resource recreation
- Schema Registry overhaul: subject modes, failover, regex validation, and more
- KIP-980: create Kafka connectors in STOPPED or PAUSED state
- Directories as input for --values-files
- Jinja template file locations for reusable templates
- All resource schemas promoted to v1 API version
- Java 25 migration and REST client modernization
To install, check out the installation guide. For the full changelog, see the GitHub release page.
All Resource Schemas Promoted to v1
All resource apiVersion values have been promoted from v1beta1/v1beta2 to v1. This applies across every provider: Kafka, Schema Registry, Kafka Connect, Aiven, AWS, and core resources.
# Before (still works, but deprecated)
apiVersion: "kafka.jikkou.io/v1beta2"
# After
apiVersion: "kafka.jikkou.io/v1"
Existing YAML files using v1beta1 or v1beta2 will continue to work: Jikkou automatically resolves old versions to the latest registered resource class and normalises the apiVersion during deserialisation. But we recommend updating your files to v1 going forward.
Multiple Provider Instances
Most teams don’t run a single Kafka cluster. You have production, staging, maybe a dev cluster, and until now you needed separate Jikkou configurations or manual overrides to target each one. There was no way to register multiple instances of the same provider type.
Jikkou 0.37.0 introduces named provider instances. Register as many Kafka (or Schema Registry, or Kafka Connect) providers as you need, each with its own name and configuration. Then target the right one with --provider on the CLI or "provider" in API requests.
jikkou {
  provider.kafka-prod {
    enabled = true
    type = io.streamthoughts.jikkou.kafka.KafkaExtensionProvider
    default = true
    config = {
      client { bootstrap.servers = "prod-kafka:9092" }
    }
  }
  provider.kafka-staging {
    enabled = true
    type = io.streamthoughts.jikkou.kafka.KafkaExtensionProvider
    config = {
      client { bootstrap.servers = "staging-kafka:9092" }
    }
  }
}
# Preview changes against production
jikkou diff --values-files topics.yaml --provider kafka-prod
# Apply to staging
jikkou create --values-files topics.yaml --provider kafka-staging
How Provider Selection Works
The --provider flag is always optional. Jikkou resolves the target provider with a simple fallback chain:
- Single provider of a given type: it’s used automatically. No --provider flag and no default flag needed. If you only have one Kafka provider configured, everything works exactly as before.
- Multiple providers, one marked default = true: the default is used when --provider is omitted. You only need the flag when targeting a non-default provider.
- Multiple providers, no default: you must specify --provider on every command. Omitting it will fail with an explicit error: “No default configuration defined, and multiple configurations found for provider type”.
This means existing single-provider configurations continue to work without any changes. The default and --provider flags only matter once you add a second provider.
Provider selection works across all commands (create, update, delete, diff, validate, replace, and patch) and extends to the REST API server with a provider field in reconciliation request bodies.
New replace Command
If you’ve used Terraform’s taint or --replace flag, you know the pattern: sometimes you don’t want to update a resource in place; you want to tear it down and recreate it from scratch.
Jikkou now has its own version of this. The new replace command forces a full delete-then-create cycle on all targeted resources, regardless of whether an in-place update would be possible.
jikkou replace --values-files resources.yaml
# Preview first
jikkou replace --values-files resources.yaml --dry-run
A typical use case: development environments where you want to drop and recreate all your Kafka topics daily to start from a clean state. Rather than chaining jikkou delete and jikkou create in a script, replace handles it in one pass.
Schema Registry: Major Improvements
This release brings five targeted fixes and features for the Schema Registry provider, most addressing specific community-reported issues.
Subject Modes
Schema Registry subjects now support mode management directly from your resource definitions. This is essential for schema migration workflows: set a subject to IMPORT mode, register schemas with specific IDs, then switch back to READWRITE.
apiVersion: "schemaregistry.jikkou.io/v1"
kind: "SchemaRegistrySubject"
metadata:
  name: "user-events"
spec:
  mode: "IMPORT"
  compatibilityLevel: "BACKWARD"
  schemaType: "AVRO"
  schema: |
    {
      "type": "record",
      "name": "User",
      "fields": [{"name": "id", "type": "int"}]
    }
Supported modes: IMPORT, READONLY, READWRITE, FORWARD.
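The migration workflow described above boils down to two successive resource definitions: first put the subject in IMPORT mode (as in the example above), register your schemas, then switch it back. A minimal sketch of the second step, reusing the same subject:

```yaml
# Step 2: after registering the migrated schemas, switch back to READWRITE
apiVersion: "schemaregistry.jikkou.io/v1"
kind: "SchemaRegistrySubject"
metadata:
  name: "user-events"
spec:
  mode: "READWRITE"
  compatibilityLevel: "BACKWARD"
  schemaType: "AVRO"
  schema: |
    {
      "type": "record",
      "name": "User",
      "fields": [{"name": "id", "type": "int"}]
    }
```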
Schema ID and Version on Create
When migrating schemas between registries, you often need to preserve the original schema IDs and versions. Jikkou now lets you specify both via annotations:
metadata:
  name: "events"
  annotations:
    schemaregistry.jikkou.io/schema-id: "100"
    schemaregistry.jikkou.io/schema-version: "1"
Combined with IMPORT mode, this gives you full control over registry migrations.
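Putting the two together, a migration subject might look like this (a sketch; the subject name and schema body are illustrative):

```yaml
apiVersion: "schemaregistry.jikkou.io/v1"
kind: "SchemaRegistrySubject"
metadata:
  name: "events"
  annotations:
    schemaregistry.jikkou.io/schema-id: "100"
    schemaregistry.jikkou.io/schema-version: "1"
spec:
  mode: "IMPORT"
  schemaType: "AVRO"
  schema: |
    {
      "type": "record",
      "name": "Event",
      "fields": [{"name": "id", "type": "int"}]
    }
```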
Multiple URLs with Failover
The documentation promised comma-separated Schema Registry URLs for failover, but the raw string was passed directly to the underlying HTTP client. Failover simply didn’t work.
Jikkou now properly parses comma-separated URLs. On connection failure, it automatically tries the next URL in the list.
provider.schemaregistry {
  config = {
    url = "http://sr-primary:8081,http://sr-backup:8081,http://sr-dr:8081"
  }
}
Subject Name Regex Validation
Enforce naming conventions on your Schema Registry subjects with a regex pattern. Jikkou will reject any subject that doesn’t match, before it ever reaches the registry.
validations:
  - name: "subjectMustHaveValidName"
    type: "io.streamthoughts.jikkou.schema.registry.validation.SubjectNameRegexValidation"
    config:
      subjectNameRegex: "[a-zA-Z0-9\\._\\-]+"
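To see what that pattern accepts and rejects, here is a quick standalone check using plain java.util.regex, independent of Jikkou (the subject names are made up):

```java
import java.util.regex.Pattern;

public class SubjectNameCheck {
    public static void main(String[] args) {
        // Same pattern as in the validation config above.
        Pattern subjectName = Pattern.compile("[a-zA-Z0-9\\._\\-]+");

        // Dots, dashes, and underscores are allowed...
        System.out.println(subjectName.matcher("orders.events-value").matches()); // true
        // ...but spaces (or any other character) cause a rejection.
        System.out.println(subjectName.matcher("orders events").matches());       // false
    }
}
```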
Permanent Schema Deletion in One Step
Permanently deleting a schema used to require two separate Jikkou runs: first a soft delete, then a hard delete. In a CI/CD pipeline, that’s impractical. You declare “delete this permanently” and expect it to happen in one operation.
When you set jikkou.io/permanent-delete: true, Jikkou now automatically performs the soft delete followed by the hard delete in a single reconciliation pass.
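As a sketch, marking a subject for permanent deletion could look like this (assuming, as in previous releases, that the jikkou.io/delete annotation is what marks the resource for deletion in the first place):

```yaml
apiVersion: "schemaregistry.jikkou.io/v1"
kind: "SchemaRegistrySubject"
metadata:
  name: "user-events"
  annotations:
    jikkou.io/delete: true
    jikkou.io/permanent-delete: true
```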
Kafka Connect: KIP-980 Support
When creating Kafka connectors, the state field was silently ignored: every connector started in RUNNING state, even if you specified STOPPED or PAUSED. For production deployments, you often want to create a connector, verify its configuration, and only then start it manually.
Jikkou now uses the KIP-980 API to pass initial_state when creating connectors:
apiVersion: "kafkaconnect.jikkou.io/v1"
kind: "KafkaConnector"
metadata:
  name: "jdbc-source"
  annotations:
    kafkaconnect.jikkou.io/initial_state: "STOPPED"
spec:
  connectorClass: "io.confluent.connect.jdbc.JdbcSourceConnector"
  config:
    connection.url: "jdbc:mysql://db:3306/mydb"
Options: RUNNING, STOPPED, PAUSED.
Directory Support for --values-files
Teams organizing Kafka configurations by team or environment (e.g., configurations/cluster1/resources/teamA/values.yml) couldn’t pass a directory to --values-files. You’d get java.io.IOException: Is a directory, and wildcard expansion didn’t work either.
--values-files now accepts directories. Jikkou recursively loads all matching files and deep-merges their values:
jikkou create --values-files config/environments/prod/
config/environments/prod/
├── teamA/
│   ├── topics.yml
│   └── acls.yml
├── teamB/
│   └── topics.yml
└── shared.yaml
All files are loaded and their values are merged recursively. No more listing every file individually.
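To illustrate the recursive merge, assume two of the files above define sibling keys (contents are made up for this example):

```yaml
# teamA/topics.yml
topics:
  teamA:
    - name: orders
      partitions: 6

# shared.yaml
topics:
  defaults:
    replicas: 3
```

After loading the directory, the values object contains both topics.teamA and topics.defaults: keys from different files under the same parent are combined rather than overwritten.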
Jinja Template File Locations
Jinja’s {% include %} directive only resolved templates from the Java classpath. If you wanted to split a large Jikkou YAML into reusable fragments, you had to mess with CLASSPATH_PREFIX or embed files into container images. For a tool that champions declarative configuration, this was a rough edge.
You can now configure filesystem directories where Jinja resolves template includes:
jikkou {
jinja {
resourceLocations = [
"/etc/jikkou/templates",
"/opt/shared-templates"
]
}
}
Your templates can reference local files naturally:
{%- include "kafka/topic-defaults.jinja" -%}
{%- import "macros/common.jinja" as m -%}
{{ m.topic_config(name, partitions) }}
No classpath gymnastics required.
Under the Hood
- Java 25: The project now targets Java 25 with GraalVM 25.0.2. Native image metadata has been migrated to the newer reachability-metadata format for better compilation support.
- REST client modernization: Migrated from Jersey to RESTEasy proxy client, removing dependency on Jersey internals. OkHttp upgraded from 4.12.0 to 5.3.2.
- Security fixes: Addressed CVE-2024-47561, CVE-2025-12183, CVE-2025-55163, and CVE-2026-25526 (Jinjava).
- Bug fixes: Fixed repository double-loading on reconcile, GitHub repository file-pattern config, invalid supplier for Schema Registry basic auth, and comma-separated key-value pair parsing.
- Server: API resources now use a blocking executor for improved stability.
Wrapping Up
A lot of what’s in this release came directly from issues and feedback you opened on GitHub, so thank you. Keep it coming.
If you run into problems, open a GitHub issue. Give us a star on GitHub and join the conversation on Slack.
2 - Release v0.36.0
Introducing Jikkou 0.36.0
We’re excited to announce the release of Jikkou 0.36.0!
This release brings major new features to make Jikkou more powerful, flexible, and GitOps-friendly than ever before:
- New resource for managing AWS Glue Schemas
- New resource for defining ValidatingResourcePolicy
- New selector based on Google Common Expression Language
- New concept of Resource Repositories
- Enhanced Kafka actions
- Evolved provider configuration system
To install the new version, check out the installation guide.
For detailed release notes, see the GitHub page.
AWS Glue Schema Provider
We know that many developers and DevOps teams rely on Jikkou to manage their AWS MSK clusters.
With this release, we’re going one step further: Jikkou 0.36.0 adds a new provider for AWS Glue Schemas.
You can now fully manage schemas registered in AWS Glue Registries, just like you already do with Confluent Schema Registry.
Example:
# file: ./aws-glue-schema-user.yaml
---
apiVersion: "aws.jikkou.io/v1"
kind: "AwsGlueSchema"
metadata:
  name: "Person"
  labels:
    glue.aws.amazon.com/registry-name: Test
  annotations:
    glue.aws.amazon.com/normalize-schema: true
spec:
  compatibility: "BACKWARD"
  dataFormat: "AVRO"
  schemaDefinition: |
    {
      "namespace": "example",
      "type": "record",
      "name": "Person",
      "fields": [
        {
          "name": "id",
          "type": "int",
          "doc": "The person's unique ID (required)"
        },
        {
          "name": "firstname",
          "type": "string",
          "doc": "The person's legal firstname (required)"
        },
        {
          "name": "lastname",
          "type": "string",
          "doc": "The person's legal lastname (required)"
        }
      ]
    }
With the Jikkou CLI, you can now run commands like:
jikkou get aws-glueschemas
Learn more: AWS Provider Documentation
ValidatingResourcePolicy for Smarter Governance
Validating resources has always been a challenge in Jikkou.
Earlier releases provided a validation chain with built-in
checks (e.g., TopicMinReplicationFactor, TopicMaxNumPartitions, TopicNamePrefix).
However, these were limited, provider-specific, and mostly resource-scoped.
With Jikkou 0.36, we’re introducing ValidatingResourcePolicy, a declarative, reusable way to enforce governance and compliance across any resource.
How It Works
A ValidatingResourcePolicy defines:
Selectors:
- Match by resource kinds or operations
- Match by labels
- Use Google CEL expressions for advanced logic
Rules:
Write expressions using Google Common Expression Language to enforce validation logic.
Failure Policies:
Decide what happens when validation fails:
- FAIL: abort the operation
- CONTINUE: log but proceed
- FILTER: automatically remove invalid resources
Examples
Enforcing min/max partitions for KafkaTopic:
---
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: KafkaTopicPolicy
spec:
  failurePolicy: FAIL
  selector:
    matchResources:
      - kind: KafkaTopic
  rules:
    - name: MaxTopicPartitions
      expression: "resource.spec.partitions <= 50"
      messageExpression: "'Topic partitions MUST be <= 50, but was: ' + string(resource.spec.partitions)"
    - name: MinTopicPartitions
      expression: "resource.spec.partitions >= 3"
      message: "Topic must have at least 3 partitions"
Filtering out DELETE operations on Kafka Topics:
---
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: KafkaTopicPolicy
spec:
  failurePolicy: FILTER
  selector:
    matchResources:
      - kind: KafkaTopicChange
  rules:
    - name: FilterDeleteOperation
      expression: "resource.spec.op == 'DELETE'"
      messageExpression: "'Operation ' + resource.spec.op + ' on topics is not authorized'"
Learn more: ValidatingResourcePolicy Documentation
Resource Repositories: GitOps-Friendly Resource Management
Managing and sharing resource definitions just got easier.
Jikkou 0.36 introduces Resource Repositories, allowing you to load resources directly from GitHub repositories
or local directories.
Repositories are perfect for:
- Reusable resources across multiple environments
- Shared definitions across teams
- Keeping transient or computed resources (e.g., ConfigMap, ValidatingResourcePolicy) separate from persistent ones
- Injecting dynamic configuration without polluting your main repo
Use Case Spotlight:
Instead of keeping temporary validation policies or config maps in your CLI input, you can store them in a repository
and inject them dynamically. This makes transient resources clean, shareable, and environment-specific.
Example Configuration
jikkou {
  repositories = [
    {
      name = "github-repository"
      type = io.streamthoughts.jikkou.core.repository.GitHubResourceRepository
      config {
        repository = "streamthoughts/jikkou"
        branch = "main"
        paths = [
          "examples/",
        ]
        # Optionally set an access token for private repositories
        # token = ${?GITHUB_TOKEN}
      }
    }
  ]
}
Learn more: Repositories Documentation
Expression-based CLI Selectors
Selectors just became more powerful with Google CEL.
You can now filter resources dynamically based on any attribute.
Example: List all topics with more than 12 partitions:
jikkou get kafkatopics --selector "expr: resource.spec.partitions >= 12"
Learn more: Expression Documentation
Actions Improvements
- Added TruncateKafkaTopicRecords action: truncate topic-partitions to a specific datetime.
- Extended KafkaConsumerGroupsResetOffsets:
  - Reset offsets for multiple consumer groups.
  - New options:
    - --all: apply to all consumer groups
    - --groups: specify consumer groups
    - --includes: regex patterns for inclusion
    - --excludes: regex patterns for exclusion
Migration: Provider Configurations
Starting in 0.36.0, provider configuration has evolved to support future extensibility.
Before 0.36.0:
jikkou {
  extension.providers {
    kafka.enabled = true
  }
  kafka {
    client {
      bootstrap.servers = "localhost:9092"
    }
  }
}
After 0.36.0:
provider.kafka {
  enabled = true
  type = io.streamthoughts.jikkou.kafka.KafkaExtensionProvider
  config = {
    client {
      bootstrap.servers = "localhost:9092"
    }
  }
}
Warning: Old configs still work for now, but we recommend migrating.
Wrapping Up
We can’t wait to see what you build with this new release.
If you encounter any issues, please open a GitHub issue on our project page.
Don’t forget to give us a star on GitHub and join the community on Slack.
3 - Release v0.34.0
Introducing Jikkou 0.34.0
We’re excited to unveil the latest release of Jikkou 0.34.0.
To install the new version, please visit the installation guide. For detailed release notes, check out the GitHub page.
What’s New in Jikkou 0.34.0?
- Enhanced Aiven provider with support for Kafka topics.
- Added SSL support for Kafka Connect and Schema Registry
- Introduced dynamic connection for Kafka Connect clusters
Below is a summary of these new features with examples.
Topics for Aiven for Apache Kafka
Jikkou 0.34.0 adds a new KafkaTopic kind that can be used to manage Kafka topics directly through the Aiven API.
You can list Kafka topics using the new command below:
jikkou get avn-kafkatopics
In addition, topics can be described, created and updated using the same resource model as the Apache Kafka provider.
# file: ./aiven-kafka-topics.yaml
---
apiVersion: "kafka.aiven.io/v1beta2"
kind: "KafkaTopic"
metadata:
  name: "test"
  labels:
    tag.aiven.io/my-tag: "test-tag"
spec:
  partitions: 1
  replicas: 3
  configs:
    cleanup.policy: "delete"
    compression.type: "producer"
The main advantages of using this new resource kind are the use of the Aiven Token API to authenticate to the Aiven API and the ability to manage tags for Kafka topics.
SSL support for Kafka Connect and Schema Registry
Jikkou 0.34.0 also brings SSL support for the Kafka Connect and Schema Registry providers. It’s therefore now possible to configure these providers to authenticate using an SSL certificate.
Example for the Schema Registry:
jikkou {
  schemaRegistry {
    url = "https://localhost:8081"
    authMethod = "SSL"
    sslKeyStoreLocation = "/certs/registry.keystore.jks"
    sslKeyStoreType = "JKS"
    sslKeyStorePassword = "password"
    sslKeyPassword = "password"
    sslTrustStoreLocation = "/certs/registry.truststore.jks"
    sslTrustStoreType = "JKS"
    sslTrustStorePassword = "password"
    sslIgnoreHostnameVerification = true
  }
}
Dynamic connections for Kafka Connect clusters
Before Jikkou 0.34.0, to deploy a Kafka Connect connector, it was mandatory to configure a connection to a target cluster:
jikkou {
  extensions.provider.kafkaconnect.enabled = true
  kafkaConnect {
    clusters = [
      {
        name = "my-connect-cluster"
        url = "http://localhost:8083"
      }
    ]
  }
}
This connection could then be referenced in a connector resource definition via the label kafka.jikkou.io/connect-cluster.
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "my-connector"
  labels:
    kafka.jikkou.io/connect-cluster: "my-connect-cluster"
This approach is suitable for most use cases, but can be challenging if you need to manage a large and dynamic number of Kafka Connect clusters.
To meet this need, it is now possible to provide connection information for the target cluster directly, through the new metadata annotation jikkou.io/config-override.
Here is a simple example showing the use of the new annotation:
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "my-connector"
  labels:
    kafka.jikkou.io/connect-cluster: "my-connect-cluster"
  annotations:
    jikkou.io/config-override: |
      { "url": "http://localhost:8083" }
This new annotation can be used with Jikkou’s Jinja templating mechanism to define dynamic configuration.
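For example, the override could be rendered from template values. A sketch (the values.connect_url variable is an assumed name, not a documented one):

```yaml
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "my-connector"
  annotations:
    jikkou.io/config-override: |
      { "url": "{{ values.connect_url }}" }
```

Rendering the template with a different connect_url per environment then targets a different Kafka Connect cluster without touching the resource definition itself.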
Wrapping Up
We hope you enjoy these new features. If you encounter any issues with Jikkou v0.34.0, please feel free to open a GitHub issue on our project page. Don’t forget to give us a star on GitHub to support the team, and join us on Slack.
4 - Release v0.33.0
Introducing Jikkou 0.33.0
We’re excited to unveil the latest release of Jikkou 0.33.0.
To install the new version, please visit the installation guide. For detailed release notes, check out the GitHub page.
What’s New in Jikkou 0.33.0?
- Enhanced resource change format.
- Added support for the patch command.
- Introduced the new --status option for KafkaTopic resources.
- Exported offset-lag to the status of KafkaConsumerGroup resources.
Below is a summary of these new features with examples.
Diff/Patch Commands
In previous versions, Jikkou provided the diff command to display changes required to reconcile input resources.
However, this command lacked certain capabilities to be truly useful. This new version introduces a standardized change
format for all resource types, along with two new options for filtering changes:
- --filter-resource-op=: Filters out all resources except those corresponding to the given operations.
- --filter-change-op=: Filters out all state changes except those corresponding to the given operations.
The new output format you can expect from the diff command is as follows:
---
apiVersion: [ group/version of the change ]
kind: [ kind of the change ]
metadata: [ resource metadata ]
spec:
  # Array of state changes
  changes:
    - name: [ name of the changed state ]
      op: [ change operation ]
      before: [ value of the state before the operation ]
      after: [ value of the state after the operation ]
  data: [ static data attached to the resource ]
  op: [ resource change operation ]
The primary motivation behind this new format is the introduction of a new patch command. Prior to Jikkou 0.33.0, when using the apply command after a dry-run or a diff command, Jikkou couldn’t guarantee that the applied changes matched those returned from the previous command. With Jikkou 0.33.0, you can now directly pass the result of the diff command to the new patch command to efficiently apply the desired state changes.
Here’s a workflow to create your resources:
Step 1) Create a resource descriptor file
cat << EOF > my-topic.yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic'
  labels:
    environment: example
spec:
  partitions: 3
  replicas: 1
  configs:
    min.insync.replicas: 1
    cleanup.policy: 'delete'
EOF
Step 2) Run diff
jikkou diff -f ./my-topic.yaml > my-topic-diff.yaml
(output)
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaTopicChange"
metadata:
  name: "my-topic"
  labels:
    environment: "example"
  annotations:
    jikkou.io/managed-by-location: "my-topic.yaml"
spec:
  changes:
    - name: "partitions"
      op: "CREATE"
      after: 3
    - name: "replicas"
      op: "CREATE"
      after: 1
    - name: "config.cleanup.policy"
      op: "CREATE"
      after: "delete"
    - name: "config.min.insync.replicas"
      op: "CREATE"
      after: 1
  op: "CREATE"
Step 3) Run patch
jikkou patch -f ./my-topic-diff.yaml --mode FULL --output compact
(output)
TASK [CREATE] Create topic 'my-topic' (partitions=3, replicas=1, configs=[cleanup.policy=delete, min.insync.replicas=1]) - CHANGED
EXECUTION in 3s 797ms
ok: 0, created: 1, altered: 0, deleted: 0, failed: 0
Attempting to apply the changes a second time may result in an error from the remote system:
{
  "status": "FAILED",
  "description": "Create topic 'my-topic' (partitions=3, replicas=1,configs=[cleanup.policy=delete,min.insync.replicas=1])",
  "errors": [ {
    "message": "TopicExistsException: Topic 'my-topic' already exists."
  } ],
  ...
}
Resource Provider for Apache Kafka
Jikkou 0.33.0 also ships with some minor improvements for the Apache Kafka provider.
KafkaTopic Status
You can now describe the status of topic-partitions by using the new --status option when getting a KafkaTopic resource.
jikkou get kt --status --selector "metadata.name IN (my-topic)"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaTopic"
metadata:
  name: "my-topic"
  labels:
    kafka.jikkou.io/topic-id: "UbZI2N2YQTqfNcbKKHps5A"
  annotations:
    kafka.jikkou.io/cluster-id: "xtzWWN4bTjitpL3kfd9s5g"
spec:
  partitions: 1
  replicas: 1
  configs:
    cleanup.policy: "delete"
  configMapRefs: [ ]
status:
  partitions:
    - id: 0
      leader: 101
      isr:
        - 101
      replicas:
        - 101
KafkaConsumerGroup OffsetLags
With Jikkou 0.33.0, you can export the offset-lag of a KafkaConsumerGroup resource using the --offsets option.
jikkou get kafkaconsumergroups --offsets
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConsumerGroup"
metadata:
  name: "my-group"
  labels:
    kafka.jikkou.io/is-simple-consumer: false
status:
  state: "EMPTY"
  members: [ ]
  offsets:
    - topic: "my-topic"
      partition: 0
      offset: 16
      offset-lag: 0
  coordinator:
    id: "101"
    host: "localhost"
    port: 9092
Finally, all those new features are also completely available through the Jikkou REST Server.
Wrapping Up
We hope you enjoy these new features. If you encounter any issues with Jikkou v0.33.0, please feel free to open a GitHub issue on our project page. Don’t forget to give us a star on GitHub to support the team, and join us on Slack.
5 - Release v0.32.0
Jikkou 0.32.0: Moving Beyond Apache Kafka. Introducing new features: Extension Providers, Actions, and more!
I’m thrilled to announce the release of Jikkou 0.32.0, which packs two major features: External Extension Providers and Actions.
Highlights: What’s new in Jikkou 0.32.0?
- New External Extension Provider mechanism to extend Jikkou features.
- New extension type Action to execute specific operations against resources.
- New action for resetting consumer group offsets.
- New action for restarting connectors and tasks for Kafka Connect.
- New option selector-match to exclude/include resources from being returned or reconciled by Jikkou.
- New API to get resources by their name.
Extension Providers
Jikkou is a project that continues to reinvent and redefine itself with each new version. Initially developed exclusively to manage the configuration of Kafka topics, it can now be used to manage Schema Registries, Kafka Connect connectors, and more. But the funny thing is that Jikkou isn’t coupled with Kafka. It was designed around a concept of pluggable extensions that enable new capabilities and kinds of resources to be seamlessly added to the project. For this, Jikkou uses the Java Service Loader mechanism to automatically discover new extensions at runtime.
Unfortunately, until now there has been no official way of using this mechanism with Jikkou CLI or Jikkou API Server. For this reason, Jikkou 0.32.0 brings the capability to easily configure external extensions.
So how does it work? Well, let’s imagine you want to be able to load text files from the local filesystem using Jikkou.
First, we need to create a new Java project and add the Jikkou Core library to your project’s dependencies (io.streamthoughts:jikkou-core:0.32.0).
Then, you will need to create some POJO classes to represent your resource (e.g., V1File.class) and to implement the Collector interface:
@SupportedResource(type = V1File.class)
@ExtensionSpec(
    options = {
        @ExtensionOptionSpec(
            name = "directory",
            description = "The absolute path of the directory from which to collect files",
            type = String.class,
            required = true
        )
    }
)
@Description("FileCollector allows listing all files in a given directory.")
public final class FileCollector
        extends ContextualExtension
        implements Collector<V1File> {

    private static final Logger LOG = LoggerFactory.getLogger(FileCollector.class);

    @Override
    public ResourceListObject<V1File> listAll(@NotNull Configuration configuration,
                                              @NotNull Selector selector) {
        // Get the 'directory' option.
        String directory = extensionContext().<String>configProperty("directory").get(configuration);

        // Collect all files.
        List<V1File> objects = Stream.of(new File(directory).listFiles())
            .filter(file -> !file.isDirectory())
            .map(file -> {
                try {
                    Path path = file.toPath();
                    String content = Files.readString(path);
                    V1File object = new V1File(ObjectMeta
                        .builder()
                        .withName(file.getName())
                        .withAnnotation("system.jikkou.io/fileSize", Files.size(path))
                        .withAnnotation("system.jikkou.io/fileLastModifier", Files.getLastModifiedTime(path))
                        .build(),
                        new V1FileSpec(content)
                    );
                    return Optional.of(object);
                } catch (IOException e) {
                    LOG.error("Cannot read content from file: {}", file.getName(), e);
                    return Optional.<V1File>empty();
                }
            })
            .flatMap(Optional::stream)
            .toList();

        ObjectMeta objectMeta = ObjectMeta
            .builder()
            .withAnnotation("system.jikkou.io/directory", directory)
            .build();

        return new DefaultResourceListObject<>("FileList", "system.jikkou.io/v1", objectMeta, objects);
    }
}
Next, you will need to implement the ExtensionProvider interface to register both your extension and your resource kind.
public final class FileExtensionProvider implements ExtensionProvider {

    /**
     * Registers the extensions for this provider.
     *
     * @param registry The ExtensionRegistry.
     */
    public void registerExtensions(@NotNull ExtensionRegistry registry) {
        registry.register(FileCollector.class, FileCollector::new);
    }

    /**
     * Registers the resources for this provider.
     *
     * @param registry The ResourceRegistry.
     */
    public void registerResources(@NotNull ResourceRegistry registry) {
        registry.register(V1File.class);
    }
}
Then, the fully qualified name of the class must be added to the resource file META-INF/services/io.streamthoughts.jikkou.spi.ExtensionProvider.
Finally, all you need to do is to package your project as a tarball or ZIP archive. The archive must contain a single top-level directory containing the extension JAR files, as well as any resource files or third-party libraries required by your extensions.
To install your Extension Provider, all you need to do is unpack the archive into a desired location (e.g., /usr/share/jikkou-extensions) and configure Jikkou’s API Server or Jikkou CLI (when running the Java binary distribution, i.e., not the native version) with the jikkou.extension.paths property (e.g., jikkou.extension.paths=/usr/share/jikkou-extensions). For people who are familiar with how Kafka Connect works, it’s more or less the same mechanism.
(The full code source of this example is available on GitHub).
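In HOCON form, the resulting configuration is a one-liner (a sketch, using the install location from the example above):

```hocon
jikkou {
  # Directory where the unpacked Extension Provider archive lives
  extension.paths = "/usr/share/jikkou-extensions"
}
```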
And that’s it!
Extension Providers open up the possibility of infinitely expanding Jikkou to manage your own resources, and of using it with systems other than Kafka.
Actions
Jikkou uses a declarative approach to manage the asset state of your data infrastructure using resource descriptors written in YAML. But sometimes, ops and development teams may need to perform specific operations on their resources that cannot be included in their descriptor files. For example, you may need to reset offsets for one or multiple Kafka Consumer Groups, restart failed connectors and tasks for Kafka Connect, etc. So instead of having to switch from one tool to another, why not just use Jikkou for this?
Well, to solve that need, Jikkou 0.32.0 introduces a new type of extension called “Actions” that allows users to perform specific operations on resources.
Combined with the External Extension Provider mechanism, you can now implement the Action interface to add custom operations to Jikkou.
@Category(ExtensionCategory.ACTION)
public interface Action<T extends HasMetadata> extends HasMetadataAcceptable, Extension {

    /**
     * Executes the action.
     *
     * @param configuration The configuration
     * @return The ExecutionResultSet
     */
    @NotNull
    ExecutionResultSet<T> execute(@NotNull Configuration configuration);
}
Actions are fully integrated with the Jikkou API Server through the new endpoint /api/v1/actions/{name}/execute, with action options passed as query parameters.
Kafka Consumer Groups
Altering Consumer Group Offsets
Jikkou 0.32.0 introduces the new KafkaConsumerGroupsResetOffsets action, which allows resetting offsets of consumer groups.
Here is an example showing how to reset the group my-group to the earliest offsets for topic test.
$ jikkou action kafkaconsumergroupresetoffsets execute \
--group my-group \
--topic test \
--to-earliest
(output)
kind: "ApiActionResultSet"
apiVersion: "core.jikkou.io/v1"
metadata:
  labels: { }
  annotations:
    configs.jikkou.io/to-earliest: "true"
    configs.jikkou.io/group: "my-group"
    configs.jikkou.io/dry-run: "false"
    configs.jikkou.io/topic:
      - "test"
results:
  - status: "SUCCEEDED"
    errors: [ ]
    data:
      apiVersion: "kafka.jikkou.io/v1beta1"
      kind: "KafkaConsumerGroup"
      metadata:
        name: "my-group"
        labels:
          kafka.jikkou.io/is-simple-consumer: false
        annotations: { }
      status:
        state: "EMPTY"
        members: [ ]
        offsets:
          - topic: "test"
            partition: 1
            offset: 0
        coordinator:
          id: "101"
          host: "localhost"
          port: 9092
This action is pretty similar to the kafka-consumer-groups script that ships with Apache Kafka. You can use it to reset a consumer group to the earliest or latest offsets, to a specific datetime, or to a specific offset.
In addition, it can be executed in dry-run mode.
Deleting Consumer Groups
You can now add the core annotation jikkou.io/delete to a KafkaConsumerGroup resource to mark it for deletion:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConsumerGroup"
metadata:
name: "my-group"
labels:
kafka.jikkou.io/is-simple-consumer: false
annotations:
jikkou.io/delete: true
$ jikkou delete --files my-consumer-group.yaml -o wide
TASK [DELETE] Delete consumer group 'my-group' - CHANGED ************************************************
{
"status" : "CHANGED",
"changed" : true,
"failed" : false,
"end" : 1701162781494,
"data" : {
"apiVersion" : "core.jikkou.io/v1beta2",
"kind" : "GenericResourceChange",
"metadata" : {
"name" : "my-group",
"labels" : {
"kafka.jikkou.io/is-simple-consumer" : false
},
"annotations" : {
"jikkou.io/delete" : true,
"jikkou.io/managed-by-location" : "./my-consumer-group.yaml"
}
},
"change" : {
"before" : {
"apiVersion" : "kafka.jikkou.io/v1beta1",
"kind" : "KafkaConsumerGroup",
"metadata" : {
"name" : "my-group",
"labels" : {
"kafka.jikkou.io/is-simple-consumer" : false
},
"annotations" : { }
}
},
"operation" : "DELETE"
}
},
"description" : "Delete consumer group 'my-group'"
}
EXECUTION in 64ms
ok : 0, created : 0, altered : 0, deleted : 1, failed : 0
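The deletion flow above hinges on the core jikkou.io/delete annotation. As a rough illustration only (not the actual Jikkou implementation), a reconciler could decide whether a resource is marked for deletion like this:

```java
import java.util.Map;

// Hypothetical helper: a resource is considered marked for deletion
// when the 'jikkou.io/delete' annotation is present and truthy.
class DeletionMarker {
    static final String DELETE_ANNOTATION = "jikkou.io/delete";

    static boolean isMarkedForDeletion(Map<String, Object> annotations) {
        Object value = annotations.get(DELETE_ANNOTATION);
        return value != null && Boolean.parseBoolean(value.toString());
    }
}
```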
Kafka Connect
Restarting Connector and Tasks
This release also ships with the new KafkaConnectRestartConnectors action, which allows a user to restart all or just the failed Connector and Task instances for one or multiple named connectors.
Here are a few examples from the documentation:
- Restarting all connectors for all Kafka Connect clusters.
$ jikkou action kafkaconnectrestartconnectors execute
---
kind: "ApiActionResultSet"
apiVersion: "core.jikkou.io/v1"
metadata:
labels: {}
annotations: {}
results:
- status: "SUCCEEDED"
data:
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
name: "local-file-sink"
labels:
kafka.jikkou.io/connect-cluster: "my-connect-cluster"
annotations: {}
spec:
connectorClass: "FileStreamSink"
tasksMax: 1
config:
file: "/tmp/test.sink.txt"
topics: "connect-test"
state: "RUNNING"
status:
connectorStatus:
name: "local-file-sink"
connector:
state: "RUNNING"
workerId: "connect:8083"
tasks:
- id: 0
state: "RUNNING"
workerId: "connect:8083"
- Restarting a specific connector and its tasks for one Kafka Connect cluster
$ jikkou action kafkaconnectrestartconnectors execute \
--cluster-name my-connect-cluster \
--connector-name local-file-sink \
--include-tasks
New Selector Matching Strategy
Jikkou CLI allows you to provide one or multiple *selector expressions* in order to include or exclude resources from being returned or reconciled by Jikkou. In previous versions, selectors were cumulative, so a resource had to match all selectors to be returned. Now, you can specify a selector matching strategy to determine how expressions are combined, using the option --selector-match=[ANY|ALL|NONE].
ALL: A resource is selected if it matches all selectors.
ANY: A resource is selected if it matches one of the selectors.
NONE: A resource is selected if it matches none of the selectors.
For example, the command below will only return topics matching the regex ^__connect-* or having a name equal to _schemas.
$ jikkou get kafkatopics \
--selector 'metadata.name MATCHES (^__connect-*)' \
--selector 'metadata.name IN (_schemas)' \
--selector-match ANY
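The three strategies boil down to combining the individual selector predicates in different ways. Here is an illustrative sketch (not the actual Jikkou implementation) of that combination logic, where each selector is modeled as a plain `Predicate`:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch of how the --selector-match strategies combine selectors.
enum SelectorMatch {
    ALL, ANY, NONE;

    <T> boolean matches(List<Predicate<T>> selectors, T resource) {
        return switch (this) {
            case ALL  -> selectors.stream().allMatch(s -> s.test(resource));  // every selector must match
            case ANY  -> selectors.stream().anyMatch(s -> s.test(resource));  // at least one must match
            case NONE -> selectors.stream().noneMatch(s -> s.test(resource)); // no selector may match
        };
    }
}
```

With the two selectors from the example above, a topic named `_schemas` is selected under ANY but not under ALL, which is why the command specifies `--selector-match ANY`.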
New Get Resource by Name
In some cases, it may be necessary to retrieve a single resource by its name. In previous versions, the solution was to use selectors. Unfortunately, this approach isn't very efficient, as it involves retrieving all the resources and then filtering them. To start solving that issue, Jikkou v0.32.0 adds a new API to retrieve a resource by its name.
Example (using Jikkou CLI):
$ jikkou get kafkatopics --name _schemas
Example (using Jikkou API Server):
$ curl -sX GET \
http://localhost:28082/apis/kafka.jikkou.io/v1/kafkatopics/_schemas \
-H 'Accept:application/json'
Note: currently, not all resources have been updated to use this new API, so selectors may still be used as the default implementation by the internal Jikkou API.