Jikkou Getting Started
This document will guide you through setting up Jikkou in a few minutes and managing your first resources.
Prerequisites
The following prerequisites are required to use Jikkou successfully.
Make sure the following is installed:
- An Apache Kafka cluster. Using Docker with Docker Compose is the easiest way to run one locally.
- Java 21 (not required when using the binary version).
Start your local Apache Kafka Cluster
You must have access to an Apache Kafka cluster to use Jikkou. The latest version of Jikkou is always built to work with the most recent version of Apache Kafka.
Make sure Docker is up and running.
Then, run the following commands:
$ git clone https://github.com/streamthoughts/jikkou
$ cd jikkou
$ ./up # use ./down for stopping the docker-compose stack
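Alternatively, if you prefer not to clone the repository, a single-broker cluster can also be started directly with Docker. The image and tag below are assumptions based on the official Apache Kafka image, which exposes a broker on localhost:9092 by default; check the Apache Kafka documentation for the exact coordinates:

# Start a single-broker Kafka cluster listening on localhost:9092 (image/tag assumed)
$ docker run -d --name kafka -p 9092:9092 apache/kafka:latest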
Run Jikkou
Download the latest distribution (For Linux)
Run the following commands to install the latest version:
wget https://github.com/streamthoughts/jikkou/releases/download/v0.34.0/jikkou-0.34.0-linux-x86_64.zip && \
unzip jikkou-0.34.0-linux-x86_64.zip && \
cp jikkou-0.34.0-linux-x86_64/bin/jikkou $HOME/.local/bin && \
source <(jikkou generate-completion) && \
jikkou --version
For more details, or for other options, see the installation guide.
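If you prefer not to install a local binary, Jikkou is also distributed as a Docker image. The image name and tag below are assumptions; see the installation guide for the exact coordinates:

# Run the Jikkou CLI from its Docker image (image name/tag assumed)
$ docker run -it --rm streamthoughts/jikkou:latest --version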
Configure Jikkou for your local Apache Kafka cluster
Set a configuration context for localhost:
jikkou config set-context localhost --config-props=kafka.client.bootstrap.servers=localhost:9092
Show the complete configuration:
jikkou config view --name localhost
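If you manage several clusters, you can create one context per cluster and switch between them. The subcommands below are assumptions based on the jikkou config command family; run jikkou config -h to confirm which subcommands your version provides:

# List the configured contexts and select the one to use (subcommand names assumed)
jikkou config get-contexts
jikkou config use-context localhost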
Finally, let’s check if your cluster is accessible:
jikkou health get kafka
(output)
If OK, you should get an output similar to:
---
name: "kafka"
status: "UP"
details:
  resource: "urn:kafka:cluster:id:KRzY-7iRTHy4d1UVyNlcuw"
  brokers:
  - id: "1"
    host: "localhost"
    port: 9092
Create your first topics
First, create a resource YAML file describing the topics you want to create on your cluster:
file: kafka-topics.yaml
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaTopicList"
items:
- metadata:
    name: 'my-first-topic'
  spec:
    partitions: 5
    replicationFactor: 1
    configs:
      cleanup.policy: 'compact'
- metadata:
    name: 'my-second-topic'
  spec:
    partitions: 4
    replicationFactor: 1
    configs:
      cleanup.policy: 'delete'
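Optionally, before touching the cluster, you can check that the resource file is well-formed and passes validations. The validate command below is an assumption; run jikkou help to confirm it is available in your version:

# Validate the resource definitions without applying any change (command assumed)
jikkou validate --files ./kafka-topics.yaml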
Then, run the following Jikkou command to trigger the topic creation on the cluster:
jikkou create -f ./kafka-topics.yaml
(output)
TASK [ADD] Add topic 'my-first-topic' (partitions=5, replicas=-1, configs=[cleanup.policy=compact]) - CHANGED
{
  "changed": true,
  "end": 1683986528117,
  "resource": {
    "name": "my-first-topic",
    "partitions": {
      "after": 5
    },
    "replicas": {
      "after": -1
    },
    "configs": {
      "cleanup.policy": {
        "after": "compact",
        "operation": "ADD"
      }
    },
    "operation": "ADD"
  },
  "failed": false,
  "status": "CHANGED"
}
TASK [ADD] Add topic 'my-second-topic' (partitions=4, replicas=-1, configs=[cleanup.policy=delete]) - CHANGED
{
  "changed": true,
  "end": 1683986528117,
  "resource": {
    "name": "my-second-topic",
    "partitions": {
      "after": 4
    },
    "replicas": {
      "after": -1
    },
    "configs": {
      "cleanup.policy": {
        "after": "delete",
        "operation": "ADD"
      }
    },
    "operation": "ADD"
  },
  "failed": false,
  "status": "CHANGED"
}
EXECUTION in 772ms
ok: 0, created: 2, altered: 0, deleted: 0 failed: 0
Tips
In the above command, we chose to use the create command to create the new topics. But we could just as easily use the update or apply command to get the same result, depending on our needs.

Finally, you can verify that the topics are created on the cluster:
jikkou get kafkatopics --default-configs
Tips
We use the --default-configs option to export the built-in default configuration for configs that have a default value.

Update Kafka Topics
Edit your kafka-topics.yaml to add a retention.ms: 86400000 property to the defined topics.
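For example, after the edit the first topic entry would look similar to this:

- metadata:
    name: 'my-first-topic'
  spec:
    partitions: 5
    replicationFactor: 1
    configs:
      cleanup.policy: 'compact'
      retention.ms: 86400000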
Then, run the following command:
jikkou update -f ./kafka-topics.yaml
Delete Kafka Topics
To delete all topics defined in the kafka-topics.yaml file, add the annotation jikkou.io/delete: true as follows:
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaTopicList"
metadata:
  annotations:
    # Annotation to specify that all resources must be deleted.
    jikkou.io/delete: true
items:
- metadata:
    name: 'my-first-topic'
  spec:
    partitions: 5
    replicationFactor: 1
    configs:
      cleanup.policy: 'compact'
- metadata:
    name: 'my-second-topic'
  spec:
    partitions: 4
    replicationFactor: 1
    configs:
      cleanup.policy: 'delete'
Then, run the following command:
$ jikkou apply \
--files ./kafka-topics.yaml \
--selector "metadata.name MATCHES (my-.*-topic)" \
--dry-run
The --dry-run option lets you review the changes that would be made before actually applying them.
Now, rerun the above command without the --dry-run option to actually delete the topics.
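In other words, once you are happy with the planned changes, run:

$ jikkou apply \
  --files ./kafka-topics.yaml \
  --selector "metadata.name MATCHES (my-.*-topic)"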
Recommendation
When working in a production environment, we strongly recommend running commands with a --selector option to ensure that changes are only applied to a specific set of resources. Also, always run your command in --dry-run mode to verify the changes that will be executed by Jikkou before continuing.

Reading the Help
To learn more about the available Jikkou commands, use jikkou help or type a command followed by the -h flag:
$ jikkou help get
Next Steps
Now, you’re ready to use Jikkou! 🚀
As next steps, we suggest reading the following documentation in this order:
- Learn Jikkou concepts
- Read the Developer Guide to understand how to use the Jikkou API for Java
- Look at the examples