Apache Kafka Resources
Here, you will find the list of resources supported for Apache Kafka.
This section describes the resource definition format for KafkaBroker entities, which can be used to describe the brokers you plan to manage on a specific Kafka cluster.
KafkaBroker
You can retrieve the state of Kafka brokers using the jikkou get kafkabrokers (or jikkou get kb) command.
$ jikkou get kafkabrokers --help
Usage:
Get all 'KafkaBroker' resources.
jikkou get kafkabrokers [-hV] [--default-configs] [--dynamic-broker-configs]
                        [--list] [--static-broker-configs]
                        [--logger-level=<level>] [-o=<format>]
                        [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkabrokers when you want to describe the state of all
resources of type 'KafkaBroker'.
OPTIONS:
      --default-configs   Describe built-in default configuration for configs
                            that have a default value.
      --dynamic-broker-configs
                          Describe dynamic configs that are configured as
                            default for all brokers or for specific broker in
                            the cluster.
  -h, --help              Show this help message and exit.
      --list              Get resources as ResourceListObject.
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: json, yaml (default yaml).
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
      --static-broker-configs
                          Describe static configs provided as broker properties
                            at start up (e.g. server.properties file).
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
(command)
$ jikkou get kafkabrokers --static-broker-configs
(output)
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaBroker"
metadata:
  name: "101"
  labels: {}
  annotations:
    kafka.jikkou.io/cluster-id: "xtzWWN4bTjitpL3kfd9s5g"
spec:
  id: "101"
  host: "localhost"
  port: 9092
  configs:
    advertised.listeners: "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092"
    authorizer.class.name: "org.apache.kafka.metadata.authorizer.StandardAuthorizer"
    broker.id: "101"
    controller.listener.names: "CONTROLLER"
    controller.quorum.voters: "101@kafka:29093"
    inter.broker.listener.name: "PLAINTEXT"
    listener.security.protocol.map: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT"
    listeners: "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://kafka:29093"
    log.dirs: "/var/lib/kafka/data"
    node.id: "101"
    offsets.topic.replication.factor: "1"
    process.roles: "broker,controller"
    transaction.state.log.replication.factor: "1"
    zookeeper.connect: ""
This section describes the resource definition format for KafkaConsumerGroup
entities, which can be used to define the
consumer groups you plan to manage on a specific Kafka cluster.
KafkaConsumerGroup
You can retrieve the state of Kafka consumer groups using the jikkou get kafkaconsumergroups (or jikkou get kcg) command.
$ jikkou get kafkaconsumergroups --help
Usage:
Get all 'KafkaConsumerGroup' resources.
jikkou get kafkaconsumergroups [-hV] [--list] [--offsets]
                               [--logger-level=<level>] [-o=<format>]
                               [--in-states=PARAM]... [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkaconsumergroups when you want to describe the state of all
resources of type 'KafkaConsumerGroup'.
OPTIONS:
  -h, --help              Show this help message and exit.
      --in-states=PARAM   If states is set, only groups in these states will be
                            returned. Otherwise, all groups are returned. This
                            operation is supported by brokers with version
                            2.6.0 or later.
      --list              Get resources as ResourceListObject.
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: json, yaml (default yaml).
      --offsets           Specify whether consumer group offsets should be
                            described.
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
(command)
$ jikkou get kafkaconsumergroups --in-states STABLE --offsets
(output)
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConsumerGroup"
metadata:
  name: "my-group"
  labels:
    kafka.jikkou.io/is-simple-consumer: false
status:
  state: "STABLE"
  members:
    - memberId: "console-consumer-b103994e-bcd5-4236-9d03-97065057e594"
      clientId: "console-consumer"
      host: "/127.0.0.1"
      assignments:
        - "my-topic-0"
      offsets:
        - topic: "my-topic"
          partition: 0
          offset: 0
  coordinator:
    id: "101"
    host: "localhost"
    port: 9092
KafkaTopic resources are used to define the topics you want to manage on your Kafka cluster(s). A KafkaTopic resource defines the number of partitions, the replication factor, and the configuration properties to be associated with a topic.
KafkaTopic
Here is the resource definition file for defining a KafkaTopic.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaTopic"                     # The resource kind (required)
metadata:
  name: <The name of the topic>        # (required)
  labels: { }
  annotations: { }
spec:
  partitions: <Number of partitions>   # (optional)
  replicas: <Number of replicas>       # (optional)
  configs:
    <config_key>: <Config Value>       # The topic config properties keyed by name to override (optional)
  configMapRefs: [ ]                   # The list of ConfigMap to be applied to this topic (optional)
The metadata.name property is mandatory and specifies the name of the Kafka topic.
To use the cluster default values for the number of partitions and replicas, you can set the properties spec.partitions and spec.replicas to -1.
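For example, the following sketch delegates both settings to the cluster defaults (the topic name is illustrative; the defaults come from the broker's num.partitions and default.replication.factor settings):

```yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic-cluster-defaults'  # illustrative topic name
spec:
  partitions: -1   # use the cluster default (num.partitions)
  replicas: -1     # use the cluster default (default.replication.factor)
```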
Here is a simple example that shows how to define a single YAML file containing two topic definitions using the KafkaTopic resource type.
file: kafka-topics.yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic-p1-r1'  # Name of the topic
  labels:
    environment: example
spec:
  partitions: 1           # Number of topic partitions (use -1 to use cluster default)
  replicas: 1             # Replication factor per partition (use -1 to use cluster default)
  configs:
    min.insync.replicas: 1
    cleanup.policy: 'delete'
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic-p2-r1'  # Name of the topic
  labels:
    environment: example
spec:
  partitions: 2           # Number of topic partitions (use -1 to use cluster default)
  replicas: 1             # Replication factor per partition (use -1 to use cluster default)
  configs:
    min.insync.replicas: 1
    cleanup.policy: 'delete'
See the official Apache Kafka documentation for details about the topic-level configs.
KafkaTopicList
If you need to define multiple topics (e.g. using a template), it may be easier to use a KafkaTopicList
resource.
Here is the resource definition file for defining a KafkaTopicList.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaTopicList"                 # The resource kind (required)
metadata:                              # (optional)
  labels: { }
  annotations: { }
items: [ ]                             # An array of KafkaTopic
Here is a simple example that shows how to define a single YAML file containing two topic definitions using the KafkaTopicList resource type. In addition, the example uses a ConfigMap object to define the topic configuration only once.
file: kafka-topics.yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopicList
metadata:
  labels:
    environment: example
items:
  - metadata:
      name: 'my-topic-p1-r1'
    spec:
      partitions: 1
      replicas: 1
      configMapRefs: [ "TopicConfig" ]
  - metadata:
      name: 'my-topic-p2-r1'
    spec:
      partitions: 2
      replicas: 1
      configMapRefs: [ "TopicConfig" ]
---
apiVersion: "core.jikkou.io/v1beta2"
kind: ConfigMap
metadata:
  name: 'TopicConfig'
data:
  min.insync.replicas: 1
  cleanup.policy: 'delete'
This section describes the resource definition format for KafkaUser entities, which can be used to manage SCRAM users for Apache Kafka.
KafkaUser
Below is the overall structure of the KafkaUser resource.
---
apiVersion: kafka.jikkou.io/v1  # The api version (required)
kind: KafkaUser                 # The resource kind (required)
metadata:
  name: <string>
  annotations:
    # force update
    kafka.jikkou.io/force-password-renewal: <boolean>
spec:
  authentications:
    - type: <enum>              # One of: scram-sha-256, scram-sha-512
      password: <string>        # leave empty to generate secure password
See below for details about all these fields.
metadata.name
[required] The name of the user.
kafka.jikkou.io/force-password-renewal
[optional]
spec.authentications
[required] The list of authentications to manage for the user.
spec.authentications[].type
[required] The authentication type: scram-sha-256 or scram-sha-512.
spec.authentications[].password
[required] The password of the user.
The following is an example of a resource describing a user:
---
# Example: file: kafka-scram-users.yaml
apiVersion: "kafka.jikkou.io/v1"
kind: "KafkaUser"
metadata:
  name: "Bob"
spec:
  authentications:
    - type: scram-sha-256
      password: null
    - type: scram-sha-512
      password: null
Kafka Users
You can retrieve the SCRAM users of a Kafka cluster using the jikkou get kafkausers (or jikkou get ku) command.
$ jikkou get kafkausers --help
Usage:
Get all 'KafkaUser' resources.
jikkou get kafkausers [-hV] [--list] [--logger-level=<level>] [--name=<name>]
                      [-o=<format>]
                      [--selector-match=<selectorMatchingStrategy>]
                      [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkausers when you want to describe the state of all resources
of type 'KafkaUser'.
OPTIONS:
  -h, --help              Show this help message and exit.
      --list              Get resources as ResourceListObject (default: false).
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
      --name=<name>       The name of the resource.
  -o, --output=<format>   Prints the output in the specified format. Valid
                            values: JSON, YAML (default: YAML).
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
      --selector-match=<selectorMatchingStrategy>
                          The selector matching strategy. Valid values: NONE,
                            ALL, ANY (default: ALL).
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
(command)
$ jikkou get ku
(output)
apiVersion: "kafka.jikkou.io/v1"
kind: "KafkaUser"
metadata:
  name: "Bob"
  labels: {}
  annotations:
    kafka.jikkou.io/cluster-id: "xtzWWN4bTjitpL3kfd9s5g"
spec:
  authentications:
    - type: "scram-sha-256"
      iterations: 8192
    - type: "scram-sha-512"
      iterations: 8192
KafkaPrincipalAuthorization resources are used to define Access Control Lists (ACLs) for principals authenticated to your Kafka cluster.
Jikkou can be used to describe all the ACL policies that need to be created on the Kafka cluster.
KafkaPrincipalAuthorization
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Alice"
spec:
  roles: [ ]            # List of roles to be added to the principal (optional)
  acls:                 # List of KafkaPrincipalACL (required)
    - resource:
        type: <The type of the resource>                          # (required)
        pattern: <The pattern to be used for matching resources>  # (required)
        patternType: <The pattern type>                           # (required)
      type: <The type of this ACL>  # ALLOW or DENY (required)
      operations: [ ]   # Operation that will be allowed or denied (required)
      host: <HOST>      # IP address from which principal will have access or will be denied (optional)
For more information on how to define authorization and ACLs, see the official Apache Kafka documentation: Security
The list below describes the valid values for the spec.acls.[].operations property:
READ
WRITE
CREATE
DELETE
ALTER
DESCRIBE
CLUSTER_ACTION
DESCRIBE_CONFIGS
ALTER_CONFIGS
IDEMPOTENT_WRITE
CREATE_TOKEN
DESCRIBE_TOKENS
ALL
For more information see official Apache Kafka documentation: Operations in Kafka
The list below describes the valid values for the spec.acls.[].resource.type property:
TOPIC
GROUP
CLUSTER
USER
TRANSACTIONAL_ID
For more information see official Apache Kafka documentation: Resources in Kafka
The list below describes the valid values for the spec.acls.[].resource.patternType property:
LITERAL: Use to allow or deny a principal access to a specific resource name.
MATCH: Use to allow or deny a principal access to all resources matching the given regex.
PREFIXED: Use to allow or deny a principal access to all resources having the given prefix.
---
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaPrincipalAuthorization"    # The resource kind (required)
metadata:
  name: "User:Alice"
spec:
  acls:
    - resource:
        type: 'topic'
        pattern: 'my-topic-'
        patternType: 'PREFIXED'
      type: "ALLOW"
      operations: [ 'READ', 'WRITE' ]
      host: "*"
    - resource:
        type: 'topic'
        pattern: 'my-other-topic-.*'
        patternType: 'MATCH'
      type: 'ALLOW'
      operations: [ 'READ' ]
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Bob"
spec:
  acls:
    - resource:
        type: 'topic'
        pattern: 'my-topic-'
        patternType: 'PREFIXED'
      type: 'ALLOW'
      operations: [ 'READ', 'WRITE' ]
      host: "*"
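The examples above use PREFIXED and MATCH patterns; for completeness, a LITERAL pattern grants access to one exact resource name. A sketch following the same format (the principal User:Carol and the topic name are illustrative):

```yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Carol"          # illustrative principal
spec:
  acls:
    - resource:
        type: 'topic'
        pattern: 'my-topic'   # matched literally, not as a prefix or regex
        patternType: 'LITERAL'
      type: 'ALLOW'
      operations: [ 'READ' ]
      host: "*"
```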
KafkaPrincipalRole
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaPrincipalRole"             # The resource kind (required)
metadata:
  name: <Name of role>                 # The name of the role (required)
spec:
  acls: [ ]                            # A list of KafkaPrincipalACL (required)
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalRole"
metadata:
  name: "KafkaTopicPublicRead"
spec:
  acls:
    - type: "ALLOW"
      operations: [ 'READ' ]
      resource:
        type: 'topic'
        pattern: '/public-([.-])*/'
        patternType: 'MATCH'
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalRole"
metadata:
  name: "KafkaTopicPublicWrite"
spec:
  acls:
    - type: "ALLOW"
      operations: [ 'WRITE' ]
      resource:
        type: 'topic'
        pattern: '/public-([.-])*/'
        patternType: 'MATCH'
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Alice"
spec:
  roles:
    - "KafkaTopicPublicRead"
    - "KafkaTopicPublicWrite"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Bob"
spec:
  roles:
    - "KafkaTopicPublicRead"
KafkaClientQuota resources are used to define the quota limits to be applied on Kafka consumers and producers. A KafkaClientQuota resource can be used to apply limits to consumers and/or producers identified by a client-id or a user principal.
KafkaClientQuota
Here is the resource definition file for defining a KafkaClientQuota.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaClientQuota"               # The resource kind (required)
metadata:                              # (optional)
  labels: { }
  annotations: { }
spec:
  type: <The quota type>               # (required)
  entity:
    clientId: <The id of the client>   # (required depending on the quota type)
    user: <The principal of the user>  # (required depending on the quota type)
  configs:
    requestPercentage: <The quota in percentage (%) of total requests>       # (optional)
    producerByteRate: <The quota in bytes for restricting data production>   # (optional)
    consumerByteRate: <The quota in bytes for restricting data consumption>  # (optional)
The list below describes the supported quota types:
USERS_DEFAULT: Set default quotas for all users.
USER: Set quotas for a specific user principal.
USER_CLIENT: Set quotas for a specific user principal and a specific client-id.
USER_ALL_CLIENTS: Set default quotas for a specific user and all clients.
CLIENT: Set default quotas for a specific client.
CLIENTS_DEFAULT: Set default quotas for all clients.
Here is a simple example that shows how to define a single YAML file containing two quota definitions using the KafkaClientQuota resource type.
file: kafka-quotas.yaml
---
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuota'
metadata:
  labels: { }
  annotations: { }
spec:
  type: 'CLIENT'
  entity:
    clientId: 'my-client'
  configs:
    requestPercentage: 10
    producerByteRate: 1024
    consumerByteRate: 1024
---
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuota'
metadata:
  labels: { }
  annotations: { }
spec:
  type: 'USER'
  entity:
    user: 'my-user'
  configs:
    requestPercentage: 10
    producerByteRate: 1024
    consumerByteRate: 1024
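A quota type that targets both a user and a client (USER_CLIENT) takes both entity fields. A sketch following the same format (the names my-user and my-client are illustrative):

```yaml
---
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuota'
spec:
  type: 'USER_CLIENT'      # quotas for a specific user principal and a specific client-id
  entity:
    user: 'my-user'        # illustrative user principal
    clientId: 'my-client'  # illustrative client-id
  configs:
    producerByteRate: 1024
    consumerByteRate: 1024
```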
KafkaClientQuotaList
If you need to define multiple quotas (e.g. using a template), it may be easier to use a KafkaClientQuotaList resource.
Here is the resource definition file for defining a KafkaClientQuotaList.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaClientQuotaList"           # The resource kind (required)
metadata:                              # (optional)
  name: <The name of the resource>
  labels: { }
  annotations: { }
items: [ ]                             # An array of KafkaClientQuota
Here is a simple example that shows how to define a single YAML file containing two KafkaClientQuota definitions using the KafkaClientQuotaList resource type.
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuotaList'
metadata:
  labels: { }
  annotations: { }
items:
  - spec:
      type: 'CLIENT'
      entity:
        clientId: 'my-client'
      configs:
        requestPercentage: 10
        producerByteRate: 1024
        consumerByteRate: 1024
  - spec:
      type: 'USER'
      entity:
        user: 'my-user'
      configs:
        requestPercentage: 10
        producerByteRate: 1024
        consumerByteRate: 1024
A KafkaTableRecord resource can be used to produce a key/value record into a given compacted topic, i.e., a topic with cleanup.policy=compact (a.k.a. KTable).
KafkaTableRecord
Here is the resource definition file for defining a KafkaTableRecord.
apiVersion: "kafka.jikkou.io/v1beta1"  # The api version (required)
kind: "KafkaTableRecord"               # The resource kind (required)
metadata:
  labels: { }
  annotations: { }
spec:
  topic: <string>      # The topic name (required)
  headers:             # The list of headers
    - name: <string>
      value: <string>
  key:                 # The record-key (required)
    type: <string>     # The record-key type. Must be one of: BINARY, STRING, JSON (required)
    data:              # The record-key in JSON serialized form.
      $ref: <url or path>  # Or a URL to a local file containing the JSON string value.
  value:               # The record-value (required)
    type: <string>     # The record-value type. Must be one of: BINARY, STRING, JSON (required)
    data:              # The record-value in JSON serialized form.
      $ref: <url or path>  # Or a URL to a local file containing the JSON string value.
The KafkaTableRecord resource has been designed primarily to manage reference data published and shared via Kafka. Therefore, it is highly recommended to use this resource only with compacted Kafka topics containing a small amount of data.
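Such a compacted topic can itself be declared with the KafkaTopic resource described earlier. A sketch (the topic name my-reference-data is illustrative):

```yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-reference-data'    # illustrative topic name
spec:
  partitions: 1
  replicas: -1                 # use the cluster default replication factor
  configs:
    cleanup.policy: 'compact'  # log compaction: keep only the latest record per key
```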
Here are some examples that show how to define a KafkaTableRecord using the different supported data types.
STRING:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"
  value:
    type: STRING
    data: |
      "foo"
JSON:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"
  value:
    type: JSON
    data: |
      {
        "foo": "bar"
      }
BINARY:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"
  value:
    type: BINARY
    data: |
      "eyJmb28iOiAiYmFyIn0K"
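For BINARY records, the data field carries the raw record bytes in Base64 form. A quick Python sketch (not part of Jikkou) shows that the payload in the example above is simply the Base64 encoding of a JSON document:

```python
import base64

# Base64 payload taken from the BINARY example above
payload = "eyJmb28iOiAiYmFyIn0K"

# Decoding yields the original JSON document (with a trailing newline)
decoded = base64.b64decode(payload).decode("utf-8")
print(decoded)  # {"foo": "bar"}
```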