mcolomerc/cfk-kraft
Confluent For Kubernetes (CFK) - KRAFT playground

This is a playground for Kafka running in KRaft mode with Confluent for Kubernetes (CFK).

Prerequisites

  • A Kubernetes cluster.

  • Kubectl, with cluster context configured.

  • Helm

  • OpenSSL: openssl command line tool.

  • CFSSL command: cfssl

Confluent For Kubernetes

Resources:

  • KraftControllers
  • Kafka
  • Schema Registry
  • Control Center

Networking:

  • Internal communication (TLS): Autogenerated Certificates by the Operator.
  • External communication (Ingress. TLS Host based routing): Custom Certificates.

External Listener:

  • Authentication: mTLS

Generate certificates

1. Deploy

Run scripts/1_helm_setup.sh to install the Confluent for Kubernetes Operator with KRaft enabled (namespace: confluent) and the NGINX Ingress Controller.

2. Create Certificates

Replace $DOMAIN with your domain name in the certificates configuration.

Example using a fake DNS domain such as nip.io:

Use: kubectl get svc ingress-nginx-controller -o json | jq .status.loadBalancer.ingress to get the IP address of the ingress controller.

Using a fake DNS Domain: $DOMAIN = <load_balancer_ip>.nip.io

sed -i '' -e 's/$DOMAIN/<load_balancer_ip>.nip.io/g' FILENAME
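Put together, the lookup and substitution steps above can be sketched as follows. The IP address and file name here are stand-ins for illustration, and the `-i.bak` form of in-place editing is used because it works with both GNU and BSD sed:

```shell
# Stand-in for the load-balancer IP returned by the kubectl/jq lookup above
LB_IP="203.0.113.10"
DOMAIN="${LB_IP}.nip.io"

# A minimal stand-in for a certificate config that uses the $DOMAIN placeholder
printf 'CN = kafka.$DOMAIN\n' > /tmp/cert-config.cnf

# Substitute the placeholder in place ($ is escaped so the shell passes it to sed literally)
sed -i.bak "s/\$DOMAIN/${DOMAIN}/g" /tmp/cert-config.cnf
cat /tmp/cert-config.cnf
# → CN = kafka.203.0.113.10.nip.io
```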

Generate certificates: scripts/2_certs.sh

All files will be created under the certs/generated directory.

3. Secrets

Create Kubernetes Secrets: scripts/3_secrets.sh

4. Deploy Confluent Platform

  • Kraft Controllers
  • Brokers
  • Schema Registry
  • Control Center
  • Demo Topic

Update the $DOMAIN variable in the crds files:

  • Control Center ingress: controlcenter.$DOMAIN - ./scripts/crds/c3-ingress.yaml

  • Kafka ingress: kafka.$DOMAIN:443, b0.$DOMAIN, b1.$DOMAIN, b2.$DOMAIN - ./scripts/crds/kafkaingress.yaml

  • Kafka external access: domain: $DOMAIN - ./scripts/crds/kafka.yaml

Deploy: scripts/4_confluent.sh

Control Center: open https://controlcenter.$DOMAIN in a browser.

Kafka Clients

Internal clients

  • Use the autogenerated certificates.

scripts/config/internal.properties

bootstrap.servers=kafka.confluent.svc.cluster.local:9071
security.protocol=SSL
ssl.truststore.location=/mnt/sslcerts/truststore.jks
ssl.truststore.password=mystorepassword
  • Demo producer (kafka-producer-perf-test) writing to the topic demo.topic over the internal listener with SSL:

kubectl apply -f ./crds/producer.yaml

It uses the kafka-client-config-secure secret created previously:

   ...
      volumes:
        # This application pod will mount a volume for Kafka client properties from 
        # the secret `kafka-client-config-secure`
        - name: kafka-properties
          secret:
            secretName: kafka-client-config-secure
        # Confluent for Kubernetes, when configured with autogenerated certs, will create a
        # JKS keystore and truststore and store that in a Kubernetes secret named `kafka-generated-jks`.
        # Here, this client application will mount a volume from this secret so that it can use the JKS files.
        - name: kafka-ssl-autogenerated
          secret:
            secretName: kafka-generated-jks

External clients (mTLS authentication)

Client certificates: scripts/5_clients.sh

Use the custom certificates: scripts/config/external.properties

security.protocol=SSL 
ssl.truststore.type=PEM
ssl.truststore.location=<FULL_PATH>/certs/generated/cacerts.pem 
ssl.keystore.type=PKCS12
ssl.keystore.location=<FULL_PATH>/certs/generated/user.p12
ssl.keystore.password=changeme
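Before pointing a client at the keystore, it can be worth verifying that the PKCS12 bundle opens with the configured password. A minimal sketch using a throwaway self-signed certificate as a stand-in for the generated user.p12:

```shell
# Generate a throwaway key + self-signed cert (stand-ins for the generated client files)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/user-key.pem -out /tmp/user.pem \
  -days 1 -nodes -subj "/CN=testuser" 2>/dev/null

# Bundle them as PKCS12, protected with the keystore password from external.properties
openssl pkcs12 -export -in /tmp/user.pem -inkey /tmp/user-key.pem \
  -out /tmp/user.p12 -passout pass:changeme

# Verify the bundle opens with that password
openssl pkcs12 -in /tmp/user.p12 -passin pass:changeme -noout && echo "keystore password OK"
# → keystore password OK
```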

Test client:

kafka-topics --bootstrap-server kafka.$DOMAIN:443 \
  --command-config ./scripts/config/external.properties \
  --topic demo.topic \
  --describe

librdkafka client:

security.protocol: SSL 
ssl.ca.location: "<FULL_PATH>/certs/generated/cacerts.pem"
ssl.certificate.location: "<FULL_PATH>/certs/generated/user.pem"
ssl.key.location: "<FULL_PATH>/certs/generated/user-key.pem"
ssl.key.password: "mystorepassword" 

Extra - GKE

  • Create GKE (example):

gcloud beta container --project <PROJECT_ID> clusters create "cluster-gke-1" \
  --zone "<ZONE>" --no-enable-basic-auth --cluster-version "1.27.3-gke.100" \
  --release-channel "regular" --machine-type "e2-standard-4" --image-type "COS_CONTAINERD" \
  --disk-type "pd-balanced" --disk-size "500" --metadata disable-legacy-endpoints=true \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append" \
  --max-pods-per-node "110" --num-nodes "3" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --enable-ip-alias \
  --network "projects/<PROJECT_ID>/global/networks/<NETWORK_ID>" \
  --subnetwork "projects/<PROJECT_ID>/regions/<REGION>/subnetworks/<SUBNETID>" \
  --no-enable-intra-node-visibility --default-max-pods-per-node "110" \
  --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
  --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 \
  --enable-shielded-nodes --node-locations "<ZONE>"
  • Get GKE credentials (example):
gcloud container clusters get-credentials cluster-gke-1 --zone <ZONE> --project <PROJECT_ID>

Authorizer

Adding ACLs to the Kafka cluster.

  • Internal authentication using mTLS.
  • External authentication using SASL SSL.

Kraft Controller Authorizer

ACL Authorizer for Kraft Controller (StandardAuthorizer).

KraftController CRD:

spec: 
  authorization: 
    type: simple
    superUsers:
    - User:kafka
    - User:kraftcontroller
  configOverrides:
    server:
      - authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

Kafka Broker Authorizer

ACL Authorizer for Kafka Broker (StandardAuthorizer).

Kafka CRD:

spec: 
  authorization: 
    type: simple
    superUsers:
    - User:kafka
    - User:kraftcontroller
  configOverrides:
    server:
      - authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

Internal authentication using mTLS:

spec:
  listeners:
    internal:
      authentication:
        type: mtls
        principalMappingRules:
          - RULE:.*CN[\s]?=[\s]?([a-zA-Z0-9.]*)?.*/$1/
    external:
      authentication:
        type: plain
        jaasConfig:
          secretRef: credential
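The principalMappingRules entry above turns the client certificate's CN into the Kafka principal name. A rough shell equivalent of that extraction, using a hypothetical certificate DN for illustration:

```shell
# Hypothetical DN as presented by a client certificate
DN='CN=testuser,OU=platform,O=example'

# Extract the CN value, mirroring the RULE pattern in the listener config above
PRINCIPAL=$(echo "$DN" | sed -E 's/.*CN[ ]?=[ ]?([a-zA-Z0-9.]*)?.*/\1/')
echo "User:$PRINCIPAL"
# → User:testuser
```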

Clients

External authentication using SASL SSL:

kafka user configuration: external.properties

security.protocol=SASL_SSL

ssl.truststore.type=PEM
ssl.truststore.location=<FULL_PATH>/certs/generated/cacerts.pem
 
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-secret";

sasl.mechanism=PLAIN

Configure ACLs

Grant the test user permission to produce to and consume from the demo.topic topic.

See users/plain-users.json.

Configure ACLs (using the kafka user):

kafka-acls --bootstrap-server kafka.$DOMAIN:443 \
  --command-config <FULL_PATH>/external.properties \
  --add \
  --allow-principal User:'test' \
  --operation All \
  --topic demo.topic
 
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=demo.topic, patternType=LITERAL)`: 
        (principal=User:test, host=*, operation=ALL, permissionType=ALLOW) 

Test user

Configure user: test.properties

security.protocol=SASL_SSL

ssl.truststore.type=PEM
ssl.truststore.location=<FULL_PATH>/certs/generated/cacerts.pem
 
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";

sasl.mechanism=PLAIN

The test user can describe the demo.topic topic:

kafka-topics --bootstrap-server kafka.$DOMAIN:443 --command-config <PATH>/test.properties --topic demo.topic --describe                                                                                 
  
Topic: demo.topic    TopicId: uvihRw2ZRbCUPyPhDIKWZg PartitionCount: 4       ReplicationFactor: 3    Configs: min.insync.replicas=2,cleanup.policy=delete,segment.bytes=1073741824,message.format.version=3.0-IV1
        Topic: demo.topic       Partition: 0    Leader: 1       Replicas: 1,2,0 Isr: 2,1,0
        Topic: demo.topic       Partition: 1    Leader: 2       Replicas: 2,0,1 Isr: 2,1,0
        Topic: demo.topic       Partition: 2    Leader: 0       Replicas: 0,1,2 Isr: 2,1,0
        Topic: demo.topic       Partition: 3    Leader: 0       Replicas: 0,1,2 Isr: 2,1,0

Testing with an unauthorized user:

Error while executing topic command : Topic 'demo.topic' does not exist as expected 

Authorizer logs

Enable DEBUG log level for the authorizer in both the Kafka and the KraftController CRDs:

spec: 
  configOverrides:
    server:
      - authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
    log4j:  
      - log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
      - log4j.appender.authorizerAppender=org.apache.log4j.ConsoleAppender
      - log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH 
      - log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
      - log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

Monitoring

Script: ./monitoring/monitoring.sh

  • monitoring namespace
  • Prometheus deployment
  • Grafana deployment
  • Grafana ingress

Prometheus & Grafana

Prometheus

There are no Ingress rules for Prometheus. If you need to access the Prometheus web UI:

kubectl port-forward --namespace monitoring svc/prometheus-server 9090:80

Then use http://localhost:9090/ to access the Prometheus Web UI.

Grafana

Update ./monitoring/grafana/ingress.yaml with your domain name.

Use http://monitoring.$DOMAIN/ to access Grafana.

  • Get your 'admin' user password:

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
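Kubernetes stores secret values base64-encoded, which is why the command above pipes the jsonpath output through base64 --decode. The round trip, with a hypothetical password value in place of the real secret:

```shell
# Hypothetical admin password as it would appear in the secret's data field
ENCODED=$(printf 'hunter2' | base64)
echo "encoded: $ENCODED"

# Decode it the same way the kubectl pipeline above does
printf '%s' "$ENCODED" | base64 --decode && echo
# → hunter2
```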

Dashboards

Dashboards (JSON files) are located in ./monitoring/grafana/dashboards.

  • Kraft Controller (./monitoring/grafana/dashboards/cp-kraft.json)
  • Confluent Platform (./monitoring/grafana/dashboards/confluentplatform.json)
  • Confluent Platform - Overview (./monitoring/grafana/dashboards/cp-overview.json)
  • Confluent Operator (./monitoring/grafana/dashboards/cfk-operator.json)
  • Confluent Connect (./monitoring/grafana/dashboards/connect-dashboard.json)
