Exposing Kafka in Docker Desktop Kubernetes

In an effort to get more fluent with Kubernetes, I’m using it instead of Docker and Docker Compose for my local development. I’m using Docker Desktop on a Mac, so the details will be slightly different from minikube or a Windows setup.

In most cases I can take a Docker Compose file and recreate it as Services and Deployments that my code can interact with. But in the case of Kafka, there were a couple of complexities I needed to iron out to get it running. The two main problems were exposing the Kafka UI web app running inside the cluster, and exposing Kafka in a way that lets both Kafka UI and applications running outside the cluster (say, from an IDE or the command line) access topics.

The first problem is exposing Kafka both inside and outside the cluster. Inside the cluster it gets used by Kafka UI, and outside the cluster it gets used by the code I’m writing. The trick is in both the Service and Deployment definitions. In the Service definition, I set the type to NodePort, but I define ports for both internal and external access, with only the external port using a nodePort:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: dev
spec:
  type: NodePort
  ports:
  - port: 9092
    name: inner
    targetPort: 9092
    protocol: TCP
  - port: 30092
    name: outer
    targetPort: 30092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka-broker
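
If you want to sanity-check the Service after applying it, a quick kubectl get should show both ports, with 30092 exposed as a nodePort:

kubectl -n dev get service kafka-service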

Setting up the broker is more complex. First, we define two listener names, INSIDE and OUTSIDE, and map both of them to the PLAINTEXT security protocol:

- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"

Then, make sure the container is exposed on two ports:

ports:
- containerPort: 9092
- containerPort: 30092

Port 9092 is the standard Kafka port and is used inside the cluster. Port 30092 falls in the allowed NodePort range (30000–32767) and is the port used outside the cluster. Now we tell Kafka to listen on those ports using the listener names, and advertise the INSIDE listener via $(KAFKA_SERVICE_SERVICE_HOST), the environment variable Kubernetes injects for the kafka-service Service, and the OUTSIDE listener via the kafka.example.net domain we’ll map in /etc/hosts later:

- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: "INSIDE"
- name: KAFKA_LISTENERS
  value: "INSIDE://:9092,OUTSIDE://:30092"
- name: KAFKA_ADVERTISED_LISTENERS
  value: "INSIDE://$(KAFKA_SERVICE_SERVICE_HOST):9092,OUTSIDE://kafka.example.net:30092"

Here’s the full Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
      - env:
        - name: KAFKA_BROKER_ID
          value: "1"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: $(ZOOKEEPER_SERVICE_SERVICE_HOST):2181
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INSIDE"
        - name: KAFKA_LISTENERS
          value: "INSIDE://:9092,OUTSIDE://:30092"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INSIDE://$(KAFKA_SERVICE_SERVICE_HOST):9092,OUTSIDE://kafka.example.net:30092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "1"
        - name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
          value: "1"
        - name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
          value: "1"
        image: confluentinc/cp-kafka:7.6.0
        imagePullPolicy: IfNotPresent
        name: kafka-broker
        ports:
        - containerPort: 9092
        - containerPort: 30092
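
Once the Service and Deployment are both applied, a quick check that the broker came up cleanly (using the namespace and label defined above) is to list the pod and tail its logs:

kubectl -n dev get pods -l app=kafka-broker
kubectl -n dev logs deployment/kafka-broker --tail=20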

The next problem, exposing Kafka UI, isn’t unique to Kafka; it applies to any web app you run inside the Docker Desktop cluster. I didn’t want to use port forwarding. While I know that works, it’s not as convenient as just browsing to a URL.
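
For comparison, the port-forwarding approach would look something like this, using the kafka-ui-service Service defined below, with the UI then showing up at http://localhost:8080 only for as long as the forward is running:

kubectl -n dev port-forward service/kafka-ui-service 8080:80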

The first thing I set up was hostname resolution: I edited my /etc/hosts file and added the following line:

127.0.0.1 kafka.example.net

I have taken to using example.net for my cluster resources, and example.com for my Docker Compose resources. Both those domains are reserved and won’t conflict with actual domains.
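
A quick way to confirm the entry took effect is to ping the name and check that it resolves to 127.0.0.1:

ping -c 1 kafka.example.net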

The next step is to install an NGINX ingress controller into my local cluster:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml
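
The controller can take a little while to start. Before applying the Ingress below, a wait like this (using the labels from the manifest above) confirms it’s ready:

kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s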

Now I can set up Kafka UI:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-ui-service
  name: kafka-ui-service
  namespace: dev
spec:
  ports:
    - name: kafka-ui-port
      port: 80
      targetPort: 8080
  selector:
    app: kafka-ui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-ui
  name: kafka-ui
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-ui
  template:
    metadata:
      labels:
        app: kafka-ui
    spec:
      hostname: kafka-ui
      containers:
      - env:
        - name: KAFKA_CLUSTERS_0_NAME
          value: "local"
        - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
          value: $(KAFKA_SERVICE_SERVICE_HOST):9092
        image: provectuslabs/kafka-ui:master
        imagePullPolicy: IfNotPresent
        name: kafka-ui
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kafka-ui-ingress
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: kafka.example.net
    http:
      paths:
      - backend:
          service:
            name: kafka-ui-service
            port:
              number: 80
        path: /
        pathType: Prefix
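
With everything applied, Kafka UI is available at http://kafka.example.net, and code outside the cluster can bootstrap against kafka.example.net:30092. As a quick external check, if you have kcat installed locally, listing the cluster metadata through the OUTSIDE listener should work:

kcat -b kafka.example.net:30092 -L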

I’ve placed the complete YAML file in a repo, and this post sent me in the right direction for my solution.