Cluster-level Kubernetes Logging with Honeycomb


Honeycomb's Kubernetes agent aggregates logs across a Kubernetes cluster. Stop managing log storage in all your clusters and start tracking down real problems.

To get started with Honeycomb, check out the Honeycomb general quickstart.

This README includes some basic information about getting started with the Kubernetes agent. Please see Honeycomb's Kubernetes documentation for more comprehensive documentation.

How it Works

honeycomb-agent runs as a DaemonSet on each node in a cluster. It reads container log files from the node's filesystem, augments them with metadata from the Kubernetes API, and ships them to Honeycomb so that you can see what's going on.

[architecture diagram]
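The DaemonSet pattern the agent relies on can be sketched roughly like this. The real manifest lives in examples/quickstart.yaml; the image tag, environment variable name, and mount paths below are illustrative assumptions rather than the shipped configuration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: honeycomb-agent
  namespace: honeycomb
spec:
  selector:
    matchLabels:
      app: honeycomb-agent
  template:
    metadata:
      labels:
        app: honeycomb-agent
    spec:
      serviceAccountName: honeycomb-agent      # lets the agent read pod metadata from the API
      containers:
        - name: honeycomb-agent
          image: honeycombio/honeycomb-kubernetes-agent:head   # illustrative tag
          env:
            - name: HONEYCOMB_WRITEKEY         # how the write key is injected is illustrative;
              valueFrom:                       # see examples/quickstart.yaml for the real wiring
                secretKeyRef:
                  name: honeycomb
                  key: api-key
          volumeMounts:
            - name: varlog                     # container log files live on the node's filesystem
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log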

Quickstart

The following steps will deploy the Honeycomb agent to each node in your cluster, and configure it to process logs from all pods.

  1. Grab your Honeycomb writekey from your account page, and create a Kubernetes secret from it (if the honeycomb namespace doesn't exist yet, create it first with kubectl create namespace honeycomb):

    kubectl create secret generic honeycomb --from-literal=api-key=$WRITEKEY --namespace=honeycomb
    
  2. Run the agent:

    kubectl apply -f examples/quickstart.yaml
    

    This will do three things:

    • create a service account for the agent so that it can list pods from the API
    • create a minimal ConfigMap containing configuration for the agent
    • create a DaemonSet that runs the agent on each node (a quick way to verify the deployment is sketched below).
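
    Once those resources are created, a few standard kubectl commands will confirm that an agent pod is scheduled on every node. The honeycomb namespace below matches the secret created in step 1; adjust it if you've customized the manifest:

    # one agent pod should be running per node
    kubectl get daemonset --namespace=honeycomb
    kubectl get pods --namespace=honeycomb -o wide

    # if events aren't arriving in Honeycomb, check the agent's own logs
    kubectl logs --namespace=honeycomb <agent-pod-name>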

Production-Ready Use

Service-specific parsing

It's best if all of your containers output structured JSON logs. But that's not always realistic. In particular, you're likely to operate third-party services, such as proxies or databases, that don't log JSON.

You may also want to aggregate logs from specific services, rather than from everything that might be running in a cluster.

To get usefully structured data out of these services, you can use Kubernetes label selectors to tell the agent how to parse logs from specific pods.

For example, to parse logs from pods with the label app: nginx as NGINX logs, you'd specify the following configuration:

watchers:
- labelSelector: "app=nginx"
  dataset: kubernetes-nginx
  parser: nginx
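
For that selector to match anything, the pods running NGINX need to carry the corresponding label in their pod template. A minimal sketch, where the Deployment name, replica count, and image tag are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx          # the label the watcher's labelSelector matches on
    spec:
      containers:
        - name: nginx
          image: nginx:1.25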

Post-Processing Events

You might want to do additional munging of events before sending them to Honeycomb. For each label selector, you can specify a list of processors, which will be applied in order. For example:

watchers:
- labelSelector: "app=nginx"
  parser: nginx
  dataset: kubernetes-nginx
  processors:
  - request_shape:            # Unpack the field "request": "GET /path HTTP/1.x"
      field: request          # into its constituent components

  - drop_field:               # Remove the "user_email" field from all events
      field: user_email

  - sample:                   # Sample events: only send one in 20
      type: static
      rate: 20
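
To make the first processor concrete: request_shape splits a combined HTTP request line into separate fields you can group and filter on. Roughly, and treating the exact output field names as an approximation (the processor documentation has the authoritative list):

# incoming event field
"request": "GET /api/users/42?pretty=true HTTP/1.1"

# fields added by request_shape (approximate names)
"request_method": "GET"
"request_path": "/api/users/42"
"request_query": "pretty=true"
"request_protocol_version": "HTTP/1.1"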

See the docs for more examples.