Merge branch 'fluentd'

Steve Waterworth
2021-03-19 10:51:19 +00:00
8 changed files with 258 additions and 0 deletions


@@ -0,0 +1,32 @@
# Configuration
Edit `fluent.conf`, setting the parameters to match either your Humio account or your Elasticsearch instance. See the [fluentd documentation](https://docs.fluentd.org/output/elasticsearch) and the [Humio documentation](https://docs.humio.com/docs/ingesting-data/data-shippers/fluentd/) for details.
Start `fluentd` in a Docker container using the `run.sh` script.
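A quick way to check the forwarder (a sketch, assuming Docker is installed and the commands are run from the directory containing `fluent.conf` and `run.sh`):
```shell
# Start the fluentd forwarder container (see run.sh below)
./run.sh

# Follow its output to confirm it started and accepted the configuration
docker logs -f fluentd
```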
## Docker Compose
To have all the containers in Stan's Robot Shop use fluentd for logging, edit `docker-compose.yaml`, changing the logging section at the top of the file.
```yaml
services:
  mongodb:
    build:
      context: mongo
    image: ${REPO}/rs-mongodb:${TAG}
    networks:
      - robot-shop
    logging: &logging
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: "{{.ImageName}}"
  redis:
```
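The remaining services can then reuse this block through a `logging: *logging` alias. As an optional check (a sketch, assuming `docker-compose` is available), render the resolved file and confirm every service picked up the fluentd options:
```shell
# Print the fully-resolved compose file; each service should show the
# fluentd driver and options inherited from the &logging anchor.
docker-compose config | grep -A 3 'logging:'
```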
If Robot Shop is already running, shut it down with `docker-compose down`.
Start Robot Shop with `docker-compose up -d`. It takes a few minutes to start; after that, check Humio or ELK for log entries.
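To generate a single test log entry without restarting the whole stack, you can run a throwaway container against the forwarder; this is just a sketch assuming fluentd is listening on `localhost:24224` as configured above:
```shell
# The container's stdout is shipped via the fluentd log driver and should
# appear in Humio/ELK tagged with the image name.
docker run --rm \
    --log-driver fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag="{{.ImageName}}" \
    alpine echo "fluentd smoke test"
```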
Set up [logging integration](https://www.instana.com/docs/logging/) in Instana.


@@ -0,0 +1,24 @@
# Receive log events forwarded by the Docker fluentd log driver (port 24224)
<source>
  @type forward
</source>

# Reshape each record so the container id and image name appear under docker.*
<filter **>
  @type record_transformer
  enable_ruby
  <record>
    docker.container_id ${record["container_id"]}
    docker.image_name ${tag}
  </record>
</filter>

# Ship everything to Humio (Elasticsearch-compatible endpoint) or Elasticsearch
<match **>
  @type elasticsearch
  host cloud.humio.com
  port 9200
  scheme https
  ssl_version TLSv1_2
  user <Humio index or Elasticsearch user>
  password <Humio API key or Elasticsearch password>
  logstash_format true
</match>


@@ -0,0 +1,12 @@
#!/bin/sh
# Run the custom fluentd image, exposing the forward input and mounting the
# local fluent.conf over the image default.
IMAGE_NAME="robotshop/fluentd:elastic"
docker run \
    -d \
    --rm \
    --name fluentd \
    -p 24224:24224 \
    -v "$(pwd)/fluent.conf:/fluentd/etc/fluent.conf" \
    "$IMAGE_NAME"

fluentd/Dockerfile

@@ -0,0 +1,9 @@
FROM fluentd
USER root
# Build tools are needed to compile the native gem extensions
RUN apk update && \
    apk add --virtual .build-dependencies build-base ruby-dev
RUN fluent-gem install fluent-plugin-elasticsearch && \
    fluent-gem install fluent-plugin-kubernetes_metadata_filter && \
    fluent-gem install fluent-plugin-multi-format-parser


@@ -0,0 +1,11 @@
# Kubernetes
Edit the `fluentd.yaml` file, inserting your Humio or Elasticsearch instance details.
Apply the configuration:
```shell
$ kubectl apply -f fluentd.yaml
```
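To verify the rollout (standard `kubectl` commands; resource names match the manifest in this directory):
```shell
# One fluentd pod should be running (and ready) on every node
$ kubectl -n logging get daemonset fluentd
# Tail the collector output to confirm logs are being shipped to the backend
$ kubectl -n logging logs daemonset/fluentd --tail=20
```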
Set up [logging integration](https://www.instana.com/docs/logging/) in Instana.


@@ -0,0 +1,148 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: logging
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
  namespace: logging
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: logging
#
# CONFIGURATION
#
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head false
      <parse>
        @type json
      </parse>
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    # Throw away what is not needed first
    #<match fluent.**>
    #  @type null
    #</match>
    <match kubernetes.var.log.containers.**kube-system**.log>
      @type null
    </match>
    # Capture what is left
    <match **>
      @type elasticsearch
      host cloud.humio.com
      port 9200
      scheme https
      ssl_version TLSv1_2
      logstash_format true
      user <Humio index or Elasticsearch user>
      password <Humio API key or Elasticsearch password>
    </match>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    k8s-app: fluentd
    #https://github.com/kubernetes/kubernetes/issues/51376
    #kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
        #kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: robotshop/fluentd:elastic
        #args:
        #  - "-v"
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: fluentd-config
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        imagePullPolicy: Always
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

fluentd/README.md

@@ -0,0 +1,10 @@
# Logging with Fluentd
This example works with [Humio](https://humio.com/) and [ELK](https://elastic.co/). Fluentd is used to ship the logs from the containers to the logging backend.
## Build Fluentd Container
The default `fluentd` Docker image does not include the output plugin for Elasticsearch, so a new image based on the default with the Elasticsearch output plugin installed has to be built; see the `Dockerfile` and `build.sh` script for examples. The image used in this example has already been built and pushed to Docker Hub.
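To build and push your own copy instead (for example to your own registry), the workflow is roughly as follows; change `IMAGE_NAME` in `build.sh` first if you are not pushing to `robotshop/`:
```shell
# Build the image locally
./build.sh

# Build and push in one step (requires a prior docker login to the registry)
./build.sh push
```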
Deployment is slightly different depending on which platform Robot Shop is run on. See the appropriate subdirectories for the required files and further instructions.

fluentd/build.sh

@@ -0,0 +1,12 @@
#!/bin/sh
# Build the custom fluentd image; pass "push" to also push it to the registry.
IMAGE_NAME="robotshop/fluentd:elastic"
docker build -t "$IMAGE_NAME" .
if [ "$1" = "push" ]
then
    docker push "$IMAGE_NAME"
fi