initial load
fluentd/Docker/Dockerfile (new file, 5 lines)
@@ -0,0 +1,5 @@
FROM fluentd
USER root
RUN fluent-gem install fluent-plugin-elasticsearch
USER fluent

fluentd/Docker/build.sh (new file, 6 lines)
@@ -0,0 +1,6 @@
#!/bin/sh

. ./setenv.sh

docker build -t "$IMAGE_NAME" .

fluentd/Docker/humio.conf (new file, 24 lines)
@@ -0,0 +1,24 @@
<source>
  @type forward
</source>

<filter **>
  @type record_transformer
  enable_ruby
  <record>
    docker.container_id ${record["container_id"]}
    docker.image_name ${tag}
  </record>
</filter>

<match **>
  @type elasticsearch
  host cloud.humio.com
  port 9200
  scheme https
  ssl_version TLSv1_2
  user <Humio index or Elasticsearch user>
  password <Humio API key or Elasticsearch password>
  logstash_format true
</match>

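The <match> block above targets Humio's hosted Elasticsearch-compatible endpoint. For a plain ELK backend, only the connection settings change; the block below is a minimal sketch, assuming a self-hosted Elasticsearch reachable as elasticsearch:9200 without TLS (that hostname is a placeholder, not part of this commit):

    # illustrative only: swap the Humio endpoint for your own Elasticsearch
    <match **>
      @type elasticsearch
      host elasticsearch
      port 9200
      scheme http
      logstash_format true
    </match>
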
fluentd/Docker/run.sh (new file, 13 lines)
@@ -0,0 +1,13 @@
#!/bin/sh

. ./setenv.sh

docker run \
    -d \
    --rm \
    --name fluentd \
    -p 24224:24224 \
    -v "$(pwd)/humio.conf:/fluentd/etc/humio.conf" \
    -e FLUENTD_CONF=humio.conf \
    "$IMAGE_NAME"

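With the forwarder from run.sh listening on port 24224, application containers can ship their stdout/stderr to it through Docker's fluentd logging driver. A minimal sketch follows; the application image name repo/app:tag is a placeholder, and the tag option is what feeds ${tag} (and thus docker.image_name) in humio.conf, while the container_id field added by the driver feeds docker.container_id:

    # illustrative only: run an application container with its logs forwarded to fluentd
    docker run -d \
        --log-driver=fluentd \
        --log-opt fluentd-address=localhost:24224 \
        --log-opt tag=repo/app:tag \
        repo/app:tag
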
fluentd/Docker/setenv.sh (new file, 4 lines)
@@ -0,0 +1,4 @@
#!/bin/sh

IMAGE_NAME="repo/image:tag"

fluentd/README.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# Logging with Fluentd

This example works with [Humio](https://humio.com/) and [ELK](https://elastic.co/). Fluentd ships the logs from the containers to the logging backend.

## Build Fluentd Container

The default `fluentd` Docker image does not include the Elasticsearch output plugin, so build a new image based on the default one with the plugin installed (see `Docker/Dockerfile`).

If Robot Shop runs locally via `docker-compose`, the image does not need to be pushed to a registry. If it runs on Kubernetes, the image must be pushed to a registry; a build-and-push sketch follows below.

Deployment also differs slightly depending on which platform Robot Shop runs on. See the appropriate subdirectories for the required files and further instructions.

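As a sketch of the build-and-push step mentioned above for the Kubernetes case, assuming setenv.sh has been edited so IMAGE_NAME points at a registry you can push to:

    # illustrative only: build the custom fluentd image and push it for Kubernetes
    cd fluentd/Docker
    . ./setenv.sh
    sh ./build.sh
    docker push "$IMAGE_NAME"
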