autoscaling

This commit is contained in:
Steve Waterworth
2019-05-22 15:54:05 +01:00
parent 51f0c93e6c
commit e5c10b597f
12 changed files with 57 additions and 164 deletions

K8s/autoscaling/autoscale.sh Executable file

@@ -0,0 +1,14 @@
#!/bin/sh
NS="robot-shop"
DEPLOYMENTS="cart catalogue dispatch payment ratings shipping user web"

for DEP in $DEPLOYMENTS
do
    kubectl -n $NS autoscale deployment $DEP --max 2 --min 1 --cpu-percent 50
done

echo "Waiting 5 seconds for changes to apply..."
sleep 5
kubectl -n $NS get hpa
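The script creates the HPAs but provides no way to remove them. A minimal teardown sketch (hypothetical, not part of this commit; it assumes the same namespace and deployment list) could be:

```shell
#!/bin/sh
# Hypothetical cleanup: delete the HPAs created by autoscale.sh
NS="robot-shop"
for DEP in cart catalogue dispatch payment ratings shipping user web
do
    kubectl -n $NS delete hpa $DEP
done
```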


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: cart
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cart
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: catalogue
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalogue
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: dispatch
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dispatch
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: payment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: ratings
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ratings
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: shipping
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shipping
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: user
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


@@ -1,18 +0,0 @@
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
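The `kubectl autoscale` command in the new script produces HPA objects broadly equivalent to the manifests deleted above. Assuming the script has been run, one of the generated objects can be dumped back out for comparison:

```shell
$ kubectl -n robot-shop get hpa cart -o yaml
```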


@@ -18,11 +18,11 @@ spec:
      - name: load
        env:
        - name: HOST
-          value: "http://web:8080"
+          value: "http://web:8080/"
        - name: NUM_CLIENTS
          value: "15"
        - name: SILENT
-          value: "0"
+          value: "1"
        - name: ERROR
          value: "1"
        image: robotshop/rs-load:latest
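Once this descriptor is applied, a quick sanity check is to look at the pod and its logs. The deployment name `load` is an assumption based on the container name in the hunk above, and with SILENT=1 the log output will be deliberately sparse:

```shell
$ kubectl -n robot-shop get pods
$ kubectl -n robot-shop logs deployment/load
```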


@@ -50,17 +50,12 @@ The manifests for robotshop are in the *DCOS/* directory. These manifests were b
You may install Instana via the DCOS package manager; instructions are here: https://github.com/dcos/examples/tree/master/instana-agent/1.9

## Kubernetes

-The Docker container images are all available on [Docker Hub](https://hub.docker.com/u/robotshop/). The deployment and service definition files using these images are in the *K8s* directory, use these to deploy to a Kubernetes cluster. If you pushed your own images to your registry the deployment files will need to be updated to pull from your registry; using [kompose](https://github.com/kubernetes/kompose) may be of assistance here.
-You can run Kubernetes locally using [minikube](https://github.com/kubernetes/minikube) or on one of the many cloud providers.
-If you want to deploy Stan's Robot Shop to Google Compute you will need to edit the *K8s/web-service.yaml* file and change the type from NodePort to LoadBalancer. This can also be done in the Google Compute console.
-#### NOTE
-I have found some issues with kompose reading the *.env* correctly, just export the variables in the shell environment to work around this.
+You can also run Kubernetes locally using [minikube](https://github.com/kubernetes/minikube).
+The Docker container images are all available on [Docker Hub](https://hub.docker.com/u/robotshop/). The deployment and service definition files using these images are in the *K8s* directory; use these to deploy to a Kubernetes cluster. If you pushed your own images to your registry, the deployment files will need to be updated to pull from your registry.

$ kubectl create namespace robot-shop
-$ kubectl -n robot-shop create -f K8s/descriptors
+$ kubectl -n robot-shop apply -f K8s/descriptors
To deploy the Instana agent to Kubernetes, just use the [helm](https://hub.helm.sh/charts/stable/instana-agent) chart.
@@ -73,7 +68,7 @@ $ helm install --name instana-agent --namespace instana-agent \
stable/instana-agent
```
-If you are having difficulties get helm running with your K8s install it is most likely due to RBAC, most K8s now have RBAC enabled by default. Therefore helm requires a [service account](https://github.com/helm/helm/blob/master/docs/rbac.md) to have permission to do stuff.
+If you are having difficulties getting helm running with your K8s install, it is most likely due to RBAC; most K8s clusters now have RBAC enabled by default. Therefore helm requires a [service account](https://github.com/helm/helm/blob/master/docs/rbac.md) with permission to make the necessary changes.
## Accessing the Store
If you are running the store locally via *docker-compose up*, the store front is available on localhost port 8080: [http://localhost:8080](http://localhost:8080/)
@@ -100,10 +95,10 @@ The store front is then available on the IP address of minikube port 30080. To f
$ minikube ip
-If you are using a cloud Kubernetes / Openshift / Mesosphere then it will be available on the load balancer of that system. There will be specific blog posts on the Instana site covering these scenarios.
+If you are using a cloud Kubernetes / Openshift / Mesosphere then it will be available on the load balancer of that system.
## Load Generation
-A separate load generation utility is provided in the *load-gen* directory. This is not automatically run when the application is started. The load generator is built with Python and [Locust](https://locust.io). The *build.sh* script builds the Docker image, optionally taking *push* as the first argument to also push the image to the registry. The registry and tag settings are loaded from the *.env* file in the parent directory. The script *load-gen.sh* runs the image, edit this and set the HOST environment variable to point the load at where you are running the application. You could run this inside an orchestration system (K8s) as well if you want to, how to do this is left as an exercise for the reader.
+A separate load generation utility is provided in the *load-gen* directory. This is not automatically run when the application is started. The load generator is built with Python and [Locust](https://locust.io). The *build.sh* script builds the Docker image, optionally taking *push* as the first argument to also push the image to the registry. The registry and tag settings are loaded from the *.env* file in the parent directory. The script *load-gen.sh* runs the image; it takes a number of command line arguments. You could run the container inside an orchestration system (K8s) as well if you want to; an example descriptor is provided in *K8s/autoscaling*. For more details see the [README](loadgen/README.md) in the loadgen directory.
## End User Monitoring
To enable End User Monitoring (EUM) see the official [documentation](https://docs.instana.io/products/website_monitoring/) for how to create a configuration. There is no need to inject the javascript fragment into the page; this is handled automatically. Just make a note of the unique key and set the environment variable INSTANA_EUM_KEY for the *web* image; see *docker-compose.yaml* for an example.
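Pulling the revised README steps together, one plausible end-to-end sequence for a cluster deployment with autoscaling and load generation (paths as given elsewhere in this commit) is:

```shell
$ kubectl create namespace robot-shop
$ kubectl -n robot-shop apply -f K8s/descriptors
$ K8s/autoscaling/autoscale.sh
$ kubectl -n robot-shop apply -f K8s/autoscaling/load-deployment.yaml
```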


@@ -8,19 +8,47 @@ Will build with image and optionally push it.
$ ./load-gen.sh
-Runs the load generation script against the application started with docker-compose up
+Runs the load generation script against the application started with `docker-compose up`. There are various command line options to configure the load.

-Alternatively, you can run the Container from Dockerhub directly on one of the nodes having access to the web node:
+Alternatively, you can run the container from Docker Hub directly on one of the nodes having access to the web service:

-`docker run -e 'HOST=$webnodeIP:8080' -e 'NUM_CLIENTS=3' -d --rm --name="loadgen" robotshop/rs-load`
```shell
$ docker run \
    -d \
    --rm \
    --name="loadgen" \
    --network=host \
    -e "HOST=http://host:8080/" \
    -e "NUM_CLIENTS=1" \
    -e "ERROR=1" \
    -e "SILENT=1" \
    robotshop/rs-load
```
Set the following environment variables to configure the load:

* HOST - The target for the load e.g. http://host:8080/
* NUM_CLIENTS - How many simultaneous load scripts to run; the bigger the number, the bigger the load. The default is 1.
* ERROR - Set this to 1 to have erroneous calls made to the payment service.
* SILENT - Set this to 1 to suppress the very verbose output from the script. This is a good idea if you're going to run load for more than a few minutes.
## Kubernetes
-To run the load test in Kubernetes, apply the `K8s/autoscaling/load-deployment.yaml` configuration in your Kubernetes cluster. This will a replica of the above load test
+To run the load test in Kubernetes, apply the `K8s/autoscaling/load-deployment.yaml` configuration in your Kubernetes cluster. This will deploy the load generator; check the settings in the file first.

-kubectl -n robot-shop apply -f K8s/autoscaling/load-deployment.yaml

```shell
$ kubectl -n robot-shop apply -f K8s/autoscaling/load-deployment.yaml
```
-If you want to enable auto-scaling on relevant components (non-databases), you can apply everything in that directory. However you will first need to run a `metrics-server` in your cluster so the Horizontal Pod Autoscaler can know about the CPU usage of the pods.
+If you want to enable auto-scaling on the relevant components (non-databases), just run the script in the autoscaling directory. However, you will first need to make sure a [metrics-server](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) is running in your cluster, enabling the Horizontal Pod Autoscaler to know about the CPU and memory usage of the pods. From Kubernetes version 1.8 a `metrics-server` deployment should be configured by default; run the command below to check.

-kubectl -n robot-shop apply -f K8s/autoscaling/
```shell
$ kubectl -n kube-system get deployment
```
The autoscaling is installed with:
```shell
$ K8s/autoscaling/autoscale.sh
```
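With the load generator running and the HPAs in place, a minimal way to watch the autoscaler react is to follow the HPA targets and pod resource usage; both readings rely on the metrics-server mentioned above:

```shell
$ kubectl -n robot-shop get hpa --watch
$ kubectl top pods -n robot-shop
```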