Change password and update to k8s 1.19

This commit is contained in:
David Zuber
2020-10-09 18:48:37 +01:00
parent 8d3e77803c
commit 759d3edd4e
5 changed files with 24 additions and 18 deletions

View File

@@ -1,3 +1,10 @@
+# 0.5.0
+* New image storaxdev/kubedoom:0.5.0
+* New default VNC password is `idbehold`.
+* Update Kubernetes to 1.19.1
+* Update to Ubuntu 20.10
 # 0.4.0
 * New image storaxdev/kubedoom:0.4.0

View File

@@ -4,7 +4,7 @@ WORKDIR /go/src/kubedoom
 ADD kubedoom.go .
 RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o kubedoom .
-FROM ubuntu:19.10 AS ubuntu
+FROM ubuntu:20.10 AS ubuntu
 # make sure the package repository is up to date
 RUN apt-get update
@@ -46,7 +46,7 @@ RUN apt-get install -y \
 WORKDIR /root/
 # Setup a password
-RUN mkdir ~/.vnc && x11vnc -storepasswd 1234 ~/.vnc/passwd
+RUN mkdir ~/.vnc && x11vnc -storepasswd idbehold ~/.vnc/passwd
 COPY --from=ubuntu-deps /doom1.wad .
 COPY --from=ubuntu-deps /usr/bin/kubectl /usr/bin/
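A side note on the new password (not part of the commit): classic VNC DES authentication only honors the first 8 characters of a password, so anything longer is silently truncated when `x11vnc -storepasswd` writes the obfuscated `~/.vnc/passwd` file. The Doom cheat prefix `idbehold` happens to be exactly 8 characters, so nothing is lost. A minimal sketch:

```shell
# Classic VNC auth derives an 8-byte DES key from the password, so only
# the first 8 characters matter; longer passwords are truncated.
pass="idbehold"
len=$(printf '%s' "$pass" | wc -c)
echo "password length: $len"
[ "$len" -le 8 ] && echo "fits within VNC's 8-character limit"
```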

View File

@@ -13,37 +13,37 @@ which was forked from psdoom.
 ## Running Locally
 In order to run locally you will need to
 1. Run the kubedoom container
 2. Attach a VNC client to the appropriate port (5901)
 ### With Docker
-Run `storaxdev/kubedoom:0.4.0` with docker locally:
+Run `storaxdev/kubedoom:0.5.0` with docker locally:
 ```console
 $ docker run -p5901:5900 \
   --net=host \
   -v ~/.kube:/root/.kube \
   --rm -it --name kubedoom \
-  storaxdev/kubedoom:0.4.0
+  storaxdev/kubedoom:0.5.0
 ```
 ### With Podman
-Run `storaxdev/kubedoom:0.4.0` with podman locally:
+Run `storaxdev/kubedoom:0.5.0` with podman locally:
 ```console
 $ podman run -it -p5901:5900/tcp \
   -v ~/.kube:/tmp/.kube --security-opt label=disable \
   --env "KUBECONFIG=/tmp/.kube/config" --name kubedoom \
-  storaxdev/kubedoom:0.4.0
+  storaxdev/kubedoom:0.5.0
 ```
 ### Attaching a VNC Client
-Now start a VNC viewer and connect to `localhost:5901`. The password is `1234`:
+Now start a VNC viewer and connect to `localhost:5901`. The password is `idbehold`:
 ```console
 $ vncviewer localhost:5901
 ```
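Why the client connects to 5901 while the Dockerfile exposes 5900: `-p5901:5900` publishes the container's VNC port 5900 (x11vnc's default display) on host port 5901. A small sketch of how that mapping reads:

```shell
# Docker's -p flag is host_port:container_port.
mapping="5901:5900"
host_port=${mapping%%:*}       # part before the colon -> host side
container_port=${mapping##*:}  # part after the colon  -> container side
echo "VNC client connects to localhost:$host_port"
echo "x11vnc listens inside the container on $container_port"
```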
@@ -64,7 +64,7 @@ $ docker run -p5901:5900 \
   --net=host \
   -v ~/.kube:/root/.kube \
   --rm -it --name kubedoom \
-  storaxdev/kubedoom:0.4.0 \
+  storaxdev/kubedoom:0.5.0 \
   -mode namespaces
 ```
@@ -77,7 +77,7 @@ example config from this repository:
 ```console
 $ kind create cluster --config kind-config.yaml
 Creating cluster "kind" ...
- ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
+ ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
@@ -89,7 +89,7 @@ You can now use your cluster with:
 kubectl cluster-info --context kind-kind
 Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
 ```
 This will spin up a 2 node cluster inside docker, with port 5900 exposed from
@@ -97,7 +97,6 @@ the worker node. Then run kubedoom inside the cluster by applying the manifest
 provided in this repository:
 ```console
-$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
 $ kubectl apply -f manifest/
 namespace/kubedoom created
 deployment.apps/kubedoom created
@@ -111,4 +110,4 @@ $ vncviewer viewer localhost:5900
 ```
 Kubedoom requires a service account with permissions to list all pods and delete
-them and uses kubectl 1.18.2.
+them and uses kubectl 1.19.2.

View File

@@ -2,9 +2,9 @@ kind: Cluster
 apiVersion: kind.x-k8s.io/v1alpha4
 nodes:
 - role: control-plane
-  image: kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f
+  image: kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600
 - role: worker
-  image: kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f
+  image: kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600
   extraPortMappings:
   - containerPort: 5900
     hostPort: 5900
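Pinning both nodes to the same `kindest/node` digest keeps the control plane and worker on byte-identical images. A quick sanity-check sketch (the YAML fragment is inlined here for illustration) that the config pins exactly one digest:

```shell
# Count distinct sha256 digests in the node image pins; expect exactly one.
config='image: kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600
image: kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600'
unique=$(printf '%s\n' "$config" | grep -o 'sha256:[0-9a-f]\{64\}' | sort -u | wc -l)
[ "$unique" -eq 1 ] && echo "both nodes pin the same image digest"
```

Against a real checkout, the same `grep | sort -u | wc -l` pipeline could be pointed at `kind-config.yaml` directly.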

View File

@@ -18,7 +18,7 @@ spec:
       hostNetwork: true
       serviceAccountName: kubedoom
       containers:
-      - image: storaxdev/kubedoom:0.4.0
+      - image: storaxdev/kubedoom:0.5.0
        name: kubedoom
        ports:
        - containerPort: 5900