Rename repository

Unfortunately, it has come to my attention that this repository
promoted a phenomenon where DevOps interviews became a trivia game
in which people think it's normal to throw 20 random short questions
like "what is fork()?" or "which tools would you use for each
of the following areas?". This was not my intention.

To state explicitly that this repository doesn't represent real
DevOps interview questions, I've decided to rename it.
This commit is contained in:
abregman
2020-01-12 22:18:39 +02:00
parent cbda545d60
commit d43ec162f0
23 changed files with 54 additions and 63 deletions

View File

@ -0,0 +1,6 @@
## Ansible, Minikube and Docker
* Write a simple program in any language you want that outputs "I'm on %HOSTNAME%" (HOSTNAME should be the actual host name on which the app is running)
* Write a Dockerfile which will run your app
* Create the YAML files required for deploying the pods
* Write and run an Ansible playbook which will install Docker, Minikube and kubectl, and then create a deployment in Minikube with your app running.
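As a sketch of the first step, here is a minimal version of the program in Python; the file name `app.py` is an assumption:

```python
# app.py - outputs "I'm on %HOSTNAME%" with the actual host name substituted
import socket


def message():
    # socket.gethostname() returns the name of the host the app runs on
    return "I'm on {}".format(socket.gethostname())


if __name__ == "__main__":
    print(message())
```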

View File

@ -0,0 +1,12 @@
## CI for Open Source Project
1. Choose an open source project from GitHub and fork it
2. Create a CI pipeline/workflow for the project you forked
3. The CI pipeline/workflow will include anything that is relevant to the project you forked. For example:
* If it's a Python project, you will run PEP8
* If the project has unit tests directory, you will run these unit tests as part of the CI
4. In a separate file, describe what runs as part of the CI and why you chose to include it. You can also describe any thoughts, dilemmas or challenges you had
### Bonus
Containerize the app of the project you forked using any containerization technology you want.
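For a Python project, the pipeline could look roughly like this sketch of a GitHub Actions workflow; the file path, action versions and test layout are assumptions, so adapt them to the project and CI system you chose:

```yaml
# .github/workflows/ci.yml - runs on every push to the fork
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      # PEP8 check via pycodestyle
      - run: pip install pycodestyle && pycodestyle .
      # Run the unit tests, assuming a tests/ directory exists
      - run: python -m unittest discover -s tests
```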

View File

@ -0,0 +1,19 @@
## Cloud Slack Bot
Create a Slack bot to manage cloud instances. You can choose whatever cloud provider you want (e.g. OpenStack, AWS, GCP, Azure).
You should provide:
* Instructions on how to use it
* The source code of the Slack bot
* A running Slack bot account or a deployment script so we can test it
The bot should be able to support:
* Creating new instances
* Removing existing instances
* Starting an instance
* Stopping an instance
* Displaying the status of an instance
* Listing all available instances
The bot should also be able to show a help message.
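The command routing could be sketched like this in Python; the cloud operations are hypothetical stubs (a real bot would call a provider SDK such as boto3 plus a Slack client library):

```python
# Sketch of the bot's command dispatch; every handler below is a stub.
def create(name): return "created {}".format(name)       # stub
def remove(name): return "removed {}".format(name)       # stub
def start(name): return "started {}".format(name)        # stub
def stop(name): return "stopped {}".format(name)         # stub
def status(name): return "{} is running".format(name)    # stub
def list_instances(): return "no instances yet"          # stub

COMMANDS = {
    "create": create,
    "remove": remove,
    "start": start,
    "stop": stop,
    "status": status,
}


def handle(text):
    """Map a message like 'start web-1' to the matching handler."""
    parts = text.split()
    if not parts or parts[0] == "help":
        return "usage: create|remove|start|stop|status <instance> | list"
    if parts[0] == "list":
        return list_instances()
    if parts[0] in COMMANDS and len(parts) == 2:
        return COMMANDS[parts[0]](parts[1])
    return "unknown command; try 'help'"
```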

View File

@ -0,0 +1,21 @@
# Elasticsearch, Kibana and AWS
Your task is to build an Elasticsearch cluster along with a Kibana dashboard on one of the following clouds:
* AWS
* OpenStack
* Azure
* GCP
You have to describe in detail (preferably with some drawings) how you are going to set it up.
Please describe in detail:
- How you scale it up or down
- How you quickly (in less than 20 minutes) provision the cluster
- How you apply a security policy for access control
- How you transfer the logs from the app to ELK
- How you deal with multiple apps running in different regions
## Solution
One possible solution can be found [here](solutions/elk_kibana_aws.md)

View File

@ -0,0 +1,60 @@
Your mission, should you choose to accept it, involves fixing the app in this directory, containerizing it, and setting up CI for it.
Please read carefully all the instructions.
## Installation
1. Create a virtual environment with `python3 -m venv challenge_venv`
2. Activate it with `source challenge_venv/bin/activate`
3. Install the requirements in this directory `pip install -r requirements.txt`
## Run the app
If any of the following steps doesn't work, you are expected to fix it
1. Move to the `challenges/flask_container_ci` directory, if you are not already there
2. Run `export FLASK_APP=app/main.py`
3. To run the app, execute `flask run`. If it doesn't work, fix it
4. Access `http://127.0.0.1:5000`. You should see the following
```
{
    "resources_uris": {
        "user": "/users/<username>",
        "users": "/users"
    },
    "current_uri": "/"
}
```
5. You should be able to access any of the resources and get the following data:
   * `/users` - all users data
   * `/users/<username>` - data on the specific chosen user
6. When accessing `/users`, the data returned should not include the id of the user, only its name and description. The data should also be ordered by user name.
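A hedged sketch of the last requirement above (drop the id, sort by name); `public_users` is a hypothetical helper name and the sample dict stands in for the data loaded from `users.json`:

```python
# Strip the "id" field and order the entries by user name.
def public_users(users):
    cleaned = [
        {"name": data["name"], "description": data["description"]}
        for data in users.values()
    ]
    return sorted(cleaned, key=lambda u: u["name"])


# Sample data in the same shape as users.json
users = {
    "mario": {"id": "smb3igiul", "name": "Mario", "description": "Plumber"},
    "geralt": {"id": "whitewolf", "name": "Geralt of Rivia", "description": "Witcher"},
}
```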
## Containers
Using Docker or Podman, containerize the Flask app so users can run the following two commands:
```
docker build -t app:latest /path/to/Dockerfile
docker run -d -p 5000:5000 app
```
1. You can use any image base you would like
2. Containerize only what you need for running the application, nothing else.
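A minimal Dockerfile sketch that would satisfy those two commands; the base image and paths are assumptions:

```
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ ./app/
ENV FLASK_APP=app/main.py
EXPOSE 5000
# Bind to 0.0.0.0 so the port published with -p 5000:5000 is reachable
CMD ["flask", "run", "--host=0.0.0.0"]
```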
## CI
Great, now that we have a working app and can also run it in a container, let's set up CI for it so it won't break again in the future.
In the current directory you have a file called tests.py which includes the tests for the app. What is required from you is:
1. The CI should run the app's tests. You are free to choose whatever CI system or service you prefer. Use `python tests.py` to run the tests.
2. There should be some kind of test for the Dockerfile you wrote
3. Add an additional unit test (or another level of tests) for testing the app
### Guidelines
* Apart from the app's functionality, you can change whatever you want - structure, tooling, libraries, ... If possible, add a `notes.md` file which explains your reasons, logic, thoughts and anything else you would like to share
* The CI part should include the source code for the pipeline definition

View File

@ -0,0 +1,2 @@
#!/usr/bin/env python
# coding=utf-8

View File

@ -0,0 +1,53 @@
#!/usr/bin/env python
# coding=utf-8
from flask import Flask
from flask import make_response
import json
from werkzeug.exceptions import NotFound
app = Flask(__name__)
with open("./users.json", "r") as f:
users = json.load(f)
@app.routee("/", methods=['GET'])
def index():
return pretty_json({
"resources": {
"users": "/users",
"user": "/users/<username>",
},
"current_uri": "/"
})
@app.route("/users", methods=['GET'])
def all_users():
return pretty_json(users)
@app.route("/users/<username>", methods=['GET'])
def user_data(username):
if username not in users:
raise NotFound
return pretty_json(users[username])
@app.route("/users/<username>/something", methods=['GET'])
def user_something(username):
raise NotImplementedError()
def pretty_json(arg):
response = make_response(json.dumps(arg, sort_keys=True, indent=4))
response.headers['Content-type'] = "application/json"
return response
if __name__ == "__main__":
app.run(port=5000)

View File

@ -0,0 +1,11 @@
#!/usr/bin/env python
# coding=utf-8
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SECRET_KEY = 'shhh'
CSRF_ENABLED = True
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')

View File

@ -0,0 +1,58 @@
#!/usr/bin/env python
# coding=utf-8
from flask import Flask
from flask import make_response
import json
from werkzeug.exceptions import NotFound
app = Flask(__name__)
with open("./users.json", "r") as f:
users = json.load(f)
@app.route("/", methods=['GET'])
def index():
return pretty_json({
"resources": {
"users": "/users",
"user": "/users/<username>",
},
"current_uri": "/"
})
@app.route("/users", methods=['GET'])
def all_users():
return pretty_json(users)
@app.route("/users/<username>", methods=['GET'])
def user_data(username):
if username not in users:
raise NotFound
return pretty_json(users[username])
@app.route("/users/<username>/something", methods=['GET'])
def user_something(username):
raise NotImplementedError()
def pretty_json(arg):
response = make_response(json.dumps(arg, sort_keys=True, indent=4))
response.headers['Content-type'] = "application/json"
return response
def create_test_app():
app = Flask(__name__)
return app
if __name__ == "__main__":
app.run(port=5000)

View File

@ -0,0 +1,28 @@
#!/usr/bin/env python
# coding=utf-8
import os
import unittest
from config import basedir
from app import app
from app import db
class TestCase(unittest.TestCase):
def setUp(self):
app.config['TESTING'] = True
app.config['WTF_CSRF_ENABLED'] = False
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(
basedir, 'test.db')
self.app = app.test_client()
db.create_all()
def tearDown(self):
db.session.remove()
db.drop_all()
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1 @@
flask

View File

@ -0,0 +1,24 @@
#!/usr/bin/env python
# coding=utf-8
import unittest
from app import main
class TestCase(unittest.TestCase):
def setUp(self):
self.app = main.app.test_client()
def test_main_page(self):
response = self.app.get('/', follow_redirects=True)
self.assertEqual(response.status_code, 200)
def test_users_page(self):
response = self.app.get('/users', follow_redirects=True)
self.assertEqual(response.status_code, 200)
if __name__ == '__main__':
unittest.main()

View File

@ -0,0 +1,22 @@
{
"geralt" : {
"id": "whitewolf",
"name": "Geralt of Rivia",
"description": "Traveling monster slayer for hire"
},
"lara_croft" : {
"id": "m31a3n6sion",
"name": "Lara Croft",
"description": "Highly intelligent and athletic English archaeologist"
},
"mario" : {
"id": "smb3igiul",
"name": "Mario",
"description": "Italian plumber who really likes mushrooms"
},
"gordon_freeman" : {
"id": "nohalflife3",
"name": "Gordon Freeman",
"description": "Physicist with great shooting skills"
}
}

View File

@ -0,0 +1,6 @@
// Intended to run in the Jenkins script console.
// The pattern must be /REMOVE_ME/, not /"REMOVE_ME"/ - with the quotes
// inside the slashes, only names containing literal quotes would match.
def jobs = Jenkins.instance.items.findAll { job -> job.name =~ /REMOVE_ME/ }
jobs.each { job ->
    println job.name
    // Uncomment after verifying the printed list:
    //job.delete()
}

View File

@ -0,0 +1,16 @@
// Deletes build directories under buildDirectory whose last-modified
// time is older than the given number of days.
def removeOldBuilds(buildDirectory, days = 14) {
    def wp = new File(buildDirectory)
    def currentTime = new Date()
    def backTime = currentTime - days
    wp.list().each { fileName ->
        def folder = new File("${buildDirectory}/${fileName}")
        if (folder.isDirectory()) {
            def timeStamp = new Date(folder.lastModified())
            if (timeStamp.before(backTime)) {
                // deleteDir() removes the directory recursively;
                // plain File.delete() fails on non-empty directories
                folder.deleteDir()
            }
        }
    }
}

View File

@ -0,0 +1,10 @@
## Jenkins Pipelines
Write/Create the following Jenkins pipelines:
* A pipeline which will run unit tests upon git push to a certain repository
* A pipeline which will do the following:
* Provision an instance (can also be a container)
* Configure the instance as Apache web server
* Deploy a web application on the provisioned instance
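The first pipeline might start from a declarative Jenkinsfile along these lines; the trigger and test command are assumptions, not a definitive implementation:

```
pipeline {
    agent any
    // Poll SCM as a stand-in for a push webhook trigger
    triggers { pollSCM('H/5 * * * *') }
    stages {
        stage('Checkout') {
            steps { checkout scm }
        }
        stage('Unit tests') {
            // Replace with the repository's actual test command
            steps { sh 'python -m unittest discover' }
        }
    }
}
```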

View File

@ -0,0 +1,11 @@
## Jenkins Scripts
Write the following scripts:
* Remove all the jobs which include the string "REMOVE_ME" in their name
* Remove builds older than 14 days
### Answer
* [Remove jobs which include specific string](jenkins/scripts/jobs_with_string.groovy)
* [Remove builds older than 14 days](jenkins/scripts/old_builds.groovy)

View File

@ -0,0 +1,22 @@
# Elasticsearch, Kibana and AWS - Solution
This is one out of many possible solutions; it relies heavily on AWS.
* Create a VPC with a subnet so we can place the Elasticsearch node(s) in an internal environment only.
If required, we will also set up NAT for public access.
* Create an IAM role for access to the cluster. Also, create a separate role for admin access.
* To provision the solution quickly, we will use the Elasticsearch service directly from AWS for the production deployment.
This way we also cover multiple AZs. As for authentication, we either use Amazon Cognito or the organization's LDAP server.
* To transfer data, we will have to install a Logstash agent on the instances. The agent will be responsible
for pushing the data to Elasticsearch.
* For monitoring we will use:
  * CloudWatch to monitor cluster resource utilization
  * A CloudWatch metrics dashboard
* If access is required from multiple regions, we will transfer all the data to S3, which will allow us to view the data
from different regions and consolidate it in one dashboard

View File

@ -0,0 +1,11 @@
# Write a Dockerfile and run a container
Your task is as follows:
1. Create a Docker image:
   * Use centos or ubuntu as the base image
   * Install the Apache web server
   * Deploy any web application you want
   * Add HTTPS support (using HAProxy as a reverse proxy)
2. Once you have written the Dockerfile and created an image, run the container and test the application. Describe how you tested it and provide the output
3. Describe one or more weaknesses of your Dockerfile. Is it ready to be used in production?
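A rough starting point for step 1, as a sketch only: ubuntu base, a placeholder static page as the "web application", and the HAProxy/HTTPS part left for a second container in front of this one:

```
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2 && \
    rm -rf /var/lib/apt/lists/*
# Placeholder web application: a single static page
RUN echo '<h1>Hello from Apache</h1>' > /var/www/html/index.html
EXPOSE 80
# Run Apache in the foreground so the container stays up
CMD ["apachectl", "-D", "FOREGROUND"]
```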