Life with HELM Series
Hi all,
Today I’ll introduce my experience with helm and what got me helming. HELM has become the first thing I install after I deploy a Kubernetes cluster.
This short series, named “Life with HELM”, is a small glimpse of my relatively “short life” with it (~1 year), and is really the “text” behind the meetup we hosted at FullStack Developers Israel about a month ago.
This post will cover:
- Why there was no question I needed some kind of “helmer” ~1 min
- Create Kubernetes Resource Definitions for the “msa-demo-app” and point out some architectural decisions which need a solution! ~10 min
- Deploy the “msa-demo-app” the native way! ~10 min
- Validate the native way! ~2 min
- How would HELM make this easier? ~1 min
- Cleanup before we start helming our own … ~1 min
Future parts of the series will cover:
- Installing HELM (so we are all on the same page) link
- Working with existing charts - basically dissecting chart parts which seem simple yet are important when you write your own. link
- Building your own msa-demo-app chart link, following some best practices I came across in helm’s docs, bitnami, honestbee and others, which today help define best practices - still WIP link
- What will my CI/CD workflow look like with all of the above - still WIP link
Please note: if you plan on following this post, I saved you the trouble by designing it as a walkthrough, so make sure you comply with the prerequisites at the end of this post (which will also be mentioned in the separate parts of this series when needed).
1. Why there was no question I needed some kind of “helmer” (or package manager)?
This was the simple part …
As a DevOps consultant I consider myself a chef, puppet, ansible veteran, so I was likely to make templates out of any deployment/service/ingress/role Resource Definition - this wasn’t even a question; the question was: do I need helm, or would good old “one of the above” suffice?
Due to its popularity, a vast community and, of course, joining the Cloud Native Computing Foundation (which has its impact), helm is by far the most talked-about topic, right after / before istio - I’m not sure :smiley: So, to helm with it… let’s see what it can do for me to improve my Continuous Delivery experience.
In order to do that, let’s deploy our msa-demo-app, which does a “complex” task :smiley: as described below:
Before helm (and after, but differently) we needed to manage each of the 4 components:
- redis - our key-value store
- msa-api - provides an API for storing pings in the pings key
- msa-pinger - increments the pings key every n seconds
- msa-poller - shows the current pings count

They all need standard Kubernetes Resource Definitions… and we of course need to provide the glue so they work with each other, in the form of labels and selectors.
In our example we would need a Resource Definition for each of these components. To keep things simple we will use pure kubectl to create all the manifests - please follow the steps below:
2. Generate Kubernetes Manifests
Once you’ve done this part your current working directory should include the following Resource Definitions:
- Namespace -> msa-demo-ns.yml
- Deployment + Service -> redis.yml
- Deployment + Service -> msa-api.yml
- Deployment -> msa-pinger.yml
- Deployment -> msa-poller.yml
2.1 Create the msa-demo namespace (using kubectl --dry-run)
kubectl create namespace msa-demo --dry-run -o yaml > msa-demo-ns.yml
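which should yield (output from the kubectl version used throughout this post; it may differ slightly between versions):

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: msa-demo
spec: {}
status: {}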
This is the file we will use to create the msa-demo namespace and ensure isolation of our app. You could choose to deploy it to any namespace, e.g. default.
2.2 Create deployment + service for redis (using kubectl --dry-run)
kubectl run redis \
--image=redis \
--port=6379 \
--expose \
--dry-run -o yaml > redis.yml
which yields:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
run: redis
status:
loadBalancer: {}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
run: redis
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: redis
spec:
containers:
- image: redis
name: redis
ports:
- containerPort: 6379
resources: {}
status: {}
This is pretty straightforward: we want redis:latest with a service exposing redis on port 6379.
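Optionally, you can sanity-check any of the generated manifests against the cluster without actually creating anything (a small sketch; note that on kubectl 1.18+ this flag is spelled --dry-run=client):

# validate the manifest without creating the resources
kubectl apply --dry-run -f redis.yml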
2.3 Create deployment + service for msa-api
kubectl run msa-api \
--image=shelleg/msa-api:config \
--port=8080 \
--image-pull-policy=Always \
--expose \
--dry-run -o yaml > msa-api.yml
which yields:
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: msa-api
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
run: msa-api
status:
loadBalancer: {}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: msa-api
name: msa-api
spec:
replicas: 1
selector:
matchLabels:
run: msa-api
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: msa-api
spec:
containers:
- image: shelleg/msa-api:config
imagePullPolicy: Always
name: msa-api
ports:
- containerPort: 8080
resources: {}
status: {}
Similarly to how we deployed redis, this is pretty straightforward too.
One thing to highlight: the kubectl run name is reflected in the labels.run: msa-api and the selector.run: msa-api, but note (for later) how there is no affiliation between redis and msa-api - there is no “application” awareness (at least not natively).
Please note: when developing, using the “latest” tag or specifying "--image-pull-policy=Always" forces the docker daemon to pull the image even if there is already one on the underlying host. In our case we are using the config tag, which I might be updating with changes, hence this is set to Always.
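A quick way to confirm the policy made it into the generated manifest:

grep imagePullPolicy msa-api.yml
# should print: imagePullPolicy: Always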
2.4 Create deployment for msa-pinger
kubectl run msa-pinger \
--image=shelleg/msa-pinger:latest \
--env="API_URL=msa-api:8080" \
--env="DEBUG=true" \
--dry-run -o yaml > msa-pinger.yml
Explained:
2.4.1 Passing env vars
In our use case, instead of hardcoding --env="API_URL=msa-api:8080", we could use the MSA_API_SERVICE_HOST and MSA_API_SERVICE_PORT environment variables to construct the API_URL variable which the msa-pinger service expects to be set.
Considering we know we have a service named msa-api, we could choose to assume that we have that info lying around…
For example, if msa-api is deployed before msa-pinger or msa-poller, you could have an “easy life” using environment variables!
As an example, I am testing an existing deployment like so:
kubectl -n msa-demo exec -it `kubectl -n msa-demo get po | grep msa-pinger | awk '{print $1}'` -- printenv | grep MSA
Which should yield something like:
API_URL=${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
MSA_API_SERVICE_HOST=100.68.245.88
MSA_API_SERVICE_PORT=8080
MSA_API_PORT=tcp://100.68.245.88:8080
MSA_API_PORT_8080_TCP_PORT=8080
MSA_API_PORT_8080_TCP_ADDR=100.68.245.88
MSA_API_PORT_8080_TCP=tcp://100.68.245.88:8080
MSA_API_PORT_8080_TCP_PROTO=tcp
Hence we can use these environment variables in our deployment… which could look something like the following:
kubectl run msa-pinger \
--image=shelleg/msa-pinger:latest \
--env="API_URL=\${MSA_API_SERVICE_HOST}:\${MSA_API_SERVICE_PORT}" \
--env="DEBUG=true" \
--dry-run -o yaml > msa-pinger.yml
2.4.2 Additional env vars via --env=
--env="DEBUG=true"
by default there will be no log unless this environment variable is set so I guess this also shows how to pass an arbitrary environment variable to a pod (via deployment).
So if we replace our msa-api and 8080 values with the environment variables we expect to be present, MSA_API_SERVICE_HOST and MSA_API_SERVICE_PORT, our deployment should look like the following:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: msa-pinger
name: msa-pinger
spec:
replicas: 1
selector:
matchLabels:
run: msa-pinger
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: msa-pinger
spec:
containers:
- env:
- name: API_URL
value: ${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
- name: DEBUG
value: "true"
image: shelleg/msa-pinger:latest
name: msa-pinger
resources: {}
status: {}
Note the - env: section above.
I have to say this seemed like a hack to me from the start, and I could have used DNS names (well, I did the first time - but I’ll discuss that later on) - I’m sure to mention this when we start helming.
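As an aside, Kubernetes itself only expands references written as $(VAR_NAME) inside env values; the ${VAR} form above is passed to the container verbatim and relies on the entrypoint shell (or the app) to expand it. A minimal sketch of the Kubernetes-native form, assuming the msa-api service already exists when the pod starts so its discovery variables are available to the kubelet:

containers:
- env:
  - name: API_URL
    # $(VAR_NAME) references are expanded by the kubelet at container start
    value: $(MSA_API_SERVICE_HOST):$(MSA_API_SERVICE_PORT)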
2.5 Create deployment for msa-poller
kubectl run msa-poller \
--image=shelleg/msa-poller:latest \
--env="API_URL=\${MSA_API_SERVICE_HOST}:\${MSA_API_SERVICE_PORT}" \
--dry-run -o yaml > msa-poller.yml
which yields:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: msa-poller
name: msa-poller
spec:
replicas: 1
selector:
matchLabels:
run: msa-poller
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
run: msa-poller
spec:
containers:
- env:
- name: API_URL
value: ${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
image: shelleg/msa-poller:latest
name: msa-poller
resources: {}
status: {}
3. Deploy to kubernetes the ‘native’ way
Considering we now have all we need to deploy our demo app, let’s use kubectl to deploy our manifests like so:
kubectl create -f msa-demo-ns.yml
kubectl create -n msa-demo -f redis.yml
kubectl create -n msa-demo -f msa-api.yml
kubectl create -n msa-demo -f msa-pinger.yml -f msa-poller.yml
This would yield:
namespace/msa-demo created
service/redis created
deployment.apps/redis created
service/msa-api created
deployment.apps/msa-api created
deployment.apps/msa-pinger created
deployment.apps/msa-poller created
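For a quick bird’s-eye view of everything we just created (note that "get all" covers the common resource types - pods, services, deployments and replica sets - not literally everything):

kubectl -n msa-demo get all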
4. Verify our deployment
4.1 validate services
kubectl -n msa-demo get svc
should yield:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
msa-api ClusterIP 100.68.245.88 <none> 8080/TCP 4m
redis ClusterIP 100.64.28.115 <none> 6379/TCP 4m
4.2 validate pods
Get all pods with the label “run” (the default label key set when we used kubectl run):
kubectl get po -l run --all-namespaces
should yield:
NAMESPACE NAME READY STATUS RESTARTS AGE
msa-demo msa-api-6cb7c9c6bf-rz6fd 1/1 Running 0 32m
msa-demo msa-pinger-599f4c5bf9-s8sc7 1/1 Running 0 20m
msa-demo msa-poller-7cfcb4c8d-wlrr6 1/1 Running 0 15m
msa-demo redis-685c788858-dw7nl 1/1 Running 0 1h
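Since we set DEBUG=true on msa-pinger, its pod should already be logging its pings - a quick peek (the exact log format depends on the msa-pinger code):

kubectl -n msa-demo logs deploy/msa-pinger --tail=5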
4.3 validate redis
Test redis is working:
kubectl -n msa-demo exec -it \
`kubectl -n msa-demo get pod | grep redis | awk '{print $1}'` \
-- redis-cli KEYS '*'
should yield:
1) "pings"
Get the value of pings:
kubectl -n msa-demo exec -it \
`kubectl -n msa-demo get pod | grep redis | awk '{print $1}'` \
-- redis-cli GET pings
should yield some number:
"2152"
4.4 validate msa-api
Test msa-api is working:
kubectl -n msa-demo logs \
`kubectl -n msa-demo get pod | grep msa-api | awk '{print $1}'`
should yield:
> msa-api@1.0.0 start /opt/tikal
> node api.js
loading ./config/development.json
Connecting to cache_host: redis://10.110.76.53:6379
Server running on port 8080!
node_redis: Warning: Redis server does not require a password, but a password was supplied.
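If you want to poke at the API directly from your workstation, you can port-forward to the pod (a sketch - the exact routes depend on the msa-api implementation, so adjust the curl path accordingly):

# forward local port 8080 to the msa-api pod inside the cluster
kubectl -n msa-demo port-forward \
`kubectl -n msa-demo get pod | grep msa-api | awk '{print $1}'` 8080:8080 &

# then hit it locally
curl http://localhost:8080/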
Prerequisites
- minikube or an existing kubernetes cluster with administrative privileges
- helm-cli installed (the tiller component is covered in part 1)
- kubectl installed
- recommended: a default context (to simplify the steps mentioned throughout the series), or
- set the context to your liking and omit the --namespace from the provided command examples.