A collection of Ansible playbooks for deploying a Kubernetes development cluster. The playbooks are fully automated: a single command brings up a Kubernetes cluster on AWS EC2.
This section walks you through deploying a cluster on AWS EC2.
Prerequisites:
$ ./py-ansible.sh
[Updating & Installing Python Latest Version]
[Installing Ansible]
[successfully Installed Python3 & Ansible]
$ ansible-vault edit group_vars/cred.yml
Vault password: 1233
group_vars/cred.yml
In this example the Ansible Vault password is set to 1233.
$ ansible-vault view group_vars/cred.yml
Vault password: 1233
access_key: **********************
secret_key: **************************
$ ansible-vault rekey group_vars/cred.yml
Vault password: ***
New Vault password: ***
Confirm New Vault password: ***
Rekey successful
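The interactive Vault prompts above can be skipped in scripts by using a password file. A minimal sketch of that pattern (the file name vault_pass.txt is an assumption for illustration; keep the real file out of version control):

```shell
set -eu

# Store the Vault password in a file readable only by you, so every
# ansible-vault call can skip the interactive prompt.
printf '%s\n' '1233' > vault_pass.txt
chmod 600 vault_pass.txt
stat -c '%a' vault_pass.txt   # prints 600 (GNU stat)

# With the file in place, the commands above become non-interactive, e.g.:
#   ansible-vault view  --vault-password-file vault_pass.txt group_vars/cred.yml
#   ansible-vault rekey --vault-password-file vault_pass.txt \
#                       --new-vault-password-file new_pass.txt group_vars/cred.yml
```
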
Provisioning creates a custom Ansible inventory file used to set up the k8s cluster.
$ cat /etc/ansible/custom_inv.ini
# Custom inventory file used when setting up the k8s cluster
[master]
3.90.3.247 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[worker]
3.89.143.224 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[addnode]
35.173.233.160 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[kube_cluster:children]
master
worker
addnode
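If you need to script against the generated inventory — for example, to loop over every node with ssh — the host IPs can be pulled out with a short awk filter. A sketch, working on a local copy of the sample inventory above (the real file lives at /etc/ansible/custom_inv.ini):

```shell
set -eu

# Local copy of the sample inventory shown above.
cat > custom_inv.ini <<'EOF'
[master]
3.90.3.247 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[worker]
3.89.143.224 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[addnode]
35.173.233.160 ansible_ssh_private_key_file=/etc/ansible/id_rsa_aws
[kube_cluster:children]
master
worker
addnode
EOF

# Host lines are the ones that start with an IP; print just the address.
# Group names under [kube_cluster:children] don't match, so they are skipped.
awk '/^[0-9]/ {print $1}' custom_inv.ini
```

This prints the three node IPs, one per line, ready to feed into a `for host in $(...)` loop.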
AWS EC2-related variables are located in group_vars/all.yml:
# To change the region
ec2:
  region: eu-west-1

# To change the master instance type
master:
  vm_type: t2.medium

# To change the number of worker nodes and their type
worker:
  vm_type: t2.micro
  vm_count: 1
Set the variables in group_vars/all.yml to reflect the options you need:
# kube_version ('v1.20.4', 'v1.19.2')
kube_version: v1.20.4

# Supported network implementations ('flannel', 'calico')
network: calico

# Supported container runtimes ('docker', 'containerd', 'crio')
container_runtime: crio

# Additional features to install
additional_features:
  helm: false
  nfs_dynamic: true
  metric_server: true
  logging: true
  kube_state_metric: true
  dynamic_hostpath_provisioning: true

# Dashboard
enable_dashboard: yes
need_to_save_dashboard_token: yes

# NFS directory
nfs:
  dir: /nfs-private
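Since only a few values are valid for network and container_runtime, a small pre-flight check can fail fast on typos before Ansible runs. A hypothetical sketch (in a real script the two values would be read from group_vars/all.yml rather than hard-coded):

```shell
set -eu

# Hard-coded here for illustration; read from group_vars/all.yml in practice.
network="calico"
container_runtime="crio"

# Reject anything outside the supported sets listed in all.yml's comments.
case "$network" in
  flannel|calico) ;;
  *) echo "unsupported network: $network" >&2; exit 1 ;;
esac

case "$container_runtime" in
  docker|containerd|crio) ;;
  *) echo "unsupported container_runtime: $container_runtime" >&2; exit 1 ;;
esac

echo "options OK"   # prints "options OK" when both values are supported
```
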
If everything is ready, run the following command to provision the EC2 instances and deploy the cluster onto them:
$ ./main.sh initcluster
Vault password: 1233
Verify that the cluster is deployed by checking the nodes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kmaster Ready control-plane,master 3h28m v1.20.4 172.31.4.27 Ubuntu 20.04.2 LTS 5.4.0-1038-aws cri-o://1.20.1
ip-172-31-91-218 Ready <none> 3h25m v1.20.4 172.31.13.91 Ubuntu 20.04.2 LTS 5.4.0-1038-aws cri-o://1.20.1
...
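A scripted check on top of that kubectl output can confirm every node reports Ready. The sketch below defines a hypothetical helper and exercises it on sample lines mirroring the output above; in practice, pipe `kubectl get nodes --no-headers` into it:

```shell
# Count nodes whose STATUS column (field 2) is anything other than Ready.
# Usage: kubectl get nodes --no-headers | not_ready_count
not_ready_count() { awk '$2 != "Ready" {n++} END {print n+0}'; }

# Exercise it on sample lines shaped like the output above:
printf '%s\n' \
  'kmaster Ready control-plane,master 3h28m v1.20.4' \
  'ip-172-31-91-218 Ready <none> 3h25m v1.20.4' \
  | not_ready_count    # prints 0
```

A non-zero count means some node is still joining (or broken) and worth a `kubectl describe node`.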
Specify how many worker nodes you want to add in group_vars/all.yml:
# Number of worker nodes to add to the existing k8s cluster
addnode: 1
Then run the command below:
$ ./main.sh addnode
Vault password: 1233
Verify that the worker nodes have been added by checking the cluster:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kmaster Ready control-plane,master 3h28m v1.20.4 172.31.4.27 Ubuntu 20.04.2 LTS 5.4.0-1038-aws cri-o://1.20.1
ip-172-31-91-218 Ready <none> 3h25m v1.20.4 172.31.13.91 Ubuntu 20.04.2 LTS 5.4.0-1038-aws cri-o://1.20.1
ip-172-31-92-95 Ready <none> 1h25m v1.20.4 172.31.25.10 Ubuntu 20.04.2 LTS 5.4.0-1038-aws cri-o://1.20.1
...
To delete all the AWS resources, run the command below:
$ ./main.sh reset
Do you want to decommission the AWS resources(y/n):n
Decommission is cancelled by user
$ ./main.sh reset
Do you want to decommission the AWS resources(y/n):y
Vault password: 1233
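The y/n guard shown in the reset transcript can be sketched as a small shell function. This is a hypothetical reconstruction for illustration, not the actual main.sh code:

```shell
# Hypothetical reconstruction of the confirmation guard in the reset flow:
# decommission only proceeds on an explicit y/Y answer.
confirm_decommission() {
  printf 'Do you want to decommission the AWS resources(y/n):'
  read -r answer
  case "$answer" in
    y|Y) echo 'Decommissioning AWS resources...' ;;
    *)   echo 'Decommission is cancelled by user'; return 1 ;;
  esac
}

# Feed an answer on stdin to try it out:
echo n | confirm_decommission || true   # prints the cancellation message
```

Defaulting to cancel on anything other than y/Y is the safe choice for a command that tears down cloud resources.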
Give a ⭐️ if this project helped you!
👤 Adil Abdullah Khan