
My kubernetes setup

[Linux | Posted: 2016]

This is a description of my local kubernetes setup. If you want to set up kubernetes yourself, chances are you should follow the proper guide. This is intended to be the reference that I was desperate for when I set out doing this a few months ago.

I wanted to run my own kubernetes deployment to run applications and experiment. I didn't just want to try out kubernetes, I wanted to run it 24/7. From the looks of it, the easiest way to do this is using Google Compute Engine or AWS. The problem with both of these is that to run 24/7 you end up spending quite a lot of money every month just to keep a basic install running.

After considering a bunch of options (including running a Raspberry Pi cluster) I came to the conclusion that my best setup would be to run a single physical server that hosted a bunch of virtual machines.

I picked Xen as my hypervisor, Ubuntu as my "dom0" (more on this later) and CoreOS as my kubernetes host. Here's my setup.

Hardware

- Dell T20 Server
- Intel i5-4590
- 16 GB RAM
- 120 GB SSD

Software

Hypervisor: Xen Hypervisor / Ubuntu 16.04. I found myself thoroughly confused by all this talk of "dom0" but the gist of this is: You install Ubuntu 16.04 on your server, you then install (via apt-get) Xen which installs itself as the main OS with your original Ubuntu install as a virtual machine. This virtual machine is called "dom0" and is what you use to manage all your other virtual machines.

(Another source of confusion - Xen is not XenServer, which is a commercial product you can safely ignore).

Kubernetes OS: CoreOS Alpha Channel. Right now Stable does not include the kubelet (which we need) so I'm using Alpha. This is what I picked as it tries to support Kubernetes right out of the box.

Installing Xen

On a fresh Ubuntu 16.04, install Xen, libvirt and virtinst, set Xen as the default boot entry and restart. virtinst gives us a CLI we will use to launch virtual machines later.

```shell
sudo apt-get install xen-hypervisor-amd64 virtinst
sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
sudo update-grub
sudo reboot
```

What comes back up should be the original Ubuntu install running as a virtual machine on the Xen hypervisor. Because it's the original install we don't know for sure that anything actually changed. We can check with xl:

```
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 19989     4     r-----      75.3
```

Looks good!

Installing Kubernetes

Kubernetes comes with these nifty scripts that basically set up your whole cluster for you. The problem I found with this is I wanted to manage (and understand) the pieces of software myself. I didn't want a mysterious bash script that promised to take care of it all for me.

Instead I've created my own set of mysterious scripts, slightly less generated and templated, that may be useful to some as examples. Here's how to use them.

We're going to use as little as possible of my stuff - the following git repo contains 4 CoreOS cloud-config files. These define basic configuration (network setup, applications to run). There's also a piece of config to generate our SSL certificate for the cluster.
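For a flavour of what these files look like, here's a hypothetical, heavily trimmed cloud-config in the same style. The unit names and the write_files path are illustrative only; the real files in the repo are the reference:

```yaml
#cloud-config

ssh_authorized_keys:
  - SSH_KEY   # placeholder, substituted with your real key later

coreos:
  units:
    # Overlay network between the nodes (illustrative fragment)
    - name: flanneld.service
      command: start
    # The kubelet joins the cluster and launches everything else
    - name: kubelet.service
      command: start

write_files:
  # Static pod manifests for the kubelet to pick up go here
  - path: /etc/kubernetes/manifests/kube-proxy.yaml
    content: |
      # (manifest body omitted)
```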

So, grab my config from GitHub and grab the latest CoreOS Alpha:

```shell
sudo su
mkdir -p /var/lib/libvirt/images/
cd /var/lib/libvirt/images/
git clone -b blog_post https://github.com/andrewmichaelsmith/xen-coreos-kube.git coreos
cd coreos
wget https://alpha.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2 -O - | bzcat > coreos_production_qemu_image.img
```

Now create a disk for master, node1, node2, node3:

```shell
qemu-img create -f qcow2 -b coreos_production_qemu_image.img master1.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node1.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node2.qcow2
qemu-img create -f qcow2 -b coreos_production_qemu_image.img node3.qcow2
```

You may need to generate an SSH key if you haven't already:

```shell
ssh-keygen -t rsa -b 4096 -C "$USER@$HOSTNAME"
```

We then put our SSH key into the cloud-configs for our nodes:

```shell
KEY=$(cat ~/.ssh/id_rsa.pub)
sed "s#SSH_KEY#$KEY#g" < master1/openstack/latest/user_data.tmpl > master1/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node1/openstack/latest/user_data.tmpl > node1/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node2/openstack/latest/user_data.tmpl > node2/openstack/latest/user_data
sed "s#SSH_KEY#$KEY#g" < node3/openstack/latest/user_data.tmpl > node3/openstack/latest/user_data
```

We also need to generate our certificates:

```shell
cd certs
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
cd ..
```

And then put the certificates we generated into the master node's cloud-config:

```shell
# Total hack, so it's indented correctly when we move it in to .yml
sed -i 's/^/ /' certs/*.pem
sed -i $'/CA.PEM/ {r certs/ca.pem\n d}' master1/openstack/latest/user_data
sed -i $'/APISERVER.PEM/ {r certs/apiserver.pem\n d}' master1/openstack/latest/user_data
sed -i $'/APISERVER-KEY.PEM/ {r certs/apiserver-key.pem\n d}' master1/openstack/latest/user_data
```
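The `r`/`d` sed trick is worth unpacking: `r` queues a file's contents to be printed after the current line, and `d` deletes the placeholder line itself, so the file's contents end up exactly where the placeholder was. A self-contained sketch with throwaway files (the names here are made up; the real commands operate on certs/*.pem and master1's user_data):

```shell
# Work in a scratch directory so nothing real is touched
workdir=$(mktemp -d) && cd "$workdir"

# Two-line stand-in for a certificate
printf 'key line 1\nkey line 2\n' > payload.pem

# Stand-in user_data containing a placeholder line
printf 'certificate: |\nPAYLOAD.PEM\nother: value\n' > user_data

# Indent the payload so it nests under the YAML key
sed -i 's/^/  /' payload.pem

# 'r' queues payload.pem to print after the matching line;
# 'd' then deletes the placeholder line itself
sed -i $'/PAYLOAD.PEM/ {r payload.pem\n d}' user_data

cat user_data
```

After this, user_data reads `certificate: |` followed by the two indented payload lines, then `other: value`.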

Configs done, we can validate to double check:

```shell
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@master1/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node1/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node2/openstack/latest/user_data' | python -mjson.tool
curl 'https://validate.core-os.net/validate' -X PUT --data-binary '@node3/openstack/latest/user_data' | python -mjson.tool
```

If that passed ("null" from the server), create the CoreOS virtual machines using those disks and cloud-configs:

```shell
virt-install \
  --connect qemu:///system \
  --import \
  --name master1 \
  --ram 2048 \
  --vcpus 2 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/master1.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/master1/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:3 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node1 \
  --ram 2048 \
  --vcpus 2 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node1.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node1/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:0 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node2 \
  --ram 2048 \
  --vcpus 1 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node2.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node2/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:1 \
  --vnc \
  --noautoconsole \
  --hvm

virt-install \
  --connect qemu:///system \
  --import \
  --name node3 \
  --ram 2048 \
  --vcpus 1 \
  --os-type=linux \
  --os-variant=virtio26 \
  --disk path=/var/lib/libvirt/images/coreos/node3.qcow2,format=qcow2,bus=virtio \
  --filesystem /var/lib/libvirt/images/coreos/node3/,config-2,type=mount,mode=squash \
  --network bridge=virbr0,mac=52:54:00:00:00:2 \
  --vnc \
  --noautoconsole \
  --hvm
```

This will start 4 virtual machines running CoreOS and our cloud configs. Depending on where you run this (internet speed, server power) this can take quite a long time to get up and running.

What happens:

- The flannel image is downloaded
- The kubelet starts and downloads hyperkube
- Containers are started for the api server, controller manager and scheduler on the master
- A container for kube-proxy starts on the nodes

If you need to you can attach to the console and monitor a node booting up:

```shell
virsh console master1
```

You can also ssh on to the master and check journalctl:

```shell
ssh core@192.168.122.254
journalctl -f
```

So.. did it work? Let's try using kubectl (which we need to install locally first):

```shell
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.2.3/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
```

Let's see:

```
# kubectl -s http://192.168.122.254:8080 get nodes
NAME              STATUS    AGE
192.168.122.2     Ready     1m
192.168.122.254   Ready     1m
192.168.122.3     Ready     1m
192.168.122.4     Ready     1m
```

One last thing: if we try to list the pods (running processes) we won't get anything. We need to create the "kube-system" namespace, which can be easily done:

```shell
curl -H "Content-Type: application/json" -XPOST -d'{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"kube-system"}}' "http://192.168.122.254:8080/api/v1/namespaces"
```
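If you'd rather not hand-craft the API call, the same thing can be expressed as a manifest and pushed with kubectl (a sketch; the file name is arbitrary):

```yaml
# kube-system.yaml - apply with:
#   kubectl -s http://192.168.122.254:8080 create -f kube-system.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
```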

Now:

```
NAME                                      READY     STATUS    RESTARTS   AGE
kube-apiserver-192.168.122.254            1/1       Running   0          3m
kube-controller-manager-192.168.122.254   1/1       Running   1          4m
kube-proxy-192.168.122.2                  1/1       Running   1          4m
kube-proxy-192.168.122.254                1/1       Running   0          3m
kube-proxy-192.168.122.3                  1/1       Running   0          3m
kube-proxy-192.168.122.4                  1/1       Running   0          3m
kube-scheduler-192.168.122.254            1/1       Running   0          3m
```

Woohoo!

Conclusion

So what have we actually done? We've turned an Ubuntu server into a Xen hypervisor. On that hypervisor we've created 4 virtual machines, all running CoreOS. Using the CoreOS config from my git repo we've set up 1 CoreOS install running the master kubernetes components, and 3 others running the node components.

There are many ways to get Kubernetes running on CoreOS. The particular way we've set it up is as follows.

- flannel service - This handles our networking. It allows a container on one node to speak to a container on another node.
- etcd service - This is where kubernetes persists state.
- docker service - Docker is how this kubernetes setup launches images.
- kubelet service - This is the only kubernetes component installed as a system service. We use the kubelet to join our kubernetes cluster and launch other kubernetes applications.

As well as system services, we've also installed the following as services managed by kubernetes. We do this by placing kubernetes config in /etc/kubernetes/manifests/. The kubelet service monitors this directory and launches applications based on what it finds.

- kube-apiserver
- kube-scheduler
- kube-controller-manager
- kube-proxy
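To make the manifest mechanism concrete, here's a hypothetical, minimal static pod manifest in the shape the kubelet expects. The image tag and flags are illustrative guesses; the actual manifests in the repo are the reference:

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-scheduler
      # hyperkube bundles all the kubernetes binaries; version is a guess
      image: quay.io/coreos/hyperkube:v1.2.3_coreos.0
      command:
        - /hyperkube
        - scheduler
        - --master=http://127.0.0.1:8080
```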

That's all! We've now got a fully functioning kubernetes cluster. Time to play with it.
