
Installation


Instructions:

IMPORTANT: Please see the latest published information about our application versions.

Overview

In order to drive towards a production-ready solution, our aim is to provide containerized and stable persistent volumes that Kubernetes can consume for deployments such as MariaDB (Galera) and other services that require state for OpenStack. Although we assume this project should provide a “batteries included” approach towards persistent storage, we want to allow operators to define their own solution as well. Examples of this work can be explored by looking through the various {{ project }}/templates/* and {{ project }}/values.yaml definitions throughout the project. If you have any questions or comments, please create an issue.
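For instance, each service ships as a standard Helm chart, so you can review the storage-related defaults before overriding them. The commands below are purely an illustration (they assume the mariadb chart and are run from the root of the aic-helm checkout):

# list a chart's templates and look at its tunable defaults
ls mariadb/templates/
cat mariadb/values.yaml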

Quickstart (Bare Metal)

This walk-through will help you set up a bare metal environment using kubeadm on Ubuntu 16.04. The assumption is that you have a working kubeadm environment and that your cluster is in a working state prior to deploying a CNI-enabled SDN. This is only to standardize our deployment process and limit questions to a known working deployment. It will expand as the project becomes more mature.
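If you still need to stand that cluster up, a minimal sketch with kubeadm looks like the following (the token is printed by kubeadm init, and 192.168.3.21 is the master address used in the example below; adjust both for your environment):

# on the master (kubenode01)
sudo kubeadm init

# on each worker node, using the token that kubeadm init prints
sudo kubeadm join --token <token> 192.168.3.21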

If your environment looks like this, you are ready to continue:

admin@kubenode01:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                       READY     STATUS              RESTARTS   AGE       IP              NODE
kube-system   dummy-2088944543-lg0vc                     1/1       Running             1          5m        192.168.3.21    kubenode01
kube-system   etcd-kubenode01                            1/1       Running             1          5m        192.168.3.21    kubenode01
kube-system   kube-apiserver-kubenode01                  1/1       Running             3          5m        192.168.3.21    kubenode01
kube-system   kube-controller-manager-kubenode01         1/1       Running             0          5m        192.168.3.21    kubenode01
kube-system   kube-discovery-1769846148-8g4d7            1/1       Running             1          5m        192.168.3.21    kubenode01
kube-system   kube-dns-2924299975-xxtrg                  0/4       ContainerCreating   0          5m        <none>          kubenode01
kube-system   kube-proxy-7kxpr                           1/1       Running             0          5m        192.168.3.22    kubenode02
kube-system   kube-proxy-b4xz3                           1/1       Running             0          5m        192.168.3.24    kubenode04
kube-system   kube-proxy-b62rp                           1/1       Running             0          5m        192.168.3.23    kubenode03
kube-system   kube-proxy-s1fpw                           1/1       Running             1          5m        192.168.3.21    kubenode01
kube-system   kube-proxy-thc4v                           1/1       Running             0          5m        192.168.3.25    kubenode05
kube-system   kube-scheduler-kubenode01                  1/1       Running             1          5m        192.168.3.21    kubenode01
admin@kubenode01:~$

Deploying a CNI-Enabled SDN (Calico)

Now it is time to deploy a CNI-enabled SDN. We have selected Calico (see versions, as this changes which manifest you use). For Calico version 2.0, you can apply the Kubeadm Hosted Install manifest:

admin@kubenode01:~$ kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml

NOTE: If you are using a 192.168.0.0/16 network for your hosts, you will need to modify line 42 of the manifest to change the CIDR range of the ippool. This must be a /16 or larger range, as the kube-controller-manager will hand out /24 ranges to each of the nodes. We have included a sample comparison of the changes you may need here and here.
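A sketch of that edit (it assumes the manifest's default ippool is 192.168.0.0/16 and substitutes 10.244.0.0/16; pick any /16 or larger range that does not overlap your own network):

admin@kubenode01:~$ curl -sO http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
admin@kubenode01:~$ sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml
admin@kubenode01:~$ kubectl apply -f calico.yaml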

Once this is deployed, you may want to view your Calico deployment by downloading the recommended version of calicoctl (a fetch sketch follows the output below) and performing the following command:

admin@kubenode01:~$ sudo calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 192.168.3.22 | node-to-node mesh | up    | 16:34:03 | Established |
| 192.168.3.23 | node-to-node mesh | up    | 16:33:59 | Established |
| 192.168.3.24 | node-to-node mesh | up    | 16:34:00 | Established |
| 192.168.3.25 | node-to-node mesh | up    | 16:33:59 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

admin@kubenode01:~$
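If you do not yet have calicoctl, one way to fetch a standalone binary is shown below (the v1.0.0 release and its download URL are assumptions; use whichever release is recommended for your Calico version):

admin@kubenode01:~$ sudo curl -L -o /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v1.0.0/calicoctl
admin@kubenode01:~$ sudo chmod +x /usr/local/bin/calicoctl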

It is important to note that the referenced Calico version 2.0 manifest (above) enables node-to-node mesh and nat-outgoing by default.

Preparing Persistent Storage

Persistent storage support is improving. Please check our current and/or resolved issues to find out how we're working to improve this solution for our project. For now, a few things need to be done first.

Kubernetes Controller Manager

Before deploying Ceph, we need to deploy a kube-controller-manager with RBD utilities. We are maintaining a kube-controller-manager container with the proper tools installed for a containerized Ceph deployment. If you would like to check tags and the security of these pre-built containers, you may view them at our public Quay container registry. If you would prefer to build this yourself, you are free to use our GitHub dockerfiles repository.

On your Kubernetes master, edit the image line of your kube-controller-manager json manifest:

admin@kubenode01:~$ export kube_version=v1.5.1
admin@kubenode01:~$ sed -i "s|gcr.io/google_containers/kube-controller-manager-amd64:$kube_version|quay.io/attcomdev/kube-controller-manager:$kube_version|g" /etc/kubernetes/manifests/kube-controller-manager.json

Now you will want to restart your master server.
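After the restart you can confirm the static pod came back up with the replacement image (the pod name assumes the kubenode01 master used throughout these examples); the output should be the quay.io/attcomdev image substituted above:

admin@kubenode01:~$ kubectl get pod kube-controller-manager-kubenode01 -n kube-system -o jsonpath='{.spec.containers[0].image}'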

Ceph Secrets Generation

Our deployment also assumes that you can generate secrets at the time of container deployment, so the Sigil binary must be present on your deployment host:

admin@kubenode01:~$ curl -L https://github.com/gliderlabs/sigil/releases/download/v0.4.0/sigil_0.4.0_Linux_x86_64.tgz | tar -zxC /usr/local/bin

Kube Controller Manager DNS Resolution

Until the following Kubernetes Pull Request is accepted, you will need to force the kube-controller-manager to use the internal skydns endpoint as a DNS server, and add your search suffix into the kube-controller-manager resolv.conf. As of now, kube-controller-manager only mirrors the host resolv.conf, but that is not sufficient if you want the controller to know how to communicate with containers (in the case of DaemonSets).

First, find out what the IP Address of your kube-dns deployment is:

admin@kubenode01:~$ kubectl describe svc kube-dns -n kube-system
Name:			kube-dns
Namespace:		kube-system
Labels:			component=kube-dns
			k8s-app=kube-dns
			kubernetes.io/cluster-service=true
			kubernetes.io/name=KubeDNS
			name=kube-dns
			tier=node
Selector:		name=kube-dns
Type:			ClusterIP
IP:			10.96.0.10
Port:			dns	53/UDP
Endpoints:		10.25.162.193:53
Port:			dns-tcp	53/TCP
Endpoints:		10.25.162.193:53
Session Affinity:	None
No events.
admin@kubenode01:~$

As you can see, in this example 10.25.162.193 is our kube-dns-2924299975-xxtrg endpoint. Now, have a look at your current kube-controller-manager-kubenode01 /etc/resolv.conf:

admin@kubenode01:~$ kubectl exec kube-controller-manager-kubenode01 -n kube-system -- cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.1.70
nameserver 8.8.8.8
search jinkit.com
admin@kubenode01:~$

What we need is for kube-controller-manager-kubenode01 /etc/resolv.conf to look like this:

admin@kubenode01:~$ kubectl exec kube-controller-manager-kubenode01 -n kube-system -- cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.25.162.193
nameserver 192.168.1.70
nameserver 8.8.8.8
search svc.cluster.local jinkit.com
admin@kubenode01:~$
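This page does not prescribe a particular mechanism for making that change. Because the static pod mirrors the node's /etc/resolv.conf, one possible approach is sketched below; it assumes the Ubuntu 16.04 resolvconf/ifupdown setup implied by the file header above, and the interface name, addresses, and search domain are illustrative values taken from the earlier examples:

# on kubenode01: add the kube-dns endpoint and the cluster search domain to
# the interface stanza in /etc/network/interfaces, for example:
#   dns-nameservers 10.25.162.193 192.168.1.70 8.8.8.8
#   dns-search svc.cluster.local jinkit.com
# then bounce the interface (or reboot the node) so resolvconf regenerates
# /etc/resolv.conf
sudo ifdown ens3 && sudo ifup ens3

# recreate the kube-controller-manager static pod so it re-mirrors the
# node's resolv.conf
sudo mv /etc/kubernetes/manifests/kube-controller-manager.json /tmp/
sleep 20
sudo mv /tmp/kube-controller-manager.json /etc/kubernetes/manifests/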

QuickStart

Before you begin, make sure you have read and understand the project Requirements.

You can start aic-helm fairly quickly. Assuming the above requirements are met, you can install the charts in a layered approach. The OpenStack parent chart, which installs all OpenStack services, is a work in progress and is simply a one-stack convenience. For now, please install each individual service chart as noted below.

Note that the bootstrap chart is meant to be installed in every namespace you plan to use. It helps install required secrets.
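As an illustration (the namespace and release name here are hypothetical, and this presumes the local chart repo built in the steps below), installing the bootstrap chart into an additional namespace would look like:

helm install --name=bootstrap-mynamespace local/bootstrap --namespace=mynamespace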

If any helm install step fails, you can back it out with helm delete <releaseName> --purge
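For example, to back out a failed mariadb release before retrying:

helm delete mariadb --purge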

Make sure sigil is installed to perform the ceph secret generation, as noted in the Requirements.

# label all known nodes as candidates for pods
kubectl label nodes node-type=storage --all
kubectl label nodes openstack-control-plane=enabled --all

# move into the aic-helm directory
cd aic-helm

# set your network cidr--these values are only
# appropriate for calico and may be different in your
# environment
export osd_cluster_network=10.32.0.0/12
export osd_public_network=10.32.0.0/12

# on every node that will receive ceph instances, 
# create some local directories used as nodeDirs
# for persistent storage
mkdir -p /var/lib/aic-helm/ceph

# generate secrets (ceph, etc.)
cd common/utils/secret-generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..

# now you are ready to build aic-helm
helm serve . &
helm repo add local http://localhost:8879/charts
make

# install ceph
helm install --name=ceph local/ceph --namespace=ceph

# bootstrap the openstack namespace for chart installation
helm install --name=bootstrap local/bootstrap --namespace=openstack

# install mariadb
helm install --name=mariadb local/mariadb --namespace=openstack

# install rabbitmq/memcache
helm install --name=memcached local/memcached --namespace=openstack
helm install --name=rabbitmq local/rabbitmq --namespace=openstack

# install keystone
helm install --name=keystone local/keystone --namespace=openstack

# install horizon
helm install --name=horizon local/horizon --namespace=openstack

# install glance
helm install --name=glance local/glance --namespace=openstack

# ensure all services enter a running state, with the
# exception of one jobs/glance-post and the ceph
# rgw containers, due to outstanding issues
watch kubectl get all --namespace=openstack

You should now be able to access horizon at http:// using admin/password
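To find where Horizon is actually exposed, you can inspect its service (this assumes the chart creates a service named horizon in the openstack namespace):

kubectl get svc horizon --namespace=openstack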
