
Installing and Deploying a Kubernetes Cluster on Ubuntu 16.04


Introduction

The kube-up scripts that Kubernetes currently provides for Ubuntu do not support 15.10 or 16.04, the two releases that use systemd as their init system.

This article describes in detail how to manually install and deploy Kubernetes on an Ubuntu 16.04 cluster, running the components as native binaries rather than inside Docker.

The manual steps can easily be turned into an automated deployment script, and working through the whole process is also a good way to gain a deeper understanding of the Kubernetes architecture and its individual components.

Environment

Versions

Component    Version
etcd         2.3.1
Flannel      0.5.5
Kubernetes   1.3.4
Hosts

Host         IP              OS
k8s-master   172.16.203.133  Ubuntu 16.04
k8s-node01   172.16.203.134  Ubuntu 16.04
k8s-node02   172.16.203.135  Ubuntu 16.04

Install Docker

Install the latest Docker Engine (currently 1.12) on every host, following https://docs.docker.com/engine/installation/linux/ubuntulinux/ (a sketch of the steps is shown below).
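A minimal sketch of the apt-based installation from that guide (the repository URL and package name are those of the Docker 1.12 era; check them against the current documentation before running):

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
  --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install -y docker-engine
docker --version   # expect Docker version 1.12.x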

Deploy the etcd Cluster

We will deploy an etcd cluster across the 3 hosts.

Download etcd

On the deployment machine, download etcd:

ETCD_VERSION=${ETCD_VERSION:-"2.3.1"}
ETCD="etcd-v${ETCD_VERSION}-linux-amd64"
curl -L https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/${ETCD}.tar.gz -o etcd.tar.gz

tar xzf etcd.tar.gz -C /tmp
cd /tmp/etcd-v${ETCD_VERSION}-linux-amd64

for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h mkdir -p '$HOME/kube' && scp -r etcd* user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do ssh user@$h 'sudo mkdir -p /opt/bin && sudo mv $HOME/kube/* /opt/bin && rm -rf $HOME/kube/*'; done

Configure the etcd service

On each host, create /opt/config/etcd.conf and /lib/systemd/system/etcd.service as shown below. The IP addresses in etcd.conf are the master's; change them to the address of the host you are configuring.

/opt/config/etcd.conf

sudo mkdir -p /var/lib/etcd/
sudo mkdir -p /opt/config/
cat <<EOF | sudo tee /opt/config/etcd.conf
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=$(hostname)
ETCD_INITIAL_CLUSTER=k8s-master=http://172.16.203.133:2380,k8s-node01=http://172.16.203.134:2380,k8s-node02=http://172.16.203.135:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_LISTEN_PEER_URLS=http://172.16.203.133:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://172.16.203.133:2380
ETCD_ADVERTISE_CLIENT_URLS=http://172.16.203.133:2379
ETCD_LISTEN_CLIENT_URLS=http://172.16.203.133:2379
GOMAXPROCS=$(nproc)
EOF

/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target


[Service]
User=root
Type=simple
EnvironmentFile=-/opt/config/etcd.conf
ExecStart=/opt/bin/etcd
Restart=on-failure
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target

Then run on each host:

sudo systemctl daemon-reload 
sudo systemctl enable etcd
sudo systemctl start etcd
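
Before moving on, verify that the three members formed a healthy cluster, using the etcdctl binary copied to /opt/bin:

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" cluster-health
# every member should report, e.g.
# member ... is healthy: got healthy result from http://172.16.203.133:2379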

Download Flannel

FLANNEL_VERSION=${FLANNEL_VERSION:-"0.5.5"}
curl -L https://github.com/coreos/flannel/releases/download/v${FLANNEL_VERSION}/flannel-${FLANNEL_VERSION}-linux-amd64.tar.gz -o flannel.tar.gz
tar xzf  flannel.tar.gz -C /tmp

Build Kubernetes

Build Kubernetes on the deployment machine; this requires Docker Engine (1.12) and Go (1.6.2):

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release-skip-tests
tar xzf _output/release-stage/full/kubernetes/server/kubernetes-server-linux-amd64.tar.gz -C /tmp

Note

Besides linux/amd64, the build cross-compiles for other platforms by default. To reduce build time, edit hack/lib/golang.sh and comment out every platform other than linux/amd64 in KUBE_SERVER_PLATFORMS, KUBE_CLIENT_PLATFORMS, and KUBE_TEST_PLATFORMS, as in the sketch below.
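
A sketch of the edit (the exact platform lists vary between Kubernetes versions):

# hack/lib/golang.sh (excerpt)
readonly KUBE_SERVER_PLATFORMS=(
  linux/amd64
  # linux/arm
)
readonly KUBE_CLIENT_PLATFORMS=(
  linux/amd64
  # linux/386
  # darwin/amd64
  # windows/amd64
)
readonly KUBE_TEST_PLATFORMS=(
  linux/amd64
  # darwin/amd64
)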

Deploy the Kubernetes Master

Copy the binaries

cd /tmp
scp kubernetes/server/bin/kube-apiserver \
     kubernetes/server/bin/kube-controller-manager \
     kubernetes/server/bin/kube-scheduler kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@172.16.203.133:~/kube
scp flannel-${FLANNEL_VERSION}/flanneld user@172.16.203.133:~/kube
ssh -t user@172.16.203.133 'sudo mv ~/kube/* /opt/bin/'

Create certificates

On the master host, run the following commands (as root) to create the certificates:

mkdir -p /srv/kubernetes/

cd /srv/kubernetes

export MASTER_IP=172.16.203.133
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=${MASTER_IP}" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 
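
Optionally, confirm that the server certificate chains back to the CA:

openssl verify -CAfile ca.crt server.crt
# server.crt: OK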

Configure the kube-apiserver service

We will use the following Service cluster IP range and Flannel network (they must not overlap with each other or with the host network):

SERVICE_CLUSTER_IP_RANGE=172.18.0.0/16

FLANNEL_NET=192.168.0.0/16

On the master host, create /lib/systemd/system/kube-apiserver.service with the following content:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
User=root
ExecStart=/opt/bin/kube-apiserver \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --etcd-servers=http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379 \
  --logtostderr=true \
  --allow-privileged=false \
  --service-cluster-ip-range=172.18.0.0/16 \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota \
  --service-node-port-range=30000-32767 \
  --advertise-address=172.16.203.133 \
  --client-ca-file=/srv/kubernetes/ca.crt \
  --tls-cert-file=/srv/kubernetes/server.crt \
  --tls-private-key-file=/srv/kubernetes/server.key
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-controller-manager service

On the master host, create /lib/systemd/system/kube-controller-manager.service with the following content:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-controller-manager \
  --master=127.0.0.1:8080 \
  --root-ca-file=/srv/kubernetes/ca.crt \
  --service-account-private-key-file=/srv/kubernetes/server.key \
  --logtostderr=true
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the kube-scheduler service

On the master host, create /lib/systemd/system/kube-scheduler.service with the following content:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
User=root
ExecStart=/opt/bin/kube-scheduler \
  --logtostderr=true \
  --master=127.0.0.1:8080
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure the flanneld service

On the master host, create /lib/systemd/system/flanneld.service with the following content:

[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
User=root
ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  --iface=172.16.203.133 \
  --ip-masq
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the services

First, write Flannel's network configuration into etcd (flanneld reads this key at startup):

/opt/bin/etcdctl --endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" mk /coreos.com/network/config \
  '{"Network":"192.168.0.0/16","Backend":{"Type":"vxlan"}}'

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl enable kube-controller-manager
sudo systemctl enable kube-scheduler
sudo systemctl enable flanneld
sudo systemctl start kube-apiserver
sudo systemctl start kube-controller-manager
sudo systemctl start kube-scheduler
sudo systemctl start flanneld

Modify the Docker service

source /run/flannel/subnet.env

sudo sed -i "s|^ExecStart=/usr/bin/dockerd -H fd://$|ExecStart=/usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}|g" /lib/systemd/system/docker.service
rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq 0 ]]; then
  sudo ip link set dev docker0 down
  sudo ip link delete docker0
fi


sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
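
After the restart, docker0 should have an address inside the Flannel subnet assigned to this host; a quick sanity check:

cat /run/flannel/subnet.env
# FLANNEL_SUBNET=192.168.56.1/24   (the subnet varies per host)
# FLANNEL_MTU=1450                 (typical for the vxlan backend)
ip addr show docker0 | grep 'inet '
# the inet address should fall inside FLANNEL_SUBNET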

Deploy the Kubernetes Nodes

Copy the binaries

cd /tmp
for h in k8s-master k8s-node01 k8s-node02; do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy user@$h:~/kube; done
for h in k8s-master k8s-node01 k8s-node02; do scp flannel-${FLANNEL_VERSION}/flanneld user@$h:~/kube;done
for h in k8s-master k8s-node01 k8s-node02; do ssh -t user@$h 'sudo mkdir -p /opt/bin && sudo mv ~/kube/* /opt/bin/'; done

Configure Flanneld and modify the Docker service

See the corresponding steps in the Master section: configure the flanneld service, start it, and modify the Docker service. Remember to change the --iface address to each node's own IP; see the sketch below.
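
For example, on k8s-node01 the flanneld ExecStart differs from the master's only in the interface address (use 172.16.203.135 on k8s-node02):

ExecStart=/opt/bin/flanneld \
  --etcd-endpoints="http://172.16.203.133:2379,http://172.16.203.134:2379,http://172.16.203.135:2379" \
  --iface=172.16.203.134 \
  --ip-masq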

Configure the kubelet service

Create /lib/systemd/system/kubelet.service; change --hostname-override to the node's own IP address (the example uses the master's):

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.133 \
  --api-servers=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet

Configure the kube-proxy service

Create /lib/systemd/system/kube-proxy.service; again, change --hostname-override to the node's own IP address:

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
ExecStart=/opt/bin/kube-proxy  \
  --hostname-override=172.16.203.133 \
  --master=http://172.16.203.133:8080 \
  --logtostderr=true
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy

Configure and Verify Kubernetes

Generate the kubeconfig file

Run the following on the deployment machine:

KUBE_USER=admin
KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')

DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu

KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=http://172.16.203.133:8080 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --username=${KUBE_USER} --password=${KUBE_PASSWORD}
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}"

Verify

$ kubectl get nodes
NAME             STATUS    AGE
172.16.203.133   Ready     2h
172.16.203.134   Ready     2h
172.16.203.135   Ready     2h

$ cat <<EOF > nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

$ kubectl create -f nginx.yml
$ kubectl get pods -l run=my-nginx -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP             NODE
my-nginx-1636613490-9ibg1   1/1       Running   0          13m       192.168.31.2   172.16.203.134
my-nginx-1636613490-erx98   1/1       Running   0          13m       192.168.56.3   172.16.203.133

$ kubectl expose deployment/my-nginx

$ kubectl get service my-nginx
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   172.18.28.48   <none>        80/TCP    37s

From any of the three hosts, the nginx service can be reached through either the pod IPs or the service IP:

$ curl http://172.18.28.48
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html> 

User Authentication and Security

When we generated the kubeconfig file in the last step, we created a username and password, but we never enabled them on the apiserver (that would require the --basic-auth-file flag). In other words, anyone who can reach 172.16.203.133:8080 can operate the cluster. For an internal system with proper access rules in place, this may be acceptable.
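
If you do want to enable basic authentication, the apiserver accepts a CSV credentials file via --basic-auth-file; a minimal sketch (the file path is illustrative):

# each line of the file is: password,user,uid
echo "${KUBE_PASSWORD},${KUBE_USER},admin" | sudo tee /srv/kubernetes/basic_auth.csv
# then add --basic-auth-file=/srv/kubernetes/basic_auth.csv to the
# kube-apiserver ExecStart and restart the service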

To strengthen security, certificate authentication can be enabled in one of two ways: authenticate both the minions and the clients to the master, or authenticate only the clients to the master.

For generating and configuring certificates on the minion nodes, see http://kubernetes.io/docs/getting-started-guides/scratch/#security-models and the relevant parts of http://kubernetes.io/docs/getting-started-guides/ubuntu-calico/.

Here we look at enabling certificate authentication only between clients and the master. This is still reasonably secure: the minions and the master normally sit in the same data center, so access to HTTP port 8080 can be restricted to the data center, while outside clients can only reach the apiserver over HTTPS with a client certificate.
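
One illustrative way to enforce that split on the master, assuming the cluster hosts sit in 172.16.203.0/24 (adapt to your network):

# allow HTTP 8080 only from the data-center subnet; HTTPS 6443 stays open
sudo iptables -A INPUT -p tcp --dport 8080 -s 172.16.203.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP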

Create a client certificate

Run the following commands on the master host:

cd /srv/kubernetes

export CLIENT_IP=172.16.203.1
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=${CLIENT_IP}" -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 10000

Copy client.crt and client.key to the deployment machine, then run the following commands to generate the kubeconfig file:

DEFAULT_KUBECONFIG="${HOME}/.kube/config"
KUBECONFIG=${KUBECONFIG:-$DEFAULT_KUBECONFIG}
mkdir -p $(dirname "${KUBECONFIG}")
touch "${KUBECONFIG}"
CONTEXT=ubuntu
KUBE_CERT=client.crt
KUBE_KEY=client.key

KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-cluster "${CONTEXT}" --server=https://172.16.203.133:6443 --insecure-skip-tls-verify=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-credentials "${CONTEXT}" --client-certificate=${KUBE_CERT} --client-key=${KUBE_KEY} --embed-certs=true
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
KUBECONFIG="${KUBECONFIG}" /tmp/kubernetes/server/bin/kubectl config use-context "${CONTEXT}"  --cluster="${CONTEXT}"

Deploy Add-on Components

Deploy DNS

DNS_SERVER_IP="172.18.8.8"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
KUBE_APISERVER_URL=http://172.16.203.133:8080

cat <<EOF > skydns.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v17.1
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v17.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: $DNS_REPLICAS
  selector:
    k8s-app: kube-dns
    version: v17.1
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v17.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubedns
        image: gcr.io/google_containers/kubedns-amd64:1.5
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube-dns"
        - --domain=$DNS_DOMAIN.
        - --dns-port=10053
        - --kube-master-url=$KUBE_APISERVER_URL
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
      - name: dnsmasq
        image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz-amd64:1.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
        - -port=8080
        - -quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: $DNS_SERVER_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

kubectl create -f skydns.yml

Then modify kubelet.service on each node, adding --cluster-dns=172.18.8.8 and --cluster-domain=cluster.local, and restart the kubelet, as in the sketch below.
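
A sketch of the updated unit (keep each node's own --hostname-override; 172.16.203.134 here is just an example):

ExecStart=/opt/bin/kubelet \
  --hostname-override=172.16.203.134 \
  --api-servers=http://172.16.203.133:8080 \
  --cluster-dns=172.18.8.8 \
  --cluster-domain=cluster.local \
  --logtostderr=true

sudo systemctl daemon-reload
sudo systemctl restart kubelet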

Deploy the Dashboard

cat <<'EOF' > kube-dashboard.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
    version: v1.1.0
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://172.16.203.133:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
EOF

kubectl create -f kube-dashboard.yml
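
To verify, check that the Dashboard pod is running and look up the service's cluster IP; the UI answers on port 80 from any cluster host:

kubectl --namespace=kube-system get pods -l app=kubernetes-dashboard
kubectl --namespace=kube-system get svc kubernetes-dashboard
# then browse or curl http://<CLUSTER-IP>/ from any of the three hosts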

