Kubernetes installation

Published: 2019-08-19  Category: kubernetes

k8s installation guide

Software versions

  • Operating system
    CentOS 7.5
  • Docker supported versions
    1.12.6
    1.13.1
    17.03.2
  • etcd
    3.3.9
  • Server information

    IP           Service     Hostname
    10.10.1.4    etcd        k8s-etcd-4
    10.10.1.5    etcd        k8s-etcd-5
    10.10.1.6    etcd        k8s-etcd-6
    10.10.0.208  k8s-node    k8s-node-208
    10.10.0.199  k8s-node    k8s-node-199
    10.10.1.7    k8s-master  k8s-master-7
    10.10.1.8    k8s-master  k8s-master-8
    10.10.1.3    vip
  • Installation directory
    /xdfapp/server

Generating certificates

Service          Certificates used
etcd             ca.pem, server.pem, server-key.pem
flannel          ca.pem, server.pem, server-key.pem
kube-apiserver   ca.pem, server.pem, server-key.pem
kubelet          ca.pem, server.pem
kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl          ca.pem, admin.pem, admin-key.pem
  • Download cfssl, the certificate-generation toolkit:
curl -s -L -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /usr/local/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/bin/cfssl*
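Before generating anything, it is worth confirming the three binaries actually landed on the PATH. A minimal sketch; the `check_tools` helper is not part of the original doc, just an illustrative convenience:

```shell
# Illustrative helper: report any of the given tools missing from PATH.
check_tools() {
  missing=0
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || { echo "missing: $t"; missing=1; }
  done
  return $missing
}

if check_tools cfssl cfssljson cfssl-certinfo; then
  echo "cfssl toolchain ready"
fi
```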
  • Generate the certificates

    • Generate the CA config file and pem files
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# "CN": Common Name. kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to verify whether a site is legitimate
# "O": Organization. kube-apiserver extracts this field from the certificate as the group (Group) the requesting user belongs to

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
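The `87600h` expiry used in ca-config.json is simply ten years expressed in hours; a quick sanity check:

```shell
# 87600 hours / 24 hours per day / 365 days per year = 10 years
hours=87600
years=$(( hours / 24 / 365 ))
echo "CA expiry: ${years} years"
```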
  • Generate the server certificate
    • This certificate is shared by etcd and the Kubernetes components; hosts below contains the three etcd node IPs, the k8s-master IPs, and the VIP
    • 10.254.0.1 is the service IP (Service Cluster IP) taken from the kube-apiserver --service-cluster-ip-range option

      The following warning may appear during generation; it can be ignored, as it is a known issue in cfssl 1.2 and does not affect use

      [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements")

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.1.3",
    "10.10.1.4",
    "10.10.1.5",
    "10.10.1.6",
    "10.10.1.7",
    "10.10.1.8",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  • Generate the admin certificate
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  • Generate the kube-proxy certificate
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  • All generated files
# sh certificate.sh
# ls -l
total 80
-rw-r--r-- 1 root root 1009 Aug 8 14:57 admin.csr
-rw-r--r-- 1 root root 229 Aug 8 14:57 admin-csr.json
-rw------- 1 root root 1675 Aug 8 14:57 admin-key.pem
-rw-r--r-- 1 root root 1399 Aug 8 14:57 admin.pem
-rw-r--r-- 1 root root 294 Aug 8 14:57 ca-config.json
-rw-r--r-- 1 root root 1001 Aug 8 14:57 ca.csr
-rw-r--r-- 1 root root 266 Aug 8 14:57 ca-csr.json
-rw------- 1 root root 1679 Aug 8 14:57 ca-key.pem
-rw-r--r-- 1 root root 1359 Aug 8 14:57 ca.pem
-rw-r--r-- 1 root root 678 Aug 8 14:40 ca.sh
-rw-r--r-- 1 root root 2336 Aug 8 14:57 certificate.sh
-rw-r--r-- 1 root root 1009 Aug 8 14:57 kube-proxy.csr
-rw-r--r-- 1 root root 230 Aug 8 14:57 kube-proxy-csr.json
-rw------- 1 root root 1679 Aug 8 14:57 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Aug 8 14:57 kube-proxy.pem
-rw-r--r-- 1 root root 1293 Aug 8 14:57 server.csr
-rw-r--r-- 1 root root 626 Aug 8 14:57 server-csr.json
-rw------- 1 root root 1675 Aug 8 14:57 server-key.pem
-rw-r--r-- 1 root root 1659 Aug 8 14:57 server.pem
  • Use this command to verify that a certificate is usable

    # openssl x509 -noout -text -in server.pem 
    Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number:
    14:16:94:42:23:c5:6b:a0:db:d5:ee:be:a2:5b:e0:3e:5a:18:5b:c9
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: C=CN, ST=Beijing, L=Beijing, O=k8s, OU=System, CN=kubernetes
    Validity
    Not Before: Aug 8 10:22:00 2018 GMT
    Not After : Aug 5 10:22:00 2028 GMT
    Subject: C=CN, ST=BeiJing, L=BeiJing, O=k8s, OU=System, CN=server
    Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
    Public-Key: (2048 bit)
    Modulus:
    00:ca:d0:84:63:d2:b7:6f:f3:cd:64:ec:1d:a6:87:
    54:fb:5d:1b:61:2f:a7:22:1a:1d:68:2e:81:cb:32:
    d5:94:00:5b:61:9e:a9:e8:8b:ed:d9:d9:94:9b:93:
    6c:e0:8b:c8:1d:df:03:35:20:5b:09:5d:a9:4e:8c:
    dc:1b:d6:66:cc:31:b0:ec:21:7c:f0:86:1d:ee:73:
    71:c0:3d:d0:bc:01:07:86:41:9e:ee:d2:57:5e:c8:
    e7:92:ce:30:48:b6:45:75:03:00:aa:a3:38:72:a1:
    26:e5:94:cf:65:15:ab:29:de:01:79:7d:93:f3:e2:
    7e:39:4e:2b:3b:94:df:34:68:89:5b:53:ee:9f:c5:
    0f:42:26:54:be:c6:c6:08:a5:8c:5e:0e:44:94:03:
    77:de:57:45:25:5b:f1:23:1f:ff:87:e7:8e:ae:e5:
    2f:d6:73:00:74:b8:5c:cd:81:3d:e1:55:7f:af:92:
    a0:5f:bf:d7:18:b9:59:04:7b:80:3a:71:4a:ba:5e:
    1e:0c:45:1c:88:4b:2e:91:1f:ed:57:0e:12:5e:54:
    5f:1f:d3:f6:6a:49:ac:11:6d:88:c7:03:75:34:a6:
    e2:1b:9a:d7:9c:ed:8e:0a:f7:82:0d:13:45:49:b7:
    18:7c:f1:73:c4:1d:7e:cd:c2:09:4e:3c:9e:61:8d:
    27:ff
    Exponent: 65537 (0x10001)
    X509v3 extensions:
    X509v3 Key Usage: critical
    Digital Signature, Key Encipherment
    X509v3 Extended Key Usage:
    TLS Web Server Authentication, TLS Web Client Authentication
    X509v3 Basic Constraints: critical
    CA:FALSE
    X509v3 Subject Key Identifier:
    3B:73:D9:D0:D2:BD:A4:C9:C9:91:51:70:C3:48:FC:C2:7A:1D:C4:F4
    X509v3 Authority Key Identifier:
    keyid:93:32:93:7F:D4:A2:A4:64:B6:FD:FA:07:8C:3F:08:2B:FD:7F:F6:F6

    X509v3 Subject Alternative Name:
    DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster, DNS:kubernetes.default.svc.cluster.local, IP Address:127.0.0.1, IP Address:10.10.1.3, IP Address:10.10.1.4, IP Address:10.10.1.5, IP Address:10.10.1.6, IP Address:10.10.1.7, IP Address:10.10.1.8, IP Address:10.254.0.1
    Signature Algorithm: sha256WithRSAEncryption
    00:fa:5c:a9:f5:aa:d8:78:1b:c9:39:a7:57:2e:0f:6f:ce:47:
    4b:00:9c:0a:5b:a5:20:16:6e:01:b4:2e:b6:0f:3a:29:4b:61:
    3c:91:5a:27:65:a0:20:a4:d0:68:7a:9d:f4:78:7e:88:f8:a4:
    b6:30:79:45:5e:62:12:5f:44:20:71:45:de:7d:f9:52:af:b5:
    23:78:54:46:24:7d:36:60:16:f8:bf:d2:b1:dc:d5:a2:1c:5d:
    53:d8:a9:3a:89:44:fc:96:17:84:00:d5:ae:f7:41:e3:ab:72:
    f3:c5:ae:b6:45:de:27:4f:b3:86:77:d0:40:00:12:60:50:46:
    79:f2:97:76:a9:72:07:f3:96:cc:ff:ed:c3:25:bf:e4:9c:92:
    97:da:49:81:9f:c7:a2:df:b3:4d:e2:0e:3a:50:3f:bb:ae:0b:
    b1:b1:94:a3:6e:c5:58:26:43:9c:df:91:4b:a7:56:a1:6b:28:
    fc:2d:d7:2d:22:1c:23:31:d0:ad:f6:e0:fe:04:d6:c9:e4:cc:
    f1:4d:5b:07:7c:69:7c:ae:16:4f:75:6a:00:62:64:89:87:28:
    1f:ae:76:16:9d:72:2c:f3:36:89:31:2d:90:10:de:97:f8:ab:
    03:69:61:15:05:55:8b:f4:0f:ba:58:cb:1a:6d:af:0e:56:33:
    0b:84:96:c1

etcd installation

etcd version 3.3.9 is used.

  • Download etcd

    # cd /xdfapp/server
    # wget https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
  • Pre-deployment preparation
    Either open the required firewall ports or disable the firewall

    // Open the firewall ports
    # firewall-cmd --permanent --zone=public --add-port=2379/tcp --add-port=2380/tcp
    # firewall-cmd --reload
    # firewall-cmd --list-ports
    // Or disable the firewall entirely
    # systemctl stop firewalld && systemctl disable firewalld
  • Deploy etcd

    # tar zxvf etcd-v3.3.9-linux-amd64.tar.gz
    # mv etcd-v3.3.9-linux-amd64 etcd-v3.3.9
    # echo "export PATH=$PATH:/xdfapp/server/etcd-v3.3.9/" >/etc/profile.d/etcd.sh
    // Copy the previously generated SSL certificates to each etcd host
    # mkdir /xdfapp/server/etcd-v3.3.9/ssl/
    # scp 10.10.1.8:/xdfapp/server/ssl/{ca.pem,server.pem,server-key.pem} /xdfapp/server/etcd-v3.3.9/ssl/
  • Edit the configuration file
    Check the two variables below and adjust the IP addresses in the config

    # Verify that these two variables resolve to the correct values
    export ETCD_NAME=`hostname`
    export INTERNAL_IP=`ip a | grep 'inet '| awk '$2!="127.0.0.1/8" {print $2}' |sed 's/\/.*//g'`

    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=etcd server
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    WorkingDirectory=/xdfapp/server/etcd-v3.3.9/
    EnvironmentFile=-/xdfapp/server/etcd-v3.3.9/etcd.conf
    ExecStart=/xdfapp/server/etcd-v3.3.9/etcd \\
    --name ${ETCD_NAME} \\
    --cert-file=/xdfapp/server/etcd-v3.3.9/ssl/server.pem \\
    --key-file=/xdfapp/server/etcd-v3.3.9/ssl/server-key.pem \\
    --peer-cert-file=/xdfapp/server/etcd-v3.3.9/ssl/server.pem \\
    --peer-key-file=/xdfapp/server/etcd-v3.3.9/ssl/server-key.pem \\
    --trusted-ca-file=/xdfapp/server/etcd-v3.3.9/ssl/ca.pem \\
    --peer-trusted-ca-file=/xdfapp/server/etcd-v3.3.9/ssl/ca.pem \\
    --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
    --listen-peer-urls https://${INTERNAL_IP}:2380 \\
    --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
    --advertise-client-urls https://${INTERNAL_IP}:2379 \\
    --initial-cluster-token etcd-cluster-1 \\
    --initial-cluster k8s-etcd-4=https://10.10.1.4:2380,k8s-etcd-5=https://10.10.1.5:2380,k8s-etcd-6=https://10.10.1.6:2380 \\
    --initial-cluster-state new \\
    --data-dir=/xdfapp/data/etcd
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF
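The INTERNAL_IP pipeline above keeps every non-loopback `inet` address, so on a host with several addresses it can return more than one line. A self-contained sketch running the same grep/awk/sed chain against captured `ip a` output (the sample text below is illustrative, not taken from the original hosts):

```shell
# Illustrative `ip a` output with a loopback and one real interface.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.10.1.4/24 brd 10.10.1.255 scope global eth0'

# Same pipeline as above: keep every inet address except 127.0.0.1/8,
# then strip the prefix length.
INTERNAL_IP=$(echo "$sample" | grep 'inet ' | awk '$2!="127.0.0.1/8" {print $2}' | sed 's/\/.*//g')
echo "INTERNAL_IP=$INTERNAL_IP"
# With multiple non-loopback addresses this prints several lines;
# verify it yields exactly one IP on your hosts before starting etcd.
```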
  • Start the service and check cluster health

    # systemctl daemon-reload
    # systemctl start etcd

    // Start etcd on all three nodes at roughly the same time; otherwise a node may
    // time out connecting to its peers and fail to start. Check cluster health from any node:

    # cd /xdfapp/server/etcd-v3.3.9/ssl
    # etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.4:2379,https://10.10.1.5:2379,https://10.10.1.6:2379" cluster-health
    member 81d618bc181f9ba4 is healthy: got healthy result from https://10.10.1.6:2379
    member bf48ffa34667551a is healthy: got healthy result from https://10.10.1.4:2379
    member e108820667fcd816 is healthy: got healthy result from https://10.10.1.5:2379
    cluster is healthy

    // Show the current leader

    # etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.4:2379,https://10.10.1.5:2379,https://10.10.1.6:2379" member list
    81d618bc181f9ba4: name=k8s-etcd-6 peerURLs=https://10.10.1.6:2380 clientURLs=https://10.10.1.6:2379 isLeader=false
    bf48ffa34667551a: name=k8s-etcd-4 peerURLs=https://10.10.1.4:2380 clientURLs=https://10.10.1.4:2379 isLeader=true
    e108820667fcd816: name=k8s-etcd-5 peerURLs=https://10.10.1.5:2380 clientURLs=https://10.10.1.5:2379 isLeader=false

    // Write a key, then read it from another node to verify

    # etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.4:2379" set testdir/key0 0
    0
    # etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.4:2379" ls /
    /testdir
    # etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.6:2379" ls /testdir
    /testdir/key0

This completes the etcd deployment; adding and removing nodes will be covered in follow-up documents.


Docker installation

1 Disk partitioning

  • Docker currently recommends the overlay2 storage driver, but the default xfs format on CentOS 7.x needs an adjustment; without it the following problems appear:

    • During docker build, rm -rf fails claiming the directory is not empty. Entering a container with docker run xxx /bin/bash and deleting a file with rm, something is clearly wrong: after rm, ls -l still lists the file with all attributes shown as ????, so rm cannot remove it, and commands such as cat report that the file does not exist.
    • docker info prints a warning that the xfs filesystem needs d_type support
  • Solution:
    Pass the -n ftype=1 option when formatting the xfs filesystem

    # mkfs.xfs -n ftype=1 /dev/sdb1
  • Mount the partition

    # blkid /dev/sdb1  // get the device UUID to add to fstab
    /dev/sdb1: UUID="43c2494c-4800-4b23-85ec-29224bbf07de" TYPE="xfs" PARTLABEL="primary" PARTUUID="6496fe8c-e803-485d-8679-372d62a833b0"
    # grep '43c2494c-4800-4b23-85ec-29224bbf07de' /etc/fstab  // add the following entry to fstab
    UUID="43c2494c-4800-4b23-85ec-29224bbf07de" /xdfapp xfs defaults 0 0
    # mount -a  // mount everything in fstab

2 Installing Docker

  • I installed Docker v1.13.1 directly via yum:

    # yum -y install docker
  • To change Docker's default storage location, modify the following:

    # cat /etc/sysconfig/docker-storage | grep -v '^#'
    DOCKER_STORAGE_OPTIONS=--graph /xdfapp/data/docker
  • Start Docker

    # systemctl enable docker
    # systemctl start docker

flannel installation

  • Download the binary package

    mkdir /xdfapp/server/flanneld/{config,bin} -p  && cd /xdfapp/server/flanneld
    wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
    tar xf flannel-v0.10.0-linux-amd64.tar.gz -C bin/
    rm -f flannel-v0.10.0-linux-amd64.tar.gz
  • Write the subnet allocation into etcd for flanneld to use (run this on an etcd node)

    etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.10.1.4:2379,https://10.10.1.5:2379,https://10.10.1.6:2379" set /coreos.com/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "host-gw"}}'

flanneld's host-gw mode connects the flannel subnets by installing, on every host, direct routes to the other hosts' subnets. This avoids the overhead of encapsulation schemes such as VXLAN: flannel packets are forwarded between hosts purely through the routing table. The trade-offs are that every pair of nodes needs a point-to-point route between them, and that all hosts joining the flannel network must be on the same LAN. Each node therefore carries n-1 routes, and an n-node cluster maintains n(n-1)/2 routes in total to keep the flannel network flat.
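The n-1 per-node and n(n-1)/2 total route counts for host-gw mode can be checked directly:

```shell
# host-gw route counts for a few cluster sizes:
# each node holds n-1 routes; there are n(n-1)/2 unique node pairs overall.
for n in 3 5 10; do
  per_node=$(( n - 1 ))
  pairs=$(( n * (n - 1) / 2 ))
  echo "n=${n}: ${per_node} routes per node, ${pairs} node pairs"
done
```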

  • flanneld wrapper script

    vim flanneld.sh
    #!/bin/bash

    ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

    cat <<EOF >/xdfapp/server/flanneld/config/flanneld

    FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
    -etcd-cafile=/xdfapp/server/kubernetes/ssl/ca.pem \
    -etcd-certfile=/xdfapp/server/kubernetes/ssl/server.pem \
    -etcd-keyfile=/xdfapp/server/kubernetes/ssl/server-key.pem"

    EOF
  • Create the flanneld systemd unit

    cat <<EOF >/usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service

    [Service]
    Type=notify
    EnvironmentFile=/xdfapp/server/flanneld/config/flanneld
    ExecStart=/xdfapp/server/flanneld/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
    ExecStartPost=/xdfapp/server/flanneld/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    EOF
  • Generate the flanneld config

    /xdfapp/server/flanneld/bin/flanneld.sh  https://10.10.1.4:2379,https://10.10.1.5:2379,https://10.10.1.6:2379
  • Modify the Docker configuration

    # I added the setting on line 15 below, i.e. the file flanneld generates at startup:
    # "EnvironmentFile=/run/flannel/subnet.env". Also make sure the $DOCKER_NETWORK_OPTIONS variable appears in the unit file.
    # On line 22, cgroupdriver must be changed to cgroupfs, otherwise kubelet will fail to start.
    # cat -n /usr/lib/systemd/system/docker.service
    1 [Unit]
    2 Description=Docker Application Container Engine
    3 Documentation=http://docs.docker.com
    4 After=network.target rhel-push-plugin.socket registries.service
    5 Wants=docker-storage-setup.service
    6 Requires=docker-cleanup.timer
    7
    8 [Service]
    9 Type=notify
    10 NotifyAccess=all
    11 EnvironmentFile=-/run/containers/registries.conf
    12 EnvironmentFile=-/etc/sysconfig/docker
    13 EnvironmentFile=-/etc/sysconfig/docker-storage
    14 EnvironmentFile=-/etc/sysconfig/docker-network
    15 EnvironmentFile=/run/flannel/subnet.env
    16 Environment=GOTRACEBACK=crash
    17 Environment=DOCKER_HTTP_HOST_COMPAT=1
    18 Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
    19 ExecStart=/usr/bin/dockerd-current \
    20 --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
    21 --default-runtime=docker-runc \
    22 --exec-opt native.cgroupdriver=cgroupfs \
    23 --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
    24 --init-path=/usr/libexec/docker/docker-init-current \
    25 --seccomp-profile=/etc/docker/seccomp.json \
    26 $OPTIONS \
    27 $DOCKER_STORAGE_OPTIONS \
    28 $DOCKER_NETWORK_OPTIONS \
    29 $ADD_REGISTRY \
    30 $BLOCK_REGISTRY \
    31 $INSECURE_REGISTRY \
    32 $REGISTRIES
    33 ExecReload=/bin/kill -s HUP $MAINPID
    34 LimitNOFILE=1048576
    35 LimitNPROC=1048576
    36 LimitCORE=infinity
    37 TimeoutStartSec=0
    38 Restart=on-abnormal
    39 KillMode=process
    40
    41 [Install]
    42 WantedBy=multi-user.target
  • Start flanneld and restart Docker

    systemctl daemon-reload
    systemctl restart flanneld docker
  • Verify the service is working

    Run a container and check that its IP falls in the flannel subnet
    # docker run --rm centos:0.1 ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.254.14.2 netmask 255.255.255.0 broadcast 0.0.0.0
    inet6 fe80::42:aff:fefe:e02 prefixlen 64 scopeid 0x20<link>
    ether 02:42:0a:fe:0e:02 txqueuelen 0 (Ethernet)
    RX packets 1 bytes 90 (90.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 1 bytes 90 (90.0 B)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

Kubernetes master node installation

  • Software and environment preparation

    Software                  Version   Download
    kube-apiserver            1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    kube-controller-manager   1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    kubectl                   1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    kube-scheduler            1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    keepalived                v1.3.5    installed via yum
# Disable selinux: setenforce 0 applies immediately,
# editing /etc/sysconfig/selinux makes it permanent
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
# Configure forwarding-related parameters, otherwise errors may occur later
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
# Since Kubernetes v1.8, system swap must be disabled (or kubelet parameters adjusted to tolerate it)
swapoff -a && sysctl -w vm.swappiness=0
# Disable firewalld
systemctl disable firewalld
systemctl stop firewalld
  • k8s installation

    • All of the following steps must be run on both master nodes
    cd /xdfapp/server
    tar zxf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes && rm -rf addons kubernetes-src.tar.gz LICENSES
    mkdir {bin,config,ssl}
    mv server/bin/{kube-apiserver,kube-controller-manager,kubectl,kube-scheduler} bin/
    rm -rf server
    echo "export PATH=$PATH:/xdfapp/server/kubernetes/bin" > /etc/profile.d/k8s.sh
    source /etc/profile
  • scp the previously generated certificates into the ssl directory

    scp 10.10.1.4:/xdfapp/server/ssl/{ca*pem,server*pem} /xdfapp/server/kubernetes/ssl/

    ls /xdfapp/server/kubernetes/ssl/
    ca-key.pem ca.pem server-key.pem server.pem
  • Generate the token

    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
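The token written here is read back later when building bootstrap.kubeconfig (via `awk -F','`). A sketch of that round trip using a throwaway temp directory; the temp paths are illustrative, not the paths used in this guide:

```shell
# Generate a token the same way as above and write a token.csv into a temp dir.
tmpdir=$(mktemp -d)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > "$tmpdir/token.csv" <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Read the first field back, exactly as the later kubeconfig step does.
parsed=$(awk -F',' '{print $1}' "$tmpdir/token.csv")
[ "$parsed" = "$BOOTSTRAP_TOKEN" ] && echo "token round-trips cleanly"
rm -rf "$tmpdir"
```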
  • Generate the kube-apiserver configuration

    • Adjust the environment variables for your setup
    • INTERNAL_IP is set to 0.0.0.0 to allow keepalived failover later
    export INTERNAL_IP=0.0.0.0
    export ETCD_CLUSTER=https://10.10.1.4:2379,https://10.10.1.5:2379,https://10.10.1.6:2379
    export CA_FILE=/xdfapp/server/kubernetes/ssl/ca.pem
    export CA_KEY_FILE=/xdfapp/server/kubernetes/ssl/ca-key.pem
    export CERT_FILE=/xdfapp/server/kubernetes/ssl/server.pem
    export CERT_KEY_FILE=/xdfapp/server/kubernetes/ssl/server-key.pem

    cat <<EOF >/xdfapp/server/kubernetes/config/kube-apiserver

    KUBE_APISERVER_OPTS="--logtostderr=true \\
    --v=4 \\
    --etcd-servers=${ETCD_CLUSTER} \\
    --insecure-bind-address=${INTERNAL_IP} \\
    --bind-address=${INTERNAL_IP} \\
    --insecure-port=8080 \\
    --secure-port=6443 \\
    --advertise-address=${INTERNAL_IP} \\
    --allow-privileged=true \\
    --service-cluster-ip-range=10.254.0.0/16 \\
    --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
    --authorization-mode=RBAC,Node \\
    --kubelet-https=true \\
    --enable-bootstrap-token-auth \\
    --token-auth-file=/xdfapp/server/kubernetes/config/token.csv \\
    --service-node-port-range=30000-50000 \\
    --tls-cert-file=${CERT_FILE} \\
    --tls-private-key-file=${CERT_KEY_FILE} \\
    --client-ca-file=${CA_FILE} \\
    --service-account-key-file=${CA_KEY_FILE} \\
    --etcd-cafile=${CA_FILE} \\
    --etcd-certfile=${CERT_FILE} \\
    --etcd-keyfile=${CERT_KEY_FILE}"

    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/xdfapp/server/kubernetes/config/kube-apiserver
    ExecStart=/xdfapp/server/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
  • Generate the kube-controller-manager configuration

    • MASTER_ADDRESS must point at the VIP
    MASTER_ADDRESS=10.10.1.3
    export CA_FILE=/xdfapp/server/kubernetes/ssl/ca.pem
    export CA_KEY_FILE=/xdfapp/server/kubernetes/ssl/ca-key.pem
    export CERT_FILE=/xdfapp/server/kubernetes/ssl/server.pem
    export CERT_KEY_FILE=/xdfapp/server/kubernetes/ssl/server-key.pem
    cat <<EOF >/xdfapp/server/kubernetes/config/kube-controller-manager

    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
    --v=4 \\
    --master=${MASTER_ADDRESS}:8080 \\
    --leader-elect=true \\
    --address=127.0.0.1 \\
    --service-cluster-ip-range=10.254.0.0/16 \\
    --cluster-name=kubernetes \\
    --cluster-signing-cert-file=${CA_FILE} \\
    --cluster-signing-key-file=${CA_KEY_FILE} \\
    --service-account-private-key-file=${CA_KEY_FILE} \\
    --root-ca-file=${CA_FILE}"
    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/xdfapp/server/kubernetes/config/kube-controller-manager
    ExecStart=/xdfapp/server/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager
  • Generate the kube-scheduler configuration

    • MASTER_ADDRESS must point at the VIP
    MASTER_ADDRESS=10.10.1.3

    cat <<EOF >/xdfapp/server/kubernetes/config/kube-scheduler

    KUBE_SCHEDULER_OPTS="--logtostderr=true \\
    --v=4 \\
    --master=${MASTER_ADDRESS}:8080 \\
    --leader-elect"

    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/xdfapp/server/kubernetes/config/kube-scheduler
    ExecStart=/xdfapp/server/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler
  • Install keepalived

    yum -y install keepalived
  • Modify the keepalived configuration file

    • I set state to BACKUP on both nodes (non-preemptive mode)
    • On the other master, remember to adjust the IP addresses accordingly
    ! Configuration File for keepalived
    global_defs {
        notification_email {
            wangguangyuan@okay.cn
        }

        notification_email_from root@kubernetes1.yp14.cn
        smtp_server 192.168.30.1
        smtp_connect_timeout 30
        router_id Master1
    }

    vrrp_script check_k8s {
        script "/xdfapp/scripts/chk_k8s_master.sh"
        interval 3
        weight 5
    }

    vrrp_instance VI_1 {
        # Use unicast; the default is multicast
        unicast_src_ip 10.10.1.7
        unicast_peer {
            10.10.1.8
        }
        state BACKUP
        interface eth0
        virtual_router_id 88
        priority 100
        advert_int 1
        mcast_src_ip 10.10.1.7

        authentication {
            auth_type PASS
            auth_pass 1111
        }

        virtual_ipaddress {
            10.10.1.3 dev eth0
        }

        track_script {
            check_k8s
        }
    }
  • The chk_k8s_master.sh script

    cat <<'EOF' >/xdfapp/scripts/chk_k8s_master.sh
    #!/bin/bash
    # k8s master health check: stop keepalived if any component is not listening
    echo "`date` keepalived fail" >/tmp/keepalived
    netstat -ntpl | grep kube-apiserve >/dev/null
    [ $? -ne 0 ] && service keepalived stop && exit 1
    netstat -ntpl | grep kube-controll >/dev/null
    [ $? -ne 0 ] && service keepalived stop && exit 1
    netstat -ntpl | grep kube-schedule >/dev/null
    [ $? -ne 0 ] && service keepalived stop && exit 1
    echo "`date` keepalived success" >/tmp/keepalived
    EOF

    chmod +x /xdfapp/scripts/chk_k8s_master.sh
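The script's core logic, stopping keepalived the moment any master component stops listening, can be exercised without a live cluster by substituting a fake process list for netstat. The `fake_netstat` function and its sample output below are purely illustrative:

```shell
# Illustrative stand-in for `netstat -ntpl` output on a healthy master.
fake_netstat() {
  printf '%s\n' \
    'tcp 0 0 0.0.0.0:6443    0.0.0.0:* LISTEN 1001/kube-apiserve' \
    'tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 1002/kube-controll' \
    'tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 1003/kube-schedule'
}

# Same per-component check the keepalived script performs.
k8s_healthy() {
  for svc in kube-apiserve kube-controll kube-schedule; do
    fake_netstat | grep -q "$svc" || return 1
  done
  return 0
}

if k8s_healthy; then echo "all master components listening"; fi
```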
  • Start keepalived

    systemctl start keepalived
  • Verify

    • Check that the interface now carries the VIP 10.10.1.3
    # ip a 
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 6a:30:09:74:57:6f brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.7/24 brd 10.10.1.255 scope global noprefixroute eth0
    valid_lft forever preferred_lft forever
    inet 10.10.1.3/32 scope global eth0
    valid_lft forever preferred_lft forever
    inet6 fe80::44b8:eb3:355b:b82/64 scope link noprefixroute
    valid_lft forever preferred_lft forever
    • Configure kubectl and check the cluster status
      I use plain HTTP here
    # First configure kubectl
    # Define a cluster named default
    kubectl config set-cluster default --server=http://10.10.1.3:8080
    # Define an admin user
    kubectl config set-credentials admin
    # Define a context named default that uses the default cluster and the admin user
    kubectl config set-context default --cluster=default --user=admin
    # Make default the active context
    kubectl config use-context default
    • Check the cluster status
    # kubectl cluster-info
    Kubernetes master is running at http://10.10.1.3:8080

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

    # kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health":"true"}
    etcd-1               Healthy   {"health":"true"}
    etcd-2               Healthy   {"health":"true"}
  • Create the kubeconfig files for the node hosts

    BOOTSTRAP_TOKEN=`cat /xdfapp/server/kubernetes/config/token.csv  | awk -F',' '{print $1}'`
    # Create the kubelet bootstrapping kubeconfig
    export KUBE_APISERVER="https://10.10.1.3:6443"

    # Set cluster parameters
    kubectl config set-cluster kubernetes \
    --certificate-authority=./ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=bootstrap.kubeconfig

    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=bootstrap.kubeconfig

    # Set context parameters
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig

    # Set the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

    #----------------------

    # Create the kube-proxy kubeconfig file

    kubectl config set-cluster kubernetes \
    --certificate-authority=./ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config set-credentials kube-proxy \
    --client-certificate=./kube-proxy.pem \
    --client-key=./kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    # Sync the files to the node hosts
    scp *kubeconfig 10.10.0.199:/xdfapp/server/kubernetes/config/
    scp *kubeconfig 10.10.0.208:/xdfapp/server/kubernetes/config/

Kubernetes node installation

  • Package preparation

    Software     Version   Download                                                                                        Notes
    kubelet      1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    kube-proxy   1.10.5    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md
    docker       1.13      installed via yum                                                                               see the Docker section above
    flanneld     0.10      https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz  see the flannel section above
  • System preparation

    sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
    setenforce 0
    # Configure forwarding-related kernel parameters; without them, errors may occur later
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    vm.swappiness=0
    EOF
    sysctl --system
    # Since Kubernetes v1.8, system swap must be disabled; otherwise adjust the kubelet flags (e.g. --fail-swap-on)
    swapoff -a && sysctl -w vm.swappiness=0
    # Stop and disable firewalld
    systemctl disable firewalld
    systemctl stop firewalld
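As a quick sanity check, the sysctl fragment above can be written to a throwaway path and inspected before installing it system-wide (a sketch; the real target is /etc/sysctl.d/k8s.conf):

```shell
# Write the same fragment to a temp file and confirm both bridge-nf-call keys landed;
# kube-proxy's iptables rules only see bridged traffic when these are set to 1
conf=$(mktemp)
cat <<EOF > "$conf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
grep -c 'bridge-nf-call' "$conf"   # → 2
```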
  • kubelet installation

    cd /xdfapp/server && mkdir -p kubernetes/{bin,config,ssl}
    tar zxvf kubernetes-node-linux-amd64.tar.gz
    cd kubernetes
    mv node/bin/{kubelet,kube-proxy} bin/
    rm -rf kubernetes-src.tar.gz LICENSES client node server addons
    echo 'export PATH=$PATH:/xdfapp/server/kubernetes/bin' > /etc/profile.d/k8s.sh
    source /etc/profile
  • Edit the kubelet configuration file
    The DNS_SERVER_IP variable below is the CoreDNS service IP; CoreDNS will be deployed later and assigned this IP

    export NODE_ADDRESS=10.10.0.208
    export DNS_SERVER_IP=10.254.0.2

    cat <<EOF >/xdfapp/server/kubernetes/config/kubelet

    KUBELET_OPTS="--logtostderr=true \\
    --v=4 \\
    --address=${NODE_ADDRESS} \\
    --hostname-override=${NODE_ADDRESS} \\
    --kubeconfig=/xdfapp/server/kubernetes/config/kubelet.kubeconfig \\
    --experimental-bootstrap-kubeconfig=/xdfapp/server/kubernetes/config/bootstrap.kubeconfig \\
    --cert-dir=/xdfapp/server/kubernetes/ssl \\
    --runtime-cgroups=/systemd/system.slice \\
    --kubelet-cgroups=/systemd/system.slice \\
    --allow-privileged=true \\
    --cluster-dns=${DNS_SERVER_IP} \\
    --cluster-domain=cluster.local \\
    --fail-swap-on=false \\
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

    EOF

    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    EnvironmentFile=-/xdfapp/server/kubernetes/config/kubelet
    ExecStart=/xdfapp/server/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
  • kube-proxy configuration
    Remember to adjust the environment variables

    export NODE_ADDRESS=10.10.0.208
    cat <<EOF >/xdfapp/server/kubernetes/config/kube-proxy

    KUBE_PROXY_OPTS="--logtostderr=true \\
    --v=4 \\
    --hostname-override=${NODE_ADDRESS} \\
    --kubeconfig=/xdfapp/server/kubernetes/config/kube-proxy.kubeconfig"

    EOF

    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target

    [Service]
    EnvironmentFile=-/xdfapp/server/kubernetes/config/kube-proxy
    ExecStart=/xdfapp/server/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
  • Approve the CSRs on the master node

    # kubectl get csr 
    NAME AGE REQUESTOR CONDITION
    node-csr-0Hs-_158F5VFcFTFKcYqdGzcQnPwSqs4ob5oj6E-yTQ 8m kubelet-bootstrap Pending
    node-csr-pT0sKH6PCmLtBortBeu4-J3HA_1Fp-y07hYUKhhXFFU 41m kubelet-bootstrap Pending
    # Approve the request
    # kubectl certificate approve node-csr-0Hs-_158F5VFcFTFKcYqdGzcQnPwSqs4ob5oj6E-yTQ
    certificatesigningrequest.certificates.k8s.io "node-csr-0Hs-_158F5VFcFTFKcYqdGzcQnPwSqs4ob5oj6E-yTQ" approved

    # kubectl get csr
    NAME AGE REQUESTOR CONDITION
    node-csr-0Hs-_158F5VFcFTFKcYqdGzcQnPwSqs4ob5oj6E-yTQ 9m kubelet-bootstrap Approved,Issued
    node-csr-pT0sKH6PCmLtBortBeu4-J3HA_1Fp-y07hYUKhhXFFU 42m kubelet-bootstrap Pending
    # After both requests are approved, list the nodes
    kubectl get node
    NAME STATUS ROLES AGE VERSION
    10.10.0.199 Ready <none> 6m v1.10.5
    10.10.0.208 Ready <none> 19m v1.10.5
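Approving CSRs one by one gets tedious with more nodes; the Pending names can be selected from the listing with awk and piped to kubectl certificate approve. A sketch against the sample output above (simulated input here; in real use, pipe in `kubectl get csr --no-headers`):

```shell
# Simulated `kubectl get csr --no-headers` output, copied from the listing above
csr_list='node-csr-0Hs-_158F5VFcFTFKcYqdGzcQnPwSqs4ob5oj6E-yTQ 8m kubelet-bootstrap Pending
node-csr-pT0sKH6PCmLtBortBeu4-J3HA_1Fp-y07hYUKhhXFFU 41m kubelet-bootstrap Pending'
# Print only names whose CONDITION column (field 4) is Pending;
# in real use, append: | xargs -r kubectl certificate approve
echo "$csr_list" | awk '$4 == "Pending" {print $1}'
```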

k8s add-on deployment

  • CoreDNS deployment
    CoreDNS was built by Miek Gieben, the author of SkyDNS2, on a more modular and extensible framework, as a replacement for kube-dns. We will use CoreDNS as the DNS resolution service for the k8s cluster.

    • Prepare the YAML manifests
    1. coredns-sa.yaml: create the ServiceAccount
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    2. coredns-rbac.yaml: create the RBAC ClusterRole and ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    3. coredns-configmap.yaml: define the Corefile configuration
      The proxy directive forwards upstream queries to the resolvers inherited from the node's /etc/resolv.conf
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            log
            health
            kubernetes cluster.local 10.254.0.0/16
            proxy . /etc/resolv.conf
            cache 30
        }
    4. coredns-deployment.yaml: define the pod template for the Deployment
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: coredns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: coredns
      template:
        metadata:
          labels:
            k8s-app: coredns
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ''
            scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
        spec:
          serviceAccountName: coredns
          containers:
          - name: coredns
            image: coredns/coredns:latest
            imagePullPolicy: Always
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
          dnsPolicy: Default
          volumes:
          - name: config-volume
            configMap:
              name: coredns
              items:
              - key: Corefile
                path: Corefile
    5. coredns-service.yaml: define the Service; remember to adjust the clusterIP
    apiVersion: v1
    kind: Service
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: coredns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: coredns
      clusterIP: 10.254.0.2
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
  • Create the CoreDNS service from the YAML manifests

    cd /xdfapp/server/yaml
    kubectl create -f .
  • Check the result

    # kubectl get pod,svc,deployment,rc -n kube-system 
    NAME READY STATUS RESTARTS AGE
    pod/coredns-66c9f6f9f7-dbjpg 1/1 Running 0 45m

    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    service/coredns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 42m

    NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
    deployment.extensions/coredns 1 1 1 1 45m
  • Start a container to verify DNS availability

    kubectl run -i --tty busybox --image=busybox /bin/sh
    # Check the DNS IP the container uses
    cat /etc/resolv.conf
    nameserver 10.254.0.2
    # nslookup www.baidu.com
    Server: 10.254.0.2
    Address: 10.254.0.2:53

    Non-authoritative answer:
    www.baidu.com canonical name = www.a.shifen.com
    Name: www.a.shifen.com
    Address: 61.135.169.121
    Name: www.a.shifen.com
    Address: 61.135.169.125
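The in-pod check above amounts to confirming that the pod's resolv.conf points at the CoreDNS Service clusterIP; that comparison can be scripted (the file is simulated here — inside a pod you would read /etc/resolv.conf directly):

```shell
# Simulated pod resolv.conf (assumption: mirrors what kubelet writes from --cluster-dns)
resolv=$(mktemp)
printf 'nameserver 10.254.0.2\nsearch default.svc.cluster.local svc.cluster.local cluster.local\n' > "$resolv"
# Extract the first nameserver; it should equal the clusterIP from coredns-service.yaml
ns=$(awk '/^nameserver/ {print $2; exit}' "$resolv")
echo "$ns"   # → 10.254.0.2
```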

Verify the k8s cluster

  • Create an nginx service

    kubectl run test --image=nginx --replicas=2 --port=888
  • Check the running status

    # kubectl get deploy
    NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
    test 2 2 2 2 3m
    # kubectl get pods
    NAME READY STATUS RESTARTS AGE
    test-bb68894dd-qjxtq 1/1 Running 0 3m
    test-bb68894dd-wz9nm 1/1 Running 0 3m
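The status check above can also be scripted by comparing the DESIRED and AVAILABLE columns (simulated input here; in real use, pipe in `kubectl get deploy --no-headers`):

```shell
# Simulated `kubectl get deploy --no-headers` output, copied from the listing above
deploy_list='test 2 2 2 2 3m'
# Field 2 is DESIRED, field 5 is AVAILABLE; flag deployments that are not fully up
echo "$deploy_list" | awk '$5 < $2 {print $1 " not fully available"}'
echo "$deploy_list" | awk '$5 == $2 {print $1 " OK"}'   # → test OK
```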
Author: WGY
Original link: http://geeklive.cn/2019/08/19/install-k8s/undefined/install-k8s/
Copyright: Unless otherwise noted, all articles on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting!