K8S Again

 

I'm back again.

Installing with Binaries

Go to the Kubernetes release page and find the version you need.

Open: https://github.com/kubernetes/kubernetes/releases

Click the link that the CHANGELOG entry points to.

Download the kubernetes-server-linux-amd64.tar.gz file. It contains all the service binaries Kubernetes needs to run.

On each Node you need to deploy the docker / kubelet / kube-proxy service processes.

Copy the binaries to the /usr/bin directory, then create a systemd unit file for each service under /usr/lib/systemd/system.

Common Operations

Do we need passwordless SSH login between the machines?
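If passwordless login is wanted, a minimal sketch looks like the following (user and the hostname are placeholders for your own account and machines):

# Generate a key pair on the machine you operate from (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# Copy the public key to every node you want to reach without a password
ssh-copy-id user@c701.chaos.luxe
# Verify: this should print the remote hostname without asking for a password
ssh user@c701.chaos.luxe hostname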

CA Signing

Kubernetes offers mutual (two-way) digital certificate authentication based on CA signing, as well as simpler authentication based on HTTP Basic or tokens.

The process for generating CA-signed mutual certificates:

  • Generate a certificate for kube-apiserver and sign it with the CA certificate.
  • Configure the certificate-related startup parameters of the kube-apiserver process, including the CA certificate, the server's own signed certificate, and its private key.
  • Generate a certificate for each client that accesses the Kubernetes API Server, sign it with the CA, and point each client's startup parameters at its own certificate.

CA Certificate (using OpenSSL)

Use the OpenSSL tool to create the CA certificate and private key files on the Master server.

sudo mkdir -p /etc/kubernetes/myca
cd /etc/kubernetes/myca
sudo openssl genrsa -out ca.key 2048
## The /CN value in the -subj parameter is the hostname
sudo openssl req -x509 -new -nodes -key ca.key \
  -subj "/CN=c701.chaos.luxe" -days 5000 -out ca.crt
sudo openssl genrsa -out server.key 2048

Create a master_ssl.cnf file for x509 v3 certificates. This file mainly sets the Master server's hostname and IP address, plus the virtual service name and ClusterIP of the Kubernetes Master Service.

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = c701.chaos.luxe
IP.1 = 169.169.0.1
IP.2 = 192.168.56.105

Use master_ssl.cnf to create the server.csr and server.crt files (the /CN value in -subj is the Master hostname).

sudo openssl req \
  -new \
  -key server.key \
  -subj "/CN=c701.chaos.luxe" \
  -config master_ssl.cnf \
  -out server.csr

sudo openssl x509 -req \
  -in server.csr \
  -CA ca.crt \
  -CAkey ca.key \
  -CAcreateserial \
  -days 5000 \
  -extensions v3_req \
  -extfile master_ssl.cnf \
  -out server.crt

Copy these 6 files to the /var/run/kubernetes/ directory.
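Assuming the six files are the ones produced by the commands above (ca.key, ca.crt, server.key, server.csr, server.crt, plus the ca.srl serial file created by -CAcreateserial), the copy might look like this:

sudo mkdir -p /var/run/kubernetes
sudo cp /etc/kubernetes/myca/{ca.key,ca.crt,server.key,server.csr,server.crt,ca.srl} /var/run/kubernetes/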

CA Certificate (using cfssl)

Download the cfssl tools:
wget -c https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -c https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -c https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

sudo cp cfssl_linux-amd64 /usr/local/bin/cfssl
sudo cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
sudo cp cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Overview of cfssl subcommands

  • bundle: create a certificate bundle containing the client certificate
  • genkey: generate a key (private key) and a CSR (certificate signing request)
  • scan: scan a host for problems
  • revoke: revoke a certificate
  • certinfo: print certificate information for a given certificate; same as the cfssl-certinfo tool
  • gencrl: generate a new certificate revocation list
  • selfsign: generate a new self-signed key and signed certificate
  • print-defaults: print default configurations that can be used as templates (see the example after this list)
    • config: generate a configuration template
    • csr: generate a certificate request template
  • serve: start an HTTP API service
  • gencert: generate a new key and signed certificate
    • -ca: the CA certificate
    • -ca-key: the CA private key file
    • -config: the JSON file describing the signing configuration
    • -profile: corresponds to a profile in -config; the certificate is generated according to that profile section
  • ocspdump
  • ocspsign
  • info: get information about a remote signer
  • sign: sign a client certificate using the given CA, CA key, and hostname
  • ocsprefresh
  • ocspserve
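
For reference, the print-defaults subcommand mentioned above can bootstrap the JSON files created in the next step; a quick sketch (the template file names are arbitrary):

# Print a signing-config template (similar in shape to ca-config.json below)
cfssl print-defaults config > config-template.json
# Print a CSR template (similar in shape to ca-csr.json / server-csr.json below)
cfssl print-defaults csr > csr-template.json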

Generating Certificates

Create the ca-config.json file

"server auth" means the client can use this CA to verify the certificate presented by the server; "client auth" means the server can use this CA to verify the certificate presented by the client.

mkdir -p /opt/cfssl.org/
cd /opt/cfssl.org
touch ca-config.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Create the certificate signing request

mkdir -p /opt/cfssl.org/
cd /opt/cfssl.org
touch ca-csr.json
cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Note: the hosts entries below must be adjusted to match your actual environment.
mkdir -p /opt/cfssl.org/
cd /opt/cfssl.org
touch server-csr.json
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.56.104",
        "192.168.56.105",
        "192.168.56.106"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2021/07/24 22:34:29 [INFO] generating a new CA key and certificate from CSR
2021/07/24 22:34:29 [INFO] generate received request
2021/07/24 22:34:29 [INFO] received CSR
2021/07/24 22:34:29 [INFO] generating key: rsa-2048
2021/07/24 22:34:31 [INFO] encoded CSR
2021/07/24 22:34:31 [INFO] signed certificate with serial number 695243864094923366612092315402511611991596203031

The command above generates the CA certificate and private key (producing the three files ca.csr, ca-key.pem, and ca.pem).

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=www server-csr.json | cfssljson -bare server
2021/07/24 22:35:05 [INFO] generate received request
2021/07/24 22:35:05 [INFO] received CSR
2021/07/24 22:35:05 [INFO] generating key: rsa-2048
2021/07/24 22:35:05 [INFO] encoded CSR
2021/07/24 22:35:05 [INFO] signed certificate with serial number 140086734311765183926939971913146357697678502030
2021/07/24 22:35:05 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

The command above generates the server certificate and private key (producing the three files server.csr, server-key.pem, and server.pem).

sudo mkdir -p /opt/etcd/ssl/
sudo cp *.csr /opt/etcd/ssl/
sudo cp *.pem /opt/etcd/ssl/
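Before moving on, the generated certificates can be decoded with cfssl-certinfo to confirm their contents; for example:

# Inspect the CA certificate and the freshly issued server certificate
cfssl-certinfo -cert ca.pem
cfssl-certinfo -cert server.pem
# The SANs listed for server.pem should match the hosts in server-csr.json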

etcd

Official docs: https://etcd.io/docs/v3.4.0/upgrades/upgrade_3_4/

Reference: https://www.cnblogs.com/sanduzxcvbnm/p/11778633.html

The etcd service is the primary datastore of the Kubernetes cluster, so it must be installed and started before the other Kubernetes services.

Download the etcd binaries from GitHub: https://github.com/coreos/etcd/releases. Pick the build for your platform from the Assets section.

Create directories

mkdir -p /tmp/etcd-download-test
sudo mkdir -p /etc/etcd
sudo mkdir -p /var/lib/etcd
sudo touch /etc/etcd/etcd.conf

An example download script

download-etcd.sh

#!/usr/bin/env bash

ETCD_VER=v3.4.14

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GITHUB_URL}

curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o ./etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download-test
tar xzvf ./etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test/ --strip-components=1
cd /tmp/etcd-download-test/
cp etcd /usr/local/bin/etcd
cp etcdctl /usr/local/bin/etcdctl
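
After the script finishes, it is worth confirming that the binaries are on the PATH and report the expected version:

etcd --version
etcdctl version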

etcd service unit file

Create and edit the systemd unit file for etcd.

/usr/lib/systemd/system/etcd.service

sudo -i
cd /usr/lib/systemd/system/
touch etcd.service
cat > /usr/lib/systemd/system/etcd.service <<EOF 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Here WorkingDirectory is the directory where etcd stores its data; it must be created before the etcd service starts.

The /etc/etcd/etcd.conf configuration file usually needs no special settings; by default etcd serves client connections at http://127.0.0.1:2379.

Configuration file

Create and edit the etcd configuration file

## for etcd01
sudo -i
cd /etc/etcd/
touch /etc/etcd/etcd.conf
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.10.241:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.10.241:2379,http://127.0.0.1:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.10.241:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.10.241:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.10.10.241:2380,etcd02=https://10.10.10.242:2380,etcd03=https://10.10.10.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF


## for etcd02

sudo -i
cd /etc/etcd/
touch /etc/etcd/etcd.conf

cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.10.242:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.10.242:2379,http://127.0.0.1:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.10.242:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.10.242:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.10.10.241:2380,etcd02=https://10.10.10.242:2380,etcd03=https://10.10.10.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF

## for etcd03

sudo -i
cd /etc/etcd/
touch /etc/etcd/etcd.conf

cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.10.243:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.10.243:2379,http://127.0.0.1:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.10.243:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.10.243:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.10.10.241:2380,etcd02=https://10.10.10.242:2380,etcd03=https://10.10.10.243:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
EOF

Start and verify

sudo -i

mkdir -p /var/lib/etcd/
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
systemctl status etcd.service
etcdctl endpoint health
etcdctl endpoint health     \
  --endpoints=10.10.10.241:2379,10.10.10.242:2379,10.10.10.243:2379 \
  --cacert="/opt/etcd/ssl/ca.pem"   \
  --cert="/opt/etcd/ssl/server.pem"    \
  --key="/opt/etcd/ssl/server-key.pem"   
10.10.10.241:2379 is healthy: successfully committed proposal: took = 13.057001ms
10.10.10.242:2379 is healthy: successfully committed proposal: took = 13.081093ms
10.10.10.243:2379 is healthy: successfully committed proposal: took = 13.07177ms
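
Beyond endpoint health, member list and endpoint status give a quick overview of the cluster (same certificate flags as above):

etcdctl member list \
  --endpoints=10.10.10.241:2379,10.10.10.242:2379,10.10.10.243:2379 \
  --cacert="/opt/etcd/ssl/ca.pem" \
  --cert="/opt/etcd/ssl/server.pem" \
  --key="/opt/etcd/ssl/server-key.pem"

etcdctl endpoint status --write-out=table \
  --endpoints=10.10.10.241:2379,10.10.10.242:2379,10.10.10.243:2379 \
  --cacert="/opt/etcd/ssl/ca.pem" \
  --cert="/opt/etcd/ssl/server.pem" \
  --key="/opt/etcd/ssl/server-key.pem"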

A note about the .service file:

The Type parameter tells systemd how to determine whether the service process started successfully. With forking, systemd has to track the corresponding MAINPID and only considers the service started once that MAINPID exists, which requires the launched program to follow the convention of writing its pid to a file (or handling it in an equivalent way). With simple, or when Type is omitted, systemd treats it as an ordinary program and assumes success as soon as it is launched, without checking whether the service actually started correctly.

Values of Type

Type defines the startup behavior of the process. It can take the following values.

  • Type=simple: the default; run the command given by ExecStart and treat it as the main process
  • Type=forking: the process forks a child from the parent, and the parent exits immediately after the fork
  • Type=oneshot: a one-off process; systemd waits for it to exit before continuing
  • Type=dbus: the service is started via D-Bus
  • Type=notify: the service notifies systemd once it has finished starting, and only then does systemd continue
  • Type=idle: the service runs only after other queued jobs have finished

Source: https://blog.csdn.net/zhaoenweiex/article/details/113864140
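
To see which Type a running unit actually uses and which PID systemd is tracking, query systemctl show, e.g. for the etcd unit defined above:

systemctl show etcd.service -p Type -p MainPID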

Flannel

Website: https://github.com/coreos/flannel/releases

Reference: https://www.cnblogs.com/skyflask/articles/11357316.html

Error:

Feb 18 19:45:42 c701.chaos.luxe flanneld[27747]: E0218 19:45:42.022253   27747 main.go:349] Couldn't fetch network config: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint.

Cause: flannel v0.11 does not support etcd v3.4.3 (it supports etcd v3.3.10), so etcd had to be downgraded to 3.3.10; a suitable network plugin approach should be found later so etcd can be upgraded again. (2020-02-19)

Flannel stores its own subnet information in etcd, so make sure it can connect to etcd successfully, then write the predefined subnet:

etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.56.105:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.10.10.241:2379,https://10.10.10.242:2379,https://10.10.10.243:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

# Newer versions of etcd appear to use the following command (v3 API)
etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.56.105:2379" \
put /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.10.10.241:2379,https://10.10.10.242:2379,https://10.10.10.243:2379" \
put /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
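
To confirm the key was written, read it back with the matching etcdctl version (the v3 flags assumed above):

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.56.105:2379" \
get /coreos.com/network/config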

Deploying Flannel

Perform the following deployment steps on every planned Node.

Download the binary package:

wget -c --no-check-certificate https://github.com/flannel-io/flannel/releases/download/v0.14.0/flannel-v0.14.0-linux-amd64.tar.gz
tar zxvf flannel-*-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/bin
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

Configure Flannel:

sudo mkdir -p /opt/kubernetes/cfg
cat > /opt/kubernetes/cfg/flanneld <<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.56.105:2379 \\
-etcd-cafile=/opt/etcd/ssl/ca.pem \\
-etcd-certfile=/opt/etcd/ssl/server.pem \\
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

Manage Flannel with systemd:

sudo mkdir -p /run/flannel/
sudo tee /usr/lib/systemd/system/flanneld.service > /dev/null <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

Configure Docker to start with the subnet assigned by Flannel:

cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target

Restart Flannel and Docker

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
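
After the restart, a quick way to check that Flannel handed a subnet to Docker (assuming the vxlan backend and the options file configured above):

# The env file written by mk-docker-opts.sh should contain DOCKER_NETWORK_OPTIONS
cat /run/flannel/subnet.env
# The vxlan backend creates a flannel.1 interface with an address from 172.17.0.0/16
ip addr show flannel.1
# docker0 should now sit inside the per-node subnet assigned by Flannel
ip addr show docker0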

View the Flannel routing information

etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.56.105:2379" \
ls /coreos.com/network/subnets

etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.56.105:2379" \
get /coreos.com/network/subnets --prefix

Master

By this point, make sure Etcd / Flannel / Docker are all working properly.

On the Master you need to deploy the etcd / kube-apiserver / kube-controller-manager / kube-scheduler service processes.

apiserver certificates

Create the CA certificate. The CA is the authority that issues certificates; this step establishes the certification authority.

mkdir -p /opt/cfssl.org/k8s
cd /opt/cfssl.org/k8s
touch ca-config.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Certificate signing request: the material submitted to the certificate authority (CA) when applying for a certificate.

mkdir -p /opt/cfssl.org/k8s
cd /opt/cfssl.org/k8s
touch ca-csr.json
cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Run the command to generate ca.pem (the CA root certificate) and ca-key.pem (the CA private key). Whenever anyone requests a certificate from this CA, it is signed against the ca.pem root certificate using the ca-key.pem private key.

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2021/02/19 00:19:42 [INFO] generating a new CA key and certificate from CSR
2021/02/19 00:19:42 [INFO] generate received request
2021/02/19 00:19:42 [INFO] received CSR
2021/02/19 00:19:42 [INFO] generating key: rsa-2048
2021/02/19 00:19:42 [INFO] encoded CSR
2021/02/19 00:19:42 [INFO] signed certificate with serial number 665987048635310017519838836294576121759273113764

ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Generate the apiserver certificate. Note: plan here for all the Master server IPs that will need to be added in the future.

mkdir -p /opt/cfssl.org/k8s
cd /opt/cfssl.org/k8s
touch server-csr.json
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.56.105",
        "192.168.56.106",
        "192.168.56.107",
        "192.168.56.108",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

2021/02/19 00:24:38 [INFO] generate received request
2021/02/19 00:24:38 [INFO] received CSR
2021/02/19 00:24:38 [INFO] generating key: rsa-2048
2021/02/19 00:24:39 [INFO] encoded CSR
2021/02/19 00:24:39 [INFO] signed certificate with serial number 343522576771337458040622521392622003863616572037
2021/02/19 00:24:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@c701 apiserver-ca]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem

Generate the kube-proxy certificate:

mkdir -p /opt/cfssl.org/k8s
cd /opt/cfssl.org/k8s
touch kube-proxy-csr.json
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.56.105",
        "192.168.56.106",
        "192.168.56.107",
        "192.168.56.108",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
 
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2021/02/19 00:26:57 [INFO] generate received request
2021/02/19 00:26:57 [INFO] received CSR
2021/02/19 00:26:57 [INFO] generating key: rsa-2048
2021/02/19 00:26:57 [INFO] encoded CSR
2021/02/19 00:26:57 [INFO] signed certificate with serial number 65696523522638298482393770232604659265748840706
2021/02/19 00:26:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

ls

ca-config.json  ca-csr.json  ca.pem          kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem   server.csr      server-key.pem


kube-apiserver

Reference: https://www.cnblogs.com/skyflask/articles/11357327.html

Copy the kube-apiserver / kube-controller-manager / kube-scheduler binaries to the /usr/local/bin directory (matching the ExecStart paths used in the unit files below).

sudo cp server/bin/kube-apiserver /usr/local/bin/
sudo cp server/bin/kube-controller-manager /usr/local/bin/
sudo cp server/bin/kube-scheduler /usr/local/bin/

sudo mkdir -p /var/log/kubernetes

sudo mkdir -p /etc/kubernetes
sudo touch /etc/kubernetes/apiserver

sudo touch /usr/lib/systemd/system/kube-apiserver.service

# Create the token file; its purpose is explained later:
# Column 1: a random string (generate your own)
# Column 2: username
# Column 3: UID
# Column 4: user group
sudo mkdir -p /opt/kubernetes/cfg
cat > /opt/kubernetes/cfg/token.csv <<EOF
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,1001,"system:kubelet-bootstrap"
EOF
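
The random string in the first column can be generated however you like; one common sketch is:

# Produce a 32-character hex token for the first column of token.csv
head -c 16 /dev/urandom | od -An -t x | tr -d ' '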

The /etc/kubernetes/apiserver configuration file contains all the startup parameters for kube-apiserver.

# --etcd-servers=https://10.11.97.191:2379,https://10.11.97.192:2379,https://10.11.97.71:2379
# --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
# cat /etc/kubernetes/apiserver
KUBE_API_ARGS="--etcd-servers=http://127.0.0.1:2379 \
--bind-address=192.168.56.105 \
--secure-port=6443 \
--advertise-address=192.168.56.105 \
--allow-privileged=true \
--service-cluster-ip-range=192.168.56.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-issuer=https://192.168.56.105 \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=4"

Make sure the certificates generated earlier are configured and that the apiserver can connect to etcd.
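
The options above reference /opt/kubernetes/ssl/, while the apiserver certificates were generated under /opt/cfssl.org/k8s/; assuming those paths, a minimal copy step would be:

sudo mkdir -p /opt/kubernetes/ssl
sudo cp /opt/cfssl.org/k8s/ca*.pem /opt/kubernetes/ssl/
sudo cp /opt/cfssl.org/k8s/server*.pem /opt/kubernetes/ssl/
sudo cp /opt/cfssl.org/k8s/kube-proxy*.pem /opt/kubernetes/ssl/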

Parameter notes:

  • --logtostderr: enable logging to stderr
  • --v: log level
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS secure port
  • --advertise-address: cluster advertise address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: virtual IP range for Services
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authentication/authorization mode; enables RBAC authorization and Node self-management
  • --enable-bootstrap-token-auth: enable TLS bootstrapping (explained later)
  • --token-auth-file: token file
  • --service-node-port-range: default port range for NodePort Services

Edit the systemd service file for the apiserver.

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
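
A quick check that the apiserver came up and is listening on the secure port configured above (6443):

systemctl status kube-apiserver
sudo ss -lntp | grep 6443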

kube-scheduler

Create the scheduler configuration file.

Parameter notes:

  • --master: connect to the local apiserver
  • --leader-elect: when several instances of this component are started, elect a leader automatically (HA)

mkdir -p /opt/kubernetes/cfg/
touch /opt/kubernetes/cfg/kube-scheduler.cfg
cat > /opt/kubernetes/cfg/kube-scheduler.cfg <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--log-dir=/var/log/kubernetes \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

cat /opt/kubernetes/cfg/kube-scheduler.cfg
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--log-dir=/var/log/kubernetes \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Create the service file

cd /usr/lib/systemd/system/
touch kube-scheduler.service
cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.cfg
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target
EOF

cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.cfg
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

kube-controller-manager

The kube-controller-manager service depends on the kube-apiserver service.

Parameter configuration file:

mkdir -p /opt/kubernetes/cfg/
cd /opt/kubernetes/cfg/
touch kube-controller-manager.cfg
cat > /opt/kubernetes/cfg/kube-controller-manager.cfg <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--log-dir=/var/log/kubernetes \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

cat /opt/kubernetes/cfg/kube-controller-manager.cfg
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--log-dir=/var/log/kubernetes \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"


Service unit file kube-controller-manager.service

cd /usr/lib/systemd/system/
touch kube-controller-manager.service
cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.cfg
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.cfg
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

The /etc/kubernetes/controller-manager configuration file contains all the startup parameters for kube-controller-manager.

KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

The /etc/kubernetes/kubeconfig file configures the connection to the API Server.

apiVersion: v1
kind: Config
users:
- name: client
  user:
clusters:
- name: default
  cluster:
    server: http://192.168.56.105:8080
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default


