Deploying a Redis Cluster on K8S

1. Creating a Redis Cluster with the redis-cluster StatefulSet

1.1 Creating the Redis Cluster

The Redis cluster can be created on K8S from the following resource manifests:

apiVersion: v1
kind: Namespace
metadata:
  name: redis
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: redis
data:
  redis.conf: |
    port 6379
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-node-timeout 5000
    appendonly yes
    requirepass root
    masterauth root
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  namespace: redis
spec:
  replicas: 6
  selector:
    matchLabels:
      app: redis
  serviceName: redis-cluster
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: harbor.wanna1314y.top/redis/redis:7.4.2
        command: ["redis-server", "/redis-conf/redis.conf"]
        volumeMounts:
        - name: redis-config
          mountPath: /redis-conf/redis.conf
          subPath: redis.conf
        - name: redis-data
          mountPath: /data
        ports:
        - containerPort: 6379
        resources:
          requests:
            memory: "256Mi"
            cpu: "120m"
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          periodSeconds: 10
      volumes:
      - name: redis-config
        configMap:
          name: redis-config
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: k8s-nfs-storage
      resources:
        requests:
          storage: 10Gi
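
Note that the StatefulSet sets serviceName: redis-cluster, but the manifest above does not include that Service. A headless Service is what gives each Pod its stable DNS name; a minimal sketch (an assumption, reusing the app: redis label and port 6379 from the manifest above) would look like this:

apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  namespace: redis
spec:
  clusterIP: None        # headless: no virtual IP, only per-Pod DNS records
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379
    targetPort: 6379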

Once all six Pods are up, the Redis cluster can be initialized with the following command:

kubectl exec -it redis-cluster-0 -n redis -- redis-cli -a root --cluster create --cluster-replicas 1 $(kubectl get pods -n redis -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
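
Here --cluster-replicas 1 tells redis-cli to pair each of the three masters with one replica. The jsonpath expression simply expands to the space-separated Pod IPs, so with the IPs used in this example the command is equivalent to:

kubectl exec -it redis-cluster-0 -n redis -- redis-cli -a root --cluster create --cluster-replicas 1 \
  10.204.90.89:6379 10.204.90.57:6379 10.204.90.68:6379 10.204.90.67:6379 10.204.90.96:6379 10.204.90.73:6379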

Output like the following indicates that the cluster was initialized successfully:

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.204.90.96:6379 to 10.204.90.89:6379
Adding replica 10.204.90.73:6379 to 10.204.90.57:6379
Adding replica 10.204.90.67:6379 to 10.204.90.68:6379
M: 2ea6f68e92627cb59f65820794f14eb0baacea50 10.204.90.89:6379
   slots:[0-5460] (5461 slots) master
M: 0fc032a58fdfbf9bcf3a623e40e40ee640634249 10.204.90.57:6379
   slots:[5461-10922] (5462 slots) master
M: 6b139ad5fc46a35f51d74262d4eb907a4e9af401 10.204.90.68:6379
   slots:[10923-16383] (5461 slots) master
S: 2877a03c714e013929d732c7cd41d07fee7bd76e 10.204.90.67:6379
   replicates 6b139ad5fc46a35f51d74262d4eb907a4e9af401
S: ed51536beb71e91f0e54fbfc6203f41b3f071407 10.204.90.96:6379
   replicates 2ea6f68e92627cb59f65820794f14eb0baacea50
S: 314a6300028bf261b4e9eb521f10617c39dc1f19 10.204.90.73:6379
   replicates 0fc032a58fdfbf9bcf3a623e40e40ee640634249
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 10.204.90.89:6379)
M: 2ea6f68e92627cb59f65820794f14eb0baacea50 10.204.90.89:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6b139ad5fc46a35f51d74262d4eb907a4e9af401 10.204.90.68:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 0fc032a58fdfbf9bcf3a623e40e40ee640634249 10.204.90.57:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: ed51536beb71e91f0e54fbfc6203f41b3f071407 10.204.90.96:6379
   slots: (0 slots) slave
   replicates 2ea6f68e92627cb59f65820794f14eb0baacea50
S: 314a6300028bf261b4e9eb521f10617c39dc1f19 10.204.90.73:6379
   slots: (0 slots) slave
   replicates 0fc032a58fdfbf9bcf3a623e40e40ee640634249
S: 2877a03c714e013929d732c7cd41d07fee7bd76e 10.204.90.67:6379
   slots: (0 slots) slave
   replicates 6b139ad5fc46a35f51d74262d4eb907a4e9af401
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
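
The slot split follows from dividing the 16384 slots over 3 masters: each gets 5461 slots, and the single remaining slot (3 × 5461 = 16383) goes to one of them, which is why Master[1] ends up with 5462.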

The state of each node in the cluster can be checked with the following commands; the six commands correspond to the six nodes:

kubectl exec -it redis-cluster-0 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[0]}{.status.podIP}:6379 {end}')
kubectl exec -it redis-cluster-1 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[1]}{.status.podIP}:6379 {end}')
kubectl exec -it redis-cluster-2 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[2]}{.status.podIP}:6379 {end}')
kubectl exec -it redis-cluster-3 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[3]}{.status.podIP}:6379 {end}')
kubectl exec -it redis-cluster-4 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[4]}{.status.podIP}:6379 {end}')
kubectl exec -it redis-cluster-5 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[5]}{.status.podIP}:6379 {end}')

Each of the commands above produces output like the following:

10.204.90.89:6379 (2ea6f68e...) -> 0 keys | 5461 slots | 1 slaves.
10.204.90.68:6379 (6b139ad5...) -> 0 keys | 5461 slots | 1 slaves.
10.204.90.57:6379 (0fc032a5...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.204.90.89:6379)
M: 2ea6f68e92627cb59f65820794f14eb0baacea50 10.204.90.89:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 6b139ad5fc46a35f51d74262d4eb907a4e9af401 10.204.90.68:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 0fc032a58fdfbf9bcf3a623e40e40ee640634249 10.204.90.57:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: ed51536beb71e91f0e54fbfc6203f41b3f071407 10.204.90.96:6379
   slots: (0 slots) slave
   replicates 2ea6f68e92627cb59f65820794f14eb0baacea50
S: 314a6300028bf261b4e9eb521f10617c39dc1f19 10.204.90.73:6379
   slots: (0 slots) slave
   replicates 0fc032a58fdfbf9bcf3a623e40e40ee640634249
S: 2877a03c714e013929d732c7cd41d07fee7bd76e 10.204.90.67:6379
   slots: (0 slots) slave
   replicates 6b139ad5fc46a35f51d74262d4eb907a4e9af401
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
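
The six invocations differ only in the Pod index, so a small shell loop (a sketch equivalent to the commands above) can run the same checks:

for i in 0 1 2 3 4 5; do
  ip=$(kubectl get pod redis-cluster-$i -n redis -o jsonpath='{.status.podIP}')
  kubectl exec -it redis-cluster-$i -n redis -- redis-cli -a root --cluster check $ip:6379
done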

1.2 Testing Reads and Writes Across Nodes

Test writing and reading data from different nodes:

# Write data on node 0
kubectl exec -it redis-cluster-0 -n redis -- redis-cli -c -a root
127.0.0.1:6379> set data wanna

# Read the data on node 1
kubectl exec -it redis-cluster-1 -n redis -- redis-cli -c -a root
127.0.0.1:6379> get data
-> Redirected to slot [1890] located at 10.204.90.89:6379
"wanna"

# Update the data on node 2
kubectl exec -it redis-cluster-2 -n redis -- redis-cli -c -a root
127.0.0.1:6379> set data wanna2
-> Redirected to slot [1890] located at 10.204.90.89:6379
OK

# Read the data again on node 3; it has already been updated
kubectl exec -it redis-cluster-3 -n redis -- redis-cli -c -a root
127.0.0.1:6379> get data
-> Redirected to slot [1890] located at 10.204.90.89:6379
"wanna2"

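The redirect lines come from the -c flag, which runs redis-cli in cluster mode so that it follows MOVED responses automatically. Without -c, a node that does not own slot 1890 simply returns the redirection error instead of the value, roughly like this:

kubectl exec -it redis-cluster-1 -n redis -- redis-cli -a root
127.0.0.1:6379> get data
(error) MOVED 1890 10.204.90.89:6379
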
1.3 Testing Recovery After a Single Node Goes Down

Kill the redis-cluster-0 Pod with the following command; K8S will pull up a new Pod, and its IP will change as well.

kubectl delete pod redis-cluster-0 -n redis
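
Deleting the Pod does not delete its PersistentVolumeClaim, so the AOF file and nodes.conf (the cluster-config-file, which lands under /data assuming the image keeps the official redis image's working directory) survive the restart, and the recreated Pod rejoins the cluster with its original node ID. The claims can be checked with:

kubectl get pvc -n redis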

After the Pod has been killed and recreated, check the Pod IPs again:

$ kubectl get pod -n redis -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
redis-cluster-0   1/1     Running   0          2m54s   10.204.90.51   node1   <none>           <none>
redis-cluster-1   1/1     Running   0          39m     10.204.90.57   node1   <none>           <none>
redis-cluster-2   1/1     Running   0          39m     10.204.90.68   node1   <none>           <none>
redis-cluster-3   1/1     Running   0          38m     10.204.90.67   node1   <none>           <none>
redis-cluster-4   1/1     Running   0          38m     10.204.90.96   node1   <none>           <none>
redis-cluster-5   1/1     Running   0          38m     10.204.90.73   node1   <none>           <none>

Next, re-run the cluster check from the new Pod. The master that was previously at 10.204.90.89:6379 now shows up at 10.204.90.51:6379 with the same node ID (2ea6f68e...), i.e. the cluster has automatically picked up the Pod's new address.

$ kubectl exec -it redis-cluster-0 -n redis -- redis-cli -a root --cluster check $(kubectl get pods -n redis -o jsonpath='{range.items[0]}{.status.podIP}:6379 {end}')
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.204.90.51:6379 (2ea6f68e...) -> 1 keys | 5461 slots | 1 slaves.
10.204.90.57:6379 (0fc032a5...) -> 0 keys | 5462 slots | 1 slaves.
10.204.90.68:6379 (6b139ad5...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 10.204.90.51:6379)
M: 2ea6f68e92627cb59f65820794f14eb0baacea50 10.204.90.51:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 2877a03c714e013929d732c7cd41d07fee7bd76e 10.204.90.67:6379
   slots: (0 slots) slave
   replicates 6b139ad5fc46a35f51d74262d4eb907a4e9af401
S: 314a6300028bf261b4e9eb521f10617c39dc1f19 10.204.90.73:6379
   slots: (0 slots) slave
   replicates 0fc032a58fdfbf9bcf3a623e40e40ee640634249
M: 0fc032a58fdfbf9bcf3a623e40e40ee640634249 10.204.90.57:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 6b139ad5fc46a35f51d74262d4eb907a4e9af401 10.204.90.68:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: ed51536beb71e91f0e54fbfc6203f41b3f071407 10.204.90.96:6379
   slots: (0 slots) slave
   replicates 2ea6f68e92627cb59f65820794f14eb0baacea50
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
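
As an extra sanity check (the "1 keys in 3 masters" line above already suggests the data survived the restart), the key written in section 1.2 can be read back and should still return "wanna2":

kubectl exec -it redis-cluster-0 -n redis -- redis-cli -c -a root get data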

2. Installing the Cluster with Redis-Operator

Alternatively, you can use tongdun's Redis-Operator.

See the project on GitHub for details: td-redis-operator

The cluster can be deployed with the following resource manifest:

redis-cluster.yaml
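
Assuming the td-redis-operator and its CRDs have already been installed in the cluster, the manifest is then applied in the usual way:

kubectl apply -f redis-cluster.yaml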
