2021/07/19

Deploying TiKV to a local k8s cluster and connecting from the Rust client

Tags: k8s, tikv, rust

I deployed TiKV, a distributed key-value store with ACID transactions, to my local k8s cluster and connected to it from the Rust client.

My local k8s cluster is a single-node setup used mainly for testing.

Deploying TiKV

First, deploy a test TiKV cluster to k8s.

Deploying the Operator

namespace

apiVersion: v1
kind: Namespace
metadata:
  name: tikv

CRD

# For Kubernetes before 1.16.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tikvclusters.tikv.org
spec:
  group: tikv.org
  scope: Namespaced
  names:
    plural: tikvclusters
    singular: tikvcluster
    kind: TikvCluster
  versions:
  - name: v1alpha1
    served: true
    storage: true
  validation:
    openAPIV3Schema:
      type: object
  additionalPrinterColumns:
  - JSONPath: .status.conditions[?(@.type=="Ready")].status
    name: Ready
    type: string
  - JSONPath: .status.pd.image
    description: The image for PD cluster
    name: PD
    type: string
  - JSONPath: .spec.pd.replicas
    description: The desired replicas number of PD cluster
    name: Desire
    type: integer
  - JSONPath: .status.pd.statefulSet.readyReplicas
    description: The current replicas number of PD cluster
    name: Current
    type: integer
  - JSONPath: .status.tikv.image
    description: The image for TiKV cluster
    name: TiKV
    type: string
  - JSONPath: .spec.tikv.replicas
    description: The desired replicas number of TiKV cluster
    name: Desire
    type: integer
  - JSONPath: .status.tikv.statefulSet.readyReplicas
    description: The current replicas number of TiKV cluster
    name: Current
    type: integer
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp
  - JSONPath: .status.conditions[?(@.type=="Ready")].message
    name: Status
    priority: 1
    type: string
  • I created the following resource files using this as a reference.

service account

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tikv-operator
  namespace: tikv

rbac

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tikv-operator-controller-manager
rules:
- apiGroups:
  - tikv.org
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - 'apps'
  resources:
  - 'statefulsets'
  - 'deployments'
  verbs:
  - '*'
- apiGroups:
  - ''
  resources:
  - 'events'
  - 'pods'
  - 'persistentvolumeclaims'
  - 'persistentvolumes'
  - 'services'
  - 'endpoints'
  - 'nodes'
  - 'configmaps'
  - 'serviceaccounts'
  verbs:
  - '*'
- apiGroups:
  - 'rbac.authorization.k8s.io'
  resources:
  - 'roles'
  - 'rolebindings'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tikv-operator-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tikv-operator-controller-manager
subjects:
- kind: ServiceAccount
  name: tikv-operator
  namespace: tikv
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tikv-operator-controller-manager-leaderelection
rules:
- apiGroups:
  - ''
  resources:
  - 'endpoints'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tikv-operator-controller-manager-leaderelection
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tikv-operator-controller-manager-leaderelection
subjects:
- kind: ServiceAccount
  name: tikv-operator
  namespace: tikv

deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tikv-operator
  namespace: tikv
spec:
  selector:
    matchLabels:
      app: tikv-operator
      version: "0.1.0"
  template:
    metadata:
      labels:
        app: tikv-operator
        version: "0.1.0"
    spec:
      serviceAccountName: tikv-operator
      containers:
        - name: tikv-operator
          image: "pingcap/tikv-operator:v0.1.0"
          imagePullPolicy: IfNotPresent
          command:
            - /usr/local/bin/tikv-controller-manager
          args:
            - "--pd-discovery-image=pingcap/tikv-operator:v0.1.0"
            - -v=2
          ports:
            - name: http
              containerPort: 6060
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: http
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
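The manifests above can then be applied in order with kubectl. The filenames here are just an assumption about how the snippets were saved:

```shell
# Hypothetical filenames -- adjust to match how you saved each manifest above.
kubectl apply -f namespace.yaml        # Namespace: tikv
kubectl apply -f crd.yaml              # TikvCluster CustomResourceDefinition
kubectl apply -f service-account.yaml  # ServiceAccount: tikv-operator
kubectl apply -f rbac.yaml             # ClusterRole/Binding and Role/Binding
kubectl apply -f deployment.yaml       # tikv-operator Deployment
```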

Verification

If everything went well, the tikv-operator pod will be running.

% kubectl get pods -n tikv -w
NAME                            READY   STATUS              RESTARTS   AGE
tikv-operator-f46cbf5c9-fhcrd   1/1     Running             0          15s

Deploying the TiKV Cluster

Notes

My local k8s cluster has Rook Ceph installed, which provides a storage class named rook-ceph-block. (This step is not strictly necessary; for testing, simply using a host-local directory is probably easier.)
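If you don't have a dynamic provisioner like Rook Ceph, one simple alternative for testing is a manually provisioned hostPath volume. This is only a sketch, not part of the original setup; the storage class name local-storage and the path are assumptions:

```yaml
# Sketch: a manually provisioned hostPath volume, for testing only.
# "local-storage" and /data/tikv-pv0 are hypothetical names.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tikv-pv0
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # reference this name from the TikvCluster spec
  hostPath:
    path: /data/tikv-pv0
```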

TikvCluster

# IT IS NOT SUITABLE FOR PRODUCTION USE.
# This YAML describes a basic TiDB cluster with minimum resource requirements,
# which should be able to run in any Kubernetes cluster with storage support.
apiVersion: tikv.org/v1alpha1
kind: TikvCluster
metadata:
  name: basic
  namespace: tikv
spec:
  version: v4.0.0
  pd:
    baseImage: pingcap/pd
    replicas: 1
    storageClassName: rook-ceph-block  # use the rook-ceph-block storage class
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 1
    storageClassName: rook-ceph-block  # use the rook-ceph-block storage class
    requests:
      storage: "4Gi"
    config: {}

Verification

If everything went well, the TiKV cluster pods and services will be running.

% kubectl get pods -n tikv
NAME                               READY   STATUS    RESTARTS   AGE
basic-discovery-5bcf68669b-5wkk8   1/1     Running   2          15s
basic-pd-0                         1/1     Running   2          15s
basic-tikv-0                       1/1     Running   2          15s

% kubectl get svc -n tikv
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
basic-discovery   ClusterIP   10.106.17.103    <none>        10261/TCP   15s
basic-pd          ClusterIP   10.104.179.241   <none>        2379/TCP    15s
basic-pd-peer     ClusterIP   None             <none>        2380/TCP    15s
basic-tikv-peer   ClusterIP   None             <none>        20160/TCP   15s

Verifying connectivity to the service

# Use kubectl port-forward to bind the basic-pd service to local port 2379
% kubectl port-forward svc/basic-pd -n tikv 2379:2379
# Send a request to the HTTP endpoint and confirm that a valid response comes back
% curl 127.0.0.1:2379/pd/api/v1/stores
{
  "count": 1,
  "stores": [
    {
      "store": {
        "id": 1,
        "address": "basic-tikv-0.basic-tikv-peer.tikv.svc:20160",
        "version": "4.0.0",
        "status_address": "0.0.0.0:20180",
        "git_hash": "198a2cea01734ce8f46d55a29708f123f9133944",
        "start_timestamp": 1626494507,
        "deploy_path": "/",
        "last_heartbeat": 1626494687896856195,
        "state_name": "Up"
      },
      "status": {
        "capacity": "3.875GiB",
        "available": "1.58GiB",
        "used_size": "31.5MiB",
        "leader_count": 1,
        "leader_weight": 1,
        "leader_score": 1,
        "leader_size": 1,
        "region_count": 1,
        "region_weight": 1,
        "region_score": 1,
        "region_size": 1,
        "start_ts": "2021-07-17T04:01:47Z",
        "last_heartbeat_ts": "2021-07-17T04:04:47.896856195Z",
        "uptime": "3m0.896856195s"
      }
    }
  ]
}
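Besides /stores, the PD HTTP API exposes a few other endpoints that are handy for checking cluster state while the port-forward is active, for example:

```shell
# With `kubectl port-forward svc/basic-pd -n tikv 2379:2379` still running:
curl 127.0.0.1:2379/pd/api/v1/members   # PD members with their client/peer URLs
curl 127.0.0.1:2379/pd/api/v1/health    # health status of each PD member
curl 127.0.0.1:2379/pd/api/v1/config    # effective PD configuration
```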

Connecting from Rust

Cargo.toml

  • Add tikv-client and tokio as dependencies.
  • dotenv and serde are optional.

[dependencies]
tikv-client = "0.1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.8.0", features = ["full"] }
dotenv = "0.15.0"

test.rs

use std::env;
use dotenv::dotenv;
use tikv_client::RawClient;
use tikv_client::TransactionClient;
use std::error::Error;

/// cargo test --package tikv-workaround --bin tikv-workaround test::test_raw -- --exact
#[tokio::test]
async fn test_raw() -> Result<(), Box<dyn Error>> {
    dotenv().ok();
    let pd_address = env::var("PD_ADDRS")
        .unwrap_or("127.0.0.1:2379".into());
    let client = RawClient::new(vec![pd_address]).await?;
    client.put("key".to_owned(), "value".to_owned()).await?;
    let value = client.get("key".to_owned()).await?;
    assert_eq!(value.unwrap(), "value".as_bytes());
    Ok(())
}

/// cargo test --package tikv-workaround --bin tikv-workaround test::test_transactional -- --exact
#[tokio::test]
async fn test_transactional() -> Result<(), Box<dyn Error>> {
    dotenv().ok();
    let pd_address = env::var("PD_ADDRS")
        .unwrap_or("127.0.0.1:2379".into());
    let txn_client = TransactionClient::new(vec![pd_address]).await?;
    let mut txn = txn_client.begin_optimistic().await?;
    txn.put("key".to_owned(), "value2".to_owned()).await?;
    let value = txn.get("key".to_owned()).await?;
    assert_eq!(value.unwrap(), "value2".as_bytes());
    txn.commit().await?;
    Ok(())
}

Running the tests

I first tried running the tests with only the TiKV PD service bound locally via kubectl port-forward, like this...

% kubectl port-forward svc/basic-pd -n tikv 2379:2379

...and got the following error:

failed to connect to [Member { name: \"basic-pd-0\", member_id: 17920683193262735092, peer_urls: [\"http://basic-pd-0.basic-pd-peer.tikv.svc:2380\"], client_urls: [\"http://basic-pd-0.basic-pd-peer.tikv.svc:2379\"], leader_priority: 0, deploy_path: \"\", binary_version: \"\", git_hash: \"\", dc_location: \"\" }]" }

Apparently the client doesn't just talk to PD: it uses the member information PD returns to connect to the leader, so it needs to be on a network that can reach addresses like basic-pd-0.basic-pd-peer.tikv.svc:2380 and basic-pd-0.basic-pd-peer.tikv.svc:2379.

So I gave up on connecting from my local machine for now (a proxy would probably work, but it seemed like too much trouble) and decided to verify the behavior with a Job running on k8s instead.

First, create a Dockerfile. The project layout:

.
├── Cargo.lock
├── Cargo.toml
├── Dockerfile
├── rust-toolchain
├── src
│   ├── main.rs
│   └── test.rs

rust-toolchain

1.53.0

Cargo.toml

[package]
name = "tikv-workaround"
version = "0.1.0"
authors = ["Hiroki Tanaka"]
edition = "2018"

[dependencies]
tikv-client = "0.1.0"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.8.0", features = ["full"] }
dotenv = "0.15.0"

[dev-dependencies]

Dockerfile

FROM debian:stretch as builder

RUN echo "deb http://deb.debian.org/debian stretch-backports main" > /etc/apt/sources.list.d/backports.list \
    && apt-get update && apt-get install -y protobuf-compiler/stretch-backports cmake curl clang \
    && apt-get clean && rm -r /var/lib/apt/lists/*

RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain none
ENV PATH "$PATH:/root/.cargo/bin"

WORKDIR /tikv-workaround

# install utilities and set openssl env vars.
RUN apt-get update && \
    apt-get install -y libssl-dev perl
ENV OPENSSL_LIB_DIR /usr/lib/x86_64-linux-gnu/
ENV OPENSSL_INCLUDE_DIR /usr/include/openssl/

# update rustup to newer version.
COPY rust-toolchain /tikv-workaround/rust-toolchain
RUN rustup install $(cat rust-toolchain)

# build from source
COPY Cargo.toml /tikv-workaround/Cargo.toml
COPY Cargo.lock /tikv-workaround/Cargo.lock
COPY src /tikv-workaround/src

# Capture backtrace on error
ENV RUST_BACKTRACE 1

# Rust log level
ENV RUST_LOG trace

RUN cargo build

CMD ["cargo", "test", "--package", "tikv-workaround", "--bin", "tikv-workaround", "test::test_raw", "--", "--exact"]

With this in place, build the docker image. (The image must be built on the k8s node so the cluster can use it.)

docker build -t tikv-workaround:latest -f Dockerfile .
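How a locally built image becomes visible to the node depends on your cluster flavor; the commands below are examples for common local setups, not the steps from this post:

```shell
# kind: load the image into the node's container runtime
kind load docker-image tikv-workaround:latest

# minikube: point docker at the node's daemon, then build there
eval $(minikube docker-env)
docker build -t tikv-workaround:latest -f Dockerfile .
```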

I then created the following Job resource to run it.

apiVersion: batch/v1
kind: Job
metadata:
  name: tikv-workaround
  namespace: tikv
spec:
  template:
    metadata:
      labels:
        app: tikv-workaround
    spec:
      restartPolicy: OnFailure
      containers:
        - name: tikv-workaround
          image: tikv-workaround:latest
          imagePullPolicy: Never
          # run test::test_raw or test::test_transactional
          command: [
              "cargo", "test", "--package", "tikv-workaround", "--bin", "tikv-workaround", "test::test_transactional",
              "--", "--exact"
          ]
          env:
            - name: RUST_BACKTRACE
              value: "1"
            - name: RUST_LOG
              value: "trace"
            - name: PD_ADDRS
              value: "basic-pd:2379"

With this, I was able to connect to the TiKV cluster and exercise Get, Put, and transactions without any problems.

That's all.