
Offline Deployment Manual


1 Preface

1.1 Product Profile

Guance is a cloud service platform aimed at solving the observability challenges of cloud computing and building full-link observability for complete applications in the cloud-native era. Built by Guance Weilai since 2018, Guance serves cloud-based development project teams across China. Compared with complex and fast-changing open-source products such as ELK, Prometheus, Grafana and SkyWalking, Guance is not merely a monitoring product; it provides overall observability services. In addition to integrating the underlying storage and system architecture, we provide complete analysis and deconstruction of all cloud computing and cloud-native related technology stacks, so that any project team can easily use our product without investing excessive effort in researching or adapting immature open-source components. Guance charges on a service-based, on-demand, pay-by-volume basis, entirely according to the amount of data generated by users, with no hardware investment required. For paying clients, we also establish a professional service team to help build a data-based core assurance system that is real-time, flexible, easy to scale and easy to deploy, and supports both cloud SaaS and on-premises deployment modes.

1.2 Description of This Document

This document focuses on offline deployment (including but not limited to physical servers and IDC computer rooms), and introduces the complete steps from resource planning and configuration to deployment and operation of Guance.

Description:

  • This document uses dataflux.cn as the example main domain name; replace it with your actual domain name during deployment.

1.3 Keywords

| Entry | Description |
| --- | --- |
| Launcher | A web application used to deploy and install Guance; installation and upgrade are completed by following the Launcher service's guided steps. |
| O&M Operator | An operations and maintenance machine with kubectl installed, on the same network as the target Kubernetes cluster. |
| Deployment Operator | A machine that accesses the Launcher service in a browser to complete the guided installation and debugging of Guance. |
| kubectl | The Kubernetes command-line client tool, installed on the O&M operator. |

1.4 Deployment Architecture

2 Resource Preparation

2.1 Resources List

| Usage | Resource Type | Minimum Specification | Recommended Specification | Quantity | Notes |
| --- | --- | --- | --- | --- | --- |
| Kubernetes cluster | Physical server / virtual machine | 4C8GB, 100GB | 8C16GB, 100GB | 3 | k8s cluster master nodes / etcd cluster. Note: if virtual machines are used, raise the resource specification appropriately. |
| | Physical server / virtual machine | 4C8GB, 100GB | 8C16GB, 100GB | 4 | k8s cluster worker nodes, hosting the Guance applications, k8s components and basic component services (MySQL 5.7.18, Redis 6.0.6). |
| | Physical server / virtual machine | 2C4GB, 100GB | 4C8GB, 200GB | 1 | Optional; used to deploy a reverse proxy server that proxies to the ingress edge nodes. Note: for security, cluster edge nodes are not exposed directly. |
| | Physical server / virtual machine | 2C4GB, 200GB | 4C8GB, 1TB high-performance disk | 1 | Deploys the network file system / network storage service; NFS by default. |
| DataWay | Physical server / virtual machine | 2C4GB, 100GB | 4C8GB, 100GB | 1 | User-deployed DataWay. |
| ElasticSearch | Physical server / virtual machine | 4C8GB, 1TB | 8C16GB, 1TB | 3 | Independently deployed (binary) ES cluster, version 7.4+ (7.10 recommended). Note: password authentication must be enabled and the matching analysis-ik word-segmentation plugin installed. |
| InfluxDB | Physical server / virtual machine | 4C8GB, 500GB | 8C16GB, 1TB | 1 | k8s cluster node hosting the InfluxDB server; version 1.7.8. |
| Others | Mail server / SMS | - | - | 1 | SMS gateway and mail server, used as alarm channels. |
| | ICP-filed official wildcard domain name | - | - | 1 | The main domain name must be ICP-filed. |
| | SSL/TLS certificate | Wildcard domain certificate | Wildcard domain certificate | 1 | Ensures site security. |

Note:

  1. The minimum specification is for functional verification only; it is suitable for POC scenario deployment but not for a production environment.
  2. For a production deployment, size the environment according to the actual volume of ingested data: the more data is ingested, the higher the storage and specifications required for InfluxDB and Elasticsearch.

2.2 Create Resources

2.2.1 kubernetes Cluster Resource Creation

Important!!!

  • Before deployment, the cluster nodes must be labeled accordingly; the labels correspond to the nodeSelector fields in the YAML files, otherwise deploying the example YAML will report errors. Please read the example YAML files carefully (see the example commands after this list).
  • The NFS server node must have both the NFS server and client installed and the service status verified; all other cluster nodes must have the NFS client installed and the mount status verified.
  • You need to prepare the required images in advance, or pull them from Alibaba Cloud's image registry.
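
A minimal sketch of these preparation checks is shown below; the node name, NFS server address and export path are placeholders and must be replaced with your actual values:

$ kubectl label node <nodename> app01=influxdb      # must match the nodeSelector used in the InfluxDB YAML
$ kubectl label node <nodename> app02=ingress       # must match the nodeSelector used in the ingress-nginx YAML
$ kubectl get nodes --show-labels                   # confirm the labels were applied

# on every cluster node: install the NFS client and verify that the share can be mounted
$ yum install -y nfs-utils                          # or: apt-get install -y nfs-common
$ showmount -e <nfs-server-ip>                      # list the exports offered by the NFS server
$ mount -t nfs <nfs-server-ip>:<export-path> /mnt && umount /mnt   # test mount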

For Kubernetes cluster deployment, refer to https://kubernetes.io/zh/docs/home/. After the cluster resources are created, verify that the cluster works normally, including node status, cluster component status and service (DNS) resolution.

By default, kube-proxy works in iptables mode, in which the host cannot ping a Service (svc) address; in ipvs mode the svc address can be pinged directly.

When Node Local DNS is not configured, containers obtain the same DNS service address, which falls within the svc address range. For deployment of the Kubernetes ingress component, refer to https://github.com/kubernetes/ingress-nginx
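
A minimal set of verification commands (assuming kubectl access and CoreDNS as the cluster DNS) might look like the following; busybox:1.30.0 is reused from the manifests later in this manual:

$ kubectl get nodes -o wide                      # every node should be Ready
$ kubectl get pods -n kube-system                # core components (coredns, kube-proxy, ...) should be Running
$ kubectl run dns-test --rm -it --image=busybox:1.30.0 --restart=Never -- nslookup kubernetes.default   # service resolution check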

# On some k8s versions, creating the Ingress resource fails and the admission webhook configuration must be deleted first:
#   kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io ingress-nginx-admission
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 31257
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 31256
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:

      tolerations:
      - key: ''
        operator: 'Exists'
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      containers:
        - name: controller
          image:  pollyduan/ingress-nginx-controller:v0.47.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        app02: ingress
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-3.33.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.47.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: docker.io/jettech/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
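
With the manifest above saved locally (the file name ingress-nginx.yaml is only an example), it can be applied and checked as follows:

$ kubectl apply -f ingress-nginx.yaml
$ kubectl get pods -n ingress-nginx -o wide      # the controller pod should become Running
$ kubectl get svc -n ingress-nginx               # NodePorts 31257 (http) and 31256 (https) should be listed
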
For deployment of the kubernetes nfs-subdir-external-provisioner component, refer to https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: dyrnq/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.90.14.175   # replace with the actual NFS server address
            - name: NFS_PATH
              value: /home/zhuyun   # replace with the actual shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.90.14.175 # replace with the actual NFS server address
            path: /home/zhuyun  # replace with the actual shared directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: df-nfs-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"  # set this StorageClass as the default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name; must match the Deployment's PROVISIONER_NAME env
allowVolumeExpansion: true 
reclaimPolicy: Delete
parameters:
  archiveOnDelete: "false"            
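
After applying the provisioner and StorageClass manifests above, the storage setup can be checked with a few commands; the PVC below is a throwaway test object, not part of the product installation:

$ kubectl get storageclass                        # df-nfs-storage should be marked (default)
$ kubectl get pods | grep nfs-client-provisioner  # the provisioner pod should be Running
$ kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Mi
EOF
$ kubectl get pvc test-claim                      # STATUS should become Bound
$ kubectl delete pvc test-claim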

2.2.2 Create Basic Resources and Middleware Resources

MySQL, Redis, InfluxDB, Elasticsearch and NFS storage should be created according to the configuration requirements.

2.3 Resource Configuration

2.3.1 MySQL

  • Create an administrator account (it must be an administrator account, because it is used to create and initialize the various application databases during installation and initialization; if remote connections are required, enable them manually).
---
apiVersion: v1
kind: Namespace
metadata:
  name: middleware

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: "kubernetes.io/nfs"
  #  volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
  name: mysql-data
  namespace: middleware
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard-nfs-storage  # specify an existing StorageClass here


---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: middleware
  labels:
    app: mysql
data:
  mysqld.cnf: |-
        [mysqld]
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        datadir         = /var/lib/mysql
        symbolic-links=0
        max_connections=5000



---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: middleware
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: mysql
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassw0rd
        - name: MYSQL_DATABASE
          value: FT2.0
        - name: MYSQL_USER
          value: admin
        - name: MYSQL_PASSWORD
          value: admin@123
        image: mysql:5.7
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: dbport
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: db
        - mountPath: /etc/mysql/mysql.conf.d/mysqld.cnf
          name: config
          subPath: mysqld.cnf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: mysql-data
      - name: config
        configMap:
          name: mysql-config
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: middleware
spec:
  ports:
  - name: mysqlport
    nodePort: 32306
    port: 3306
    protocol: TCP
    targetPort: dbport
  selector:
    app: mysql
  sessionAffinity: None
  type: NodePort

Note: If the deployment is not successful, you can use Docker to deploy MySQL.
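
To confirm that MySQL is reachable with the administrator account defined above (root / rootPassw0rd in this example manifest; kubectl 1.18+ can target deploy/mysql directly, otherwise use the pod name):

$ kubectl exec -ti -n middleware deploy/mysql -- mysql -uroot -prootPassw0rd -e "SELECT VERSION();"
# or from outside the cluster via the NodePort
$ mysql -h <cluster-node-ip> -P 32306 -uroot -prootPassw0rd -e "SHOW DATABASES;"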

2.3.2 Redis

  • Redis password needs to be set
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: middleware
    
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: redis-config
      namespace: middleware
    data:
      redis.conf: |
        requirepass viFRKZiZkoPmXnyF
        appendonly yes
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis
      namespace: middleware
      labels:
        app: redis
    spec:
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
            - command:
                - redis-server
                - /usr/local/etc/redis/redis.conf
              name: redis
              image: redis:5.0.7
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 6379
                  name: redis-port
              volumeMounts:
                - name: data
                  mountPath: /data
                - name: config
                  mountPath: /usr/local/etc/redis
          volumes:
            - name: data
              emptyDir: {}
            - name: config
              configMap:
                name: redis-config
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
      namespace: middleware
    spec:
      selector:
        app: redis
      type: NodePort
      ports:
        - name: redis-port
          protocol: TCP
          port: 6379
          targetPort: redis-port
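
A quick connectivity and authentication check for Redis, using the requirepass value from the ConfigMap above, can be run inside the cluster, for example:

$ kubectl exec -ti -n middleware deploy/redis -- redis-cli -a viFRKZiZkoPmXnyF ping   # should return PONG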
    

2.3.3 InfluxDB

  • Before deploying InfluxDB, you need to label the selected nodes:
$ kubectl label nodes <nodename> app01=influxdb
  • Create an administrator account (it must be an administrator account as it will be used to create and initialize DB, RP, and other information during the subsequent installation).
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: "kubernetes.io/nfs"
  name: influx-data
  namespace: middleware
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeMode: Filesystem
  storageClassName: standard-nfs-storage 
  # Specify the actual existing storage class here. If there is a default storage class configured, this field can be left empty. #



---
apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  namespace: middleware
  labels:
    app: influxdb
data:
  influxdb.conf: |-
    [meta]
      dir = "/var/lib/influxdb/meta"

    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
      max-values-per-tag = 0
      max-series-per-database = 0


---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: middleware
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      nodeSelector:     ## Configure the container to schedule to the specified node, provided that the specified node is labeled  ##
        app01: influxdb
      containers:
      - env:
        - name: INFLUXDB_ADMIN_ENABLED
          value: "true"
        - name: INFLUXDB_ADMIN_PASSWORD
          value: admin@influxdb
        - name: INFLUXDB_ADMIN_USER
          value: admin
        - name: INFLUXDB_GRAPHITE_ENABLED
          value: "true"
        - name: INFLUXDB_HTTP_AUTH_ENABLED
          value: "true"
        image: influxdb:1.7.8
        imagePullPolicy: IfNotPresent
        name: influxdb
        ports:
        - containerPort: 8086
          name: api
          protocol: TCP
        - containerPort: 8083
          name: adminstrator
          protocol: TCP
        - containerPort: 2003
          name: graphite
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: db
        - mountPath: /etc/influxdb/influxdb.conf
          name: config
          subPath: influxdb.conf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: db
        #hostPath: /influx-data
        persistentVolumeClaim:
          claimName: influx-data
      - name: config
        configMap:
          name: influxdb-config
---
apiVersion: v1
kind: Service
metadata:
  name: influxdb
  namespace: middleware
spec:
  ports:
  - name: api
    nodePort: 32086
    port: 8086
    protocol: TCP
    targetPort: api
  - name: adminstrator
    nodePort: 32083
    port: 8083
    protocol: TCP
    targetPort: adminstrator
  - name: graphite
    nodePort: 32003
    port: 2003
    protocol: TCP
    targetPort: graphite
  selector:
    app: influxdb
  sessionAffinity: None
  type: NodePort
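
Once the InfluxDB pod is running, the administrator credentials defined above (admin / admin@influxdb in this example) can be verified from inside the cluster, for example:

$ kubectl exec -ti -n middleware deploy/influxdb -- influx -username admin -password admin@influxdb -execute "SHOW DATABASES"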

2.3.4 Elasticsearch

Reference example for deploying an ES cluster on Kubernetes

Note: This YAML is intended for a POC environment and is convenient for testing.

## The ConfigMap can be modified according to actual testing needs
## The namespace is middleware and can be modified according to actual testing needs
## The volumes use dynamically provisioned storage, which must be confirmed in advance; if dynamic storage is not configured, modify them to use host directories as needed.
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: middleware
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node.master: true
    node.data: true
    node.ingest: true
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
---
apiVersion: v1
kind: Service
metadata:
  namespace: middleware
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  clusterIP: None
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: middleware
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      initContainers:
      - name: fix-permissions
        image: busybox:1.30.0
        imagePullPolicy: IfNotPresent
        args:
        - chown -R 1000:1000 /usr/share/elasticsearch/data; chown -R 1000:1000 /usr/share/elasticsearch/logs;
          chown -R 1000:1000 /usr/share/elasticsearch/plugins
        command:
        - /bin/sh
        - -c
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: es-data
        - mountPath: /usr/share/elasticsearch/plugins
          name: plugins
        - mountPath: /usr/share/elasticsearch/logs
          name: logs
      - name: increase-vm-max-map
        image: busybox:1.30.0
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox:1.30.0
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true 
      nodeName: cf-standard-02003 # set to the host name of the node this pod should be scheduled to, according to the actual environment
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
        env:
        - name: CLUSTER_NAME
          value: dataflux-es
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m" #Adjust according to test needs
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: es-data
        - name: logs
          mountPath: /usr/share/elasticsearch/logs
        - name: plugins 
          mountPath: /usr/share/elasticsearch/plugins
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
      volumes:
      - name: es-data
        persistentVolumeClaim:
          claimName: es-data
        # hostPath:
        #   path: /alidata/elasticsearch_data
        #   type: DirectoryOrCreate
      - name: plugins
        persistentVolumeClaim:
          claimName: es-plugins
        # hostPath:
        #   path: /alidata/elasticsearch_plugins
        #   type: DirectoryOrCreate
      - name: logs
        persistentVolumeClaim:
          claimName: es-logs
        # hostPath:
        #   path: /alidata/elasticsearch_logs
        #   type: DirectoryOrCreate    
      - name: config
        configMap:
          name: elasticsearch-master-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    # volume.beta.kubernetes.io/storage-class: "df-nfs-storage"
    # volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/nfs
  name: es-plugins
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: "df-nfs-storage"
  volumeMode: Filesystem      
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    # volume.beta.kubernetes.io/storage-class: "df-nfs-storage"
    # volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/nfs
  name: es-data
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: "df-nfs-storage"
  volumeMode: Filesystem   
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    # volume.beta.kubernetes.io/storage-class: "df-nfs-storage"
    # volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/nfs
  name: es-logs
  namespace: middleware
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: "df-nfs-storage"
  volumeMode: Filesystem           
  • After the service deployment is completed, create an administrator account (authentication needs to be turned on).

    Log in interactively to the deployed ES service through the kubectl command-line client and create a superuser:

    $  kubectl exec -ti -n middleware es-cluster-0 -- bin/elasticsearch-users useradd copriwolf -p sayHi2Elastic -r superuser 
    

Note: Save the relevant account information. (If you use the built-in accounts, persist the elasticsearch.keystore file so that they remain usable after a restart, or use a self-created administrator account instead.)

  • Modify elastic password
$  kubectl exec -ti -n middleware es-cluster-0 -- curl -u copriwolf:sayHi2Elastic \
       -XPUT "http://localhost:9200/_xpack/security/user/elastic/_password?pretty" \
       -H 'Content-Type: application/json' \
       -d '{"password": "4dIv4VJQG5t5dcJOL8R5"}'
  • Install the Chinese word-segmentation plugin into ES
$  kubectl exec -ti -n middleware es-cluster-0 -- bash
$  ./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.5.1/elasticsearch-analysis-ik-7.5.1.zip
  • Disable automatic index creation
 $  kubectl exec -ti -n middleware es-cluster-0 -- curl -X PUT -u elastic:4dIv4VJQG5t5dcJOL8R5 "elasticsearch.middleware:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'{
  "persistent": {
    "action.auto_create_index": "false"
  }
}'
  • For easier UI management, it is recommended to deploy cerebro or another UI management tool. Refer to https://github.com/lmenezes/cerebro/releases
  • Install the Chinese word-segmentation plugin manually:
  • Download the plugin matching your ES version from https://github.com/medcl/elasticsearch-analysis-ik/releases (if the node has network access, the plugin can also be installed with bin/elasticsearch-plugin install [plugin_name]; in that case move the config directory under the plugin directory into the plugin persistence directory).
  • After unzipping, put it in the plugins directory of the elasticsearch directory, such as:
[root@ft-elasticsearch-867fb8d9bb-xchnm plugins]# find .
.
./analysis-ik
./analysis-ik/commons-codec-1.9.jar
./analysis-ik/commons-logging-1.2.jar
./analysis-ik/config
./analysis-ik/config/IKAnalyzer.cfg.xml
./analysis-ik/config/extra_main.dic
./analysis-ik/config/extra_single_word.dic
./analysis-ik/config/extra_single_word_full.dic
./analysis-ik/config/extra_single_word_low_freq.dic
./analysis-ik/config/extra_stopword.dic
./analysis-ik/config/main.dic
./analysis-ik/config/preposition.dic
./analysis-ik/config/quantifier.dic
./analysis-ik/config/stopword.dic
./analysis-ik/config/suffix.dic
./analysis-ik/config/surname.dic
./analysis-ik/elasticsearch-analysis-ik-7.10.1.jar
./analysis-ik/elasticsearch-analysis-ik-7.10.1.zip
./analysis-ik/httpclient-4.5.2.jar
./analysis-ik/httpcore-4.4.4.jar
./analysis-ik/plugin-descriptor.properties
./analysis-ik/plugin-security.policy
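
To confirm that the ik analyzer is actually loaded, a quick check (reusing the elastic account and password created earlier; adjust names to your environment) could be:

$ kubectl exec -ti -n middleware es-cluster-0 -- curl -u elastic:4dIv4VJQG5t5dcJOL8R5 "http://localhost:9200/_cat/plugins?v"
$ kubectl exec -ti -n middleware es-cluster-0 -- curl -u elastic:4dIv4VJQG5t5dcJOL8R5 -H 'Content-Type: application/json' \
      "http://localhost:9200/_analyze?pretty" -d '{"analyzer": "ik_max_word", "text": "观测云"}'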

2.3.5 Import of external services into the cluster (optional)

apiVersion: v1
kind: Service
metadata:
 name: mysql
spec:
 type: ClusterIP
 ports:
 - port: 3306 # Define the service ports used within the cluster
   targetPort: 23306 # The port actually used by the service

---
apiVersion: v1
kind: Endpoints
metadata:
 name: mysql
subsets:
 - addresses:
     - ip: 10.90.15.32   # The address actually provided by the external service
   ports:
     - port: 23306  # Ports actually provided by external services
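
After applying the Service and Endpoints above, in-cluster clients can reach the external MySQL through the service name (mysql in the default namespace, per this manifest); a quick check:

$ kubectl get svc,endpoints mysql                # the Endpoints should list 10.90.15.32:23306
$ kubectl run dns-check --rm -it --image=busybox:1.30.0 --restart=Never -- nslookup mysql.default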

3 kubectl Installation and Configuration

3.1 Installing kubectl

kubectl is the command-line client tool of Kubernetes and can be used to deploy applications and to inspect and manage cluster resources. Launcher relies on this command-line tool to deploy applications. For the installation method, see the official documentation:

https://kubernetes.io/docs/tasks/tools/install-kubectl/

3.2 Configure kube config

To gain the ability to manage the cluster, the cluster's kubeconfig is required. For clusters deployed with kubeadm, the default kubeconfig file is /etc/kubernetes/admin.conf; its contents need to be written to the $HOME/.kube/config file of the client user.
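
For example, on the O&M operator (assuming SSH access to a master node; the host name and paths are placeholders):

$ mkdir -p $HOME/.kube
$ scp root@<master-node>:/etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes        # verify that the cluster is reachable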

4 Start Installing Guance

4.1 Guance Offline Installation Image Download Address

If Guance is installed in an offline network environment, you must manually download the latest Guance image package, import all images on every Kubernetes worker node with the docker load command, and then proceed with the subsequent guided installation.

Download the latest Guance Docker image package at: https://static.guance.com/dataflux/package/guance-latest.tar.gz

  1. Download the Docker image package locally with the following command:

    $ wget https://static.guance.com/dataflux/package/guance-latest.tar.gz
    

  2. After downloading, upload the Docker image package to every node host in the Kubernetes cluster and import the images by executing the command that matches your container runtime:

  3. Image import command for a Docker environment:

    $ gunzip -c guance-latest.tar.gz | docker load
    

  4. Image import command for a containerd environment:

    $ gunzip guance-latest.tar.gz
    $ ctr -n=k8s.io images import guance-latest.tar
    
    Note: If the Kubernetes node hosts can access the public network, the images do not need to be imported offline as described above; the installer will download them automatically.

4.2 Launcher Service Installation Configuration

4.2.1 Launcher Installation

Launcher can be installed in either of two ways:

  • Helm Installation
  • Original YAML Installation

!!! Choose only one of the installation methods.

4.2.1.1 Helm Installation

Preconditions:

  • Helm3 is installed.
  • You have completed storage configuration.
4.2.1.1.1 Installation
# add repository
$ helm repo add launcher https://pubrepo.guance.com/chartrepo/launcher

# update repository
$ helm repo update 

# helm installs Launcher
$ helm install <RELEASE_NAME> launcher/launcher -n launcher --create-namespace  \
        --set-file configyaml="<Kubeconfig Path>" \
  --set ingress.hostName="<Hostname>",storageClassName=<StorageClass>

Note: <RELEASE_NAME> is the release name and can be set to launcher; <Kubeconfig Path> is the kubeconfig file path from section 3.2, for example /root/.kube/config; <Hostname> is the Launcher ingress domain name; <StorageClass> is the storage class name created in section 2.2.1, which can be obtained with kubectl get sc.

# This command is a demo command, please modify the content according to your own requirements
$ helm install my-launcher launcher/launcher -n launcher --create-namespace  \
        --set-file configyaml="/Users/buleleaf/.kube/config" \
  --set ingress.hostName="launcher.my.com",storageClassName=nfs-client
4.2.1.1.2 Community Version Installation

If you deploy the community edition, you can first obtain the community edition deployment image and add the --set image.repository=<image address> and --set image.tag=<image tag> parameters for deployment.

# This command is a demo command, please modify the content according to your own requirements
$ helm install my-launcher launcher/launcher -n launcher --create-namespace  \
        --set-file configyaml="/Users/buleleaf/.kube/config" \
  --set ingress.hostName="launcher.my.com",storageClassName=nfs-client \
 --set image.repository=pubrepo.jiagouyun.com/dataflux/1.40.93,image.tag=launcher-aa97377-1652102035
4.2.1.1.3 How to Uninstall

Once Launcher has been installed successfully, do not uninstall it unless something is abnormal.

helm uninstall <RELEASE_NAME> -n launcher
4.2.1.2 YAML Installation

Launcher YAML Download: https://static.guance.com/launcher/launcher.yaml

Save the above YAML content as a launcher.yaml file, place it on the O&M operator, and then replace the variable parts in the file (see the example after this list):

  • Replace {{ launcher_image }} with the image address of the latest version of Launcher.
  • For an offline installation, after importing the offline images with docker load as described above, obtain the address of the latest Launcher image imported on the worker nodes with the docker images | grep launcher command.
  • For an online installation, the latest Launcher image address can be obtained from the private deployment version image document.
  • Replace {{ domain }} with the main domain name, for example dataflux.cn.
  • Replace {{ storageClassName }} with the storage class name, which must match the name configured for the kubernetes nfs-subdir-external-provisioner. (The storageClassName field can be deleted if a default storageclass is configured.)
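
As an illustration only (the Launcher image address below is a made-up placeholder), the placeholders can be substituted with sed before applying the file:

$ sed -i \
    -e 's#{{ launcher_image }}#pubrepo.jiagouyun.com/dataflux/launcher:<tag>#g' \
    -e 's#{{ domain }}#dataflux.cn#g' \
    -e 's#{{ storageClassName }}#df-nfs-storage#g' \
    launcher.yaml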

When a default storageclass is configured, the resource is shown with a (default) marker in the kubectl get sc output.

4.2.2 Import Launcher Service

Execute the following kubectl command on the O&M operator to deploy the Launcher service: kubectl apply -f ./launcher.yaml

4.2.3 Resolving Launcher Domain Name to Launcher Service

Since the Launcher service is only used to deploy and upgrade Guance and does not need to be exposed to users, its domain name should not be resolved on the public network. You can simulate domain name resolution by binding the host on the deployment operator: add a launcher.dataflux.cn entry to /etc/hosts, for example:

192.168.0.1 launcher.dataflux.cn

Here 192.168.0.1 is the address of the edge node ingress (alternatively, change the launcher Service to NodePort and access it via cluster node IP + port).

4.3 Application Installation Boot Steps

Visit launcher.dataflux.cn in the browser on the deployment operator and complete the installation configuration step by step, following the guided steps.

4.3.1 Database Configuration

  • Services inside the cluster are connected by service name; it is recommended that in-cluster applications use the in-cluster service.
  • The account must be an administrator account, because it is used to initialize the databases and database access accounts of multiple sub-applications.

4.3.2 Redis Configuration

  • The Redis connection address must be reachable from the cluster's physical nodes.
  • Services inside the cluster are connected by service name; it is recommended that in-cluster applications use the in-cluster service.

4.3.3 InfluxDB Configuration

  • Services inside the cluster are connected by service name.
  • The account must be an administrator account, because it is used to initialize DBs, RPs and other information.
  • Multiple InfluxDB instances can be added.

4.3.4 Other Settings

  • The initial account name and mailbox of the administrator account in the Guance management center (the default password is admin; change it immediately after the first login).
  • The intranet IPs of the cluster nodes (obtained automatically; confirm that they are correct).
  • The main domain name and the sub-domain names of each sub-application; the defaults are listed below and can be modified as needed:
  • dataflux 【User Frontend】
  • df-api 【User Frontend API】
  • df-management 【Management Backend】
  • df-management-api 【Management Backend API】
  • df-websocket 【Websocket Service】
  • df-func 【Func Platform】
  • df-openapi 【OpenAPI】
  • df-static-res 【Static Resource Site】
  • df-kodo 【kodo】

  • TLS domain name certificate

4.3.5 Installation Information

The information entered in the previous steps is summarized and displayed. If anything was filled in incorrectly, go back to the previous step and correct it.

4.3.6 Application Configuration File

The installer automatically initializes the application configuration templates based on the installation information provided in the previous steps, but you should still review every application template and adjust any environment-specific configuration. See the installation interface for detailed configuration instructions.

After confirmation, submit to create the configuration files.

4.3.7 Application Images

  • Choose the correct shared storage, which is the storage class name you created in the previous steps.
  • The application images are filled in automatically according to the Launcher version you selected and do not need to be modified; after confirmation, the applications are created.

4.3.8 Application Status

The startup status of every application service is listed here. This process downloads all the images and may take from a few minutes to more than ten minutes. Once all services have started successfully, the installation is complete.

Note: During service startup, stay on this page and do not close it. At the end, a "version information was written successfully" prompt appears and no error window pops up, which means the installation succeeded.

4.4 Domain Name Resolution

Resolve the following subdomain names to the edge node ingress address (df-kodo.dataflux.cn additionally requires the manual kodo ingress configuration described below):

  • dataflux.dataflux.cn
  • df-api.dataflux.cn
  • df-management.dataflux.cn
  • df-management-api.dataflux.cn
  • df-websocket.dataflux.cn
  • df-func.dataflux.cn
  • df-openapi.dataflux.cn
  • df-static-res.dataflux.cn
  • df-kodo.dataflux.cn

The current version of Launcher Installer requires manual configuration of the kodo ingress service. Reference:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-kodo
  namespace: forethought-kodo
spec:
  rules:
  - host: df-kodo.dataflux.cn
    http:
      paths:
      - backend:
          serviceName: kodo-nginx
          servicePort: http
        path: /
        pathType: ImplementationSpecific
---
apiVersion: v1
kind: Service
metadata:
  name: kodo-nginx
  namespace: forethought-kodo
spec:
  ports:
  - name: https
    nodePort: 31841
    port: 443
    protocol: TCP
    targetPort: 80
  - name: http
    nodePort: 31385
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: deployment-forethought-kodo-kodo-nginx
  sessionAffinity: None
  type: NodePort

After the configuration is complete, haproxy or nginx can be deployed on machines outside the cluster to proxy the domain names.

 #---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# For the 443/https port configuration, uncomment the section below as needed
#frontend https_frontend
#        bind *:443 ssl crt /etc/ssl/certs/dataflux.cn.pem # ssl certificate
#        mode http
#        option httpclose
#        option forwardfor
#        reqadd X-Forwarded-Proto:\ https
#        #default_backend web_server
#        # not ingress
#        acl kodo           hdr(Host)  -i df-kodo.test.com
#
#        acl launcher       hdr(Host)  -i launcher.test.com
#        acl dataflux       hdr(Host)  -i dataflux.test.com
#        acl func           hdr(Host)  -i df-func.test.com
#        acl api            hdr(Host)  -i df-api.test.com
#        acl management     hdr(Host)  -i df-management.test.com
#        acl management-api hdr(Host)  -i df-management-api.test.com
#        acl static         hdr(Host)  -i df-static-res.test.com
#
#        use_backend vip_1_servers if dataflux
#        use_backend vip_1_servers if func
#        use_backend vip_1_servers if launcher
#        use_backend vip_1_servers if static
#        use_backend vip_1_servers if api
#        use_backend vip_1_servers if management
#        use_backend vip_1_servers if management-api
#
#        # bypasses ingress
#        use_backend vip_2_servers if kodo

# dynamic-static separation
frontend http_web
        mode http
        bind *:80
#        redirect scheme https if !{ ssl_fc}
        option httpclose
        option forwardfor
        ###### Replace the example domain test.com with your own domain name
        acl kodo           hdr(Host)  -i df-kodo.test.com

        acl launcher       hdr(Host)  -i launcher.test.com
        acl dataflux       hdr(Host)  -i dataflux.test.com
        acl func           hdr(Host)  -i df-func.test.com
        acl api            hdr(Host)  -i df-api.test.com
        acl management     hdr(Host)  -i df-management.test.com
        acl management-api hdr(Host)  -i df-management-api.test.com
        acl static         hdr(Host)  -i df-static-res.test.com

        acl dataway         hdr(Host)  -i df-dataway.test.com
        use_backend vip_1_servers if dataflux
        use_backend vip_1_servers if func
        use_backend vip_1_servers if launcher
        use_backend vip_1_servers if static
        use_backend vip_1_servers if api
        use_backend vip_1_servers if management
        use_backend vip_1_servers if management-api
        use_backend vip_1_servers if kodo

        use_backend vip_2_servers if dataway
# The IPs below are the k8s cluster node IPs and the port is the ingress NodePort; replace them with your actual values
backend vip_1_servers
        balance roundrobin
        server ingress_1 172.16.1.186:31257 check inter 1500 rise 3 fall 3
        server ingress_2 172.16.1.187:31257 check inter 1500 rise 3 fall 3
        server ingress_3 172.16.1.188:31257 check inter 1500 rise 3 fall 3

# The IP and port below are the DataWay address, which is configured later (section 4.5).
backend vip_2_servers
        balance roundrobin
        server ingress_1 172.16.1.190:9528 check inter 1500 rise 3 fall 3
#        server ingress_2 172.16.1.187:31465 check inter 1500 rise 3 fall 3
#        server ingress_3 172.16.1.188:31465 check inter 1500 rise 3 fall 3

4.5 After Installation

After a successful deployment, refer to the "How to Get Started" manual.

If a problem occurs during installation and you need to reinstall, refer to the maintenance manual.

4.6 Important Steps!!!

4.6.1 Installer Service Offline

After completing the steps above, the Guance installation can be verified. Once everything has been verified, a very important step is to take the Launcher service offline so that it cannot be accessed by mistake and damage the application configuration. Run either of the following commands on the O&M operator to set the replica count of the Launcher service to 0:

kubectl scale deployment -n launcher --replicas=0  launcher

or

kubectl patch deployment launcher -p '{"spec": {"replicas": 0}}' -n launcher
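
To confirm that Launcher is offline, the deployment should report zero ready replicas, for example:

$ kubectl get deployment launcher -n launcher    # READY should show 0/0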
