Storage Configuration Instruction

Currently, the Installer supports the following types of storage as backing storage servers, providing persistent storage services for KubeSphere (support for more storage classes is being added continuously).

  • Ceph RBD
  • GlusterFS
  • NFS
  • QingCloud Block Storage
  • QingStor NeonSAN
  • Local Volume (All-in-One installation test only)

The Installer also integrates the QingCloud-CSI (Block Storage) plugin and the QingStor NeonSAN CSI plugin, so QingCloud block storage or QingStor NeonSAN can be connected as backing storage with only simple configuration before installation.

Make sure you have a QingCloud account before using them. In addition, the Installer integrates storage clients for NFS, GlusterFS, and Ceph RBD; prepare the relevant storage server in advance, then configure the corresponding parameters in vars.yml to connect to it.

The versions of the open source storage servers, clients, and CSI plugins that have been tested with the Installer are listed below:

| Name | Version | Reference |
| --- | --- | --- |
| Ceph RBD Server | v0.94.10 | For a test installation, refer to Deploy Ceph Storage Server. For a production environment, refer to the Ceph Documentation |
| Ceph RBD Client | v12.2.5 | Before installing KubeSphere, configure the corresponding parameters in vars.yml to connect to its storage server; see Ceph RBD |
| GlusterFS Server | v3.7.6 | For a test installation, refer to Deploying GlusterFS Storage Server. For a production environment, refer to the Gluster Documentation, and install Heketi Manager (v3.0.0) as well |
| GlusterFS Client | v3.12.10 | Before installing KubeSphere, configure the corresponding parameters in vars.yml to connect to the storage server; see GlusterFS |
| NFS Server in Kubernetes | v1.0.9 | For configuration details, see NFS Server Configuration |
| NFS Client | v3.1.0 | Before installing KubeSphere, configure the corresponding parameters in vars.yml to connect to its storage server; see NFS Client |
| QingCloud-CSI | v0.2.0.1 | Configure the corresponding parameters in vars.yml before installing KubeSphere. For details, see QingCloud CSI |
| NeonSAN-CSI | v0.3.0 | Before installing KubeSphere, configure the corresponding parameters in vars.yml; see NeonSAN-CSI |

Note: A cluster cannot have two default storage classes. Before specifying a default storage class, make sure no default storage class already exists in the current cluster.
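
You can check whether a default storage class already exists using kubectl; the default class is marked with (default) next to its name in the output:

$ kubectl get storageclass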

Storage Configuration Definition

After preparing the storage server, refer to the parameter descriptions in the tables below and modify the corresponding storage class section of the configuration file (conf/vars.yml) according to your storage server.

The following is a brief description of the storage-related parameters in vars.yml; see Storage Classes for details.

Note: By default, Local Volume is configured as the default storage class of the cluster in vars.yml. If you want to make another storage class the default, first set the Local Volume default-class option to false, then modify the configuration of the corresponding storage according to your storage server before starting the installation.
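
For example, switching the default class from Local Volume to Ceph RBD would change the two default-class flags like this (an illustrative sketch using the parameter names defined in the sections below):

local_volume_is_default_class: false
ceph_rbd_is_default_class: true      # or the *_is_default_class flag of whichever storage you choose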

Ceph RBD

Ceph RBD is an open source distributed block storage system that can be configured in conf/vars.yml. Assuming you have prepared Ceph storage servers in advance, you can refer to the following definitions. See the Kubernetes Documentation for more details.

| Ceph_RBD | Description |
| --- | --- |
| ceph_rbd_enabled | Determines whether to use Ceph RBD as the persistent storage; can be set to true or false. Defaults to false |
| ceph_rbd_storage_class | Storage class name |
| ceph_rbd_is_default_class | Determines whether to set Ceph RBD as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default |
| ceph_rbd_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters |
| ceph_rbd_admin_id | Ceph client ID that is capable of creating images in the pool. Defaults to "admin" |
| ceph_rbd_admin_secret | Secret for ceph_rbd_admin_id. This parameter is required; the provided secret must have type "kubernetes.io/rbd" |
| ceph_rbd_pool | Ceph RBD pool. Defaults to "rbd" |
| ceph_rbd_user_id | Ceph client ID that is used to map the RBD image. Defaults to the same value as ceph_rbd_admin_id |
| ceph_rbd_user_secret | Secret for ceph_rbd_user_id. This secret must be created in every namespace that uses the RBD image |
| ceph_rbd_fsType | fsType supported by Kubernetes. Defaults to "ext4" |
| ceph_rbd_imageFormat | Ceph RBD image format, "1" or "2". Defaults to "1" |
| ceph_rbd_imageFeatures | Optional; use only if ceph_rbd_imageFormat is set to "2". Currently, layering is the only supported feature. Defaults to "", with no features turned on |
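
Putting these parameters together, a minimal conf/vars.yml excerpt for enabling Ceph RBD might look like the following sketch; the monitor address and keys are placeholders to replace with your own values:

ceph_rbd_enabled: true
ceph_rbd_storage_class: rbd
ceph_rbd_is_default_class: false
ceph_rbd_monitors: 192.168.0.20:6789          # placeholder; your Ceph monitor address(es), comma delimited
ceph_rbd_admin_id: admin
ceph_rbd_admin_secret: <key of client.admin>  # returned by `ceph auth get-key client.admin`
ceph_rbd_pool: rbd
ceph_rbd_user_id: admin
ceph_rbd_user_secret: <key of the user ID>
ceph_rbd_fsType: ext4
ceph_rbd_imageFormat: "2"
ceph_rbd_imageFeatures: layering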

Attention:

The Ceph secrets referenced by the storage class, i.e. the values for ceph_rbd_admin_secret and ceph_rbd_user_secret, can be retrieved with the following command on the Ceph storage server.

$ ceph auth get-key client.admin

GlusterFS

GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. Assuming you have prepared GlusterFS storage servers in advance, you can refer to the following definitions; see the Kubernetes Documentation for more details.

| GlusterFS (requires a GlusterFS cluster managed by Heketi) | Description |
| --- | --- |
| glusterfs_provisioner_enabled | Determines whether to use GlusterFS as the persistent storage; can be set to true or false. Defaults to false |
| glusterfs_provisioner_storage_class | Storage class name |
| glusterfs_is_default_class | Determines whether to set GlusterFS as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default |
| glusterfs_provisioner_restauthenabled | Boolean that enables authentication to the Gluster REST service |
| glusterfs_provisioner_resturl | Gluster REST service/Heketi service URL that provisions Gluster volumes on demand. The general format is IPaddress:Port; this is a mandatory parameter for the GlusterFS dynamic provisioner |
| glusterfs_provisioner_clusterid | Optional. The ID of the cluster that Heketi will use when provisioning the volume, for example 630372ccdc720a92c681fb928f27b53f. It can also be a list of cluster IDs |
| glusterfs_provisioner_restuser | Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool |
| glusterfs_provisioner_secretName | Optional. Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. The installation package will automatically create this secret in the kube-system namespace |
| glusterfs_provisioner_gidMin | The minimum value of the GID range for the storage class |
| glusterfs_provisioner_gidMax | The maximum value of the GID range for the storage class |
| glusterfs_provisioner_volumetype | Optional. The volume type and its parameters, for example a replica volume: volumetype: replicate:3 |
| jwt_admin_key | The "jwt.admin.key" field of /etc/heketi/heketi.json on the Heketi server |
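
A minimal conf/vars.yml excerpt for enabling GlusterFS might look like the following sketch; the Heketi URL, cluster ID, GID range, and key are placeholders to replace with your own values:

glusterfs_provisioner_enabled: true
glusterfs_provisioner_storage_class: glusterfs
glusterfs_is_default_class: false
glusterfs_provisioner_restauthenabled: true
glusterfs_provisioner_resturl: http://192.168.0.4:8080              # placeholder; your Heketi service URL
glusterfs_provisioner_clusterid: 630372ccdc720a92c681fb928f27b53f   # placeholder; retrieve yours with heketi-cli (see below)
glusterfs_provisioner_restuser: admin
glusterfs_provisioner_gidMin: 40000   # placeholder GID range
glusterfs_provisioner_gidMax: 50000
glusterfs_provisioner_volumetype: replicate:3
jwt_admin_key: <jwt.admin.key>        # from /etc/heketi/heketi.json on the Heketi server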

Attention:

The value of glusterfs_provisioner_clusterid can be retrieved from the GlusterFS server by executing the following commands:

$ export HEKETI_CLI_SERVER=http://localhost:8080

$ heketi-cli cluster list

NFS

An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pods. NFS can be configured in conf/vars.yml, assuming you have prepared NFS storage servers in advance. Alternatively, you can use QingCloud vNAS as the NFS server.

| NFS | Description |
| --- | --- |
| nfs_client_enable | Determines whether to use NFS as the persistent storage; can be set to true or false. Defaults to false |
| nfs_client_is_default_class | Determines whether to set NFS as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default |
| nfs_server | The NFS server address, either IP or hostname |
| nfs_path | The NFS shared directory, i.e. the file directory shared on the server; see the Kubernetes Documentation |
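
A minimal conf/vars.yml excerpt for enabling NFS might look like the following sketch; the server address and exported path are placeholders to replace with your own values:

nfs_client_enable: true
nfs_client_is_default_class: false
nfs_server: 192.168.0.27    # placeholder; IP or hostname of your NFS server
nfs_path: /mnt/kubernetes   # placeholder; directory exported by the server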

QingCloud Block Storage

KubeSphere supports QingCloud Block Storage as the platform storage service. If you would like to use dynamic provisioning to create volumes, QingCloud Block Storage is recommended. KubeSphere integrates QingCloud-CSI, which lets you use the different performance tiers of block storage on the QingCloud platform.

After the plugin installation completes, users can create volumes based on several disk types, such as super-high-performance, high-performance, and high-capacity disks, with the ReadWriteOnce access mode, and mount those volumes on workloads.

The parameters for configuring the QingCloud-CSI plugin are described below.

| QingCloud-CSI | Description |
| --- | --- |
| qingcloud_csi_enabled | Determines whether to use QingCloud-CSI as the persistent storage volume; can be set to true or false. Defaults to false |
| qingcloud_csi_is_default_class | Determines whether to set QingCloud-CSI as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default |
| qingcloud_access_key_id, qingcloud_secret_access_key | Obtained from the QingCloud Cloud Platform console |
| qingcloud_zone | The zone should be the same as the zone where the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes in this zone. For example, zone can be set to sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1), or ap2a (Asia Pacific 2-A) |
| type | The type of volume on the QingCloud IaaS platform. On the QingCloud public cloud platform, 0 represents a high-performance volume, 3 represents a super-high-performance volume, and 1 or 2 represents a high-capacity volume depending on the cluster's zone; see the QingCloud Documentation |
| maxSize, minSize | Limit the range of volume sizes, in GiB |
| stepSize | Set the increment of volume sizes, in GiB |
| fsType | The file system of the storage volume; supports ext3, ext4, and xfs. Defaults to ext4 |
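
A minimal conf/vars.yml excerpt for enabling QingCloud-CSI might look like the following sketch; the keys, zone, and size limits are placeholders to replace with your own values:

qingcloud_csi_enabled: true
qingcloud_csi_is_default_class: false
qingcloud_access_key_id: <ACCESS_KEY_ID>          # placeholder; from the QingCloud console
qingcloud_secret_access_key: <SECRET_ACCESS_KEY>  # placeholder; from the QingCloud console
qingcloud_zone: pek3a   # must match the zone of your Kubernetes cluster
type: 0                 # 0 = high-performance volume on the public cloud platform
maxSize: 500            # placeholder size limits, in GiB
minSize: 10
stepSize: 10
fsType: ext4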

QingStor NeonSAN

The NeonSAN-CSI plugin supports the enterprise-level distributed storage QingStor NeonSAN as the platform storage service. If you have prepared the NeonSAN server, you can configure the NeonSAN-CSI plugin in conf/vars.yml to connect to its storage server; see the NeonSAN-CSI Reference.

| NeonSAN | Description |
| --- | --- |
| neonsan_csi_enabled | Determines whether to use NeonSAN as the persistent storage; can be set to true or false. Defaults to false |
| neonsan_csi_is_default_class | Determines whether to set NeonSAN-CSI as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default |
| neonsan_csi_protocol | Transport protocol, such as TCP or RDMA. This option must be set |
| neonsan_server_address | NeonSAN server address |
| neonsan_cluster_name | NeonSAN server cluster name |
| neonsan_server_pool | A comma-separated list of pools that the plugin will manage. This option must be set; the default value is kube |
| neonsan_server_replicas | NeonSAN image replica count. Defaults to 1 |
| neonsan_server_stepSize | Set the increment of volume sizes, in GiB. Defaults to 1 |
| neonsan_server_fsType | The file system to use for the volume. Defaults to ext4 |
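
A minimal conf/vars.yml excerpt for enabling NeonSAN-CSI might look like the following sketch; the server address and cluster name are placeholders to replace with your own values:

neonsan_csi_enabled: true
neonsan_csi_is_default_class: false
neonsan_csi_protocol: TCP
neonsan_server_address: 192.168.0.100   # placeholder; your NeonSAN server address
neonsan_cluster_name: cluster0          # placeholder; your NeonSAN cluster name
neonsan_server_pool: kube
neonsan_server_replicas: 1
neonsan_server_stepSize: 1
neonsan_server_fsType: ext4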

Local Volume (All-in-One installation test only)

A Local volume represents a mounted local storage device such as a disk, partition, or directory. Local volumes can only be used as statically created PersistentVolumes; dynamic provisioning is not yet supported. Therefore, Local volumes are recommended only for All-in-One installation tests, where they help you quickly and easily install KubeSphere on a single node. The relevant conf/vars.yml parameters are described in the following table.

| Local volume | Description |
| --- | --- |
| local_volume_provisioner_enabled | Determines whether to use Local as the persistent storage; can be set to true or false. Defaults to true |
| local_volume_provisioner_storage_class | Storage class name. Default value: local |
| local_volume_is_default_class | Determines whether to set Local as the default storage class; can be set to true or false. Defaults to true. Note: when there are multiple storage classes in the system, only one can be set as the default |