FlexSDS FAQs

The product runs in polling mode and will use one or more CPU cores at 100% utilization. This is by design and is not an issue.

Each Server Pool uses one dedicated CPU core. If you want to run FlexSDS in a virtual machine, the virtual machine must have two or more virtual CPU cores.

As long as you do not change the main hardware components such as the CPU, boot disk, and NICs, the license remains valid on your machine; otherwise, you can use the existing license to install (activate) again.

If you want to move to a whole new machine, simply remove the existing license and then install it on the other machine. Just do not run the same license on two or more machines at the same time for an extended period, and there will be no problem.

Installing the product is very easy: run the installer in the product package, which takes only about a minute. Please refer to the installation guide in the Knowledge Base on the product page.

First of all, copies means how many copies of each data block are kept; copies = “N-way replication”, and 2 or more copies provide high availability.

To support high availability, users may set copies=2. The product is true software-defined storage and uses dynamic blocks, unlike RAID1, which simply replicates a volume between nodes. After a node fails, recovery starts automatically to fix up out-of-date objects.

copies=1 means there is only one copy of each data block, so each block is placed on either node1 or node2.

If you set copies=2 when creating a storage pool, data will be replicated between two or more nodes, using up to twice the space of the volume capacity.
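
As an illustration only, creating a pool with two copies might look like the command below; the subcommand and option names here are assumptions rather than documented syntax, so please check the FlexSDS documentation or use the web management platform for the exact form:

flexsds pool create --name pool1 --copies 2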

The recommended additional memory used by a FlexSDS storage node is:

  1. RDMA enabled: 16GB/Core
  2. TCP only or witness: 8GB/Core

The CPU cores can be specified in the configuration file located at /etc/flexsds.conf.

“Additional” means that, beyond the memory mentioned above, the remaining memory should be sufficient for running the OS and other applications.

For TCP mode, there are no additional comments, because for FlexSDS a VM is no different from a physical machine.

For RDMA mode, users might use PCI passthrough and SR-IOV, depending on hardware limitations. Typically a NIC supports 8 or more VFs on each port, and the user can pass one VF through into the VM to let the VM use the RDMA network.
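
As a rough sketch on the Linux host side (the interface name ens1f0 and the VF count are placeholders, and the exact steps depend on your NIC and hypervisor), SR-IOV VFs can be created through sysfs and then passed through to the VM:

echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs    # create 8 VFs on the RDMA-capable NIC
lspci | grep -i "virtual function"                    # list the new VFs, one of which can be passed through to the VM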

First of all, 2-node HA has the lowest TCO, but if you want to deploy 3 or more storage nodes, we suggest installing them in true clustered mode rather than 2-node high availability mode.

FlexSDS is designed as distributed scale-out storage, unlike traditional SAN storage: it can create a cluster of 1, 2, 3 or more nodes. A 1-node cluster only supports in-node data redundancy, while 3 or more nodes work in true software-defined, scale-out mode without any restriction. The 2-node HA mode works like the 3-node mode as well; the only difference is that the third node can have no storage and less CPU resource. The third node is still necessary, because otherwise the two nodes cannot support failover: neither node can tell whether the other node has failed or a split-brain has occurred. Split-brain is very dangerous, and in a true 2-node mode there is no way to avoid split-brain with 100% certainty in a software-only solution.

Therefore a witness node is necessary for building the 2-node HA mode (internally we use Raft for consensus, and 2 nodes cannot reach consensus if 1 is lost). However, there are some restrictions: because metadata needs two copies with strong consistency, if one node fails the other node can continue to offer I/O service, but thin-provisioned volumes are not recommended in this situation.

Yes. Being scale-out storage, you can create the whole cluster at one time or dynamically add new nodes to expand storage capacity in the future. Users can flexibly expand a cluster from 1 node, 2 nodes, or 3+ nodes to more.

The witness node can be any machine, physical or virtual, that can connect to the storage network. The VM can be placed inside a business server such as ESX/ESXi, and it can use RDMA as well: since both Linux and ESX support the SR-IOV function, the user can PCI-passthrough a VF into the VM so that the VM can use the RDMA network, without much change to the existing network.

For a cluster of 2 or more storage nodes, one node going down will not affect the other nodes, which continue to work. After the node comes back up, recovery starts automatically and no human intervention is required.

Regardless of whether a node is a witness node or a normal node, the number of up nodes must be more than 1/2 of the total nodes, and the number of down nodes must be less than the number of data copies (that is, at most copies - 1 nodes may be down).
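
For example, in a 3-node cluster with copies=2, I/O service continues as long as at least 2 of the 3 nodes are up and at most 1 node is down; the same holds for a 2-node-plus-witness setup, where losing either storage node still leaves 2 of the 3 nodes up.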

The synchronous replication feature is used for online volume backup: you can create synchronous replication from a specified volume to a remote storage.

Yes, that is supported. FlexSDS was designed as scale-out clustered storage and provides failover and auto-recovery functions by itself, so you do not have to create replication to build a failover solution; the recommended cluster size is three nodes.

Two-node HA mode is supported as well, but this mode still needs a third node as a witness, because there is no method to 100% avoid split-brain in a “true” two-node mode. The witness node can have less compute resource, less storage, and so on, but it is necessary to provide the arbitration function that prevents split-brain, and volumes need to be RAW (fully provisioned) in this mode.

After adding a disk by issuing the following command, or adding an NVMe disk in the web management platform:

flexsds backend add --disk nvme://0000:33:00.0

the disk is gone from the system and I can no longer see it in the lsblk output.

That is exactly as designed; we support several types of backend.

For example, when the user uses /dev/nvme0n1, the default kernel-mode driver is used and the disk is still visible in the system. When the user uses nvme://0000:33:00.0, the disk driver is switched to user mode, which means the disk is dedicatedly managed by FlexSDS, and the system and other applications can no longer “see” the disk.
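
To check which driver a disk is currently bound to, you can query its PCI address with a standard Linux command (this is a generic check, not a FlexSDS command; user-space storage stacks typically bind the device to a driver such as vfio-pci or uio_pci_generic):

lspci -k -s 0000:33:00.0    # the "Kernel driver in use" line shows whether the kernel nvme driver or a user-mode driver owns the disk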

We provide command-line and web-based management tools to convert a disk from user mode to kernel mode, or from kernel mode to user mode, on the condition that the software is installed and the disk has not been added into the FlexSDS storage cluster.

In addition, the user can issue the following commands to reset a specified NVMe disk back into kernel mode:

echo 1 >/sys/bus/pci/devices/0000\:33\:00.0/remove    # detach the device from its current (user-mode) driver and remove it from the PCI bus

echo 1 >/sys/bus/pci/rescan    # rescan the PCI bus so the kernel NVMe driver claims the disk again

Metadata is the “root” data and is therefore very important to the product; if the metadata is lost, all data may be lost. The most important information in the metadata includes:

  1. Storage pools
  2. Disks

Users need to consider the following conditions:

  1. Single-node mode: metadata is stored on the system drive, so if the system disk is damaged, the metadata may be lost. We strongly recommend creating RAID1 for the system disk; otherwise, users need to back up the folder /opt/flexsds to a safe place after every pool creation or disk addition (see the example after this list).
  2. HA mode or multi-node cluster mode: users do not need to back up metadata, since the metadata is replicated between nodes and every node stores a copy. We also recommend creating RAID1 on the system disk, but it is not mandatory.
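
For the single-node case, a simple way to back up the metadata folder is an archive copy to a safe location (the destination path below is only a placeholder):

tar czf /backup/flexsds-metadata-$(date +%F).tar.gz /opt/flexsds    # archive the metadata folder after creating pools or adding disks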

Whether the user encounters difficulties or problems during the installation process, or even needs us to install FlexSDS for them, the user can provide an SSH channel to us by email and tell us the disks and network to be used and the type of pool and volume to be created, so that we can help complete the initial installation remotely.

For the “no space” issue while creating a pool, the error indicates that there are not enough disks or not enough space to create the storage pool.

In single-node mode, the disk count must be greater than or equal to the storage pool's data copies, while in multi-node cluster mode, the node count must be greater than or equal to the storage pool's data copies (note: if the node count equals the number of copies, auto-recovery is not supported).
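
For example, with copies=2, a single-node pool needs at least 2 disks; a 2-node cluster can create the pool but will not support auto-recovery after a node failure, while a 3-node cluster can both tolerate a node failure and rebuild the missing copies automatically.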

Yes, FlexSDS provides high availability, auto-recovery, and scale-out storage for VMware ESXi.

To use FlexSDS in VMware ESXi, you need to follow these steps:

  1. Install FlexSDS storage software on a separate server or cluster of servers.
  2. Add disks to FlexSDS as backends.
  3. Create and configure storage pools and volumes in FlexSDS.
  4. Create volumes and export them via iSCSI, iSER, or NVMf protocols.
  5. Connect the ESXi host to the FlexSDS storage server using the iSCSI, iSER, or NVMf protocol and attach the FlexSDS volumes to it as datastores (see the software-iSCSI example after the note below).
  6. Create virtual machines in ESXi on the attached FlexSDS datastores.
  7. Monitor and manage the storage and virtual machines using the vSphere client or other management tools.
Note: The exact steps may vary depending on your specific ESXi version and the features you want to use with FlexSDS. It’s recommended to refer to the FlexSDS white paper “FlexSDS Build HA and Scale-out Storage for VMware ESXi” for detailed instructions.
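
For step 5, a minimal sketch of connecting an ESXi host over software iSCSI from the ESXi shell (the adapter name vmhba65 and the target address 192.168.10.20 are placeholders for your environment):

esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.20:3260
esxcli storage core adapter rescan --adapter=vmhba65

After the rescan, the exported FlexSDS volumes should appear as new devices that can be formatted as VMFS datastores.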

Yes, you can use FlexSDS in your system if the IOMMU is enabled in ‘pt’ mode. To do this, add the arguments ‘intel_iommu=on iommu=pt’ to the kernel command line, or use the following commands to enable the IOMMU in ‘pt’ mode:

#grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
#reboot
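
After rebooting, you can confirm that the arguments took effect by checking the kernel command line:

cat /proc/cmdline    # should now include intel_iommu=on iommu=pt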

NVMStack is also a part of our portfolio. In the near future, NVMStack will support the full HCI stack, encompassing both storage and compute platforms. However, please note that its storage capabilities have been integrated into FlexSDS, and FlexSDS brings many more features.

The FlexSDS solution also supports a two-node high availability (HA) feature, complemented by its scale-out functionality. While you may begin with a two-node setup, FlexSDS allows you to seamlessly expand your storage capacity by incorporating additional nodes. This scale-out capability ensures uninterrupted service as your storage needs grow.

Regarding RAID functionalities, FlexSDS offers Erasure Coding (EC) features. With options like 3+1 or 4+2 configurations, EC represents a more advanced alternative to traditional RAID setups. It’s worth noting that RAID is essentially a specific subset of EC. Additionally, FlexSDS incorporates a journaling feature designed to safeguard against data corruption during unforeseen events like power outages.