FlexSDS releases high-performance, optimized SDS stack for hybrid and all-flash arrays

FlexSDS, a globally recognized name in high-performance software-defined storage, has announced the 2018 release of its software stack for flash arrays.

FlexSDS is built around its own core service: a high-performance, lock-less scheduler (similar to an OS) that manages all resources in the server, including CPU, memory, storage, PCI devices, and NUMA. It runs in polling mode and makes full use of the hardware to deliver maximum storage performance.

FlexSDS includes a user-mode NVMe driver (kernel bypass) that supports directly attached disks, RAID pools, and SDS pools. An SDS pool allows multiple volumes to be created and exposed as high-availability, snapshot-enabled iSCSI, iSER, or NVMe-oF targets.

 

Key features:

100% SDS pool: manages all disks as pooled storage and can create arbitrary, dynamic block volumes with unlimited zero-copy snapshots.
Polling-mode server pool, listening on multiple NICs and ports.
Protocol support: iSCSI (TCP), iSER (iSCSI Extensions for RDMA), and NVMe-oF (NVMe over Fabrics).
High availability and remote mirroring: all interfaces (iSCSI, iSER, and NVMe-oF) support HA.
Kernel bypass: complete kernel bypass and zero data copies in the I/O path (except for high-latency disk support).
Non-SDS pool: provides directly attached, RAID (0, 1, 5) mode storage pools.
Legacy device support: SATA/SAS HDDs and SSDs.
Data safety: strong consistency; I/Os complete only after data is safely placed on disk.
Easy management: an easy-to-use, all-in-one, centralized web management platform.

There are almost no restrictions on where FlexSDS can run: it works on all-flash arrays as well as on traditional SATA/SAS arrays, and even one or two NVMe drives are enough to benefit from kernel-bypass performance.

FlexSDS software-defined storage is now available to end users and OEM partners worldwide.

How to install FlexSDS

Installing FlexSDS Scale-out Software is easy; please refer to the FlexSDS user's manual for setup and deployment.

Using the command-line iSER demo to test iSER target performance

The package includes a few demo utilities that create a single pooled storage to demonstrate storage features and performance. This lets you preview the features you are interested in within minutes.

To start an iSER target, run the following command:

/opt/flexsds/bin/flexsds -d iser spool 2 nvme://0000:04:00.0 nvme://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx5_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx5_0
copies: 2
create name space: iqn.2016-12.com.flexsds:testpool.volume0…done.
iSCSI over RDMA service is started.
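
Once the demo target is running, an initiator can log in from a client machine with standard open-iscsi tools. A minimal sketch, assuming the target host's IP is 192.168.80.101 (a hypothetical address here) and using the IQN from the sample output above:

```shell
# Discover targets exported by the FlexSDS host (hypothetical IP):
iscsiadm -m discovery -t sendtargets -p 192.168.80.101

# Switch the discovered node's transport from tcp to iser before login:
iscsiadm -m node -T iqn.2016-12.com.flexsds:testpool.volume0 \
         -o update -n iface.transport_name -v iser

# Log in over iSER; the volume then appears as a local SCSI disk:
iscsiadm -m node -T iqn.2016-12.com.flexsds:testpool.volume0 \
         -p 192.168.80.101 --login
```

After login, `lsblk` on the initiator should show the new block device, which can then be benchmarked with your preferred I/O tool.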


Using the command-line NVMe-oF demo to test off-loaded NVMe-oF target performance

The package includes a few demo utilities that create a single pooled storage to demonstrate storage features and performance. This lets you preview your favorite features in minutes.

To start an NVMe-oF target, run the following command:

/opt/flexsds/bin/flexsds -d nvmf spool 3 nvme://0000:04:00.0 nvme://0000:05:00.0 nvme://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx5_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx5_0
copies: 2
create name space: nqn.2016-12.com.flexsds:all-flash-pool.volume0…done.
NVMe over Fabric service is started.


Configure NVMe-oF device with Multipath

Requirements:

Linux kernel 4.8 or newer.

Packages: device-mapper-multipath and nvme-cli

Install them with:

#yum install device-mapper-multipath nvme-cli
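
After installing the packages, the multipath daemon must be enabled before it will claim paths. A sketch for RHEL/CentOS systems (`mpathconf` ships with device-mapper-multipath; adapt for other distributions):

```shell
# Generate a default /etc/multipath.conf and enable multipathd:
mpathconf --enable --with_multipathd y
systemctl enable --now multipathd

# Once NVMe-oF paths are connected, inspect the multipath topology:
multipath -ll
```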


Using Linux nvme-cli to connect to FlexSDS NVMe-oF targets

Requirements

To use the Linux NVMe over Fabrics client, you need Linux with kernel 4.8 or newer.

Install the nvme-cli package on the client machine:

#yum install nvme-cli

or on Debian:

#apt-get install nvme-cli

Load the Kernel Module

#modprobe nvme-rdma

Discover NVMe-oF subsystems

nvme discover -t rdma -a 192.168.80.101 -s 4420
Discovery Log Number of Records 1, Generation counter 1
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  1
trsvcid: 4420
subnqn:  nqn.2016-12.com.flexsds:all-flash-pool.test_vol0
traddr:  192.168.80.101
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000
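
The discovery log can also be parsed in a script to build the connect command automatically. A minimal sketch using a saved copy of the output above (the file path and the trimmed field list are illustrative):

```shell
# Save a trimmed discovery log entry for parsing (illustrative sample data,
# matching the fields printed by `nvme discover` above):
cat > /tmp/discovery.log <<'EOF'
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
subnqn:  nqn.2016-12.com.flexsds:all-flash-pool.test_vol0
traddr:  192.168.80.101
trsvcid: 4420
EOF

# Pull out the subsystem NQN and transport address:
SUBNQN=$(awk '/^subnqn:/ {print $2}' /tmp/discovery.log)
TRADDR=$(awk '/^traddr:/ {print $2}' /tmp/discovery.log)

# Print the resulting connect command (drop `echo` to run it for real):
echo "nvme connect -t rdma -n $SUBNQN -a $TRADDR -s 4420"
```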

Connect to NVMe-oF subsystems

nvme connect -t rdma -n nqn.2016-12.com.flexsds:all-flash-pool.test_vol0 -a 192.168.80.101 -s 4420

List NVMe device info:

#nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 FlexSDS Controller 1000 GB / 1000 GB 512 B + 0 B A34CCD834CD3544
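
Once the namespace is visible, it behaves like a local NVMe disk. A sketch of putting it to use, assuming it appeared as /dev/nvme0n1 as in the listing above (the filesystem choice and mount point are illustrative):

```shell
# Format and mount the remote namespace like any local block device:
mkfs.xfs /dev/nvme0n1
mkdir -p /mnt/flexsds
mount /dev/nvme0n1 /mnt/flexsds
df -h /mnt/flexsds
```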

Disconnect NVMe-oF subsystems

To disconnect from the target, run the nvme disconnect command:

#nvme disconnect -d /dev/nvme0n1
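
nvme-cli can also disconnect by subsystem NQN rather than by device node, which tears down every controller belonging to that subsystem (useful with multipath, where one subsystem may have several device nodes). Using the NQN from the discovery output above:

```shell
# Disconnect all controllers for the subsystem in one step:
nvme disconnect -n nqn.2016-12.com.flexsds:all-flash-pool.test_vol0
```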