Company:Dell Technologies PowerFlex

Short description: Software-defined storage product
Dell Technologies Inc.
Type: Public
Traded as: NYSE: DELL (Class C); Russell 1000 component
Industry: Computer hardware, software, cloud computing, data storage, information security, consulting
Predecessors: Dell, EMC Corporation
Founded: September 7, 2016, as a merger of EMC Corporation and Dell Inc.
Founder: Michael Dell
Headquarters: Round Rock, Texas, U.S.
Area served: Worldwide
Key people: Michael Dell (Chairman and CEO)
Revenue: US$94.224 billion (2021)
Operating income: US$5.144 billion (2021)
Net income: US$3.505 billion (2021)
Total assets: US$123.415 billion (2021)
Total equity: US$7.553 billion (2021)
Number of employees: 158,000 (2021)
Website: dell.com
Footnotes / references: [1]

Dell Technologies PowerFlex (previously known as ScaleIO and VxFlex OS) is a commercial software-defined storage product from Dell Technologies that creates a server-based storage area network (SAN) from the local storage of x86 servers. It converts this direct-attached storage into shared block storage that runs over an IP-based network.

PowerFlex can scale from three compute/storage nodes to over 2,000 nodes that can drive up to 240 million IOPS of performance.[citation needed] PowerFlex is bundled with Dell commodity computing servers (officially called VxFlex Ready Nodes, PowerFlex appliance, and PowerFlex rack).

PowerFlex can be deployed as storage only or as a converged infrastructure combining storage, compute, and networking resources into a single building block. The capacity and performance of all available resources are aggregated and made available to every participating PowerFlex server and application. Storage tiers can be created from media and drive types whose performance or capacity characteristics best suit the needs of an application.

History

ScaleIO was founded in 2011 by Boaz Palgi, Erez Webman, Lior Bahat, Eran Borovik, Erez Ungar, and Dvir Koren in Israel.[2] The software was designed for high performance and large systems.[3]

A product was announced in November 2012.[4]

EMC Corporation bought ScaleIO in June 2013 for about $200 million, only about six months after the company emerged from stealth mode.[5][2] EMC began promoting ScaleIO in 2014 and 2015, marketing it in competition with EMC’s own data storage arrays. Also in 2015, EMC introduced a model of its VCE converged infrastructure hardware that supported ScaleIO storage.

At its 2015 trade show, EMC announced that ScaleIO would be made freely available to developers for testing. By May 2015, developers could download the ScaleIO software.

In September 2015, EMC announced the availability of the previously software-only ScaleIO pre-bundled on EMC commodity hardware, called EMC ScaleIO Node.

In May 2017, Dell EMC announced ScaleIO.Next, featuring inline compression, thin provisioning, and flash-based snapshots. The release also added enhanced snapshot tooling, full support for VMware Virtual Volumes (VVols), and volume migration for deployments that want to move low-priority data to lower-cost media.

In March 2018, ScaleIO was rebranded as VxFlex OS and continued to serve as the software-defined storage layer for VxFlex Ready Nodes, the VxFlex appliance, and the VxFlex integrated system (VxRack FLEX).

In April 2019, VxFlex OS 3.0 (ScaleIO 3.0) was released.

In June 2020, VxFlex OS was rebranded as PowerFlex[6] and version 3.5 was released, adding native asynchronous replication, an HTML5 web UI, secure snapshots, and other core improvements.

In June 2021, PowerFlex 3.6 was launched, adding replication for HCI with VMware SRM support, a 15-second RPO, CloudIQ support, Oracle Linux Virtualization support, network resiliency enhancements, and support for up to 2,000 SDCs.

In November 2021, PowerFlex 3.6.0.2 was released alongside PowerFlex Manager 3.8, which added support for PowerFlex rack, appliance, and custom 15G Dell server nodes, new networking options for additional topologies, and new container observability capabilities through the Container Storage Modules (CSM).

In May 2022, PowerFlex 4.0 was announced at Dell Technologies World. New features include a unified PowerFlex Manager that combines the PowerFlex Gateway, PowerFlex Manager, and PowerFlex Presentation Server into one modernized tool. Other notable additions are SD-NAS, which provides file (NFS/CIFS) access, and NVMe/TCP support, which provides a non-proprietary way to consume PowerFlex storage.

Architecture

PowerFlex uses storage and compute resources from commodity x86 hardware. As a performance-focused product, it typically uses solid-state media (SSD, NVMe, Optane) to create storage pools with different performance capabilities. Other media, such as HDDs, PCIe flash cards, and even files, can also be used to form a storage pool that provides shared block storage to applications.
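As an illustration of how pools are organized, the toy Python sketch below groups a node's local devices into pools by media type. The device paths, sizes, and pool labels are hypothetical, and the grouping logic is purely illustrative rather than PowerFlex's actual implementation.

```python
# Conceptual sketch (not PowerFlex code): group a node's local devices into
# storage pools by media type, as described above. Device names are hypothetical.
from collections import defaultdict

local_devices = [
    {"path": "/dev/nvme0n1", "media": "NVMe", "size_gib": 3200},
    {"path": "/dev/nvme1n1", "media": "NVMe", "size_gib": 3200},
    {"path": "/dev/sdb",     "media": "SSD",  "size_gib": 1920},
    {"path": "/dev/sdc",     "media": "HDD",  "size_gib": 8000},
]

pools = defaultdict(list)
for dev in local_devices:
    # A pool should contain devices with similar performance characteristics.
    pools[f"{dev['media'].lower()}-pool"].append(dev)

for name, devices in pools.items():
    total_tib = sum(d["size_gib"] for d in devices) / 1024
    print(f"{name}: {len(devices)} device(s), {total_tib:.1f} TiB raw")
```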

Each node added to the storage cluster linearly increases capacity and performance, up to a maximum of 512 storage nodes / 16 PiB. PowerFlex also includes enterprise-grade data protection features such as policy-driven snapshots and asynchronous replication with 15-second RPOs. Other notable features include QoS, thin provisioning, inline compression, and HTML5-based management (along with a CLI and REST APIs).
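The REST API mentioned above can be scripted. The sketch below is a minimal, unverified example that assumes the ScaleIO-era Gateway endpoints (/api/login and /api/types/StoragePool/instances) and placeholder host and credentials; consult the API guide for the exact endpoints and response fields in a given release.

```python
# Hedged sketch: query storage pools through the PowerFlex Gateway REST API.
# Endpoint paths follow the ScaleIO-era API and may differ by release; the
# host, credentials, and TLS handling below are placeholders.
import requests

GATEWAY = "https://powerflex-gw.example.local"   # hypothetical address
USER, PASSWORD = "admin", "changeme"             # placeholders

# 1. Log in: the gateway returns a session token used as the password afterwards.
token = requests.get(f"{GATEWAY}/api/login",
                     auth=(USER, PASSWORD), verify=False).json()

# 2. List storage pools registered in the cluster.
pools = requests.get(f"{GATEWAY}/api/types/StoragePool/instances",
                     auth=(USER, token), verify=False).json()
for pool in pools:
    print(pool["id"], pool["name"])
```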

The PowerFlex core software can run on multiple hardware platforms, including public cloud systems such as Amazon Web Services (AWS). Lifecycle management, however, is most complete on Dell PowerFlex nodes, where the software can fully automate provisioning and patching up to the OS/hypervisor level.

The PowerFlex architecture consists of multiple components that are installed on application hosts. Hosts that contribute local storage to the cluster run the SDS (storage data server) component, while hosts that consume storage run the lightweight SDC (storage data client) device driver. Starting with version 4.0, PowerFlex also supports NVMe/TCP, making the SDC optional. The SDS and SDC components can be installed together on the same hosts for a "hyper-converged" (HCI) design, or separately for a "two-layer" architecture.
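The toy sketch below illustrates the difference between the two layouts by assigning SDS/SDC roles to a set of hypothetical hosts; it is a conceptual model of the topologies described above, not deployment tooling.

```python
# Conceptual sketch (not deployment code): the same node roles can be laid out
# as hyper-converged (SDS + SDC on every host) or two-layer (dedicated storage
# hosts and dedicated compute hosts). Host names are hypothetical.
def assign_roles(hosts, layout):
    if layout == "hci":
        return {h: {"SDS", "SDC"} for h in hosts}        # every host stores and consumes
    if layout == "two-layer":
        half = len(hosts) // 2
        storage, compute = hosts[:half], hosts[half:]
        return {**{h: {"SDS"} for h in storage},          # dedicated storage layer
                **{h: {"SDC"} for h in compute}}          # dedicated compute layer
    raise ValueError(layout)

hosts = [f"node{i}" for i in range(1, 7)]
for layout in ("hci", "two-layer"):
    print(layout, assign_roles(hosts, layout))
```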

Clients that use the SDC component (up to 2,048 SDCs per cluster) know exactly where their data resides in the cluster, because each maintains a very small (megabytes only) in-memory metadata map. This preserves performance, since no centralized metadata lookup is needed for each I/O and storage controllers therefore never become a bottleneck. The protocol used by the SDC is proprietary and can maintain TCP connections to hundreds of SDSs, far exceeding what iSCSI can do.
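A simplified sketch of the idea: the client resolves a volume offset to the owning SDS entirely from a local map, so no metadata service sits in the I/O path. The map contents, chunk size, and addresses below are illustrative placeholders, not the real on-wire format.

```python
# Illustrative only: a client-side, in-memory map from volume offsets to the
# storage servers that own them, so routing an I/O needs no remote lookup.
CHUNK_SIZE = 1 << 20   # 1 MiB, matching the medium-grain layout described below

# volume_id -> list of SDS addresses, indexed by chunk number (toy data)
volume_map = {
    "vol-demo": ["10.0.0.11:7072", "10.0.0.12:7072", "10.0.0.13:7072"],
}

def route_io(volume_id: str, byte_offset: int) -> str:
    """Return the SDS that owns the chunk containing byte_offset."""
    chunk_index = byte_offset // CHUNK_SIZE
    owners = volume_map[volume_id]
    return owners[chunk_index % len(owners)]     # purely local lookup, no RPC

print(route_io("vol-demo", 5 * CHUNK_SIZE + 4096))   # -> 10.0.0.13:7072
```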

The SDS component forms a highly parallelized pool of storage, with each media device (SSD, NVMe, etc.) used for both reads and writes. PowerFlex all-flash systems have no caching layer, so they can take advantage of the direct I/O capabilities of the underlying media. Each volume is divided into small chunks (1 MB for the original medium-grain layout, 4 KB for the fine-grain layout, which supports inline compression). These chunks are evenly distributed across every media device to achieve the highest possible performance through parallelism. The same parallelism underpins PowerFlex's six-nines availability (99.9999%): rebuilds after a device or node failure are extremely quick, which reduces the mean time to repair and provides higher overall availability without additional copies of data. PowerFlex uses a mesh-mirror data layout, so storage efficiency is roughly 50% before compression.
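The following toy model illustrates the mesh-mirror idea: each chunk gets two copies on different nodes, so usable capacity is roughly half of raw capacity before compression. The round-robin placement below is purely illustrative and is not PowerFlex's actual placement algorithm.

```python
# Simplified illustration of a mesh-mirror layout: every 1 MB chunk is written
# twice, on two different nodes, with chunks spread across all node pairs.
from itertools import combinations, cycle

nodes = ["node1", "node2", "node3", "node4"]
pairs = cycle(list(combinations(nodes, 2)))   # every node mirrors with every other node

def place_chunks(num_chunks):
    layout = []
    for chunk_id, (primary, mirror) in zip(range(num_chunks), pairs):
        layout.append((chunk_id, primary, mirror))
    return layout

for chunk_id, primary, mirror in place_chunks(6):
    print(f"chunk {chunk_id}: copy A on {primary}, copy B on {mirror}")

raw_tib, copies = 100, 2
print(f"usable before compression: ~{raw_tib / copies} TiB of {raw_tib} TiB raw")
```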

Storage and compute resources can be added to or removed from the PowerFlex cluster as needed, with no downtime and minimal impact to application performance. The self-healing, auto-balancing capability of the PowerFlex cluster ensures that data is automatically rebuilt and rebalanced across resources when components are added, removed, or fail. Because every server and local storage device in the cluster is used in parallel to process I/O operations and protect data, system performance scales linearly as additional servers and storage devices are added to the configuration.
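The sketch below models that many-to-many rebuild behaviour in a toy way: when a node fails, each surviving copy is re-mirrored onto another node, so rebuild work fans out across the cluster rather than funnelling through a single spare. The placement and selection logic is illustrative only, under the same simplified two-copy layout as above.

```python
# Toy model of a distributed rebuild: after a node failure, every chunk that
# lost a copy is re-mirrored from its surviving node onto some other node.
import random

random.seed(0)
nodes = {f"node{i}" for i in range(1, 6)}
# chunk_id -> set of nodes holding its two copies (toy placement)
chunks = {c: set(random.sample(sorted(nodes), 2)) for c in range(10)}

def rebuild(failed):
    sources = {}
    for chunk_id, owners in chunks.items():
        if failed in owners:
            survivor = next(iter(owners - {failed}))
            target = random.choice(sorted(nodes - owners - {failed}))
            owners.discard(failed)
            owners.add(target)
            sources.setdefault(survivor, []).append((chunk_id, target))
    return sources   # work fans out across many source/target pairs in parallel

nodes.discard("node3")
for source, work in rebuild("node3").items():
    print(source, "re-mirrors", work)
```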

The PowerFlex software takes each data chunk to be written, spreads it across many nodes, and mirrors it. This makes rebuilds after disk loss very fast, because several nodes contribute their own smaller, parallel rebuild efforts to the whole. PowerFlex supports the VMware, Hyper-V, Xen, and KVM hypervisors. It also supports OpenStack, Windows, Red Hat, SLES, CentOS, and CoreOS (Docker). Any application that needs block storage can use it, including Oracle and other high-performance databases.

References