diff --git a/content/product/cluster_configuration/san_storage/purestorage.md b/content/product/cluster_configuration/san_storage/purestorage.md
new file mode 100644
index 00000000..ce049a2e
--- /dev/null
+++ b/content/product/cluster_configuration/san_storage/purestorage.md
@@ -0,0 +1,308 @@
---
title: "PureStorage FlashArray SAN Datastore (EE)"
linkTitle: "PureStorage FlashArray - Native (EE)"
weight: "6"
---

OpenNebula’s **Pure Storage FlashArray SAN Datastore** delivers production-grade, native control of FlashArray block storage, from provisioning through cleanup, directly from OpenNebula. This integration exposes the full lifecycle of FlashArray Volumes, Snapshots, and Clones, and automates host connectivity via Pure’s host/host-group model with reliable iSCSI and multipath handling. All communication with the array uses authenticated HTTPS against the FlashArray REST API. This datastore driver is part of OpenNebula Enterprise Edition (EE).

### Key Benefits

With the native Pure driver, OpenNebula users gain the performance consistency of FlashArray’s always-thin, metadata-driven architecture. Pure’s zero-copy snapshots and clones complete instantly, without increasing write amplification or introducing the snapshot-tree latency penalties typical of host-side copy-on-write systems. Under mixed 4k/8k and fsync-heavy workloads, FlashArray maintains flat latency profiles even with deep snapshot histories, while LVM-thin commonly degrades early as CoW pressure increases. The result is higher, steadier IOPS and predictable latency for virtual machine disks at scale.

| Area | Benefit | Description |
|------|----------|--------------|
| **Automation** | Full lifecycle control | End-to-end creation, cloning, resizing, renaming, and deletion of FlashArray volumes directly from OpenNebula. |
| **Efficiency** | Instant, thin snapshots and clones | Pure’s metadata-only snapshots allow immediate, zero-copy cloning for persistent and non-persistent VMs alike. |
| **Performance** | Latency-stable I/O path | FlashArray’s architecture keeps read/write latency flat even as snapshot chains grow; multipath iSCSI is configured automatically per host. |
| **Reliability** | Synchronous REST orchestration | Operations use FlashArray’s synchronous REST API with explicit error handling and safe sequencing for volume, snapshot, and host mapping tasks. |
| **Data Protection** | Incremental SAN-snapshot backups | Block-level incremental backups are generated by comparing FlashArray snapshot pairs via raw device attachment; no guest agents are required. |
| **Security** | HTTPS control path | All FlashArray communication uses authenticated, encrypted HTTPS REST calls. |
| **Scalability** | Simplified host-group mappings | Safe concurrent attach/detach operations across hosts using deterministic LUN IDs and a predictable multipath layout. |

### Supported Pure Storage Native Functionality

| FlashArray Feature | Supported | Notes |
|----------------|------------|-------|
| **Zero-Copy Volume Clone** | Yes | Pure clones are metadata-only and complete instantly (see the sketch after this table). |
| **Snapshot (manual)** | Yes | Created and deleted directly from OpenNebula; mapped 1:1 to FlashArray snapshots. |
| **Snapshot restore** | Yes | Volume overwrite-from-snapshot supported via the REST API. |
| **Snapshot retention/policies** | No | FlashArray snapshot schedules exist, but OpenNebula does not manage array-side policies; all snapshots remain under OpenNebula’s control. |
| **Incremental backups (SAN snapshot diff)** | Yes | Uses the FlashArray Volume Diff API to gather block differences, then copies the changed data. |
| **Host Management** | Yes | Hosts are automatically created and mapped as needed. |
| **Multipath I/O** | Yes | Fully orchestrated; automatic detection, resize, and removal of maps. |
| **Data encryption (at-rest)** | Yes | Supported transparently by the array (always-on AES-XTS); not managed by OpenNebula. |
| **Asynchronous replication (Protection Groups)** | No (planned) | Not yet supported; may be added in a future release. |
| **QoS limits** | No | Not currently exposed through the datastore driver. |
| **ActiveCluster / stretched pods** | No | Supported by FlashArray, but not orchestrated by OpenNebula. |
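
The zero-copy clone and snapshot operations referenced above are native array primitives rather than something the driver emulates. As a rough illustration only, the sketch below shows the equivalent manual steps from the Purity//FA CLI (run over SSH on the array); it is not required for the OpenNebula integration, the volume names are hypothetical, and the exact CLI syntax may vary between Purity releases:

~~~bash
# Create a 100 GB thin-provisioned volume (hypothetical name "demo_vol")
purevol create --size 100G demo_vol

# Take a metadata-only snapshot; this completes instantly regardless of volume size
purevol snap --suffix base demo_vol

# Clone the snapshot into an independent volume without copying any data blocks
purevol copy demo_vol.base demo_vol_clone
~~~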

## Limitations and Unsupported Features

While the Pure Storage FlashArray integration delivers full VM disk lifecycle management and the core SAN operations required by OpenNebula, it is deliberately scoped to **primary datastore provisioning** via **iSCSI block devices.**
Several advanced FlashArray protection and VMware-specific capabilities are intentionally not surfaced through this driver.

{{< alert title="Important" color="warning" >}}
This integration targets block-level provisioning for OpenNebula environments.
It does not expose replication, asynchronous protection groups, or VMware-exclusive workflows (e.g., vVols or VAAI primitives).
{{< /alert >}}

| Category | Unsupported Feature | Rationale / Alternative |
|-----------|--------------------|--------------------------|
| **Replication & DR** | Protection Groups / ActiveCluster | Planned for future releases; can be managed externally on the FlashArray. |
| **NAS protocols** | NFS / SMB | The driver focuses on iSCSI block storage only. |
| **Array-managed automatic snapshots** | Automated snapshot schedules | OpenNebula requires full control over the snapshot lifecycle; array policies must remain disabled for OpenNebula-managed volumes. |
| **Storage QoS / Performance tiers** | Bandwidth / IOPS limits | FlashArray supports QoS, but these controls are not integrated into the driver. |
| **Storage efficiency analytics** | Deduplication & compression metrics | Calculated internally by FlashArray; not displayed or consumed by OpenNebula. |
| **Encryption management** | Per-volume encryption toggling | FlashArray encryption is always-on and appliance-managed; no OpenNebula API exposure. |
| **Advanced VMware features** | VAAI offloads, Storage DRS, vVols | VMware-specific APIs, not applicable to OpenNebula. |
| **Multi-instance sharing** | Shared datastore IDs | Not supported; each OpenNebula instance must own its datastore definitions uniquely. Use the `PUREFA_SUFFIX` attribute when several OpenNebula instances share one array. |
| **Synchronous replication topologies** | ActiveCluster stretch, pod failover | May be deployed at the array infrastructure level but is not orchestrated by OpenNebula. |


## PureStorage FlashArray Setup

OpenNebula provides a set of datastore and transfer manager drivers to register an existing PureStorage FlashArray SAN. These drivers use the PureStorage FlashArray API to create volumes, which are presented to Virtual Machines as disks over the iSCSI interface. Both the Image and System datastores must use the same PureStorage array and identical datastore configurations. This is because volumes are either cloned or renamed depending on the image persistence type: persistent images are renamed into the System datastore, while non-persistent images are cloned using FlashArray’s zero-copy clone.

The [PureStorage Linux documentation](https://support.purestorage.com/bundle/m_linux/page/Solutions/Linux/topics/concept/c_installing_and_configuring.html) and this [PureStorage iSCSI Setup with FlashArray Blog Post](https://blog.purestorage.com/purely-technical/iscsi-setup-with-flasharray/) may be useful during this setup.

1. **Verify iSCSI Service Connections**
   - In the FlashArray web interface: **Settings -> Network -> Connectors**
   - Ensure the iSCSI connectors are enabled and note their IP addresses.

2. **Create an API User**
   - In the FlashArray web interface: **Settings -> Access -> Users and Policies**
   - Create a new user with the Storage Admin role; this provides sufficient permissions for OpenNebula.
   - Create an API token for this user and note the API key. Leave the expiration date blank to create an API key that does not expire.


## Front-end Only Setup

The Front-end requires network access to the PureStorage FlashArray API endpoint:

1. **API Access:**
   - Ensure network connectivity to the PureStorage FlashArray API interface. The datastore will be in an ERROR state if the API is not accessible or cannot be monitored properly. A quick manual check is sketched below.
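
Before registering any datastores, it can be worth confirming from the Front-end that the array answers over HTTPS and that the token is accepted. A minimal sketch of such a check is shown below; the array address and token are placeholders, and the REST version segment (`2.9` here, matching the driver default) may need to be adjusted to what your Purity release exposes:

~~~bash
# List the REST API versions the array exposes (no authentication required)
curl -sk https://<flasharray-address>/api/api_version

# Open an authenticated session with the API token; a successful login returns
# an x-auth-token header that is used for subsequent REST calls
curl -sk -i -X POST https://<flasharray-address>/api/2.9/login \
     -H "api-token: <api-token>"
~~~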

## Front-end & Node Setup

Configure both the Front-end and the nodes with persistent iSCSI connections and multipath support, as described in the [PureStorage Linux documentation](https://support.purestorage.com/bundle/m_linux/page/Solutions/Linux/topics/concept/c_installing_and_configuring.html):

1. **iSCSI:**
   - Discover the iSCSI targets on the hosts:
   ~~~bash
   iscsiadm -m discovery -t sendtargets -p <target_IP>   # repeat for each iSCSI target IP noted on the FlashArray
   ~~~

2. **Persistent iSCSI Configuration:**
   - Set `node.startup = automatic` in `/etc/iscsi/iscsid.conf`
   - Ensure iscsid is running with `systemctl status iscsid`
   - Enable iscsid at boot with `systemctl enable iscsid`

3. **Multipath Configuration:**
   Update `/etc/multipath.conf` to something like:
   ~~~text
   defaults {
       polling_interval 10
   }

   devices {
       device {
           vendor "NVME"
           product "Pure Storage FlashArray"
           path_selector "queue-length 0"
           path_grouping_policy group_by_prio
           prio ana
           failback immediate
           fast_io_fail_tmo 10
           user_friendly_names no
           no_path_retry 0
           features 0
           dev_loss_tmo 60
       }
       device {
           vendor "PURE"
           product "FlashArray"
           path_selector "service-time 0"
           hardware_handler "1 alua"
           path_grouping_policy group_by_prio
           prio alua
           failback immediate
           path_checker tur
           fast_io_fail_tmo 10
           user_friendly_names no
           no_path_retry 0
           features 0
           dev_loss_tmo 600
       }
   }
   ~~~

## OpenNebula Configuration

Create both datastores with the PureFA (PureStorage FlashArray) drivers, which provide instant cloning and moving capabilities:

- **System Datastore**
- **Image Datastore**

### Create System Datastore

**Template required parameters:**

| Attribute | Description |
| --------------------- | ------------------------------------------------ |
| `NAME` | Datastore name |
| `TYPE` | `SYSTEM_DS` |
| `TM_MAD` | `purefa` |
| `DISK_TYPE` | `BLOCK` |
| `PUREFA_HOST` | PureStorage FlashArray API IP address |
| `PUREFA_API_TOKEN` | API Token key |
| `PUREFA_TARGET` | iSCSI Target name |

**Example template:**

~~~shell
$ cat purefa_system.ds
NAME = "purefa_system"
TYPE = "SYSTEM_DS"
DISK_TYPE = "BLOCK"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"


$ onedatastore create purefa_system.ds
ID: 101
~~~

### Create Image Datastore

**Template required parameters:**

| Attribute | Description |
| ------------------- | ----------------------------------------------- |
| `NAME` | Datastore name |
| `TYPE` | `IMAGE_DS` |
| `DS_MAD` | `purefa` |
| `TM_MAD` | `purefa` |
| `DISK_TYPE` | `BLOCK` |
| `PUREFA_HOST` | PureStorage FlashArray API IP address |
| `PUREFA_API_TOKEN` | API Token key |
| `PUREFA_TARGET` | iSCSI Target name |

**Example template:**
~~~shell
$ cat purefa_image.ds
NAME = "purefa_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "purefa"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"

$ onedatastore create purefa_image.ds
ID: 102
~~~

### Datastore Optional Attributes

**Template optional parameters:**

| Attribute | Description |
| ------------------------- | ----------------------------------------------------- |
| `PUREFA_VERSION` | PureStorage FlashArray REST API version (Default: 2.9) |
| `PUREFA_SUFFIX` | Suffix appended to all volume names |

## Datastore Internals

**Storage architecture details:**

- **Images:** Stored as a single Volume in PureStorage FlashArray
- **Naming Convention:**
  - Image datastore: `one_<image_id>`
  - System datastore: `one_<vm_id>_disk_<disk_id>`
- **Operations:**
  - Non-persistent: Clone
  - Persistent: Rename

Hosts are automatically created in PureStorage using the PureStorage FlashArray API, with a name generated from their hostname.

{{< alert title="Warning" color="warning" >}}
Do NOT change the hostname of your hosts unless no VMs are deployed on that host
{{< /alert >}}

Symbolic links from the System datastore are created for each Virtual Machine on its Host once the Volumes have been mapped.
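
Once a Virtual Machine is running, an easy sanity check on the KVM node is to confirm that its disks in the System datastore resolve to FlashArray multipath devices. A small verification sketch, assuming the default datastore location and using placeholder datastore and VM IDs:

~~~bash
# VM disks in the System datastore are symbolic links to the mapped multipath devices
ls -l /var/lib/one/datastores/<system_ds_id>/<vm_id>/

# The corresponding device-mapper entries should show multiple active PURE/FlashArray paths
multipath -ll | grep -i -A 4 "PURE"
~~~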

**Backup process details:**

Both Full and Incremental backups are supported by PureStorage FlashArray. For Full backups, a snapshot of the Volume containing the VM disk is taken and attached to the host, where it is converted into a qcow2 image and uploaded to the backup datastore.

Incremental backups are created using the Volume Difference feature of PureStorage FlashArray. This returns a list of block offsets and lengths that have changed since a target snapshot. This list is then used to create a sparse QCOW2 file, which is uploaded to the backup datastore.

{{< alert title="Note" color="success" >}}
You can configure the block size (default and minimum: 4096 B / 4 KB) for incremental backups by modifying the file at `/var/tmp/one/etc/tm/san/backup.conf`.
{{< /alert >}}

{{< alert title="Warning" color="warning" >}}
The incremental backup feature of PureStorage FlashArray requires the `nbd` kernel module to be loaded and the `nbdfuse` package to be installed on all OpenNebula nodes.
{{< /alert >}}

## System Considerations

Occasionally, under network interruptions or if a volume is deleted directly from PureStorage, the iSCSI connection may drop or fail. This can cause the system to hang on a `sync` command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to manage these issues automatically, it is important to be aware of these potential iSCSI connection challenges.

You may wish to contact the OpenNebula Support team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:

- If stale devices are left over after failed operations, run:
  ~~~bash
  rescan-scsi-bus.sh -r -m
  ~~~
- If an entire multipath map remains, run:
  ~~~bash
  multipath -f <multipath_device>
  ~~~
  *Be very careful to target the correct multipath device.*

{{< alert title="Note" color="success" >}}
This behavior stems from the inherent complexities of iSCSI connections and is not exclusive to OpenNebula or PureStorage.
{{< /alert >}}

If devices persist, follow these steps:

1. Run `dmsetup ls --tree` or `lsblk` to see which mapped devices remain. You may see devices not attached to a mapper entry in `lsblk`.
2. For each such device (not your root device), run:
   ~~~bash
   echo 1 > /sys/block/sdX/device/delete
   ~~~
   where `sdX` is the device name.
3. Once those devices are gone, remove leftover mapper entries:
   ~~~bash
   dmsetup remove /dev/mapper/<device_name>
   ~~~
4. If removal fails:
   - Check usage:
     ~~~bash
     fuser -v $(realpath /dev/mapper/<device_name>)
     ~~~
   - If it’s being used as swap:
     ~~~bash
     swapoff /dev/mapper/<device_name>
     dmsetup remove /dev/mapper/<device_name>
     ~~~
   - If another process holds it, kill the process and retry:
     ~~~bash
     dmsetup remove /dev/mapper/<device_name>
     ~~~
   - If you can’t kill the process or nothing shows up:
     ~~~bash
     dmsetup suspend /dev/mapper/<device_name>
     dmsetup wipe_table /dev/mapper/<device_name>
     dmsetup resume /dev/mapper/<device_name>
     dmsetup remove /dev/mapper/<device_name>
     ~~~

This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support team if you need assistance.