zpool(1M)                 System Administration Commands                 zpool(1M)
zpool [-?]
zpool create [-fn] [-R root] [-m mountpoint] pool vdev ...
zpool destroy [-f] pool
zpool add [-fn] pool vdev
zpool remove pool vdev
zpool list [-H] [-o field[,field]*] [pool] ...
zpool iostat [-v] [pool] ... [interval [count]]
zpool status [-xv] [pool] ...
zpool offline [-t] pool device ...
zpool online pool device ...
zpool clear pool [device] ...
zpool attach [-f] pool device new_device
zpool detach pool device
zpool replace [-f] pool device [new_device]
zpool scrub [-s] pool ...
zpool export [-f] pool
zpool import [-d dir] [-D]
zpool import [-d dir] [-D] [-f] [-o opts] [-R root] pool | id [newpool]
zpool import [-d dir] [-D] [-f] [-a]
zpool upgrade
zpool upgrade -v
zpool upgrade [-a | pool]
zpool history [pool] ...
The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.
All datasets within a storage pool share the same space. See zfs(1M) for information on managing datasets.
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.
A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:
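(The pool and device names shown are illustrative.)

  # zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0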
ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has one or more failed devices, and there is insufficient redundancy to replicate the missing data.
ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" vdev with any number of devices. For example,
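  # zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0

The device names above are illustrative; any number of spare devices can be listed after the spare keyword.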
Spares can be shared across multiple pools, and can be added with the "zpool add" command and removed with the "zpool remove" command. Once a spare replacement is initiated, a new "spare" vdev is created within the configuration that will remain there until the original device is replaced. At this point, the hot spare becomes available again if another device fails.
An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools.
The "zpool create -R" and "zpool import -R" commands allow users to create and import a pool with a different root path. By default, whenever a pool is created or imported on a system, it is permanently added so that it is available whenever the system boots. For removable media, or when in recovery situations, this may not always be desirable. An alternate root pool does not persist on the system. Instead, it exists only until exported or the system is rebooted, at which point it will have to be imported again.
In addition, all mount points in the pool are prefixed with the given root, so a pool can be constrained to a particular area of the file system. This is most useful when importing unknown pools from removable media, as the mount points of any file systems cannot be trusted.
When creating an alternate root pool, the default mount point is "/", rather than the normal default "/pool".
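For example, the following creates a pool whose file systems are mounted under /mnt rather than under / (the pool and device names are illustrative):

  # zpool create -R /mnt pool c0t0d0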
All subcommands that modify state are logged persistently to the pool in their original form.
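The log for a pool can be displayed with the zpool history subcommand, for example:

  # zpool history tank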
The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:
Note for the zpool add subcommand: do not add a disk that is currently configured as a quorum device to a zpool. Once a disk is in a zpool, that disk can then be configured as a quorum device.
For the -o option to the zpool list subcommand, which selects the fields to display, the default is all fields.
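For example, assuming name and size are among the available fields, the following prints just those two columns without headers:

  # zpool list -H -o name,size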
The zpool list command reports actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(1M) command takes into account, but the zpool command does not. For pools of a reasonable size that are not close to full, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.
Example 1 Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that consists of six disks.
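For example, with illustrative disk names:

  # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0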
Example 2 Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror contains two disks.
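One possible invocation (disk names are placeholders):

  # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0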
Example 3 Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk slices.
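For example, using two illustrative slice names:

  # zpool create tank /dev/dsk/c0t0d0s1 /dev/dsk/c0t1d0s4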
Example 4 Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes.
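For example, with hypothetical file paths:

  # zpool create tank /path/to/file/a /path/to/file/b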
Example 5 Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool "tank", assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool.
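For example (the new device names are illustrative):

  # zpool add tank mirror c1t0d0 c1t1d0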
Example 6 Listing Available ZFS Storage Pools
The following command lists all available pools on the system. In this case, the pool zion is faulted due to a missing device.
The results from this command are similar to the following:
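(The pool sizes and usage figures below are placeholders.)

  # zpool list
     NAME          SIZE    USED    AVAIL   CAP   HEALTH    ALTROOT
     tank         67.5G   2.92M   67.5G     0%   ONLINE    -
     zion             -       -       -     0%   FAULTED   -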
Example 7 Destroying a ZFS Storage Pool
The following command destroys the pool "tank" and any datasets contained within.
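For example:

  # zpool destroy tank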
Example 8 Exporting a ZFS Storage Pool
The following command exports the devices in pool tank so that they can be relocated or later imported.
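For example:

  # zpool export tank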
Example 9 Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool "tank" for use on the system.
The results from this command are similar to the following:
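  # zpool import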
     tank          ONLINE
       mirror      ONLINE
         c1t2d0    ONLINE
         c1t3d0    ONLINE

  # zpool import tank
Example 10 Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of the software.
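For example:

  # zpool upgrade -a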
Example 11 Managing Hot Spares
The following command creates a new pool with an available hot spare:
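A representative command (device names are illustrative):

  # zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0 c0t3d0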
If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:
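Continuing the illustrative configuration above, where c0t0d0 has failed and c0t3d0 is a spare:

  # zpool replace tank c0t0d0 c0t3d0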
Once the data has been resilvered, the spare is automatically removed and is made available should another device fail. The hot spare can be permanently removed from the pool using the following command:
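For example, to remove the illustrative spare c0t2d0:

  # zpool remove tank c0t2d0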
The following exit values are returned:

  0     Successful completion.

  1     An error occurred.

  2     Invalid command line options were specified.
See attributes(5) for descriptions of the following attributes:
zfs(1M), attributes(5)
SunOS 5.11                                                        14 Nov 2006