ZPOOL(8)		    System Manager's Manual		      ZPOOL(8)

NAME
       zpool --	configures ZFS storage pools

SYNOPSIS
       zpool [-?]
       zpool add [-fn] pool vdev ...
       zpool attach [-f] pool device new_device
       zpool clear [-F [-n]] pool [device]
       zpool create [-fnd] [-o property=value] ...
             [-O file-system-property=value] ... [-m mountpoint] [-R root]
             pool vdev ...
       zpool destroy [-f] pool
       zpool detach pool device
       zpool export [-f] pool ...
       zpool get all | property[,...] pool ...
       zpool history [-il] [pool] ...
       zpool import [-d	dir | -c cachefile] [-D]
       zpool import [-o mntopts] [-o property=value] ...
             [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]]
             -a
       zpool import [-o mntopts] [-o property=value] ...
             [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]]
             pool | id [newpool]
       zpool iostat [-T	d|u] [-v] [pool] ...
       zpool labelclear	[-f] device
       zpool list [-Hv] [-o property[,...]] [-T d|u] [pool] ...
             [interval [count]]
       zpool offline [-t] pool device ...
       zpool online [-e] pool device ...
       zpool reguid pool
       zpool remove pool device	...
       zpool replace [-f] pool device [new_device]
       zpool scrub [-s]	pool ...
       zpool set property=value	pool
       zpool split [-n]	[-R altroot] [-o  mntopts]  [-o	 property=value]  pool
	     newpool [device ...]
       zpool status [-vx] [-T d|u] [pool] ... [interval	[count]]
       zpool upgrade [-v]
       zpool upgrade [-V version] -a | pool ...

DESCRIPTION
       The  zpool  command  configures	ZFS storage pools. A storage pool is a
       collection of devices that provides physical storage and	data  replica-
       tion for	ZFS datasets.

       All datasets within a storage pool share	the same space.	See zfs(8) for
       information on managing datasets.

   Virtual Devices (vdevs)
       A  "virtual device" (vdev) describes a single device or a collection of
       devices organized according to certain performance and fault character-
       istics. The following virtual devices are supported:

       disk    A block device, typically located under /dev.  ZFS can use  in-
	       dividual	 slices	 or partitions,	though the recommended mode of
	       operation is to use whole disks.	A disk can be specified	 by  a
	       full  path  to  the  device  or the geom(4) provider name. When
	       given a whole disk, ZFS automatically labels the	disk, if  nec-
	       essary.

       file    A regular file. The use of files	as a backing store is strongly
	       discouraged.  It	 is  designed  primarily for experimental pur-
	       poses, as the fault tolerance of a file is only as good as the
	       file  system of which it	is a part. A file must be specified by
	       a full path.

       mirror  A mirror	of two or more devices.	Data is	replicated in an iden-
	       tical fashion across all	components of a	mirror.	A mirror  with
	       N  disks	of size	X can hold X bytes and can withstand (N-1) de-
	       vices failing before data integrity is compromised.

       raidz   (or raidz1, raidz2, raidz3). A variation on RAID-5 that
	       allows for better distribution of parity and eliminates the
	       "RAID-5 write hole" (in which data and parity become
	       inconsistent after a power loss). Data and parity are striped
	       across all disks within a raidz group.

	       A raidz group can have single-, double-, or triple-parity,
	       meaning	that  the  raidz  group	can sustain one, two, or three
	       failures, respectively, without losing  any  data.  The	raidz1
	       vdev  type  specifies  a	 single-parity raidz group; the	raidz2
	       vdev type specifies a double-parity raidz group;	and the	raidz3
	       vdev type specifies a triple-parity raidz group.	The raidz vdev
	       type is an alias	for raidz1.

	       A raidz group with N disks of size X with P  parity  disks  can
	       hold  approximately (N-P)*X bytes and can withstand P device(s)
	       failing before data integrity is	compromised. The minimum  num-
	       ber  of devices in a raidz group	is one more than the number of
	       parity disks. The recommended number is between 3 and 9 to help
	       increase	performance.
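
	       For example, a double-parity raidz group could be created
	       from five disks as follows (pool and device names are
	       illustrative):

	         # zpool create mypool raidz2 da0 da1 da2 da3 da4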

       spare   A special pseudo-vdev which keeps track of available hot	spares
	       for a pool. For more information, see the "Hot Spares" sec-
	       tion.

       log     A separate intent log device. If more than one log device is
	       specified, then writes are load-balanced between devices. Log
	       devices can be mirrored. However, raidz vdev types are not
	       supported for the intent log. For more information, see the
	       "Intent Log" section.

       cache   A device	used to	cache storage pool data. A cache device	cannot
	       be configured as	a mirror or raidz group. For more information,
	       see the "Cache Devices" section.

       Virtual devices cannot be nested, so a mirror or	raidz  virtual	device
       can  only contain files or disks. Mirrors of mirrors (or	other combina-
       tions) are not allowed.

       A pool can have any number of virtual devices at	the top	of the config-
       uration (known as "root"	vdevs).	Data is	dynamically distributed	across
       all top-level devices to	balance	data among devices. As new virtual de-
       vices are added,	ZFS automatically places data on the  newly  available
       devices.

       Virtual	devices	are specified one at a time on the command line, sepa-
       rated by	whitespace. The	keywords "mirror" and "raidz" are used to dis-
       tinguish	where a	group ends and another begins. For example,  the  fol-
       lowing creates two root vdevs, each a mirror of two disks:

	 # zpool create	mypool mirror da0 da1 mirror da2 da3

   Device Failure and Recovery
       ZFS  supports  a	rich set of mechanisms for handling device failure and
       data corruption.	All metadata and data is checksummed, and ZFS automat-
       ically repairs bad data from a good copy	when corruption	is detected.

       In order	to take	advantage of these features, a pool must make  use  of
       some  form  of redundancy, using	either mirrored	or raidz groups. While
       ZFS supports running in a non-redundant configuration, where each  root
       vdev  is	 simply	a disk or file,	this is	strongly discouraged. A	single
       case of bit corruption can render some or all of	your data unavailable.

       A pool's	health status is described by one of three states: online, de-
       graded, or faulted. An online pool has all devices operating  normally.
       A  degraded  pool  is one in which one or more devices have failed, but
       the data	is still available due to a redundant configuration. A faulted
       pool has	corrupted metadata, or one or more faulted devices, and	insuf-
       ficient replicas	to continue functioning.

       The health of the top-level vdev, such as mirror	or  raidz  device,  is
       potentially impacted by the state of its	associated vdevs, or component
       devices.	 A top-level vdev or component device is in one	of the follow-
       ing states:

       DEGRADED	 One or	more top-level vdevs is	in the degraded	state  because
		 one  or more component	devices	are offline. Sufficient	repli-
		 cas exist to continue functioning.

		 One or	more component devices is in the degraded  or  faulted
		 state,	but sufficient replicas	exist to continue functioning.
		 The underlying	conditions are as follows:

		   o   The number of checksum errors exceeds acceptable	levels
		       and  the	device is degraded as an indication that some-
		       thing may be wrong.  ZFS	continues to use the device as
		       necessary.

		   o   The number of I/O errors	exceeds	acceptable levels. The
		       device could not	be marked as faulted because there are
		       insufficient replicas to	continue functioning.

       FAULTED	 One or	more top-level vdevs is	in the faulted	state  because
		 one  or  more	component  devices  are	 offline. Insufficient
		 replicas exist	to continue functioning.

		 One or	more component devices is in the  faulted  state,  and
		 insufficient  replicas	exist to continue functioning. The un-
		 derlying conditions are as follows:

		   o   The device could	be opened, but the  contents  did  not
		       match expected values.

		   o   The  number of I/O errors exceeds acceptable levels and
		       the device is faulted to	prevent	further	use of the de-
		       vice.

       OFFLINE	 The  device  was  explicitly  taken  offline  by  the	"zpool
		 offline" command.

       ONLINE	 The device is online and functioning.

       REMOVED	 The  device  was physically removed while the system was run-
		 ning. Device removal detection	is hardware-dependent and  may
		 not be	supported on all platforms.

       UNAVAIL	 The  device could not be opened. If a pool is imported	when a
		 device	was unavailable, then the device will be identified by
		 a unique identifier instead of	its path since	the  path  was
		 never correct in the first place.

       If a device is removed and later	reattached to the system, ZFS attempts
       to  put	the  device  online  automatically. Device attach detection is
       hardware-dependent and might not	be supported on	all platforms.

   Hot Spares
       ZFS allows devices to be	associated with	pools as "hot spares".	 These
       devices	are  not  actively used	in the pool, but when an active	device
       fails, it is automatically replaced by a	hot spare. To  create  a  pool
       with hot	spares,	specify	a "spare" vdev with any	number of devices. For
       example,

	 # zpool create	pool mirror da0	da1 spare da2 da3

       Spares  can  be shared across multiple pools, and can be	added with the
       "zpool add" command and removed with the	"zpool remove" command.	Once a
       spare replacement is initiated, a new "spare" vdev  is  created	within
       the  configuration  that	will remain there until	the original device is
       replaced. At this point,	the hot	spare becomes available	again  if  an-
       other device fails.

       If a pool has a shared spare that is currently being used, the pool
       cannot be exported since other pools may use this shared spare, which
       may lead to potential data corruption.

       An in-progress spare replacement	can be cancelled by detaching the  hot
       spare.	If the original	faulted	device is detached, then the hot spare
       assumes its place in the	configuration, and is removed from  the	 spare
       list of all active pools.

       Spares cannot replace log devices.

   Intent Log
       The  ZFS	 Intent	Log (ZIL) satisfies POSIX requirements for synchronous
       transactions. For instance, databases often require their  transactions
       to be on	stable storage devices when returning from a system call.  NFS
       and  other applications can also	use fsync(2) to	ensure data stability.
       By default, the intent log is allocated from  blocks  within  the  main
       pool.  However,	it  might  be possible to get better performance using
       separate	intent log devices such	as NVRAM or a dedicated	disk. For  ex-
       ample:

	 # zpool create	pool da0 da1 log da2

       Multiple	 log  devices can also be specified, and they can be mirrored.
       See the "EXAMPLES" section for an example of mirroring multiple log de-
       vices.

       Log devices can be added, replaced, attached,  detached,	 imported  and
       exported	 as  part  of the larger pool. Mirrored	log devices can	be re-
       moved by	specifying the top-level mirror	for the	log.

   Cache devices
       Devices can be added to a storage pool as "cache	 devices."  These  de-
       vices  provide  an  additional layer of caching between main memory and
       disk. For read-heavy workloads, where the working set size is much
       larger than what can be cached in main memory, using cache devices
       allows much more of this working set to be served from low latency
       media.
       Using  cache  devices provides the greatest performance improvement for
       random read-workloads of	mostly static content.

       To create a pool	with cache devices, specify a "cache"  vdev  with  any
       number of devices. For example:

	 # zpool create	pool da0 da1 cache da2 da3

       Cache devices cannot be mirrored	or part	of a raidz configuration. If a
       read  error is encountered on a cache device, that read I/O is reissued
       to the original storage pool device, which might	be part	of a  mirrored
       or raidz	configuration.

       The content of the cache	devices	is considered volatile,	as is the case
       with other system caches.

   Properties
       Each  pool  has	several	properties associated with it. Some properties
       are read-only statistics	while others are configurable and  change  the
       behavior	of the pool. The following are read-only properties:

       alloc	   Amount of storage space within the pool that	has been phys-
		   ically allocated.

       capacity	   Percentage  of  pool	 space used. This property can also be
		   referred to by its shortened	column name, "cap".

       comment	   A text string consisting of printable ASCII characters that
		   will	be stored such that it is available even if  the  pool
		   becomes  faulted.   An administrator	can provide additional
		   information about a pool using this property.

       dedupratio  The deduplication ratio specified for a pool, expressed  as
		   a  multiplier.  For example,	a dedupratio value of 1.76 in-
		   dicates that	1.76 units of data were	stored but only	1 unit
		   of disk space was actually consumed.	See zfs(8) for	a  de-
		   scription of	the deduplication feature.

       free	   Number of blocks within the pool that are not allocated.

       freeing	   After  a file system	or snapshot is destroyed, the space it
		   was using is	returned to the	pool asynchronously.   freeing
		   is  the  amount  of	space remaining	to be reclaimed.  Over
		   time	freeing	will decrease while free increases.

       expandsize  This property currently has no value on FreeBSD.

       guid	   A unique identifier for the pool.

       health	   The current health of the pool.  Health  can	 be  "ONLINE",
		   "DEGRADED", "FAULTED", "OFFLINE", "REMOVED",	or "UNAVAIL".

       size	   Total size of the storage pool.

       unsupported@feature_guid
		   Information	about unsupported features that	are enabled on
		   the pool.  See zpool-features(7) for	details.

       used	   Amount of storage space used	within the pool.

       The space usage properties report actual	physical  space	 available  to
       the  storage  pool.  The	physical space can be different	from the total
       amount of space that any	 contained  datasets  can  actually  use.  The
       amount of space used in a raidz configuration depends on	the character-
       istics of the data being	written.  In addition, ZFS reserves some space
       for internal accounting that the	zfs(8) command takes into account, but
       the zpool(8) command does not. For non-full pools of a reasonable size,
       these  effects  should be invisible. For	small pools, or	pools that are
       close to	being completely full, these discrepancies may become more no-
       ticeable.

       The following property can be set at creation time and import time:

       altroot
	   Alternate root directory. If	set, this directory  is	 prepended  to
	   any	mount  points within the pool. This can	be used	when examining
	   an unknown pool where the mount points cannot be trusted, or	in  an
	   alternate  boot environment,	where the typical paths	are not	valid.
	   altroot is not a persistent property. It is valid  only  while  the
	   system  is  up.   Setting altroot defaults to using cachefile=none,
	   though this may be overridden using an explicit setting.
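
	   For example, an exported pool could be examined under an alternate
	   root directory as follows (pool name illustrative):

	     # zpool import -R /mnt tank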

       The following property can only be set at import	time:

       readonly=on | off
	   If set to on, pool will be imported in read-only mode with the fol-
	   lowing restrictions:

	     o	 Synchronous data in the intent	log will not be	accessible

	     o	 Properties of the pool cannot be changed

	     o	 Datasets of this pool can only	be mounted read-only

	     o	 To write to a read-only pool, an export and import of the
		 pool is required.
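
	   For example, a pool could be imported for inspection without risk
	   of new writes as follows (pool name illustrative):

	     # zpool import -o readonly=on tank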

       The following properties	can be set at creation time and	 import	 time,
       and later changed with the zpool	set command:

       autoexpand=on | off
	   Controls automatic pool expansion when the underlying LUN is	grown.
	   If  set  to "on", the pool will be resized according	to the size of
	   the expanded	device.	If the device is part of  a  mirror  or	 raidz
	   then	 all  devices  within that mirror/raidz	group must be expanded
	   before the new space	is made	available to the pool. The default be-
	   havior is "off".  This property can also  be	 referred  to  by  its
	   shortened column name, expand.

       autoreplace=on |	off
	   Controls  automatic device replacement. If set to "off", device re-
	   placement must be initiated	by  the	 administrator	by  using  the
	   "zpool  replace"  command. If set to	"on", any new device, found in
	   the same physical location as a device that previously belonged  to
	   the	pool, is automatically formatted and replaced. The default be-
	   havior is "off".  This property can also  be	 referred  to  by  its
	   shortened column name, "replace".

       bootfs=pool/dataset
	   Identifies  the  default  bootable  dataset for the root pool. This
	   property is expected	to be set mainly by the	installation  and  up-
	   grade programs.

       cachefile=path |	none
	   Controls  the  location  of where the pool configuration is cached.
	   Discovering all pools on system startup requires a cached  copy  of
	   the	configuration data that	is stored on the root file system. All
	   pools in this cache are  automatically  imported  when  the	system
	   boots.  Some	 environments, such as install and clustering, need to
	   cache this information in a different location so  that  pools  are
	   not	automatically  imported. Setting this property caches the pool
	   configuration in a different	location that can  later  be  imported
	   with	 "zpool	 import	 -c".	Setting	it to the special value	"none"
	   creates a temporary pool that is  never  cached,  and  the  special
	   value '' (empty string) uses	the default location.

       comment=text
	   A text string consisting of printable ASCII characters that will be
	   stored  such	that it	is available even if the pool becomes faulted.
	   An administrator can	provide	additional information	about  a  pool
	   using this property.

       dedupditto=number
	   Threshold  for  the	number of block	ditto copies. If the reference
	   count for a deduplicated block increases above this number,	a  new
	   ditto  copy	of this	block is automatically stored. Default setting
	   is 0.

       delegation=on | off
	   Controls whether a non-privileged user is granted access  based  on
	   the dataset permissions defined on the dataset. See zfs(8) for more
	   information on ZFS delegated	administration.

       failmode=wait | continue	| panic
	   Controls  the  system  behavior  in	the event of catastrophic pool
	   failure. This condition is typically	a result of a loss of  connec-
	   tivity  to the underlying storage device(s) or a failure of all de-
	   vices within	the pool. The behavior of such an event	is  determined
	   as follows:

	   wait	   Blocks  all I/O access until	the device connectivity	is re-
		   covered and the errors are cleared.	This  is  the  default
		   behavior.

	   continue
		   Returns  EIO	to any new write I/O requests but allows reads
		   to any of the remaining healthy devices. Any	write requests
		   that	have yet to be committed to disk would be blocked.

	   panic   Prints out a	message	to the console and generates a	system
		   crash dump.
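
	   For example, to have a pool return errors rather than block when
	   all of its devices fail (pool name illustrative):

	     # zpool set failmode=continue tank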

       feature@feature_name=enabled
	   The	value  of  this	property is the	current	state of feature_name.
	   The only valid value	when setting this property  is	enabled	 which
	   moves feature_name to the enabled state.  See zpool-features(7) for
	   details on feature states.

       listsnaps=on | off
	   Controls  whether  information about	snapshots associated with this
	   pool	is output when "zfs list" is run without the  -t  option.  The
	   default value is off.

       version=version
	   The current on-disk version of the pool. This can be	increased, but
	   never decreased. The	preferred method of updating pools is with the
	   "zpool  upgrade"  command,  though this property can	be used	when a
	   specific version is needed for backwards compatibility. Once
	   feature flags are enabled on a pool, this property will no longer
	   have a value.

SUBCOMMANDS
       All  subcommands	 that modify state are logged persistently to the pool
       in their	original form.

       The zpool command provides subcommands to create	 and  destroy  storage
       pools, add capacity to storage pools, and provide information about the
       storage pools. The following subcommands	are supported:

       zpool [-?]

	   Displays a help message.

       zpool add [-fn] pool vdev ...

	   Adds	 the  specified	 virtual  devices  to the given	pool. The vdev
	   specification is described in the "Virtual Devices" section. The
	   behavior  of	the -f option, and the device checks performed are de-
	   scribed in the "zpool create" subcommand.

	   -f	   Forces use of vdevs, even if they appear in use or specify
		   a conflicting replication level. Not all devices can be
		   overridden in this manner.

	   -n	   Displays the	configuration that would be used without actu-
		   ally	 adding	 the vdevs. The	actual pool creation can still
		   fail	due to insufficient privileges or device sharing.

		   Do not add a	disk that is currently configured as a	quorum
		   device  to a	zpool.	After a	disk is	in the pool, that disk
		   can then be configured as a quorum device.

       zpool attach [-f] pool device new_device

	   Attaches new_device to an existing zpool device. The	 existing  de-
	   vice	cannot be part of a raidz configuration. If device is not cur-
	   rently  part	 of  a	mirrored  configuration,  device automatically
	   transforms into a two-way mirror  of	 device	 and  new_device.   If
	   device  is part of a	two-way	mirror,	attaching new_device creates a
	   three-way mirror, and so on.	In either case,	new_device  begins  to
	   resilver immediately.

	   -f	   Forces use of new_device, even if it appears to be in use.
		   Not all devices can be overridden in this manner.
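
	   For example, assuming da0 is currently a standalone disk in the
	   pool tank, the following would convert it into a two-way mirror
	   with da4 (pool and device names illustrative):

	     # zpool attach tank da0 da4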

       zpool clear [-F [-n]] pool [device]

	   Clears  device errors in a pool. If no arguments are	specified, all
	   device errors within	the pool are cleared. If one or	 more  devices
	   is  specified,  only	those errors associated	with the specified de-
	   vice	or devices are cleared.

	   -F	   Initiates recovery mode for an unopenable pool. Attempts to
		   discard the last few	transactions in	the pool to return  it
		   to  an  openable state. Not all damaged pools can be	recov-
		   ered	by using this option. If successful, the data from the
		   discarded transactions is irretrievably lost.

	   -n	   Used	in combination with the	-F flag.  Check	 whether  dis-
		   carding  transactions  would	make the pool openable,	but do
		   not actually	discard	any transactions.

       zpool create [-fnd] [-o property=value] ...
	   [-O file-system-property=value] ... [-m mountpoint] [-R root]
	   pool vdev ...

	   Creates a new storage pool containing the virtual devices specified
	   on  the  command  line. The pool name must begin with a letter, and
	   can only contain alphanumeric  characters  as  well	as  underscore
	   ("_"),  dash	 ("-"),	 and  period  (".").  The pool names "mirror",
	   "raidz", "spare" and	"log" are reserved,  as	 are  names  beginning
	   with	 the  pattern "c[0-9]".	The vdev specification is described in
	   the ""Virtual Devices"" section.

	   The command verifies	that each device specified is  accessible  and
	   not currently in use by another subsystem. There are some uses,
	   such as being currently mounted, or specified as the dedicated
	   dump device, that prevent a device from ever being used by ZFS.
	   Other uses, such as having a preexisting UFS file system, can be
	   overridden with the -f option.

	   The	command	also checks that the replication strategy for the pool
	   is consistent. An attempt to	combine	 redundant  and	 non-redundant
	   storage  in a single	pool, or to mix	disks and files, results in an
	   error unless	-f is specified. The use of differently	sized  devices
	   within  a  single raidz or mirror group is also flagged as an error
	   unless -f is	specified.

	   Unless the -R option	is  specified,	the  default  mount  point  is
	   "/pool".   The mount	point must not exist or	must be	empty, or else
	   the root dataset cannot be mounted. This can	be overridden with the
	   -m option.

	   By default all supported features are enabled on the	new  pool  un-
	   less	the -d option is specified.

	   -f	   Forces  use of vdevs, even if they appear in	use or specify
		   a conflicting replication level.  Not all  devices  can  be
		   overridden in this manner.

	   -n	   Displays the	configuration that would be used without actu-
		   ally	 creating the pool. The	actual pool creation can still
		   fail	due to insufficient privileges or device sharing.

	   -d	   Do not enable any features on  the  new  pool.   Individual
		   features  can  be  enabled  by  setting their corresponding
		   properties  to   enabled   with   the   -o	option.	   See
		   zpool-features(7) for details about feature properties.

	   -o property=value [-o property=value] ...
		   Sets the given pool properties. See the "Properties" sec-
		   tion for a list of valid properties that can be set.

	   -O file-system-property=value [-O file-system-property=value] ...
		   Sets	the given file system properties in the	root file sys-
		   tem	of the pool. See zfs(8)	Properties for a list of valid
		   properties that can be set.

	   -R root
		   Equivalent to "-o cachefile=none,altroot=root"

	   -m mountpoint
		   Sets	the mount point	for  the  root	dataset.  The  default
		   mount  point	 is  "/pool"  or  "altroot/pool" if altroot is
		   specified. The  mount  point	 must  be  an  absolute	 path,
		   "legacy", or	"none".	 For more information on dataset mount
		   points, see zfs(8).

       zpool destroy [-f] pool

	   Destroys the	given pool, freeing up any devices for other use. This
	   command  tries to unmount any active	datasets before	destroying the
	   pool.

	   -f	   Forces any active datasets contained	within the pool	to  be
		   unmounted.

       zpool detach pool device

	   Detaches  device  from  a mirror. The operation is refused if there
	   are no other	valid replicas of the data.
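
	   For example, to detach da1 from a mirror in the pool tank (pool
	   and device names illustrative):

	     # zpool detach tank da1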

       zpool export [-f] pool ...

	   Exports the given pools from	the system. All	devices	are marked  as
	   exported,  but are still considered in use by other subsystems. The
	   devices can be moved	between	systems	(even those of different endi-
	   anness) and imported	as long	as a sufficient	number of devices  are
	   present.

	   Before  exporting  the  pool,  all datasets within the pool are un-
	   mounted. A pool cannot be exported if it has a shared spare that
	   is currently	being used.

	   For	pools  to  be  portable, you must give the zpool command whole
	   disks, not just slices, so  that  ZFS  can  label  the  disks  with
	   portable  EFI  labels. Otherwise, disk drivers on platforms of dif-
	   ferent endianness will not recognize	the disks.

	   -f	   Forcefully unmount all datasets,  using  the	 "unmount  -f"
		   command.

		   This	command	will forcefully	export the pool	even if	it has
		   a  shared spare that	is currently being used. This may lead
		   to potential	data corruption.

       zpool get all | property[,...] pool ...

	   Retrieves the given list of properties (or all properties if	 "all"
	   is  used)  for  the specified storage pool(s). These	properties are
	   displayed with the following	fields:

		 name	     Name of storage pool
		 property    Property name
		 value	     Property value
		 source	     Property source, either 'default' or 'local'.

	   See the "Properties" section for more information on the avail-
	   able	pool properties.

       zpool history [-il] [pool] ...

	   Displays the	command	history	of the specified pools or all pools if
	   no pool is specified.

	   -i	   Displays  internally	 logged	ZFS events in addition to user
		   initiated events.

	   -l	   Displays log records in long format, which in addition to
		   the standard format includes the user name, the hostname,
		   and the zone in which the operation was performed.
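
	   For example, to display the long-format history of the pool tank
	   (pool name illustrative):

	     # zpool history -l tank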

       zpool import [-d	dir | -c cachefile] [-D]

	   Lists pools available to import. If the -d option is	not specified,
	   this	command	searches for devices in	"/dev".	 The -d	option can  be
	   specified  multiple times, and all directories are searched.	If the
	   device appears to be	part of	an exported pool,  this	 command  dis-
	   plays  a  summary  of the pool with the name	of the pool, a numeric
	   identifier, as well as the vdev layout and current  health  of  the
	   device  for	each device or file.  Destroyed	pools, pools that were
	   previously destroyed	with the  "zpool  destroy"  command,  are  not
	   listed unless the -D	option is specified.

	   The	numeric	 identifier  is	unique,	and can	be used	instead	of the
	   pool	name when multiple exported pools of the same name are	avail-
	   able.

	   -c cachefile
		   Reads  configuration	from the given cachefile that was cre-
		   ated	with the "cachefile" pool property. This cachefile  is
		   used	instead	of searching for devices.

	   -d dir  Searches for	devices	or files in dir.  The -d option	can be
		   specified multiple times.

	   -D	   Lists destroyed pools only.

       zpool  import  [-o  mntopts]  [-o  property=value]  ...	[-d  dir  | -c
	   cachefile] [-D] [-f]	[-m] [-N] [-R root] [-F	[-n]] -a

	   Imports all pools found in the search directories. Identical	to the
	   previous command, except that all pools with	a sufficient number of
	   devices available are imported. Destroyed pools,  pools  that  were
	   previously  destroyed with the "zpool destroy" command, will	not be
	   imported unless the -D option is specified.

	   -o mntopts
		   Comma-separated list	of mount options to use	when  mounting
		   datasets  within  the pool. See zfs(8) for a	description of
		   dataset properties and mount	options.

	   -o property=value
		   Sets	the specified property on the imported pool.  See  the
		   ""Properties""  section  for	more information on the	avail-
		   able	pool properties.

	   -c cachefile
		   Reads configuration from the	given cachefile	that was  cre-
		   ated	 with the "cachefile" pool property. This cachefile is
		   used	instead	of searching for devices.

	   -d dir  Searches for	devices	or files in dir.  The -d option	can be
		   specified multiple times. This option is incompatible  with
		   the -c option.

	   -D	   Imports  destroyed  pools  only.  The -f option is also re-
		   quired.

	   -f	   Forces import, even if the pool appears to  be  potentially
		   active.

	   -m	   Enables import with missing log devices.

	   -N	   Do not mount	any filesystems	from the imported pool.

	   -R root
		   Sets	 the  "cachefile" property to "none" and the "altroot"
		   property to "root"

	   -F	   Recovery mode for a non-importable pool. Attempt to	return
		   the	pool to	an importable state by discarding the last few
		   transactions. Not all damaged pools can be recovered	by us-
		   ing this option. If successful, the data from the discarded
		   transactions	is irretrievably lost. This option is  ignored
		   if the pool is importable or	already	imported.

	   -n	   Used	with the -F recovery option. Determines	whether	a non-
		   importable  pool can	be made	importable again, but does not
		   actually perform the	pool recovery. For more	details	 about
		   pool	recovery mode, see the -F option, above.

	   -a	   Searches for	and imports all	pools found.

       zpool  import  [-o  mntopts]  [-o  property=value]  ...	[-d  dir  | -c
	   cachefile] [-D] [-f]	[-m] [-N]  [-R	root]  [-F  [-n]]  pool	 |  id
	   [newpool]

	   Imports  a  specific	 pool. A pool can be identified	by its name or
	   the numeric identifier. If newpool is specified, the	 pool  is  im-
	   ported  using the name newpool.  Otherwise, it is imported with the
	   same	name as	its exported name.

	   If a	device is removed from a system	without	running	"zpool export"
	   first, the device appears as	potentially active. It cannot  be  de-
	   termined  if	this was a failed export, or whether the device	is re-
	   ally	in use from another host. To import a pool in this state,  the
	   -f option is	required.

	   -o mntopts
		   Comma-separated  list of mount options to use when mounting
		   datasets within the pool. See zfs(8)	for a  description  of
		   dataset properties and mount	options.

	   -o property=value
		   Sets	 the  specified	property on the	imported pool. See the
		   ""Properties"" section for more information on  the	avail-
		   able	pool properties.

	   -c cachefile
		   Reads  configuration	from the given cachefile that was cre-
		   ated	with the "cachefile" pool property. This cachefile  is
		   used	instead	of searching for devices.

	   -d dir  Searches for	devices	or files in dir.  The -d option	can be
		   specified  multiple times. This option is incompatible with
		   the -c option.

	   -D	   Imports destroyed pools only. The -f	 option	 is  also  re-
		   quired.

	   -f	   Forces  import,  even if the	pool appears to	be potentially
		   active.

	   -m	   Enables import with missing log devices.

	   -N	   Do not mount	any filesystems	from the imported pool.

	   -R root
		   Equivalent to "-o cachefile=none,altroot=root"

	   -F	   Recovery mode for a non-importable pool. Attempt to	return
		   the	pool to	an importable state by discarding the last few
		   transactions. Not all damaged pools can be recovered	by us-
		   ing this option. If successful, the data from the discarded
		   transactions	is irretrievably lost. This option is  ignored
		   if the pool is importable or	already	imported.

	   -n	   Used	with the -F recovery option. Determines	whether	a non-
		   importable  pool can	be made	importable again, but does not
		   actually perform the	pool recovery. For more	details	 about
		   pool	recovery mode, see the -F option, above.

       zpool iostat [-T	d|u] [-v] [pool] ... [interval [count]]

	   Displays  I/O  statistics for the given pools. When given an	inter-
	   val,	the statistics are printed every interval seconds until	Ctrl-C
	   is pressed. If no pools are specified, statistics for every pool
	   in the system are shown. If count is specified, the command exits
	   after count reports are printed.

	   -T d|u  Print a timestamp.

		   Use modifier	d for standard date format. See	date(1).   Use
		   modifier u for unixtime (equals "date +%s").

	   -v	   Verbose statistics. Reports usage statistics	for individual
		   vdevs within	the pool, in addition to the pool-wide statis-
		   tics.

       zpool labelclear	[-f] device

	   Removes  ZFS	 label	information  from  the	specified device.  The
	   device must not be part of an active	pool configuration.

	   -f	   Treat exported or foreign devices as inactive.
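
	   For example, to clear the label from a disk that belonged to an
	   exported pool (device name illustrative):

	     # zpool labelclear -f da0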

       zpool list [-Hv] [-o property[,...]] [-T d|u] [pool] ... [interval
	   [count]]

	   Lists  the  given pools along with a	health status and space	usage.
	   When	given no arguments, all	pools in the system are	listed.

	   When	given an interval, the output is printed every	interval  sec-
	   onds	 until	Ctrl-C	is pressed. If count is	specified, the command
	   exits after count reports are printed.

	   -H	   Scripted mode. Do not display headers, and separate	fields
		   by a	single tab instead of arbitrary	space.

	   -v	   Show	more detailed information.

	   -o property[,...]
		   Comma-separated  list  of  properties  to  display. See the
		   ""Properties"" section for a	list of	valid properties.  The
		   default  list  is  name,  size,  used, available, capacity,
		   health, altroot.

	   -T d|u  Print a timestamp.

		   Use modifier	d for standard date format. See	date(1).   Use
		   modifier u for unixtime (equals "date +%s").

       zpool offline [-t] pool device ...

	   Takes  the  specified  physical device offline. While the device is
	   offline, no attempt is made to read or write	to the device.

	   -t	   Temporary. Upon reboot, the specified physical  device  re-
		   verts to its	previous state.
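
	   For example, to take da2 offline until the next reboot (pool and
	   device names illustrative):

	     # zpool offline -t tank da2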

       zpool online [-e] pool device ...

	   Brings the specified	physical device	online.

	   This	command	is not applicable to spares or cache devices.

	   -e	   Expand the device to	use all	available space. If the	device
		   is  part  of	a mirror or raidz then all devices must	be ex-
		   panded before the new space will become  available  to  the
		   pool.

       zpool reguid pool

	   Generates  a	 new  unique identifier	for the	pool.  You must	ensure
	   that	all devices in this pool are online and	 healthy  before  per-
	   forming this	action.

       zpool remove pool device	...

	   Removes  the	specified device from the pool.	This command currently
	   only	supports removing hot spares, cache, and log devices.  A  mir-
	   rored  log device can be removed by specifying the top-level	mirror
	   for the log.	Non-log	devices	that are part of a mirrored configura-
	   tion	can be removed using the "zpool	detach"	command. Non-redundant
	   and raidz devices cannot be removed from a pool.

       zpool replace [-f] pool device [new_device]

	   Replaces old_device with new_device.	 This is equivalent to attach-
	   ing new_device, waiting for it  to  resilver,  and  then  detaching
	   old_device.

	   The size of new_device must be greater than or equal	to the minimum
	   size	of all the devices in a	mirror or raidz	configuration.

	   new_device  is required if the pool is not redundant. If new_device
	   is not specified, it	defaults to old_device.	 This form of replace-
	   ment	is useful after	an existing disk has failed and	has been phys-
	   ically replaced. In this case, the new disk may have	the same  /dev
	   path	 as  the  old  device,	even though it is actually a different
	   disk.  ZFS recognizes this.

	   -f	   Forces use of new_device, even if it appears to be in use.
		   Not all devices can be overridden in this manner.

       zpool scrub [-s]	pool ...

	   Begins a scrub. The scrub examines all data in the specified	 pools
	   to  verify  that  it	checksums correctly. For replicated (mirror or
	   raidz) devices, ZFS automatically  repairs  any  damage  discovered
	   during  the	scrub. The "zpool status" command reports the progress
	   of the scrub	and summarizes the results of the scrub	 upon  comple-
	   tion.

	   Scrubbing  and resilvering are very similar operations. The differ-
	   ence	is that	resilvering only examines data that ZFS	 knows	to  be
	   out	of  date (for example, when attaching a	new device to a	mirror
	   or replacing	an existing device), whereas  scrubbing	 examines  all
	   data	to discover silent errors due to hardware faults or disk fail-
	   ure.

	   Because scrubbing and resilvering are I/O-intensive operations, ZFS
	   only	 allows	 one at	a time.	If a scrub is already in progress, the
	   "zpool scrub" command returns an error. To start a new  scrub,  you
	   have	to stop	the old	scrub with the "zpool scrub -s"	command	first.
	   If  a  resilver  is	in  progress, ZFS does not allow a scrub to be
	   started until the resilver completes.

	   -s	   Stop	scrubbing.
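
	   For example, to start a scrub of the pool tank and later stop it
	   (pool name illustrative):

	     # zpool scrub tank
	     # zpool scrub -s tank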

       zpool set property=value	pool

	   Sets the given property on the specified pool. See the
	   "Properties" section for more information on what properties can
	   be set and acceptable values.
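
	   For example, to enable automatic pool expansion (pool name
	   illustrative):

	     # zpool set autoexpand=on tank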

       zpool split [-n]	[-R altroot] [-o  mntopts]  [-o	 property=value]  pool
	   newpool [device ...]

	   Splits off one disk from each mirrored top-level vdev in a pool and
	   creates a new pool from the split-off disks.	The original pool must
	   be made up of one or	more mirrors and must not be in	the process of
	   resilvering.	 The  split subcommand chooses the last	device in each
	   mirror vdev unless overridden by a device specification on the com-
	   mand	line.

	   When	using a	device argument,  split	 includes  the	specified  de-
	   vice(s)  in	a new pool and,	should any devices remain unspecified,
	   assigns the last device in each mirror vdev to  that	 pool,	as  it
	   does	 normally.  If	you are	uncertain about	the outcome of a split
	   command, use	the -n ("dry-run") option to ensure your command  will
	   have	the effect you intend.

	   -R altroot
		   Automatically  import  the  newly created pool after	split-
		   ting, using the specified altroot  parameter	 for  the  new
		   pool's  alternate  root. See	the altroot description	in the
		   ""Properties"" section, above.

	   -n	   Displays the	configuration that would  be  created  without
		   actually  splitting	the  pool. The actual pool split could
		   still fail due to insufficient privileges or	device status.

	   -o mntopts
		   Comma-separated list	of mount options to use	when  mounting
		   datasets  within  the pool. See zfs(8) for a	description of
		   dataset properties and mount	options. Valid	only  in  con-
		   junction with the -R	option.

	   -o property=value
		   Sets	 the  specified	 property  on  the  new	 pool. See the
		   ""Properties"" section, above, for more information on  the
		   available pool properties.
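
	   For example, to preview and then perform a split of the mirrored
	   pool tank into a new pool tank2 (names illustrative):

	     # zpool split -n tank tank2
	     # zpool split tank tank2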

       zpool status [-vx] [-T d|u] [pool] ... [interval	[count]]

	   Displays the	detailed health	status for the given pools. If no pool
	   is  specified,  then	 the status of each pool in the	system is dis-
	   played. For more information	on pool	and  device  health,  see  the
	   ""Device Failure and	Recovery"" section.

	   When	 given	an interval, the output	is printed every interval sec-
	   onds	until Ctrl-C is	pressed. If count is  specified,  the  command
	   exits after count reports are printed.

	   If  a  scrub	 or  resilver is in progress, this command reports the
	   percentage done and the estimated time to completion. Both of these
	   are only approximate, because the amount of data in	the  pool  and
	   the other workloads on the system can change.

	   -x	   Only	display	status for pools that are exhibiting errors or
		   are	otherwise unavailable.	Warnings about pools not using
		   the latest on-disk format will not be included.

	   -v	   Displays verbose data error	information,  printing	out  a
		   complete  list  of  all data	errors since the last complete
		   pool	scrub.

	   -T d|u  Print a timestamp.

		   Use modifier	d for standard date format. See	date(1).   Use
		   modifier u for unixtime (equals "date +%s").

       zpool upgrade [-v]

	   Displays pools which	do not have all	supported features enabled and
	   pools formatted using a legacy ZFS version number.  These pools can
	   continue  to	 be used, but some features may	not be available.  Use
	   zpool upgrade -a to enable all features on all pools.

	   -v	   Displays legacy ZFS versions	supported by the current soft-
		   ware.  See zpool-features(7)	for a description  of  feature
		   flags features supported by the current software.

       zpool upgrade [-V version] -a | pool ...

	   Enables  all	 supported  features  on the given pool.  Once this is
	   done, the pool will no longer be accessible on systems that do  not
	   support feature flags. See zpool-features(7) for details on com-
	   patibility with systems that support feature flags, but do not
	   support all features enabled on the pool.

	   -a	   Enables all supported features on all pools.

	   -V version
		   Upgrade to the specified legacy version. If the -V flag  is
		   specified,  no  features will be enabled on the pool.  This
		   option can only be used to increase the version number up
		   to the last supported legacy version number.

EXIT STATUS
       The following exit values are returned:

	 0   Successful	completion.

	 1   An	error occurred.

	 2   Invalid command line options were specified.

EXAMPLES
       Example 1 Creating a RAID-Z Storage Pool

	 The  following	 command  creates a pool with a	single raidz root vdev
	 that consists of six disks.

	   # zpool create tank raidz da0 da1 da2 da3 da4 da5

       Example 2 Creating a Mirrored Storage Pool

	 The following command creates a pool with  two	 mirrors,  where  each
	 mirror	contains two disks.

	   # zpool create tank mirror da0 da1 mirror da2 da3

       Example 3 Creating a ZFS	Storage	Pool by	Using Partitions

	 The following command creates an unmirrored pool using	two GPT	parti-
	 tions.

	   # zpool create tank da0p3 da1p3

       Example 4 Creating a ZFS	Storage	Pool by	Using Files

	 The  following	 command creates an unmirrored pool using files. While
	 not recommended, a pool based on files	can be useful for experimental
	 purposes.

	   # zpool create tank /path/to/file/a /path/to/file/b

       Example 5 Adding	a Mirror to a ZFS Storage Pool

	 The following command adds two	mirrored disks to the pool  tank,  as-
	 suming	the pool is already made up of two-way mirrors.	The additional
	 space is immediately available	to any datasets	within the pool.

	   # zpool add tank mirror da2 da3

       Example 6 Listing Available ZFS Storage Pools

	 The following command lists all available pools on the	system.

	   # zpool list
	   NAME	  SIZE	ALLOC	FREE	CAP  DEDUP  HEALTH  ALTROOT
	   pool	 2.70T	 473G  2.24T	17%  1.00x  ONLINE  -
	   test	 1.98G	89.5K  1.98G	 0%  1.00x  ONLINE  -

       Example 7 Listing All Properties	for a Pool

	 The following command lists all the properties	for a pool.

	   # zpool get all pool
	   NAME  PROPERTY       VALUE                SOURCE
	   pool  size           2.70T                -
	   pool  capacity       17%                  -
	   pool  altroot        -                    default
	   pool  health         ONLINE               -
	   pool  guid           2501120270416322443  default
	   pool  version        28                   default
	   pool  bootfs         pool/root            local
	   pool  delegation     on                   default
	   pool  autoreplace    off                  default
	   pool  cachefile      -                    default
	   pool  failmode       wait                 default
	   pool  listsnapshots  off                  default
	   pool  autoexpand     off                  default
	   pool  dedupditto     0                    default
	   pool  dedupratio     1.00x                -
	   pool  free           2.24T                -
	   pool  allocated      473G                 -
	   pool  readonly       off                  -

       Example 8 Destroying a ZFS Storage Pool

	 The  following	command	destroys the pool "tank" and any datasets con-
	 tained	within.

	   # zpool destroy -f tank

       Example 9 Exporting a ZFS Storage Pool

	 The following command exports the devices in pool tank	so  that  they
	 can be	relocated or later imported.

	   # zpool export tank

       Example 10 Importing a ZFS Storage Pool

	 The  following	command	displays available pools, and then imports the
	 pool "tank" for use on	the system.

	 The results from this command are similar to the following:

	   # zpool import

	     pool: tank
	       id: 15451357997522795478
	    state: ONLINE
	   action: The pool can	be imported using its name or numeric identifier.
	   config:

		   tank	       ONLINE
		     mirror    ONLINE
			  da0  ONLINE
			  da1  ONLINE

       Example 11 Upgrading All	ZFS Storage Pools to the Current Version

	 The following command upgrades	all ZFS	Storage	pools to  the  current
	 version of the	software.

	   # zpool upgrade -a
	   This	system is currently running ZFS	pool version 28.

       Example 12 Managing Hot Spares

	 The following command creates a new pool with an available hot	spare:

	   # zpool create tank mirror da0 da1 spare da2

	 If  one  of  the disks	were to	fail, the pool would be	reduced	to the
	 degraded state. The failed device can be replaced using the following
	 command:

	   # zpool replace tank	da0 da2

	 Once the data has been resilvered, the spare is automatically
	 removed and is made available should another device fail. The hot
	 spare can be permanently removed from the pool using the following
	 command:

	   # zpool remove tank da2

       Example 13 Creating a ZFS Pool with Mirrored Separate Intent Logs

	 The following command creates a ZFS storage pool consisting of two
	 two-way mirrors and mirrored log devices:

	   # zpool create pool mirror da0 da1 mirror da2 da3 log mirror	da4 da5

       Example 14 Adding Cache Devices to a ZFS	Pool

	 The following command adds two	disks for use as cache	devices	 to  a
	 ZFS storage pool:

	   # zpool add pool cache da2 da3

	 Once  added,  the cache devices gradually fill	with content from main
	 memory.  Depending on the size	of your	cache devices, it  could  take
	 over  an  hour	 for them to fill. Capacity and	reads can be monitored
	 using the iostat subcommand as	follows:

	   # zpool iostat -v pool 5

       Example 15 Removing a Mirrored Log Device

	 The following command removes the mirrored log	device mirror-2.

	 Given this configuration:

	      pool: tank
	     state: ONLINE
	     scrub: none requested
	    config:

		    NAME	STATE	  READ WRITE CKSUM
		    tank	ONLINE	     0	   0	 0
		      mirror-0	ONLINE	     0	   0	 0
			   da0	ONLINE	     0	   0	 0
			   da1	ONLINE	     0	   0	 0
		      mirror-1	ONLINE	     0	   0	 0
			   da2	ONLINE	     0	   0	 0
			   da3	ONLINE	     0	   0	 0
		    logs
		      mirror-2	ONLINE	     0	   0	 0
			   da4	ONLINE	     0	   0	 0
			   da5	ONLINE	     0	   0	 0

	 The command to	remove the mirrored log	mirror-2 is:

	   # zpool remove tank mirror-2

       Example 16 Recovering a Faulted ZFS Pool

	 If a pool is faulted but recoverable, a message indicating this state
	 is provided by	"zpool status" if the pool  was	 cached	 (see  the  -c
	 cachefile  argument  above),  or  as  part of the error output	from a
	 failed	"zpool import" of the pool.

	 Recover a cached pool with the	"zpool clear" command:

	   # zpool clear -F data
	   Pool	data returned to its state as of Tue Sep 08 13:23:35 2009.
	   Discarded approximately 29 seconds of transactions.

	 If the	pool configuration was not cached, use "zpool import" with the
	 recovery mode flag:

	   # zpool import -F data
	   Pool	data returned to its state as of Tue Sep 08 13:23:35 2009.
	   Discarded approximately 29 seconds of transactions.

SEE ALSO
       zpool-features(7), zfs(8)

AUTHORS
       This manual page	is a mdoc(7) reimplementation of the OpenSolaris  man-
       ual  page  zpool(1M),  modified and customized for FreeBSD and licensed
       under the Common	Development and	Distribution License (CDDL).

       The mdoc(7) implementation of this manual page was initially written by
       Martin Matuska <mm@FreeBSD.org>.

FreeBSD				March 14, 2013			      ZPOOL(8)
