ESXi 8.0 uses a system storage layout that allows flexible partition management, supports large modules and third-party components, and facilitates debugging.

ESXi System Storage

The ESXi 8.0 system storage layout consists of four partitions:
Table 1. ESXi system storage partitions
Partition     Use                                                        Type
System Boot   Stores boot loader and EFI modules.                        FAT16
Boot-bank 0   System space to store ESXi boot modules.                   FAT16
Boot-bank 1   System space to store ESXi boot modules.                   FAT16
ESX-OSData    Acts as the unified location to store additional modules.  VMFS-L
              Not used for booting and virtual machines. Consolidates
              the legacy /scratch partition, locker partition for
              VMware Tools, and core dump destinations.

Caution: If the installation media is a USB or SD card device, the best practice is to create the ESX-OSData partition on a persistent storage device that is not shared between ESXi hosts.

The ESX-OSData volume is divided into two high-level categories of data, persistent and non-persistent. Persistent data consists of data written infrequently, for example, VMware Tools ISOs, configurations, and core dumps.

Non-persistent data consists of frequently written data, for example, logs, VMFS global traces, vSAN Entry Persistence Daemon (EPD) data, vSAN traces, and real-time databases.

Figure 1. Consolidated system storage in ESXi 8.0
The ESX-OSData volume consolidates the legacy /scratch partition, locker partition for VMware Tools, and core dump destinations.

ESXi System Storage Sizes

Partition sizes, except for the system boot partition, can vary depending on the size of the boot media used. If the boot media is a high-endurance device with a capacity larger than 142 GB, a VMFS datastore is created automatically to store virtual machine data.

You can review the boot media capacity and the automatic sizing as configured by the ESXi installer by using the vSphere Client and navigating to the Partition Details view. Alternatively, you can use ESXCLI, for example the esxcli storage filesystem list command.
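For example, from an ESXi Shell or SSH session on the host, the following sketch lists the system storage volumes and their sizes. The BOOTBANK and OSDATA labels are the defaults assigned by the installer; the volume names on your host may differ:

```shell
# List all mounted filesystems with their sizes and types
esxcli storage filesystem list

# Narrow the output to the system storage volumes
# (BOOTBANK and OSDATA are the default installer-assigned labels)
esxcli storage filesystem list | grep -E 'BOOTBANK|OSDATA'
```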

Table 2. ESXi System Storage Sizes, Depending on the Used Boot Media and Its Capacity
Boot Media Size   8-10 GB          10-32 GB         32-128 GB        >128 GB
System Boot       100 MB           100 MB           100 MB           100 MB
Boot-bank 0       500 MB           1 GB             4 GB             4 GB
Boot-bank 1       500 MB           1 GB             4 GB             4 GB
ESX-OSData        remaining space  remaining space  remaining space  up to 128 GB
VMFS datastore    -                -                -                remaining space, for media size > 142 GB
You can use the ESXi installer boot option systemMediaSize to limit the size of system storage partitions on the boot media. If your system has a small footprint that does not require the maximum of 128 GB of system storage size, you can limit it to the minimum of 32 GB. The systemMediaSize parameter accepts the following values:
  • min (32 GB, for single disk or embedded servers)
  • small (64 GB, for servers with at least 512 GB of RAM)
  • default (128 GB)
  • max (consume all available space, for multi-terabyte servers)

The selected value must fit the purpose of your system. For example, a system with 1 TB of memory must use at least the small value of 64 GB for system storage. To set the boot option at install time, for example systemMediaSize=small, see Enter Boot Options to Start an Installation or Upgrade Script. For more information, see Knowledge Base article 81166.
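As a sketch of the procedure, you typically press Shift+O when the installer boot screen appears and append the option to the existing boot command line. The command line shown here is illustrative and may differ on your installation media:

```shell
# At the ESXi installer boot screen, press Shift+O and append
# the systemMediaSize option to the existing boot command line:
runweasel systemMediaSize=small
```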

ESXi System Storage Links

The subsystems that require access to the ESXi partitions access them by using the following symbolic links:
Table 3. ESXi system storage symbolic links
System Storage Volume   Symbolic Link
Boot-bank 0             /bootbank
Boot-bank 1             /altbootbank
Persistent data
Non-persistent data
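The boot-bank links in the table can be inspected from an ESXi Shell or SSH session, as in this sketch:

```shell
# Show where the boot-bank symbolic links point
ls -l /bootbank /altbootbank

# The active boot bank holds the boot configuration currently in use
head -5 /bootbank/boot.cfg
```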
Storage Behavior

When you start ESXi, the host enters an autoconfiguration phase during which system storage devices are configured with defaults.

When you reboot the ESXi host after installing the ESXi image, the host configures the system storage devices with default settings. By default, all visible blank internal disks are formatted with VMFS, so you can store virtual machines on the disks. In ESXi Embedded, all visible blank internal disks are also formatted with VMFS by default.

Caution: ESXi overwrites any disks that appear to be blank. Disks are considered to be blank if they do not have a valid partition table or partitions. If you are using software that uses such disks, in particular if you are using logical volume manager (LVM) instead of, or in addition to, conventional partitioning schemes, ESXi might cause local LVM to be reformatted. Back up your system data before you power on ESXi for the first time.

On the hard drive or USB device that the ESXi host is booting from, the disk-formatting software retains existing diagnostic partitions that the hardware vendor creates. In the remaining space, the software creates the partitions described below.

Partitions Created by ESXi on the Host Drive

For fresh installations, several new partitions are created for the boot banks, scratch partition, locker, and core dump. Fresh ESXi installations use GUID Partition Tables (GPT) instead of MSDOS-based partitioning. The installer creates boot banks of varying size depending on the size of the disk. For more information on the scratch partition, see About the Scratch Partition.
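You can inspect the GPT partition table that the installer created by using partedUtil from an ESXi Shell. The device name below is a placeholder; substitute your actual boot disk:

```shell
# List the disk devices to find the boot disk
ls /vmfs/devices/disks/

# Show the GPT partition table of the boot disk
# (replace the device name with your boot device)
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba0:C0:T0:L0
```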

The installer affects only the installation disk. The installer does not affect other disks of the server. When you install on a disk, the installer overwrites the entire disk. When the installer autoconfigures storage, the installer does not overwrite hardware vendor partitions.

To create the VMFS datastore, the ESXi installer requires a minimum of 128 GB of free space on the installation disk.

You might want to override this default behavior if, for example, you use shared storage devices instead of local storage. To prevent automatic disk formatting, detach the local storage devices from the host under the following circumstances:
  • Before you start the host for the first time.
  • Before you start the host after you reset the host to the configuration defaults.

To override the VMFS formatting if automatic disk formatting already occurred, you can remove the datastore. See the vCenter Server and Host Management documentation.

About the Scratch Partition

For new installations of ESXi, during the autoconfiguration phase, a scratch partition is created on the installation disk if it is a high-endurance device such as a hard drive or SSD.

Note: Partitioning for hosts that are upgraded to ESXi 7.0 and later from earlier versions differs significantly from partitioning for new installations of ESXi. The size of bootbank partitions is different and autoconfiguration might not configure a coredump partition on the boot disk due to size limitations.

When ESXi boots, the system tries to find a suitable partition on a local disk to create a scratch partition.

The scratch partition is not required. It is used to store system logs, which you need when you create a support bundle. If the scratch partition is not present, system logs are stored in a ramdisk. In low-memory situations, you might want to create a scratch partition if one is not present.

The scratch partition is created during installation. Do not modify the partition itself.

If no scratch partition is created, you can configure one later, for example on a remote NFS-mounted directory, or override the default configuration.

Set the Scratch Partition from the vSphere Client

If a scratch partition is not set up, you might want to configure one, especially if the host is low on memory. When a scratch partition is not present, system logs are stored in a ramdisk.

Prerequisite: The directory to use for the scratch partition must exist on the host.
  1. From the vSphere Client, connect to the vCenter Server.
  2. Select the host in the inventory.
  3. Click the Configure tab.
  4. Select System.
  5. Select Advanced System Settings.
    The setting ScratchConfig.CurrentScratchLocation shows the current location of the scratch partition.
  6. In the ScratchConfig.ConfiguredScratchLocation text box, enter a directory path that is unique for this host.
    For example, /vmfs/volumes/DatastoreUUID/DatastoreFolder.
  7. Reboot the host for the changes to take effect.
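Alternatively, the advanced setting can be changed with ESXCLI from an ESXi Shell or SSH session. This is a sketch; the datastore path is a placeholder for a directory on your own persistent storage:

```shell
# Show the configured scratch location
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation

# Set the scratch location to a directory on a persistent datastore
# (the path is a placeholder; the directory must already exist on the host)
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker

# Reboot the host for the change to take effect
reboot
```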