When you make an application for NeCTAR resources, the application form talks about "local vm disk", "persistent volume storage" and "object storage".  Other places talk about "ephemeral storage", "RDSI collection storage", NFS, HFS, and other things.

This page describes and compares the different kinds of storage that are available to NeCTAR instances, to help you make an informed decision about what you should ask for in your requests for resources.

Local VM Disk

Local VM disk (as the Dashboard calls it) is the disk space that you get when you launch an Instance.  Assuming that you launched your instance "from image" using one of the standard NeCTAR flavours, your local VM disk storage will consist of two file systems:

  • The "root" file system will be 10GB in size and will contain the core system software, any additional software that you have installed (e.g. from the distro's software repositories), and the users' home directories.
  • An additional "ephemeral" file system will be mounted as "/mnt".  For the standard flavours, the size of the ephemeral file system is 30GB multiplied by the number of VCPUs.
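As a quick sanity check of the sizing rule (30GB per VCPU, per the text; the VCPU counts chosen here are just illustrative), the ephemeral sizes work out like this:

```shell
# Ephemeral disk scales with VCPU count on the standard flavours:
# 30GB per VCPU.  The VCPU counts below are illustrative examples.
for vcpus in 1 2 4 8 16; do
  echo "flavour with ${vcpus} VCPUs: $((30 * vcpus))GB ephemeral at /mnt"
done
```

So the largest standard flavour (16 VCPUs) works out at 480GB of ephemeral storage.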

The "root" file system and the "ephemeral" file system typically use the same kind of storage, but there is a key difference between them:

  • The "root" file system will be saved when you create an Instance Snapshot.
  • The "ephemeral" file system will not be saved in an instance snapshot.  The only way to save what is in the ephemeral file system (e.g. when you Terminate an instance) is to run a command on the instance to copy the data somewhere else.  Also, the ephemeral file system is lost if you "rebuild" an instance.

However, the two kinds of local VM disk file system are alike in another respect: the lifetimes of the root and ephemeral file systems are both tied to the lifetime of the instance (i.e. the VM).  When an instance is Terminated, the disk space used by both file systems is immediately reclaimed.
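For example, before terminating an instance you could copy the ephemeral file system to another machine.  This is only a sketch; "backup-host" and the destination path are placeholders, not real services:

```shell
# Copy everything under the ephemeral file system (/mnt) to another host
# before terminating the instance.  "backup-host" and the destination
# path are placeholders for illustration only.
rsync -avz /mnt/ user@backup-host:/backups/my-instance-mnt/
```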

Volume Storage

Volume Storage (or Persistent Volume Storage to give it its full name) is another kind of disk storage that is typically used for file systems.  The main difference between Volume Storage and local VM disk storage is that the lifetime of a Volume is independent of any single instance / VM.

Volumes can be snapshotted just like Instances, but Volume Storage snapshots are handled differently:

  • An Instance snapshot is stored in Swift Object Storage, and replicated across multiple data centres.
  • A Volume snapshot is stored in Cinder and is not replicated.

There are two common ways to use Volume Storage:

  • You can attach a Volume to an Instance, and then mount the file system on the volume as a secondary file system for the instance.
  • You can launch an Instance from a Volume, which makes the file system on the Volume the instance's primary file system.

In both cases there are a couple of restrictions:

  • Only one instance can be attached to or launched from a given Volume at any time.  Volumes can be used by different instances over time ... but not at the same time.
  • A Volume can only be used by an instance in the same data centre.
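Assuming the OpenStack command-line clients are installed and your credentials are loaded, the "attach" workflow might look like the following sketch.  The volume and instance names are placeholders, and the /dev/vdc device name is an assumption (the device actually assigned can differ, so check with lsblk):

```shell
# Create a 50GB volume and attach it to an existing instance.
# Names are placeholders; older nova/cinder clients have equivalents.
openstack volume create --size 50 my-data-volume
openstack server add volume my-instance my-data-volume

# Then, on the instance itself: make a file system (first use only -
# mkfs destroys existing data!) and mount it.
sudo mkfs.ext4 /dev/vdc
sudo mkdir -p /data
sudo mount /dev/vdc /data
```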

Object Storage

The third kind of storage available in NeCTAR OpenStack is Swift Object Storage.  As mentioned above, Swift storage is replicated across multiple data centres, for speed of access and redundancy.  However, the main difference between Swift and the other forms of storage is that it does not support conventional filesystem access; i.e. you cannot "mount" a Swift object on a NeCTAR instance.  Instead, Swift provides a simple "GET" / "PUT" API for reading and writing entire objects, or segments of objects.  There is also a simple "flat" directory of Swift objects for each NeCTAR project.

Swift is best suited to use-cases which require the online storage of large, essentially read-only blobs of data, especially when that data needs to be replicated.
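For example, with the python-swiftclient tools installed and OpenStack credentials loaded into the environment, whole objects can be written and read back from the command line (the container and file names below are placeholders):

```shell
# Upload a file as an object, list the container's contents, and
# fetch the object back.  Container and file names are illustrative.
swift upload my-container results.tar.gz
swift list my-container
swift download my-container results.tar.gz
```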

(At the moment, NeCTAR OpenStack does not implement quotas on Object Storage.  A NeCTAR project is technically able to use Object Storage even if the project requested zero storage of this kind.  However, I expect that at some point NeCTAR will roll out the capability to set and enforce quotas.)

RDSI Collection Storage

The previous sections describe the kinds of storage that may be requested as part of a NeCTAR allocation.  The amount of storage available through NeCTAR is relatively small.  (The largest standard flavour will give you 480GB of ephemeral storage, and generally speaking requests for significant amounts of Volume Storage require convincing justification: the metaphorical "bar" is set high.)

The main vehicle for getting large amounts of storage space is to lodge a request for "Collection" storage through the RDSI "ReDS" merit allocation scheme. The requirements and the application process are outside of the scope of this page. What is in scope is the characteristics of the storage that you will get if your application is successful.  And (sadly) the answer is that it depends on which Node provisions the storage.

Some Nodes provision RDSI collection storage as Object Storage or Volume Storage.  Other Nodes have other ways of provisioning it, as described below.

QRIScloud "RDSI collection VMs"

QCIF's default way of exposing RDSI collection storage to the custodians is to provide them with SSH-enabled accounts on a "collection VM".  In addition to logging in, the custodian can use SSH-based file transfer protocols / mechanisms such as "scp" and "rsync", WebDAV, and Globus GridFTP.

The collection custodian is provided with the credentials for accessing the collection via its collection VM.  It is worth noting that these credentials do not give the custodian "root" access.
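For instance, a custodian could push data into a collection using scp or rsync over SSH.  The account name, host name, and collection paths below are placeholders, not the actual QRIScloud details:

```shell
# Push a local dataset to the collection VM over SSH.
# Account, host, and paths are placeholders for illustration.
scp -r ./dataset/ q0001@collection-vm.example.edu.au:/collection/Q0001/

# rsync is usually preferable for large or resumable transfers.
rsync -avz --progress ./dataset/ \
    q0001@collection-vm.example.edu.au:/collection/Q0001/dataset/
```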

NFS Storage

A file system on a typical UNIX or Linux-based system can be "exported" so that selected other systems can mount it. Someone with a NeCTAR instance can configure it as an NFS server, and export some part of the file system to other systems.
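A minimal sketch of setting up an export on an instance acting as an NFS server follows.  The directory, subnet, and export options are assumptions, and the exact steps (installing the NFS server packages, opening firewall ports) vary by distro:

```shell
# On the NFS server instance: export /data read-write to clients on a
# private subnet.  The path, subnet, and options are illustrative only.
echo '/data 10.0.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra      # re-read /etc/exports
sudo exportfs -v       # verify what is currently exported
```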

NFS can also be used to expose QRIScloud RDSI collections to NeCTAR instances, with the following caveats:

  • The right to access a specific RDSI collection via NFS needs to be requested via QRIScloud support.  Access can be restricted to specific instances, or to specific tenants.
  • Access to the collection NFS servers is via the (respective) data centre's private storage network only. This means that you can only access a collection from an instance via NFS if they are both in the same data centre.
  • There is no central management of NFS user and group ids. If you mount a collection on multiple instances, you need to ensure that the (numeric) user and group ids on all instances are the same.
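Putting the caveats together, mounting a collection on an instance might look like this sketch.  The NFS server name and export path are placeholders (the real ones would be supplied by QRIScloud support):

```shell
# On the client instance: mount the collection over NFS.
# Server name and export path are placeholders.
sudo mkdir -p /mnt/collection
sudo mount -t nfs nfs-server.example:/collection/Q0001 /mnt/collection

# Because there is no central uid/gid management, confirm that the
# numeric ids match on every instance that mounts the collection.
id myuser    # compare uid/gid numbers across instances
```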

HSM Storage

Some NeCTAR Node providers operate a hierarchical storage management (HSM) system.  The general idea of HSM is that you have multiple "tiers" of storage, typically implemented using a variety of online (e.g. magnetic disk or SSD) and offline (e.g. magnetic tape) storage.  In a typical HSM system, infrequently used data files are migrated from fast storage to slower storage and ultimately onto tape.  If someone then attempts to access a file on tape, it is automatically brought back to disk, and the user is able to access it after a delay of a few minutes.

A common technology for implementing HSM is SGI's DMF software.

Related questions

Which kind of disk storage is fastest?

Is NFS OK for XXX?

First of all, NFS does not give as good performance as file systems on local (to the machine) disks, or indeed as file systems on attached Cinder volumes.  NFS allows file systems to be shared by multiple machines, and that has inherent overheads, e.g. in keeping caches fresh and in file locking.  However, NFS is definitely suitable for a wide range of use-cases:

  • NFS should be fine for typical "file management" use-cases.
  • NFS should be fine for computation with light file I/O loads.
  • NFS is less than ideal for computation with heavy file I/O loads.  We have observed (in QRIScloud) that people who attempt to run workloads with multiple computational jobs (or threads) hitting the file system simultaneously find that the NFS server becomes a bottleneck.  This is liable to lead to NFS timeouts and (in bad cases) "stale NFS handle" errors that kill application processes.  This can be mitigated to some extent by tuning the client-side NFS parameters.
  • NFS is not good for databases. You won't get good performance.

The other thing to note is that NFS does not perform well for "metadata intensive" operations.  For example, if you have a directory containing lots of files and you need to repeatedly scan it for the most-recently-updated file, each scan may generate an NFS request for each file.  By contrast, with a local or Cinder-backed file system the metadata is likely to be cached in the VM's memory, resulting in orders of magnitude better performance.  Having said that, one could argue that repeated directory scans are poor application design.  If the application were fixed, the NFS metadata performance would be moot.
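Client-side tuning is done through NFS mount options.  A sketch follows; the server name and path are placeholders, and the values shown are illustrative starting points rather than recommendations (see nfs(5) for what each option does):

```shell
# Illustrative client-side NFS tuning: longer timeouts and more retries
# to ride out a loaded server, and longer attribute caching (actimeo)
# to reduce metadata round-trips.  Values here are examples only.
sudo mount -t nfs -o timeo=600,retrans=5,actimeo=60 \
    nfs-server.example:/collection/Q0001 /mnt/collection
```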

Which kinds of disk storage are "backed up"?

It depends what you mean by "backed up".

  • If you mean a backup system that would allow you to restore individual files or directories to their states at multiple points in the past, then the answer is that none of the different storage types support that.
  • If you mean a disaster recovery system that would allow you to do coarse-grained recovery, then:
    • NeCTAR Object Storage keeps multiple online replicas of each object in multiple places to guard against data loss and unavailability.
    • Hierarchical Storage Management can be configured to keep both online and offline (tape) copies, and to keep offline copies in multiple locations.

What is the problem with small files?