We provide users with the means to store, manage, and share their research data.
In addition to systems specifically tailored for data-intensive computations, we provide a variety of storage resources optimized for different phases of the data lifecycle; tools to enable users to manage, protect, and control their data; high-speed networks for intra-site and inter-site (ESnet) data transfer; gateways and portals for publishing data for broad consumption; and consulting services to help users craft efficient data management processes for their projects.
Recommended Workflow
Users with minimal data storage requirements (<=2 TB and <=2 million inodes) can stage input files and codes, launch jobs, and store output data all in their home directory.
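To see whether you fit within these limits, you can check your current home directory usage. This is a minimal sketch using standard GNU coreutils; the 2 TB and 2 million inode figures come from the limits above, and your site may also provide its own quota command.

```shell
# Total space used by the home directory (human-readable).
du -sh "$HOME"

# Total inode usage: every file, directory, and link counts as one inode.
# (GNU du >= 8.22; on other systems, `find "$HOME" | wc -l` is a rough substitute.)
du -s --inodes "$HOME"
```

If either number approaches the limit, follow the scratch-based workflow below instead.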
Users who expect to exceed the home directory limits should request access to either NFS Scratch or Lustre Scratch. The recommended workflow is:
- User codes and input files stored in user home directories, which are backed up nightly and geographically replicated.
- Output data written to NFS or Lustre Scratch spaces, which are subject to a 60-day purge policy.
- Periodic post-processing of scratch data and migration of results to long-term storage. See the Filesystem section for more information.
- Inode-heavy data compressed in flight as it is moved to long-term storage. See our sections on NFS Scratch and Lustre Scratch.
- Deletion of unnecessary scratch data.
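The steps above can be sketched as a shell session. This is only an illustration: it uses temporary directories as stand-ins for the real home, scratch, and long-term storage paths, and the file names are invented for the demonstration.

```shell
#!/bin/sh
set -e
# Stand-in directories; substitute your actual home, scratch, and
# long-term storage paths at your site.
HOME_DIR=$(mktemp -d)   # stands in for the backed-up home directory
SCRATCH=$(mktemp -d)    # stands in for NFS or Lustre Scratch
ARCHIVE=$(mktemp -d)    # stands in for long-term storage

# 1. Codes and input files live in the home directory.
echo "input parameters" > "$HOME_DIR/inputs.dat"

# 2. Stage inputs to scratch; the job writes its output there.
cp "$HOME_DIR/inputs.dat" "$SCRATCH/"
mkdir "$SCRATCH/output"
for i in 1 2 3; do echo "result $i" > "$SCRATCH/output/part$i.dat"; done

# 3. Compress the inode-heavy output directory into a single archive
#    in long-term storage: many small files become one inode.
tar -czf "$ARCHIVE/output.tar.gz" -C "$SCRATCH" output

# 4. Delete scratch data that is no longer needed (and would otherwise
#    be purged after 60 days anyway).
rm -rf "$SCRATCH"
```

Compressing before the transfer (step 3) is what keeps inode counts low on long-term storage; transferring the output directory file-by-file would preserve every inode.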