
General Workflow

Working on a High Performance Computing (HPC) system typically involves preparing input files, staging the files to an appropriate file system, submitting the job to the queuing system, allowing the job to run, and analyzing the output. Relevant files can then be transferred to long-term storage or downloaded.
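As an illustration, this end-to-end flow can be scripted. The sketch below is a minimal example that assumes a Slurm-style scheduler (`sbatch`/`srun`) and hypothetical project paths and solver names; substitute the scheduler, directories, and executable actually used on your system.

```python
#!/usr/bin/env python3
"""Minimal sketch of a typical HPC job workflow (assumes a Slurm-style scheduler)."""
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations; replace with your own project layout.
project = Path.home() / "my_project"
run_dir = Path.home() / "runs" / "case_001"

# 1. Stage input files to the directory the job will run from.
run_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(project / "input.dat", run_dir / "input.dat")

# 2. Write a batch script for the queuing system.
batch_script = run_dir / "job.sh"
batch_script.write_text(
    "#!/bin/bash\n"
    "#SBATCH --job-name=case_001\n"
    "#SBATCH --ntasks=32\n"
    "#SBATCH --time=04:00:00\n"
    "cd $SLURM_SUBMIT_DIR\n"
    "srun ./my_solver input.dat > output.log\n"
)

# 3. Submit the job; the scheduler runs it when resources become available.
subprocess.run(["sbatch", str(batch_script)], cwd=run_dir, check=True)

# 4. After the job completes, analyze output.log and move anything worth
#    keeping to long-term storage or download it.
```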

Users with minimal data storage requirements – ≤4 TB of space and/or ≤4 million inodes – can stage input files and codes, launch jobs, and store output data all in their home directory.
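A quick way to check whether a directory fits within those limits is to total its size and file count. The snippet below is a simple sketch using only the Python standard library; the 4 TB and 4 million inode thresholds are the home-directory limits described above.

```python
import os
from pathlib import Path

def usage(root: Path) -> tuple[int, int]:
    """Return (total_bytes, inode_count) for everything under root."""
    total_bytes, inodes = 0, 0
    for dirpath, dirnames, filenames in os.walk(root):
        inodes += len(dirnames) + len(filenames)
        for name in filenames:
            try:
                total_bytes += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # skip files that vanish or cannot be read
    return total_bytes, inodes

size, count = usage(Path.home())
print(f"{size / 1e12:.2f} TB used, {count:,} inodes")
print("within home limits" if size <= 4e12 and count <= 4_000_000 else "consider NFS Scratch")
```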

Users who expect to exceed these limits or who have I/O-intensive runs should request access to NFS Scratch. This system is specifically designed for very large I/O workloads and large volumes of data. Users of NFS Scratch should be mindful of the following guidelines:

  • User codes and input files can be stored in user home directories. These are backed up nightly, and the backups are geographically diverse.
  • Output data should be written to NFS Scratch space. This space is subject to a 60-day purge policy.
  • Data stored on NFS Scratch is NOT backed up.
  • Post-processed results derived from scratch data should be moved to long-term storage.
  • Inode-heavy data should be compressed in flight to long-term storage (see the sketch after this list).
  • Users should delete unnecessary scratch data after post-processing is complete.
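
As a sketch of the last two guidelines, inode-heavy scratch output can be rolled into a single compressed archive in long-term storage and then removed from scratch. The paths below are hypothetical; the archive-then-delete pattern is the point.

```python
import shutil
import tarfile
from pathlib import Path

# Hypothetical locations; adjust to your scratch and long-term storage paths.
scratch_run = Path("/scratch/username/case_001")   # run directory with many small files
archive_dir = Path.home() / "archive"               # backed-up, long-term space
archive_dir.mkdir(parents=True, exist_ok=True)

# Compress the whole run into one archive, so many small files become
# a single inode in long-term storage.
archive_path = archive_dir / "case_001.tar.gz"
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add(scratch_run, arcname=scratch_run.name)

# Once post-processing is done and the archive is verified, free the
# scratch space rather than leaving it to the 60-day purge.
if archive_path.stat().st_size > 0:
    shutil.rmtree(scratch_run)
```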

In addition, a variety of tools are provided to enable users to manage, protect, and control their data. High-speed networks are available for intra-site and inter-site (ESnet) data transfer. Users can publish data for broad consumption through gateways and portals.

Consulting services are available upon request to help users craft efficient data management processes for their projects.
