The MoleQueue Workflow
Using MoleQueue to perform a simulation consists of two stages: a one-time setup of the queues and programs they can access, and the submission of specific calculations.
Import Preset Configurations
The one-time setup of MoleQueue consists of configuring cluster login details, scheduler interactions, and program execution environments. Fortunately for non-technical users, MoleQueue provides a method of importing preset configurations. This feature enables site maintainers and research groups to provide users with an appropriate configuration file that loads site-specific queue and program details into MoleQueue. In that case, setup consists of simply importing the file through the MoleQueue user interface.
User-specific settings such as login names and working directories still need to be set by hand, but the bulk of the technical details concerning scheduler interaction and program execution are configured by the importer.
More advanced users (or those with less generous system administrators) can configure resources themselves using the MoleQueue application as detailed in the following sections.
Adding a Local Queue
A local queue for performing calculations on a user’s workstation can be created by opening the Queue Manager in MoleQueue, clicking “Add” and selecting the “Local” queue type.
Configuring a local queue is simple: all MoleQueue needs to know is the number of processor cores the user wishes to use for calculations. MoleQueue automatically detects the number of available cores and uses it as the default value.
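This core detection amounts to a simple system query. As an illustration only (this is not MoleQueue's internal implementation, which is done in C++), the same number can be obtained from a shell on Linux or macOS:

```shell
# Print the number of available processor cores.
# nproc is the Linux (coreutils) command; sysctl covers macOS/BSD.
nproc 2>/dev/null || sysctl -n hw.ncpu
```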
Adding a Remote Queue
Queues on remote HPC clusters are added by selecting the type of scheduler running on the cluster. The Portable Batch System (PBS), Sun Grid Engine (SGE), and SLURM schedulers are currently supported, along with their descendants (e.g., Torque, which is PBS-like, and Open Grid Scheduler, which is SGE-like). The setup for each of these is similar, so we'll use the PBS/Torque configuration as an example.
The remote queue’s configuration is initially set to reasonable default values. The status of running and queued jobs will be queried every three minutes, the standard qsub, qdel, and qstat commands will be used to interact with the scheduler, and the batch script will be written to job.pbs. A fully customizable batch script template is provided, using keywords such as $$numberOfCores$$ and $$maxWallTime$$ that will be replaced with job-specific options, and the $$programExecution$$ keyword will be replaced by program-specific execution details.
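As an illustration, a batch script template using these keywords might look like the following sketch. The specific PBS directives and layout here are an assumption for illustration; MoleQueue's actual default template may differ.

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=$$numberOfCores$$
#PBS -l walltime=$$maxWallTime$$
# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"
# Replaced with the program-specific execution details at submit time:
$$programExecution$$
```

When a job is submitted, MoleQueue substitutes the $$...$$ keywords with that job's options and writes the result to job.pbs before calling qsub.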
The connection to the remote host is configured by setting the hostname or IP address of the cluster’s head node (somehost.facility.edu in this example) and the name of the user that will be used during login (user in the above example). The “Test Connection” button will attempt an SSH login to the configured host, allowing for connection troubleshooting if necessary.
Submitted jobs will be copied to and submitted from the “Remote working directory” (/work/user above). “Submit test job” can be used to send a trivial job to the configured queue, enabling users to test their configuration.
Adding Programs to a Queue
Program execution environments are fully configurable. Several presets for common execution syntaxes are available for simple programs, or the entire batch script template can be customized for more complex simulations. This allows programs to make use of advanced resources such as a specific MPI implementation for multi-node parallelism, configuration of environment variables, etc.
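For example, the execution section of a fully customized template for a multi-node MPI program might resemble the sketch below. The module name, environment variable, program name (myprogram), and file names are hypothetical placeholders, not part of MoleQueue itself.

```shell
# Load a specific MPI implementation (hypothetical module name).
module load openmpi
# Example of configuring an environment variable for the run.
export OMP_NUM_THREADS=1
# Launch across the cores allocated by the scheduler.
mpirun -np $$numberOfCores$$ myprogram input.in > output.out
```

The $$numberOfCores$$ keyword is replaced with the job's core count at submission time, as with the other template keywords.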