Hydra
Hydra Information
Hardware Description
Interconnect
Non-blocking switched gigabit Ethernet, with 0.2ms average latency.
4x QDR (40 Gbps) InfiniBand fabric, with microsecond latency.
Master Nodes
Hydra has a master node, master.hydra.scorec.rpi.edu (a.k.a. hydra.scorec.rpi.edu), for managing the operation of the cluster and for interactive use by cluster users.
Pre/Post Processing Node
The host piglet.scorec.rpi.edu was purchased for pre- and post-processing of large-memory data. This machine is just like any other SCOREC host; as with all of the other general-purpose SCOREC systems, the /fasttmp file system is mounted on piglet under /fasttmp.
Using piglet requires special authorization; email help@scorec.rpi.edu to request access. Note that requests that do not explain why more than 32 GB of memory is needed will be denied.
Compute Nodes
The cluster is composed of 28 compute nodes. Each node has an 8-core 2.3 GHz Opteron processor and 16 GB of 1333 MHz DDR3 ECC memory, and connects directly to the InfiniBand fabric.
Job Submission
Hydra uses Slurm to manage job submission. Note that the scheduler runs the Slurm fair-share scheduling plugin, so jobs are prioritized according to each research group's past usage. If you have questions about this, email help@scorec.rpi.edu for more information.
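Batch jobs can also be submitted with sbatch. The script below is only a minimal sketch: the job name, node and task counts, time limit, and the ./my_app executable are placeholders, and the partition must be debug or normal as described below.

```bash
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name shown in squeue
#SBATCH --partition=normal        # debug or normal
#SBATCH --nodes=2                 # number of compute nodes
#SBATCH --ntasks-per-node=8       # one task per core on the 8-core nodes
#SBATCH --time=01:00:00           # wall-clock limit

# Launch the (placeholder) application on the allocated compute nodes;
# without srun it would run only on the node where the script executes.
srun ./my_app
```

Submit the script with sbatch jobscript.sh and monitor it with squeue.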
To start an interactive session on one node:
salloc -N 1 -p <partition>
where partition is either debug or normal.
Within the allocation, execute compute jobs with srun; otherwise they will execute on the master node (see the sketch below).
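The commands below sketch a typical interactive session; the debug partition and the hostname command are only illustrative placeholders.

```bash
# Request an interactive allocation of one node from the debug partition
salloc -N 1 -p debug

# Inside the allocation, launch work on the compute node with srun
srun hostname

# Release the allocation when finished
exit
```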
Further information can be found on the CCNI Wiki and on the Slurm page.
Environment Variables
To pass environment variables through to the compute nodes, such as LD_LIBRARY_PATH, add them to your ~/.bashrc file.
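For example, to make a library directory visible to jobs on the compute nodes, a line like the following could be appended to ~/.bashrc; the directory path here is purely a placeholder.

```bash
# Prepend a placeholder library directory to LD_LIBRARY_PATH for every shell,
# including the non-interactive shells Slurm starts on the compute nodes
export LD_LIBRARY_PATH=/users/$USER/mylibs/lib:$LD_LIBRARY_PATH
```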
Killing Jobs
scancel <jobid from squeue>
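For example, to look up and cancel one of your own jobs (the job ID 12345 is a placeholder):

```bash
# List your queued and running jobs to find the job ID
squeue -u $USER

# Cancel the job using the ID reported by squeue
scancel 12345
```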
Disk Storage
Location | Size | Description |
---|---|---|
/fasttmp | 16TB | |
/users | 500MB | |
/import/users | 1TB | |
/scratch | ~40GB | |