Non-blocking switched gigabit Ethernet, with 0.2ms average latency.
4xQDR (40Gbps) Infiniband fabric, with microsecond latency.
Hydra has a master node, master.hydra.scorec.rpi.edu (a.k.a. hydra.scorec.rpi.edu), which manages the operation of the cluster and supports interactive use by cluster users.
Pre/Post Processing Node
The host piglet.scorec.rpi.edu was purchased for pre- and post-processing of large-memory data. It is otherwise just like any other SCOREC host; as on the other general-purpose SCOREC systems, the /fasttmp file system is mounted under /fasttmp.
It does require special authorization to use; email firstname.lastname@example.org to request access. Note that requests that do not explain why more than 32 GB of memory is needed will be denied.
The cluster is composed of 28 compute nodes. Each node includes an 8-core 2.3GHz Opteron processor and 16 GB of 1333MHz DDR3 ECC memory, and connects directly to the Infiniband fabric.
Hydra uses Slurm to manage job submission. Note that the scheduler runs the Slurm fair-share scheduling plugin, so jobs are prioritized according to each research group's past usage. If you have questions about this, please email email@example.com for more information.
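As a sketch of non-interactive submission under Slurm, a batch script might look like the following. The job name, node count, time limit, and executable name are all placeholders, not site-specific values; only the partition names come from this page.

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- adjust all values for your own job.
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=normal        # or: debug
#SBATCH --nodes=2                 # number of compute nodes requested
#SBATCH --time=00:30:00           # wall-clock limit (HH:MM:SS)

# srun launches the executable on the allocated compute nodes.
srun ./my_program
```

Submit the script with `sbatch <scriptname>` and monitor it with `squeue`.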
To start an interactive session on one node:
salloc -N 1 -p <partition>
where <partition> is either debug or normal.
Execute compute jobs with srun; otherwise they will run on the master node.
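For example, after the allocation is granted, commands prefixed with srun run on the allocated compute node rather than the master (the commands shown are generic placeholders):

```shell
# Request one node in the debug partition for interactive use.
salloc -N 1 -p debug

# Inside the allocation: this runs on the compute node.
srun hostname

# Without srun, the same command would run on the master node instead.
```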
To pass environment variables through to the compute nodes, such as LD_LIBRARY_PATH, add them to your shell startup file (e.g. ~/.bashrc).
To cancel a job:
scancel <jobid from squeue>
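Putting monitoring and cancellation together (the job id 12345 below is a placeholder; use the id reported for your own job):

```shell
squeue -u $USER   # list your queued and running jobs with their job ids
scancel 12345     # cancel the job whose id squeue reported
```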