From cmclaughlin at fsu.edu  Wed Jun 26 09:28:50 2024
From: cmclaughlin at fsu.edu (Casey Mc Laughlin)
Date: Wed, 26 Jun 2024 13:28:50 +0000
Subject: [Hpc-notice] MVAPICH module changes on July 10
Message-ID:

Hi HPC users,

We recently made MVAPICH v3 available on the HPC (details). If you use MVAPICH, please be aware of the following changes to the module names, which take effect on July 10, 2024.

We are renaming the "mvapich2/version" module pattern to "mvapich/version". No paths on the system are changing, only the module names, so no recompilation is necessary. You will, however, need to update your Slurm submit scripts.

Starting on July 10, use the following syntax to load MVAPICH in your Slurm submit scripts:

# GNU
module load gnu mvapich/2.3.5
module load gnu mvapich/3.0

# Intel
module load intel mvapich/2.3.5
module load intel mvapich/3.0

We will make mvapich/3.0 the default, so if you need version 2.3.5, specify the version number at the end of the "module load" command.

Please let us know if you have any questions: support at rcc.fsu.edu

Best regards,
The RCC Team


From cmclaughlin at fsu.edu  Wed Jun 26 14:18:18 2024
From: cmclaughlin at fsu.edu (Casey Mc Laughlin)
Date: Wed, 26 Jun 2024 18:18:18 +0000
Subject: [Hpc-notice] New HPE liquid cooled servers deployed to HPC
Message-ID:

Hi HPC users,

We are happy to report the general availability of our first 28 liquid-cooled compute nodes to research groups that have bought into the cluster.

The new nodes were added to the owner Slurm accounts/queues today (June 26) and are automatically available for jobs submitted to dedicated queues/partitions. There is no need to change any parameters in your submit scripts: if you purchased resources on our cluster, Slurm will automatically include the new nodes when scheduling your jobs. Note that this does not apply to certain Slurm accounts, such as GPU node purchases.

The node specs are:

* 2x AMD EPYC 9454 48-core processors
* 384 GB of RAM
* 1 TB solid-state disks
* 200 Gbps InfiniBand networking

This adds 2,688 CPU cores to the HPC, bringing the cluster total to 20,736 cores.

Best regards,
The RCC Team
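
For illustration, a minimal Slurm submit script using the renamed module from the first announcement above might look like the sketch below. The job name, task count, time limit, partition name, and the my_mpi_app binary are placeholders rather than values from the announcement, and srun is only one way to launch MPI programs; use whatever launcher your site documents.

#!/bin/bash
#SBATCH --job-name=mvapich_test   # placeholder job name
#SBATCH --ntasks=48               # example MPI rank count
#SBATCH --time=01:00:00           # example wall-clock limit
#SBATCH --partition=backfill      # placeholder; use your own queue/partition

# New module name pattern, effective July 10, 2024
module load gnu mvapich/3.0

# my_mpi_app is a hypothetical executable; substitute your own program
srun ./my_mpi_app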
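
Similarly, owner groups who want to confirm that the new liquid-cooled nodes from the second announcement appear in their dedicated partition can list node details with standard Slurm commands, as sketched below. The partition name mygroup_q and the node name are placeholders; substitute your group's actual owner queue and a real host name.

# Node-oriented, long-format listing of one partition (placeholder name)
sinfo -N -l -p mygroup_q

# Detailed attributes of a single node (placeholder name)
scontrol show node nodename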