We are glad to announce the availability of third-generation (Lr3) nodes in the Lawrencium cluster, the Lab's institutional scientific computing system available for use by LBNL PIs. We have recently added 108 new compute nodes, each equipped with dual-socket, eight-core Intel Sandy Bridge 2.6 GHz processors (16 cores/node) and 64 GB of 1600 MHz memory. The new nodes are connected by the latest high-performance, low-latency 56 Gb/s FDR InfiniBand interconnect, compared with the QDR 40 Gb/s and DDR 20 Gb/s interconnects in the earlier generations, and they share the same user environment and storage as the Lr1 and Lr2 clusters.
Comparison of available Lawrencium nodes:
Third generation (lr3) nodes - 16 cores, 64 GB memory, FDR 56 Gb/s InfiniBand
Second generation (lr2) nodes - 12 cores, 24 GB memory, QDR 40 Gb/s InfiniBand
First generation (lr1) nodes - 8 cores, 16 GB memory, DDR 20 Gb/s InfiniBand
Any Lawrencium user can now access these new nodes by submitting jobs to the same routing queue as before ("lr_batch") and specifying the desired node type on the "-l" line of the PBS job script. For example:
1) To run on the new, third-generation (lr3) nodes, specify:
#PBS -q lr_batch
#PBS -l nodes=X:ppn=Y:lr3
If you do not specify a node type (:lr1, :lr2, or :lr3), the job will default to the lr1 nodes.
Please note that a 60-second default walltime is also configured: any job submitted to a Lawrencium queue without a walltime request will run for only 60 seconds, so make sure you specify a walltime in all of your jobs.
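Putting the pieces above together, a complete job script that requests lr3 nodes and sets an explicit walltime might look like the following minimal sketch. The job name, node count, and the command at the end are hypothetical placeholders; adjust them to your own workload.

```shell
#!/bin/bash
# Sketch of a PBS job script for the lr3 nodes (illustrative values).
#PBS -q lr_batch                  # same routing queue as before
#PBS -l nodes=2:ppn=16:lr3        # two 16-core lr3 nodes (example counts)
#PBS -l walltime=01:00:00         # always set a walltime (default is only 60 s)
#PBS -N example_job               # hypothetical job name

# PBS sets PBS_O_WORKDIR to the submission directory; fall back to the
# current directory so the script also runs outside the batch system.
cd "${PBS_O_WORKDIR:-.}"

# Placeholder workload: report where the job landed.
msg="Running on $(hostname)"
echo "$msg"
```

Submit the script with `qsub`, and remember that omitting the `:lr3` selector sends the job to the lr1 nodes by default.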
We hope our users will make good use of these enhanced resources to get more research done quickly. Interested and new users can visit the HPC Services web site to learn more and to request an account. Please email us at email@example.com if you have any questions. Enjoy.