As part of the Molecular Foundry's Theory Facility we use a number of local compute clusters as well as the National Energy Research Scientific Computing Center (NERSC). Cluster resources are managed by the High Performance Computing Services (HPCS) group at LBNL. The Foundry's annual allocations of computer time at NERSC are obtained through a yearly competitive proposal process; NERSC computer time is well-suited to and extensively used by Foundry Scientists and Users for long-term projects and "production" runs that require large-scale computation.
Purchased with ARRA Stimulus Funds, Vulcan is a 242-node cluster with an InfiniBand interconnect and a 41.7 TB Lustre parallel file system. Each node has two 2.4 GHz Intel Xeon E5530 quad-core Nehalem processors with 3 GB RAM per core. Vulcan is also connected to an additional 57.0 TB BlueArc NFS file system. Theoretical performance is 18.1 TFLOPS with 5.7 TB of total memory.
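As a rough sanity check, the aggregate figures follow directly from the per-node specifications. The sketch below is not an official calculation; it assumes 4 double-precision floating-point operations per core per cycle (typical for Nehalem-generation Xeons) and that all 242 nodes are counted, so it lands slightly above the quoted 18.1 TFLOPS, which may reflect how service or login nodes are counted.

```python
# Back-of-the-envelope estimate of Vulcan's total memory and peak FLOPS.
# Assumption: 4 double-precision FLOPs per core per cycle (SSE add + multiply),
# and all 242 nodes included in the totals.

nodes = 242
cores_per_node = 2 * 4          # two quad-core Xeon E5530 processors per node
clock_hz = 2.4e9                # 2.4 GHz
flops_per_cycle = 4             # assumed for Nehalem-era Xeons
ram_per_core_gb = 3

total_memory_tb = nodes * cores_per_node * ram_per_core_gb / 1024
peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12

print(f"Total memory: {total_memory_tb:.1f} TB")       # ~5.7 TB, matching the quoted figure
print(f"Peak performance: {peak_tflops:.1f} TFLOPS")   # ~18.6 TFLOPS vs. the quoted 18.1 TFLOPS
```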
Vulcan is used exclusively by Theory Facility Staff and Users. Ideal for exploratory research, Vulcan provides the Theory Facility with the flexibility to address exciting new problems as they arise, allows fast turnaround for development projects, and is highly scalable for future expansion.
The Theory Facility's first in-house parallel Linux cluster, Nano is a 624-core Intel Xeon processor machine (consisting of a mix of 2-, 4-, and 8-core nodes) networked with high-speed, low-latency InfiniBand interconnects; it has 824 GB of total memory, uses a 10.1 TB Panasas parallel file system, and has a theoretical peak performance of 3.1 TFLOPS.
The Theory Facility supplements its cluster computing resources with access to Lawrencium, an LBNL-wide computing resource.
The University of California Office of the President (UCOP) sponsors the Shared Research Computing Services Pilot Project in collaboration with the ten UC campuses and LBNL. The Theory Facility has access to its North Cluster, known as Mako. This cluster comprises 272 dual quad-core (Nehalem) nodes and is quite similar in architecture to Vulcan.