Blog from November, 2016

The San Diego Supercomputer Center (SDSC), an Organized Research Unit of the University of California, San Diego, is making major high-performance computing resources available to the UC community. This program, HPC@UC, is being offered in partnership with the UC Vice Chancellors of Research as well as campus CIOs. Awards under this program are intended to help UC researchers expand their overall research program. Specifically, this program is designed to:

  • Broaden the base of UC researchers who use advanced computing
  • Seed promising computational research
  • Facilitate collaborations between SDSC and UC researchers
  • Give UC researchers access to cyberinfrastructure that complements what is available at their campus
  • Help UC researchers be successful when pursuing larger allocation requests through NSF’s Extreme Science and Engineering Discovery Environment program (XSEDE), and other national computing programs

Advanced cyberinfrastructure (CI), including high-performance computing (HPC) systems, is critical to advancing science and discovery across a wide range of research domains. SDSC operates some of the most advanced CI in the nation, including a petascale computing system for conducting complex numerical simulations, and high-performance storage systems for moving, analyzing, and storing massive amounts of data from simulation and experiment. SDSC’s HPC applications specialists, data scientists, and systems administrators provide the support and expertise required to make maximum use of these resources. Access to the resources is ubiquitous over high-speed networks between the campuses, with specialized hardware that enables data movement at speeds in the tens of gigabits per second.

Resources Available

  • Comet supercomputer: A ~2 PFlop/s system featuring 1,944 nodes, each with two 12-core Intel Haswell processors, 128 GB of memory, and 320 GB of flash storage; 36 GPU nodes, each with two NVIDIA K80 GPGPUs; and 4 large-memory nodes, each with 4 Intel Haswell processors and 1.5 TB of memory;
  • Gordon supercomputer: A ~340 TFlop/s system featuring 1,024 two-socket nodes with Intel Sandy Bridge processors and 64 GB of memory per node; and 300 TB of high-performance flash memory;
  • Data Resources: Over 7 PB of high-speed storage made available via Lustre parallel file systems, either as short-term Performance Storage used for temporary files, or as long-term, non-purged Project Storage that persists for the life of the project. A Durable Storage resource provides a second copy of all data in the Project Storage file system;
  • Applications: A large installed base of applications for HPC and big data analytics;
  • Expertise: SDSC staff have broad expertise in the application of advanced computing and stand ready to assist you in making the best use of these resources for your research.

Trial Accounts on Comet: Before submitting an allocation request to HPC@UC, you may also want to consider requesting a Trial account on SDSC’s Comet system. These allocations of 1,000 core-hours are intended for a quick assessment of whether Comet meets your needs, and requests are fulfilled within 2 working days. You are eligible for an HPC@UC allocation even if you already have a Trial account.

Eligibility and Review

  • Applications may request up to 1M core-hours of computing, associated data storage, and access to SDSC expertise to assist their research team. Awards are active for one year.
  • Applicants must not have an active award in the NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) program.
  • The expectation is that these awards will lead to larger, formal allocation requests on national HPC systems that are available through the XSEDE program. SDSC staff will assist in developing these allocation applications.
  • Applications are reviewed on an ongoing basis. Applicants will be notified within 10 business days of the review decision.


Apply 


HelloSign will experience a planned outage to upgrade key aspects of their infrastructure on Saturday, November 19th, starting at 10pm Pacific and lasting approximately 1 hour.

We are pleased to announce the immediate availability of the Jupyter Notebook service on the Lawrencium cluster.

Jupyter Notebook is a useful web application that allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. It can also organize your existing data flow, computation, visualization, documentation, and publication into a single workflow. We have extended JupyterHub so that it can now leverage Lawrencium cluster resources, supporting code that needs high-performance computing and reducing turnaround time.
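As a simple illustration (the data and function name here are purely hypothetical, not part of the Lawrencium service), this is the kind of live-code cell you might run in a notebook, combining computation with inline output:

```python
# A minimal example of a Jupyter notebook cell: compute summary
# statistics for some sample measurements. Illustrative only --
# the data and helper are hypothetical.
from statistics import mean, stdev

samples = [2.1, 2.5, 1.9, 2.7, 2.3, 2.6]

def summarize(values):
    """Return (mean, sample standard deviation) for a list of numbers."""
    return mean(values), stdev(values)

avg, spread = summarize(samples)
print(f"mean={avg:.2f}, stdev={spread:.2f}")
```

In a notebook, the printed result (and any plots) appear directly beneath the cell, which is what makes the format convenient for sharing analysis alongside explanatory text.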

Lawrencium users can go to our online documentation to see how to get started. In the next several months we will continue to develop and add new features to the Jupyter Notebook service, so please stay tuned! As always, feedback, comments, implementation suggestions, feature requests, and bug reports are all welcome (to hpcshelp@lbl.gov)!

Happy crunching!


The Sympa mailing list server (lists.lbl.gov) upgrade is complete.


The Sympa mailing list server was offline due to a scheduled upgrade on Thursday, November 3, 2016 from 5 PM to 7:30 PM.  

This outage impacted all mailing lists provided by lists.lbl.gov including all Level 1 email lists.

During the upgrade, the Sympa list service was unavailable. Messages sent during the outage should now be delivered.


