Blog

If you notice a strange calendar event that appears to be spam, our Commons instructions page explains how to report the event to Google.

If these steps do not resolve the spam issue, please contact Help Desk at (510) 486-4357 (xHELP).

Good morning, LBL. Our Help Desk is short-staffed today, 8/22/2016; please be advised that hold times could be lengthy. Remember, you can submit your help request via email to [email protected]

We will try to assist every caller, email, and ticket as best we can. Thank you.

Google’s IPO

August 19, 2004

Google holds its Initial Public Offering (IPO), selling over 22 million shares at a starting price of $85. Google shares closed that day at $100.34, and the IPO created many instant millionaires and a few billionaires.

(via http://thisdayintechhistory.com/)

HP Incorporated

August 18, 1947

Hewlett-Packard is incorporated by William Hewlett and David Packard, nine years after they sold their first products from their garage in Palo Alto. Hewlett and Packard got their start in 1938 by producing oscillators used to test audio equipment. Since selling eight of their first oscillators to Disney for use in preparing movie theaters for the movie Fantasia, HP has grown into one of the largest technology companies in the world.

(via http://thisdayintechhistory.com/)

As of 11:00AM August 18, 2016, there have been no outages reported.


Files are not displaying in the browser, and Google is reporting issues with its servers.

LBL Help Desk is monitoring the situation and will update when we have more information.

Smartsheet will be offline for 180 minutes on August 6, 2016, starting at 5:00 PM PDT, for system improvements.

Collaboration Services is making back-end adjustments to our email pipeline the week of 1 August. These changes should not impact users; however, if you have any email-related problems, please notify the IT User Support Help Desk at 486-4357.

Google is currently experiencing issues with Google Docs.  This issue is also impacting the functionality of Google Forms.

For status on this issue from Google visit the Google Apps Status Dashboard.

***This issue was resolved before 8:30 AM.***

Google Calendar is down worldwide this morning. There is no ETA for restoration. You may be able to view an out-of-date cached version of your calendar on your mobile device.

You can follow the Google status page for this issue here: http://www.google.com/appsstatus#hl=en&v=issue&sid=2&iid=847490285bf1b9e082a699bafb95f53b

Commons will be offline on Thursday, May 26, 2016 starting at 5:00 PM for approximately 15 minutes for system improvements.

HPCS Storage Lead John White will be giving a talk at the annual Lustre Users Group 2016 conference being held in Portland, Oregon this week.

John's presentation will provide an introduction to the experiences and challenges involved in providing parallel storage to an HPC-focused, condo-style infrastructure. High Performance Computing Services at Lawrence Berkeley Lab serves as a middle ground at the institutional level, between grad-student-managed computation and national-allocation-class computing such as that offered by NERSC and XSEDE. Given our revenue sources, our infrastructure style is characterized by frequent small-scale buy-ins as well as infrequent 'large' grant funding. That challenge has shaped a nimble infrastructure that maximizes our customers' dollar/GB and dollar/FLOP but has led to unique requirements from Lustre, including upgrade paths that break traditional parallel file system rules. We focus on ease of management and, above all, a deep desire for uniformity across numerous Lustre instances to reach the goal of a true building-block infrastructure.


We are pleased to announce the “Low Priority QoS (Quality of Service)” pilot program, which allows users to use Lawrencium Cluster resources at no charge when running jobs at a lower priority.

This program, tested by Lawrencium Condo users, is now available to all Lawrencium users. We hope it will help users increase their productivity by allowing them to make use of otherwise available computing resources.

Two new QoSs, "lr_lowprio" and "mako_lowprio", have been added that allow users to run jobs requesting up to 64 nodes and 3 days of runtime. This includes all general-purpose partitions, such as lr2, lr3, lr4, and mako, and special-purpose partitions, such as lr_amd, lr_bigmem, lr_manycore, and mako_manycore. By using these new QoSs, you are NOT subject to the usage recharge that we currently collect through the “lr_normal” and “mako_normal” QoSs; however, these QoSs do not get as high a priority as the general, debug, and condo QoSs, and they are subject to preemption by jobs submitted at normal priority.

This has two implications for you:

1. When the system is busy, any job submitted with a low-priority QoS will yield to jobs with higher priorities. If you are running debug, interactive, or other types of jobs that require quick turnaround, or have an important deadline to meet, you may still want to use the general QoSs.

2. Further, when the system is busy and higher-priority jobs are pending, the scheduler will automatically preempt jobs running under these low-priority QoSs. The preempted jobs are chosen by the scheduler automatically, and we have no way to set selection criteria to control this behavior. Users can choose at submission time whether a preempted job should simply be killed, or be automatically requeued after it is killed. Hence, we recommend that your application perform periodic checkpoints so that it can restart from the last checkpoint. If your job cannot checkpoint and restart by itself, or cannot be interrupted during its runtime, you may want to use the general QoSs.

To submit jobs to this QoS, you will need to provide all the normal parameters (e.g., --partition=lr3, --account=ac_projectname, etc.); for the QoS, please use "--qos=lr_lowprio" or "--qos=mako_lowprio", and make sure you request no more than 64 nodes and 3 days of runtime for the job. If you would like the scheduler to requeue the job in its entirety in the case that the job is preempted, please add "--requeue" to your srun or sbatch command; otherwise the job will simply be killed when preemption happens. An example job script should look like the one below:

====
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --partition=lr3  ### other partition options: lr2, lr4, lr_bigmem, lr_manycore, lr_amd, mako, mako_manycore
#SBATCH --account=ac_projectname
#SBATCH --qos=lr_lowprio ### another QoS option: mako_lowprio
###SBATCH --requeue ### only needed if automatically requeue is desired
#SBATCH --ntasks=20
#SBATCH --time=24:00:00

mpirun a.out
====
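Putting the submission flags from the guidance above together on the command line might look like the sketch below (the script name job.sh is a hypothetical placeholder; substitute your own). To keep the sketch safe to copy, it only prints the sbatch invocation rather than running it:

```shell
# Compose the sbatch invocation for a low-priority run.
# job.sh is a hypothetical script name; substitute your own.
PARTITION=lr3             # or lr2, lr4, mako, etc.
ACCOUNT=ac_projectname    # your project account
QOS=lr_lowprio            # or mako_lowprio on the mako partitions
# --requeue asks the scheduler to requeue the job if it is preempted;
# omit it if you would rather the job simply be killed.
echo sbatch --partition=$PARTITION --account=$ACCOUNT --qos=$QOS --requeue job.sh
```

Remove the leading `echo` to actually submit the job on a cluster login node.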

For condo users who have been helping us test these low-priority QoSs on the lr2 and mako partitions: your current associations with your “lr_condo” account have not changed, so you can continue to use them, but they are limited to the lr2 and mako partitions only. If you intend to use other partitions, you will need to change the account from “lr_condo” to “ac_condo”, e.g., “lr_nanotheory” -> “ac_nanotheory”. We will phase out associations connected to your “lr_condo” account in the next month without further notice, so please make the change now.

For more information about this program and how to use the low-priority QoSs properly, please check our online user guide:

https://sites.google.com/a/lbl.gov/high-performance-computing-services-group/lbnl-supercluster/lawrencium

The pilot program will run for two months (Mar 22 - May 22), and we will decide how to proceed from there based on usage and feedback.
Please forward your requests, questions, and comments to [email protected] during this pilot period.

Are you running out of space in your office, moving to a new office, or tasked with processing the records of retiring (or already retired) scientists? As part of National Records and Information Management Month, the Archives and Records Office is sponsoring a workshop on Apr. 22 from 10:30 am to noon in 50A-5132, where you'll learn which files need to be kept, which can be archived, and which can be disposed of. For more info and to register, go here:

https://hris.lbl.gov/self_service/training/