Blog from July 2012

Google continually enhances its products. We want to bring some of the recent changes to your attention and also give everyone a heads-up on recent efforts by IT's Google Support Team.

We also encourage anyone with a special interest in Google to join the LBL Google Users Group, primarily a mailing list we use to provide advance notice of new features, request feedback, and follow community discussions. Join the group by going here.

Local Resources

A "New Employee Intro to Google" page was added.   We have this linked under N (New employee ...) and G (Google ...) in the A-Z index and will be sending the link to each person who requests a new Google account. (This is linked off our main Google Help Center)

We are starting to explore Google Apps Script as a way to enhance our use of the product. A space on the Commons wiki was set up for anyone interested in writing scripts (or using ones others have written). For example, we have a method for users to forward a batch of email to someone else (after first labeling the messages of interest); a sketch of the idea appears below. This is a great solution for someone leaving the Lab who wants to transfer job-related emails to the person taking over the position.
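To give a flavor of what Apps Script can do, here is a minimal sketch of the batch-forwarding idea. It is illustrative only, not the exact script posted on the Commons wiki; the label name and recipient address are hypothetical placeholders you would replace with your own.

    // Forward every message carrying a given Gmail label to a colleague.
    // 'handoff' and the recipient address are placeholders, not real values.
    function forwardLabeledMessages() {
      var label = GmailApp.getUserLabelByName('handoff');
      var recipient = 'successor@lbl.gov';
      if (!label) {
        Logger.log('Label not found; nothing to forward.');
        return;
      }
      var threads = label.getThreads();
      for (var i = 0; i < threads.length; i++) {
        var messages = threads[i].getMessages();
        for (var j = 0; j < messages.length; j++) {
          messages[j].forward(recipient); // sends a copy of each message
        }
      }
    }

Note that Gmail imposes daily sending quotas, so a large batch may need to be forwarded in stages.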

Google+

We have added Google+ (a social networking service) to the list of applications available to Lab users. There is no "enterprise version" of Google+, so there are no boundary conditions: you are not restricted to seeing just LBL users or to posting new content inside the Lab. One feature that we think is very useful is Hangouts, which provide multi-person video conferencing. Recently, the ability to schedule an event with others was added, and these events will show up on your Google Calendar once you accept the invitation that arrives in Gmail. Access to Google+ requires registration and acceptance of terms and conditions, and you will have to provide a birth date as part of this process. Google is not allowed to provide access to anyone under 13; if that situation is detected, the account is immediately disabled and you will lose access to every Google product, including email and calendar. Be careful!

Google Drive

Google Drive (formerly Google Docs) has been out for several months. The change was twofold: a new look for the web user interface and the introduction of a desktop sync client for Macs and PCs. (Note: we are still in the "opt-in" stage, where you can use the old "classic" look as well as the new one.)

Google Drive's new desktop sync client continues to be a work in progress. We are closely following the Google Drive forums while we test this "Dropbox" alternative at the Lab. We think it will eventually be a great addition - cloud-based storage that syncs to any number of workstations - but for now it is still a beta product. Try it with test files, but use it with caution.

The key issues (as reported by users worldwide) include excessive CPU use, problems on Windows XP Service Pack 3 systems (Explorer crashes on some of them), the need to continually re-sync the entire contents of the workstation folder (even when only a few files have changed), and design choices that make it too easy to take a file or folder that has been shared with you and remove it from the folder hierarchy the owner established. We have also heard reports of lost data. We do not know how often these issues surface, but since they are being reported, it is wise to go slow with adoption.

Datacenter Energy Efficiency Team Wins Two Awards

The core team responsible for our multi-year initiative to improve the energy efficiency of our research and operational computing datacenter has been recognized with two awards in one month: a Director's Achievement Award and a Larry Sautter Award from the University of California. Congratulations to the team! The Pandas thank you.

Summary: Through a unique collaboration between Berkeley Lab’s energy efficiency researchers and the IT Division, the team undertook a number of experimental and cutting-edge approaches to improving legacy data center energy efficiency. These measures included one of the earliest implementations of a complete wireless sensor network to monitor the impact of the changes, extensive changes to the facilities infrastructure in the room, testing of early device prototypes with industrial partners, virtualization and consolidation, testing of modified controllable computer room air conditioner (CRAC) units, and many more. In November 2010, the team achieved an important milestone: a view of real-time power usage effectiveness (PUE) in the datacenter.

Project Description:
The team came together out of the combination of EETD’s expertise in data center energy efficiency and IT’s pressing need to expand scientific computing resources in its datacenter to meet increasing demand. The goal of the project was to simultaneously explore and validate solutions for EETD’s research and to use those solutions to improve the data center’s operational efficiency, thereby allowing an increase in computational capability.
While the estimated PUE (total facility energy use divided by IT equipment energy use) of the datacenter suggested that it was relatively efficient compared to others the EETD team had benchmarked, the team believed that significant improvements were possible. The team initially developed a computational fluid dynamics (CFD) model of airflow in the datacenter. The data from this model confirmed that airflow mixing in the datacenter contributed to its inefficiency, and suggested the need for a monitoring system that would let the team fully visualize and understand the scope of the problem while providing immediate feedback on the impact of changes as they were implemented.

The team engaged Synapsense, which at the time was just beginning development of its real-time wireless monitoring application, to deploy a system that permits detailed analysis of environmental conditions (humidity and temperature) along with air pressure and power monitoring at hundreds of points within the datacenter. The team worked with Synapsense to improve the product based on this operating experience. The work was conducted in phases over several years and continues to explore new efficiency opportunities.

Once the system was deployed, the team used the data to begin changing the airflow and making other operational adjustments in the datacenter. The team undertook a variety of fixes, some small and some large:
Floor tile tuning to improve air pressure
Hot aisle/cold aisle isolation
Conversion of the overhead plenum to a hot-air return
Extension of CRAC returns to connect to the overhead plenum
Installation of curtains to further reduce hot aisle/cold aisle mixing
Installation of water-cooled doors running on non-chilled water (a collaboration with the vendor to reduce energy use)
Piloting of fully enclosed racks
Use of higher ambient temperature setpoints to improve efficiency

Throughout the process, the team collaborated with industrial partners to pilot new technology while influencing the technology roadmap for these products in the marketplace. This trend continues today with testing of a prototype APC in-row cooler, and with another project that may be the first-ever computer-controlled CRAC fan and compressor control system, one that can dynamically adjust cooling power depending on conditions in the data center.

The culmination of this initial work occurred in November 2010, when LBL became one of the first organizations in the federal space, and among a handful of smaller data centers in the world, able to calculate and view the data center’s power usage effectiveness (PUE) in real time. This critical metric, which compares the total power used by the data center to the power used by the computers themselves, helps staff manage the data center dynamically to best achieve environmental goals. One vendor partner visited in November 2010 to present awards for the role LBL staff played in this achievement and in the roadmap for their product (http://today.lbl.gov/2010/11/12/berkeley-lab-data-center-at-cutting-edge-of-efficiency/).
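For reference, PUE has a simple definition (this is the standard industry formula, not something specific to our deployment); a PUE of 1.0 would mean every watt entering the room went to the computing equipment itself:

    \mathrm{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}}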

In addition to the extensive collaboration between IT’s facilities experts and the researchers, the High Performance Computing, Infrastructure, and Collaboration teams also helped support these goals. During this time, IT consolidated and virtualized its business systems, further reducing the energy and floor-space footprint in the datacenter. The move to cloud-based systems for email and collaborative applications also increased resiliency while reducing load on the datacenter. Finally, the HPC group continues to work with researchers to support demand-response testing, which allows scientific computing load to be shed during times of reduced energy availability or in response to data center and environmental conditions.

By any measure, the impact of this achievement has been felt far beyond LBL’s data center. Numerous publications and reports have grown out of the work, and LBL’s efforts have drawn kudos from around the world.

In the datacenter itself, LBL went from a situation where new datacenter construction would be needed imminently to one in which we have headroom to keep adding scientific computing. Indeed, the room, which we believed to be at capacity in 2007, currently runs 50% more scientific computing, yet its efficiency (PUE) has improved from 1.6 to 1.4 over that time. The room still has space to grow as we continue to adopt cutting-edge, energy-efficient cooling technologies.
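A back-of-the-envelope reading of those numbers (our illustration, not a figure from the project reports): at a PUE of 1.6, each kilowatt of IT load carries 0.6 kW of cooling and infrastructure overhead; at 1.4, only 0.4 kW. That is roughly a one-third cut in overhead per unit of computing, achieved even while the computing itself grew by 50%:

    1.6 - 1.0 = 0.6\ \text{kW overhead per IT kW} \;\longrightarrow\; 1.4 - 1.0 = 0.4\ \text{kW}, \qquad \frac{0.6 - 0.4}{0.6} \approx 33\%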

One good indicator of the quality of this achievement is the extent to which it has been studied by others. Dozens of individuals and groups from industry, academia, and government have toured our datacenter as a model for greening datacenters that were not built with modern efficiency standards in mind. While Google, Yahoo, and Facebook’s efforts get most of the industry’s attention, most companies, agencies, and universities have one or more legacy datacenters to deal with and no resources to build a new one from scratch.

Overall, this project represents a rare confluence of achievements: it simultaneously enabled new research and demonstration projects in a scientific division directly related to data center efficiency, enabled science in virtually every other scientific area of the Laboratory by allowing a significant expansion of scientific computing resources, and reduced operational costs by providing continued and expanded capacity in a resource that was believed to be exhausted. In bringing together an operational need with scientific research, the Laboratory has shown by example how energy can be saved in older data centers, and has demonstrated how a continuous improvement process can lead to ongoing energy savings.