
What You Need to Know

  • Commercial generative AI tools are not covered under negotiated agreements; only provide non-sensitive/public inputs to these tools. Never enter proprietary, export-controlled, PII, or other controlled data into these tools.
  • Generative AI can create novel and unresolved intellectual property issues, ranging from "new" images derived closely from their original sources, to potential claims that a third-party tool played a role in developing a technology the Lab is working on, to complaints regarding the training sets of researcher-created models. Use caution, especially around anything related to the development of intellectual property.
  • You can refer to this cheat-sheet to help you understand the types of data and information categories appropriate for the different types of generative AI tools and models. 

Generative AI Tools and Data Protection

As advancements in artificial intelligence continue to shape the landscape of scientific research and innovation, it is imperative that we establish clear boundaries and best practices for the responsible deployment of Generative AI tools within our information ecosystem. This document provides Berkeley Lab employees with guidance on the use of Generative AI technologies while ensuring the security of sensitive information developed at Berkeley Lab.

Licensing Agreements

Berkeley Lab policy requires that all software licensing agreements with suppliers (and other entities), particularly those software suppliers that will host data generated or owned by Berkeley Lab, the UC, or the Department of Energy as part of their services, obligate the supplier to comply with a variety of Berkeley Lab data security and privacy practices. These include, but are not limited to, the protection of all Berkeley Lab data against loss, unauthorized access, and misuse; the indemnification of Berkeley Lab, the University of California, and the Department of Energy; and compliance with all software-related requirements. The data security and privacy provisions contained in these agreements are a critical part of Berkeley Lab's ongoing efforts to protect the data and information related to its research and operations, as well as the personal information of its employees.

However, there is currently no agreement with OpenAI, the developer of ChatGPT, or its licensees that would provide these types of protections for ChatGPT or OpenAI's programming interface. Consequently, using ChatGPT at this time could expose the individual user, Berkeley Lab, the UC, and the Department of Energy to the potential loss and/or abuse of highly sensitive data and information. Similarly, Google's Bard AI can be accessed with your LBL identity, but it is subject to different protections than the core Google Workspace accounts. There are no agreements with popular image-generation tools such as Midjourney, DALL-E (OpenAI), or online providers of Stable Diffusion.

Use of Generative AI Tools

Do not submit confidential or protected information, such as Personally Identifiable Information, Proprietary Information, or Export Controlled Information, to Generative AI tools like OpenAI's ChatGPT or Google's Bard. In general, any data that is determined to be protected or controlled should not be used to generate a response using Generative AI tools. You can refer to the Lab's Controlled and Prohibited Information Categories policy to learn more about each information type.

Please be advised that in the absence of a Berkeley Lab agreement covering use of ChatGPT, your use of ChatGPT constitutes personal use, and obligates you, as an individual, to assume responsibility for compliance with the terms and conditions set forth in OpenAI’s own Terms of Use.

For further guidance on using Google Bard, please review the references listed below that address Google’s Generative AI Terms of Service. This includes Google’s Prohibited Use Policy as well as Privacy Policy and Use Restrictions. 

Please note that OpenAI states that the way it uses (or does not use) your information differs between its API services and the consumer ChatGPT interface. For further guidance on using ChatGPT, please review the references listed below, which address OpenAI's own recommendations and policies on sharing and publication, privacy, and related topics.

Intellectual Property Concerns

Intellectual Property Infringement: Generative AI tools are capable of producing content that closely resembles existing copyrighted works. Researchers need to ensure that their use of such tools does not infringe upon the intellectual property rights of others. This includes avoiding the creation of derivative works without proper authorization or licensing.

Dataset Licensing: AI models often require large datasets for training, and these datasets may contain copyrighted material. Researchers must ensure that they have the necessary rights or permissions to use the data, especially if it includes protected works such as images, text, or audio.  DO NOT DISTRIBUTE A MODEL YOU DEVELOPED WITHOUT CONSULTING WITH IPO.

Authorship and Attribution: Researchers who develop an original research model are generally credited as the authors or inventors of the model. However, if a generative AI model significantly enhances the research model, questions of authorship, ownership, and attribution for the improved version may arise. Authorship and citation remain complex and evolving issues for these tools. University and Lab policy require the highest ethical standards in authorship and citation. Consider an expansive approach to citing these tools as new norms within scientific communities take form.

Prohibited Use

Please also note that OpenAI explicitly forbids the use of ChatGPT and their other products for certain categories of activity. This list of items can be found in their usage policy document.

Further guidance on Google Bard, ChatGPT and other Generative AI tools will be forthcoming as soon as it is available.

Subscriptions

The Lab allows individual Lab-paid "click-through" subscriptions to some of these tools for non-sensitive, exploratory work. If you need an API and/or interactive subscription, email [email protected] and explain your use case and the desired tool subscription.

Zoom AI Companion Guidance 

As Zoom continues to integrate AI features, LBNL employees must be mindful of when and how these tools should be used, particularly in alignment with the Lab's policies on data protection and privacy.

Lab employees must not use Zoom’s AI Companion tools (such as AI transcription or meeting summary features) when handling sensitive, proprietary, or protected personally identifiable information (PII) unless explicitly approved by IT Policy. 

Understanding Transcription vs. Meeting Summaries

It is important to distinguish between transcription and meeting summaries when using Zoom AI.

  • Transcription refers to a word-for-word capture of spoken content, such as closed captions or real-time text displays during a meeting.

Lab employees requiring AI live transcription for Zoom meetings can use Zoom AI Companion's transcription tool. In addition to Zoom AI Companion features, Lab employees can also use Otter.ai's Live Streaming Transcription Service, a third-party service that provides real-time captions during meetings. Employees seeking to enable this functionality should refer to the instructions under Zoom Host User (Step B) to properly configure the service.

Please note that Otter.ai’s Live Streaming Transcription Service is distinct from the Otter.ai Bot, which joins Zoom meetings as an active participant.

  • Meeting summaries use AI to generate a condensed version of the discussion, which may omit details and introduce inaccuracies. These summaries can be automatically saved and emailed to participants.

Employees should be aware that AI-generated summaries may not always be accurate and should be reviewed before being shared. Additionally, some staff may rely on transcription tools due to accessibility needs, but these tools must still comply with LBNL’s data policies.

Third-Party AI Bots Disabled in Zoom Meetings

Lab employees are not permitted to use third-party AI bots in Zoom meetings. These bots introduce administrative and privacy challenges, including:

  • Participant Consent Issues – AI bots appear as additional meeting participants, making it difficult for hosts to obtain explicit consent from all attendees regarding the AI services being utilized.
  • Meeting Management Complexity – Multiple AI bots can join a single meeting, leading to disruptions and complicating the host’s ability to control the meeting environment effectively.
  • Potential Privacy Risks – The presence of third-party bots can pose risks to confidential discussions, as data handling policies may vary between AI service providers.

Lab employees should ensure compliance with institutional privacy policies and data security protocols when using any third-party AI-powered tools.

Data Sensitivity and AI Use

Zoom AI tools process data in ways that may involve third-party services. Therefore, employees must not use AI transcription or meeting summary features when handling sensitive, proprietary, or personally identifiable information (PII) unless explicitly approved by IT Policy. 

Accessibility Needs

Lab employees with accessibility needs can reach out to ERGO, HR, and IT Policy to discuss use of Zoom AI features. 


References

https://policies.google.com/terms/generative-ai
https://openai.com/policies/sharing-publication-policy
https://openai.com/policies/usage-policies
https://openai.com/policies/privacy-policy
https://openai.com/policies

Additional Readings

https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
https://www.theatlantic.com/newsletters/archive/2023/03/dont-be-misled-by-gpt-4s-gift-of-gab/673411/
https://www.ucop.edu/ethics-compliance-audit-services/compliance/uc-ai-working-group-final-report.pdf
https://adminrecords.ucsd.edu/Notices/2023/2023-3-23-2.html
https://ethics.berkeley.edu/privacy/appropriate-use-chatgpt-and-similar-ai-tools