Guest Speaker: Marc Hamilton, Global Vice President of Solutions Architecture and Engineering at NVIDIA
Lecture Title: LLM Applications in Science
Time: April 4, 2023, 15:00-16:00
Venue: Multi-Functional Hall 204, International Conference Center, Shenzhen University Town

"AI for Science" is the product of deep integration between "artificial intelligence technologies represented by machine learning" and "scientific research." Large Language Models (LLMs) are deep learning algorithms that can learn to recognize, summarize, translate, predict, and generate text and other content through training on massive datasets. LLMs first gained popularity with OpenAI’s ChatGPT, and they have since been widely applied in many scientific fields; as a large language model, ChatGPT has already pushed the competition in AI-enabled scientific research to new heights.
Recently, the Ministry of Science and Technology, together with the National Natural Science Foundation of China, launched a special deployment program for "Artificial Intelligence-Driven Scientific Research" (AI for Science). The initiative targets key problems in basic disciplines such as mathematics, physics, chemistry, and astronomy, and focuses on research needs in key fields including drug development, genetic research, biological breeding, and new-materials R&D, aiming to build a cutting-edge technology R&D system for "AI for Science." The Ministry of Science and Technology will promote innovation in AI models and algorithms for major scientific problems, develop dedicated "AI for Science" platforms for typical research fields, accelerate the construction of the national new-generation artificial intelligence public computing power open innovation platform, and support the heterogeneous integration of high-performance computing centers and intelligent computing centers. On talent and mechanisms, the Ministry supports more scientists and researchers in fields such as mathematics and physics in pursuing related research, cultivates and gathers interdisciplinary R&D teams, promotes the establishment of "AI for Science" innovation consortia, and builds international academic exchange platforms.
On April 4, our institution will host the third lecture in the 2023 "Next-Post-Future" AI4S monthly science series. The lecture features Marc Hamilton, Global Vice President of Solutions Architecture and Engineering at NVIDIA, who will deliver a talk titled "LLM Applications in Science."
Marc Hamilton leads NVIDIA’s global solutions architecture and engineering teams, delivering the world’s best solutions for artificial intelligence, deep learning, professional visualization, and high-performance computing. Before joining NVIDIA, he worked in the hyperscale business unit at HP and the high-performance computing and data center division at Sun Microsystems. He holds a master’s degree in electrical engineering from the University of Southern California and a dual bachelor’s degree in mathematics and computer science from the University of California, Los Angeles.
NVIDIA's accelerated computing platform underpins much of modern AI, and the company has been involved in the generative AI (AIGC) revolution since its early stages. NVIDIA BioNeMo is a domain-specific managed service and framework for large language models over proteins, small molecules, DNA, and RNA. Built on NVIDIA NeMo Megatron, it is an AI-powered drug discovery cloud service and framework for training and deploying large biomolecular Transformer models at supercomputing scale. In this lecture, Marc will introduce the fundamentals of LLM technology and discuss how NVIDIA BioNeMo can be leveraged to advance drug discovery research.
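To give a hedged sense of what a biomolecular language-model workflow looks like in code, the sketch below embeds a protein sequence with a small, publicly available protein language model (ESM-2, accessed through the Hugging Face transformers library). It is only an analogue of the kind of workflow BioNeMo supports, not the BioNeMo API itself, and the model name and example sequence are assumptions.

```python
# Illustrative analogue only: this is NOT the NVIDIA BioNeMo API.
# Embed a protein sequence with a small public protein language model (ESM-2).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "facebook/esm2_t6_8M_UR50D"  # small public protein LM (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary example sequence
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool per-residue hidden states into a single sequence-level embedding,
# which could feed downstream tasks such as property prediction or similarity search.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```

In production settings, frameworks such as BioNeMo scale this type of training and inference to much larger models and datasets on GPU clusters.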
Join us to experience the technological power of NVIDIA BioNeMo’s large language models in empowering drug R&D.
