Vani Mandava
Vani Mandava is the Head of Engineering for the UW Scientific Software Engineering Center (SSEC) within the eScience Institute. She is responsible for setting up the SSEC organization and working with PIs to define the priorities and scope of software infrastructure that will strengthen the scientific software community. Before joining UW in 2022, Vani spent over two decades at Microsoft. Her career spanned engineering and product roles across client, server, and services products, including Microsoft Office, Bing AdCenter, Microsoft Academic Search, and Microsoft Research Open Data. As Director for Data Science at Microsoft Research, she led Cloud, Data Science, and Trustworthy AI research collaborations with partners in academia and government.
Sessions
Generative AI systems built upon large language models (LLMs) have shown great promise as tools that enable people to access information through natural conversation. Scientists can benefit from the breakthroughs these systems enable to create advanced tools that help accelerate their research outcomes. This tutorial will cover: (1) the basics of language models, (2) setting up an environment for using open-source LLMs without the expensive compute resources needed for training or fine-tuning, (3) learning a technique such as Retrieval-Augmented Generation (RAG) to optimize the output of an LLM, and (4) building a “production-ready” app to demonstrate how researchers could turn disparate knowledge bases into special-purpose AI-powered tools. The right audience for this tutorial is scientists and research engineers who want to use LLMs in their work.
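To give a flavor of item (3), below is a minimal, illustrative RAG sketch in Python; it is not taken from the tutorial materials. It embeds a small document collection with an open-source encoder, retrieves the passages most similar to a question, and passes them as context to a small open-source instruction-tuned model via Hugging Face transformers. The model names (all-MiniLM-L6-v2, Qwen/Qwen2.5-0.5B-Instruct) and the sample documents are placeholders you would replace with your own knowledge base and preferred model.

# Minimal, illustrative RAG sketch (not the tutorial's actual code).
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

# Stand-in knowledge base; replace with your own documents.
documents = [
    "The eScience Institute supports data-intensive research at UW.",
    "Retrieval-Augmented Generation grounds LLM answers in retrieved documents.",
    "Open-source LLMs can run locally without expensive training infrastructure.",
]

# 1. Embed the knowledge base once with a small open-source encoder.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# 2. Generate an answer conditioned on the retrieved context.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    out = generator(prompt, max_new_tokens=100, return_full_text=False)
    return out[0]["generated_text"]

print(answer("What does RAG add to a language model?"))

Normalizing the embeddings lets a plain dot product serve as cosine similarity, keeping retrieval to a few lines; a production-ready app of the kind built in the tutorial would typically swap the in-memory list for a vector store and add prompt templating and evaluation.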