Seminar Series / Workshop
Regularly Updated Seminar link: Teams
UPCOMING SEMINAR
Title: Building Reliable Large Language Models for Scientific Research
Speaker: Dr. Yunhe (Jack) Feng, Assistant Professor, CSE, University of North Texas [Dec 15]
TALK ABSTRACT: This talk will examine the opportunities and risks of deploying LLMs as tools for scientific discovery in fields such as earth science, biology, and clinical healthcare. I will first give a concise overview of how modern LLMs are built and where their “knowledge” comes from, emphasizing why their statistical nature makes reliability a central concern for scientific use. The core of the talk will focus on three technical aspects that critically shape trustworthiness: hallucinations, bias, and fine‑tuning. I will discuss how and why LLMs hallucinate in scientific tasks and workflows, and how domain‑specific fine‑tuning can both improve and undermine reliability. Throughout, I will highlight practical mitigation strategies—such as retrieval‑augmented generation, tool‑integrated workflows, structured outputs, and rigorous evaluation protocols (e.g., treating LLMs as “research assistants” rather than oracles). I will conclude with concrete design patterns and evaluation practices that laboratories can adopt to use LLMs responsibly, ensuring they act as calibrated, auditable instruments that accelerate discovery without compromising scientific rigor.
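One of the mitigation strategies named in the abstract, retrieval-augmented generation (RAG), can be sketched in miniature. The snippet below is purely illustrative and is not from the talk: the corpus, the word-overlap scorer, and the prompt template are all assumptions, with the overlap scorer standing in for the dense-embedding or BM25 retrieval a real system would use. The idea it shows is that grounding the prompt in retrieved passages lets the model cite sources instead of relying on its parametric "knowledge".

```python
def tokenize(text):
    """Lowercased word set (toy tokenizer for overlap scoring)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query and keep the top k.
    A stand-in for real retrieval (dense embeddings, BM25, etc.)."""
    return sorted(corpus,
                  key=lambda p: len(tokenize(p) & tokenize(query)),
                  reverse=True)[:k]

def build_prompt(query, passages):
    """Constrain the model to answer only from the retrieved context."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (f"Answer using ONLY the sources below; cite by number.\n"
            f"{context}\nQuestion: {query}")

# Hypothetical mini-corpus of scientific snippets.
corpus = [
    "Perovskite solar cells degrade under humidity.",
    "LSTM networks model sequential sensor data.",
    "Synchrotron beamlines produce terabytes per hour.",
]
prompt = build_prompt("Why do perovskite cells degrade?",
                      retrieve("perovskite cells degrade humidity", corpus))
```

The retrieved passages, not the model's internal weights, become the source of record, which is what makes the output auditable in the sense the abstract describes.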
SPEAKER BIO: Dr. Yunhe Jack Feng is an Assistant Professor in the Department of Computer Science and Engineering (CSE) at UNT. He directs the Responsible AI Lab and the Master’s Program in Artificial Intelligence. His research interests lie at the intersection of Efficient Generative AI, AI Security & Privacy, and Responsible AI, with a focus on developing robust and efficient AI systems. Dr. Feng’s research is supported by grants from the NSF, NIH, and DOE, as well as industry collaborations with Microsoft, Google, and NVIDIA. He is the recipient of the 2023 IEEE Smart Computing Special Technical Community Early Career Award and was named to the inaugural Dallas Innovates AI 75 List. His work is published in top conferences and journals such as ACL, EMNLP, AAAI, CVPR, ICCV, IJCAI, USENIX Security, and HPDC. In 2025, Dr. Feng received the CSE Department Teaching Excellence Award and the Affiliate Faculty of the Year Award from the Data Science Department.
PAST EVENTS
In collaboration with the Postdoctoral Society of Argonne, this AI/ML group will host MathWorks for an in-person workshop on July 8, 2025.
This will be a hybrid event. Event Link: Teams
In person event location at Argonne: #402, E110
Event Registration Link: Forms
Topics Covered:
1) Creating AI-based reduced-order models using the Reduced Order Modeler App
2) Integrating trained AI models into Simulink for system-level simulation
3) Generating optimized C code and performing HIL tests
4) Hands-on workshop after the seminar to try MathWorks resources
Speaker: Reece Teramoto from MathWorks
Reece Teramoto is a Senior Application Engineer for MathWorks. He has been with MathWorks for 8 years, working with engineers and researchers in the Aerospace, Defense, and Government industries. He specializes in the areas of artificial intelligence (AI), machine learning, and data analytics, and has partnered with customers on applications using AI for image processing, signal processing, and predictive maintenance. Reece graduated with bachelor’s degrees in computer science and electrical engineering from the University of Portland.
Seminar abstract:
High-fidelity models, such as those based on FEA (Finite Element Analysis), CFD (Computational Fluid Dynamics), and CAE (Computer-Aided Engineering), can take hours or even days to simulate and are not suitable for all stages of development. For example, a finite element analysis model that is useful for detailed component design will be too slow to include in system-level simulations for verifying your control system or to perform system analyses that require many simulation runs. A high-fidelity model for analyzing NOx emissions will be too slow to run in real time in your embedded system. Does this mean you have to start from scratch to create faster approximations of your high-fidelity models? No, this is where reduced-order modeling (ROM) comes to the rescue. ROM is a set of computational techniques that helps you reuse your high-fidelity models to create faster-running, lower-fidelity approximations. In this session, you will learn how to create AI-based reduced-order models to replace the complex high-fidelity model of a jet engine blade. Using the Simulink add-on for Reduced Order Modeling, see how you can perform a thorough design of experiments and use the resulting input-output data to train AI models using pre-configured templates of LSTMs, neural ODEs, and nonlinear ARX. Learn how to integrate these AI models into your Simulink simulations for control design, Hardware-in-the-Loop (HIL) testing, or deployment to embedded systems for virtual sensor applications.
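The ROM workflow described above (sample an expensive model via a design of experiments, train a fast surrogate on the input-output data, validate on held-out inputs) can be illustrated with a minimal sketch. This is not the MathWorks tooling: the "high-fidelity model" is a stand-in function, and plain piecewise-linear interpolation takes the place of the LSTM / neural ODE / nonlinear ARX templates from the abstract.

```python
import bisect
import math

def high_fidelity(x):
    """Stand-in for an expensive FEA/CFD evaluation."""
    return math.sin(2 * x) + 0.5 * x

# 1) Design of experiments: sample the expensive model offline.
xs = [i / 20 for i in range(41)]          # 41 training inputs on [0, 2]
ys = [high_fidelity(x) for x in xs]

# 2) The surrogate: fast piecewise-linear interpolation over the samples.
def surrogate(x):
    j = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[j - 1], xs[j], ys[j - 1], ys[j]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# 3) Validate on inputs that were NOT in the training set.
err = max(abs(surrogate(x) - high_fidelity(x))
          for x in [0.13, 0.77, 1.41, 1.99])
```

Once validated, the cheap surrogate can be called thousands of times (for control design, HIL testing, or many-run system analyses) where the original model would be far too slow.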
Session Agenda:
• 9:00 AM – 10:00 AM: Seminar
“Create AI-based reduced order models to replace the complex high-fidelity model of a jet engine blade”
• 10:00 AM – 11:00 AM: Hands-on Workshop
Please bring your laptop. Reece from MathWorks will help you refine your idea or revise your workflow for the topic you are interested in.
• 11:00 AM – 11:30 AM: Open office hours
An open office hour will be held to discuss general topics, including opportunities at MathWorks
2025 LIST OF PAST SPEAKERS
Title: Building Scalable Agentic Systems for Science with Academy
Speaker: Alok Kamatar, PhD Student under the guidance of Dr. Ian Foster, University of Chicago
Alok Kamatar is a 4th year Ph.D. student at the University of Chicago advised by Ian Foster and Kyle Chard. Broadly, he is interested in building systems to enable faster and more efficient science. He is currently working on building Academy: a framework for integrating “Agents” with federated research infrastructure and exploring the associated systems challenges.
Title: Accelerating Materials Discovery via Generative AI
Speaker: Dr. Avanish Mishra, Theoretical Division (T-1), Los Alamos National Laboratory
Avanish Mishra is a Staff Scientist in the Physics and Chemistry of Materials Group within the Theoretical Division at Los Alamos National Laboratory. His research focuses on developing materials-centric machine learning models, generative AI, and autonomous agents to accelerate materials discovery. He also investigates quantum advantage and utility estimation for quantum chemistry and materials applications. Dr. Mishra co-developed the aNANt materials database, contributed to the JARVIS-Leaderboard platform and the URSA agentic workflow, and created virtual characterization tools to augment experimental investigations. He earned his Ph.D. in Materials Science, with a specialization in materials modeling and informatics, from the Indian Institute of Science, Bangalore, in 2019. Following this, he completed a postdoctoral fellowship at the University of Connecticut. In 2022, he joined Los Alamos as a Director’s Postdoctoral Fellow before transitioning to his current staff position.
Title: Edge Computing for Scientific Instruments: Towards Real-Time and AI-ready Discovery
Speaker: Denis Leshchev, Senior Application Engineer, NVIDIA
TALK ABSTRACT: Next-generation scientific instruments generate vast amounts of data at increasingly higher rates, outpacing traditional data management that relies on large-scale transfers to offline storage for post-analysis. To address the needs of the scientific experiments of the future, the instruments must be augmented with computational resources that make them autonomous and intelligent, thus boosting speed, efficiency, and impact of scientific discovery. This talk will showcase NVIDIA’s advancements in enabling real-time data processing and AI inferencing at scientific instruments through integration with edge computing. We will present examples of pipelines across several scientific domains, with a particular focus on synchrotron ptychographic nanoimaging, and highlight how these developments pave the way for autonomous experiments performed at machine speeds.
Title: Actionable AI for Inorganic Materials
Speaker: Dr. Linda Hung from Toyota Research Institute [May 12]
Linda Hung is a Senior Manager in the Energy & Materials Division at Toyota Research Institute, and an associate editor for the journal Digital Discovery. She obtained her PhD in applied and computational mathematics from Princeton University and has held research positions at the École Polytechnique (France), the University of Illinois Chicago, and NIST before joining TRI in 2017.
***
2024 LIST OF PAST SPEAKERS
Title: Seeing Beyond the Blur: Imaging Black Holes with Increasingly Strong Assumptions
Speaker: Prof. Katherine L. (Katie) Bouman from California Institute of Technology [Dec 17]
Katherine L. (Katie) Bouman is an associate professor in the Computing and Mathematical Sciences, Electrical Engineering, and Astronomy Departments at the California Institute of Technology. Before joining Caltech, she was a postdoctoral fellow in the Harvard-Smithsonian Center for Astrophysics. She received her Ph.D. in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT in EECS, and her bachelor’s degree in Electrical Engineering from the University of Michigan. As part of the Event Horizon Telescope Collaboration, she is co-lead of the Imaging Working Group and acted as coordinator for papers concerning the first imaging of the M87 and Sagittarius A* black holes.
Recorded video: Youtube
Title: Space-time methods to address unwanted dynamics in atomic electron tomography
Speaker: Tiffany Chien from University of California, Berkeley [Nov 18]
Tiffany Chien is a PhD candidate at UC Berkeley working in Professor Laura Waller’s Computational Imaging Lab on reconstruction algorithms for motion modeling and dynamic imaging. She is currently designing dynamic reconstruction algorithms based on implicit neural representations to address experimental challenges in electron microscopy, but she is broadly interested in signal processing, optics, probability and statistics, and the careful application of machine learning to science. She also did her undergraduate studies at UC Berkeley, where she studied computer science and linguistics, and worked on both computational imaging for neuron imaging and other fun things like natural language processing.
Title: Foundational AI for materials discovery
Speaker: Victor Fung from Georgia Institute of Technology [Oct 21]
Victor Fung currently works at Georgia Tech. Previously he worked at Oak Ridge National Laboratory where he was a Eugene P. Wigner Fellow in the Center for Nanophase Materials Sciences (CNMS). He obtained his B.A. in Chemistry from Cornell University and his Ph.D. in Chemistry from the University of California, Riverside. His research seeks to harness the power of computing and machine learning to accelerate the chemical discovery process, with the eventual goal of fully realizing materials by inverse design. This includes developing novel methods and tools which incorporate chemical information to model phenomena at the atomic scale, as well as design new materials from the ground up, atom-by-atom. His work also involves establishing automated, data-driven and domain-informed ecosystems for materials and chemical discovery which can be deployed on the latest supercomputers.
Speaker: Dr. Prasad Iyer from Sandia National Laboratories [July 29]
Speaker: Prof. Emma Alexander from Northwestern University [Aug 12]
2023 LIST OF PAST SPEAKERS
Dr. Zhantao Chen from SLAC National Accelerator Laboratory [May 8]
Prof. Pinshane Huang from University of Illinois at Urbana-Champaign [June 5]
Dr. Steven Torrisi from Toyota Research Institute [July 10]
Dr. Chris Rackauckas from Massachusetts Institute of Technology [Aug 14]
Dr. Sam Dillavou from University of Pennsylvania [Sept 18]
Dr. Peter Lu from University of Chicago [Oct 23]
Prof. Tess Smidt from Massachusetts Institute of Technology [Nov 15]
Prof. Aditi Krishnapriyan from University of California, Berkeley [Dec 11]
Links to Past Seminars:
Advancing Atom-by-Atom Characterization Using Generative Networks by Pinshane Huang Tutorial link: Youtube
Machine Learning for Intelligent Data Collection and Analysis by Zhantao Chen Tutorial link: Youtube
Accelerating mesoscale predictions via surrogate models trained by machine learning methods by Remi Dingreville Tutorial link: Youtube
Exploring Energy-Efficiency in Neural Systems with Spike-based Machine Intelligence by Priya Panda Tutorial link: Youtube
Autonomous experiments in the age of computing, machine learning and automation by Rama Vasudevan Tutorial link: Youtube
Date and Time: 03-06-2022 ; 3 PM - 4 PM CST
Title: Accelerating mesoscale predictions via surrogate models trained by machine learning methods
Speaker: Dr. Rémi Dingreville
Download the abstract here (PDF).
Meeting link: Teams
Rémi Dingreville is a Distinguished Member of the Technical Staff at Sandia National Laboratories and a staff scientist at the Center for Integrated Nanotechnologies (CINT), a DOE Office of Science user facility. His current research is at the intersection of computational materials and data sciences to understand and characterize process-structure-property relationships for materials reliability across scales. He leads several research programs at Sandia focused on the discovery of resilient materials and manufacturing processes via AI-guided approaches. Rémi holds a Ph.D. in Mechanical Engineering from the Georgia Institute of Technology in Atlanta, GA, and a B.S./M.S. in Materials Science and Engineering from École Nationale Supérieure des Techniques Avancées in France.
Date and Time: 11-28-2022 ; 3 PM - 4 PM CST
Title: Exploring Energy-Efficiency in Neural Systems with Spike-based Machine Intelligence
Speaker: Dr. Priya Panda
Download the abstract here (PDF).
Meeting link: Teams
Dr. Panda is an assistant professor in the electrical engineering department at Yale University, USA. She received her B.E. and Master’s degree from BITS, Pilani, India in 2013 and her PhD from Purdue University, USA in 2019. During her PhD, she interned at Intel Labs, where she developed large-scale spiking neural network algorithms for benchmarking the Loihi chip. She is the recipient of the 2019 Amazon Research Award, 2022 Google Research Scholar Award, and 2022 DARPA Riser Award. Her research interests lie in neuromorphic computing, energy-efficient accelerators, and in-memory processing.
Date and Time: 08-29-2022 ; 3 PM - 4 PM CST
Title: Autonomous experiments in the age of computing, machine learning and automation: progress and challenges
Speaker: Dr. Rama Vasudevan
Download the abstract here (PDF).
Meeting link: Teams
Dr. Vasudevan is the Group Leader of the Data NanoAnalytics (DNA) Group at the Center for Nanophase Materials Sciences, Oak Ridge National Laboratory. His research is focused on smart, autonomous synthesis and characterization tools driven by improvements in machine learning and tight integration between theory, automation, and individual instruments. A specific sub-focus is on applications and development of scalable reinforcement learning for scanning probe microscopy, to optimize, manipulate, and better characterize ferroic materials at the nanoscale, and to upgrade scanning probe microscopy from a standard characterization tool to one capable of autonomous physics discovery by connecting algorithms, edge computing, and theory in end-to-end automated workflows.
Date and Time: 04-25-2022 ; 3 PM - 4 PM CST
Title: AI Applications with Spiking Neural Networks and Neuromorphic Computing
Speaker: Dr. Shruti R. Kulkarni
Download the abstract here (PDF).
Meeting link: Teams
Dr. Kulkarni is a research scientist in the Learning Systems group at Oak Ridge National Laboratory. Her research spans different aspects of neuromorphic computing, including algorithms, applications, simulators, and hardware codesign. She was formerly a postdoc at ORNL advised by Dr. Catherine Schuman, working on different techniques to implement a learning-to-learn framework for neuromorphic computing. She received her PhD in 2019 from the New Jersey Institute of Technology with a major in Electrical Engineering under the guidance of Dr. Bipin Rajendran. Her dissertation was on bio-inspired learning and hardware acceleration with emerging memories. She has 19 peer-reviewed journal and conference publications.
Date and Time: 03-07-2022 ; 3 PM - 4 PM CST
Title: Understanding Trajectories: From Electron Microscopes to Atomistic Simulations
Speaker: Dr. Ayana Ghosh
Download the abstract here (PDF).
Meeting link: Teams
Dr. Ghosh’s research focuses on data-driven and machine learning approaches combined with state-of-the-art first-principles methods to study complex material systems. In particular, she has interests in developing physics-based machine learning frameworks to investigate causal mechanisms in a wide range of materials, from inorganic perovskites, actinides, and 2D systems to organic crystals and polymers. Bridging the gap between appropriately utilizing data generated by simulations and experiments requires a great deal of understanding of the nuances present in both fields, which is the goal of her efforts. She received her MS and PhD in Materials Science and Engineering from the University of Connecticut in the summer of 2020 and her BS in Physics and Abstract Mathematics from the University of Michigan-Flint in the spring of 2015.
Date and Time: 02-16-2022 ; 2 PM - 3 PM CST
Title: AI-Driven Design of High Entropy Halide Perovskite Alloys
Speaker: Dr. Arun Mannodi-Kanakkithodi
Download the abstract here (PDF).
Meeting link: Teams
Dr. Mannodi-Kanakkithodi is an assistant professor in Materials Engineering at Purdue University. He received his PhD in Materials Science and Engineering from the University of Connecticut in 2017 and worked as a postdoctoral researcher at the Center for Nanoscale Materials in Argonne National Laboratory from 2017 to 2020. His research involves using first-principles computational modeling, machine learning, and materials informatics to drive the design of new materials for energy-relevant applications. He is a resident associate in the Nanoscience and Technology Division at Argonne, a regular attendee, presenter, and organizer at the Materials Research Society (MRS) spring and fall meetings, and a co-organizer of the data science and machine learning workshop series as part of the NSF-funded nanoHUB.org.