Zhongsheng Wang: Creative Intelligence: Applications of Large Language Models in Data Generation and Reasoning

Speaker:

Zhongsheng Wang

Time:

  • 15:00-17:00 Beijing Time
  • Dec 19, 2024 (Thursday)

Venue:

518, Research Building 4

Abstract:

Large language models (LLMs) are widely used in data generation and reasoning. This talk covers the basic concepts, advantages, development history, and pre-training and fine-tuning processes of LLMs, then focuses on several of the speaker's works: applying LLMs to high-quality self-supervised data generation, combining LLM agents with reinforcement learning to generate epic-length long-form novels, and addressing challenges and solutions in reasoning tasks, including multi-step logical reasoning with LLM-generated code. In addition, we propose a potential direction: improving the performance of LLMs on multi-step deductive logical reasoning tasks by building a world model and combining it with a model checking method (LLM+X).

Speaker Bio:

Zhongsheng Wang is a PhD candidate at the University of Auckland (UoA), focusing on trustworthy AI agents based on MRAG. He received his undergraduate degree in Computer Science and Technology from Southwest University in 2023 and his master's degree in Data Science from the University of Auckland in 2024. His research interests lie in natural language processing, with a current focus on LLM-based agents and the practical application of LLMs. He is also a member of the LIU AI LAB, which is dedicated to cutting-edge research in artificial intelligence.