Semester.ly

Johns Hopkins University | EN.601.879

Selected Topics in Large Language Model Reasoning

1.0 credits


Recently, large language models (LLMs) have demonstrated exceptional reasoning capabilities in mathematics, science, coding, and other complex problem-solving tasks, sparking growing interest in understanding and improving how these models think. This seminar explores both the mechanisms and limitations of current state-of-the-art LLM reasoning, including chain-of-thought prompting, efficient reasoning, reinforcement learning for incentivizing reasoning capabilities, and benchmark approaches for evaluating reasoning quality. Through weekly paper readings and discussions, students will critically examine current methods, analyze model behaviors, and develop innovative research directions aimed at making LLM reasoning more reliable, efficient, and robust. Students are expected to have basic knowledge of machine learning methods, architectures, and processes, including familiarity with the transformer architecture and standard training workflows.
