About Us
Welcome to ML Lab! Our main focus is to enhance AI's intelligence, efficiency, and robustness. Our research spans a wide range of areas, including generative AI (such as LLMs and diffusion models) and reinforcement learning agents. Through innovative algorithmic contributions, we aim to push the boundaries of AI performance and efficiency.
We currently collaborate with Google, NAVER AI Lab, Samsung, Krafton and many other research labs to deliver impactful research. Research opportunities through internships are always open, so feel free to contact us anytime (link)!
Research Highlights
LLM / Diffusion Model Compression:
- Compressed Context Memory fine-tunes Transformer LLMs to compress keys and values, reducing KV cache memory requirements by 5× during inference (a rough sketch of the idea follows this list).
- LayerMerge enhances the efficiency of diffusion model inference by pruning and merging nonlinear and convolutional layers.
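For readers unfamiliar with KV caching, the snippet below is a minimal, training-free sketch of the general idea of shrinking cached keys and values along the sequence axis. The function name compress_kv_cache, the tensor shapes, and the mean-pooling rule are illustrative assumptions; this is not the learned compression used in Compressed Context Memory, which fine-tunes the LLM itself rather than applying a fixed rule.

```python
import numpy as np

def compress_kv_cache(keys, values, ratio=5):
    """Shrink a cached (seq_len, num_heads, head_dim) key/value pair by
    mean-pooling every `ratio` consecutive past tokens into one slot."""
    pad = (-keys.shape[0]) % ratio  # pad so the sequence length divides evenly
    if pad:
        keys = np.concatenate([keys, np.zeros((pad, *keys.shape[1:]), keys.dtype)])
        values = np.concatenate([values, np.zeros((pad, *values.shape[1:]), values.dtype)])
    pool = lambda x: x.reshape(-1, ratio, *x.shape[1:]).mean(axis=1)
    return pool(keys), pool(values)

# Toy cache: 1000 cached tokens, 8 heads, 64-dim heads.
k = np.random.randn(1000, 8, 64).astype(np.float32)
v = np.random.randn(1000, 8, 64).astype(np.float32)
ck, cv = compress_kv_cache(k, v, ratio=5)
print(f"{k.nbytes + v.nbytes} bytes -> {ck.nbytes + cv.nbytes} bytes")  # roughly 5x smaller
```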
RL Algorithms:
- DPPO trains a policy directly using human preference data without reward modeling.
- Achievement Distillation learns complex subtask hierarchies with minimal supervision for vision-based RL agents in Minecraft-like environments.
- DCPG analyzes the value function learning of RL agents in visually complex environments and develops a phasic algorithm to train a robust value function.
Black-Box Optimization:
- Bayesian Red Teaming identifies failures in black-box models, including ChatGPT and Stable Diffusion, by sampling and editing user inputs with Gaussian process surrogate models (see the sketch after this list).
- Greedy Policy proposes a combinatorial optimization algorithm that uses neural networks to efficiently select proposal batches for black-box problems, such as protein and molecular design.
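As a rough illustration of what surrogate-based black-box optimization looks like in code, here is a generic sketch that fits a Gaussian process to a handful of evaluated candidates and greedily queries the most promising point from a fixed pool. The black_box objective, the UCB acquisition rule, and all constants are hypothetical stand-ins, not the specific procedures of Bayesian Red Teaming or Greedy Policy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical black-box score to maximize (a stand-in for, e.g., an attack
# success score or a protein fitness value returned by an expensive oracle).
def black_box(x):
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
pool = rng.uniform(0, 1, size=(200, 5))         # fixed candidate pool
idx = list(rng.choice(len(pool), size=5, replace=False))
scores = [black_box(pool[i]) for i in idx]      # initial random evaluations

for _ in range(20):
    gp = GaussianProcessRegressor().fit(pool[idx], scores)
    mu, sigma = gp.predict(pool, return_std=True)
    ucb = mu + 1.0 * sigma                      # upper-confidence-bound acquisition
    ucb[idx] = -np.inf                          # never re-query evaluated points
    nxt = int(np.argmax(ucb))
    idx.append(nxt)
    scores.append(black_box(pool[nxt]))

print("best score found:", max(scores))
```

The same loop structure extends to batch settings by selecting several high-acquisition candidates per round instead of a single point.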
Beyond the topics above, we are open to research on algorithms for more advanced AI technologies; for example, we are currently conducting research on gaming agents, coding models, and other areas.
Research Statement
The Machine Learning Lab at Seoul National University focuses on developing principled optimization algorithms to address fundamental challenges in machine learning. These challenges often involve combinatorial, high-dimensional, or computationally intensive objectives, arising in areas such as neural network compression, metric learning, reinforcement learning, and adversarial robustness. By leveraging techniques from combinatorial optimization together with algebraic and probabilistic methods, the lab designs efficient and scalable solutions for these complex tasks.
A unifying theme of the lab's work is the development of algorithms that are both theoretically grounded and practically impactful. This includes exploiting problem structures when possible, while ensuring robustness and adaptability in less predictable environments, such as those encountered in reinforcement learning. Central questions drive the lab's research: How can optimization frameworks scale effectively with modern computational demands? What strategies enable robust and adaptive learning across diverse applications? By addressing these questions, the lab aims to advance machine learning with methodologies that are rigorous, efficient, and widely applicable.
Internship Opportunities
- If you are interested in an internship at our lab, please send your CV, including your research interests and experience, to the professor’s email (hyunoh@snu.ac.kr).
- If you have no prior knowledge of deep learning, please attend the Introduction to Deep Learning (or Basics of Deep Learning) class before contacting us.