Junfeng Long「龙俊峰」
Hello there! I am Junfeng Long, a researcher focusing on Control, Reinforcement Learning, and Robotics. I am currently visiting Hybrid Robotics at UC Berkeley, advised by Prof. Koushil Sreenath. Previously, I was a research intern
at the Shanghai Artificial Intelligence Innovation Center (a.k.a. Shanghai AI Laboratory),
where I worked on Reinforcement Learning and Robotic Control, advised by Dr. Jiangmiao Pang.
I received my Bachelor's degree in Computer Science and Technology from ShanghaiTech University.
During my undergraduate studies, I worked with Prof. Youlong Wu
as an undergraduate research assistant on Information Theory and its Applications in Distributed Systems. I am actively applying for Ph.D. positions starting in Fall 2025.
My research interests mainly lie in Reinforcement Learning, Optimization, Control, and Robotics. I am particularly
interested in the intersection of machine learning and control theory, and in applying them to real robotic systems to achieve
agile and robust locomotion, manipulation, and interaction. My goal is to bring together the strengths of machine
learning (generalizability and efficient inference) and control theory (robustness and theoretical guarantees) to
push forward the frontier of robotic systems. I am also interested in the theory of optimization and learning.
I am always open to new ideas and collaborations. If you are interested in my research or have any questions, please drop me
an email.
Google Scholar | GitHub | Twitter | Email
GRUtopia: Dream General Robots in a City at Scale
Hanqing Wang*, Jiahe Chen*, Wensi Huang*, Qingwei Ben*, Tai Wang*, Boyu Mi*, Tao Huang, Siheng Zhao, Yilun Chen, Sizhe Yang, Peizhou Cao, Wenye Yu, Zichao Ye, Jialun Li, Junfeng Long, Zirui Wang, Huiling Wang, Ying Zhao, Zhongying Tu, Yu Qiao, Dahua Lin, Jiangmiao Pang†
Under Review
[Project Page]
[Paper]
[Code]
[BibTeX]
We propose GRUtopia, the first simulated interactive 3D society designed for various robots. It features (a) GRScenes,
a dataset with 100k interactive and finely annotated scenes; (b) GRResidents, an LLM-driven NPC system; and (c) GRBench, a
benchmark posing moderately challenging tasks.
Learning Humanoid Locomotion with Perceptive Internal Model
Junfeng Long*, Junli Ren*, Moji Shi*, Zirui Wang, Tao Huang, Ping Luo, Jiangmiao Pang†
LocoLearn Workshop at CoRL 2024
[Project Page]
[Paper]
[Code]
[BibTeX]
We propose the Perceptive Internal Model (PIM), a method that estimates environmental disturbances
using perceptive information, enabling agile and robust locomotion for various humanoid robots across diverse terrains.
Parallelizing Model-based Reinforcement Learning Over the Sequence Length
Zirui Wang, Yue Deng, Junfeng Long, Yin Zhang†
2024 Annual Conference on Neural Information Processing Systems
[Paper]
[Code]
[BibTeX]
We propose the Parallelized Model-based Reinforcement Learning (PaMoRL) framework. PaMoRL introduces two
novel techniques, the Parallel World Model (PWM) and Parallelized Eligibility Trace Estimation (PETE),
to parallelize both the model learning and policy learning stages of current MBRL methods over the sequence length.
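For context, the sequential bottleneck in trace-based return estimation is a backward recursion over time; in standard RL notation (mine, not necessarily the paper's), the λ-return satisfies
$$ G_t^{\lambda} = r_t + \gamma \left[ (1-\lambda)\, V(s_{t+1}) + \lambda\, G_{t+1}^{\lambda} \right], $$
a linear recurrence that can, in principle, be evaluated with a parallel (associative) scan over the sequence length instead of a strictly sequential backward pass.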
Learning H-infinity Locomotion Control
Junfeng Long*, Wenye Yu*, Quanyi Li*, Zirui Wang, Dahua Lin, Jiangmiao Pang†
2024 Conference on Robot Learning; Best Poster Award, LocoLearn Workshop at CoRL 2024
[Project Page]
[Paper]
[Code]
[BibTeX]
We present H-infinity Locomotion Control, an adversarial training framework that improves the control policy's ability to
resist external disturbances with an H-infinity performance guarantee.
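In standard robust-control terms (my phrasing, not taken from the paper), such a guarantee bounds the worst-case energy gain from the disturbance signal $w$ to a performance output $z$:
$$ \sup_{\|w\|_2 \neq 0} \frac{\|z\|_2}{\|w\|_2} \le \gamma, $$
so that no admissible disturbance can degrade performance beyond the prescribed level $\gamma$.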
TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
Junli Ren*, Yikai Liu*, Yingru Dai, Junfeng Long, Guijin Wang†
2024 Conference on Robot Learning
[Project Page]
[Paper]
[Code]
[BibTeX]
We propose TOP-Nav, a novel legged navigation framework that integrates a comprehensive path planner
with Terrain awareness, Obstacle avoidance, and closed-loop Proprioception.
Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response
Junfeng Long*, Zirui Wang*, Quanyi Li, Jiawei Gao, Liu Cao, Jiangmiao Pang†
2024 International Conference on Learning Representations
[Project Page]
[Paper]
[Code]
[BibTeX]
We present the Hybrid Internal Model,
a method that enables the control policy to estimate environmental disturbances
by explicitly estimating only the robot's velocity while implicitly simulating the system's response.
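One way to read this, in my own notation rather than the paper's: with disturbances $d_t$ entering the dynamics as
$$ s_{t+1} = f(s_t, a_t, d_t), \qquad a_t \sim \pi\!\left(\cdot \mid o_t, \hat{v}_t, \hat{z}_t\right), $$
the policy conditions on an explicit velocity estimate $\hat{v}_t$ and an implicit latent $\hat{z}_t$ that summarizes how the system responds to $d_t$.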
On the Optimality of Data Exchange for Master-Aided Edge Computing Systems
Haoning Chen, Junfeng Long, Shuai Ma, Mingjian Tang, Youlong Wu
IEEE Transactions on Communications
[Paper]
[BibTeX]
We propose a coded scheme that reduces the communication latency by exploiting the computation and communication
capabilities of all nodes and creating coded multicast opportunities. More importantly, we prove that the
proposed scheme is always optimal, i.e., it achieves the minimum communication latency, for arbitrary computing
and storage capabilities at the master.
Best Poster Award, LocoLearn Workshop at CoRL 2024
Outstanding Teaching Assistant Award, ShanghaiTech University, 2021
Outstanding Individual in Industry Practice, ShanghaiTech University, 2021
Updated in November 2024.
Template from Jon Barron.