Adaptive Robotics and Autonomous Manipulation for Flexible Manufacturing: Reinforcement Learning, Sim-to-Real Transfer, and Dexterous Manipulation in Contact-Rich Environments
Keywords

Adaptive Robotics
Autonomous Manipulation

Abstract

The transition from rigid, pre-programmed industrial automation to flexible, adaptive robotic systems capable of operating in unstructured and dynamic manufacturing environments represents one of the most significant challenges and opportunities in modern manufacturing. Contact-rich manipulation tasks—insertion, peg-in-hole assembly, polishing, cable routing, and deformable object handling—are ubiquitous in manufacturing but remain among the most difficult to automate, requiring the robot to reason about physical contact, comply with environmental constraints, and adapt to variations in part geometry, friction, and placement. Conventional robotic programming approaches—based on pre-specified trajectories and force thresholds—are fundamentally inadequate for these tasks: they cannot handle the variability inherent in real manufacturing environments, require extensive manual tuning for each task and product variant, and cannot adapt when conditions change. This review provides a comprehensive synthesis of adaptive robotics and autonomous manipulation for flexible manufacturing, examining deep reinforcement learning for contact-rich manipulation, sim-to-real transfer for robust policy deployment, dexterous multi-fingered manipulation, deformable object handling, and LLM-based task planning for robotic assembly. We further connect these advances to industrial sensing technologies—precision 3D optical metrology and four-dimensional thermal imaging—demonstrating their roles as enabling perceptual modalities for adaptive manipulation systems. A central contribution is the articulation of a unified Adaptive Manipulation Stack framework that integrates perception, task planning, learned control policies, and real-time adaptation for next-generation flexible manufacturing automation.
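To make the sim-to-real theme above concrete, the sketch below illustrates domain randomization, one common technique for robust policy transfer: each training episode samples physical parameters (friction, part placement, peg clearance) from ranges wider than the expected real-world variation, so a learned insertion policy cannot overfit to a single simulator configuration. This is a minimal, hypothetical illustration; all parameter names, ranges, and function names are assumptions for exposition, not drawn from the review, and the policy update itself is elided.

```python
import random

# Assumed, illustrative parameter ranges for a peg-in-hole task
# (not taken from the review): (low, high) bounds per parameter.
PARAM_RANGES = {
    "friction_coeff": (0.2, 1.0),     # surface friction coefficient
    "hole_offset_mm": (-2.0, 2.0),    # part-placement error along one axis
    "peg_clearance_mm": (0.05, 0.5),  # geometric tolerance of the fit
}

def sample_episode_params(rng: random.Random) -> dict:
    """Draw one randomized simulation configuration for a training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def train(num_episodes: int, seed: int = 0) -> list:
    """Outer loop of a domain-randomized training run (policy update elided)."""
    rng = random.Random(seed)
    configs = []
    for _ in range(num_episodes):
        params = sample_episode_params(rng)
        # In a real pipeline: reset the simulator with `params`,
        # roll out the current policy, and update it from the rollout.
        configs.append(params)
    return configs

configs = train(num_episodes=100)
```

Sampling the configuration once per episode, rather than once per run, is what forces the policy to succeed across the whole parameter distribution rather than on one fixed simulator instance.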

References

1. Tang, C., Abbatematteo, B., Hu, J., Chandra, R., Martín-Martín, R., & Stone, P. (2024). Deep reinforcement learning for robotics: A survey of real-world successes. Annual Review of Control, Robotics, and Autonomous Systems. https://doi.org/10.1146/annurev-control-030323-022510

2. Zeng, F., Gan, W., Wang, Y., Liu, N., & Yu, P. S. (2023). Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226. https://doi.org/10.48550/arXiv.2311.07226

3. Bian, S., Zhang, Y., Tian, G., Miao, Z., Wu, E. Q., Yang, S. X., & Hua, C. (2025). Large language model-based task planning for service robots: A review. arXiv preprint arXiv:2510.23357. https://doi.org/10.48550/arXiv.2510.23357

4. Liu, H., Zhou, Y., Wu, Z., Ji, Z., et al. (2026). RoCo Challenge at AAAI 2026: Benchmarking robotic collaborative manipulation for assembly towards industrial automation. arXiv preprint arXiv:2603.15469. https://doi.org/10.48550/arXiv.2603.15469

5. Huang, H., Tang, J., Liu, T., & Huang, M.-L. (2026). Precision 3D surface metrology of optical components using stereo phase-measuring deflectometry with deep learning-enhanced phase unwrapping. Proceedings of SPIE, 0898. https://doi.org/10.1117/12.3093993

6. Huang, H., Yang, Y., & Zhu, Y. (2023). Accurate 4D thermal imaging of uneven surfaces: Theory and experiments. International Journal of Heat and Mass Transfer, 211, 124580. https://doi.org/10.1016/j.ijheatmasstransfer.2023.124580

7. Huang, M., Li, Y., Zhang, Z., et al. (2025). Real-time decision-making for digital twin in additive manufacturing with model predictive control using time-series deep neural networks. Additive Manufacturing. https://doi.org/10.1016/j.addma.2025.103456

8. Chen, Y., Ren, T., Li, Y., Jiang, G., Liu, Q., Chen, Y., & Yang, S. X. (2026). AI-empowered intelligence in industrial robotics: technologies, challenges, and emerging trends. Intelligence & Robotics, 6(1), 1-18. https://doi.org/10.20517/ir.2026.01

9. Li, Y., Lou, J., Cai, Z., Zheng, P., Wu, H., & Wang, X. (2024). An interactive gesture control system for collaborative manipulator based on Leap Motion Controller. Advances in Mechanical Engineering, 16(5), 16878132241253101. https://doi.org/10.1177/16878132241253101

10. Mnih, V., Kavukcuoglu, K., Silver, D., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518, 529–533. https://doi.org/10.1038/nature14236

11. Parnada, A., Qu, M., Castellani, M., Chang, H. J., & Wang, Y. (2026). Towards cost-effective and safe contact-rich robotic manipulation with reinforcement learning: A review of techniques for future industrial automation. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering. https://doi.org/10.1177/09596518251350353

12. Zhou, K., Zhong, L., Liu, J., et al. (2026). Unveiling the role of Western Pacific Subtropical High in urban heat islands using local climate zones coupled WRF-BEP/BEM. Earth Systems and Environment, 10, 363–390. https://doi.org/10.1007/s41748-025-00589-z

13. Zhao, Y., Zhong, L., Zhou, K., Liu, B., & Shu, W. (2024). Responses of the urban atmospheric thermal environment to two distinct heat waves and their changes with future urban expansion in a Chinese megacity. Geophysical Research Letters, 51(11), Article e2024GL109018. https://doi.org/10.1029/2024GL109018

14. Wang, S., Yu, Y., Feldt, R., & Parthasarathy, D. (2025). Automating a complete software test process using LLMs: An automotive case study. 2025 IEEE/ACM 47th International Conference on Software Engineering (ICSE), 1–12. https://doi.org/10.1109/ICSE55347.2025.00211