Dissertation: Developing efficient localization and motion planning systems for a wheeled mobile robot in a dynamic environment

Chapter 1

INTRODUCTION

1.1. Motivation

Mobility is an essential capability for autonomous mobile robots. To navigate safely in a real-world environment, a mobile robot must handle the four typical functional blocks of the navigation system [1], as shown in Fig. 1.1: (i) perception – the robot must interpret its sensor data to extract meaningful information; (ii) localization – the robot must determine its position and orientation in the environment, i.e., answer the question "Where am I?"; (iii) motion planning – path planning techniques and obstacle avoidance methods that the robot uses to decide how to act in order to achieve its goals; and (iv) motor control – the robot must modulate its motor outputs, for example with a PID controller, to follow the desired trajectory.

It is well known that successful robot navigation requires the success of all four of these fundamental processes; hence, to improve the performance of the robot's navigation system, the performance of every process needs to be improved.
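To make this decomposition concrete, the sketch below shows how the four blocks are typically composed into one sense-think-act loop running at a fixed rate. It is only an illustrative Python fragment: the robot object and its methods (read_sensors, estimate_pose, plan_velocity, send_motor_command) are hypothetical placeholders, not the interfaces of the system developed in this dissertation.

```python
# Illustrative sense-think-act loop built from the four functional blocks
# of Fig. 1.1. All names below are hypothetical placeholders.

def navigation_loop(robot, goal, rate_hz=10.0):
    dt = 1.0 / rate_hz
    while not robot.reached(goal):
        scan, odom, imu = robot.read_sensors()        # (i)   perception
        pose = robot.estimate_pose(scan, odom, imu)   # (ii)  localization
        v, w = robot.plan_velocity(pose, goal, scan)  # (iii) motion planning
        robot.send_motor_command(v, w, dt)            # (iv)  motor control, e.g. PID
```

Of these four steps, the localization block (ii) and the motion planning block (iii) are the ones developed further in this thesis.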

In recent years, several domestic studies in the field of robotics have been published, for example by the Vietnam Academy of Science and Technology, the Institute of Information Technology, Hanoi University of Science and Technology, Vietnam National University, Le Quy Don Technical University, and Ho Chi Minh City University of Technology. These domestic works mainly focus on trajectory tracking systems [2], [3], [4], [5]. Specifically, the authors propose control laws that enable mobile robots to follow predefined trajectories, and they show that these control laws allow the systems to overcome uncertainties caused by dynamic parameter variations and external disturbances. In another research direction, some studies [6], [7] propose algorithms based on the extended Kalman filter (EKF) to improve the localization system for mobile robots in unknown environments, while a few works [8], [9] develop adaptive control algorithms for tracking moving targets using image features obtained from a camera system. Finally, only a few works introduce a trajectory planning method, and then only in a static environment with known start and target points [10]. As a result, the mobile robot navigation system, and especially its localization and motion planning subsystems, has not been adequately researched domestically.

Therefore, this research focuses only on two systems, the localization and motion planning systems, which define the scope of the thesis.

Localization is the problem of estimating a robot's pose relative to its environment from sensor observations. It has been referred to as the most fundamental problem in providing a mobile robot with autonomous competences. The challenges of localization stem from the inaccuracy and inadequacy of sensors and from the effects of noise. Firstly, errors of the measurement model, or sensor noise, arise from the structural characteristics, resolution, and error tolerance of the different types of sensors, as well as from dynamic environmental conditions such as lighting and obstacles. A natural remedy is to take multiple readings into account, i.e., multi-sensor fusion, to increase the overall information extracted from the inputs. Secondly, errors can also be caused by systematic (deterministic) effects, such as unequal wheel diameters or an inaccurate wheelbase, and these can largely be eliminated by proper calibration of the system. However, a number of non-systematic (random) errors remain, such as wheel slippage and uneven changes in the wheel contact points [1], leading to growing uncertainty in the position estimate over time. In addition, when the mobile robot navigates in harsh environmental conditions, the sensor information can be interrupted for short or long intervals of time. Therefore, the mobile robot might have insufficient information for estimating its pose during navigation.
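As a simple, hedged illustration of why taking multiple readings into account reduces uncertainty, consider two independent measurements of the same quantity modelled as Gaussians with means z1, z2 and variances var1, var2. The standard inverse-variance-weighted combination always yields a fused estimate whose variance is smaller than that of either input. The Python fragment below is only a numerical sketch of this well-known fact; the sensor values are invented, and the fragment is not part of the localization algorithms developed later in the thesis.

```python
# Inverse-variance weighted fusion of two independent scalar Gaussian
# measurements of the same quantity. Numbers are purely illustrative.

def fuse(z1, var1, z2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)   # always smaller than min(var1, var2)
    return z_fused, var_fused

# Example: a precise laser reading and a noisier sonar reading of one distance.
z, var = fuse(z1=2.02, var1=0.01**2, z2=1.95, var2=0.05**2)
print(round(z, 3), var)   # fused estimate leans toward the more precise reading
```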

MINISTRY OF EDUCATION AND TRAINING MINISTRY OF NATIONAL DEFENCE
MILITARY TECHNICAL ACADEMY
NGUYEN THI LAN ANH
DEVELOPING EFFICIENT LOCALIZATION
AND MOTION PLANNING SYSTEMS
FOR A WHEELED MOBILE ROBOT
IN A DYNAMIC ENVIRONMENT
DOCTORAL DISSERTATION: CONTROL ENGINEERING AND AUTOMATION
HA NOI - 2021
MINISTRY OF EDUCATION AND TRAINING MINISTRY OF NATIONAL DEFENCE
MILITARY TECHNICAL ACADEMY
NGUYEN THI LAN ANH
DEVELOPING EFFICIENT LOCALIZATION
AND MOTION PLANNING SYSTEMS
FOR A WHEELED MOBILE ROBOT
IN A DYNAMIC ENVIRONMENT
DOCTORAL DISSERTATION
Major: CONTROL ENGINEERING AND AUTOMATION
Code: 092520216
SUPERVISOR: Assoc. Prof. Dr. Pham Trung Dung
HA NOI - 2021
ASSURANCE
I certify that this dissertation is a research work carried out by the author
under the guidance of the research supervisors. The dissertation uses citation
information from many different references, and all citations are clearly
acknowledged. The experimental results presented in the dissertation are
completely honest and have not been published by any other author or in any
other work.
Author
Nguyen Thi Lan Anh
ACKNOWLEDGEMENTS
First of all, I would like to express my sincere gratitude to my advisor,
Associate Professor Pham Trung Dung, who has directly guided me throughout
my PhD studies. His passionate enthusiasm, unwavering dedication to research,
and insightful advice have motivated me to carry out this research. I deeply
appreciate all the support and opportunities that he has provided to me.
Then, I wish to thank my co-supervisor, Dr. Truong Xuan Tung, for his valuable
advice on my research. He has raised and discussed many new issues with me,
and working with him I have learnt how to do research systematically. His
support has motivated me to overcome all challenges during my PhD journey.
Next, I would also like to thank the leaders and all lecturers of the Faculty
of Control Engineering, Military Technical Academy, for providing me with
favorable conditions and cheerfully helping me throughout the study and
research process.
Finally, I must express my very profound gratitude to my parents, to my
husband for his unfailing support and constant encouragement, and to my
daughter, Tran Nguyen Khanh An, and my son, Tran Duc Anh, for managing to
grow up so independently. This accomplishment would not have been possible
without them.
Author
Nguyen Thi Lan Anh
CONTENTS
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
List of figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
List of tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Chapter 1. INTRODUCTION. . . . . . . . . . . . . . . . . . . . . . . . 1
1.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2. Objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3. Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4. Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5. Dissertation outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Chapter 2. BACKGROUND . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1. Mobile robot models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1. Mobile robot platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.2. Kinematic model of differential-drive robot . . . . . . . . . . . . . 15
2.2. Bayesian filters for localization systems . . . . . . . . . . . . . . . . . . . . . 17
2.2.1. Extended Kalman filter algorithm . . . . . . . . . . . . . . . . . . . . . . 18
2.2.2. The particle filter algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.3. Typical obstacle avoidance algorithms . . . . . . . . . . . . . . . . . . . . . 24
2.3.1. The dynamic window approach algorithm. . . . . . . . . . . . . . . 25
2.3.2. Hybrid reciprocal velocity obstacle model . . . . . . . . . . . . . . . 30
2.3.3. Timed elastic band technique . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4. Conclusions of the chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter 3. SENSOR DATA FUSION-BASED LOCALIZATION ALGORITHMS . . . . . . . 39
3.1. Extended Kalman filter-based localization algorithm. . . . . . . . 40
3.1.1. Construction of EKF-based localization algorithm . . . . . . 42
3.1.2. Results and discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2. Particle filter-based localization algorithm . . . . . . . . . . . . . . . . . . 55
3.2.1. Construction of PF-based localization algorithm . . . . . . . . 57
3.2.2. Results and discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.3. Remarks and discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Chapter 4. DEVELOPING EFFICIENT MOTION
PLANNING SYSTEMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.1. Proposed enhanced dynamic window approach algorithm . . . 70
4.1.1. Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.2. Construction of the EDWA algorithm. . . . . . . . . . . . . . . . . . . 75
4.1.3. The EDWA algorithm-based navigation framework . . . . . 78
4.1.4. Algorithm validation by simulations and experiments . . . 79
4.1.5. Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.2. Proposed proactive timed elastic band algorithm . . . . . . . . . . . 90
4.2.1. Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.2.2. Construction of the PTEB algorithm . . . . . . . . . . . . . . . . . . . 94
4.2.3. The PTEB algorithm-based navigation framework . . . . . . 97
4.2.4. Simulation results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.2.5. Remarks and discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.3. Proposed extended timed elastic band algorithm . . . . . . . . . . 104
4.3.1. Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3.2. Construction of the ETEB algorithm . . . . . . . . . . . . . . . . . . 107
4.3.3. Simulation results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.4. Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.4. Proposed integrated navigation system . . . . . . . . . . . . . . . . . . . . 113
4.4.1. Completed navigation framework . . . . . . . . . . . . . . . . . . . . . . 114
4.4.2. Experimental setup and results . . . . . . . . . . . . . . . . . . . . . . . . 120
4.4.3. Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.5. Conclusions and discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Chapter 5. CONCLUSIONS AND FUTURE WORKS . . . . . . . . . . . . . 125
5.1. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.2. Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.3. Future works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
PUBLICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
ABBREVIATIONS
No. Abbreviation Meaning
1 IMU Inertial Measurement Unit
2 GPS Global Positioning System
3 KF Kalman Filter
4 EKF Extended Kalman Filter
5 PF Particle Filter
6 VO Velocity Obstacle
7 RVO Reciprocal Velocity Obstacle
8 HRVO Hybrid Reciprocal Velocity Obstacle
9 DWA Dynamic Window Approach
10 EDWA Enhanced Dynamic Window Approach
11 EB Elastic Band
12 TEB Timed Elastic Band
13 PTEB Proactive Timed Elastic Band
14 ETEB Extended Timed Elastic Band
15 ROS Robot Operating System
16 PCL Point Cloud Library
LIST OF FIGURES
1.1 A general control scheme for autonomous mobile robots. . . 1
2.1 Two mobile robot platforms under the study. . . . . . . . . . 13
2.2 The global reference frame and the robot reference frame. . . 15
2.3 The velocity space of the dynamic window approach model.
Vs, Va, Vd are the possible velocities, admissible velocities,
and dynamic window, respectively. . . . . . . . . . . . . . . 26
2.4 Procedure of the hybrid reciprocal velocity obstacles of a
robot and an obstacle. . . . . . . . . . . . . . . . . . . . . . 30
2.5 TEB trajectory representation with n=3 poses . . . . . . . . 33
2.6 The example of exploration graph (a). The block diagram
of parallel trajectory planning of time elastic bands (b). . . . 36
3.1 The block diagram of the proposed autonomous mobile
robot localization systems based on the multiple sensor
fusion methods. . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2 The data flow from sensors into the EKF for robot localization. . . 46
3.3 The extended Kalman filter-based mobile robot localiza-
tion system. . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4 The proposed approaches . . . . . . . . . . . . . . . . . . . . 49
3.5 The sinusoidal trajectories of the mobile robot in three
approaches. . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6 The circular trajectories of the mobile robot in three ap-
proaches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.7 The mean error and mean square error of the robot’s po-
sition of three approaches in two simulations. . . . . . . . . . 55
3.8 The simulation results using PF localization . . . . . . . . . 64
4.1 The navigation framework for autonomous mobile robot. . . 68
4.2 The example scenario of the dynamic environments in-
cluding a mobile robot and two dynamic obstacles. . . . . . 74
4.3 The efficient navigation system based on the EDWA algorithm . . . 78
4.4 The trajectory of the mobile robot and obstacles in Sce-
nario 1 and 2. . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.5 The trajectory of the mobile robot and obstacles in Sce-
nario 3 and 4. . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.6 The minimum passing distance along the robot’s trajectory. 85
4.7 The robot’s velocity along the trajectory of mobile robot. . . 86
4.8 (a) The Eddie mobile robot platform equipped with a laser
rangefinder and a NVIDIA Xavier Developer Kit;  ... ernational Conference on Intelligent Robots and Systems,
2017, pp. 5681–5686.
[28] D. Helbing and P. Molnár, “Social force model for pedestrian dynamics,” Physical
Review E, pp. 4282–4286, 1995.
[29] D. Fox, W. Burgard, and S. Thrun, “The dynamic window approach to collision
avoidance,” IEEE Transactions on Robotics and Automation, vol. 4, no. 1, pp.
23–33, Mar. 1997.
[30] J. van den Berg, C. L. Ming, and D. Manocha, “Reciprocal velocity obstacles
for real-time multi-agent navigation,” in Proceedings of the IEEE International
Conference on Robotics and Automation, 2008, pp. 1928–1935.
[31] J. Snape, J. Van den Berg, S. Guy, and D. Manocha, “The hybrid reciprocal
velocity obstacle,” IEEE Transactions on Robotics, vol. 27, no. 4, pp. 696–706,
August 2011.
[32] D. Zhang, Z. Xie, P. Li, J. Yu, and X. Chen, “Real-time navigation in dynamic
human environments using optimal reciprocal collision avoidance,” in Proceedings
of the 2015 IEEE International Conference on Mechatronics and Automation, Au-
gust 2015, pp. 2232–2237.
[33] D. Claes, D. Hennes, and K. Tuyls, “Towards human-safe navigation with pro-
active collision avoidance in a shared workspace,” in Workshop on On-line
decision-making in multi-robot coordination, 2015.
[34] P. Fiorini and Z. Shillert, “Motion planning in dynamic environments using veloc-
ity obstacles,” International Journal of Robotics Research, vol. 17, pp. 760–772,
1998.
[35] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,” in
Proceedings of the IEEE International Conference on Robotics and Automation,
vol. 2, March 1985, pp. 500–505.
[36] J. Borenstein and Y. Koren, “The vector field histogram-fast obstacle avoidance
for mobile robots,” IEEE Transactions on Robotics and Automation, vol. 7, no. 3,
pp. 278–288, June 1991.
[37] S. Quinlan and O. Khatib, “Elastic bands: connecting path planning and control,”
in [1993] Proceedings IEEE International Conference on Robotics and Automation,
May 1993, pp. 802–807.
[38] S. M. LaValle and J. J. Kuffner, Jr., “Randomized kinodynamic planning,” The
International Journal of Robotics Research, vol. 20, no. 5, pp. 378–400, 2001.
[39] D. Hsu, R. Kindel, J. C. Latombe, and S. Rock, “Randomized kinodynamic motion
planning with moving obstacles,” The International Journal of Robotics Research,
vol. 21, no. 3, pp. 233–255, 2002.
[40] C. Rosmann, W. Feiten, T. Wosch, F. Hoffmann, and T. Bertram, “Trajectory
modification considering dynamic constraints of autonomous robots,” in 7th Ger-
man Conference on Robotics, May 2012, pp. 1–6.
[41] C. Rosmann, F. Hoffmann, and T. Bertram, “Integrated online trajectory planning
and optimization in distinctive topologies,” Robotics and Autonomous Systems,
vol. 88, pp. 142–153, 2017.
[42] C. Rosmann, M. Oeljeklaus, F. Hoffmann, and T. Bertram, “Online trajectory
prediction and planning for social robot navigation,” in IEEE International Con-
ference on Advanced Intelligent Mechatronics (AIM), 2017, pp. 1255–1260.
[43] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger,
R. Wheeler, and A. Ng, “ROS: An open-source Robot Operating System,” in
ICRA Workshop on Open Source Software, vol. 32, 2009, pp. 151–170.
[44] B. P. Gerkey, R. T. Vaughan, and A. Howard, “The player/stage project: Tools
for multi-robot and distributed sensor systems,” in In Proceedings of the 11th
International Conference on Advanced Robotics, 2003, pp. 317–323.
[45] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000.
[46] R. B. Rusu and S. Cousins, “3D is here: Point cloud library (PCL),” in IEEE
International Conference on Robotics and Automation, May 2011, pp. 1–4.
[47] L. A. Nguyen, P. T. Dung, T. D. Ngo, and X. T. Truong, “Localization system
based on the particle filter algorithm and sensor fusion technique for autonomous
mobile robots in the interrupted sensor data,” in 2019 3rd IEEE International
Conference on Recent Advances in Signal Processing, Telecommunications Com-
puting (SigTelCom), 2019, pp. 33–37.
[48] L. A. Nguyen, L. Nghia, D. N. Thang, P. T. Dung, and X. T. Truong, “Local-
ization system based on the particle filter algorithm and sensor fusion technique
for autonomous mobile robots in the interrupted sensor data,” Special issue on
Measurement, Control and Automation, vol. 4, no. 2, pp. 46–53, December 2019.
[49] L. A. Nguyen, P. T. Dung, T. D. Ngo, and X. T. Truong, “Localization system
based on the particle filter algorithm and sensor fusion technique for autonomous
mobile robots in the interrupted sensor data,” in 2020 17th International Confer-
ence on Ubiquitous Robots (UR), Kyoto, Japan, 2020, pp. 309–314.
[50] ——, “An efficient navigation system for autonomous mobile robots in dynamic so-
cial environments,” International Journal of Robotics and Automation, May 2020.
[51] ——, “An integrated navigation system for autonomous mobile robot in dynamic
environments,” Journal of Military Science and Technology, December 2020.
[52] G. Welch and G. Bishop, An Introduction to the Kalman Filter. Chapel Hill, NC,
USA: Tech. Rep. TR-95-041, University of North Carolina at Chapel Hill, 2006.
[53] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2006.
[54] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle
filters for online nonlinear/non-gaussian bayesian tracking,” IEEE Transactions
on Signal Processing, vol. 50, no. 2, pp. 174–188, February 2002.
[55] D. Hennes, D. Claes, W. Meeussen, and K. Tuyls, “Multi-robot collision avoidance
with localization uncertainty,” in Proceedings of the 11th International Conference
on Autonomous Agents and Multiagent Systems, vol. 1, 2012, pp. 147–154.
[56] V. J. Lumelsky and T. Skewis, “Incorporating range sensing in the robot navi-
gation function,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 20,
no. 5, pp. 1058–1069, Sep. 1990.
[57] M. Khatib, H. Jaouni, R. Chatila, and J.-P. Laumond, “Dynamic path modifica-
tion for car-like nonholonomic mobile robots,” May 1997, pp. 2920–2925.
[58] R. Simmons, “The curvature-velocity method for local obstacle avoidance,” in In
Proc. of the IEEE International Conference on Robotics and Automation, 1996,
pp. 3375–3382.
[59] O. Brock and O. Khatib, “High-speed navigation using the global dynamic window
approach,” in Proceedings 1999 IEEE International Conference on Robotics and
Automation (Cat. No.99CH36288C), vol. 1, May 1999, pp. 341–346.
[60] J. Barraquand and J.-C. Latombe, “Robot motion planning: A distributed rep-
resentation approach,” The International Journal of Robotics Research, vol. 10,
no. 6, pp. 628–649, 1991.
[61] D. Claes, D. Hennes, K. Tuyls, and W. Meeussen, “Collision avoidance under
bounded localization uncertainty,” in IEEE/RSJ International Conference on In-
telligent Robots and Systems, October 2012, pp. 1192–1198.
[62] J. Nocedal and S. J. Wright, Numerical optimization. Springer series in operations
research, 1999.
[63] S. Bhattacharya, Topological and geometric techniques in graph-search based robot
planning. Ph.D. dissertation, University of Pennsylvania, 2012.
[64] S. Phillips and S. Narasimhan, “Automating data collection for robotic bridge in-
spections,” Journal of Bridge Engineering, vol. 8, no. 24, 2019.
[65] H. Deilamsalehy and T. C. Havens, “Sensor fused three-dimensional localization
using imu, camera and lidar,” in 2016 IEEE SENSORS, Oct 2016, pp. 1–3.
[66] J. D. Toledo, Jonay, R. Arnay, D. Acosta, and L. Acosta, “Improving odometric
accuracy for an autonomous electric cart,” Sensors, vol. 18, no. 1, 2018.
[67] J. Borenstein and L. Feng, “Measurement and correction of systematic odom-
etry errors in mobile robots,” IEEE Transactions on Robotics and Automation,
vol. 12, no. 6, pp. 869–880, Dec 1996.
[68] C. Rösmann, W. Feiten, T. Wösch, F. Hoffmann, and T. Bertram, “Efficient
trajectory optimization using a sparse model,” in 2013 European Conference on
Mobile Robots, Sep. 2013, pp. 138–143.
[69] G. Ferrer and A. Sanfeliu, “Proactive kinodynamic planning using the extended
social force model and human motion prediction in urban environments,” in
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2014), September 2014, pp. 1730–1735.
[70] X. T. Truong and T. D. Ngo, “Toward socially aware robot navigation in dynamic
and crowded environments: A proactive social motion model,” IEEE Transactions
on Automation Science and Engineering, vol. 14, no. 4, pp. 1743–1760, October
2017.
[71] H. W. Kuhn, The Hungarian Method for the Assignment Problem. Berlin, Hei-
delberg: Springer Berlin Heidelberg, 2010, pp. 29–47.
[72] X. T. Truong, N. Y. Voo, and T. D. Ngo, “RGB-D and laser data fusion-based
human detection and tracking for socially aware robot navigation framework,” in
Proceedings of the 2015 IEEE Conference on Robotics and Biomimetics, December
2015, pp. 608–613.
[73] K. O. Arras, O. M. Mozos, and W. Burgard, “Using boosted features for the de-
tection of people in 2d range data,” in IEEE International Conference on Robotics
and Automation, April 2007, pp. 3402–3407.
[74] M. Munaro, F. Basso, and E. Menegatti, “Tracking people within groups with
rgb-d data,” in IEEE/RSJ International Conference on Intelligent Robots and
Systems, October 2012, pp. 2101–2107.
[75] L. A. Nguyen, P. T. Dung, T. D. Ngo, and X. T. Truong, “Improving the accu-
racy of the autonomous mobile robot localization systems based on the multiple
sensor fusion methods,” in International Conference on Recent Advances in Signal
Processing, Telecommunications Computing, 2019, pp. 33–37.
[76] P. E. Hart, N. J. Nilsson, and B. Raphael, “A formal basis for the heuristic de-
termination of minimum cost paths,” IEEE Transactions on Systems Science and
Cybernetics, vol. 4, no. 2, pp. 100–107, July 1968.
[77] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, “So-
cial LSTM: Human trajectory prediction in crowded spaces,” in IEEE Conference
on Computer Vision and Pattern Recognition, June 2016, pp. 961–971.
[78] A. Rudenko, L. Palmieri, M. Herman, K. M. Kitani, D. M. Gavrila,
and K. O. Arras, “Human motion trajectory prediction: A survey,”
https://arxiv.org/abs/1905.06113v3, 2019.
[79] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural
Networks, vol. 61, pp. 85–117, Jan 2015. [Online]. Available: http://dx.doi.org/10.1016/j.neunet.2014.09.003
[80] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare,
A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie,
A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and
D. Hassabis, “Human-level control through deep reinforcement learning,” Nature,
vol. 518, no. 7540, pp. 529–533, Feb. 2015.
