1. Introduction
Autonomous
indoor navigation represents a fundamental challenge in the field of mobile
robotics, with applications extending across domains such as warehouse
automation, healthcare, and domestic assistance. These systems aim to minimize
human involvement and enhance operational efficiency by autonomously performing
tasks like item delivery, cleaning, and surveillance [1]. A crucial component
enabling such autonomy is Simultaneous Localization and Mapping (SLAM), which
allows a robot to construct a map of an unknown environment while continuously determining its own position within that map [2]. Recent advancements have led to the widespread adoption of autonomous
indoor robots across industries including logistics, healthcare, and personal
service sectors. Earlier localization systems typically depended on fixed infrastructure such as RFID tags, ultrasonic beacons, or visual markers, but contemporary approaches increasingly rely on map-based localization techniques for improved flexibility and scalability [2], [3].
Fig. 1. Obstacle Detection Concept Overview
For example, 2D LiDAR-based mapping can be used to align sensor scans with occupancy grid maps to estimate the robot’s pose. Likewise, vision-based methods have gained traction in recent years. Bajpai and Amir-Mohammadian (2021), for example, demonstrated real-time 3D indoor mapping and navigation using the iPhone’s ARKit framework [4]. In this project,
we present the design and implementation of a cost-effective autonomous indoor
mobile robot developed on a Raspberry
Pi 4 platform running ROS 2. The robot is equipped with a TF Mini LiDAR and
ultrasonic sensors for depth perception and obstacle detection (Fig. 1) [5], [6]. It follows a conventional SLAM-driven navigation architecture (sense → localize/map → plan → act) and leverages open-source ROS 2 modules such as Grid-based Mapping (gmapping) and Navigation. Visualization and monitoring are performed through RViz. The system is tested in a controlled laboratory environment, where the robot autonomously explores, maps its surroundings, and navigates between predefined goal points using only onboard computation and sensors, without reliance on GPS or external tracking systems [7].
Related Work
Autonomous
indoor navigation has been the focus of extensive research within the robotics
community. Numerous studies have examined mapping, localization, and motion-
planning strategies for robots operating in structured indoor environments.
Among these approaches, Simultaneous Localization and Mapping (SLAM) remains a
fundamental technique that enables
a robot to construct and update a map of its surroundings while continuously
estimating its pose [8]. Most modern SLAM frameworks employ LiDAR or camera-
based sensors due to their accuracy, reliability, and suitability for real-time
applications. Early localization systems primarily
relied on landmark- or beacon-based approaches, including RFID transmitters,
ultrasonic beacons, and fiducial markers [9].
Although these systems
provided stable position
estimates, they required pre-installed environmental infrastructure, which limited flexibility.

Fig. 2. Path Planning Concept
In
contrast, map-based localization techniques use onboard sensing and computational resources. For instance, 2D LiDAR scan matching allows the robot to compare current observations with stored occupancy grids to estimate position. In practical implementations, we find that combining wheel odometry with LiDAR or visual data through probabilistic filtering, such as a particle filter or graph-based SLAM, improves robustness and accuracy. Recent
advancements have also
expanded the use of vision-based SLAM [9]. Systems based on monocular or RGB-D
cameras can generate dense environmental models for navigation and augmented
reality. Bajpai and Amir-Mohammadian (2021), for example, presented a
markerless indoor mapping approach using Apple’s ARKit, where smartphones
captured 3D spatial maps for shared localization among multiple devices. While
promising, such solutions depend on high-performance mobile hardware and are not directly
optimized for embedded
robotic platforms [9], [10].
In our
work, we emphasize LiDAR-based SLAM for its robustness and low computational demand. We use a planar
TF Mini LiDAR to perform
two-dimensional mapping in a ROS 2
environment [10]. This design choice simplifies real-time processing and
ensures consistent performance under varying lighting conditions. For path
planning, we examined both classical and sampling-based algorithms. Algorithms
such as Dijkstra’s and A* provide deterministic, optimal paths in grid-
based maps, while Rapidly-Exploring Random Trees (RRT) offer faster exploration in larger or higher-dimensional spaces. As highlighted by Jayaparvathy et
al., RRT can outperform traditional methods in open environments due to its
ability to rapidly sample feasible trajectories. In our system, we utilize the
Navigation2 (Nav2) framework of ROS 2, which supports both global path planners
(A* and RRT) and local controllers such as the Dynamic Window Approach (DWA)
[11].
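To make the grid-based planning discussed above concrete, the following sketch implements A* with a Manhattan-distance heuristic on a small occupancy grid. It is a simplified illustration of the idea behind Nav2’s global planners, not their actual implementation; the grid encoding (0 = free, 1 = occupied) and unit step costs are assumptions for this example.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free cell, 1 = occupied).

    grid is a list of rows; start and goal are (row, col) tuples.
    Returns the cell path from start to goal, or None if unreachable.
    """
    def h(a, b):
        # Manhattan distance: admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()          # tie-breaker so the heap never compares cells
    frontier = [(h(start, goal), 0, next(tie), start, None)]
    parents, g_cost = {}, {start: 0}
    while frontier:
        _, g, _, cell, parent = heapq.heappop(frontier)
        if cell in parents:          # already expanded via a cheaper route
            continue
        parents[cell] = parent
        if cell == goal:             # walk parent links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1           # uniform cost per grid step (assumed)
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(
                        frontier, (ng + h(nxt, goal), ng, next(tie), nxt, cell))
    return None
```

On an occupancy grid the admissible heuristic guarantees the returned path is cost-optimal, which is the property that distinguishes A* from the faster but non-optimal RRT sampling mentioned above.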
Obstacle avoidance is another essential component of our navigation pipeline. As supported by previous studies, range sensors, particularly LiDAR and ultrasonic modules, are effective for detecting nearby obstacles and ensuring collision-free motion. We adopt a sensor-fusion approach, where LiDAR provides precise distance estimation and ultrasonic sensors supplement perception in cases where transparent or reflective surfaces might degrade LiDAR performance [11]. Cameras may also be used for detecting obstacles or visual landmarks, but we find they often demand greater computational resources and are more sensitive to environmental lighting (Fig. 3).
Fig. 3. Obstacle Avoidance
Through
this hybrid sensing strategy, we ensure safe and reliable motion even in cluttered
or dynamic indoor environments. Our review of existing literature indicates
that contemporary indoor robots primarily rely on LiDAR-based SLAM, sensor
fusion, and path planners such as A* and RRT for efficient navigation. Building
upon these established frameworks, we implement and validate a ROS 2–based navigation
system that integrates SLAM Toolbox
and Nav2 modules on a TurtleBot4 platform,
demonstrating effective mapping, localization,
and path execution in real-world
conditions [12].
2. Components and Architecture
This
section describes the hardware components we used for building our Smart Autonomous Indoor Navigation Robot. Each component has its
contribution to the sensing, control, power management, and motion of the
robot. The system architecture integrates all these modules
within a single framework that guarantees reliable and smart indoor navigation.

Fig. 4. TF-Mini-S Micro LiDAR Module
A. TF-Mini-S Micro LiDAR Module
The TF-Mini-S LiDAR sensor is a compact, low-cost, high-performance sensor for precise distance measurement (Fig. 4). It works on the principle of Time-of-Flight (ToF), which calculates distance from the time a pulse takes to travel to an object and back. The module offers stable and accurate measurement and communicates via a UART or I²C interface, making it suitable for embedded systems and robotics. In this project, the TF-Mini-S LiDAR module is used as the main obstacle-detection sensor, enabling accurate mapping of the surrounding environment for intelligent navigation decisions that avoid collisions during autonomous operation (Table 1).
| LiDAR Model | Detection Range | Measurement Frequency | Accuracy |
| TF Mini S Micro LiDAR Sensor | 0.1 m – 12 m | 100 Hz | ±4 cm |

Table 1: Specifications of the LiDAR module
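As context for the UART interface mentioned above, the TF-Mini-S streams fixed 9-byte frames: two 0x59 header bytes, a little-endian distance in centimetres, a little-endian signal strength, two reserved bytes, and a checksum equal to the low byte of the sum of the first eight bytes. A minimal parser, sketched here against that published frame layout (the helper name is ours, not from the vendor SDK), might look like:

```python
def parse_tfmini_frame(frame: bytes):
    """Parse one 9-byte TF-Mini-S UART frame.

    Returns (distance_cm, strength) for a valid frame, or None when
    the header or checksum does not match.
    """
    if len(frame) != 9 or frame[0] != 0x59 or frame[1] != 0x59:
        return None                       # wrong length or missing header
    if (sum(frame[:8]) & 0xFF) != frame[8]:
        return None                       # corrupted frame: checksum mismatch
    distance = frame[2] | (frame[3] << 8)  # little-endian distance in cm
    strength = frame[4] | (frame[5] << 8)  # little-endian signal strength
    return distance, strength
```

In practice the serial stream is scanned for the 0x59 0x59 header before handing each candidate frame to a parser like this.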
B. Raspberry Pi 4 (Model B)
Fig. 5. Raspberry Pi 4 (Model B)
The Raspberry Pi 4 Model B is a powerful microcomputer with a 1.5 GHz quad-core ARM Cortex-A72 processor and 2 GB to 8 GB of LPDDR4 RAM. It supports full operating systems such as Raspberry Pi OS or Ubuntu and includes versatile connectivity options like USB 3.0, HDMI, and Ethernet (Fig. 5). These features make it well suited for computationally demanding tasks such as image processing and real-time navigation algorithms. In our project, the Raspberry Pi 4 serves as the main processing unit, managing high-level decision-making, path planning, and fusion of sensor data from the LiDAR and IMU modules for intelligent autonomous movement.
C. Arduino UNO R3
The Arduino UNO R3 is a widely
used microcontroller board based on the ATmega328P, having 14 digital
I/O pins, 6 analog
inputs, and a user-friendly USB programming interface. With its low power consumption, real-time response, and simplicity, it is well suited for embedded control. In our project, the Arduino UNO acts as a low-level controller for motor actuation and sensor interfacing. It executes commands received from the Raspberry Pi and generates PWM signals to control the motors through the L298N motor driver for quick motion response.

Fig. 6. Arduino UNO R3
D. L298N Motor Driver
Fig. 7. L298N Motor Driver
The L298N is a dual H-bridge driver capable of driving two DC motors at up to 2 A per channel and handling supply voltages from 5 V to 35 V (Fig. 7). It allows bidirectional control of motor rotation and supports PWM-based speed control. In our autonomous robot, the L298N module interfaces with the Arduino UNO to regulate motor speed and direction, allowing the precise movements and turning maneuvers required for smooth indoor navigation.
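The mapping from a commanded (linear, angular) velocity to the two signed PWM duty cycles that drive the L298N can be sketched with standard differential-drive kinematics; the sign of each duty cycle selects the H-bridge direction. The wheel base and maximum wheel speed below are placeholder values, not measurements from our robot.

```python
def velocity_to_pwm(v, omega, wheel_base=0.18, max_wheel_speed=0.5):
    """Map a (linear m/s, angular rad/s) command to signed 8-bit PWM duties.

    Differential-drive kinematics: the left wheel slows and the right
    wheel speeds up for a positive (counter-clockwise) turn rate.
    wheel_base and max_wheel_speed are assumed example values.
    """
    v_left = v - omega * wheel_base / 2.0
    v_right = v + omega * wheel_base / 2.0
    scale = 255.0 / max_wheel_speed

    def clamp(x):
        # saturate to the 8-bit PWM range; sign encodes rotation direction
        return max(-255, min(255, int(round(x * scale))))

    return clamp(v_left), clamp(v_right)
```

The Arduino side would then set the H-bridge direction pins from each sign and write the magnitude to the PWM enable pin.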
E. LM2596 DC-DC Buck Converter
Fig. 8. LM2596 DC-DC Buck Converter
The LM2596 is a high-efficiency step-down voltage regulator that converts DC input voltages of up to 40 V into stable lower outputs such as 5 V or 3.3 V (Fig. 8). Its high conversion efficiency and low thermal output make it well suited for powering multi-voltage components, preventing voltage fluctuations and maintaining consistent system performance (Fig. 9).
Fig. 9. Component Architecture.
3. Methodology
The robot navigation pipeline
operates in real time in a loop of sensing, localization/mapping,
planning, and control:
1.
Sensor Data Collection: The robot continuously receives data from its sensors. The TF Mini LiDAR produces 2D range scans, i.e., arrays of distance measurements. Ultrasonic sensors report close-range distances. Wheel encoders and the IMU provide odometry, i.e., incremental motion estimates. All raw data is published on ROS 2 topics; for instance, the LiDAR node publishes a LaserScan message with obstacle distances around the robot.
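The wheel-encoder odometry in this step can be illustrated with the standard differential-drive dead-reckoning update, where the midpoint of the two wheel travels advances the pose along the heading and their difference rotates it. The wheel-base value is an assumed placeholder.

```python
import math

def update_odometry(x, y, theta, d_left, d_right, wheel_base=0.18):
    """One dead-reckoning step from incremental wheel travel (metres).

    Standard differential-drive odometry: advance by the mean wheel
    travel along the mid-step heading, rotate by the travel difference
    divided by the wheel base (an assumed example value).
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return x, y, theta
```

Accumulated drift in this estimate is exactly what the SLAM correction in the next step compensates for.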
2.
Localization and Mapping (SLAM): SLAM solves the localization and mapping problems jointly. In this work, we consider filter-based SLAM approaches, namely the Rao-Blackwellized particle filter and the Extended Kalman Filter. Each incoming LiDAR scan is matched against the current map to estimate the robot’s pose. In practice, we rely on the SLAM Toolbox package, which internally uses scan matching and pose-graph optimization to update both the robot’s pose and the map. The localization step compares the new scan against the map (e.g., an occupancy grid) to refine the pose, while the mapping step updates the occupancy grid with newly observed free or occupied cells. The result is a progressively built 2D map of the environment. Once the map is built, after some exploration, we can switch if needed to a localization-only mode (e.g., AMCL) using the fixed map.
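The occupancy-grid update in the mapping step follows the usual log-odds formulation: each observation adds a fixed increment to a cell’s log-odds, which can later be converted back to a probability. The increment values below are illustrative tuning constants, not those used internally by SLAM Toolbox.

```python
import math

# log-odds increments for an observed hit / miss (assumed tuning values)
L_OCC, L_FREE = 0.85, -0.4

def update_cell(l_prev, observed_occupied):
    """One log-odds occupancy update for a single grid cell."""
    return l_prev + (L_OCC if observed_occupied else L_FREE)

def occupancy_prob(l):
    """Convert a cell's log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Starting from log-odds 0 (probability 0.5, i.e., unknown), repeated hits push a cell toward occupied and repeated misses toward free, which is how the grid converges as the robot explores.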
3.
Path Planning: The planner computes a collision-free path given a start (i.e., the current pose) and a goal coordinate.
4.
Motion Control and Execution: The planned path is converted into velocity commands. A simple feedback controller (e.g., PID on heading and distance) converts the waypoints into linear and angular velocities, published to the velocity command topic (/cmd_vel). The robot executes the first segment of the path for a short duration, then the loop repeats: it senses again, localizes, and replans as necessary. This allows dynamic re-planning when unexpected obstacles appear. If an obstacle is detected on the immediate path, by either LiDAR or ultrasound, the robot may stop and replan a new path; alternatively, it can apply a reactive avoidance rule such as sidestepping. Throughout, rviz2 displays the evolving map, the robot’s pose, and the planned trajectory for monitoring.
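A minimal version of the waypoint-following controller in this step, here a proportional law on heading and distance rather than a full PID, could look like the following; the gains, speed limits, and the quarter-turn gating threshold are placeholder values.

```python
import math

def waypoint_controller(pose, waypoint, k_lin=0.5, k_ang=1.5,
                        max_v=0.2, max_w=1.0):
    """Proportional controller mapping pose (x, y, theta) -> (v, omega).

    Turns toward the waypoint first and only drives forward once the
    heading error is small. Gains and limits are assumed example values.
    """
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - theta
    heading_err = (heading_err + math.pi) % (2 * math.pi) - math.pi  # wrap
    omega = max(-max_w, min(max_w, k_ang * heading_err))
    # suppress forward motion while the heading error exceeds 45 degrees
    v = 0.0 if abs(heading_err) > math.pi / 4 else min(max_v, k_lin * dist)
    return v, omega
```

In the ROS 2 pipeline, the returned pair would populate the linear.x and angular.z fields of a Twist message on /cmd_vel.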
Key software tools in our pipeline include ROS 2 (Foxy/Galactic), SLAM Toolbox (graph-SLAM or filter-SLAM), the Nav2 stack for planning and execution, and sensor drivers. We programmed the high-level logic in C++ and Python ROS 2 nodes. The use of standard ROS 2 interfaces ensures modularity (for example, swapping between gmapping and SLAM Toolbox without changing planner code) (Fig. 10).
Fig. 10. Flow Chart
4. Results and Discussion
The Smart Autonomous Indoor Navigation Robot was successfully designed, implemented, and tested in controlled indoor environments to evaluate its performance in obstacle detection, path navigation, and autonomous decision-making under different conditions. Our proposed robot demonstrates the successful integration of hardware, with the Raspberry Pi 4 communicating with the Arduino UNO and the associated sensors and actuators [13]. The TF-Mini-S LiDAR module provided accurate distance measurements within a range of 0.1–12 m with error below ±4 cm. This accuracy allowed the robot to detect obstacles and change its route in real time.
During navigation testing, we placed the robot in difficult indoor layouts containing both static and dynamic obstacles. The combination of LiDAR and ultrasonic sensors demonstrated the effectiveness of multi-sensor fusion, enabling the robot to perceive both far- and near-range objects accurately. The MPU-6050 IMU enhanced stability and orientation control by maintaining balance, which allowed smoother turns and reduced drift.

Fig. 11. SLAM-based mapping
While the robot navigated, we observed no collisions, and it followed optimized paths to the target destination in more than 90 percent of test runs, showing the robustness of the navigation algorithm and sensor coordination.
Fig. 12. Prototype Image I
The
RS-775 DC motors, controlled by the L298N motor driver, ensured reliable motion
with adequate torque
on indoor surfaces. The LM2596 DC-DC buck converter maintained stable voltage levels for the different integrated modules, which helped ensure consistent performance throughout extended operation periods. The active buzzer provided audible feedback on operational states such as power-up, proximity to obstacles, and task completion [14]. We found our system to be cost-effective and efficient, with integrated hardware and a flexible software architecture open to further extensions. The results justify the feasibility of an affordable autonomous navigation platform capable of carrying out indoor mapping, collision avoidance, and path optimization with minimal human supervision. The system’s successful operation and performance under real-time conditions indicate its applicability in areas ranging from home automation and warehouse logistics to indoor service robotics.
5. Conclusions
This
research work focused on the design and implementation of a smart autonomous
indoor navigation robot that could perform effective navigation and obstacle avoidance
within confined indoor environments.
Fig. 13. Prototype Image II
The proposed project integrates various sensors and control modules for autonomous operation, including high-precision distance mapping using the TF-Mini-S LiDAR, proximity sensing through HC-SR04 ultrasonic sensors, and motion detection through an MPU-6050 [15]. Data fusion and decision processing are handled by the Raspberry Pi 4, while the Arduino UNO R3 handles low-level motor control through the L298N motor driver and RS-775 motors to achieve stable and synchronized movement. The experimental results show that the robot can detect obstacles effectively, maintain balance, and move smoothly in indoor spaces. The integrated LM2596 buck converter provides stable power to all components without fluctuations, and the active buzzer provides system feedback for better safety and alerting [16], [17].
Future work will focus on improving the robot’s autonomy and perception using SLAM techniques, computer-vision-based object detection, and machine-learning-based decision systems. These enhancements could further increase the efficiency, adaptability, and intelligence of our proposed indoor navigation robot, making it suitable for a wide range of domestic, industrial, and research applications [18].
6. Future Scope
The development of the Smart Autonomous Indoor Navigation Robot lays a strong foundation for further research and enhancement in autonomous systems. Although the robot prototype we presented is capable of successful and reliable indoor navigation with obstacle avoidance, further enhancements can extend its capabilities. Refined SLAM techniques would allow the robot to build and maintain dynamic maps of its environment, increasing its spatial awareness and localization accuracy. With the use of cameras and advanced image-processing techniques, computer-vision systems could allow the robot to recognize objects, signs, and dynamic obstacles, improving decision-making in complicated indoor spaces.
In addition, machine-learning algorithms could be infused for optimized path planning, with the robot learning from previous navigation experiences to adapt to changing environmental conditions. Future development of the system might include cloud connectivity and IoT integration, enabling remote monitoring, real-time data analysis, and multi-robot coordination, as well as improved power efficiency and motor-control precision for enhanced operational stability and reliability. All these improvements will allow the robot to perform tasks in a wider range of applications, including warehouse automation, indoor delivery, surveillance, and assistive robotics in collaborative environments.