A Retrospective on Robotics Applications with Artificial Intelligence in the Global Pandemic Scenario

— The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has affected the entire world and claimed about 4.25% of infected people's lives. In this challenging situation, about 25,000 frontline healthcare workers were infected with COVID-19 while providing support for infected people. Frontline workers are highly exposed to the COVID-19 virus due to the absence of appropriate drugs or vaccines. The increasing spread of the virus has led to a shortage of healthcare workers in different countries. In this scenario, to protect frontline workers from the COVID-19 virus, robots integrated with Artificial Intelligence are employed against pandemic diseases. This paper evaluates the contribution of robotics, combined with Artificial Intelligence, to the COVID-19 pandemic. Initially, the existing robotics models employed for the COVID-19 pandemic are examined. Through this analysis, the Simultaneous Fast Filtering Localization and Mapping (SFFLAM) model is developed for the hospital environment to promote frontline-worker safety in the COVID-19 pandemic. The proposed SFFLAM model uses extended Kalman filtering integrated with the sectorial error probability (SEP) for robot localization. The examination shows that, through the integration of artificial intelligence, robots are employed as medical robots, UV-disinfection robots, social robots, COBOTs, and drones. The examination also shows that the proposed SFFLAM model exhibits improved robotics performance for localization and processing. The application of robots with artificial intelligence increases overall robot performance in hospitals during pandemic situations.

workers' unavailability or a minimal number of medical facilities available for the population infected with diseases. Additionally, an issue with the COVID-19 pandemic is the inadequate supply of quality PPE to health workers, which accelerates the spread of diseases throughout the world [4]. With a tremendous number of frontline health workers on a mission to serve and protect through the global pandemic, innovation and technology are driven toward the effective management and treatment of diseases [5]. Presently, robotics and Artificial Intelligence techniques exhibit significant performance in withstanding the present situation. As stated, the major characteristic of the COVID-19 virus is effective community transmission [6]. Community transmission of the virus is minimized or hampered through the use of robots such as medical robots, UV-light robots, drones, automated robots, and collaborative robots in hospitals and associated labs [7]. Additionally, artificial intelligence is implemented for patient monitoring, for categorizing the different stages of infection, and for the identification of severe cases for end treatment [8]. Artificial intelligence-based robotics has been effectively implemented in different countries and identified as fruitful for the identification of viral transmission in hospitals and metropolises. This paper evaluates the incorporation of robots in disease management integrated with artificial intelligence. The examination is based on the consideration of automation-based disease management for frontline workers during COVID-19. Through the analysis, the SFFLAM model is employed for the localization of robots in the automated process, integrated with artificial intelligence.

II. ROBOTS IN DISEASE MANAGEMENT
Robots were invented to perform effective management of complex tasks, minimizing workload and labor forces. However, in the present scenario, the cause and loss need to be estimated [9]. At present, robotics is utilized for the effective management of diseases to reduce viral disease spread and contraction. The design of micro robots is structured to reduce physical interaction with immune-system cells. However, the design and surface parameters determine the locomotion performance of micro robots.
R. Djembong is with Hebei University of Technology, China. M. T. W. Piash is with Hunan University, China (e-mail: Tawsifpiash1923@hnu.edu.cn). Authors: Md Musa Haque, Chandan Sheikder, Rodrigue Djembong, and Md Tawsif Wahid Piash.

A. Medical Robots
Medical robots comprise different types, each designed uniquely to perform certain tasks or functions. The micro robot's design and structure reduce physical interaction with the immune system [10]. However, the surface-borne design comprises parameters critical to the locomotion and performance characteristics of micro robots. The performance of micro robots is presented in Fig. 1.

B. Clean Robots
In the healthcare system, cleaners play the critical role of maintaining cleanliness in hospitals and healthcare centers [11]. During a pandemic, demand skyrockets as diseases threaten; to overcome this issue, cleaner robots integrated with human intelligence and machines are employed in the healthcare sector to increase cleaning efficiency and safety. Even companies that provide labor supply are subjected to an acute labor shortage in pandemic situations due to self-quarantine or other illnesses. Therefore, commercial cleaning robots need to be employed, in addition to autonomous cleaning solutions, for quality maintenance and reduction of exposure to highly pathogenic infection risk. Within healthcare centers, a set of cleaning standards is established for the safety of both patients and healthcare providers [12]. However, in the COVID-19 pandemic situation, several criteria are implemented based on certain standards and procedures. Cleaner robots are designed with consideration of a certain set of safety standards and healthcare regulations to be effective and helpful. Cleaner robots do not sneeze, cough, or shake hands, which actively minimizes the spread of diseases within and around hospitals [13]. In addition to cleaning robots, scrubbers, vacuum sweepers, and shelf scanners are utilized in the metropolis. Autonomous mobile robots (AMRs) are enabled in different cities for labor-intensive cleaning tasks in pandemic situations.

C. Ultra-Violet Light Robots
The coronavirus spreads through droplets from the nose or mouth, released by sneezing or coughing, from an infected person to others [14]. The other mode of transmission is direct contact, when a healthy individual touches utensils or surfaces that an infected person has touched. Coronavirus particles persist on copper surfaces, cardboard, wood, paper money, stainless steel, glass, surgical masks, cloth, and polypropylene plastics. Robots are controlled with ultraviolet-light-based devices for efficient disinfection of materials employed in health centers. Ultraviolet rays can be grouped as UV-A (320-400 nm), UV-B (280-320 nm), and UV-C (100-280 nm). These robots radiate UV rays in the 100-280 nm wavelength range, which affects the skin and corneas and can be carcinogenic in humans. Additionally, UV-C sterilizes objects effectively; PX-UV (Pulsed Xenon Ultraviolet) is employed to disinfect material surfaces and objects in hospitals. These robots provide a cleanliness rating in the 50th to 99th percentile for hospital devices [15]. Conventional UV cleaning methods involve microbe killing and the cleaning of floors and patient rooms, performing deep cleans through concentrated UV light. The robots comprise different sensors utilized in self-driving, voice-enabled disinfection machines that kill microbes with high-wavelength rays. UVD robots deploy lidar sensors and digital mapping of hospital rooms, with automated indication of rooms and robots for disinfection. The robot relies on Simultaneous Filtering Localization and Mapping (SFLAM) for autonomy, emitting 20 joules per square meter per second at a wavelength of 254 nm for the elimination of microorganisms such as bacteria and other harmful agents [16]. The xenon lamp is used in these robots for a high-intensity germicidal spectrum of 200-315 nm. The robot's light strikes destroy the virus within two minutes, achieving a four-log (99.99%) reduction with reduced time. Ultra-violet robots are used for sanitizing patient rooms after discharge from hospitals [17]. In the COVID-19 pandemic, Vanora robots were utilized in Mangalore to sterilize a complete room in 4 minutes.
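The exposure time needed to deliver a given germicidal dose follows directly from the stated irradiance (20 J per square meter per second, i.e., 20 W/m², at 254 nm). A minimal sketch in Python; the target dose value below is an illustrative assumption, not a figure from this paper:

```python
# Estimate UV-C exposure time from irradiance and a target dose.
# The 20 W/m^2 irradiance is taken from the text; the 40 J/m^2 target
# dose in the example is a hypothetical value for illustration only.

def exposure_time_s(target_dose_j_per_m2: float,
                    irradiance_w_per_m2: float = 20.0) -> float:
    """Seconds needed to deliver a UV-C dose, since dose = irradiance * time."""
    if irradiance_w_per_m2 <= 0:
        raise ValueError("irradiance must be positive")
    return target_dose_j_per_m2 / irradiance_w_per_m2

# Example: a hypothetical 40 J/m^2 inactivation dose
t = exposure_time_s(40.0)   # -> 2.0 seconds
```

This kind of dose bookkeeping is what lets a UVD robot decide how long to dwell at each disinfection waypoint.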

D. Mobile Robots
Mobile robots are engineered for the navigation and transport of high-risk patients from contaminated areas, preventing direct contact with health workers. These robots are equipped with high-end sensors and cameras to record patients' blood pressure, pulse rate, and temperature, and to perform regular vision-based monitoring [18].
Additionally, these robots provide food and medicines to isolated and quarantined patients within healthcare centers. Therefore, with effective monitoring, the patient's condition and diseases are categorized so that doctors can treat patients and help them recover from severe illness. During COVID-19, mobile robots helped provide medicines and food to patients in hospitals on behalf of frontline workers and doctors.

E. Social Robots
Social robots are considered an important part of society, supplying and delivering people's basic requirements during complete lockdown in a pandemic situation. Care-O-Bots support adults and elderly individuals by delivering basic needs such as drinks and meals. PARO robots are effective for entertaining children at home during lockdown. Telepresence robots are effective for surveillance and health monitoring of people in lockdown [19]. Hence, social robots are effective for maintaining social distancing in a pandemic lockdown environment. The challenges associated with COVID-19 for social robots are screening, diagnosis, and on-demand mental therapy. The pandemic is prevented through social distancing, quarantine, and isolation. Therefore, industrial robots are utilized in dynamic environments to maximize positive social impact. Social robot applications are effective for automatic temperature monitoring in public places such as airports. Automatons reduce the exposure of hospital staff while handling patients. These robots use affordable QR codes for direct patient help and maintenance, minimizing the exposure of human staff during COVID-19.

III. COLLABORATIVE AUTONOMOUS ROBOTS
Recently, robots have been employed for manufacturing complex tasks and improving the interaction between robots and humans. These robots collaborate with humans to perform tasks as collaborative robots (COBOTs). Traditionally, industrial robots were evaluated in fenced areas; collaborative robots are implemented without fences and interact directly with humans. Collaborative robots comprise different sensors for lidar, motion monitoring, acceleration, and many more factors [20]. These robots are not harmful to those working with them; human engagement is evaluated based on a closeness-level assessment built on COBOT criteria and automation level. COBOTs cover a broad range of applications, including medical devices for the automated diagnostics industry. Through COBOTs, blood samples are collected at Copenhagen University Hospital in Gentofte, Denmark, where COBOTs enabled about a 20% increase in sample analysis. During the pandemic era, Nova Health Traditional Chinese Medicine (TCM) provided a physio-massage robot for the knee to serve patients. COBOTs are used in hospitals during the pandemic to manage labor shortages in laboratories and industries, reducing the impact on the economy and supporting disease management during COVID-19. Over past decades, COBOTs have been utilized and implemented in factories for disease management.

A. Drones
Drones are also a promising technology that plays a major role in managing the COVID-19 pandemic era, monitoring and detecting clustering of people during lockdown. Additionally, drones reduce human energy requirements and the threat of diseases [21]. As an example, drones were implemented in China for multipurpose broadcasting. To reduce contact between humans, drones are effectively utilized for communication with people in the field, collecting pandemic-related health information and status. In some countries, disinfectant spray is delivered by drones to reduce the spread of pathogens, up to 50 times faster and more efficiently than traditional methods. Recently, drone designs have comprised an overall capacity of 16 L of disinfectant with a coverage of 100,000 square meters per hour. In China, drones are fitted with infrared cameras to measure the temperatures of large crowds, providing adequate safety to health workers against the coronavirus. An attractive use of drones is the provision of PPE, medicines, and food in red quarantine zones. In China, drones were utilized to transport test samples between hospitals and the Center for Disease Control and Prevention in Xinchang County, Zhejiang Province, covering a distance of 3 km within 6 minutes; the same trip takes nearly 20 minutes by road.

IV. ARTIFICIAL INTELLIGENCE IN COVID-19
At present, SARS-CoV-2 infection is diagnosed through RT-PCR (Reverse Transcriptase Polymerase Chain Reaction) using nasal swabs from patients. However, this technique does not reveal the patient's severity level, as the process takes 6-48 hours to yield results, and PCR sample kits are limited. A Computed Tomography (CT) scan is considered a valuable component for the estimation and computation of severity levels in patients. Artificial Intelligence (AI) is utilized in real time to assess different stages of infection and severe cases. Conventional techniques show that AI is used for training and classification of COVID-19 disease using CT scans with an accuracy level of more than 90%. In another view, CT is not the preferred tool for COVID-19 diagnosis, since the infection looks similar to influenza or pneumonia. AI-based algorithms use deep convolutional neural networks (CNNs) on CT scans together with Support Vector Machine (SVM), Random Forest, Multilayer Perceptron (MLP), and Decision Tree classifier models [22]. Comparative analysis stated that the decision-tree classifier model exhibits improved performance after tuning on the dataset; decision-tree accuracies are higher than those of the other models. The CT-scan pipeline is designed by integrating a deep-learning model with the decision-tree classifier, using the patient's symptoms, history of exposure, and laboratory testing to record and diagnose the various stages of viral infection and to evaluate information for treating the disease.
With the PyTorch framework, a neural network is implemented for COVID-19 detection and classification using CXR images acquired from chest X-rays, classifying abnormal and normal infection. The classification performance of the decision tree exhibits an accuracy level of 98%, with an overall average accuracy of 95%. AI is also utilized to distinguish asymptomatic and symptomatic cases for segregated treatment of the disease. Specifically, visual features are evaluated from volumetric chest CT images for COVID-19 with the COVNet model. The model was trained on a collected dataset of 4356 chest CT samples from 3322 patients, acquired from 6 medical centers between Aug 16, 2016 and Feb 17, 2020. The CT scans comprise 1296 COVID-19-infected samples (30%), 1735 pneumonia samples (40%), and 1325 non-pneumonia samples (30%). COVID-19-positive cases were confirmed through RT-PCR between Dec 31, 2019 and Feb 17, 2020. Using deep-learning technology, the medical imaging model applies chest radiography processes such as X-ray or computed tomography for the detection and diagnosis of pneumonia and COVID-19. To perform classification, severely infected patients are examined with Automated X-ray Imaging Radiography systems (AXIRs), and the classification is reviewed by doctors to reach a decision for clinical assessment. The COVNet model uses the imaging method with standard imaging protocols. With chest radiographs, viral pneumonia is monitored for infection. However, a drawback observed with chest CT is the misinterpretation of epidemic CT images. Secondly, CT image morphology and pathologic severity are evaluated for the classification of normal and mild infection in the CT images.

A. Reduce Workload in Healthcare
A sudden outbreak of pandemic disease increases patient admissions to hospitals and healthcare centers, leaving frontline healthcare professionals to manage a heavy workload of treatment and disease management. In this scenario, AI is considered effective for big-data analysis of patients to cope with disease treatment.

B. Applications of Artificial Intelligence in COVID-19
A COVID-19 design analyzer is installed in Baguio General Hospital, Philippines, to classify COVID-19 patients through examination of patients' CT scans. Similarly, Brunel University London uses an AI algorithm for the analysis of image scans, identifying and estimating patients' infection through COVID-19 monitoring and treatment. AI is utilized as an effective tool for identifying highly and moderately infected zones in New York. Consequently, hospitals in China use AI-based CT scan evaluation with an efficiency of 84%, while radiology-based evaluation is measured at 75%.

C. Development of Drugs and Medicines
To prevent pathogenic diseases, appropriate drugs need to be identified, designed, and provided for treatment; such methods are implemented for COVID-19 in drug development and discovery. These technologies are considered effective tools for speeding up drug testing in real time. Therefore, drug discovery is considered an effective tool for diagnosis and development.

D. Role of 3D Printing
To manage COVID-19 outbreaks, 3D printing technology is utilized for manufacturing medical devices in minimal time without failing to meet demand. 3D printing is also utilized to maintain an effective supply chain in the pandemic situation. The technology exhibits efficient performance in generating and manufacturing supply to compensate for the huge demand. 3D-printable medical models deliver effective performance for respirators and PPE.

E. Respiratory Support Systems
Amid regional shortages, Italy faced the worst situation with respiratory masks and non-invasive ventilation using CPAP/PEEP (Continuous Positive Airway Pressure/Positive End-Expiratory Pressure) to provide respiratory support in COVID-19. Through reverse-engineered 3D printing, model values are evaluated for automated ventilators in pressure-controlled respiratory support systems, identified as efficient support for severely ill patients.

F. Personal Protective Equipment
In the present scenario, 3D printing is implemented globally to design and manage a plethora of reusable personal protective equipment to protect patients and healthcare workers. Respiratory masks with filter cartridges and wearable devices are primarily manufactured on low-cost desktop extrusion filament printers. In the global pandemic scenario, 3D-printed face masks are utilized by medical staff and doctors during COVID-19. The University of East Anglia, England uses 3D-printed ventilator parts, masks, and critical equipment to fight COVID-19.

V. DESIGN OF SFFLAM WITH KALMAN FILTERING
Mathematical filtering is the process of finding the "best estimate" from noisy data, which amounts to "filtering out" the noise. A Kalman filter does not merely clean up the data measurements but also projects these measurements onto the state estimate. The Kalman filter applies to linear discrete systems, whereas its extended version handles non-linear systems. The state of a system at any time T can be analyzed as a random variable, where the uncertainty about the state is captured by a probability distribution; this provides an estimate of the next state and can solve a non-linear problem.
The Fast SFFLAM algorithm gives mobile robots the capability to explore unknown environments with few landmark-estimation problems. In FastSFFLAM, similar to the Extended Kalman Filter SFFLAM, the positions of a robot are accepted as behaving according to the motion model with a fundamental density. It must be guaranteed that instructions are independent of each other, which is checked before integrating the parallel system into the environment.
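The predict/update cycle at the heart of Kalman filtering can be sketched as a minimal one-dimensional filter in pure Python. All numeric values below (process and measurement noise, the measurement sequence) are illustrative assumptions, not parameters from this paper:

```python
# Minimal 1D Kalman filter: estimate a scalar state from noisy measurements.
# State is modeled as constant; q and r are illustrative noise variances.

def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Return filtered estimates. q: process noise variance,
    r: measurement noise variance, x0/p0: initial state and variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state is assumed constant, so only uncertainty grows
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true value of 1.0
est = kalman_1d([1.2, 0.9, 1.1, 1.05, 0.95])
```

The extended version used by SFFLAM replaces the scalar state with the robot pose and linearizes the non-linear motion and measurement models at each step, but the gain-weighted blend of prediction and measurement is the same.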

A. Mathematical Modeling
Bernstein's conditions state: let P1 and P2 be two processes such that:
• Ii is the input variable set for process Pi.
• Oi is the output variable set for process Pi.
If P1 || P2 (parallel execution of the processes), then:
• I1 ∩ O2 = ∅ (all inputs and corresponding outputs are disjoint)
• I2 ∩ O1 = ∅
• O1 ∩ O2 = ∅
i. For the implementation of this parallelism, a practical approach is needed that checks inputs and outputs; it includes an array of sensors such as an accelerometer and SONAR. Through these sensors one can achieve parallelism, as they are portable and hardware-independent, and they were further used to determine the degrees of freedom (DoF) of the robot pose.
ii. Parallelization is important to cut down the run time of a program: one can break it into pieces and run them individually in parallel on multiple processing units (GPUs). The compiler first discovers parallel units and then carries out dependency analysis to figure out which segments are independent and can execute concurrently.
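The three disjointness conditions above can be checked directly on the read and write sets of two code segments. A minimal sketch; the variable names in the example are illustrative:

```python
# Check Bernstein's conditions for two processes, given their
# input (read) and output (write) variable sets.

def bernstein_parallelizable(i1, o1, i2, o2):
    """True iff P1 and P2 may run in parallel:
    I1 ∩ O2, I2 ∩ O1, and O1 ∩ O2 must all be empty."""
    i1, o1, i2, o2 = map(set, (i1, o1, i2, o2))
    return not (i1 & o2) and not (i2 & o1) and not (o1 & o2)

# P1: c = a + b   -> reads {a, b}, writes {c}
# P2: e = d * 2   -> reads {d},    writes {e}
ok = bernstein_parallelizable({"a", "b"}, {"c"}, {"d"}, {"e"})   # True

# P3: a = c + 1   -> reads {c}, conflicting with P1's write of c
bad = bernstein_parallelizable({"a", "b"}, {"c"}, {"c"}, {"a"})  # False
```

A compiler's dependency analysis performs exactly this kind of test, at scale, before scheduling segments on separate processing units.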

B. Ancillary Sensory Components
One of the criteria for any autonomous robot is the ability to sense the environment with the help of captured data. The selection of ancillary sensory components is needed for the localization of autonomous robots, as it affects the accuracy and precision of the mapped environment.
The list of ancillary sensory components from which the data is captured is as follows.

C. 3D Accelerometer Sensors
The proposed SFFLAM uses the TI (Texas Instruments) eZ430-Chronos watch, which can be integrated wirelessly with the system. The Chronos watch has various sensors, but we utilized only the 3D accelerometer sensor, as it is a portable device with a high degree of precision and an adjustable built-in accelerometer.

D. Bluetooth Sensor (HC-06)
In this research, the Bluetooth module HC-06 was used and deployed on an experimental robot so that one can navigate it with the help of the Arduino Remote Control application. Through the degrees of freedom, we can determine the number of values involved in the calculation, which is used in upcoming objectives. The navigation of the robot was done using a remote area network (RAN). The sensor is well interfaced and secured with the system, comprising the feature of programmability. Various works have been done in the area of robotics, such as Extended Kalman filtering (motion in a circular path) and various particle-filtering algorithms, to obtain an accurate estimation of poses. The work was applied to geophysical parameters, such as the mechanical parameters used to identify impulse, force, and friction, which have been observed and analyzed; with these, the geophysical data obtained from robot localization can be more accurate. For the experiment in this research work, the accelerometer mode of the TI watch was used. The watch sends acceleration data to the Control Centre in the form of Cartesian coordinate parameters X, Y, and Z. The aforesaid center monitors the accelerometer-mode data (TI watch) received during the navigation of the robot.
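The Cartesian acceleration stream from the watch can be turned into a rough pose estimate by integrating twice; in practice drift accumulates quickly, which is why a filter (EKF) is layered on top. A minimal single-axis sketch; the sample values and timestep are illustrative assumptions:

```python
# Dead-reckon a rough single-axis position from sampled acceleration by
# double integration with a fixed timestep. The acceleration profile and
# dt below are illustrative, not recorded experimental data.

def integrate_acceleration(accel_samples, dt):
    """Return (velocity, position) after integrating acceleration twice."""
    v = x = 0.0
    for a in accel_samples:
        v += a * dt    # first integration: acceleration -> velocity
        x += v * dt    # second integration: velocity -> position
    return v, x

# Accelerate at 1 m/s^2 for 1 s (10 samples at dt = 0.1), then coast for 1 s
v, x = integrate_acceleration([1.0] * 10 + [0.0] * 10, dt=0.1)
```

Any bias in the raw accelerometer readings grows quadratically in the position estimate, which motivates fusing this dead-reckoned pose with the filter's measurement updates.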

VI. ANALYSIS OF SECTORIAL ERROR PROBABILITY WITH SFFLAM ALGORITHMS
Sectorial Error Probability (SEP) is the localization probability in a mapped area when a robot navigates in a sectorial curve with respect to the actual pose. SEP is used with a mathematical-filter-based SFFLAM algorithm that tracks the robot for localization, estimating the pose error computed from measurements. The SEP is associated with the (x, y) coordinates of the target location and is defined by the radius of a small sector at different angles, giving the probability of containing the true target-localization coordinates. EKF-based G-mapping SFFLAM is used for map generation and pose estimation.
When the designed robot gets moved, then the robot has to localize in the experimental path, to avert a collision concerning objects.The robot is moved at different angles.
Hence, it is observed from Table I that the navigation error deviation in the mentioned sectors is analyzed for the aforementioned algorithms. For a 0°-15° sectorial turn angle, the error deviation of G-mapping-based EKF varies from 3 to 5 centimeters. For the Mono SFFLAM algorithm the error deviation varies from 2 to 6 centimeters, whereas for the Full SFFLAM algorithm it varies from 3 to 8 centimeters. For a 15°-30° sectorial turn angle, the error deviation of G-mapping-based EKF varies from 2.6 to 4.5 centimeters. For the Mono SFFLAM algorithm the error deviation varies from 2.6 to 4.5 centimeters, whereas for the Full SFFLAM algorithm it varies from 3.5 to 4 centimeters. Hence, it is observed that the deviation of error gets minimized (distance decreases) as the sectorial turn angle increases; therefore, G-mapping-based EKF gives optimized results for finding the SEP.
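The SEP can be estimated empirically as the fraction of localization runs whose position deviation falls within a given sector radius. A minimal sketch; the deviation samples below are hypothetical, not the values of Table I:

```python
# Empirical sectorial error probability (SEP): the fraction of recorded runs
# whose position deviation falls inside a sector of the given radius.
# The deviation samples (in cm) are illustrative, not from this paper.

def sectorial_error_probability(deviations_cm, radius_cm):
    """Return P(deviation <= radius) over the recorded runs."""
    if not deviations_cm:
        raise ValueError("no deviation samples given")
    inside = sum(1 for d in deviations_cm if d <= radius_cm)
    return inside / len(deviations_cm)

# Hypothetical per-run deviations for one sectorial turn angle
devs = [3.0, 4.1, 3.5, 5.0, 4.4, 3.2, 4.8, 3.9]
p = sectorial_error_probability(devs, radius_cm=4.5)   # 6 of 8 runs -> 0.75
```

Sweeping the radius over the deviation ranges reported for each algorithm is one way to compare G-mapping EKF, Mono SFFLAM, and Full SFFLAM on the same footing.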

A. Optimization with SFFLAM
Pre-pinned buffers are logical hardware that extends the TLB (Translation Lookaside Buffer), used to store frequently used instructions and data. They cannot be moved around; that is, their addresses must be kept fixed. They have the property of preserving values and storing them in registers for faster accessibility (like static and register variables in C). Therefore, these key variables are pinned at creation time to decrease the memory requirement and increase the performance and efficiency of the system. These buffers are used:
i. To achieve parallelism through a GPU containing multiple PEs, as our data exists in multiple copies (Bernstein conditions).
ii. To perform basic operations using the Extended Kalman Filter (a mathematical filter) that relies on matrix operations. Matrix operations are computationally intensive and can be parallelized with SIMD instructions. As SIMD has multiple PEs, they can execute in parallel (i.e., they work together on a single task). The basic matrix-multiplication algorithm optimized for SIMD instructions, based on system control and principal data, is used along with spread vector data.
iii. It is important to rearrange arithmetic expressions, deleting non-parallel terms, to facilitate SIMD instructions on matrix operations such as addition and multiplication.
Due to pinned buffers, the memory requirement is reduced, resulting in increased performance and utility. So the usage of pre-pinned buffers along with the said SFFLAM provides an optimum solution to the system.
As shown in Fig. 8, storing the objects demands a large memory, which can result in no memory being available to store an object. To solve this problem, we compressed the available memory blocks to store large objects using defragmentation, along with pinning the memory locations so that the objects could be stored in less memory space, as shown in Fig. 9. A shader core is a single shared-memory, vector-like resource that accepts instructions and executes them to manipulate the data shown in Fig. 9. Some ALUs combine with some extra global memory to allow communication between vertex and pixel shaders; this is called a compute shader. Shaders are arranged so that columns are vertices and rows are pixels. While using a compute shader, it is important to focus on the performance of the thread-group size used in SIMD. Shader instructions are fetched, decoded, and executed for a group of inputs and processed in parallel. As the compute units do not process a single element, they work with multiple elements through the instruction cache. So here we utilized the compute unit to work as a SIMD unit. GPUs are specifically built for data-parallel computations with high arithmetic intensity, as most attached processors (GPUs) are pipelined in nature. Here, GPUs act as attached processors designed so that their floating-point calculations are enhanced. The GPU kernel uses GPUs for accelerating SIMD instructions and is implemented in the AMD Radeon environment. Hence the performance of the shader depends on the SIMD computation instructions, because as the instructions decrease, parallelism increases.
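The data-parallel structure that the GPU exploits in the EKF's matrix multiplications can be illustrated in plain Python: each output row of a product depends only on one row of the left operand and all of the right operand, so the rows satisfy Bernstein's conditions and can be assigned to separate SIMD lanes or threads. A pure-Python sketch of the decomposition, not the SIMD implementation itself:

```python
# Row-parallel matrix multiply: each output row of A @ B reads one row of A
# and all of B, and writes only its own row, so rows are independent units
# of work (Bernstein's conditions hold between them).

def matmul_row(a_row, b):
    """One independently computable output row of A @ B."""
    cols = len(b[0])
    return [sum(a_row[k] * b[k][j] for k in range(len(b)))
            for j in range(cols)]

def matmul(a, b):
    # The map over rows is the data-parallel unit handed to each lane/thread.
    return [matmul_row(row, b) for row in a]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # [[19, 22], [43, 50]]
```

On a real GPU each `matmul_row` call would become a work-item over pinned input buffers; the pure-Python loop only makes the independence of the rows explicit.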

VII. EXPERIMENTAL ANALYSIS
The experiments were carried out on a robot designed to integrate SFFLAM. The robot was driven through our organization's robotics laboratory using a Bluetooth controller module, over different surface types and over time windows starting from 1 second, by means of an Android-based application as shown in Fig. 10. For the experiment, the accelerometer mode of the eZ430 Chronos was used: real-time sensor data were acquired over Bluetooth and analyzed for localization. When a robot moves in a given area, it must first map the area and then localize itself on that map. The onboard eZ430 Chronos captured the force observed along the X, Y, and Z coordinates and transmitted the acceleration information to the data centre. The robot was navigated 17 meters in 48 seconds across four surface types (boulder, glaze, rough tile, and marble flooring) over time windows starting from 1 second, for 13 optimized runs.

To calculate impulse, that is, the force applied over the period T, the robot was navigated inside the research area; the following tables show the results of the robot's movement. At 7.38 seconds, a high peak in friction was observed when the robot navigated the glaze, boulder, and rough surface arenas, whereas on marble flooring friction dropped due to the treads on the wheel. Between 3.8 and 9.3 seconds, the friction first increased linearly and then showed a linear negative change as the surface texture changed; after that, the friction became fixed and the robot navigated in a straight line. As inferred from Fig. 11, moving the robot over the various surface types yields distinct data sets because of the treads on the wheel.

The robot's drive consists of a 12 V DC motor, an L298 motor driver, and a wheel with horizontal tread lines 4 mm wide. The robot is moved by actuators (DC motors) and frictional steering in the experimental field. An Arduino UNO provides an analog output as PWM (Pulse Width Modulation), which controls the motor through an H-bridge. Because of self-inertia and the geophysical parameters involved, beyond a certain speed these effects are overridden and the movement becomes smooth, as the speed of the robot wheel absorbs friction and the other geophysical disturbances.
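The impulse computation described above, integrating the force derived from the accelerometer readings over the period T, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the robot mass, sampling rate, and acceleration values are hypothetical.

```python
# Estimate impulse (the integral of force over time) from accelerometer
# samples taken at a fixed interval. F = m * a per sample, then the
# trapezoidal rule accumulates F over the window. Mass, dt, and the
# sample values below are hypothetical placeholders.

def impulse_from_accel(accel_samples, mass_kg, dt_s):
    """Trapezoidal integration of force over the sampling window (N*s)."""
    forces = [mass_kg * a for a in accel_samples]  # force per sample, N
    impulse = 0.0
    for f0, f1 in zip(forces, forces[1:]):
        impulse += 0.5 * (f0 + f1) * dt_s  # area of one trapezoid
    return impulse

# Example: X-axis acceleration (m/s^2) sampled at 10 Hz for 0.5 s.
samples = [0.0, 0.4, 0.9, 1.1, 0.8, 0.3]
print(impulse_from_accel(samples, mass_kg=1.5, dt_s=0.1))
```

A surface with higher friction shows up as a larger opposing force, and hence a larger impulse, over the same time window.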

A. Analysis of Identifying Objects Using Non-Visual Sensors
Stationary objects of different shapes (rectangular, conical, and cylindrical) placed in the experimental arena can be recognized by an array of three IR sensors; the results are shown in Tables 4.1, 4.2, and 4.3. The IR sensor array is very sensitive, and light is backscattered from these objects. In the case of conical objects, however, the light largely passes by them, so the infrared sensors detect rectangular objects best.
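The shape-dependent backscatter behaviour described above can be sketched as a simple threshold test over the three-sensor IR array. This is an illustrative sketch only; the threshold value and the intensity readings are hypothetical, not measured values from the experiment.

```python
# Decide whether the 3-element IR array "sees" an obstacle. A rectangular
# face backscatters strongly toward the sensors, while a conical face
# deflects most of the beam away. The threshold and readings below are
# hypothetical normalized intensities.
BACKSCATTER_THRESHOLD = 0.5

def detected(readings):
    """Report an object when any of the 3 IR sensors exceeds the threshold."""
    return any(r > BACKSCATTER_THRESHOLD for r in readings)

print(detected([0.9, 0.8, 0.7]))  # rectangular face: strong backscatter -> True
print(detected([0.1, 0.2, 0.1]))  # conical face: beam deflected -> False
```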
These stationary objects can also be localized by the SONAR sensor. The SONAR ring is designed to provide the round-trip time of the reflected pulse, which is used to calculate the distance to the object. Beyond sensing, automatic hand sanitizers and hand-washing dispensers have been deployed in industries, healthcare centers, and hospitals.
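The round-trip-time measurement provided by the SONAR ring converts to distance as follows: the pulse travels to the object and back, so the one-way distance is half the speed of sound times the measured time. A minimal sketch, assuming dry air at roughly room temperature:

```python
# Convert a SONAR round-trip time to object distance. The pulse covers the
# path twice (out and back), so distance = speed_of_sound * t / 2.
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C

def sonar_distance_m(round_trip_s):
    """One-way distance to the reflecting object, in meters."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(round(sonar_distance_m(0.0058), 2))  # ~0.99
```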
Drones provide real-time supply of goods, including PPE. Artificial intelligence supports the discovery and design of drugs. Mobile robots are used for the disposal of deceased bodies, minimizing human contact and thereby exposure to viral pathogens. AI-driven big data analysis effectively reduces workload through real-time environment monitoring for large numbers of patients. Social robots assist traffic management to reduce large gatherings of people. In hospitals and healthcare centers, automated guided vehicles and mobile robots handle the navigation and transport of infected patients, and during testing, mobile robots are used for sample collection.

VIII. CONCLUSION
The COVID-19 pandemic exposed the lack of preparedness for a disease outbreak with such invasive effects. COVID-19 spreads rapidly through community transmission, and frontline workers face a high risk of infection due to direct contact with infected patients. To provide life support for frontline workers, robots integrated with artificial intelligence are deployed in automated environments where the disease is spreading. The analysis showed that medical robots, UV-disinfectant and cleaner robots, mobile robots, social robots, COBOTS, and drones are employed to resolve these global issues. For the localization of robots in automated processes, the Kalman-filter-based SFFLAM model was designed. In the future, mobile robots are expected to improve the navigation and transport of infected patients, advancing automated healthcare in hospitals for frontline workers.
a) Drones equipped with infrared cameras; b) checking people's temperature in their own homes.

Fig. 4. Artificial intelligence is used to enhance CT scan analysis to detect and report different stages of infection.
Fig. 10. a) The experimental robot; b) the robot being driven in the organization's robotics lab with a Bluetooth controller module on different surface types at various times starting from 1 second.

TABLE I: ANALYSIS OF SFFLAM ALGORITHMS BASED ON SEP

TABLE II: RESULTS OF ROBOT MOVING ON MARBLE SURFACE/FLOOR

TABLE IV: RESULTS OF ROBOT MOVING ON ROUGH SURFACE OR TILE

TABLE V: RESULTS OF ROBOT MOVING ON BOULDER SURFACE OR TILE

TABLE VI: RECTANGULAR SHAPE OBSTACLES RESULT SET