
Data Collection

Data is gathered as the team walks through each building with our modified GVI LiBackpack. When surveying began, the team used a camera and LiDAR separately, which proved problematic when it came time to synchronize the point clouds and images. To correct the issue, GVI LiBackpacks with GPS input for timing and synchronization were combined with an Insta360 camera, which provides colors that can easily be referenced to the LiDAR points. SLAM software stitches all colored points into one 3D model, and GPS locations of external points are used to geo-reference the 3D LiDAR point cloud data.
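The GPS-based synchronization above amounts to pairing each LiDAR sweep with the camera frame nearest in time. A minimal sketch of that pairing, assuming both devices stamp their data with GPS time (function name and skew tolerance are illustrative, not the project's actual code):

```python
from bisect import bisect_left

def pair_by_timestamp(lidar_times, camera_times, max_skew=0.05):
    """Pair each LiDAR sweep with the nearest camera frame in time.

    lidar_times, camera_times: sorted lists of GPS timestamps (seconds).
    max_skew: maximum allowed time difference for a valid pair (seconds).
    Returns a list of (lidar_index, camera_index) pairs.
    """
    pairs = []
    for i, t in enumerate(lidar_times):
        j = bisect_left(camera_times, t)
        # Candidate frames: the one just before and just after t.
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(camera_times)),
            key=lambda k: abs(camera_times[k] - t),
        )
        if abs(camera_times[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs
```

Sweeps with no frame inside the skew tolerance are dropped rather than mismatched, which is why a shared GPS clock matters: without it, the tolerance would have to absorb unknown clock drift.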

The GVI LiBackpack's horizontal LiDAR is less effective in small spaces. For instance, the LiBackpack cannot see down to the floor or up into the rafters. To correct this issue, the Map901 team developed a portable scanner, called Signac, which is used to scan small areas that the LiBackpack can't handle.

Temperature, humidity, and sound information are collected separately by two other sensors. The information from these sensors, combined with the image data, is used to annotate the data fed to the algorithm. Tracking cameras are used in small spaces that are otherwise difficult to scan.

3D Mapping Hardware


Survey Operation

While conducting each survey, the Map901 team followed a set of guidelines to achieve the best result from each LiDAR survey. These best practices for surveying indoor spaces are outlined below.

Best Practices:

  • Open all doors before the survey.
  • Avoid capturing moving objects during the survey.
  • Avoid exiting and entering interior spaces through the same threshold when the doorway is narrow.
  • Do not repeat a route already traveled.
  • Scan one or two floors at a time and stitch the data together.
  • Use tracking cameras for small rooms.

Best Practices for Indoor 3D Mapping

[Download the Best Practices Guide]

Video of Survey

This video shows the National Civil Rights Museum survey and was captured on the Insta360 camera.

Data Processing

Summary of Data Processing Workflow

The following steps make up the workflow used to produce a 3D model.


Raw image data from our collection was transferred to a computer cluster at the University, where machine learning algorithms were used to extract public-safety-related objects. A set of 51 label classes (e.g., exit sign, fire alarm, person) was created for the training and testing dataset. Of these 51 label classes, we identified 30 as high priority based on the requirements given by first responders.

Then, each collected image was annotated with public safety objects, e.g., fire extinguishers, building control panels, utility shutoffs, and exits. The annotation was initially done manually, by inspecting a subset of the images to identify public safety objects; the results of this manual pass provided the ground truth for developing an automated process. To fully automate the identification of these safety objects, we annotated images for training the neural network through manual annotation with LabelMe, crowdsourcing through a contest, and an iterative process. The steps of this iterative process were: use the trained neural network to label images; select the images with the most annotated objects and errors; manually correct the selected images and add them to the training dataset; and retrain the neural network.
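The selection step of the iterative loop above can be sketched as ranking images by how uncertain the current network is about them, so that manual correction effort goes where it adds the most training value. All names and thresholds here are illustrative assumptions, not the project's actual code:

```python
def select_for_review(predictions, conf_threshold=0.5, top_k=2):
    """Rank images for manual correction in the iterative annotation loop.

    predictions: dict mapping image name -> list of (label, confidence)
    pairs produced by the current neural network. Images with many
    low-confidence detections are the most valuable to correct and add
    back into the training set before retraining.
    Returns the top_k image names, most uncertain first.
    """
    def uncertainty(detections):
        # Count detections the network was unsure about.
        return sum(1 for _, conf in detections if conf < conf_threshold)

    ranked = sorted(predictions,
                    key=lambda name: uncertainty(predictions[name]),
                    reverse=True)
    return ranked[:top_k]
```

In practice the selection criterion could also weight known error types (e.g., classes the network confuses often), but counting low-confidence detections is the simplest proxy for "more annotated objects and errors."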

With the annotated images, we could fuse the resulting labels and color data to the point cloud. Because we know the physical relationship between our scanner's video camera and LiDAR, we can calculate which pixel of the camera image each point maps to. Then, we transfer that pixel's label from the neural network (if any) and color to the point. Through the SLAM process, we then associate each point to its position in the final 3D model.
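The point-to-pixel mapping described above can be sketched with a pinhole-camera projection. This is a simplified illustration under assumed names and calibration inputs: the Insta360 is a spherical camera, so the real pipeline would use an equirectangular projection rather than the pinhole model shown here.

```python
import numpy as np

def colorize_points(points, image, labels, K, R, t):
    """Transfer per-pixel color and label from a camera image to LiDAR points.

    points: (N, 3) array in the LiDAR frame.
    image:  (H, W, 3) RGB array; labels: (H, W) per-pixel class ids.
    K: 3x3 camera intrinsics; R, t: LiDAR-to-camera rotation and
    translation, known from the scanner's fixed camera/LiDAR geometry.
    Returns (colors, point_labels); points that fall outside the image
    (or behind the camera) keep label -1.
    """
    h, w = labels.shape
    cam = points @ R.T + t                  # move points into the camera frame
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    point_labels = np.full(len(points), -1)
    in_front = cam[:, 2] > 0                # only points in front of the camera
    pix = cam[in_front] @ K.T
    pix = (pix[:, :2] / pix[:, 2:3]).round().astype(int)  # perspective divide
    ok = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    idx = np.flatnonzero(in_front)[ok]
    u, v = pix[ok, 0], pix[ok, 1]
    colors[idx] = image[v, u]
    point_labels[idx] = labels[v, u]
    return colors, point_labels
```

Because the camera and LiDAR are rigidly mounted, `R` and `t` are measured once during calibration and reused for every frame.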

We plan to organize and secure all raw and processed data over NDN (Named Data Networking) by first converting the raw point cloud data to an octree format. The NDN octree will serve as a datastore accessed via a web service, allowing researchers and first responders to use their own tools to interact quickly with the point cloud data, independent of ArcGIS or other platforms. This web service will enable future development of a range of applications, especially for first responders.
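The octree conversion boils down to assigning each point a cell key by recursively splitting the bounding cube into eight octants; each keyed cell can then be stored and fetched as a named chunk. A minimal sketch of that encoding (Morton/Z-order style; illustrative, not the project's actual format):

```python
def octree_key(point, origin, size, depth):
    """Compute an octree cell key for a point.

    Each level splits the cube at `origin` with edge length `size`
    into 8 octants; interleaving the x/y/z child choices yields one
    integer that names the leaf cell at `depth` levels. Keys like
    this make a point cloud addressable as named chunks, which is
    what lets an NDN datastore serve cells as named data objects.
    """
    key = 0
    x, y, z = point
    cx, cy, cz = origin
    half = size / 2.0
    for _ in range(depth):
        octant = (int(x >= cx + half)
                  | (int(y >= cy + half) << 1)
                  | (int(z >= cz + half) << 2))
        key = (key << 3) | octant
        # Descend into the chosen octant by shifting the cell origin.
        cx += half if octant & 1 else 0
        cy += half if octant & 2 else 0
        cz += half if octant & 4 else 0
        half /= 2.0
    return key
```

A web service can then expose cells by key, so a client pulls only the octree cells covering its region of interest instead of the whole point cloud.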