Autonomous Aerial Platform

Semantic Mapping

The images from the high-resolution center camera are semantically segmented by a deep belief net. The segmented images are used to update a 2.5D semantic grid map, which maintains our belief about the semantic state of each grid cell. Each cell is 5 m across and stores the probability that it contains a car. To perform the grid update, we project the pixel-wise predictions onto a digital elevation map (DEM), using a prior belief on the heights of the semantic classes and an estimate of the robot's instantaneous pose. Because both the predictions and the poses are noisy, we accumulate predictions over time to obtain accurate semantic maps. The image below shows an example of the projection. Bottom right: the input image to the system. Top left: the red cube marks the registered position of the car on the DEM; black cells signify the absence of a car and red cells its presence.
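The per-cell accumulation described above can be sketched as a standard log-odds Bayesian update, where each frame's segmentation evidence is fused into the cell's car probability. This is a minimal illustration of the idea, not the system's actual implementation; the detector probabilities and the `update_cell` helper are hypothetical.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_cell(prior_car_prob, detector_car_prob):
    """Fuse one noisy per-cell 'car' observation into the cell's belief
    using a log-odds Bayesian update (a sketch; the exact fusion rule
    used on the platform is not specified in the text)."""
    l = logit(prior_car_prob) + logit(detector_car_prob)
    return 1.0 / (1.0 + np.exp(-l))

# Accumulating several weak, noisy observations sharpens the belief.
p = 0.5                       # uninformative prior for the cell
for obs in [0.6, 0.7, 0.65]:  # per-frame segmentation evidence
    p = update_cell(p, obs)
```

After three mildly positive observations the cell's car probability rises well above the prior, which is why accumulating predictions over time compensates for noisy individual frames.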

Exploration Planning Results

The vehicle was deployed to scout for cars and to collect high-resolution data whenever a car was found. The exploration reward is the probability distance between the robot's current belief and its expected updated belief, with a fixed reward of 20000 allocated to scanning a car. The algorithm is re-run adaptively as the vehicle's representation is updated, and it successfully guides the vehicle to explore the environment and to locate and map both cars.
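The reward described above can be sketched as an expected belief-change term plus the fixed car-scanning bonus. The specific distance metric, the detector accuracy, and the helper names below are assumptions for illustration; the text only states that the reward is a probability distance between the current and expected updated beliefs.

```python
import numpy as np

CAR_SCAN_BONUS = 20000.0  # fixed reward for scanning a car, per the text

def expected_belief_change(belief, detector_accuracy=0.9):
    """Expected L1 distance between the current per-cell car belief and
    the posterior after a simulated observation (a hedged sketch; the
    actual 'probability distance' metric is not specified here)."""
    belief = np.asarray(belief, dtype=float)
    acc = detector_accuracy
    # Posterior if the detector reports 'car' vs 'no car' for each cell.
    post_car = belief * acc / (belief * acc + (1 - belief) * (1 - acc))
    post_empty = belief * (1 - acc) / (belief * (1 - acc) + (1 - belief) * acc)
    # Probability of each detector outcome, then the expected change.
    p_report_car = belief * acc + (1 - belief) * (1 - acc)
    gain = (p_report_car * np.abs(post_car - belief)
            + (1 - p_report_car) * np.abs(post_empty - belief))
    return gain.sum()

def plan_reward(belief, scans_car):
    """Total reward for a candidate plan: expected belief change over the
    observed cells, plus the fixed bonus if the plan scans a car."""
    reward = expected_belief_change(belief)
    if scans_car:
        reward += CAR_SCAN_BONUS
    return reward
```

Note that cells whose state is already certain (belief near 0 or 1) contribute almost nothing, so the planner is naturally drawn toward uncertain regions, while the large fixed bonus dominates once a car is likely enough to justify a close-up scan.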
In the figure above: a) the testing site; the start and end points are marked by green nodes and the car locations are shown in orange. b) The vehicle starts with a budget of 700 m; the reward increases as the likelihood of finding a car increases, and the crest in the reward marks the moment at which the global planner found and decided to map the car. Figures 1, 2, 3, and 4 show the series of plans at various stages of the exploration mission. Dark squares indicate the absence of cars and red squares their presence; shades of grey and red signify certainty. Once a car is recognized, a 360° view of the car is associated with a high reward.