Detekt works with georeferenced images, point clouds, and GIS data, offering a wide variety of localization features.
The pipeline processes the mobile mapping data through a 3-stage instance segmentation process. The example below demonstrates this step for road sign detection and classification.
The processed detection results (points and polygons) are transformed into world coordinates using 3D information from lidar data or a depth map. The result is world-space projected points and polygons that determine the exact position of each detected object or surface.
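The projection step described above can be sketched as a standard camera back-projection. This is a minimal illustration only, assuming a pinhole camera model with known intrinsics and pose; the actual Detekt pipeline may use lidar ray casting or a different camera model, and all values below are illustrative.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with metric depth into world coordinates.

    K: 3x3 camera intrinsics, R: camera-to-world rotation,
    t: camera position in world coordinates.
    """
    # Ray through the pixel in camera space, scaled by the measured depth
    p_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    # Transform from camera space into world space
    return R @ p_cam + t

# Illustrative values: principal point at (960, 540), 12 m depth
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([100.0, 200.0, 2.5])
world = pixel_to_world(960, 540, 12.0, K, R, t)  # point 12 m ahead of the camera
```

A detection at the image center with 12 m depth lands 12 m in front of the camera position along its viewing axis.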
Objects are detected within multiple images and at different times. All individual detections are then fused into one unified detection, which increases robustness and accuracy because multiple "votes" support the same object.
Depending on the characteristics of each object, we apply point fusion, surface fusion or volume fusion to achieve the most accurate results.
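For the point-fusion case, one simple realization of the "votes" idea is a confidence-weighted average of all world-space sightings of the same object. This is a sketch under assumed inputs, not the actual fusion method used by Detekt.

```python
def fuse_point_detections(detections):
    """Fuse repeated point detections of the same object into one point.

    Each detection is a tuple (x, y, z, confidence); the fused position is
    the confidence-weighted average, so high-confidence votes dominate.
    """
    total = sum(d[3] for d in detections)
    return tuple(
        sum(d[i] * d[3] for d in detections) / total
        for i in range(3)
    )

# Three sightings of the same sign from different images and passes
votes = [(10.0, 5.0, 0.0, 0.9),
         (10.2, 5.0, 0.0, 0.9),
         (10.4, 5.0, 0.0, 0.2)]
fused = fuse_point_detections(votes)
```

The low-confidence outlier at x = 10.4 pulls the fused position only slightly, which is exactly the robustness gain the voting scheme provides.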
Detections and location data can be integrated individually into your internal processes and workflows.
The viewer is one of the core elements of Detekt and lets you navigate through your available image and point cloud data to visually understand the results of the AI model detections. Detection classes can be easily changed, and annotations can be manually added for model improvement via re-training. All functions of the viewer are explained in detail in the knowledge base section.
Comparing the detections and their exact location with existing asset databases is essential to improve data accuracy of any managed asset within a city.
The map is part of the viewer and offers a comprehensive understanding of all detections within your city or municipality. Objects like traffic signs or individual road damages can be displayed as icons, while road condition is shown as a heatmap. Read more about all map functions in the knowledge base section.
The Detekt API lets you easily connect any application to use the gathered information for your own needs.
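A typical integration would query detections by class and area. The base URL, endpoint path, parameter names, and authentication scheme below are assumptions for illustration only; consult the Detekt API documentation for the actual interface.

```python
from urllib.parse import urlencode

def build_detection_query(base_url, object_class, bbox, token):
    """Build a hypothetical detection query (URL plus auth headers).

    bbox is (min_lon, min_lat, max_lon, max_lat); all names are assumed.
    """
    params = urlencode({
        "class": object_class,
        "bbox": ",".join(str(v) for v in bbox),
    })
    headers = {"Authorization": f"Bearer {token}"}
    return f"{base_url}/detections?{params}", headers

# Illustrative usage: fetch all traffic sign detections in a city extent
url, headers = build_detection_query(
    "https://api.example.com/v1",
    "traffic_sign",
    (13.3, 52.4, 13.6, 52.6),
    "YOUR_API_TOKEN",
)
```

The returned URL and headers can then be passed to any HTTP client in your own application.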
Where mobile mapping data is available from at least two separate campaigns, any detected object can be assigned a unique identifier to compare its condition, location and proportions over a given timespan. Comparison results are provided as JSON files or via the API.
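The cross-campaign comparison could look like the sketch below: two records matched by unique ID are diffed on position and condition. The record field names and the condition score are illustrative assumptions, not the actual Detekt JSON schema.

```python
import math

def compare_object(rec_a, rec_b):
    """Compare two records of the same object from separate campaigns.

    Records are matched by unique ID; field names here are illustrative.
    Returns displacement in meters and the change in a condition score.
    """
    if rec_a["id"] != rec_b["id"]:
        raise ValueError("records must share a unique object ID")
    dx, dy, dz = (rec_b["pos"][i] - rec_a["pos"][i] for i in range(3))
    return {
        "id": rec_a["id"],
        "displacement_m": math.sqrt(dx * dx + dy * dy + dz * dz),
        "condition_delta": rec_b["condition"] - rec_a["condition"],
    }

# Same sign seen in two campaigns: position unchanged, condition degraded
a = {"id": "sign-042", "pos": (10.0, 5.0, 2.0), "condition": 0.9}
b = {"id": "sign-042", "pos": (10.0, 5.0, 2.0), "condition": 0.7}
diff = compare_object(a, b)
```

A negative condition delta flags degradation between campaigns, which is the kind of change such a comparison is meant to surface.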