[Stanford] Airborne Sonar Combines Light and Sound to Map the Deep Ocean
Dec 01, 2020
Today's camera, lidar, and radar systems do a poor job of mapping the ocean floor. Engineers at Stanford combine light and sound via the photoacoustic airborne sonar system to produce high-quality 3D images of the ocean floor.

Abstract: High-resolution imaging and mapping of the ocean and its floor have been limited to less than 5% of global waters due to technological barriers. Sonar is the primary contributor to existing underwater imagery, but as a water-based system its spatial coverage is limited by low imaging throughput. Aerial synthetic aperture radar systems, on the other hand, have provided high-resolution imaging of the entire Earth's landscapes but are incapable of deep penetration into water. In this work, we present a proof-of-concept system that bridges the gap between electromagnetic imaging in air and sonar imaging in water through the laser-induced photoacoustic effect and high-sensitivity airborne ultrasonic detection. Here, we use air-coupled capacitive micromachined ultrasonic transducers (CMUTs), a critical differentiator from previous work that has enabled the acquisition of an underwater image from a fully airborne acoustic imaging system - a task that had yet to be accomplished in the literature. With the entire imaging system located on an airborne platform, there is much promise for scaling our system to one that could perform high-throughput underwater imaging in large-scale deployments. Non-contact acoustic imaging modalities are also of great interest to the medical imaging and non-destructive testing communities; incorporating air-coupled transducers such as CMUTs, or other resonant sensors, in these applications could be aided by the analysis presented throughout this work.
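To see why a conventional airborne sonar cannot simply ping the water from above - and why generating the sound inside the water with a laser helps - consider the textbook acoustic impedance mismatch at the air-water interface. The sketch below is standard acoustics, not code from the paper: it estimates the fraction of acoustic intensity that crosses the boundary at normal incidence, using nominal densities and sound speeds for air and water.

```python
# Illustrative physics, not the paper's implementation: intensity
# transmission across the air-water interface, which is the loss that
# motivates inducing sound in the water via the photoacoustic effect.
import math

def acoustic_impedance(density_kg_m3, sound_speed_m_s):
    """Characteristic acoustic impedance Z = rho * c (in rayl)."""
    return density_kg_m3 * sound_speed_m_s

def intensity_transmission(z1, z2):
    """Fraction of intensity transmitted at a planar interface at
    normal incidence: T = 4*Z1*Z2 / (Z1 + Z2)**2."""
    return 4 * z1 * z2 / (z1 + z2) ** 2

# Nominal material properties (assumed round numbers).
z_air = acoustic_impedance(1.21, 343.0)       # ~4.2e2 rayl
z_water = acoustic_impedance(1000.0, 1480.0)  # ~1.5e6 rayl

t = intensity_transmission(z_air, z_water)
loss_db_one_way = 10 * math.log10(t)

print(f"one-way transmission: {t:.2e} ({loss_db_one_way:.1f} dB)")
print(f"round-trip loss: {2 * loss_db_one_way:.1f} dB")
```

Only about 0.1% of the energy crosses each way (roughly -30 dB per crossing, -60 dB round trip), which is why the system fires a laser that is absorbed by the water and generates ultrasound in situ, so the sound only has to cross the interface once on the way back to the airborne CMUT detectors.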