The MASSIMAL research project ran from 2020 to 2024. Its main goal was to develop methods for shallow-water coastal habitat mapping based on hyperspectral imaging from unmanned aerial vehicles (UAVs). The project was a collaboration between UiT the Arctic University of Norway, the Norwegian Institute for Water Research (NIVA), and Nord University, and was financed by the Research Council of Norway (8 million NOK) and UiT the Arctic University of Norway (600 000 NOK).
NOTE: As the project is now finished, this page is no longer updated. However, outputs based on the project may still be published. For updated information, see the project page in the Cristin research database, or search for the project members' names in research databases. Click the "Publications and presentations" tab for links to published data, GitHub repositories, and publications from the project.
Through aerial imaging, MASSIMAL has developed new methods for mapping marine habitats such as eelgrass meadows, seaweed communities, kelp forests, and rocky seabeds. Such habitats provide important ecosystem services like primary production, carbon capture, and nutrient uptake in the ocean. Human activities, climate change, and overgrazing by sea urchins pose significant threats to these habitats.
Shallow coastal areas have been imaged using a hyperspectral camera mounted on a drone flying 50 meters above the sea surface. The hyperspectral images have been combined with underwater images of the seabed to train machine learning algorithms that create maps of the distribution, density, and condition of various habitats.
Data has been collected from several areas along the Norwegian coast: near Bodø, Vega, Smøla, and Larvik. The datasets represent a wide variation in plant species, natural habitats, weather conditions, and the optical properties of the water, providing a basis for training robust and general machine learning algorithms. Of particular interest are the seagrass meadows near Bodø and Larvik, areas with commercial harvesting of large kelp at Smøla, and large maerl beds near Vega.
Many different methods for collecting "ground truth" data have been tested, including imaging by snorkeling or diving, imaging with remotely operated vehicles (ROVs), imaging from cameras mounted under boats, and using cameras and various other sensors on autonomous boats. Experiments with machine learning have demonstrated the importance of covering relatively large areas and of having good example data spanning the variations within an area. Underwater imaging from boats (regular or autonomous), combined with precise position measurements, has proven best suited for this. Some methods for collecting ground truth data and using drone images have also been included in recommendations for mapping marine habitats developed by NIVA.
Results from the project show that it is possible to find distinct spectral patterns for various vegetation and seabed types. A dataset from Bodø with seagrass and seaweed has been used to find good methods for "feature selection," i.e., selecting the wavelengths that work best for mapping different species. A master's thesis based on data from the Larvik area has yielded promising results in mapping seagrass with and without filamentous algae. Using "clustering" on a dataset from Bodø with a dense mix of different classes has shown that semi-supervised learning can be useful in cases where exact annotation of images is not practically possible. A U-Net machine learning model trained on a dataset from Vega has shown that it is possible to map kelp, seaweed, and rocky seabeds with good accuracy.
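As an illustration of the wavelength selection described above, the following sketch ranks hyperspectral bands by how well they separate habitat classes, using random forest feature importances. It is a minimal example with placeholder spectra and illustrative class labels, not the project's published code.

```python
# Minimal sketch: ranking hyperspectral wavelengths by how useful they are
# for separating habitat classes. Placeholder data; array shapes, band
# count, and class labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Assume spectra have been extracted from annotated image regions:
# X holds (n_pixels, n_bands) reflectance values, y the habitat class per pixel.
n_pixels, n_bands = 5000, 100
wavelengths = np.linspace(400, 800, n_bands)      # nm
X = rng.random((n_pixels, n_bands))               # placeholder reflectance spectra
y = rng.integers(0, 3, n_pixels)                  # e.g. seagrass / seaweed / sand

# Fit a random forest and use impurity-based importances as a simple
# criterion for selecting informative wavelengths.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X, y)

# Print the ten highest-ranked wavelengths.
for i in np.argsort(clf.feature_importances_)[::-1][:10]:
    print(f"{wavelengths[i]:.0f} nm  importance = {clf.feature_importances_[i]:.4f}")
```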
The project has also implemented existing methods for "glint correction", which remove reflections from the water surface in the hyperspectral images. These methods are usually applied to satellite images, but the results show that they also work well on close-range images with high resolution. This enables drone imaging even when weather conditions are not ideal, such as cloudy skies, wind, and waves.
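A common family of glint correction methods, originally developed for satellite and airborne imagery, regresses each visible band against a near-infrared band over optically deep water and subtracts the scaled residual NIR signal (water absorbs strongly in the NIR, so remaining NIR signal is attributed to surface reflection). The sketch below follows this idea; the cube layout, band index, and deep-water mask are assumptions, and the code is not the project's actual implementation.

```python
# Illustrative sketch of NIR-based sun glint correction for a hyperspectral
# cube. Inputs and their layout are assumptions, not the project's software.
import numpy as np

def correct_glint(cube: np.ndarray, nir_band: int, deep_mask: np.ndarray) -> np.ndarray:
    """Subtract estimated surface glint from every band of the cube.

    cube:      (rows, cols, bands) reflectance image
    nir_band:  index of a near-infrared band
    deep_mask: boolean (rows, cols) mask of optically deep water pixels,
               used to estimate the band-vs-NIR glint relationship
    """
    nir = cube[..., nir_band]
    min_nir = nir[deep_mask].min()        # ambient NIR level assumed glint-free
    corrected = cube.astype(float)
    for b in range(cube.shape[-1]):
        band = cube[..., b]
        # Linear fit of band reflectance against NIR over deep-water pixels
        slope = np.polyfit(nir[deep_mask], band[deep_mask], 1)[0]
        corrected[..., b] = band - slope * (nir - min_nir)
    return corrected
```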
The project has imaged around 2.5 square kilometers, with a ground resolution of approximately 3.5 cm. In addition to hyperspectral imaging, most areas have also been imaged with a regular RGB camera, and a few areas have been imaged with multispectral cameras. All these datasets have been published via an open data portal, where one can both explore the datasets interactively in a browser and download the data. The datasets provide a basis for developing general machine learning models for mapping habitats in shallow marine areas along the Norwegian coast, and in areas with similar habitats. The hyperspectral images have very high resolution, both spatially and spectrally, and can be used to simulate other sensors with lower resolution. This can contribute to the development of lighter and smaller equipment that is easy to use in the field, and to method development for using sensors that can cover larger areas (from satellites or aircraft).
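To illustrate how the high-resolution images could be used to simulate sensors with lower resolution, the sketch below block-averages a hyperspectral cube spatially and spectrally. The cube shape and binning factors are illustrative assumptions only.

```python
# Minimal sketch: simulating a coarser sensor by averaging neighbouring
# pixels and bands of a high-resolution hyperspectral cube.
import numpy as np

def simulate_lower_resolution(cube: np.ndarray, spatial_bin: int, spectral_bin: int) -> np.ndarray:
    """Block-average a (rows, cols, bands) cube spatially and spectrally."""
    r, c, b = cube.shape
    # Crop so the dimensions divide evenly by the binning factors
    r, c, b = r - r % spatial_bin, c - c % spatial_bin, b - b % spectral_bin
    blocks = cube[:r, :c, :b].reshape(
        r // spatial_bin, spatial_bin,
        c // spatial_bin, spatial_bin,
        b // spectral_bin, spectral_bin,
    )
    return blocks.mean(axis=(1, 3, 5))

# Example (hypothetical numbers): bin 3.5 cm pixels by 4 and bands by 10
# coarse = simulate_lower_resolution(cube, spatial_bin=4, spectral_bin=10)
```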
A substantial amount of open-source software has been developed through the project, including software for calibration and correction of coastal hyperspectral images, generation of georeferenced ground truth images from video and position logs, and machine learning for segmentation of hyperspectral images. Much of this software is general and can be used directly by researchers working on similar problems, both in Norway and internationally.
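As a simplified illustration of producing georeferenced ground truth from video and position logs, the sketch below interpolates a time-stamped GPS track to the timestamps of extracted video frames. The column names and data layout are assumptions and do not reflect the actual software's interface.

```python
# Hypothetical sketch: assigning positions to underwater video frames by
# interpolating a GPS log to frame timestamps. Field names are assumptions.
import numpy as np
import pandas as pd

def georeference_frames(frame_times: np.ndarray, gps_log: pd.DataFrame) -> pd.DataFrame:
    """Interpolate latitude/longitude for each video frame timestamp.

    frame_times: UNIX timestamps (s) of extracted frames
    gps_log:     DataFrame with columns 'time', 'lat', 'lon', sorted by time
    """
    lat = np.interp(frame_times, gps_log["time"], gps_log["lat"])
    lon = np.interp(frame_times, gps_log["time"], gps_log["lon"])
    return pd.DataFrame({"time": frame_times, "lat": lat, "lon": lon})
```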
The project has contributed to increased interdisciplinary collaboration between the fields of physics, computer science, and marine biology. It has also laid the foundation for international collaboration with research communities in Valencia and Madrid, and this collaboration has inspired several ideas for future research projects.