pith. machine review for the scientific record.

arxiv: 2605.08196 · v1 · submitted 2026-05-05 · 💻 cs.CV

Recognition: no theorem link

Survey on Disaster Management Datasets for Remote Sensing Based Emergency Applications

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:21 UTC · model grok-4.3

classification 💻 cs.CV
keywords disaster management · remote sensing · datasets · machine learning · deep learning · computer vision · emergency response · satellite imagery

The pith

A survey assembles a reference list of image datasets from satellites and drones to train AI systems for managing disasters at every stage.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper gathers publicly available datasets that cover the full disaster-management cycle, from preparing for events to recovering after them, using remote sensing images. With high-resolution imagery from UAVs and satellites now widely available, the main barrier to building effective machine learning tools is locating suitable annotated data for each phase. By organizing existing resources in one place, the work lets researchers and emergency practitioners move more quickly from data search to model training and deployment. A reader would care because disasters demand rapid, data-driven decisions, and starting a dataset search from scratch wastes critical time.

Core claim

This survey provides a comprehensive overview of publicly available image-based datasets relevant to ML/DL-based disaster management pipelines. Emphasis is placed on datasets that support computer vision and remote sensing tasks across all phases of disaster events including pre-disaster, during, and post-disaster. The goal is to serve as a centralized reference for researchers and practitioners seeking high-quality datasets for rapid development and deployment of remote sensing-driven disaster response solutions.

What carries the argument

The categorized compilation of datasets organized by disaster phase and computer vision task, acting as the centralized reference for data selection in mitigation, preparedness, detection, response, and recovery.
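The survey's organizing axes (disaster phase, computer vision task, sensing platform) amount to a small queryable catalog. A minimal sketch of that structure, assuming hypothetical field values — the dataset names appear in the disaster remote sensing literature, but the phase/task assignments below are illustrative, not rows copied from the survey's tables:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetEntry:
    name: str
    phase: str     # mitigation | preparedness | detection | response | recovery
    task: str      # e.g. classification | segmentation | change detection
    platform: str  # satellite | UAV

# Illustrative entries only; assignments are assumptions for this sketch.
CATALOG = [
    DatasetEntry("xBD", "response", "change detection", "satellite"),
    DatasetEntry("Sen1Floods11", "detection", "segmentation", "satellite"),
    DatasetEntry("LandSlide4Sense", "mitigation", "segmentation", "satellite"),
]

def find(phase=None, task=None, platform=None):
    """Return every catalog entry matching all of the given criteria."""
    return [d for d in CATALOG
            if (phase is None or d.phase == phase)
            and (task is None or d.task == task)
            and (platform is None or d.platform == platform)]

print([d.name for d in find(task="segmentation")])
# → ['Sen1Floods11', 'LandSlide4Sense']
```

A reader selecting data for a specific phase would query one axis and leave the others open, which is exactly the lookup the compiled tables are meant to support.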

If this is right

  • Researchers can identify suitable datasets for specific disaster management phases without extensive separate searches.
  • Development of models for rapid detection and situational assessment can proceed more quickly using existing annotated imagery.
  • Practitioners gain a single source to support deployment of remote sensing solutions in mitigation through recovery.
  • Effort duplication across groups working on similar computer vision tasks for emergencies is reduced.
  • Better matching of datasets to UAV or satellite sources improves coverage for pre-event and post-event analysis.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Gaps in dataset coverage for certain disaster types or phases could guide targeted collection of new public imagery and annotations.
  • This list might encourage community updates over time as new datasets from recent events become available.
  • Linking the surveyed datasets to specific model performance benchmarks could highlight which phases most need additional data volume.
  • Operational use in real emergencies might expose practical issues like data licensing or format compatibility not covered in the survey.

Load-bearing premise

The compilation is complete and up-to-date, and the listed datasets have sufficient quality and annotation levels to support practical ML/DL pipelines across all disaster phases.

What would settle it

Finding multiple high-quality, relevant remote sensing datasets that were omitted from the survey, or discovering that most of the listed datasets lack annotations detailed enough to train models that generalize to real disaster imagery.

Figures

Figures reproduced from arXiv: 2605.08196 by Alain P. Ndigande, Josiah Wiggins, Sedat Ozer.

Figure 1: This figure illustrates the five phases of disaster …
Figure 2: A disaster can either be a result of human-related action …
Figure 3: NOAA emergency response aerial imagery of fire …
Figure 4: A sample Maxar satellite imagery (left) and its seg…
Figure 5: Showcasing various dataset samples. Sample images are visualized aiming at maximizing the variability in disaster …
Figure 6: Dataset distributions are visualized over the listed 110 datasets in Table II based on: (a) disaster management phases, …
Figure 7: Distribution of resolution reporting styles in Table II. In …
Figure 8: Total number of samples (log-scaled) vs. dataset index …
Original abstract

Recent natural disasters have highlighted the urgent need for efficient data-driven approaches to disaster management. Machine learning (ML) and deep learning (DL) techniques have shown considerable promise in enhancing the key phases of disaster management including mitigation, preparedness, detection, response, and recovery. A critical enabler of successful ML or DL based applications in remote sensing, however, is the accessibility and quality of annotated datasets. With the growing availability of high-resolution imagery from unmanned aerial vehicles (UAVs) and satellites, computer vision and remote sensing algorithms have become essential tools for rapid detection, situational assessment, and decision-making in disaster scenarios. This survey provides a comprehensive overview of publicly available image-based datasets relevant to ML/DL-based disaster management pipelines. Emphasis is placed on datasets that support computer vision and remote sensing tasks across all phases of disaster events including pre-disaster, during, and post-disaster. The goal of this work is to serve as a centralized reference for researchers and practitioners seeking high-quality datasets for rapid development and deployment of remote sensing-driven disaster response solutions.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript is a survey paper that claims to provide a comprehensive overview of publicly available image-based datasets for machine learning and deep learning applications in remote sensing-based disaster management. It emphasizes datasets supporting computer vision tasks across the phases of mitigation, preparedness, detection, response, and recovery, with the goal of serving as a centralized reference for researchers developing remote sensing-driven solutions.

Significance. A well-curated and methodologically transparent survey of this type could serve as a useful reference resource for the remote sensing and computer vision communities working on disaster applications, particularly if it systematically organizes datasets by disaster phase and task type while confirming public accessibility and annotation quality. The paper does not ship machine-checked proofs, reproducible code, or parameter-free derivations, but its potential value lies in the compiled list itself if completeness can be substantiated.

major comments (1)
  1. [Abstract and Introduction] The central claim that the survey provides a 'comprehensive overview' of publicly available datasets is not supported by any description of the curation process. No search protocol, queried databases, keywords, inclusion/exclusion criteria, cutoff date, or coverage statistics (e.g., number of datasets per disaster type or phase) are provided. This is load-bearing because the entire contribution rests on the representativeness of the listed datasets; without this information, selection bias cannot be assessed and the claim cannot be evaluated.
minor comments (2)
  1. [Abstract] The abstract would be strengthened by including quantitative scope information, such as the total number of datasets reviewed or the breakdown by disaster phase, to allow readers to gauge coverage immediately.
  2. Ensure that every listed dataset includes explicit statements on current public accessibility, license, and annotation quality (e.g., pixel-level vs. image-level labels) to support the claim that they are suitable for practical ML/DL pipelines.
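The second minor comment is essentially a completeness check over per-dataset metadata. A minimal sketch of such a check, under an assumed record schema — the field names below (e.g. `source_url`, `annotation_level`) are hypothetical, chosen only to mirror the attributes the referee lists:

```python
# Fields the referee asks every listed dataset to report (hypothetical schema).
REQUIRED_FIELDS = {"name", "source_url", "publicly_accessible",
                   "license", "annotation_level"}

def missing_fields(entry):
    """Return the referee-requested fields an entry fails to report."""
    return REQUIRED_FIELDS - entry.keys()

entry = {"name": "SomeFloodSet",            # hypothetical dataset record
         "source_url": "https://example.org/data",
         "license": "CC BY 4.0"}
print(sorted(missing_fields(entry)))
# → ['annotation_level', 'publicly_accessible']
```

Run over the full table, such a check would surface every entry whose accessibility, license, or annotation granularity is unstated.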

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the detailed and constructive review. We agree that the claim of a 'comprehensive overview' requires explicit methodological transparency to allow assessment of representativeness and potential bias. We address the single major comment below.

Point-by-point responses
  1. Referee: [Abstract and Introduction] The central claim that the survey provides a 'comprehensive overview' of publicly available datasets is not supported by any description of the curation process. No search protocol, queried databases, keywords, inclusion/exclusion criteria, cutoff date, or coverage statistics (e.g., number of datasets per disaster type or phase) are provided. This is load-bearing because the entire contribution rests on the representativeness of the listed datasets; without this information, selection bias cannot be assessed and the claim cannot be evaluated.

    Authors: We agree that the absence of a documented curation process weakens the central claim. In the revised manuscript we will insert a new subsection (placed after the Introduction) titled 'Survey Methodology' that explicitly describes: (1) the search protocol, including databases and repositories queried (Google Scholar, IEEE Xplore, arXiv, Kaggle, Hugging Face Datasets, GitHub, and major remote-sensing data portals); (2) the keyword combinations used (e.g., 'disaster dataset remote sensing', 'flood UAV imagery', 'earthquake satellite dataset', 'post-disaster damage assessment dataset'); (3) inclusion criteria (publicly accessible, image-based, annotated for computer-vision tasks, relevance to at least one disaster-management phase) and exclusion criteria (proprietary data, non-image modalities, purely synthetic datasets without real-world validation); (4) the cutoff date for dataset inclusion; and (5) quantitative coverage statistics (total datasets retained, breakdown by disaster type and by phase). These additions will enable readers to evaluate selection bias and will be cross-referenced in the Abstract and Introduction. Revision: yes.
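The inclusion/exclusion criteria the rebuttal proposes are mechanical enough to state as a filter. A sketch under assumed field names — the candidate record format is hypothetical, and the cutoff date is left as a parameter because the rebuttal does not state one:

```python
from datetime import date

def include(candidate, cutoff):
    """Apply the rebuttal's proposed inclusion/exclusion criteria.

    Field names are hypothetical; only the criteria themselves are
    paraphrased from the proposed 'Survey Methodology' subsection.
    """
    if not candidate.get("publicly_accessible"):
        return False  # exclusion: proprietary data
    if candidate.get("modality") != "image":
        return False  # exclusion: non-image modalities
    if candidate.get("synthetic") and not candidate.get("real_world_validation"):
        return False  # exclusion: purely synthetic, no real-world validation
    if not candidate.get("cv_annotations"):
        return False  # inclusion: must be annotated for computer vision tasks
    if not candidate.get("phases"):
        return False  # inclusion: relevant to at least one management phase
    return candidate["released"] <= cutoff  # inclusion: released by cutoff

candidate = {"publicly_accessible": True, "modality": "image",
             "synthetic": False, "cv_annotations": True,
             "phases": ["response"], "released": date(2024, 6, 1)}
print(include(candidate, cutoff=date(2025, 1, 1)))
# → True
```

Publishing the filter alongside the table would let readers re-run the selection and audit it for bias, which is the point of the referee's objection.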

Circularity Check

0 steps flagged

No circularity: purely descriptive survey with no derivations or fitted claims

full rationale

This paper is a survey that compiles and describes publicly available image-based datasets for ML/DL-based disaster management. It contains no equations, predictions, first-principles derivations, fitted parameters, or quantitative claims that could reduce to inputs by construction. The central contribution is the curated list itself, presented without any self-referential logic, self-citation load-bearing premises, or renaming of results. No steps match the enumerated circularity patterns; the paper is self-contained as a descriptive reference.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No mathematical derivations, free parameters, or new entities are introduced; the work rests entirely on curation of prior public datasets and literature.

pith-pipeline@v0.9.0 · 5484 in / 1006 out tokens · 46667 ms · 2026-05-12T01:21:46.740291+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

222 extracted references · 222 canonical work pages · 1 internal anchor
