Download
s11517-020-02251-4.pdf (1.57 MB)
Weight | Name | Value
1000 Title
  • Fast body part segmentation and tracking of neonatal video data using deep learning
1000 Author(s)
  1. Hoog Antink, Christoph
  2. Ferreira, Joana Carlos Mesquita
  3. Paul, Michael
  4. Lyra, Simon
  5. Heimann, Konrad
  6. Karthik, Srinivasa
  7. Joseph, Jayaraj
  8. Jayaraman, Kumutha
  9. Orlikowsky, Thorsten
  10. Sivaprakasam, Mohanasankar
  11. Leonhardt, Steffen
1000 Year of publication 2020
1000 Publication type
  1. Article
1000 Published online
  • 2020-10-23
1000 Published in
1000 Source citation
  • 58(12):3049-3061
1000 Copyright year
  • 2020
1000 License
1000 Publisher's version
  • https://doi.org/10.1007/s11517-020-02251-4
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7679364/
1000 Publication status
1000 Language of publication
1000 Abstract/Summary
  • Photoplethysmography imaging (PPGI) for non-contact monitoring of preterm infants in the neonatal intensive care unit (NICU) is a promising technology, as it could reduce medical adhesive-related skin injuries and associated complications. For practical implementations of PPGI, a region of interest has to be detected automatically in real time. As neonates' body proportions differ significantly from those of adults, existing approaches may not be used in a straightforward way, and color-based skin detection requires RGB data, thus prohibiting the use of less intrusive near-infrared (NIR) acquisition. In this paper, we present a deep learning-based method for the segmentation of neonatal video data. We augmented an existing encoder-decoder semantic segmentation method with a modified version of the ResNet-50 encoder. This reduced the computational time by a factor of 7.5, so that 30 frames per second can be processed at 960 × 576 pixels. The method was developed and optimized on publicly available databases with segmentation data from adults. For evaluation, a comprehensive dataset consisting of RGB and NIR video recordings from 29 neonates with various skin tones, recorded in two NICUs in Germany and India, was used. From all recordings, 643 frames were manually segmented. After pre-training the model on the public adult data, parts of the neonatal data were used for additional learning, and left-out neonates were used for cross-validated evaluation. On the RGB data, the head is segmented well (82% intersection over union, 88% accuracy), and performance is comparable with that achieved on large, public, non-neonatal datasets. Performance on the NIR data, on the other hand, was inferior. By employing data augmentation to generate additional virtual NIR data for training, results could be improved, and the head could be segmented with 62% intersection over union and 65% accuracy. The method is in theory capable of performing segmentation in real time, and thus it may provide a useful tool for future PPGI applications.
  • Graphical Abstract: This work presents the development of a customized, real-time-capable deep learning architecture for segmenting neonatal videos recorded in the intensive care unit. In addition to hand-annotated data, transfer learning is exploited to improve performance.
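  • Illustrative sketch (not part of the record): the abstract describes an encoder-decoder semantic segmentation network whose encoder is a modified ResNet-50, evaluated with intersection over union and pixel accuracy. The Python/PyTorch snippet below is a minimal sketch of that general idea; the class count, the plain transposed-convolution decoder, and the absence of the authors' speed-oriented encoder modifications are assumptions for illustration, not the published implementation.

```python
# Minimal sketch: encoder-decoder segmentation network with a ResNet-50
# encoder, plus IoU / pixel-accuracy metrics as named in the abstract.
# NOTE: class count, decoder layout, and random initialization are
# illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn
import torchvision


class ResNet50EncoderDecoder(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = torchvision.models.resnet50()  # randomly initialized here
        # Encoder: all ResNet-50 stages except average pooling and the
        # classification head (output stride 32, 2048 feature channels).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])

        def up(cin: int, cout: int) -> nn.Sequential:
            # One upsampling step: a transposed conv doubles the resolution.
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )

        # Decoder: five x2 upsampling blocks undo the encoder's 32x
        # downsampling, then a 1x1 conv maps to per-class logits.
        self.decoder = nn.Sequential(
            up(2048, 512), up(512, 256), up(256, 128), up(128, 64), up(64, 32),
            nn.Conv2d(32, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def iou_and_pixel_accuracy(pred: torch.Tensor, target: torch.Tensor, cls: int):
    """Intersection over union for one class index and overall pixel accuracy."""
    pred_c, target_c = pred == cls, target == cls
    intersection = (pred_c & target_c).sum().item()
    union = (pred_c | target_c).sum().item()
    iou = intersection / union if union else float("nan")
    accuracy = (pred == target).float().mean().item()
    return iou, accuracy


if __name__ == "__main__":
    model = ResNet50EncoderDecoder(num_classes=2).eval()
    frame = torch.randn(1, 3, 576, 960)  # one RGB frame at 960 x 576 pixels
    with torch.no_grad():
        logits = model(frame)            # shape: (1, num_classes, 576, 960)
    prediction = logits.argmax(dim=1)
    print(iou_and_pixel_accuracy(prediction, torch.zeros_like(prediction), cls=1))
```

  At the 960 × 576 resolution stated in the abstract, a generic decoder like this one is not guaranteed to reach the reported 30 frames per second; that figure relies on the authors' modified encoder.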
1000 Subject headings
local Infant, Newborn [MeSH]
local Infant, Premature [MeSH]
local Image processing
local Deep Learning [MeSH]
local Humans [MeSH]
local NICU
local Original Article
local Camera-based monitoring
local Infant [MeSH]
local Image Processing, Computer-Assisted [MeSH]
local Human Body [MeSH]
local Video Recording [MeSH]
local Deep learning
local Semantic segmentation
local Photoplethysmography [MeSH]
1000 List of contributors
  1. https://orcid.org/0000-0001-7948-8181|https://frl.publisso.de/adhoc/uri/RmVycmVpcmEsIEpvYW5hIENhcmxvcyBNZXNxdWl0YQ==|https://frl.publisso.de/adhoc/uri/UGF1bCwgTWljaGFlbA==|https://frl.publisso.de/adhoc/uri/THlyYSwgU2ltb24=|https://frl.publisso.de/adhoc/uri/SGVpbWFubiwgS29ucmFk|https://frl.publisso.de/adhoc/uri/S2FydGhpaywgU3Jpbml2YXNh|https://frl.publisso.de/adhoc/uri/Sm9zZXBoLCBKYXlhcmFq|https://frl.publisso.de/adhoc/uri/SmF5YXJhbWFuLCBLdW11dGhh|https://frl.publisso.de/adhoc/uri/T3JsaWtvd3NreSwgVGhvcnN0ZW4=|https://frl.publisso.de/adhoc/uri/U2l2YXByYWthc2FtLCBNb2hhbmFzYW5rYXI=|https://frl.publisso.de/adhoc/uri/TGVvbmhhcmR0LCBTdGVmZmVu
1000 Note
  • DeepGreen-ID: 5bd445f4e8264987b3e247736ff2418d ; metadata provided by: DeepGreen (https://www.oa-deepgreen.de/api/v1/), LIVIVO search scope life sciences (http://z3950.zbmed.de:6210/livivo), Crossref Unified Resource API (https://api.crossref.org/swagger-ui/index.html), to.science.api (https://frl.publisso.de/), ZDB JSON-API (beta) (https://zeitschriftendatenbank.de/api/), lobid - Dateninfrastruktur für Bibliotheken (https://lobid.org/resources/search)
1000 Label
1000 Files
1000 Object type article
1000 Described by
1000 @id frl:6468800.rdf
1000 Created on 2023-11-17T18:36:19.029+0100
1000 Created by 322
1000 Describes frl:6468800
1000 Last modified 2023-12-01T08:29:15.846+0100
1000 Object modified Fri Dec 01 08:29:15 CET 2023
1000 Cf. frl:6468800
1000 OAI ID
  1. oai:frl.publisso.de:frl:6468800
1000 Metadata visibility public
1000 Data visibility public
1000 Subject of
