DILIE: Deep Internal Learning for Image Enhancement

Source
Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW 2022)
Date Issued
2022-01-01
Author(s)
Mastan, Indra Deep
Raman, Shanmuganathan  
Singh, Prajwal
DOI
10.1109/WACVW54805.2022.00008
Abstract
We consider the generic deep image enhancement problem, where an input image is transformed into a perceptually better-looking image. Existing methods mostly fall into two categories: those trained with prior examples and those trained without prior examples. Recently, deep internal learning solutions to image enhancement in the no-prior-examples setting have been gaining attention. We perform image enhancement using a deep internal learning framework. Our Deep Internal Learning for Image Enhancement framework (DILIE) enhances content and style features while preserving the semantics of the enhanced image. To validate the results, we use structural similarity and perceptual error, which are effective at measuring unrealistic deformations in the images. We show that the DILIE framework produces good-quality images for hazy and noisy image enhancement tasks.
Publication link
http://export.arxiv.org/pdf/2012.06469
URI
https://d8.irins.org/handle/IITG2025/26282
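
Illustrative sketch
For readers unfamiliar with the no-prior-examples setting mentioned in the abstract, the sketch below illustrates the general deep internal learning idea in PyTorch: a small network is fit only to the single degraded input image, so the network itself acts as the prior. This is a minimal, assumed illustration of that setting, not the authors' DILIE implementation; the architecture, loss, and hyperparameters are placeholders.

    import torch
    import torch.nn as nn

    def internal_learning_enhance(degraded, steps=2000, lr=1e-3):
        # degraded: (1, 3, H, W) tensor in [0, 1]; the only data used for training.
        net = nn.Sequential(                                 # tiny stand-in network
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )
        z = torch.rand_like(degraded)                        # fixed random input code
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(z), degraded)  # single-image reconstruction loss
            loss.backward()
            opt.step()
        return net(z).detach()                               # early stopping is typically used in practice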