Image blur is one of the most fundamental and challenging problems in photography.
It causes significant image degradation, especially in low-light conditions where a longer exposure time is required. Although the blur effect can be reduced by setting a faster shutter speed, doing so inevitably introduces higher noise levels. Moreover, most users cannot take high-quality pictures due to hand shake, dim lighting, and inappropriate shutter speed settings. Therefore, it is important to develop algorithms that remove image blur computationally.
To remove image blur, we need to estimate the blur kernel, which describes how the image is blurred, and recover the sharp image. Image deblurring is an ill-posed problem, as many pairs of a sharp image and a blur kernel can produce the same blurry image. Distinguishing the correct pair from the others requires additional information. In this thesis, we introduce a generic image prior and a specific motion blur prior to address this issue. Moreover, we explore the influence of image structures on blur kernel estimation and provide insights into the design of deblurring systems.
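The ill-posedness described above can be illustrated with a minimal sketch (in NumPy, with a hypothetical random test image): a blurry observation is modeled as the convolution of a sharp image with a blur kernel, yet the pair consisting of the blurry image itself and a trivial identity (delta) kernel explains the observation equally well, so additional priors are needed to pick the correct pair.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution: each output pixel is a weighted
    sum of the kernel-sized neighborhood in the input image."""
    kh, kw = kernel.shape
    ih, iw = img.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

# A synthetic "sharp" image (stand-in for a real photograph).
rng = np.random.default_rng(0)
sharp = rng.random((16, 16))

# Blur formation model: blurry = sharp (*) kernel.
box = np.full((3, 3), 1.0 / 9.0)   # simple 3x3 box blur kernel
blurry = convolve2d(sharp, box)

# Ill-posedness: the pair (blurry image, identity kernel) reproduces
# the same observation as the pair (sharp image, box kernel).
identity = np.array([[1.0]])
alt = convolve2d(blurry, identity)
assert np.allclose(alt, blurry)
```

This is only a toy uniform-blur model; it omits noise and boundary handling, but it shows why the observation alone cannot discriminate between candidate (image, kernel) pairs.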
Blur kernel estimation turns out to be especially difficult in non-uniform cases, which are common in practice. Camera rotation during exposure leads to non-uniform blur. Scene depth is another source of non-uniform blur, but it is often neglected in recent methods. In this thesis, we introduce a unified framework for joint restoration of scene depth and the latent image from a single input. To solve the complex non-uniform blur model, we propose an efficient deblurring algorithm that uses backprojection and a constrained camera pose subspace to facilitate fast convergence and low computational cost.
Author
Advisor