
Improving Resolution by Image Registration – CVGIP 1991 – Part 1

  • Image resolution → depends on the physical characteristics of the sensor
    • Optics
    • Density of detector elements
    • Spatial Response of detector elements
  • Resolution ↑
    • Sensor modification → ✗ (not always available)
    • Sampling Rate ↑ → More samples from a sequence of displaced pictures
    • Estimate of the sensor's spatial response
  • Solution of the paper → Iterative algorithm to increase image resolution + Image Registration with sub-pixel accuracy
    • Low-resolution gray-level and color images
    • De-blurring a single blurred image
  • Literature Review
    • Tsai & Huang [10] → Frequency-domain method → Only translations considered
    • Gross [6] → Assumptions: the imaging process and the relative shifts of the input pictures are known → Merge the low-res pictures over a finer grid using interpolation → Single blurred picture of higher spatial sampling → Convolve with a restoration filter obtained from the pseudo-inverse of the blur operator's matrix → De-blurred picture → Only translations considered
    • Peleg et al. [16,12] → Estimate an initial guess of the higher-res image → Simulate the imaging process (assumed known) → Set of simulated low-res images → Error function between the actual and simulated low-res images (sketched below this list) → Minimize the error iteratively → Stop on stall / max iterations
      • +: Works well on noise-free images
      • −: Highly sensitive to noise; slow to converge
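
A sketch of the error measure driven down by this simulate-and-correct scheme, using the notation of the bullets above (the exact normalization in [16,12] may differ): if f^{(t)} is the current guess of the high-resolution image and g_k^{(t)} is the k-th low-resolution frame simulated from it, then

    e^{(t)} = \sqrt{ \frac{1}{K} \sum_{k=1}^{K} \sum_{m,n} \left( g_k(m,n) - g_k^{(t)}(m,n) \right)^2 }

and the iteration stops once e^{(t)} stalls or a maximum iteration count is reached.
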
  • Analogy with Computer-Aided Tomography (CAT)
    • CAT → Images are reconstructed from their projections in many directions → Back-projection method
    • SR → Each low-res pixel is a “projection” of a region in the scene → The size of that region is determined by the imaging blur → Reconstruction is therefore similar to the back-projection method (a sketch follows this list)
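
A minimal back-projection-style sketch in Python/NumPy, under simplifying assumptions that are mine rather than the paper's: purely translational shifts, an integer upsampling factor, a known Gaussian blur standing in for h, and the blur itself reused as the back-projection kernel.

    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift


    def simulate_lowres(f, dx, dy, factor, sigma):
        """Simulate one observed frame: displace, blur, then subsample the scene estimate f."""
        warped = nd_shift(f, (dy, dx), order=1, mode="nearest")
        blurred = gaussian_filter(warped, sigma)
        return blurred[::factor, ::factor]


    def iterative_backprojection(frames, shifts, factor=2, sigma=1.0, n_iter=20):
        """Refine a high-resolution estimate by back-projecting the simulation error."""
        # Initial guess: pixel-replicate the first low-resolution frame.
        f = np.kron(frames[0], np.ones((factor, factor)))
        for _ in range(n_iter):
            for g, (dx, dy) in zip(frames, shifts):
                g_sim = simulate_lowres(f, dx, dy, factor, sigma)
                err = g - g_sim                                    # error on the low-res grid
                err_up = np.kron(err, np.ones((factor, factor)))   # spread over each receptive field
                correction = gaussian_filter(err_up, sigma)        # back-projection kernel (= blur here)
                f += nd_shift(correction, (-dy, -dx), order=1, mode="nearest") / len(frames)
        return f

Each pass spreads the simulation error of every low-resolution pixel back over its receptive field in the high-resolution estimate, which is what makes the analogy with CAT back-projection concrete.
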
  • Imaging process: quantization of the displaced, blurred scene f, with additive noise n (a model equation is sketched after this symbol list)
    • g_k → k-th observation image frame
    • f → original scene
    • h → blurring operator
    • n_k → additive noise
    • s_k → nonlinear quantization function + displacement of k-th frame
    • (x,y) → center of the receptive field (in f) of the detector whose output is g_k(m,n)
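
Putting these symbols together, the model can be sketched as

    g_k(m,n) = s_k\big( (h * f)(x,y) + n_k(x,y) \big)

i.e. the scene f is blurred by h, corrupted by the additive noise n_k, and then displaced, sampled and quantized by s_k to give the k-th observed frame; whether the noise enters before or after the quantization is a detail of the paper not pinned down by these notes.
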
  • Receptive Field (in f) of a detector
    • Output: g_k(m,n)
    • Center: (x,y)
    • Shape → Region of support of the blurring operator h
  • Displacement = translation + rotation (the receptive-field center coordinates under this displacement are sketched after this list)
    • (x_0k,y_0k) → Translation of k-th frame
    • theta_k → rotation of k-th frame
    • s_x, s_y → sampling rates in the x and y directions
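
With those parameters, the center (x, y) of the receptive field producing the low-resolution sample g_k(m,n) can be written as a rotation plus translation of the grid point (s_x m, s_y n); which index maps to which axis, the sign conventions and the rotation center are my assumptions here:

    x = s_x m \cos\theta_k - s_y n \sin\theta_k + x_{0k}
    y = s_x m \sin\theta_k + s_y n \cos\theta_k + y_{0k}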

  • Enhancing the resolution of color images → YIQ representation
    • The monochrome SR algorithm may then be applied separately to each component (a conversion sketch follows this list)
    • Gray-level image sequences are processed the same way as the Y-component image sequence
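
A minimal sketch of that color path in Python/NumPy, assuming RGB frames with values in [0, 1] and the standard NTSC RGB↔YIQ matrices; super_resolve is a placeholder for whatever monochrome SR routine is used (e.g. the back-projection sketch above):

    import numpy as np

    # Standard NTSC RGB -> YIQ matrix; its inverse maps YIQ back to RGB.
    RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])
    YIQ2RGB = np.linalg.inv(RGB2YIQ)


    def color_super_resolution(rgb_frames, super_resolve):
        """Convert each frame to YIQ, super-resolve each channel's sequence, convert back."""
        yiq_frames = [frame @ RGB2YIQ.T for frame in rgb_frames]      # frame: H x W x 3
        channels = []
        for c in range(3):                                            # Y, I, Q handled independently
            sequence = [f[..., c] for f in yiq_frames]
            channels.append(super_resolve(sequence))
        yiq_hi = np.stack(channels, axis=-1)
        return np.clip(yiq_hi @ YIQ2RGB.T, 0.0, 1.0)
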
  • Obtaining the parameters of the imaging process: Image Registration → Iterative Refinement → Recovering the Blur
  • Image Registration
    • Keren et al. [12] (based on [13]) for this model → Horizontal shift a + vertical shift b + rotation theta between images g1 and g2 → Valid for small displacements
    • Trick: g2 is a translated and rotated version of g1 → Expand sin and cos in their Taylor series (first two terms) → Expand g1 in its own Taylor series (first-order term) → Error function between g1 and g2 over their overlapping parts → Minimize the error → Motion parameters (a, b, theta) (a least-squares sketch follows below)
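
A hedged NumPy sketch of that linearized least-squares step. It keeps only the first-order terms (dropping the theta^2/2 terms that the full expansion retains), rotates about the image center, and would normally be iterated, e.g. coarse-to-fine; none of these choices are taken from the paper itself.

    import numpy as np


    def register_small_motion(g1, g2):
        """Estimate (a, b, theta) with g2(x, y) ~ g1(x + a - y*theta, y + b + x*theta).

        Linearizing g1 to first order gives
            g2 - g1 ~ a*dg1/dx + b*dg1/dy + theta*(x*dg1/dy - y*dg1/dx),
        which is a 3-parameter linear least-squares problem over all pixels.
        """
        gy, gx = np.gradient(g1)                 # dg1/dy (rows), dg1/dx (columns)
        h, w = g1.shape
        y, x = np.mgrid[0:h, 0:w].astype(float)
        x -= w / 2.0                             # rotate about the image center
        y -= h / 2.0

        r_theta = x * gy - y * gx                # derivative of g1 with respect to theta
        A = np.stack([gx.ravel(), gy.ravel(), r_theta.ravel()], axis=1)
        d = (g2 - g1).ravel()
        (a, b, theta), *_ = np.linalg.lstsq(A, d, rcond=None)
        return a, b, theta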
