
Installing Anaconda/Python and configuring OpenCV with the extra modules on Linux/Windows.


A tutorial on how to install Anaconda 4.2.0 and configure OpenCV 3.2 inside Anaconda for Python 2.7.

After completing these steps you will be able to use Anaconda/Python 2.7 and OpenCV with the extra modules.

Introduction

At the office I was assigned a simple image-processing task that required the OpenCV extra modules in Python. The task itself was simple, but I struggled a lot to configure Python with the OpenCV extra modules. After spending my weekends on it I completed the task on time, and only then realized how easy it actually was.

Let us first start with Linux (the steps for Windows 7 are almost identical).

Linux System:

1]. Download Anaconda for your system configuration from the link below.

Download Anaconda

2]. On Linux, open a terminal, go to the download folder, and run the Anaconda installer (.sh) file.


3]. Open the ~/.bashrc file (gedit ~/.bashrc), add the line below at the end of the file, and save it.

export PATH="<path to your anaconda directory>/bin:$PATH"

For me it is:      export PATH="/home/lord/anaconda2/bin:$PATH"

4]. Go to the anaconda2/bin folder in the terminal and run the spyder command. Spyder is a good IDE for Python.


Anaconda is installed. Now data analysis and machine learning can be performed in Python.

Configure Opencv with anaconda

5]. Download OpenCV 3.2 for Linux from the link below.

Download OpenCV

6]. Extract the OpenCV archive.

7]. Go to the opencv build/cv2/python/x64/ folder.

8]. Copy the Python binding (cv2.pyd on Windows; cv2.so on Linux) and paste it into Anaconda's site-packages folder.

9]. Restart the IDE.

Open the Spyder app and try import cv2. If there is no error, we are done with OpenCV.
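The same check also works from a plain Python prompt. Below is a small, hedged sketch: `check_module` is an illustrative helper of my own (not part of OpenCV or Anaconda) that reports whether a module can be imported without raising an error.

```python
import importlib.util

def check_module(name):
    """Return True if the named module can be imported, False otherwise."""
    return importlib.util.find_spec(name) is not None

# After the OpenCV setup above, this should report True:
print(check_module("cv2"))

# A standard-library module is always importable:
print(check_module("math"))  # True
```

If `check_module("cv2")` reports False, revisit step 8 and make sure the binding landed in the correct site-packages folder.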

Configure the OpenCV extra modules with Anaconda

Install the extra modules with conda:

conda install -c https://conda.binstar.org/menpo opencv


That's it! Open Spyder and have fun.

 

Windows 7 System:

1]. Download Anaconda for your system configuration from the link below.

Download Anaconda

2]. Install Anaconda on the system.

3]. Go to the anaconda2/ folder in the terminal (cmd) and run the spyder command; it will open the Spyder IDE.

C:/anaconda2>spyder

Anaconda is installed; we can perform data analysis and ML in Python.

Configure Opencv with anaconda

5]. Download OpenCV 3.2 for Windows 7 from the link below.

Download OpenCV

6]. Extract the OpenCV archive.

7]. Go to the opencv build/cv2/python/x64/ folder.

8]. Copy cv2.pyd and paste it into Anaconda/Lib/site-packages/.

9]. Restart the IDE.

Open the Spyder app and try import cv2; if there is no error, we are done with OpenCV.

Configure the OpenCV extra modules with Anaconda

Install the extra modules with conda:

conda install -c https://conda.binstar.org/menpo opencv

That's it! Open Spyder and have fun.


Happy Learning Cheers

 

 


Eigen Value Decomposition

START

 

Basics of eigenvalues and eigenvectors:

  • Suppose A is a matrix of size N x N.
  • V is a vector of size N.
  • V is called an eigenvector of A if multiplying V by A only scales it:

A V = λ V        ……. (1)

  • In equation (1), λ is called an eigenvalue of the matrix.
  • The function eig is used in MATLAB to get the eigenvectors and eigenvalues.
  • Suppose

                    [V, D] = eig(A)

where D is a diagonal matrix containing the eigenvalues and the columns of V are the corresponding eigenvectors, so that

AV = VD        …..(2)

From equation (1),

det(λI - A) = 0,

which is called the characteristic polynomial; its roots are the eigenvalues. Here λ denotes an eigenvalue, I is the identity matrix, and A is the given input matrix.
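The [V, D] = eig(A) relationship can also be checked numerically outside MATLAB. Here is a hedged NumPy sketch (numpy.linalg.eig plays the role of MATLAB's eig; the matrix A is just an example) verifying AV = VD and that |A| equals the product of the eigenvalues:

```python
import numpy as np

# a small symmetric example matrix, so the eigenvalues are real
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# w holds the eigenvalues, the columns of V the eigenvectors
w, V = np.linalg.eig(A)
D = np.diag(w)

# A V = V D, as in equation (2)
print(np.allclose(A @ V, V @ D))   # True

# the determinant equals the product of the eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(w)))  # True
```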

  • The determinant of matrix A (|A|) is the product of all the eigenvalues. The set of eigenvalues is called the spectrum of the matrix.
  • If we calculate the eigenvalues and eigenvectors of data, the eigenvectors represent the basis directions along which the data is spread, and the eigenvalues tell us which basis directions (eigenvectors) carry more information about the data.

 

Symmetric matrix:

If the transpose of a matrix does not change the matrix, it is called a symmetric matrix:

S = S^T

where S^T is the transpose of S.

One example of a symmetric matrix is the covariance matrix.

A useful property of a symmetric matrix is that all of its eigenvalues are real.

Eigen Value Decomposition:

If V1 and V2 are eigenvectors corresponding to eigenvalues λ1 and λ2, and λ1 is not equal to λ2, then V1 and V2 are orthogonal.

Let us collect the eigenvectors into V = (V1, V2, V3, …) (an orthogonal matrix):

if V^T V = I, the columns of V are orthonormal,

and let Λ = diag(λ1, λ2, λ3, …) be the diagonal matrix of eigenvalues.

Then we can write

S = V Λ V^T        ……. (3)

where V^T is the transpose of V.

Equation (3) is called the eigenvalue decomposition.

Let's write the code for the eigenvalue decomposition in MATLAB.

For the eigenvalue decomposition to make sense, the input matrix should be symmetric so that the eigenvalues are real. We will calculate a covariance matrix, which is symmetric.

%%%%%%%%%%%%%%%%%%%%%%%% Code Start %%%%%%%%%%%%%%%%

%% read image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

%% calculate the covariance matrix, which is symmetric

Cov_I = cov(img_gray);

% show the Cov_I matrix

figure, imshow(Cov_I,[])

%% decompose the symmetric matrix and reconstruct it

%% eigenvalue decomposition

[E,D] = eig(Cov_I);        %%% E holds the eigenvectors and D the diagonal matrix of eigenvalues

%%%%%%% Reconstruction %%%%%%%%%%%%%%%%

% reconstruction loss is the information lost by removing eigenvectors and eigenvalues

% for this reconstruction only the top 5 eigenvalues and corresponding eigenvectors are used
% (eig sorts eigenvalues in ascending order, so the top ones are the last columns)

ReCov_I5 = E(:,295:end)*D(295:end,295:end)*E(:,295:end)';

figure, imshow(ReCov_I5,[])

title('reconstructed symmetric matrix with top 5 eigenvectors')

reconstruction_loss5 = 100 - (sum(sum(D(295:end,295:end)))/sum(D(:)))*100;

% for this reconstruction only the top eigenvalue and its eigenvector are used

ReCov_I1 = E(:,300:end)*D(300:end,300:end)*E(:,300:end)';

figure, imshow(ReCov_I1,[])

title('reconstructed symmetric matrix with top 1 eigenvector')

reconstruction_loss1 = 100 - (sum(sum(D(300:end,300:end)))/sum(D(:)))*100

%%%%%%%%%%%%%%%%%%%%%%%%% Code end %%%%%%%%%%%%%%%%%%%
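The same top-k reconstruction idea can be sketched in NumPy. This is a hedged translation, not the post's MATLAB code: numpy.linalg.eigh also returns eigenvalues in ascending order for a symmetric matrix, so the "top" eigenpairs are the last columns, and the random covariance matrix here just stands in for Cov_I.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
S = np.cov(X, rowvar=False)       # 20 x 20 symmetric covariance matrix

w, V = np.linalg.eigh(S)          # eigenvalues in ascending order

k = 5                             # keep the top-5 eigenpairs
Vk, wk = V[:, -k:], w[-k:]
S_k = Vk @ np.diag(wk) @ Vk.T     # low-rank reconstruction of S

# percentage of eigenvalue mass (information) lost, as in the post
loss = 100 - 100 * wk.sum() / w.sum()
print(round(loss, 2))
```

Keeping all 20 eigenpairs reproduces S exactly and gives zero loss; dropping more eigenpairs increases the loss.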

 

Results:聽

Input image

Input Lena Image

 

Symmetric or Covariance input image generated by Lena image:

Covariance / symmetry image

 

Covariance image reconstruction using the top 5 eigenvalues and corresponding eigenvectors:

Reconstruction with top 5 eigen vectors

Loss of information from using only the top 5 eigenvectors = 27.84 percent

The top 5 eigenvectors contain 72.16 percent of the input image's information

 

Covariance image reconstruction by 1聽eigen vector:

Reconstruction with top 1 eigen vector

Loss of information = 72.40 percent

The top eigenvector contains 27.60 percent of the input image's information

Applications:

Eigenvalue decomposition is the basis of singular value decomposition and principal component analysis. With the help of PCA we can reconstruct the original Lena image, which we will discuss in a further post.

 


Happy Learning

Cheers

 

 

 

 

 

GLCM for texture analysis

START

 

 

  • The GLCM (gray-level co-occurrence matrix) is mainly used to perform texture analysis and extract features from an image.
  • As the name suggests, it works on a gray image and builds a sort of 2-D histogram of co-occurring gray levels.
  • The main applications of the GLCM are texture analysis, feature extraction, and segmentation.

Steps to calculate the GLCM matrix:

  • Let us assume a gray image I.
  • Initialize a GLCM matrix of size 256 x 256 (256 is the number of gray levels of the GLCM).
  • Suppose we use a GLCM angle of zero degrees, meaning the direction of the GLCM is horizontal.
  • Suppose the GLCM distance is 1, meaning we just look at the pixel horizontally next to the current pixel.
  • At I(i,j), read the gray value (suppose the pixel value is a = 127 at I(i,j)), and read the gray value at I(i,j+1) in the case of distance 1 and zero degrees (suppose the pixel value is b = 58 at I(i,j+1)).
  • Go to GLCM matrix coordinate (a = 127, b = 58) and increment the value by 1.
  • Iterating over the full image gives us the zero-degree GLCM matrix for distance 1.
  • The GLCM distance can be chosen according to the texture type.
Figure. 1 Texture images of road and brick
  • For the road, because the texture changes so rapidly, we consider a small distance to calculate the GLCM; for the bricks, the GLCM distance should be large.
  • The GLCM angle should be selected according to the direction in which the image texture changes; so for the brick image we would consider two directions, 0 degrees and 90 degrees.
  • Many features can be calculated from the GLCM matrix to perform texture analysis, such as contrast, correlation, energy, and homogeneity.
  • We will use the contrast feature of the GLCM matrix, which is sufficient to say whether a texture is rough or smooth.

Let's write basic GLCM code that calculates the zero-degree GLCM for 256 levels and computes the contrast feature.

%%%%%%%%%%%%%%%%%%%%%%%%%Code Start here %%%%%%%%%%%%%%%

%%%%% 聽GLCM function start here %%%%%%%%%%%%%

function GLCM_0 = getGLCM0(img_gray, distance)
%% function calculates the GLCM matrix at a zero-degree angle and the given distance
%% GLCM_0 is the GLCM matrix for a 0-degree angle
%% img_gray is the input gray image
%% distance is the GLCM distance

%% initialize GLCM matrix

GLCM_0 = zeros(256,256);

[row,col] = size(img_gray);

for i = 1:row
    for j = 1:col-distance

        pixel1 = img_gray(i,j)+1;           % +1 because MATLAB indices start at 1
        pixel2 = img_gray(i,j+distance)+1;
        GLCM_0(pixel1,pixel2) = GLCM_0(pixel1,pixel2)+1;

    end
end

end

%%% GLCM function end here %%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%% Contrast function start here %%%%%%%%%%%%%%%%%

function Contrast = GLCM_contrast(GLCM)

%% function calculates the contrast feature for a GLCM matrix
%% GLCM is the gray-level co-occurrence matrix (input)
%% Contrast is the contrast feature of the GLCM matrix

%% normalize the GLCM

[row,col] = size(GLCM);

N_factor = sum(sum(GLCM));

GLCM = GLCM./N_factor;

Contrast = 0;

for i = 1:row
    for j = 1:col

        temp = ((i-j)^2)*GLCM(i,j);
        Contrast = Contrast + temp;
    end
end

end

%%%% Contrast function end here 聽%%%%

 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%main GLCM code %%%%%%%%%

%% this main code calls 聽GLCM and contrast function for given image

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\frontend-large Brick.jpg';

img_RGB = imread(ImagePath); % Read image from path

img_gray = rgb2gray(img_RGB); % Convert image to gray

img_gray = imresize(img_gray,[900,1200]);   % resizing (normalizing the size)

GLCM_0 = getGLCM0(img_gray, 5);     %% get the GLCM matrix for distance 5

Contrast = GLCM_contrast(GLCM_0);   %% calculate the contrast feature

disp('Contrast of our function is =')
Contrast
%% use matlab inbuilt function to verify result

glcm = graycomatrix(img_gray,'NumLevels',256,'Offset',[0 5]);

stats = graycoprops(glcm,'Contrast');

disp('Contrast of MATLAB is = ')
stats.Contrast

%%%%%%%%%%%%%%%%%%%%%%%% Code End here %%%%%%%%%%%%%%%%%
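For readers outside MATLAB, the same zero-degree GLCM and contrast feature can be sketched in NumPy. This is a hedged translation of getGLCM0 and GLCM_contrast above, with a configurable number of gray levels so it can run on a tiny example image:

```python
import numpy as np

def glcm_0(img, distance=1, levels=256):
    """Zero-degree GLCM: count horizontal pixel pairs (i, j) -> (i, j+distance)."""
    glcm = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols - distance):
            glcm[img[r, c], img[r, c + distance]] += 1
    return glcm

def glcm_contrast(glcm):
    """Contrast = sum over (i,j) of (i-j)^2 times the normalized co-occurrence."""
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

# tiny 2x2 example with 4 gray levels: the pairs are (0,1) and (2,3)
img = np.array([[0, 1],
                [2, 3]])
g = glcm_0(img, distance=1, levels=4)
print(glcm_contrast(g))   # each pair differs by one gray level -> contrast 1.0
```

Each of the two horizontal pairs differs by exactly one gray level, so the normalized contrast is (1^2)*0.5 + (1^2)*0.5 = 1.0.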

MATLAB provides inbuilt functions with which we can calculate the GLCM and its contrast and verify our results:

glcm = graycomatrix(img_gray,'NumLevels',256,'Offset',[0 5]);

stats = graycoprops(glcm,'Contrast');

Let's take two input images, the first a road texture and the second a brick image, apply the GLCM, and compute the contrast feature.

Figure. 2 Road Texture Image

 

Figure. 3 Brick texture image

 

Result:聽

Contrast feature of figure 2 (road texture): 3.9489e+03

Contrast feature of figure 3 (brick texture): 91.0206

From the results we can say that the road texture has a great deal of fine texture, which is why its contrast value is so high; if an image contains less texture, the contrast value will be low.

So for a smooth area in an image the contrast will be low, and for a rough area it will be high.

* Note: The GLCM values depend on the size of the image, so the image size should be fixed or normalized before comparison.

For more information about GLCM feature analysis, please go through the paper on Haralick texture features.

 


Happy Learning

Cheers

 

 

 

2 D Projective Transform

START

 

Let's take the Lena image to perform the transformations.

Input image
  • In this post I am going to discuss image projective transforms.
  • These transforms are easy to understand and useful in many applications such as object detection, 3-D view geometry, and ADAS.
  • Suppose I(x,y) is the input image and O(u,v) is the output (transformed) image.
  • We can then write a transform matrix T such that:

                    O(u,v) = T [I(x,y)]

  • In the homogeneous coordinate system we append a 1:

                   O(u,v,1) = T [I(x,y,1)]

  • Here T is a 3 x 3 matrix; depending on T, we can transform the image in many ways.

Let's write a code that takes an input image, multiplies it by the transform matrix, and produces the transformed image.

 

%%%%%%%%%%%%%%%%%%%%%%%% Code Start %%%%%%%%%%%%%%%%

function transform_img = trasform_2d(T, input_img)

% function performs the 2-D projective transform
% input_img : gray input image
% T is the 3 x 3 transform matrix
% transform_img : output gray transformed image

[row,col] = size(input_img);

transform_img = zeros(2*row,2*col);

for i = -row:2*row
    for j = -col:2*col

        U = inv(T)*[i j 1]';    % inverse mapping; ' is transpose

        if (U(3) ~= 0)          % for the 8-dof projective transform,
            U(1) = U(1)/U(3);   % normalize the homogeneous coordinate
            U(2) = U(2)/U(3);
        end

        % consider only points which lie inside the input image
        if ((U(1)>0) && (U(1)<row-1) && (U(2)>0) && (U(2)<col-1))

            transform_img(round(i+row+1),round(j+col+1)) = input_img(round(U(1)+1),round(U(2)+1));

        end

    end
end

end

%%%%%%%%%%%% Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Following are the main 2-D projective transforms:

  1. 聽Euclidean Transform
  2. Similarity Transform
  3. Affine Transform
  4. Projective Transform

 

Euclidean Transform :

Let us define the transformation matrix for the Euclidean transform:

T = [ R11  R12  Tx ;  R21  R22  Ty ; 0 0 1]

where R11, R12, R21, and R22 form the rotation matrix and Tx and Ty are the translations. So it has 3 degrees of freedom: 1 in rotation and 2 in translation.

Let's take a 30-degree rotation angle and -100 pixel translation in both directions.


Result of Euclidean Transform

The Euclidean transform does not have the ability to scale the image.
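Why the Euclidean transform cannot scale is easy to verify numerically. The following is a hedged NumPy sketch (not from the post): applying an example rotation-plus-translation T to two points leaves the distance between them unchanged.

```python
import numpy as np

angle = np.deg2rad(30)
tx, ty = -100, -100

# Euclidean transform: rotation plus translation
T = np.array([[np.cos(angle), -np.sin(angle), tx],
              [np.sin(angle),  np.cos(angle), ty],
              [0.0, 0.0, 1.0]])

p1 = np.array([10.0, 20.0, 1.0])   # homogeneous points
p2 = np.array([40.0, 60.0, 1.0])

q1, q2 = T @ p1, T @ p2

d_before = np.linalg.norm(p1[:2] - p2[:2])
d_after = np.linalg.norm(q1[:2] - q2[:2])
print(np.isclose(d_before, d_after))   # True: distances are preserved
```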

Similarity Transform:

Let us define the transformation matrix for the similarity transform:

T = [ s*R11  s*R12  Tx ; s*R21  s*R22  Ty ; 0 0 1]

It has 4 degrees of freedom.

If we add a scale factor (s) to the Euclidean transform, it becomes the similarity transform.

Let's take a 40-degree rotation angle, -100 pixel translation in both directions, and a scaling factor of 1.5.

Result of Similarity transform

Affine Transform :

Let us define the transformation matrix for the affine transform:

T = [ a11  a12  Tx ; a21  a22  Ty ; 0 0 1]

If in the similarity transform we can take different angles as well as different scales along each axis, it becomes the affine transform.

Let's take a 40-degree rotation angle in the x direction, a 30-degree rotation angle in the y direction, -100 pixel translation in both directions, a scaling factor of 0.7 in x, and 1.5 in y.

Result of Affine transform

Projective Transform : 聽

Define the full transformation matrix, which includes the homogeneous factors as well:

T = [ h11  h12  h13 ; h21  h22  h23 ; h31  h32  h33 ]

The projective transform has 8 degrees of freedom: we add the two coefficients h31 and h32 to the affine transform matrix, and they take care of the homogeneous coordinate geometry of the image. (I may discuss this in detail in a future calibration post.)
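The role of h31 and h32 is easiest to see on one point. A hedged NumPy sketch (the matrix values are illustrative only): after applying a projective T, the third homogeneous coordinate is no longer 1, and dividing by it projects the point back onto the image plane, which is what bends straight grids into perspective.

```python
import numpy as np

# projective transform: identity plus the two homogeneous coefficients
T = np.array([[1.0,  0.0,  0.0],
              [0.0,  1.0,  0.0],
              [0.01, 0.02, 1.0]])

p = np.array([100.0, 50.0, 1.0])   # homogeneous image point
u = T @ p                          # third coordinate: 0.01*100 + 0.02*50 + 1 = 3
u_norm = u / u[2]                  # normalize back to the image plane
print(u[2])
print(u_norm[:2])
```

For an affine T the third coordinate stays 1, so the normalization step changes nothing; only h31 and h32 make it matter.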

Result of projective Transform

%%%%%%%%%%%%%%%%%% 聽Code Start 聽%%%%%%%%%%%%%%%%%%%%%%%

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

[row, col] = size(img_gray);

figure,imshow(img_gray)
% for each transform below, [u,v,1]' = T*[x,y,1]' in homogeneous coordinates

%% euclidean transformation.

scale = 1; % scale factor
angle = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -100; % y translation

sc = scale*cos(angle);
ss = scale*sin(angle);

T = [ sc ss tx;
-ss sc ty;
0 0 1];
% call the function transform_2d
transform_img = trasform_2d(T, img_gray);
figure, imshow(transform_img)
title('Euclidean transform')

 

%% Similarity transform
scale = 1.5; % scale factor
angle = 40*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation

sc = scale*cos(angle);
ss = scale*sin(angle);

T = [ sc ss tx;
-ss sc ty;
0 0 1];

transform_img = trasform_2d(T, img_gray);
figure, imshow(transform_img)
title('Similarity transform')
%% Affine transform
scale1 = .7; % scale factor
scale2 = 1.7;
angle1 = 40*pi/180; % rotation angle
angle2 = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation
sc1 = scale1*cos(angle1);
ss1 = scale1*sin(angle2);
sc2 = scale2*cos(angle1);
ss2 = scale2*sin(angle2);

T = [ sc1 ss1 tx;
-ss2 sc2 ty;
0 0 1];
transform_img = trasform_2d(T, img_gray);

figure, imshow(transform_img)
title('Affine transform')

%% projective transform
scale1 = .7; % scale factor
scale2 = 1.7;
angle1 = 40*pi/180; % rotation angle
angle2 = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation

sc1 = scale1*cos(angle1);
ss1 = scale1*sin(angle2);
sc2 = scale2*cos(angle1);
ss2 = scale2*sin(angle2);

T = [ sc1 ss1 tx;
-ss2 sc2 ty;
.01 .02 1];

transform_img = trasform_2d(T, img_gray);
figure, imshow(transform_img)
title('Projective transform')

%%%%%%%%%%%%%%%%%%%%% 聽Code End %%%%%%%%%%%%%%%%%%%%%%%

 


Happy Learning

Cheers

 

 

Wavelet Transform

START

In this post, I will discuss the Haar wavelet analysis (wavelet transform) in image processing.

Applications of the Haar wavelet transform in image processing include feature extraction, texture analysis, and image compression.

 

 

 

 

  • The wavelet provides a time-frequency resolution analysis.
  • As shown in figure 1, it carries information about the different time-frequency bands of a signal.
  • It is like applying a comb of filter bands to the signal; these are called levels. First we apply the full frequency band and have no knowledge of time.
  • Applying a filter with half the bandwidth of the signal loses some frequency information but gains some time resolution; this is called the first-level decomposition.
  • Further reducing the filter bandwidth gives the second-level decomposition.

The Haar wavelet is made of two filters: a low-pass filter and a high-pass filter.

 

Figure. 2 Low pass and high pass filter.

Coefficients of the low-pass filter = [0.707, 0.707]

Coefficients of the high-pass filter = [-0.707, 0.707]
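The effect of these two filters can be seen on a tiny 1-D signal. This is a hedged NumPy sketch using np.convolve (the post's MATLAB code uses filter, i.e. causal FIR filtering, but the idea is the same): the low-pass filter passes smooth regions, while the high-pass filter responds only at the edge.

```python
import numpy as np

low = np.array([0.707, 0.707])    # low-pass (averaging) Haar filter
high = np.array([-0.707, 0.707])  # high-pass (differencing) Haar filter

signal = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])  # smooth signal with one edge

# 'valid' convolution: each output combines two neighboring samples
low_out = np.convolve(signal, low, mode='valid')
high_out = np.convolve(signal, high, mode='valid')

print(low_out)    # smooth regions pass through (scaled by about 1.414)
print(high_out)   # near zero everywhere except at the edge
```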

 

Figure. 3

 

 

Figure.4 LL band output of Haar wavelet
  • Figure 3 shows the block diagram of the first-level decomposition.
  • I is an image of size m x n. We take the image as a column vector (1-D) and apply the low-pass and the high-pass filter.
  • Down-sample the outputs and reshape into the image I1 of size m x n/2 (the size is n/2 because we down-sampled the columns by 2).
  • Apply the low- and high-pass filters row-wise to image I1, again down-sample the outputs, and reshape into images of size m/2 x n/2.

 

Figure.5 1st-level decomposition of the Haar wavelet
  • Figure 5 shows the results of the Haar wavelet decomposition at the first level.
  • The LL output proceeds to the next decomposition level, as it contains most of the information of the input image and looks almost the same at the down-sampled resolution.
  • HH, LH, and HL carry texture information in the vertical, horizontal, and diagonal directions; HH, LH, and HL are called Haar features.
  • We can further decompose the LL image into LL1, LH1, HL1, and HH1 images.

 

Figure.6 LL band output in 4 level decompostion

 

Figure.7 2nd聽Level decomposistion of haar wavelet

MATLAB provides a Wavelet Toolbox GUI to perform the wavelet analysis.

Type wavemenu in the MATLAB command window.

Figure. 8 wavelet tool box GUI in聽MATLAB

 

%%%%%%Code Start %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [LL, LH, HL, HH] = haar_wave(image_in)

%% function haar_wave calculates the Haar wavelet decomposition of a given input image
%% image_in :- input gray-level image
%% LL :- low-low band output image
%% LH :- low-high band output image
%% HL :- high-low band output image
%% HH :- high-high band output image
image_in = im2double(image_in);
filter_L = [.707, .707];
filter_H = [-.707, .707];
[row, col] = size(image_in);
image_vector = image_in(:);

%% apply low pass filter in one d image column wise image
image_low_pass_1D = filter(filter_L,1,image_vector);
image_high_pass_1D = filter(filter_H,1,image_vector);

% Down sample the image low and high pass image.
temp = 1;
Dimage_low_pass_1D = 0;
Dimage_high_pass_1D = 0;

for i = 1:2:length(image_low_pass_1D)
Dimage_low_pass_1D(temp) = image_low_pass_1D(i);
Dimage_high_pass_1D(temp) = image_high_pass_1D(i);
temp = temp+1;
end

%% reshape the image in 2-D; the size will be row/2 x col
Dimage_low_pass = reshape(Dimage_low_pass_1D,round(row/2), col)';
Dimage_high_pass = reshape(Dimage_high_pass_1D, round(row/2),col)';

%% convert the images back to 1-D column vectors; note Dimage_low_pass was transposed above

Dimage_low_pass_1D = Dimage_low_pass(:);
Dimage_high_pass_1D = Dimage_high_pass(:);

LL_1D = filter(filter_L,1,Dimage_low_pass_1D);
LH_1D = filter(filter_H,1,Dimage_low_pass_1D);

HL_1D = filter(filter_L,1,Dimage_high_pass_1D);
HH_1D = filter(filter_H,1,Dimage_high_pass_1D);

%Down sample the all 1 _d vector images
temp = 1;
DLL_1D = 0;
DLH_1D = 0;
DHL_1D = 0;
DHH_1D = 0;

for i = 1:2:length(LL_1D)
DLL_1D(temp) = LL_1D(i);
DLH_1D(temp) = LH_1D(i);
DHL_1D(temp) = HL_1D(i);
DHH_1D(temp) = HH_1D(i);
temp = temp+1;
end

%% reshape the images to size row/2 x col/2

LL = reshape(DLL_1D, round(col/2), round(row/2))’;
LH = reshape(DLH_1D, round(col/2), round(row/2))’;
HL = reshape(DHL_1D, round(col/2), round(row/2))’;
HH = reshape(DHH_1D, round(col/2), round(row/2))’;

end

 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% call the haar wavelet function and check the result

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

% first level decomposition
[LL, LH, HL, HH] = haar_wave(img_gray);

% second level decomposition
[LL1, LH1, HL1, HH1] = haar_wave(LL);

%% Draw all image

figure, imshow(img_gray);

figure,
subplot(2,2,1), imshow(LL,[])
title('LL haar wave image')
subplot(2,2,2), imshow(LH,[])
title('LH haar wave image')
subplot(2,2,3), imshow(HL,[])
title('HL haar wave image')
subplot(2,2,4), imshow(HH,[])
title('HH haar wave image')

figure,
subplot(2,2,1), imshow(LL1,[])
title('LL1 haar wave image')
subplot(2,2,2), imshow(LH1,[])
title('LH1 haar wave image')
subplot(2,2,3), imshow(HL1,[])
title('HL1 haar wave image')
subplot(2,2,4), imshow(HH1,[])
title('HH1 haar wave image')

%%%%%%%%%%%%%%%%%Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%


Happy Learning

Cheers

Image Blending

 

START

 

  • Image blending is the process of blending (mixing) two images.
  • One important application of image blending is removing artifacts while creating a panorama or stitching images.
  • Image blending is also used to add objects to the background or foreground of images/scenes.

 

Let's take the two images that we want to blend.

Input image 1
Input image 2

 

Following are some of the image blending processes.

Suppose the first image is a and the second image is b.

1. Multiply (darken) image blending:

Figure. 1 Darken blending result

F_blend(a, b) = a * b

Darken (multiply) image blending multiplies the two images pixel by pixel. The blended result is darker than both input images, as shown in figure 1.

2. Screen or brighter image blending:

Figure.2 Screen聽image blending result

 

F_blend(a, b) = 1 - (1-a)*(1-b)

Screen image blending is the reverse of darken blending; it makes the image brighter, as shown in figure 2. So if we want dark areas to become brighter as well, we can use screen blending.

3. Overlay Image blending :

Figure. 3 Overlay blending result

 

F_blend(a, b) = 1 - 2*(1-a)*(1-b)  if (a > 0.5),  else  2*a*b

Overlay blending produces a result in which darker areas become darker and brighter areas become brighter, on the basis of the base image, as shown in figure 3. The image on which we check the condition is called the base image; here we check the condition on image a.

4. Weightage image blending:


Figure. 4 Weightage image blending.

F_blend(a, b) = (weight * a + (1-weight) * b) / 2

Weightage blending assigns weights to the two images and takes their average. Figure 4 shows weightage blending with a weight value of 0.5. The weight ranges from 0 to 1.
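The four pixelwise formulas above translate directly into NumPy. This is a hedged sketch (not the post's MATLAB code): images are assumed to be floats in [0, 1], and the overlay branch is selected on the base image a, exactly as described above.

```python
import numpy as np

def darken(a, b):
    return a * b

def screen(a, b):
    return 1 - (1 - a) * (1 - b)

def overlay(a, b):
    # the base image a decides the branch, as in the post
    return np.where(a > 0.5, 1 - 2 * (1 - a) * (1 - b), 2 * a * b)

def weightage(a, b, weight=0.5):
    return (weight * a + (1 - weight) * b) / 2

a = np.array([0.25, 0.75])
b = np.array([0.5, 0.5])
print(darken(a, b))    # [0.125 0.375]
print(screen(a, b))    # [0.625 0.875]
print(overlay(a, b))   # [0.25 0.75]
```

Note how screen is brighter than darken for the same inputs, and how overlay darkens where a < 0.5 but brightens where a > 0.5.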

5. Cross fading blending:

Cross fading is one of the most important blending methods, used to smooth panorama images.

Figure. 5 Ramp function for image

Figure 5 shows how we create the ramp function for the input images, where the ramp intensity goes from 0 to 1 and the width of the ramp is the part of the image we want to blend.

figure. 6 Blending with ramp function

We take the two ramp images according to figure 5 and add them to get the result.

Figure.7 Cross fading blending result

We used a full-width ramp to blend the images, as shown in figure 7. On the left-most side image 1 dominates, on the right-most side image 2 dominates, and in the middle the result is an average.
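The full-width ramp idea can be checked with a one-dimensional sketch (a hedged NumPy version; img1 and img2 here are single flat rows standing in for whole images):

```python
import numpy as np

n = 101
ramp_up = np.linspace(0.0, 1.0, n)   # ramp 0 -> 1 across the image width
ramp_down = 1.0 - ramp_up            # ramp 1 -> 0

img1 = np.full(n, 0.2)               # a flat dark row
img2 = np.full(n, 0.8)               # a flat bright row

# multiply each image by its ramp and add, as in figure 6
blended = ramp_down * img1 + ramp_up * img2

print(blended[0], blended[-1], blended[n // 2])
# the left end equals img1, the right end equals img2, the middle is the average
```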

Figure.8


Figure 8 shows image feathering with a step function. In figure 8 we can apply cross fading in a middle strip window to remove the sharp seam; you can see the result in figure 9.

cross_fading2
Figure. 9 Cross fading to remove the artifacts

In some applications we create an image pyramid and apply cross fading at each Gaussian and Laplacian level, then reconstruct the result. That is known as the pyramid blending process.

%%%%%%%%%%%%%%%%%%%% CODE START %%%%%%%%%%%%%%%%%%

clc % clear screen
clear all % clear all variable
close all % close all open window

img1 = imread('blend_image1.jpg');
img2 = imread('blend_image2.jpg');

img1 = imresize(img1,[480,640]);
img2 = imresize(img2,[480,640]);
figure, imshow(img1)
title('input image 1')
figure, imshow(img2)
title('input image 2')

img1 = im2double(img1);
img2 = im2double(img2);

[row,col,channel] = size(img1);

%% multiply darken image blending
darken_blend = img1.*img2;

figure, imshow(darken_blend,[])
title('darken blending result')

%% screen (brighter) image blending

brighter_blend = 1 - ((1-img1).*(1-img2));
figure, imshow(brighter_blend,[])
title('brighter blending result')

%% weightage blending; weight1 range is (0 to 1)

weight1 = .5;
weight2 = 1 - weight1;

weightage_blend = (weight1.*img1 + weight2.*img2)/2;

figure,
imshow(weightage_blend);
title('weightage blending result')

% overlay blending

%% initialize image
overlay_blending = zeros(row,col,channel);

%% assign values only at indices where img1 < .5
overlay_blending(img1<.5) = 2.*img1(img1<.5).*img2(img1<.5);
%% assign values only at indices where img1 > .5

overlay_blending(img1>.5) = 1 - (2.*(1-img1(img1>.5)).*(1-img2(img1>.5)));

figure, imshow(overlay_blending,[])
title('overlay blending result')

%% cross fading blending

%% generate ramp functions whose size is the width of the image (full ramp)
ramp1 = (0:1/col:1);
ramp2 = (1:-1/col:0);

cross_fading = zeros(row,col,channel);

for cols = 1: col

%%% multiply the ramp functions with the images to get ramp image 1 and ramp image 2,
%%% and add them to get the result
cross_fading(:,cols,:) = (ramp2(cols).*img1(:,cols,:) + ramp1(cols).*img2(:,cols,:));

end

figure, imshow(cross_fading)
title('cross fading blending')

feathering = zeros(row,col,channel);

%% generate step images and add them
feathering(:,1:col/2,:) = img1(:,1:col/2,:);
feathering(:,col/2:end,:) = img2(:,col/2:end,:);

imshow(feathering)
title('image feathering')

ramp3 = (0:1/100:1);
ramp4 = (1:-1/100:0);

ground = zeros(1,(col/2)-50);
one = ones(1, (col/2)-50);

%% generate the ramp function for cross fading in middle of image with 100 pixel width.
function3 = [ground ramp3 one];
function4 = [one ramp4 ground];

feathering1 = zeros(row,col,channel);

%apply cross fading
for cols = 1: col

feathering1(:,cols,:) = (function4(cols).*img1(:,cols,:) + function3(cols).*img2(:,cols,:));

end

figure,
imshow(feathering1)
title('cross fading result')

%%% %%%%%%%%%%%%%%%% CODE END %%%%%%%%%%%%%%%%%%%%%


Happy Learning

Cheers

 

 

 

 


Gaussian and Laplacian Pyramids

START

Gaussian and Laplacian pyramids are the result of applying Gaussian and Laplacian filters to an image in cascade, which is effectively filtering with different kernel sizes.

 

Figure.1 Pyramid
  • Figure 1 shows the pyramid of an image.
  • The full image resolution is taken at level 0.
  • At each level up, the image resolution is down-sampled by 2. So if the starting image size is 256 x 256 at level 0, the level 1 image size will be 128 x 128.

 

Gaussian Pyramid :

Figure. 2 Gaussian Pyramid of level 2
  • Figure 2 shows the Gaussian pyramid block diagram for an input image.
  • Take the input image (in_image) and apply the Gaussian filter; the output is the level 0 image.
  • Down-sample the level 0 image by 2 and apply the Gaussian filter; the output is the level 1 image.
  • Down-sample the level 1 image by 2 and apply the Gaussian filter; the output is the level 2 image.
  • In the same way we can create the full Gaussian pyramid.
  • Every level uses the same Gaussian filter. (If you down-sample the image by 2 at each level, the size of the Gaussian filter should not be changed.)
  • Down-sampling the image by 2 is equivalent to doubling the bandwidth of the Gaussian filter.
  • Suppose the Gaussian filter cutoff frequency is F1 at level 0; if we down-sample the image by 2, the effective cutoff frequency becomes 2*F1 for the down-sampled image.
  • So by down-sampling the image, we are effectively applying Gaussians of different sizes to the input image. This is also called multi-resolution analysis.
  • Gaussian pyramids are used in SIFT, SURF, and Gabor filters as multi-resolution feature extractors.


Figure. 3 Gaussian pyramid results

Figure 3 shows the Gaussian pyramid results. The level 2 image is much more blurred than level 0.

Laplacian Pyramid:聽

laplacian_pyramid
Figure. 4 Laplacian pyramid of level 1

 

  • Suppose level 0, level 1, and level 2 are the outputs of the Gaussian pyramid discussed above. Figure 4 shows the block diagram for building a Laplacian pyramid from a Gaussian pyramid.
  • Upsample the Gaussian level 1 image by 2 (level 1_2) and subtract level 1_2 from the level 0 image; the output is Laplacian level 0.
  • Upsample the Gaussian level 2 image by 2 (level 2_2) and subtract level 2_2 from the Gaussian level 1 image; the output is Laplacian level 1.
  • The Laplacian pyramid isolates a pass band of frequencies at each level.
  • Suppose the Gaussian level 0 cutoff frequency is f1 and the Gaussian level 1 cutoff frequency is f2; Laplacian level 0 then contains the band of frequencies between f2 and f1.
  • Laplacian pyramids are often used as a pre-processing step in object detection.
  • They can also help compute optical flow for large motion vectors.
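The subtract-after-upsampling step can be checked with a minimal NumPy sketch. This is illustrative, not the post's MATLAB code: blurring is omitted and nearest-neighbor up/downsampling is assumed, which makes the reconstruction exact.

```python
import numpy as np

rng = np.random.default_rng(0)
g0 = rng.random((8, 8))          # stand-in for Gaussian level 0
g1 = g0[::2, ::2]                # level 1: downsample by 2 (blur omitted for brevity)

# Upsample level 1 back to the level 0 size (nearest neighbor)
up = np.repeat(np.repeat(g1, 2, axis=0), 2, axis=1)

# Laplacian level 0 = the detail lost between the two levels
lap0 = g0 - up

# Adding the detail back recovers the original level exactly
assert np.allclose(lap0 + up, g0)
```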

 

laplacian_pyramid_images1
Figure .5 Laplacian pyramid results of level 2

 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%Code Start %%%%%%%%%%%%%%%

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

[row,col,channel] = size(img_gray); % get size of image
h_gauss = fspecial('gaussian',11,1);

% apply gaussian filter to gray image
gauss_level_0 = imfilter(img_gray, h_gauss,'replicate');

% down sample the gauss_level_0 image by 2 (resize function)
% Apply gaussian filter
gauss_level_1 = imfilter(imresize(gauss_level_0,[round(row/2),round(col/2)]), h_gauss,'replicate');
% down sample the gauss_level_1 image by 2 (resize function)
% Apply gaussian filter
gauss_level_2 = imfilter(imresize(gauss_level_1,[round(row/4),round(col/4)]), h_gauss,'replicate');
% down sample the gauss_level_2 image by 2 (resize function)
% Apply gaussian filter
gauss_level_3 = imfilter(imresize(gauss_level_2,[round(row/8),round(col/8)]), h_gauss,'replicate');

% upsample the gauss_level_1 image by 2 (resize function)
% subtract resized gauss_level_1 image from gauss_level_0 image
laplacian_level_0 = gauss_level_0 - imresize(gauss_level_1,[row,col]);
% upsample the gauss_level_2 image by 2 (resize function)
% subtract resized gauss_level_2 image from gauss_level_1 image
laplacian_level_1 = (gauss_level_1 - imresize(gauss_level_2,[round(row/2),round(col/2)]));
% upsample the gauss_level_3 image by 2 (resize function)
% subtract resized gauss_level_3 image from gauss_level_2 image
laplacian_level_2 = (gauss_level_2 - imresize(gauss_level_3,[round(row/4),round(col/4)]));
% draw all the images
subplot(2,2,1), imshow(img_gray,[])
title('input image')
subplot(2,2,2), imshow(gauss_level_0,[])
title('gaussian pyramid level0')
subplot(2,2,3), imshow(gauss_level_1,[])
title('gaussian pyramid level1')
subplot(2,2,4), imshow(gauss_level_2,[])
title('gaussian pyramid level2')

figure,
subplot(2,2,1), imshow(img_gray,[])
title('input image')
subplot(2,2,2), imshow(laplacian_level_0,[])
title('laplacian pyramid level0')
subplot(2,2,3), imshow(laplacian_level_1,[])
title('laplacian pyramid level1')
subplot(2,2,4), imshow(laplacian_level_2,[])
title('laplacian pyramid level2')

%%%%%%%%%%%%%%%%%%%%%%%%%%%%Code End %%%%%%%%%%%%%%%%


Happy Learning

Cheers

Image Enhancements

START

 

  • Image enhancement is the manipulation of an image so that it better suits our application.
  • As discussed in the last post, a Gaussian filter can remove noise; that is itself one type of image enhancement technique.

Some of image enhancement techniques are

  1. Negative image
  2. Log transform
  3. Power law聽transform
  4. Filtering (Gaussian filter, Median filter)
  5. Histogram equalization.
  • This post discusses the negative, power-law, and log transforms of images.
  • Histogram equalization is one of the most popular image enhancement methods, but I am not going to discuss it in this post :D.

Negative Transform:

output_image = (Negative_range - 1) - input_image;

  • It generates the negative of the input image.
  • We can manipulate the output image with the Negative_range parameter.
  • The Negative_range parameter can range from 1 to 256.
  • Input values above (Negative_range - 1) become zero in the output image.
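A minimal NumPy illustration of the negative transform formula, assuming Negative_range = 256 and clipping the result to the valid range (not from the post's MATLAB code):

```python
import numpy as np

img = np.array([[0, 100, 255]], dtype=np.int32)  # sample pixel intensities
negative_range = 256

# output = (Negative_range - 1) - input, clipped to the displayable range
out = np.clip((negative_range - 1) - img, 0, 255)
print(out)  # [[255 155   0]]
```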

 

transforms
Figure.1 Image Enhancements

The figure shows the negative of the input Lena image with Negative_range = 256.

Power Law Transform:

output_image = Constant * (input_image ^ gamma);

  • The power-law transform manipulates image contrast with the help of the gamma value.
  • Figure 2 shows the power-law transform output with Constant = 1 and gamma = 3.

Log Transform:

output_image = constant * log(1 + input_image);

  • The log transform changes image contrast roughly in the reverse manner of the power-law transform.
  • 1 is added to the input image to avoid infinity (log(0)).
  • The log transform manipulates the image with the help of a constant multiplier.
  • Figure 2 shows the log transform of the input Lena image with a constant value of 5.
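Both transforms are one-liners in NumPy. A hedged sketch, assuming a grayscale image scaled to [0, 1] (the post's MATLAB code below works on a 0-255 double image instead):

```python
import numpy as np

img = np.array([[0.1, 0.5, 0.9]])  # grayscale values in [0, 1]
gamma, C = 3, 5

power = 1.0 * img ** gamma   # power-law: gamma > 1 darkens mid and low tones
log_t = C * np.log(1 + img)  # log: boosts dark values, compresses bright ones
```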

MATLAB code:

%%%%%%%%%%%%%%%%%%%%%%% code start %%%%%%%%%%%%%%%%%%%%%%

img = imread('lena','jpg'); % load the image

img_gray = rgb2gray(img);

img_gray = double(img_gray); % convert image to double data type

[row,col,channel] = size(img);

subplot(2,2,1), imshow(uint8(img_gray))

title('Lena image')

%% negative transform

%% Subtract the input image from 255

img_gray_negative = 255 - uint8(img_gray);

% plot the negative image

subplot(2,2,2), imshow(img_gray_negative)

title('Negative Lena image')

%% power transform

constant = 1;   % assign the constant parameter equal to 1

gamma = 3;      % assign the gamma value to 3

img_gray_power = constant * (img_gray + 1).^gamma;

subplot(2,2,3), imshow(img_gray_power,[])

title('Power law Lena image gamma = 3')

% log transform

C = 5;   % assign constant C value to 5

img_gray_log = C * log(1 + img_gray);

subplot(2,2,4), imshow(img_gray_log,[])

title('log Lena image C = 5')

%%%%%%%%%%%%%%%%%%% code end %%%%%%%%%%%%%%%%%%%%%

Note*

These transforms serve as pre-processing modules in an image processing pipeline and can significantly enhance an image, depending on the application.

 

 


Happy Learning

 

Cheers

2 D FFT and Image Filtering

START

In this post I will discuss the 2-D FFT in the context of image processing.

Steps we are going to perform:

  1. Read input images and observe their FFTs.
  2. Generate a noisy image with random noise.
  3. Analyze the FFTs of the clean Lena image and the noisy Lena image.
  4. Use a Gaussian filter in the frequency domain to clean the image.
  5. Analyze the results.

 

FFT_image1
Figure.1

Figure 1 shows the FFT of a single-color image and of a white-and-gray image.

  • In the single-color image all pixels are the same; there is no variation in pixel color.
  • The frequency content of the image is almost zero, so we see a small dot at the center.
  • That means all the frequency content is concentrated near zero.
  • In the gray-and-white image the color changes in the vertical (Y) direction, so the FFT shows a frequency distribution in the Y direction; no frequency change appears in the X direction.
  • If an image is blurred (i.e., color variation is low), its FFT distribution is concentrated near the center. This helps in estimating the amount of blur in an image.
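The "small dot at the center" observation is easy to verify numerically. A tiny NumPy check (illustrative only, not from the post):

```python
import numpy as np

flat = np.ones((8, 8))  # a single-color image
spec = np.abs(np.fft.fftshift(np.fft.fft2(flat)))

# All spectral energy sits in the single center (DC) bin
print(spec[4, 4], spec.sum())
```

Both printed values are 64: the DC bin carries everything, and every other bin is zero.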

 

FFT_2
Figure.2 FFT on random noise
  • A random noise image has random pixel variation, which means many random frequencies are present in the image.
  • The FFT of a random noise image is distributed in all directions without any prominent frequency, as shown in Figure 2.

 

Lena_FFT
Figure.3
  • Figure 3 shows the FFT of the standard Lena image. A lot of the information lies at the center, and some prominent high-frequency content is also present, which makes the image sharp.
  • In the noisy Lena image the random noise is distributed across the image, but some information remains at the center; we can see some brightness there.

 

FFT_gaussian27
Figure.4
  • If we apply a filter that keeps only the center (low) frequencies of the image, we can get rid of the random noise.
  • Take a Gaussian mask (as shown in Figure 4) and multiply it with the image in the frequency domain (multiplication in the frequency domain is convolution in the spatial domain).
  • Transform the result back to the spatial domain; this is the filtered image (shown in Figure 4).
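The parenthetical claim, that multiplication in the frequency domain equals convolution in the spatial domain, can be verified numerically. Note that for the DFT this gives circular convolution. A minimal NumPy sketch (not the post's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((4, 4))
mask = rng.random((4, 4))

# Multiply in the frequency domain, then transform back
freq = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(mask)).real

# Compute the circular convolution directly in the spatial domain
n, m = img.shape
circ = np.zeros_like(img)
for i in range(n):
    for j in range(m):
        for a in range(n):
            for b in range(m):
                circ[i, j] += img[a, b] * mask[(i - a) % n, (j - b) % m]

assert np.allclose(freq, circ)  # the two results agree
```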

 

gaussian_masks
Figure.5
  • Figure 5 shows the result of filtering the image with Gaussian masks of different sizes.
  • With a Gaussian sigma of 5 the image is strongly blurred, but the noise is removed.
  • With a Gaussian sigma of 57, the image is not cleaned.

So, with the help of the FFT, we can choose the size and distribution of the Gaussian filter to suit our application.

 

Some applications of FFT in image processing

  • Feature extraction.
  • Image compression.
  • Filtering.
  • Estimating the amount of blur in an image.
  • Speeding up 2-D convolution and operations like erosion and dilation.

 

MATLAB code:

%%%%%%%%%%%%%%%%%%%%%%code start%%%%%%%%%%%%%

clc;       % clear screen

clear all  % clear all variables

close all

img = imread('lena','jpg'); % load the image

img_gray = rgb2gray(img);

img_gray = double(img_gray);   % convert image to double data type

[row,col,channel] = size(img);

noise = 255*rand(row,col);         % generate random noise

noisy_img = (img_gray + noise)/2;  % create noisy image

% calculate fft of clean image.

fftimg = fft2(img_gray);

% use fftshift to shift the fft to the center

fft_img = fftshift(fftimg);

% calculate fft of noisy image.

fftnoisy_img = fft2(noisy_img);

fft_noisy_img = fftshift(fftnoisy_img);

% Generate gaussian filter with size of image and sigma 27.

H = fspecial('gaussian',[row, col],27);

% multiplication of gaussian filter with noisy image in frequency domain

fft_filter_img = H.*fft_noisy_img;

% take inverse of fftshift

fft_filterimg = ifftshift(fft_filter_img);

% calculate absolute value of inverse FFT that is filtered image.

filterimg = abs(ifft2(fft_filterimg));

%plot all the results

subplot(2,2,1), imshow(uint8(img_gray))

title('Lena image')

subplot(2,2,2), pcolor((log(abs(fft_img)))); shading interp

colormap(gray), set(gca,'Xtick',[],'Ytick',[])

title('FFT of Lena image')

subplot(2,2,3), imshow(uint8(noisy_img))

title('Lena noisy image')

subplot(2,2,4), pcolor((log(abs(fft_noisy_img)))); shading interp

colormap(gray), set(gca,'Xtick',[],'Ytick',[])

title('FFT of Lena noisy image')

figure,

subplot(2,2,1), imshow(uint8(noisy_img))

title('Lena noisy image')

subplot(2,2,2), pcolor((log(abs(fft_noisy_img)))); shading interp

colormap(gray), set(gca,'Xtick',[],'Ytick',[])

title('FFT of Lena noisy image')

subplot(2,2,3),imshow(H,[])

title('gaussian filter mask')

subplot(2,2,4),imshow(filterimg,[])

title('Filtered image')

figure,

subplot(2,1,1),imshow(H,[])

title('gaussian filter mask of sigma 27')

subplot(2,1,2),imshow(filterimg,[])

title('Filtered image with gaussian')

 

%%%%%%%%%%%%%%%%%%%%%code end%%%%%%%%%%%%%%%%

 


Happy Learning

Cheers

Basic Filtering

START

To start working in CV or IP we need an input image to experiment on. Go to Google and type "lena"; you will get the standard Lena image like the one added below.

lena
Lena image

 

Some words about Miss Lena:

  • Lena is the most famous test image in compression, IP, and CV.
  • The digitized Lena picture appeared in Playboy in 1972.
  • Lena Soderberg (née Sjööblom) was last reported living in her native Sweden, happily married with three kids and a job. If you want to know more about Lena, go through the following article: Lena.

 

Steps we are going to perform:

  1. Read the Lena image as input.
  2. Generate random noise.
  3. Add the noise to the input image.
  4. Apply an average filter and a Gaussian filter to clean the noisy image.

 

filter_mask_post1

The figure shows the masks used to filter the image.

  • Random noise is random in nature, so if we average it, the noise values tend to cancel each other out.
  • In the average filter above we assign equal weight, 1/25, to all 25 pixels.
  • In the Gaussian filter, instead of equal weights, the weights are distributed in a Gaussian manner, so the center pixel is assigned the maximum weight.
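The two masks can be built in a few lines of NumPy (illustrative; the post's MATLAB code below uses fspecial for this). A 5x5 size and sigma = 1 are assumed here:

```python
import numpy as np

# 5x5 average mask: every weight equal to 1/25
avg = np.full((5, 5), 1 / 25)

# 5x5 Gaussian mask (sigma = 1): built from a 1-D Gaussian via an outer product
x = np.arange(5) - 2
g1 = np.exp(-x**2 / 2)
gauss = np.outer(g1, g1)
gauss /= gauss.sum()  # normalize so the weights sum to 1

# Both masks sum to 1; the Gaussian peaks at the center pixel
assert np.isclose(avg.sum(), 1) and np.isclose(gauss.sum(), 1)
assert gauss[2, 2] == gauss.max()
```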

Post1_result

Result of the average filter and Gaussian filter on the noisy image.

MATLAB聽code:

%%%%%%%%%%%%%%%%%% Code Start %%%%%%%%%%%%%%%%%%%%%%%%%%

clc        % clear the screen

clear all  % clear all variables

close all  % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

[row,col,channel] = size(img_gray); % get size of image

% select noise factor

% noise can be controlled by noise factor.

noise_factor = 0.7;

% generate noise

noise = noise_factor*rand(row,col);

% Generate noisy image by adding noise to input image.

noisy_img = 1/2*(img_gray + noise);

%generate average filter with help of fspecial command.

h_ave = fspecial('average',11);

%apply average filter to noisy image

avg_filter = imfilter(noisy_img,h_ave,'replicate');

%generate gaussian filter of size 11x11 with sigma 1

h_gauss = fspecial('gaussian',11,1);

% apply gaussian filter to noisy image

gauss_filter = imfilter(noisy_img, h_gauss,'replicate');

%plot all images for results

figure,

subplot(2,1,1),

surf(h_ave);

title('Average filter mask')

subplot(2,1,2),

surf(h_gauss);

title('Gaussian filter mask');

figure,

subplot(2,2,1)

imshow(img_gray,[])

title('Input image');

subplot(2,2,2)

imshow(noisy_img,[]);

title('Noisy image');

subplot(2,2,3)

imshow(avg_filter,[])

title('Average image');

subplot(2,2,4)

imshow(gauss_filter,[])

title('Gaussian image');

%%%%%%%%%%%%%%%%%%%%%%%%Code End %%%%%%%%%%%%%%%%%%%%%%

Note*

  • fspecial is used to generate a filter mask. Type help fspecial in the MATLAB command window for more detail.
  • The imfilter command is used to convolve a 2-D filter mask with an image.
  • The next post will discuss filtering in the frequency domain.


Happy Learning

Cheers