How to install Docker and run a Jupyter notebook with deep learning libraries (Ubuntu 16.04)

This post is a step-by-step guide to installing Docker on Ubuntu 16.04 and using it by running a Jupyter notebook with deep learning libraries.

Step 1: Update the system and install the Docker dependencies.

sudo apt-get update

sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common

Step 2: Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Step 3: Verify the key fingerprint

sudo apt-key fingerprint 0EBFCD88

Step 4: Add the stable Docker repository

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

Step 5: Update the package index

sudo apt-get update

Step 6: Install the latest version of Docker CE and containerd

sudo apt-get install docker-ce docker-ce-cli containerd.io

Step 7: Verify that Docker is installed by running the hello-world container

sudo docker run hello-world

If you see the hello-world message, the Docker installation is complete.

Let's use Docker by pulling an image. I am using an all-in-one deep learning Docker image from Docker Hub to show how to use Docker. The image has TensorFlow, Keras, Caffe, OpenCV, and other deep learning libraries. Below is the Docker Hub link for the all-in-one deep learning image, where you can check the details of the image.

https://hub.docker.com/r/floydhub/dl-docker/

Step 8: Pull the Docker image

sudo docker pull floydhub/dl-docker:cpu

Step 9: Run the Docker image

docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash

In the command above, -p forwards ports from the container to the host system and -v shares a folder from the host into the container; you need to edit the path for your system. I am sharing the cv_basics folder into the container under the name my_folder. You can give it any name while sharing.

In my case the command is:

docker run -it -p 8888:8888 -p 6006:6006 -v /home/lord/cv_basics/:/root/my_folder floydhub/dl-docker:cpu bash

Now we are inside the Docker container. Give the startup script execute permission and run the Jupyter notebook.

sudo chmod +x run_jupyter.sh

sudo ./run_jupyter.sh

Open a browser on the host system and go to http://localhost:8888

Check the pulled Docker images and the Docker containers on the system.

sudo docker images

sudo docker ps -a

We can see that I have two Docker images and two exited containers on my system.

Happy learning



How to install the NVIDIA driver and CUDA 10.0 on Ubuntu 16.04

This post is about installing the latest NVIDIA driver and CUDA on Ubuntu 16.04 LTS (the commands below use the CUDA 10.0 run file; the steps are the same for 10.1). Recently, after updating Ubuntu, I could not log in to my system because of a problem between Ubuntu and the installed NVIDIA driver. If you face the same problem, follow the steps below first:

  • On the Ubuntu login screen, press Ctrl + Alt + F1 to get console access.
  • Remove the NVIDIA driver with sudo apt purge nvidia* (if installed from a deb package) or sudo nvidia-uninstall (for a local run package).
  • Restart the system (sudo shutdown -r now).
  • Now you can log in to the Ubuntu system.

Follow the steps below to install the latest NVIDIA driver and CUDA.

CUDA depends on the NVIDIA driver, so we will install the NVIDIA driver first.

Download the latest CUDA local run file from the link below; it should be around 2.4 GB.
https://developer.nvidia.com/cuda-downloads?target_os=Linux

Make the downloaded run file executable with the chmod command:

sudo chmod +x cuda_10.0.130_410.48_linux.run

Create a new folder named nvidia_drivers and extract the CUDA run file into it.

mkdir nvidia_drivers

./<path to cuda_10.0.130_410.48_linux.run> --extract=<path to the nvidia_drivers folder created above>

For the CUDA 10.1 run file you will get NVIDIA-Linux-x86_64-418.39.run along with other items such as the cuda-toolkit and cuda-samples installers; for the CUDA 10.0 run file you will get three files, as shown below. In both cases we only want the NVIDIA-Linux file, which installs the NVIDIA driver.

Install the NVIDIA driver with the command below:

sudo ./NVIDIA-Linux-x86_64-410.48.run --no-x-check

Restart the system and check the NVIDIA driver installation:

nvidia-smi

We have installed NVIDIA driver 410 on the system; now we will install the CUDA toolkit.

Install the CUDA toolkit with the command below; press d to scroll through and accept the EULA license. After a successful installation, restart the system.

sudo ./cuda-linux.10.0.130-24817639.run

You might get a message that CUDA installed with errors; that is OK. Just restart your system and add the CUDA paths to your .bashrc file. Below are the steps.

Add the CUDA bin and library paths to the .bashrc file.
Open the .bashrc file (gedit ~/.bashrc) and add the following two lines at the end; make sure to use your own CUDA version (the lines below use 10.1, while the screenshot used CUDA 10.0).

export PATH=/usr/local/cuda-10.1/bin${PATH:+:${PATH}}

export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Run source ~/.bashrc or close and open a new terminal.

Check the cuda installation

nvcc --version

That's it, we are done.

All commands together

  1. sudo apt-get purge nvidia-*        (remove old NVIDIA drivers)
  2. Restart the system
  3. Download the CUDA local run file: https://developer.nvidia.com/cuda-downloads
  4. sudo chmod +x cuda_10.0.130_410.48_linux.run   (make the file executable)
  5. mkdir nvidia_drivers   (create a new folder)
  6. ./<path to cuda_10.0.130_410.48_linux.run> --extract=<path to the nvidia_drivers folder created above>
  7. sudo ./NVIDIA-Linux-x86_64-410.48.run --no-x-check
  8. Restart the system
  9. nvidia-smi
  10. sudo ./cuda-linux.10.0.130-24817639.run
  11. export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
  12. export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
  13. source ~/.bashrc or close and open a new terminal
  14. nvcc --version


Happy learning cheers











How to install Anaconda/Python 3 with OpenCV 3.2


This tutorial discusses the steps to install Anaconda (5.2.0 in this post) and configure OpenCV 3.2 in Anaconda for Python 3.6.

After completing these steps you will be able to use Anaconda/Python 3 and OpenCV 3.2.

Introduction

At the office I was assigned a simple piece of image processing work that required extra OpenCV modules in Python. It was a simple task, but I struggled a lot to configure Python with the OpenCV modules 🙁. After spending my weekends on it, I completed the task on time and realized how easy it actually was 🙂.

For Ubuntu 16.04 LTS (the steps are almost the same for Windows 7).

1]. Download Anaconda for your system configuration from the link below.

Download Anaconda

Run the command below on a 64-bit Linux system to download Anaconda:

     curl -O https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh


2]. In Ubuntu, run the command below in the directory where you downloaded the Anaconda installer file:

bash Anaconda3-5.2.0-Linux-x86_64.sh

Follow the Anaconda installer, press d to scroll through, and accept the license agreement.


3]. Open the .bashrc file, add the line below at the end of the file, and save it.

export PATH="<path to your anaconda directory>"/bin:$PATH

For me it is:    export PATH=/home/lord/anaconda3/bin:$PATH


4]. Run source ~/.bashrc or close and reopen the terminal.


Update conda and Anaconda (for example, conda update conda followed by conda update anaconda).

5]. Run the command below to install OpenCV 3.2 in Anaconda:

              conda install -c menpo opencv3


6]. Open the Spyder app from the terminal and import cv2 to verify. (Spyder is a good default IDE for Python in a conda environment.)

Anaconda with OpenCV is installed 😀. Now data analysis and machine learning can be performed in Python.

Open the Spyder app and try import cv2. If there is no error, we are done with OpenCV 😀.

Below are all the commands together:

  1. curl -O https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh
  2. bash Anaconda3-5.2.0-Linux-x86_64.sh
  3. Add export PATH=/home/lord/anaconda3/bin:$PATH (your own Anaconda path) to the end of ~/.bashrc
  4. source ~/.bashrc
  5. conda update conda and conda update anaconda
  6. conda install -c menpo opencv3

That's it 🙂 😀 Open Spyder and have fun . . . . .


Happy Learning Cheers

About Eigenvalues and Eigenvectors


If we calculate the eigenvalues and eigenvectors of data, the eigenvectors represent the basis directions along which the data is spread, and the eigenvalues tell us which basis directions (eigenvectors) carry more information about the data.

Basics of eigenvalues and eigenvectors:

  • Suppose A is a matrix of size N x N.
  • V is a vector of size N.
  • V is called an eigenvector of A if multiplying V by A only scales the vector V:

A V = λ V        ....... (1)

In equation (1), λ is called an eigenvalue of the matrix.

The function eig is used in MATLAB to get the eigenvectors and eigenvalues.

Suppose

[V, D] = eig(A)

where D is a diagonal matrix containing the eigenvalues and the columns of V are the corresponding eigenvectors; then

A V = V D        ....... (2)

From equation (1), the eigenvalues are the roots of the characteristic polynomial

det(λ I − A) = 0

where λ is an eigenvalue, I is the identity matrix, and A is the given input matrix.

  • The determinant of matrix A (|A|) is the product of all the eigenvalues.
  • The set of eigenvalues is called the spectrum of the matrix (a quick MATLAB check of these facts follows below).
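As a quick sanity check, here is a minimal MATLAB sketch (the small symmetric matrix is an arbitrary example of my own) verifying equation (2) and the determinant property:

A = [2 1; 1 3];                      % small symmetric example matrix
[V, D] = eig(A);                     % columns of V are eigenvectors, diag(D) are eigenvalues
check_eig = norm(A*V - V*D)          % should be ~0, i.e. A V = V D holds
check_det = det(A) - prod(diag(D))   % should be ~0, i.e. |A| equals the product of the eigenvalues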

 

Symmetric matrix:

If the transpose of a matrix does not change the matrix, it is called a symmetric matrix:

S = S^T

where S^T is the transpose of the matrix S.

One example of a symmetric matrix is the covariance matrix.

A property of a symmetric matrix is that all of its eigenvalues are real.

Eigenvalue Decomposition:

If V1 and V2 are eigenvectors corresponding to eigenvalues λ1 and λ2, and λ1 is not equal to λ2, then V1 and V2 are orthogonal (for a symmetric matrix S).

Let V = (V1, V2, V3, ...) be the matrix whose columns are the eigenvectors; it is an orthogonal matrix,

i.e. V V^T = I (the identity matrix), so the columns are orthonormal.

Let Λ = diag(λ1, λ2, λ3, ...) be the diagonal matrix of eigenvalues.

Then we can write

S = V Λ V^T        ....... (3)

where V^T is the transpose of V.

Equation (3) is called the eigenvalue decomposition.
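A minimal MATLAB sketch (with an arbitrary small covariance matrix built from random data) checking the orthogonality of the eigenvectors and equation (3):

S = cov(randn(100, 3));              % small symmetric (covariance) example matrix
[V, L] = eig(S);                     % eigenvectors and diagonal matrix of eigenvalues
check_orth = norm(V'*V - eye(3))     % should be ~0: the columns of V are orthonormal
check_evd  = norm(V*L*V' - S)        % should be ~0: S = V * Lambda * V'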

Let's write the code for the eigenvalue decomposition in MATLAB.

For the eigenvalue decomposition to make sense, the input matrix should be symmetric so that the eigenvalues are real. We will calculate a covariance matrix, which is symmetric.


%%%%%%%%%%%%%%%%%%%%%%%% Code Start %%%%%%%%%%%%%%%%

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

%% calculate the co- variance matrix which is symmetry matrix

Cov_I = cov(img_gray);

% Show Cov_I matrix

figure, imshow(Cov_I,[])

%% will decompose symmetry matrix and reconstruct it.

%% Eigen value decomposition ………………….

[E,D] = eig(Cov_I);        %%% E is the eigenvectors and D is the diagonal matrix of eigenvalues

%%%%%%% Reconstruction %%%%%%%%%%%%%%%%

% Reconstruction loss is the loss of information due to removing eigenvectors and eigenvalues.

% For this reconstruction I use only the top 5 eigenvalues and the corresponding eigenvectors
% (the last columns of E and D correspond to the largest eigenvalues here)

ReCov_I5 = E(:,295:end)*D(295:end,295:end)*E(:,295:end)';

figure, imshow(ReCov_I5,[])

title('reconstructed symmetric matrix with 5 eigen vectors')

reconstruction_loss5 = 100 - (sum(sum(D(295:end,295:end)))/sum(D(:)))*100;

% For this reconstruction I use only the top 1 eigenvalue and the corresponding eigenvector

ReCov_I1 = E(:,300:end)*D(300:end,300:end)*E(:,300:end)';

figure, imshow(ReCov_I1,[])

title('reconstructed symmetric matrix with 1 eigen vector')

reconstruction_loss1 = 100 - (sum(sum(D(300:end,300:end)))/sum(D(:)))*100

%%%%%%%%%%%%%%%%%%%%%%%%% Code end %%%%%%%%%%%%%%%%%%%


Results:

Input image:

Input Lena image

Symmetric covariance image generated from the Lena image:

Covariance / symmetry image

Covariance image reconstruction using the top 5 eigenvalues and corresponding eigenvectors:

Reconstruction with the top 5 eigenvectors

Loss of information from using only the top 5 eigenvectors = 27.84 percent

The top 5 eigenvectors contain 72.16 percent of the input image's information.

Covariance image reconstruction using 1 eigenvector:

Reconstruction with the top 1 eigenvector

Loss of information = 72.40 percent

The top eigenvector contains 27.8 percent of the input image's information.

Applications:

Eigenvalue decomposition is the basis of singular value decomposition (SVD) and principal component analysis (PCA). With the help of PCA we can reconstruct the original Lena image, which we will discuss in a further post.


Happy Learning

Cheers

 

 

 

 

 

GLCM for texture analysis


  • GLCM (gray level co-occurrence matrix) is mainly used to perform texture analysis and extract features from an image.
  • As the name suggests, it works on a gray image and builds a sort of 2-D histogram of co-occurring gray levels.
  • The main applications of GLCM are texture analysis, feature extraction, and segmentation.

Steps to calculate the GLCM matrix:

  • Let's assume a gray image I.
  • Initialize a GLCM matrix of size 256 x 256 (256 is the number of gray levels of the GLCM).
  • Suppose we use a GLCM angle of zero degrees, meaning the direction of the GLCM is horizontal.
  • Suppose the GLCM distance is 1, meaning we just look at the pixel horizontally next to the current pixel.
  • At image position I(i,j) get the gray value (suppose the pixel value is a = 127 at I(i,j)), and get the gray value at I(i,j+1) in the case of distance 1 and zero degrees (suppose the pixel value is b = 58 at I(i,j+1)).
  • Go to GLCM matrix coordinate (a = 127, b = 58) and increment that value by 1.
  • Iterating over the full image gives us the GLCM matrix of zero degrees for distance 1 (a small numeric example follows right after this list).
  • The GLCM distance can be chosen according to the texture type.
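To make the counting rule concrete, here is a tiny sketch (a toy 3 x 3 image of my own) using MATLAB's built-in graycomatrix with a zero-degree, distance-1 offset; the hand-written getGLCM0 function later in this post applies the same rule for 256 gray levels, and the built-in function is also used at the end of the post to verify the results.

I_small = uint8([1 1 2; 1 2 3; 3 2 1]);   % toy 3 x 3 image with gray levels in 0..3
glcm_small = graycomatrix(I_small, 'NumLevels', 4, 'GrayLimits', [0 3], 'Offset', [0 1])
% glcm_small(a+1, b+1) counts how often gray value a has gray value b as its right-hand neighbour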

Figure 1. Texture images of road and brick

  • For the road, because the texture changes so rapidly, we use a small distance to calculate the GLCM, but for the bricks the GLCM distance should be larger.
  • The GLCM angle should be selected according to the direction in which the image texture changes, so for the brick image we would want to consider two directions, 0 degrees and 90 degrees.
  • Calculate features from the GLCM matrix; many features are available for texture analysis, such as contrast, correlation, energy, and homogeneity.
  • We will use the contrast feature of the GLCM matrix, which is sufficient to say whether a texture is rough or smooth (the contrast formula is given just below).
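The contrast used here is the standard GLCM contrast: with the GLCM normalized so that its entries p(i,j) sum to 1,

Contrast = Σ_i Σ_j (i − j)^2 · p(i,j)

Pairs of very different gray levels contribute large (i − j)^2 terms, so a rough texture gives a high contrast and a smooth texture gives a low contrast; this is exactly what the GLCM_contrast function below computes.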

Let's write basic GLCM code that calculates the zero-degree GLCM for 256 levels and computes the contrast feature.

%%%%%%%%%%%%%%%%%%%%%%%%%Code Start here %%%%%%%%%%%%%%%

%%%%%  GLCM function start here %%%%%%%%%%%%%

function GLCM_0 = getGLCM0(img_gray, distance)
%% function calculate the GLCM matrix at zero degree angle and given distance
%% GLCM_0 is GLCM matrix for 0 degree angle.
%% img_gray is input gray image
%% distance is distance of GLCM calculated

%% initialize GLCM matrix

GLCM_0 = zeros(256,256);

[row,col] = size(img_gray);

for i=1:row
for j=1:col-distance

pixel1 = img_gray(i,j)+1;
pixel2 = img_gray(i,j+distance)+1;
GLCM_0(pixel1,pixel2) = GLCM_0(pixel1,pixel2)+1;

end
end

end

%%% GLCM function end here %%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%% Contrast function start here %%%%%%%%%%%%%%%%%

function Contrast = GLCM_contrast(GLCM)

%% function calculate the GLCM contrast for GLCM matrix
%% GLCM is Gray level co-occurance matrix as input.
%% Contrast is the contrast feature of the GLCM matrix

%% normalize the GLCM

[row,col] = size(GLCM);

N_factor = sum(sum(GLCM));

GLCM = GLCM./N_factor;

Contrast = 0;

for i = 1:row
for j = 1:col

temp = ((i-j)^2)* GLCM(i,j);
Contrast = Contrast + temp;
end
end

end

%%%% Contrast function end here 聽%%%%

 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%main GLCM code %%%%%%%%%

%% this main code calls the GLCM and contrast functions for the given image

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\frontend-large Brick.jpg';

img_RGB = imread(ImagePath); % Read image from path

img_gray = rgb2gray(img_RGB); % Convert image to gray

img_gray = imresize(img_gray,[900,1200]);   % resizing (normalizing the size)

GLCM_0 = getGLCM0(img_gray, 5);     %% get the GLCM matrix for distance 5

Contrast = GLCM_contrast(GLCM_0);   %% calculate the contrast feature

disp('Contrast of our function is =')
Contrast
%% use the MATLAB built-in functions to verify the result

glcm = graycomatrix(img_gray,'NumLevels',256,'Offset',[0 5]);

stats = graycoprops(glcm,'Contrast');

disp('Contrast of MATLAB is = ')
stats.Contrast

%%%%%%%%%%%%%%%%%%%%%%%% Code End here %%%%%%%%%%%%%%%%%

MATLAB provides built-in functions with which we can calculate the GLCM and the contrast and verify our results:

glcm = graycomatrix(img_gray,'NumLevels',256,'Offset',[0 5]);

stats = graycoprops(glcm,'Contrast');

Let's take two input images, the first a road texture and the second a brick image, apply the GLCM, and get the contrast feature.

Figure 2. Road texture image

Figure 3. Brick texture image

Results:

Contrast feature of the Figure 2 road texture: 3.9489e+03

Contrast feature of the Figure 3 brick texture: 91.0206

From the results we can say that the road image has a lot of texture, which is why its contrast value is so high; if less texture is present in an image, its contrast value will be low.

So for a smooth area in an image the contrast will be low, and for a rough area the contrast will be high.

* Note: GLCM values depend on the size of the image on which the GLCM is computed, so the image size should be fixed or normalized before comparing.

For more information about GLCM feature analysis, please go through the paper on Haralick texture features.

Happy Learning

Cheers

 

 

 

2-D Projective Transforms


Let's take the Lena image to perform the transformations.

Input image

  • In this post I am going to discuss image projective transforms.
  • These transforms are easy to understand and useful in many applications like object detection, 3-D view geometry, and ADAS.
  • Suppose I(x,y) is the input image and O(u,v) is the output (transformed) image.
  • Then we can write a transform matrix T such that:

O(u,v) = T [I(x,y)]

  • Since the image points are in a homogeneous coordinate system, we can append a 1:

O(u,v,1) = T [I(x,y,1)]

  • Where T is a 3 x 3 matrix; depending on the matrix T, we can transform the image in many ways.

Let's write code that takes an input image, multiplies it by the transform matrix, and produces the transformed image.

%%%%%%%%%%%%%%%%%%%%%%%% Code Start%%%%%%%%%%%%%%%%

function transform_img = transform_2d(T, input_img)

% The function performs a 2-D projective transform

% input_img : gray input image

% T is a 3 x 3 transform matrix

% transform_img : output gray transformed image

[row,col] = size(input_img);

transform_img = zeros(2*row,2*col);

for i=-row:2*row
for j=-col:2*col

U = inv(T)*[i j 1]';     % map the output pixel back to the input image; ' is transpose

if(U(3) ~= 0)            % normalize the homogeneous coordinate (needed for the 8-dof projective transform)

U(1) = U(1)/U(3); %% taking care of the homogeneous coordinate
U(2) = U(2)/U(3);
end

if((U(1)>0)&&(U(1)<row-1)&&(U(2)>0)&&(U(2)<col-1)) %% consider only points which lie inside the input image

transform_img(round(i+row+1),round(j+col+1)) = input_img(round(U(1)+1),round(U(2)+1));

end

end
end

end

%%%%%%%%%%%% Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

The following are the main 2-D projective transforms:

  1. Euclidean transform
  2. Similarity transform
  3. Affine transform
  4. Projective transform

Euclidean Transform:

Let's define the transformation matrix for the Euclidean transform:

T = [ R11  R12  Tx ; R21  R22  Ty ; 0 0 1 ]

where R11, R12, R21, and R22 form the rotation matrix and Tx and Ty are the translation. So it has 3 degrees of freedom: 1 in rotation and 2 in translation.
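For a rotation angle θ, and matching the sign convention used in the code at the end of this post (sc = cos θ, ss = sin θ), the Euclidean matrix can be written out as

T = [ cos θ   sin θ   Tx ;  −sin θ   cos θ   Ty ;  0   0   1 ]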

Let's take a 30-degree rotation angle and a -100 pixel translation in both directions.

Result of the Euclidean transform

The Euclidean transform has no ability to scale the image.

Similarity Transform:

Let's define the transformation matrix for the similarity transform:

T = [ s*R11  s*R12  Tx ; s*R21  s*R22  Ty ; 0 0 1 ]

It has 4 degrees of freedom.

If we add a scale factor (s) to the Euclidean transform, it becomes the similarity transform.

Let's take a 40-degree rotation angle, a -100 pixel translation in both directions, and a 1.5 scaling factor.

Result of the similarity transform

Affine Transform:

Let's define the transformation matrix for the affine transform:

T = [ a11  a12  Tx ; a21  a22  Ty ; 0 0 1 ]

It has 6 degrees of freedom. If, starting from the similarity transform, we allow a different angle and a different scale along each axis, it becomes the affine transform.

Let's take a 40-degree rotation angle in the x direction and a 30-degree rotation angle in the y direction, a -100 pixel translation in both directions, a 0.7 scaling factor in the x direction, and a 1.5 scaling factor in the y direction.

Result of the affine transform

Projective Transform:

Define the full transformation matrix, which includes the homogeneous factors as well:

T = [ h11  h12  h13 ; h21  h22  h23 ; h31  h32  h33 ]

The projective transform has 8 degrees of freedom: we added the 2 coefficients h31 and h32 to the affine transform matrix, which take care of the homogeneous coordinate geometry, and the matrix is only defined up to scale, so h33 can be fixed to 1. (Maybe in a future calibration post I will discuss this in detail.)

Result of the projective transform

%%%%%%%%%%%%%%%%%% 聽Code Start 聽%%%%%%%%%%%%%%%%%%%%%%%

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

[row, col] = size(img_gray);

figure,imshow(img_gray)
% For a nonreflective similarity transform:
% [u, v, 1]' = T * [x, y, 1]'

%% euclidean transformation.

scale = 1; % scale factor
angle = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -100; % y translation

sc = scale*cos(angle);
ss = scale*sin(angle);

T = [ sc ss tx;
-ss sc ty;
0 0 1];
% call the function transform_2d
transform_img = transform_2d(T, img_gray);
figure, imshow(transform_img)
title('Euclidean transform')

 

%% Similarity transform
scale = 1.5; % scale factor
angle = 40*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation

sc = scale*cos(angle);
ss = scale*sin(angle);

T = [ sc ss tx;
-ss sc ty;
0 0 1];

transform_img = transform_2d(T, img_gray);
figure, imshow(transform_img)
title('Similarity transform')
%% Affine transform
scale1 = .7; % scale factor
scale2 = 1.7;
angle1 = 40*pi/180; % rotation angle
angle2 = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation
sc1 = scale1*cos(angle1);
ss1 = scale1*sin(angle2);
sc2 = scale2*cos(angle1);
ss2 = scale2*sin(angle2);

T = [ sc1 ss1 tx;
-ss2 sc2 ty;
0 0 1];
transform_img = transform_2d(T, img_gray);

figure, imshow(transform_img)
title('Affine transform')

%% projective transform
scale1 = .7; % scale factor
scale2 = 1.7;
angle1 = 40*pi/180; % rotation angle
angle2 = 30*pi/180; % rotation angle
tx = -100; % x translation
ty = -50; % y translation

sc1 = scale1*cos(angle1);
ss1 = scale1*sin(angle2);
sc2 = scale2*cos(angle1);
ss2 = scale2*sin(angle2);

T = [ sc1 ss1 tx;
-ss2 sc2 ty;
.01 .02 1];

transform_img = transform_2d(T, img_gray);
figure, imshow(transform_img)
title('Projective transform')

%%%%%%%%%%%%%%%%%%%%%  Code End %%%%%%%%%%%%%%%%%%%%%%%

 

the_end1

 

happy_learning

Happy Learning

Cheers

 

 

Wavelet Transform


In this post, I will discuss the Haar wavelet analysis (wavelet transform) in image processing.

Applications of the Haar wavelet transform in image processing include feature extraction, texture analysis, image compression, etc.

 

 

 

 

  • A wavelet is a time-frequency resolution analysis.
  • As shown in Figure 1 above, it contains information about the signal at different times and frequencies.
  • It is like applying a comb of filter bands to the signal; these are called levels. First we apply the full frequency band and have no knowledge of time.
  • Applying a filter with half the bandwidth of the signal loses some frequency information but gains some time resolution; this is called the first-level decomposition.
  • Further reducing the filter bandwidth gives the second-level decomposition.

The Haar wavelet is made of two filters: a low-pass filter and a high-pass filter.

Figure 2. Low-pass and high-pass filters

Coefficients of the low-pass filter = [0.707, 0.707]    (each coefficient is 1/√2)

Coefficients of the high-pass filter = [-0.707, 0.707]
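A minimal MATLAB sketch (with a toy 1-D signal of my own) showing what these two filters do; the filter calls are the same ones used in the haar_wave function later in this post:

x = [4 4 8 8 2 2];                   % toy 1-D signal
low  = filter([0.707 0.707], 1, x)   % 0.707*(x(n) + x(n-1)) : scaled pairwise average (approximation)
high = filter([-0.707 0.707], 1, x)  % 0.707*(x(n-1) - x(n)) : scaled pairwise difference (detail)
% Keeping every second sample of these outputs (downsampling by 2) gives the
% approximation and detail coefficients of the first-level Haar decomposition.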

 

Figure 3. Block diagram of the first-level decomposition

Figure 4. LL band output of the Haar wavelet

  • Figure 3 shows the block diagram of the first-level decomposition.
  • I is an image of size (m, n). We take the image as a column vector (1-D) and apply the low-pass filter and the high-pass filter.
  • Downsample the outputs by 2 and reshape into an image I1 of size (m, n/2) (the size is n/2 because we downsampled the columns by 2).
  • Apply the low- and high-pass filters row-wise on image I1, again downsample the outputs, and reshape to size (m/2, n/2).

Figure 5. First-level decomposition of the Haar wavelet

  • Figure 5 shows the result of the Haar wavelet decomposition at the first level.
  • The LL output proceeds to the next level of decomposition, as it contains most of the information of the input image and looks almost the same at the downsampled resolution.
  • HH, LH, and HL carry texture information in the vertical, horizontal, and diagonal directions; HH, LH, and HL are called Haar features.
  • We can further decompose the LL image into LL1, LH1, HL1, and HH1 images.

Figure 6. LL band output in a 4-level decomposition

Figure 7. Second-level decomposition of the Haar wavelet

MATLAB provides the Wavelet Toolbox GUI to perform wavelet analysis.

Type wavemenu in the MATLAB command window.

Figure 8. Wavelet Toolbox GUI in MATLAB

%%%%%%Code Start %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [LL, LH, HL, HH] = haar_wave(image_in)

%% The function haar_wave calculates the Haar wavelet decomposition of the given input image
%% image_in :- input gray-level image
%% LL :- Low-Low band output image
%% LH :- Low-High band output image
%% HL :- High-Low band output image
%% HH :- High-High band output image

image_in = im2double(image_in);
filter_L = [.707, .707];
filter_H = [-.707, .707];
[row, col] = size(image_in);
image_vector = image_in(:);

%% apply the low-pass and high-pass filters to the 1-D (column-vector) image
image_low_pass_1D = filter(filter_L,1,image_vector);
image_high_pass_1D = filter(filter_H,1,image_vector);

% Down sample the image low and high pass image.
temp = 1;
Dimage_low_pass_1D = 0;
Dimage_high_pass_1D = 0;

for i = 1:2:length(image_low_pass_1D)
Dimage_low_pass_1D(temp) = image_low_pass_1D(i);
Dimage_high_pass_1D(temp) = image_high_pass_1D(i);
temp = temp+1;
end

%% reshape the image to 2-D; the size will be row/2, col
Dimage_low_pass = reshape(Dimage_low_pass_1D,round(row/2), col)';
Dimage_high_pass = reshape(Dimage_high_pass_1D, round(row/2),col)';

%% convert the images back to 1-D column vectors; note that Dimage_low_pass was transposed above

Dimage_low_pass_1D = Dimage_low_pass(:);
Dimage_high_pass_1D = Dimage_high_pass(:);

LL_1D = filter(filter_L,1,Dimage_low_pass_1D);
LH_1D = filter(filter_H,1,Dimage_low_pass_1D);

HL_1D = filter(filter_L,1,Dimage_high_pass_1D);
HH_1D = filter(filter_H,1,Dimage_high_pass_1D);

%Down sample the all 1 _d vector images
temp = 1;
DLL_1D = 0;
DLH_1D = 0;
DHL_1D = 0;
DHH_1D = 0;

for i = 1:2:length(LL_1D)
DLL_1D(temp) = LL_1D(i);
DLH_1D(temp) = LH_1D(i);
DHL_1D(temp) = HL_1D(i);
DHH_1D(temp) = HH_1D(i);
temp = temp+1;
end

%% reshape the images to size row/2 and col/2

LL = reshape(DLL_1D, round(col/2), round(row/2))';
LH = reshape(DLH_1D, round(col/2), round(row/2))';
HL = reshape(DHL_1D, round(col/2), round(row/2))';
HH = reshape(DHH_1D, round(col/2), round(row/2))';

end

 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% call the haar wavelet function and check the result

clc % clear the screen

clear all % clear all variables

close all % close all MATLAB windows

%% read Image

% set path to read image from system

ImagePath = 'D:\DSUsers\uidp6927\image_processingCode\lena.jpg';

img_RGB = imread(ImagePath); % read image from path

img_RGB = im2double(img_RGB); % Convert image to double precision

img_gray = rgb2gray(img_RGB); % convert image to gray

% first level decomposition
[LL, LH, HL, HH] = haar_wave(img_gray);

% second level decomposition
[LL1, LH1, HL1, HH1] = haar_wave(LL);

%% Draw all image

figure, imshow(img_gray);

figure,
subplot(2,2,1), imshow(LL,[])
title('LL haar wave image')
subplot(2,2,2), imshow(LH,[])
title('LH haar wave image')
subplot(2,2,3), imshow(HL,[])
title('HL haar wave image')
subplot(2,2,4), imshow(HH,[])
title('HH haar wave image')

figure,
subplot(2,2,1), imshow(LL1,[])
title('LL1 haar wave image')
subplot(2,2,2), imshow(LH1,[])
title('LH1 haar wave image')
subplot(2,2,3), imshow(HL1,[])
title('HL1 haar wave image')
subplot(2,2,4), imshow(HH1,[])
title('HH1 haar wave image')

%%%%%%%%%%%%%%%%%Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%


Happy Learning

Cheers