This post is about installing the latest Nvidia driver and CUDA 10 on Ubuntu 16.04 LTS. CUDA depends on the Nvidia driver, so we will install the Nvidia driver first.

Download the Nvidia driver run file from the link below (it should be around 1.2 GB).

This tutorial discusses the steps to install Anaconda 5.3 and configure OpenCV 3.2 in Anaconda 5.3 for Python 3.6.

After completing these steps you should be able to use Anaconda/Python 3 with OpenCV 3.2.

Introduction

At the office I was assigned a simple image-processing task that required extra OpenCV modules in Python. It was a simple task, but I struggled a lot to configure Python with the OpenCV modules. After losing my weekends I completed the task on time and realized how easy it actually was.

The steps below are for Ubuntu 16.04 LTS (they are almost the same for Windows 7).

1]. Download the Anaconda installer that matches your system configuration from the link below.

If we calculate the eigenvalues and eigenvectors of the data, the eigenvectors represent the basis directions along which the data is spread, and the eigenvalues tell us which of those directions (eigenvectors) carry more information about the data.

Basics of Eigenvalues and Eigenvectors:

Suppose A is a matrix of size N x N.

V is a vector of size N.

V is called an eigenvector of A if multiplying by A only scales the vector V:

A V = λ V ……. (1)

In equation (1), λ is called an eigenvalue of the matrix A.

The function eig is used in MATLAB to get the eigenvectors and eigenvalues.

Suppose

[V, D] = eig(A)

where D is a diagonal matrix containing the eigenvalues and the columns of V are the corresponding eigenvectors; then

A V = V D ……. (2)

From equation (1), the eigenvalues are the roots of the characteristic polynomial:

det(λ I – A) = 0

where λ is an eigenvalue, I is the identity matrix, and A is the given input matrix.

The determinant of matrix A (|A|) is the product of all its eigenvalues.
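To make equations (1) and (2) and the determinant property concrete, here is a small pure-Python check on a 2 x 2 matrix (the matrix entries, variable names, and the eigenvector are my own illustrative example, not values from this post):

```python
import math

# Hypothetical symmetric example: A = [[2, 1], [1, 2]]
# For a 2x2 matrix the characteristic polynomial det(lambda*I - A) = 0
# reduces to: lambda^2 - trace*lambda + det = 0.
a, b, c, d = 2.0, 1.0, 1.0, 2.0
trace = a + d
det = a * d - b * c
disc = math.sqrt(trace ** 2 - 4 * det)
lam1 = (trace + disc) / 2      # eigenvalue 3.0
lam2 = (trace - disc) / 2      # eigenvalue 1.0

# An eigenvector for lam1, found by solving (A - lam1*I) v = 0: v1 = (1, 1)
v1 = (1.0, 1.0)
Av1 = (a * v1[0] + b * v1[1], c * v1[0] + d * v1[1])
# Equation (1): A v1 equals lam1 * v1, and |A| equals lam1 * lam2.
```

Multiplying by A scales v1 by exactly lam1, and the determinant (3.0) matches the product of the two eigenvalues.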

The set of eigenvalues of a matrix is called its spectrum.

Symmetric matrix:

If taking the transpose of a matrix does not change the matrix, it is called a symmetric matrix:

S = S^T

where S^T is the transpose of the matrix S.

One example of a symmetric matrix is the covariance matrix.

A useful property of a symmetric matrix is that if we calculate its eigenvalues and eigenvectors, all the eigenvalues will be real.

Eigenvalue Decomposition:

If V1 and V2 are eigenvectors corresponding to eigenvalues λ1 and λ2, and λ1 is not equal to λ2, then the vectors V1 and V2 are orthogonal.

Let us collect the eigenvectors into V = (V1, V2, V3, …), an orthogonal matrix.

If V V^T = I (the identity matrix), the columns of V are orthonormal,

and let Λ = diag(λ1, λ2, λ3, …) be the diagonal matrix of eigenvalues.

we can write

S = V Λ V^T ……. (3)

where V^T is the transpose of V.

Equation (3) is called the eigenvalue decomposition.

Let us write the eigenvalue decomposition code in MATLAB.

For the eigenvalue decomposition to make sense, the input matrix should be symmetric so that the eigenvalues are real. We will calculate a covariance matrix, which is symmetric.
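The MATLAB listing for this step is not shown in this copy of the post, so as a stand-in here is a pure-Python sketch of the same idea: decompose a small symmetric matrix with the classical Jacobi rotation method (my choice of algorithm, not the post's), then reconstruct it from its top-k eigenpairs, exactly as the Lena covariance is reconstructed below. The 3 x 3 test matrix is an arbitrary example.

```python
import math

def jacobi_eig(S, sweeps=100):
    """Eigendecomposition of a small symmetric matrix by cyclic Jacobi
    rotations.  Returns (w, V) such that S = V * diag(w) * V^T."""
    n = len(S)
    A = [row[:] for row in S]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < 1e-18:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                # rotation angle that zeroes the (p, q) entry of A
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):                 # rows:    A <- J^T A
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k] = c * Apk - s * Aqk
                    A[q][k] = s * Apk + c * Aqk
                for k in range(n):                 # columns: A <- A J
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p] = c * Akp - s * Akq
                    A[k][q] = s * Akp + c * Akq
                for k in range(n):                 # accumulate V <- V J
                    Vkp, Vkq = V[k][p], V[k][q]
                    V[k][p] = c * Vkp - s * Vkq
                    V[k][q] = s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

def reconstruct(w, V, k):
    """Rebuild S from its k largest-magnitude eigenpairs: sum of w_i v_i v_i^T."""
    n = len(V)
    keep = sorted(range(len(w)), key=lambda i: -abs(w[i]))[:k]
    return [[sum(w[i] * V[r][i] * V[c][i] for i in keep)
             for c in range(n)] for r in range(n)]

# Small symmetric test matrix (an arbitrary example, not the Lena covariance)
S = [[4.0, 1.0, 2.0],
     [1.0, 3.0, 0.0],
     [2.0, 0.0, 5.0]]
w, V = jacobi_eig(S)
full = reconstruct(w, V, 3)   # all eigenpairs -> recovers S
top1 = reconstruct(w, V, 1)   # dominant eigenpair only -> lossy approximation
```

Keeping only the dominant eigenpair gives a lossy approximation, which is the same trade-off the top-5 Lena reconstruction below illustrates.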


Results:

Input image

The symmetric covariance matrix (shown as an image) generated from the Lena image:

Covariance image reconstructed from the top 5 eigenvalues and their corresponding eigenvectors:

Loss of information because we used only the top 5 eigenvectors = 27.84 percent.

The top 5 eigenvectors contain 72.16 percent of the information of the input image.

Covariance image reconstructed from 1 eigenvector:

Loss of information = 72.40 percent.

The top eigenvector contains 27.8 percent of the information of the input image.

Applications:

Eigenvalue decomposition is the basis of singular value decomposition (SVD) and principal component analysis (PCA). With the help of PCA we can reconstruct our original Lena image, which we will discuss in a future post.

The GLCM (gray-level co-occurrence matrix) is mainly used to perform texture analysis and to extract features from an image.

As the name suggests, it works on a gray image and builds a sort of 2-D histogram of co-occurring pixel values.

The main applications of the GLCM are texture analysis, feature extraction, and segmentation.

Steps to calculate the GLCM matrix:

Let us assume I is a gray image.

Initialize a GLCM matrix of size 256 x 256 (256 is the number of gray levels of the GLCM).

Suppose we use a GLCM angle of zero degrees, meaning the direction of the GLCM is horizontal.

Suppose the GLCM distance is 1, meaning we only look at the pixel horizontally next to the current pixel.

At I(i,j) read the gray value (suppose the pixel value is a = 127 at I(i,j)), and read the gray value at I(i,j+1) in the case of distance 1 and a zero-degree GLCM (suppose the pixel value is b = 58 at I(i,j+1)).

Go to GLCM matrix coordinate (a = 127, b = 58) and increment the value by 1.

Iterating over the full image gives us the zero-degree GLCM for distance 1.
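The steps above can be sketched in pure Python; this is a minimal illustration on a tiny 4-level example image rather than 256 levels, and the function and variable names are my own:

```python
def glcm_0_deg(img, levels, distance=1):
    """Zero-degree GLCM: count pairs (a, b) where b lies `distance`
    pixels to the right of a in the same row."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for j in range(len(row) - distance):
            a, b = row[j], row[j + distance]
            glcm[a][b] += 1
    return glcm

# Tiny 4-level example image
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
g = glcm_0_deg(img, levels=4)
# e.g. the pair (0, 0) occurs twice, and (2, 2) occurs three times
```

Each row of length 4 contributes 3 horizontal pairs, so the 4 rows give 12 counts in total.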

The GLCM distance should be chosen according to the texture type.

For a road, because the texture changes so rapidly, we choose a small distance to calculate the GLCM, but for bricks the GLCM distance should be larger.

The GLCM angle should be selected according to the direction in which the image texture changes, so in a brick image we may want to consider two directions, 0 degrees and 90 degrees.

Many features can be calculated from the GLCM to perform texture analysis, such as contrast, correlation, energy, and homogeneity.

We will use the contrast feature of the GLCM, which is sufficient to say whether a texture is rough or smooth.
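As a sketch of the contrast feature: each normalized GLCM entry is weighted by the squared gray-level difference (i - j)^2, so large jumps between neighbouring pixels dominate. The function and the two toy matrices below are illustrative, not from the post:

```python
def glcm_contrast(glcm):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j),
    where p is the GLCM normalized to sum to 1."""
    total = sum(map(sum, glcm))
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(len(glcm))
               for j in range(len(glcm[i])))

# A diagonal GLCM (neighbours always equal) models a perfectly smooth
# texture; an anti-diagonal one models neighbours that always differ.
smooth = [[5, 0], [0, 5]]
rough = [[0, 5], [5, 0]]
```

The smooth matrix gives contrast 0, the rough one gives a positive value, matching the rough-road versus smooth-brick comparison below.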

Let us write basic GLCM code that calculates the zero-degree GLCM for 256 levels and computes the contrast feature.

%%%%%%%%%%%%%%%%%%%%%%%%%Code Start here %%%%%%%%%%%%%%%

%%%%% GLCM function start here %%%%%%%%%%%%%

function GLCM_0 = getGLCM0(img_gray, distance)
%% function calculates the GLCM matrix at zero degree angle and given distance
%% GLCM_0 is the GLCM matrix for the 0 degree angle
%% img_gray is the input gray image
%% distance is the distance at which the GLCM is calculated

Let us take two input images, first a road texture and second a brick image, apply the GLCM, and get the contrast feature.

Result:

The contrast feature of the road texture in figure 2 is 3.9489e+03.

The contrast feature of the brick texture in figure 3 is 91.0206.

From the results we can say the road texture has a lot of texture, which is why its contrast value is so high; if less texture is present in the image, the contrast value will be low.

So for a smooth area in the image the contrast will be low, and for a rough area the contrast will be high.

* Note: GLCM values depend on the size of the image on which we compute the GLCM, so the image size should be fixed or the GLCM should be normalized.

For more information about GLCM feature analysis, please go through the paper on Haralick texture features.

function transform_img = trasform_2d(T, input_img)

% function performs a 2-D projective transform

% input_img : gray input image

% T is a 3 x 3 transform matrix

% transform_img : output gray transformed image

[row,col] = size(input_img);

transform_img = zeros(3*row+1,3*col+1); % large enough for every index i+row+1, j+col+1 used below

for i=-row:2*row
for j=-col:2*col

U = inv(T)*[i j 1]';     % inverse mapping; ' is transpose

if(U(3) ~= 0)            % homogeneous divide for the 8-dof projective transform

U(1) = U(1)/U(3); %% taking care of the homogeneous co-ordinate
U(2) = U(2)/U(3);
end

if((U(1)>0)&&(U(1)<row-1)&&(U(2)>0)&&(U(2)<col-1)) %% consider only points which lie inside the input image

transform_img(round(i+row+1),round(j+col+1)) = input_img(round(U(1)+1),round(U(2)+1));

end

end
end

end

%%%%%%%%%%%% Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Following are the main 2-D projective transforms:

Euclidean Transform

Similarity Transform

Affine Transform

Projective Transform

Euclidean Transform:

Let us define the transformation matrix for the Euclidean transform:

T = [ R11 R12 Tx ; R21 R22 Ty ; 0 0 1 ]

where R11, R12, R21, and R22 form the rotation matrix and Tx and Ty are the translations. So it has 3 degrees of freedom: 1 in rotation and 2 in translation.

Let us take a 30-degree rotation angle and a -100 pixel translation in both directions.

Result of Euclidean Transform

The Euclidean transform cannot scale the image.
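A quick pure-Python sketch of building the Euclidean T from an angle and a translation, and of checking the defining property that a rigid transform moves points without changing distances. The 30-degree / -100 pixel values mirror the example above; the function names are my own:

```python
import math

def euclidean_T(angle_deg, tx, ty):
    """3x3 Euclidean (rigid) transform: rotation plus translation."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def apply_T(T, x, y):
    """Apply a 3x3 transform to the point (x, y) in homogeneous coordinates."""
    u = T[0][0] * x + T[0][1] * y + T[0][2]
    v = T[1][0] * x + T[1][1] * y + T[1][2]
    w = T[2][0] * x + T[2][1] * y + T[2][2]
    return u / w, v / w

T = euclidean_T(30, -100, -100)
p1 = apply_T(T, 0, 0)   # the origin lands at the translation (-100, -100)
p2 = apply_T(T, 3, 4)
# A rigid transform preserves distances: |p1 p2| stays 5.
dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
```

For a similarity transform the same check would instead show every distance multiplied by the scale factor s.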

Similarity Transform:

Let us define the transformation matrix for the similarity transform:

T = [ s*R11 s*R12 Tx ; s*R21 s*R22 Ty ; 0 0 1 ]

It has 4 degrees of freedom.

If we add the scale factor (s) to the Euclidean transform, it becomes the similarity transform.

Let us take a 40-degree rotation angle, a -100 pixel translation in both directions, and a 1.5 scaling factor.

Affine Transform:

Let us define the transformation matrix for the affine transform:

T = [ a11 a12 Tx ; a21 a22 Ty ; 0 0 1 ]

If in the similarity transform we allow different angles as well as different scales along each axis, it becomes the affine transform.

Let us take a 40-degree rotation angle in the x direction and a 30-degree rotation angle in the y direction, a -100 pixel translation in both directions, a 0.7 scaling factor in the x direction, and a 1.5 scaling factor in the y direction.

Projective Transform:

Let us define the full transformation matrix, which includes the homogeneous factors as well:

T = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 h33 ]

The projective transform has 8 degrees of freedom, because we added the 2 coefficients h31 and h32 to the affine transform matrix, which take care of the homogeneous co-ordinate geometry of the image. (In a future calibration post I may discuss this in detail.)
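The homogeneous divide is what distinguishes the projective case: after multiplying by T, the third coordinate w is generally not 1, so we divide by it, as the transform code above also does. A small sketch (the h31 value and point coordinates are arbitrary illustrations):

```python
def project(T, x, y):
    """Apply a 3x3 projective transform and perform the homogeneous divide."""
    u = T[0][0] * x + T[0][1] * y + T[0][2]
    v = T[1][0] * x + T[1][1] * y + T[1][2]
    w = T[2][0] * x + T[2][1] * y + T[2][2]
    return u / w, v / w          # homogeneous divide

# Identity plus a nonzero h31: points are pulled toward the origin as x
# grows, which is the perspective foreshortening effect.
T = [[1.0,   0.0, 0.0],
     [0.0,   1.0, 0.0],
     [0.001, 0.0, 1.0]]
near = project(T, 10, 10)        # w is close to 1, point barely moves
far = project(T, 1000, 1000)     # w = 2, point mapped to (500, 500)
```

With h31 = h32 = 0 the matrix reduces to the affine case, where w is always 1 and the divide is a no-op.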


In this post, I will discuss Haar wavelet analysis (the wavelet transform) in image processing.

Applications of the Haar wavelet transform in image processing include feature extraction, texture analysis, and image compression.

A wavelet provides a time-frequency resolution analysis of a signal.

As shown in figure 1, it holds information about the different time-frequency components of the signal.

It is like applying a comb of filter bands to the signal; these are called levels. First we apply the full frequency band and have no knowledge of time.

Applying a filter with half the bandwidth of the signal loses some frequency information but gains some time resolution; this is called the first-level decomposition.

Further reducing the filter bandwidth gives the second-level decomposition.

The Haar wavelet is made of two filters: one is a low-pass filter and the second is a high-pass filter.

Coefficients of the low-pass filter = [0.707, 0.707]

Coefficients of the high-pass filter = [-0.707, 0.707]

Figure 3 shows the block diagram of the first-level decomposition.

I is an image of size m x n. We take the image as column vectors (1-D) and apply the low-pass filter and the high-pass filter.

Downsample the outputs and reshape into an image I1 of size m x n/2 (the size is n/2 because we downsampled the columns by 2).

Apply the low- and high-pass filters row-wise to image I1, again downsample the outputs, and reshape into an image of size (m/2, n/2).
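A pure-Python sketch of one filtering-plus-downsampling pass on a 1-D signal, using the two Haar filter taps given above (0.707 ≈ 1/√2); applying this first to columns and then to rows gives the 2-D decomposition described. Function and variable names are my own:

```python
def haar_pass(signal):
    """One Haar filtering + downsampling pass on a 1-D signal.
    Returns the low-pass (approximation) and high-pass (detail) halves."""
    k = 0.7071067811865476          # 0.707... = 1/sqrt(2)
    low, high = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        low.append(k * (a + b))     # [0.707, 0.707] tap pair
        high.append(k * (b - a))    # [-0.707, 0.707] tap pair
    return low, high

sig = [2.0, 2.0, 6.0, 4.0]
low, high = haar_pass(sig)
# The constant pair (2, 2) yields a zero detail coefficient, and the
# transform preserves energy: sum(x^2) == sum(low^2) + sum(high^2).
```

The zero detail coefficient on the constant pair is why smooth regions end up dark in the LH/HL/HH subbands shown in figure 5.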

Figure 5 shows the results of the Haar wavelet decomposition at the first level.

The LL filter output proceeds to the next decomposition level, as it contains most of the information of the input image and looks almost the same at the downsampled resolution.

HH, LH, and HL carry the texture information in the diagonal, horizontal, and vertical directions; HH, LH, and HL are called Haar features.

We can further decompose the LL image into LL1, LH1, HL1, and HH1 images.

MATLAB provides a Wavelet Toolbox GUI to perform wavelet analysis.

%% function haar_wave calculates the haar wavelet decomposition of the given input image
%% Image_in :- input gray level image
%% LL :- Low Low band output image
%% LH :- Low High band output image
%% HL :- High Low band output image
%% HH :- High High band output image

%% apply the low and high pass filters to the 1-D (column-wise) image vector
image_low_pass_1D = filter(filter_L,1,image_vector);
image_high_pass_1D = filter(filter_H,1,image_vector);

% Down sample the image low and high pass image.
temp = 1;
Dimage_low_pass_1D = 0;
Dimage_high_pass_1D = 0;

for i = 1:2:length(image_low_pass_1D)
Dimage_low_pass_1D(temp) = image_low_pass_1D(i);
Dimage_high_pass_1D(temp) = image_high_pass_1D(i);
temp = temp+1;
end

%% reshape the image in 2-D; size will be row/2, col
Dimage_low_pass = reshape(Dimage_low_pass_1D,round(row/2), col)';
Dimage_high_pass = reshape(Dimage_high_pass_1D, round(row/2),col)';

%% convert the image to 1-D as a column vector; I transposed the Dimage_low_pass image above.

Image blending is the process of blending (mixing) two images.

One important application of image blending is removing artifacts while creating a panorama or stitching images together.

Image blending is also used to add objects to the background or foreground of images/scenes.

Let us take the two images we want to blend.

Following are some of the image blending processes.

Suppose the first image is a and the second image is b, with pixel values normalized to the range 0 to 1.

1. Multiply (darken) image blending:

F_blend(a, b) = a * b

Darken image blending multiplies the two images pixel by pixel. The result of the blending is darker than both input images, as shown in figure 1.

2. Screen (brighten) image blending:

F_blend(a, b) = 1 - (1-a) * (1-b)

Screen image blending is the reverse of darken image blending; it makes the image much brighter, as shown in figure 2. So if we want the dark areas to become brighter, we can use screen image blending.

3. Overlay image blending:

F_blend(a, b) = 1 - 2*(1-a)*(1-b) if (a > 0.5), else 2*a*b

Overlay blending produces results in which darker areas become darker and brighter areas become brighter, based on the base image, as shown in figure 3. The image on which we check the condition is called the base image; here we checked the condition on image a.

4. Weightage image blending:

Figure 4. Weightage image blending.

F_blend(a, b) = weight * a + (1-weight) * b

Weightage blending assigns weights to the images and combines them as a weighted average. Figure 4 shows weightage blending with a weight value of 0.5. The weight ranges from 0 to 1.
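The pixel-wise formulas above can be sketched directly in Python on values normalized to [0, 1]; an image version simply applies these per pixel. These are scalar sketches with my own function names, and the weightage version uses the usual convex combination w*a + (1-w)*b:

```python
def multiply_blend(a, b):
    return a * b                               # darken

def screen_blend(a, b):
    return 1 - (1 - a) * (1 - b)               # brighten

def overlay_blend(a, b):
    # base image is a: dark regions get darker, bright regions brighter
    return 2 * a * b if a <= 0.5 else 1 - 2 * (1 - a) * (1 - b)

def weightage_blend(a, b, weight=0.5):
    return weight * a + (1 - weight) * b       # convex combination

a, b = 0.8, 0.6
# multiply darkens (0.48), screen brightens (0.92), overlay pushes the
# bright base pixel brighter (0.84), weightage averages (0.70).
```

Note how multiply always lands below both inputs and screen above both, which is exactly the darken/brighten behaviour described in figures 1 and 2.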

5. Cross fading blending:

Cross fading is one of the most important blending methods, used to smooth panorama images.

Figure 5 shows that we create a ramp function for each input image, where the ramp intensity goes from 0 to 1 and the width of the ramp is the part of the image we want to blend.

We take the two ramp images according to figure 5 and add them to get the result.

We took a full-width ramp to blend the images, as shown in figure 7. On the leftmost side image 1 dominates, on the rightmost side image 2 dominates, and in the middle it is an average.

Figure 8 shows image feathering by a step function. In figure 8 we can apply cross fading in a middle strip window to remove the sharp seam; you can see the result in figure 9.

In some applications we create an image pyramid, apply cross fading at each Gaussian and Laplacian level, and reconstruct the result. This is known as the pyramid blending process.

%%%%%%%%%%%%%%%%%%%% CODE START %%%%%%%%%%%%%%%%%%

clc % clear screen
clear all % clear all variables
close all % close all open windows

%% assign the value only on indices where img1 < .5
overlay_blending(img1<.5) = 2.*img1(img1<.5).*img2(img1<.5);
%% assign the value only on indices where img1 >= .5
overlay_blending(img1>=.5) = 1 - 2.*(1-img1(img1>=.5)).*(1-img2(img1>=.5));

%% generate a ramp function whose size is the width of the image (full ramp function)
ramp1 = (0:1/col:1);
ramp2 = (1:-1/col:0);

cross_fading = zeros(row,col,channel);

for cols = 1: col

%%% multiply the ramp functions with the images to get ramp image 1 and ramp image 2
%%% and add them to get the result
cross_fading(:,cols,:) = (ramp2(cols).*img1(:,cols,:) + ramp1(cols).*img2(:,cols,:));
end

%% generate step images and add them
feathering(:,1:col/2,:) = img1(:,1:col/2,:);
feathering(:,col/2:end,:) = img2(:,col/2:end,:);

imshow(feathering)
title('image feathering')

ramp3 = (0:1/100:1);
ramp4 = (1:-1/100:0);

ground = zeros(1,(col/2)-50);
one = ones(1, (col/2)-50);

%% generate the ramp functions for cross fading in the middle of the image with 100 pixel width
function3 = [ground ramp3 one];
function4 = [one ramp4 ground];