Step 7 Verify that Docker is installed by running the hello-world container:
sudo docker run hello-world
If you see the hello-world message, the Docker installation is complete.
Let's use Docker by pulling a Docker image. I am using an all-in-one deep learning Docker image from Docker Hub to show how to use Docker. The image has TensorFlow, Keras, Caffe, OpenCV and other deep learning libraries. Below is the Docker Hub link for the all-in-one deep learning image, where you can check the details of the image.
In the above command, -p forwards a port from the Docker container to the system, and -v shares a folder from the system into the container (you need to edit the path to your own). I am sharing the cv_basics folder into the container under the name my_folder; you can give any name while sharing.
This tutorial discusses the steps to install Anaconda 5.3 and configure OpenCV 3.2 with Anaconda 5.3 for Python 3.6.
After completing these steps, I am sure you will be able to use Anaconda/Python 3 and OpenCV 3.2.
At the office I was assigned a simple image processing task which required extra OpenCV modules in Python. It was a simple task, but I struggled a lot to configure Python with the OpenCV modules 😦 . After spending my weekends on it I completed the task on time, and realised how easy it actually was 🙂 .
For Ubuntu 16.04 LTS (the steps for Windows 7 are almost identical).
1]. Download Anaconda according to your system configuration from the link below.
If we calculate the eigenvalues and eigenvectors of the data, the eigenvectors represent the basis directions along which the data is spread, and the eigenvalues tell us which of those directions (eigenvectors) carry more information about the data.
Basics of eigenvalues and eigenvectors :
Suppose A is a matrix of size N x N and V is a vector of size N.
V is called an eigenvector of A if multiplication by A only scales the vector V:
A V = λ V ……. (1)
In equation (1), λ is called the eigenvalue of the matrix.
In MATLAB, the function eig returns the eigenvectors and eigenvalues:
[V, D] = eig(A)
where D is a diagonal matrix containing the eigenvalues and the columns of V are the corresponding eigenvectors, so that
A V = V D ……. (2)
From equation (1), the eigenvalues are the roots of the characteristic polynomial
det(λ I − A) = 0
where I is the identity matrix and A is the given input matrix.
The determinant of matrix A (|A|) equals the product of all its eigenvalues.
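The MATLAB call above can be mirrored in Python with NumPy (a sketch; the 3 x 3 matrix here is an arbitrary example of my own, not from the post):

```python
import numpy as np

# An arbitrary symmetric example matrix, so the eigenvalues come out real.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Columns of V are eigenvectors; w holds the eigenvalues.
w, V = np.linalg.eig(A)
D = np.diag(w)

# Check A V = V D, equation (2).
print(np.allclose(A @ V, V @ D))                 # True

# The determinant equals the product of the eigenvalues.
print(np.isclose(np.linalg.det(A), np.prod(w)))  # True
```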
The set of eigenvalues of a matrix is called the spectrum of the matrix.
Symmetric matrix :
If transposing a matrix does not change it, the matrix is called symmetric:
S = S^T
where S^T is the transpose of S.
One example of a symmetric matrix is the covariance matrix.
An important property of a symmetric matrix is that all of its eigenvalues are real.
Eigen Value Decomposition:
For a symmetric matrix, if V1 and V2 are eigenvectors corresponding to eigenvalues λ1 and λ2, and λ1 is not equal to λ2, then V1 and V2 are orthogonal.
Let V = (V1, V2, V3, …) be the matrix whose columns are the eigenvectors (an orthogonal matrix).
Since the columns are orthonormal, V V^T = I (the dot product of any two distinct columns is zero).
Let Λ = diag(λ1, λ2, λ3, …) be the diagonal matrix of eigenvalues.
Then we can write
S = V Λ V^T ……. (3)
where V^T is the transpose of V.
Equation (3) is called the eigenvalue decomposition.
Let's write the eigenvalue decomposition code in MATLAB.
For the eigenvalue decomposition to make sense, the input matrix should be symmetric so that the eigenvalues are real. We will therefore calculate a covariance matrix, which is symmetric.
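A rough NumPy sketch of the same idea (the Lena covariance image is not available here, so a random covariance matrix stands in for it; the variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))   # stand-in data (rows = variables)
S = np.cov(X)                        # covariance matrix: symmetric, so EVD applies

# eigh is for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
w, V = np.linalg.eigh(S)             # eigenvalues come back in ascending order
idx = np.argsort(w)[::-1]            # sort descending so the "top" eigenpairs come first
w, V = w[idx], V[:, idx]

k = 5                                # keep the top-5 eigenpairs, as in the post
S_k = V[:, :k] @ np.diag(w[:k]) @ V[:, :k].T   # S ≈ V Λ V^T, equation (3)

# Fraction of the eigenvalue mass kept by the top-k eigenvectors
# (the "percentage information" the post reports for the Lena image).
kept = w[:k].sum() / w.sum()
print(f"top-{k} eigenvectors keep {100 * kept:.2f}% of the eigenvalue mass")
```

With all eigenpairs kept, `S_k` reconstructs `S` exactly; dropping eigenpairs trades reconstruction quality for compactness, which is the loss percentage reported below.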
Symmetric covariance input image generated from the Lena image:
Covariance image reconstructed from the top 5 eigenvalues and their corresponding eigenvectors:
Information lost because we used only the top 5 eigenvectors = 27.84 percent
The top 5 eigenvectors contain 72.16 percent of the information in the input image.
Covariance image reconstructed from 1 eigenvector:
Information lost = 72.40 percent
The top eigenvector contains 27.8 percent of the information in the input image.
Eigenvalue decomposition is the basis of singular value decomposition and principal component analysis. With the help of PCA we can reconstruct the original Lena image, which we will discuss in a further post.
GLCM (gray level co-occurrence matrix) is mainly used to perform texture analysis and extract features from an image.
As the name suggests, it works on a gray image and builds a sort of 2-D histogram from it.
The main applications of GLCM are texture analysis, feature extraction and segmentation.
Steps to calculate the GLCM matrix :
Let's assume a gray image I.
Initialize a GLCM matrix of size 256 x 256 (256 is the number of gray levels of the GLCM).
Suppose we use a GLCM angle of zero degrees, meaning the direction of the GLCM is horizontal.
Suppose the GLCM distance is 1, meaning we just look at the pixel horizontally next to the current pixel.
At I(i,j), read the gray value (suppose the pixel value is a = 127), and read the gray value at I(i,j+1) for distance 1 and a zero-degree GLCM (suppose the pixel value is b = 58).
Go to coordinate (a = 127, b = 58) in the GLCM matrix and increment the value by 1.
Iterating over the full image gives us the zero-degree GLCM matrix for distance 1.
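The steps above can be sketched in Python/NumPy (the post's own code is in MATLAB; this is my equivalent, with a tiny toy image so the counts can be checked by hand):

```python
import numpy as np

def glcm_0(img_gray, distance=1, levels=256):
    """Zero-degree (horizontal) GLCM for a gray image, following the steps above."""
    glcm = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img_gray.shape
    for i in range(rows):
        for j in range(cols - distance):
            a = img_gray[i, j]             # current pixel value
            b = img_gray[i, j + distance]  # horizontal neighbour at the given distance
            glcm[a, b] += 1                # increment co-occurrence count at (a, b)
    return glcm

# Tiny 2 x 3 example with 3 gray levels.
img = np.array([[0, 0, 1],
                [1, 2, 2]], dtype=np.uint8)
g = glcm_0(img, distance=1, levels=3)
print(g)  # pairs scanned: (0,0), (0,1), (1,2), (2,2) -> one count each

# Contrast feature: sum over (i, j) of (i - j)^2 * p(i, j), p = normalised GLCM.
p = g / g.sum()
ii, jj = np.indices(g.shape)
contrast = ((ii - jj) ** 2 * p).sum()
print(contrast)  # 0.5
```

A high contrast value means many co-occurrences far from the GLCM diagonal, i.e. a rough texture; a smooth texture concentrates counts on the diagonal.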
The GLCM distance should be chosen according to the texture type.
For a road, the texture changes so rapidly that we use a small distance to calculate the GLCM, but for bricks the GLCM distance should be larger.
The GLCM angle should be selected according to the direction in which the image texture changes; in a brick image we would want to consider two directions, 0 degrees and 90 degrees.
Many features can be calculated from the GLCM matrix for texture analysis, such as contrast, correlation, energy and homogeneity.
We will compute the contrast feature from the GLCM matrix, which is sufficient to say whether a texture is rough or smooth.
Let's write basic GLCM code which calculates the zero-degree GLCM for 256 levels and computes the contrast feature.
%%%%%%%%%%%%%%%%%%%%%%%%%Code Start here %%%%%%%%%%%%%%%
%%%%% GLCM function start here %%%%%%%%%%%%%
function GLCM_0 = getGLCM0(img_gray, distance)
%% Calculates the GLCM matrix at a zero-degree angle for the given distance.
%% GLCM_0   : GLCM matrix for the 0-degree angle
%% img_gray : input gray image
%% distance : GLCM distance
%%%%%%%%%%%% Code End %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The following are the main 2-D projective transforms:
Euclidean Transform :
Let's define the transformation matrix for the Euclidean transform:
T = [ R11 R12 Tx ; R21 R22 Ty ; 0 0 1]
where R11, R12, R21 and R22 form the rotation matrix and Tx and Ty are the translations. So it has 3 degrees of freedom: 1 in rotation and 2 in translation.
Let's take a rotation angle of 30 degrees and a translation of -100 pixels in both directions.
Result of Euclidean Transform
The Euclidean transform has no ability to scale the image.
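The Euclidean matrix above, with the post's 30-degree angle and -100 pixel translation, can be built like this (a NumPy sketch; the test point is my own choice):

```python
import numpy as np

theta = np.deg2rad(30)       # 30-degree rotation, as in the post
tx, ty = -100.0, -100.0      # -100 pixel translation in both directions

# T = [R11 R12 Tx ; R21 R22 Ty ; 0 0 1], with the 2x2 block a proper rotation.
T = np.array([[np.cos(theta), -np.sin(theta), tx],
              [np.sin(theta),  np.cos(theta), ty],
              [0.0,            0.0,           1.0]])

p = np.array([200.0, 100.0, 1.0])   # a point in homogeneous coordinates
print(T @ p)                        # rotated and translated point

# A Euclidean (rigid) transform preserves distances: the rotation block is orthonormal.
R = T[:2, :2]
print(np.allclose(R.T @ R, np.eye(2)))   # True
```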
Similarity Transform :
Let's define the transformation matrix for the similarity transform:
T = [ s*R11 s*R12 Tx ; s*R21 s*R22 Ty ; 0 0 1]
If we add a scale factor (s) to the Euclidean transform, it becomes the similarity transform. It has 4 degrees of freedom: 1 in rotation, 2 in translation and 1 in scale.
Let's take a rotation angle of 40 degrees, a translation of -100 pixels in both directions, and a scaling factor of 1.5.
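With the post's parameters (40 degrees, -100 translation, scale 1.5) the similarity matrix looks like this (a sketch; the two test points are my own):

```python
import numpy as np

theta = np.deg2rad(40)       # 40-degree rotation
tx, ty = -100.0, -100.0      # translation
s = 1.5                      # isotropic scale factor

# T = [s*R11 s*R12 Tx ; s*R21 s*R22 Ty ; 0 0 1]: Euclidean plus one scale parameter.
T = np.array([[s * np.cos(theta), -s * np.sin(theta), tx],
              [s * np.sin(theta),  s * np.cos(theta), ty],
              [0.0,                0.0,               1.0]])

# A similarity scales all lengths by the same s, so shapes keep their aspect ratio.
p, q = np.array([0.0, 0.0, 1.0]), np.array([10.0, 0.0, 1.0])
d_before = np.linalg.norm(q[:2] - p[:2])
d_after = np.linalg.norm((T @ q)[:2] - (T @ p)[:2])
print(d_after / d_before)    # ~1.5, the scale factor
```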
Affine Transform :
Let's define the transformation matrix for the affine transform:
T = [ a11 a12 Tx ; a21 a22 Ty ; 0 0 1]
If in the similarity transform we allow a different angle and a different scale in each direction, it becomes the affine transform. It has 6 degrees of freedom.
Let's take a rotation angle of 40 degrees in the x direction and 30 degrees in the y direction, a translation of -100 pixels in both directions, and scaling factors of 0.7 in the x direction and 1.5 in the y direction.
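One possible way to realise "different angle and different scale per axis" is the matrix below (a sketch of my own; any matrix with an unconstrained 2x2 block is affine). The key property checked is that affine maps keep parallel lines parallel:

```python
import numpy as np

theta_x, theta_y = np.deg2rad(40), np.deg2rad(30)   # different angles per axis
sx, sy = 0.7, 1.5                                   # different scales per axis
tx, ty = -100.0, -100.0

# T = [a11 a12 Tx ; a21 a22 Ty ; 0 0 1]: the 2x2 block is unconstrained.
T = np.array([[sx * np.cos(theta_x), -sx * np.sin(theta_y), tx],
              [sy * np.sin(theta_x),  sy * np.cos(theta_y), ty],
              [0.0,                   0.0,                  1.0]])

# Parallel direction vectors map through the 2x2 block and stay parallel:
A = T[:2, :2]
v = np.array([1.0, 2.0])
u, w2 = A @ v, A @ (3 * v)
print(abs(u[0] * w2[1] - u[1] * w2[0]) < 1e-9)   # True: cross term ~0, still parallel
```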
Projective Transform :
Let's define the full transformation matrix, which includes the homogeneous coefficients as well:
T = [ h11 h12 h13 ; h21 h22 h23 ; h31 h32 h33]
The projective transform has 8 degrees of freedom: we add the 2 coefficients h31 and h32 to the affine transform matrix, and these take care of the homogeneous coordinate geometry of the image. Since the overall scale of the matrix cancels out, only 8 of the 9 entries are free. (Maybe in a future calibration post I will discuss this in detail.)
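Applying a projective transform means multiplying in homogeneous coordinates and then dividing by the third component. A small sketch (the coefficient values here are made up for illustration):

```python
import numpy as np

# A homography: h31, h32 are the extra coefficients relative to the affine matrix.
T = np.array([[1.0,  0.2, 10.0],
              [0.1,  1.1,  5.0],
              [1e-3, 2e-3, 1.0]])

def apply_homography(T, x, y):
    """Map a 2-D point through T, including the homogeneous divide."""
    p = T @ np.array([x, y, 1.0])
    return p[:2] / p[2]          # divide by w = h31*x + h32*y + h33

print(apply_homography(T, 100.0, 50.0))

# The overall scale of T cancels in the divide, which is why only 8 of
# the 9 entries are free parameters (8 degrees of freedom).
print(np.allclose(apply_homography(T, 100.0, 50.0),
                  apply_homography(2 * T, 100.0, 50.0)))   # True
```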
%% function haar_wave calculates the Haar wavelet decomposition of the given input image
%% Image_in : input gray-level image
%% LL : Low-Low band output image
%% LH : Low-High band output image
%% HL : High-Low band output image
%% HH : High-High band output image
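The MATLAB body of haar_wave is not shown above; a one-level Haar decomposition along the lines of that header can be sketched in NumPy (my own implementation, averaging/differencing pixel pairs first along rows and then along columns):

```python
import numpy as np

def haar_wave(img):
    """One-level 2-D Haar decomposition: returns the LL, LH, HL, HH bands."""
    img = img.astype(np.float64)
    # Pass along rows: pairwise averages (low) and differences (high).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Pass along columns of each intermediate result.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # low-pass both directions: coarse image
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # vertical detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # horizontal detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=np.float64).reshape(4, 4)   # toy 4x4 "image"
LL, LH, HL, HH = haar_wave(img)
print(LL)   # each band is half the size of the input in both directions
```

On a constant image only the LL band is non-zero, which is the usual sanity check for a wavelet decomposition.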