This blog will look at how to build a Local Binary Pattern feature extractor for computer vision tasks.
Local Binary Pattern:
What is LBP:
LBP is one of many feature extractors. HOG, SIFT, SURF, FAST, DoG, etc. are all similar but do slightly different things. What makes LBP different is that its main goal is to serve as a texture descriptor on a local level, giving a local representation of the texture in an image. This is done by comparing each pixel with its surrounding pixels. For each pixel in an image, the surrounding x number of pixels will be looked at, where x can be chosen and adjusted as needed. The LBP value for every pixel is calculated relative to its neighbors: if the center pixel is greater than or equal to a neighbor's value, that neighbor is set to 1, else it is set to 0.
From the above table, you can see how each cell gets calculated. From this point, this 2D array will be flattened into a 1D array like this:
This will give 71, so 71 will be the value in the output image. This process will be done for every single pixel in the image.
This table shows how each cell is calculated:
The basic idea behind this is to calculate a value for each index of the 1D array.
The value is determined by the position of the index in the array: if the value at index i is a 1, it contributes 2^i, where i is the index position; if the value at the index is 0, it contributes 0 regardless of the index position. You then sum the results over the whole 1D array to get the center pixel's value.
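To make the weighting concrete, here is a tiny sketch of that summation. The comparison bits below are hypothetical (the actual table values are not reproduced here), chosen so that they happen to produce the 71 from the example above:
import numpy as np

# Hypothetical flattened 3x3 comparison results (1 = center >= neighbor, 0 otherwise).
# These particular bits are made up to match the example value of 71.
bits = np.array([1, 1, 1, 0, 0, 0, 1, 0])

# Each index i contributes 2**i when its bit is 1, and 0 when its bit is 0
weights = 2 ** np.arange(len(bits))
center_value = int(np.sum(bits * weights))
print(center_value)  # 71, the value written to the output image for this pixel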
To get the feature vectors from this, you have to calculate a histogram first.
This will be a histogram of 256 bins as the values of the LBP can range from 0 to 255.
Python Implementation:
OpenCV does have an LBP implementation available, but it is meant for facial recognition and would not be appropriate for getting textures off clothing or environments. scikit-image's implementation, which we use below, is much more useful for this project.
Let's see how to implement it.
from skimage import feature
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
import numpy as np
import cv2
import os
class LBP:
    # Constructor
    # Needs the radius and the number of points for the outer radius
    def __init__(self, numPoints, radius):
        self.numPoints = numPoints
        self.radius = radius

    # Compute the actual LBP and return its normalized histogram
    def calculate_histogram(self, image, eps=1e-7):
        # Create a 2D array the size of the input image holding the LBP codes
        lbp = feature.local_binary_pattern(image, self.numPoints,
                                           self.radius,
                                           method="uniform")
        # Make the feature vector
        # Counts the number of times each LBP prototype appears
        (hist, _) = np.histogram(lbp.ravel(),
                                 bins=np.arange(0, self.numPoints + 3),
                                 range=(0, self.numPoints + 2))
        hist = hist.astype("float")
        hist /= (hist.sum() + eps)
        return hist
# Create the LBP descriptor with 12 points and a radius of 12
loc_bi_pattern = LBP(12, 12)
x_train = []
y_train = []

image_path = "LBPImages/"
train_path = os.path.join(image_path, "train/")
test_path = os.path.join(image_path, "test/")

for folder in os.listdir(train_path):
    folder_path = os.path.join(train_path, folder)
    print(folder_path)
    for file in os.listdir(folder_path):
        image_file = os.path.join(folder_path, file)
        image = cv2.imread(image_file)
        image = cv2.resize(image, (300, 300))
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        hist = loc_bi_pattern.calculate_histogram(gray)
        # Add the feature vector to the data list
        x_train.append(hist)
        # Add the folder name as the label
        y_train.append(folder)
After that you can choose whichever model you want to train. I think an SVM would be best, but logistic regression or Naive Bayes could also work. It would be fun to play around with a few options to see which works best.
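As a rough sketch, assuming the x_train and y_train lists built in the loop above and a hypothetical test image path, fitting the LinearSVC imported earlier could look like this:
# Fit a linear SVM on the LBP histograms (the C value is an arbitrary choice)
model = LinearSVC(C=100.0, random_state=42)
model.fit(x_train, y_train)

# Classify a single test image the same way the training data was prepared
test_file = os.path.join(test_path, "metal", "example.jpg")  # hypothetical file
test_image = cv2.resize(cv2.imread(test_file), (300, 300))
test_gray = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
test_hist = loc_bi_pattern.calculate_histogram(test_gray)
print(model.predict(test_hist.reshape(1, -1))[0])  # prints the predicted folder name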
I trained and tested my code on images of metal and wood textures.
The results are pretty good for something so simple:
Pretty straightforward, seeing as we are using scikit-image's implementation. All we really need to do is create the histograms to get the feature vectors for each image. You can then use those to classify other images that have similar textures.
As you can see, it works pretty well. The first "wood" image is actually metal siding, but I wanted to see how well it does on something that is very difficult to determine. This misclassification could be due to the overall image looking more like a wood flooring texture than a metal texture. Even a human might have the same issue with a black and white photo.
Conclusion:
The ability to extract small-scale or fine-grained details makes LBP a very handy tool for computer vision tasks. One issue, though, is that basic LBP only operates at a single scale, which causes it to miss features at a more global scale. This can be overcome by using implementations of LBP that handle different neighborhood sizes, which allows for better control over the scale. Depending on your needs, either a fixed scale or a variable one may be the better choice.
This blog will look at how to set up and start a Hadoop server on Windows, as well as give some explanation as to what it is used for.
What is Hadoop:
Hadoop is a set of tools that can be used for easily processing and analyzing Big Data for companies and research. Hadoop gives you tools to manage, query, and share large amounts of data with people who are dispersed over a large geographical area. This means that teams in Tokyo can easily work with teams in New York, well, not accounting for sleeping preferences. Hadoop gives a huge advantage over a traditional storage system, not only in the total amount of storage possible, but in flexibility, scalability, and speed of access to this data.
Modules
Hadoop is split up into 4 distinctive modules. Each module performs a certain task that is needed for the distributed system to function properly. A distributed system is a computer system that has its components separated over a network of different computers. This can be both data and processing power. The actions of the computers are coordinated by messages that are passed back and forth between each other. These systems are complex to set up and maintain but offer a very powerful network to process large amounts of data or run very expensive jobs quickly and efficiently.
The first module is the Hadoop Distributed Filesystem (HDFS). HDFS allows files to be stored, processed, shared, and managed across a set of connected storage devices. HDFS is not like a regular operating system's file system, but it can normally be accessed by any supported OS, which gives a great deal of freedom.
The second module is MapReduce. There are two main functions that this module performs. Mapping is the act of reading in the data (or gathering it from each node) and putting it into a format that can be used for analysis. Reduce can be considered the place where all the logic is performed on the collected data. In other words, Mapping gets the data, Reducing analyzes it.
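To make that split concrete, here is a toy word-count sketch in plain Python. It only illustrates the map/reduce idea; it is not Hadoop's actual Java API:
from collections import defaultdict

def map_phase(lines):
    # Map: read the raw data and reshape it into (key, value) pairs
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: perform the analysis by aggregating values that share a key
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["big data is big", "hadoop processes big data"]
print(reduce_phase(map_phase(lines)))  # {'big': 3, 'data': 2, 'is': 1, ...}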
Hadoop Common is the third module. It consists of a set of Java tools that each OS needs in order to access and read the data stored in the HDFS.
The final module is YARN, the resource management system that manages the storing of the data and the running of tasks/analysis on the data.
Little More Detail:
What is a namenode? A namenode stores all the metadata of all the files in the HDFS. This includes permissions, names, and block locations. These blocks can be mapped to each datanode. The namenode is also responsible for managing the datanodes, i.e. where data is saved, which blocks are on which node, etc.
A datanode, aka a slave node, is the node that actually stores and retrieves blocks of information requested by the namenode.
Installation:
Now, with the background out of the way, let's try to install the system on Windows 10.
Step 1: Download the Hadoop binaries from here: Apache Download Mirrors
Step 2: Make a dedicated folder on the C drive to keep things tidy and make sure it is easy to find.
NOTE: DO NOT PUT ANY SPACES IN THE PATH, AS SPACES CAN CAUSE SOME VARIABLES TO EXPAND IMPROPERLY.
Step 3: Unpack the tar.gz file (I suggest 7-Zip as it works on Windows and is free).
Step 4: To run Hadoop on Windows, you need Windows-compatible binaries from this repo: https://github.com/ParixitOdedara/Hadoop. You can just download the bin folder and copy all the files from the downloaded bin into Hadoop's bin (replacing files if needed). Simple, right?
Step 5: Create a folder called data and, in this folder, create two others called datanode and namenode. The datanode folder will hold all the data blocks assigned to it. The namenode folder is for the master node, which holds the metadata for the datanodes (i.e. which datanode each 64 MB block is located on).
Step 6: Set up the Hadoop environment variables like so:
HADOOP_HOME="C:\BigData\hadoop-2.9.1"
JAVA_HOME=<Root of your JDK installation>
And add them to your PATH variable like this:
Step 7: Edit several configuration files.
First up is:
etc\hadoop\hadoop-env.cmd
set HADOOP_PREFIX=%HADOOP_HOME%
set HADOOP_CONF_DIR=%HADOOP_PREFIX%\etc\hadoop
set YARN_CONF_DIR=%HADOOP_CONF_DIR%
set PATH=%PATH%;%HADOOP_PREFIX%\bin
The first time you start up, you need to format the namenode. In a cmd window, run:
hadoop namenode -format
This sets up your namenode and gets Hadoop running.
Now cd into your sbin folder and type:
start-all.cmd
This will open up 4 other screens like this:
Their names are: namenode, datanode, nodemanager, and resourcemanager.
And now we can look at Hadoop in a browser.
The resource manager is at http://localhost:8088/cluster/cluster
This is what you should be greeted with
And that's it, we managed to install and run Hadoop. Now, this is a very simple way of doing it, and there may be better approaches, like using Docker or commercial distributions, which are much easier to set up and use, but learning how to set it up and run it from scratch is a good experience.
Conclusion
We learned that it is fairly complex to set up and configure all of Hadoop. But with all the power it can bring to Big Data analysis, as well as to the large data sets used for AI training and testing, Hadoop can be a very powerful tool for any researcher, data scientist, or business intelligence analyst.
A potential use of Hadoop is for image analysis where the images are stored in different sources, or where you want to use a standard set of images but the number of images is too large to store locally in a traditional storage solution. Using Hadoop, one can set up a reducer that feeds a data generator method for a Keras model. Then, potentially, one can have an endless stream of data from an extremely large dataset, giving almost unlimited training data. This approach can also be used to get numerical data that is stored on a Hadoop system. You no longer have to download the data directly; just use Hadoop to query and preprocess the data before you feed it into your model. This can save time and energy when working with distributed systems.
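As a hypothetical sketch of that idea, assuming the third-party Python hdfs (WebHDFS) package, a placeholder cluster URL, and made-up paths and labels, a Keras-style generator fed from HDFS might look something like this:
import numpy as np
import cv2
from hdfs import InsecureClient  # assumes the "hdfs" WebHDFS client package is installed

# Placeholder WebHDFS address and user; adjust to your cluster
client = InsecureClient("http://localhost:50070", user="hadoop")

def hdfs_image_generator(hdfs_dir, batch_size=32):
    # Stream image files out of HDFS and yield (images, labels) batches forever,
    # which is what Keras expects from a generator passed to fit()
    files = client.list(hdfs_dir)
    while True:
        images, labels = [], []
        for name in files:
            with client.read(hdfs_dir + "/" + name) as reader:
                raw = np.frombuffer(reader.read(), dtype=np.uint8)
            image = cv2.imdecode(raw, cv2.IMREAD_COLOR)
            image = cv2.resize(image, (224, 224)) / 255.0
            images.append(image)
            labels.append(0)  # placeholder label; derive it from the path in practice
            if len(images) == batch_size:
                yield np.array(images), np.array(labels)
                images, labels = [], []

# model.fit(hdfs_image_generator("/datasets/images"), steps_per_epoch=100, epochs=5)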
Why am I making this post?
Well, I basically needed to make my own SIFT algorithm, as there is no free one in OpenCV anymore, at least not in 3.0+.
For computer vision, one of the most basic ideas is to extract information from an image. This is feature extraction. There are different levels of features, mainly global and local features. This blog will look at SIFT, which is a local feature extractor. It works by finding key points, or areas of great change, and then adding quantitative information, or descriptors, that can be used in more complex tasks like object detection. Ideally, these key points should be uniquely identifiable across various images regardless of transformations or changes in the image.
Why Python?
Yes, it's not the best in terms of speed for this, and after running the code it takes a hot minute to do the feature extraction. But I can easily use it in any computer vision project that I have now, and it plugs and plays with no problem.
How does SIFT work?
First, you give SIFT a picture to work with. We will be using an image I took of a dog from when I went dog sledding in Finland.
Step 1: Double the size of your image in both dimensions using bilinear interpolation.
Step 2: Blur the image using Gaussian convolution.
Step 3: Perform more convolutions with increasing standard deviation.
Step 4: Downsample each image.
Step 5: Restart the convolutions again.
Continue this until the image is too small to perform these steps anymore.
This is called a scale space, which helps simulate the many different scales an image can come in (i.e. from small to large and everything in between).
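A rough sketch of building such a scale space with OpenCV might look like this; the octave count and sigma schedule here are illustrative choices, not SIFT's exact ones:
import cv2

def build_scale_space(gray, num_octaves=4, scales_per_octave=5, sigma=1.6):
    # Step 1: double the image size using bilinear interpolation
    base = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)
    scale_space = []
    for _ in range(num_octaves):
        octave = []
        for s in range(scales_per_octave):
            # Steps 2-3: repeated Gaussian blurs with a growing standard deviation
            k = 2 ** (s / (scales_per_octave - 1))
            octave.append(cv2.GaussianBlur(base, (0, 0), sigmaX=sigma * k))
        scale_space.append(octave)
        # Steps 4-5: downsample and restart, stopping once the image gets too small
        if min(base.shape[:2]) // 2 < 8:
            break
        base = cv2.resize(base, (base.shape[1] // 2, base.shape[0] // 2),
                          interpolation=cv2.INTER_LINEAR)
    return scale_space

# scale_space = build_scale_space(cv2.imread("dog.jpg", cv2.IMREAD_GRAYSCALE))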
After the convolutions, we have to get the Laplacian for each scale space. This gives a greyscale response value for each element in the image. The maxima of the Laplacian will then be our key points. A max pixel is a pixel whose value is larger than all of its surrounding pixels. This can be extended to several pixels or even a larger area depending on your needs. We can refine the key point results in the scale space by using a quadratic Taylor expansion. Along with this, key points that lie on the edge of an object should be removed: they are poor key points because they are not unique under translations of the image and slide along the edge direction. The point of key point extraction is not to find the edges of an object, but rather to find unique features of the image, which may or may not lie on the edge of a target. That is the difference between edge detection and key point detection.
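As a simple illustration of that maxima search, here is a sketch using SciPy on a single response map, where a difference of two adjacent blurs stands in for the Laplacian and the threshold is an arbitrary choice:
import numpy as np
from scipy import ndimage

def find_local_maxima(response, threshold=0.03):
    # Keep pixels that are the maximum of their 3x3 neighborhood and above a threshold
    local_max = ndimage.maximum_filter(response, size=3)
    return np.argwhere((response == local_max) & (response > threshold))

# Example on the first octave from the sketch above (the scale indices are arbitrary):
# dog = scale_space[0][1].astype(np.float32) - scale_space[0][0].astype(np.float32)
# candidates = find_local_maxima(dog / 255.0)  # (row, col) candidate key points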
Finally, SIFT gives a reference orientation to each key point. The gradient of the image is calculated by finite differences and then smoothed using box blurs. The remaining points that exceed a certain value/threshold are kept; any other key point is discarded.
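A small sketch of the finite-difference gradients that feed that orientation step (the box blurring and thresholding are left out):
import numpy as np

def gradient_magnitude_orientation(patch):
    # Central finite differences along rows (y) and columns (x)
    gy, gx = np.gradient(patch.astype(np.float32))
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.degrees(np.arctan2(gy, gx)) % 360.0  # angles in [0, 360)
    return magnitude, orientation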
After all of that, we have a list of final key points that we can create the descriptors for. This is done by creating a histogram of the gradient directions around each key point. This will not just make one histogram; it makes several, arranged in a circle around the pixel, with the histograms corresponding to that center pixel. Each gradient region is a circular shape (rather than a box as used previously), and any key point that cannot create a full circle is discarded.
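And a sketch of one such orientation histogram, weighted by gradient magnitude; a full SIFT descriptor concatenates many of these sub-histograms, and the bin count here is just illustrative:
import numpy as np

def orientation_histogram(magnitude, orientation, num_bins=8):
    # Bin the gradient directions, weighting each vote by its gradient magnitude
    hist, _ = np.histogram(orientation, bins=num_bins,
                           range=(0.0, 360.0), weights=magnitude)
    # Normalize so the descriptor is less sensitive to illumination changes
    return hist / (np.linalg.norm(hist) + 1e-7)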
So, after all of this, we are now left with a set of key points that are local features which can help us identify unique objects in images. This can then be used on its own for simple computer vision tasks like object identification, image slicing and binding, or even image similarity for search engines. So, SIFT can be very useful if you know how and why it works.
Code for SIFT
Here is the pseudocode for SIFT key point detection.
Sift Code
Find_Key_Points(image):
    Gaussian smoothing of the image
    Downsample the image
    Make Gaussian pyramids
    Create downsampled Gaussian pyramids
    For each octave:
        Run extrema detection
    For each image sample at each scale:
        Find the gradient magnitude
        Find the orientation
    Calculate each keypoint's orientation
    Calculate each keypoint's descriptor
As you can see, the key points found in this image are not perfect, but implementing SIFT is not very easy. Also, this image has a lot of noise in it, so it may look like the algorithm did not work; it is working fine, but the changes are on such a small level that they are difficult to even see.
And for those who want to see it, here is the whole (albeit abridged) code.
Now for a more real-world example: getting data from the Market-1501 dataset, which holds several images of people for different tasks. Running one of the images through the key point extractor lets you find the local features that are unique to the image itself.
From the above picture you can see that a few of the generated key points look great, and one is over on the street, which is what you do not want. This is not the SIFT extractor's fault; it is looking over the whole image and not just the person. If you want a better result for a person's individual key points without the background mess, I would suggest using segmentation to create a mask and then running the result through the SIFT extractor. That way it limits what is being looked at and gives better local features for each person.
Conclusion
This implementation is not the state-of-the-art method for getting key points and is very susceptible to blurry or poor quality images, as well as to background noise and noise in the image that you cannot detect yourself. This took a long while to implement, and the results are not that great compared to the time it took to create. In my opinion, I would look into other means of extracting local features, like segmentation of the image.
Reference
Otero, I. R., & Delbracio, M. (2014). Anatomy of the SIFT Method. Image Processing On Line, 4, 370-396. doi:10.5201/ipol.2014.82