Normalizations in Neural Networks

Normalization of the input data (normalization, equalization)

In image processing, the term “normalization” has several other names, such as contrast stretching, histogram stretching, or dynamic range expansion.
Suppose we have an 8-bit grayscale image whose minimum and maximum pixel values are 50 and 180. We can normalize this image to a larger dynamic range, say 0 to 255. After normalization, the previous 50 becomes 0, 180 becomes 255, and the values in between are scaled according to the following formula:


I_n = (I_o - I_o_min) × (I_n_max - I_n_min) / (I_o_max - I_o_min) + I_n_min

where I_o is the old intensity of a pixel, I_n is its new intensity, [I_o_min, I_o_max] is the old intensity range, and [I_n_min, I_n_max] is the new intensity range.
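As a concrete illustration, here is a minimal C++ sketch of this formula applied to an 8-bit grayscale image stored as a flat array. The function name normalizeImage and the use of std::vector are my own choices for the example, not taken from any particular library.

#include <algorithm>
#include <cstdint>
#include <vector>

// Stretch the intensity range of an 8-bit grayscale image to [newMin, newMax].
// Implements: I_n = (I_o - I_o_min) × (I_n_max - I_n_min) / (I_o_max - I_o_min) + I_n_min
void normalizeImage(std::vector<std::uint8_t>& pixels,
                    std::uint8_t newMin, std::uint8_t newMax)
{
    if (pixels.empty()) return;
    auto [lo, hi] = std::minmax_element(pixels.begin(), pixels.end());
    const int oldMin = *lo;
    const int oldMax = *hi;
    if (oldMin == oldMax) return;  // flat image: nothing to stretch
    for (auto& p : pixels) {
        p = static_cast<std::uint8_t>(
            (p - oldMin) * (newMax - newMin) / (oldMax - oldMin) + newMin);
    }
}

With the numbers from the example above, normalizeImage(pixels, 0, 255) maps 50 to 0, 180 to 255, and a middle value such as 115 to (115 - 50) × 255 / 130 ≈ 127.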


Normalization

A Note on Techniques in Convolutional Neural Networks and Their Influences I (paper summary)

This post summarizes techniques that have been presented in publications on convolutional neural networks. The focus of this summary is on the terminology and its influence. Related techniques will not be explained in detail; for some algorithms, I may write standalone articles explaining how they work.
Allow me to introduce some abbreviations: “conv.” means “convolution” and “op.” means “operation”, since these two words appear very frequently.

LeNet-5

The related paper is Gradient-Based Learning Applied to Document Recognition, published in 1998.

OpenCV-Based Julia Set in C++, OpenMP, OpenCL and CUDA

While studying CUDA with the book CUDA by Example, I found an interesting small program that uses the computer to generate a Julia set image, a kind of fractal image. I gained some experience with fractal geometry as an undergraduate student, and I am still interested in it. I tried the example and found that the CUDA code in the book has some syntax errors.
Based on that example, I created an OpenCL version and an OpenMP version; the core iteration, which is the same in all versions, is sketched below.
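To give an idea of what all these versions compute, here is a minimal single-threaded C++ sketch of the per-pixel Julia set test. It follows the general structure of the book's julia() function; the constant c = -0.8 + 0.156i, the scale factor 1.5, and the escape threshold are the values I recall from the book's version.

#include <complex>

// Decide whether pixel (x, y) of a dim x dim image belongs to the Julia set
// of z -> z^2 + c: start the orbit at the point itself and check whether it
// stays bounded under repeated iteration.
int julia(int x, int y, int dim)
{
    const float scale = 1.5f;
    // Map the pixel to a point in the complex plane, centered on the origin.
    float jx = scale * (float)(dim / 2 - x) / (dim / 2);
    float jy = scale * (float)(dim / 2 - y) / (dim / 2);

    std::complex<float> c(-0.8f, 0.156f);
    std::complex<float> z(jx, jy);

    for (int i = 0; i < 200; ++i) {
        z = z * z + c;
        if (std::norm(z) > 1000.0f)  // squared magnitude has escaped
            return 0;                // orbit diverges: not in the set
    }
    return 1;                        // orbit stayed bounded: in the set
}

Every pixel is independent of the others, which is exactly why this loop parallelizes so directly: OpenMP splits the pixel loop across CPU threads, while OpenCL and CUDA launch one work-item or thread per pixel.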
Allow me to show you the result of this program.


Julia set image