Welcome to my blog! Even though I don’t know why you are here~
About me
I’m a software engineer interested in consumer-level, high-performance computer vision, neural networks and machine learning. Click the link to view my curriculum vitae. My tools include, but are not limited to: C/C++, NEON, OpenCL, CUDA, OpenMP, POSIX threads (pthread), Torch7, TensorFlow, etc.
This post aims to provide practical guidance on how to compile the Torch-Android deep learning tool with OpenBLAS support. The whole procedure targets a UNIX-like environment. I presume that you know what BLAS or OpenBLAS is; if you don’t, click the blue words to find out.
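To give a feel for the OpenBLAS half of the job before the detailed steps, cross-compiling OpenBLAS for a 32-bit ARM device usually looks roughly like the sketch below; the toolchain prefix arm-linux-androideabi-gcc is an assumption and depends on which NDK toolchain you have set up:

# fetch OpenBLAS
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
# cross-compile for ARMv7; CC must point at your NDK cross compiler (placeholder prefix below)
make TARGET=ARMV7 HOSTCC=gcc CC=arm-linux-androideabi-gcc NOFORTRAN=1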
This post aims to provide a brief guide on using the NDK (Native Development Kit) to build a shared library for an Android application. Note that this post will not teach you how to develop a complete JNI-backed Android application; the outcome of the following procedure is only a shared library (a *.so file) that you can use in your Android application. If you want to actually use the generated library, you will have to change some function names according to your project; the modification part will be covered, of course.
Since we need to use the ndk-build command to build the C/C++ code for Android applications running on the ARM architecture, we first need to let the system know where to find ndk-build. Open a terminal and type the following (note that you need to change the NDK path according to where you downloaded and placed it):
cd             # plain "cd" brings you to your home directory ("cd ~" also works); this covers Ubuntu at least
vim .bashrc    # or use "gedit .bashrc" if you are not a vim user
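Once .bashrc is open, append a line like the one below and reload the file; the NDK path is only a placeholder, so point it at wherever you actually unpacked the NDK:

# add the NDK directory to PATH so the shell can find ndk-build (placeholder path)
export PATH=$PATH:/home/yourname/android-ndk-r10e
# reload the configuration in the current terminal
source ~/.bashrc
# sanity check: this should print the location of ndk-build
which ndk-build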
This post will help you get through the GDB debugger quickly. You will start a C project from scratch under Linux (I’m using the Ubuntu 15.10 distribution). Since you found this post, I presume that you have basic knowledge of what GDB is.
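To give a taste of the workflow the post goes through, a minimal debugging session looks roughly like the sketch below; main.c and some_variable are placeholder names:

# compile with debug symbols (-g) and without optimization so GDB can map code back to source
gcc -g -O0 -o main main.c
# launch the program under GDB
gdb ./main
# inside GDB you would then typically run:
#   (gdb) break main             # set a breakpoint at main()
#   (gdb) run                    # start the program
#   (gdb) next                   # step over one line
#   (gdb) print some_variable    # inspect a variable
#   (gdb) quit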
Since we already have some basics of how a CNN works, in this post I prefer to focus more on the ideas that make these architectures unique and the general techniques that can improve the performance of a CNN. If you are new to this, please visit my previous post; I believe it can help you understand the paper, or at least get its key points.
This blog aims to teach you how to use your own data to train a convolutional neural network for image recognition in TensorFlow. The focus is on how to feed your own data to the network rather than on how to design the network architecture. Before I started to survey TensorFlow, my colleagues and I were using Torch7 or Caffe; both are very good machine learning tools for neural networks. The original purpose of turning to TensorFlow is that we believe TensorFlow will have better support on the mobile side, since, as we all know, Android and TensorFlow are both dominated by Google. If you are really in a hurry to import data into your program, visit my GitHub repo to get the necessary code to generate, load and read data through TFRecords. I’m too busy to update the blog, so just clone the project and run build_image_data.py and read_tfrecord_data.py.
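For the impatient, the whole flow from a terminal is roughly the sketch below; the repository URL is a placeholder for the repo linked above, and the exact script arguments are whatever its README asks for:

# clone the repo linked above (placeholder URL)
git clone https://github.com/<your-github>/<tfrecord-repo>.git
cd <tfrecord-repo>
# convert your labelled images into TFRecord files
python build_image_data.py
# read the TFRecords back and feed them to the network
python read_tfrecord_data.py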