Create a simple Neural Network from scratch using Origin C

When learning artificial intelligence, I'm always amazed by the people who provide all kinds of supporting packages for Python, R, and whatever other languages people use. However, to make sure I really understand how this machine thinking works, I've always wanted to implement a neural network without any packages, totally from zero. That's why I chose Origin C: it seems no one has done this on that platform yet, so there are definitely no 3rd-party packages I could lean on. In this blog, I will explain step by step how I did it (thanks to one of Milo Spencer-Harper's blogs).

First, we need to understand what a neural network is. The human brain is made of hundreds of billions of neurons. Once a neuron's synapses collect enough impulses, the neuron fires and passes the impulse on to the next neuron. We call this process "thinking".


We can simulate this process programmatically, not by building a brain but by simulating its high-level rules. Let's use a 2D data table, and in order to simplify the task, we will only simulate a neural network with 4 inputs and a single output.

So first, we need to train the neurons to solve unknown cases. In the following table, the first seven cases will be used as the training set. Have you noticed the pattern?

                Input 1   Input 2   Input 3   Input 4   Output
Sample 1           0         0         0         1        0
Sample 2           1         1         1         1        1
Sample 3           1         0         1         0        0
Sample 4           0         1         1         0        1
Sample 5           0         0         1         1        0
Sample 6           0         1         0         1        1
Sample 7           1         0         0         1        0
New situation      0         1         0         0        ?

Yes, the output always follows the second input, so "?" should be 1.

 

Training Process

So how can we train the network to return a good prediction? The idea is to give each input a weight, either positive or negative, so that an input with a larger weight has more impact on the output. We start by giving each weight a random value and let the network adjust it during training (a sketch follows the steps below):

  1. Take a training sample and calculate the output from the current weights, using a formula defined in the next section.
  2. Calculate the error, which is the difference between the sample's output and the result from step 1.
  3. Adjust the weights slightly according to the error value.
  4. Repeat steps 1 to 3 ten thousand times.
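
In Origin C pseudo-form, the loop looks roughly like this (Forward_Pass and Adjust_Weights are hypothetical names used only for this sketch; the real functions appear in the full listing later in this post):

// Rough shape of the training loop (hypothetical helper names)
vector<double> vWeight;
vWeight = Get_Random_Vector(4);  // random starting weights
for (int ii = 0; ii < 10000; ii++) {
	double dOutput = Forward_Pass(vInput, vWeight);  // step 1: calculate the output
	double dError = dTarget - dOutput;               // step 2: error vs. the sample output
	Adjust_Weights(vWeight, vInput, dError);         // step 3: small correction
}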


Then the weights will eventually converge to an optimal solution for the training set. If we then use the network on an unknown case, it will give a close-enough prediction. We call this process "Back Propagation".

 

The Function for Calculating the Network Output

You may wonder what function we use to calculate the output. First, let's take the weighted sum of all the inputs:

$$s = \sum_{i=1}^{4} w_i x_i$$

Then normalize the result so that it falls between 0 and 1, using the Sigmoid function:

$$\sigma(s) = \frac{1}{1 + e^{-s}}$$

The Sigmoid function is an S-shaped curve:

[Figure: the S-shaped Sigmoid curve]

Embedding the first equation into the second, we have:

$$o = \frac{1}{1 + e^{-\sum_{i=1}^{4} w_i x_i}}$$

Again, in order to simplify the task, we don't take a lowest excitation threshold (a bias term) into consideration.
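
To make this concrete, here is a minimal Origin C sketch of the output calculation for a single sample; Compute_Output is a hypothetical helper written only for illustration, while the full listing later in this post does the same job with vectors and matrices:

// A minimal sketch: the network output for one 4-input sample
static double Compute_Output(vector<double> vInput, vector<double> vWeight) {
	double dSum = 0;
	for (int ii = 0; ii < 4; ii++)
		dSum += vWeight[ii] * vInput[ii];  // weighted sum of all inputs
	return 1 / (1 + exp(-dSum));           // squash into (0, 1) with the Sigmoid
}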

 

Weight Adjustment Function

So how do we modify the weights during training? We can use the Error Weighted Derivative method (a.k.a. the Delta Rule):

$$\Delta w_i = x_i \cdot \text{error} \cdot \sigma'(o)$$

where $x_i$ is the $i$-th input, $\text{error} = t - o$ is the difference between the training output $t$ and the calculated output $o$, and $\sigma'$ is the slope of the Sigmoid curve.

Why choose this equation? First, we want the adjustment to be proportional to the error. Second, when there is no input ($x_i = 0$), the corresponding weight is not modified. Last, we multiply by the slope of the Sigmoid curve. To understand the slope part, consider the following:

  1. The same function (Sigmoid) is used to calculate the output.
  2. If the calculated output is a large positive (or negative) value, it means the neural network is confident in this (or the opposite) decision.
  3. From the curve we can see that the more extreme the output value is, the smaller the slope becomes.
  4. So if the neural network is already confident about a weight (error ≈ 0), it will not be adjusted very much. Multiplying by the slope implements exactly this.

Taking the derivative of the Sigmoid equation, we get:

$$\sigma'(s) = \sigma(s)\,(1 - \sigma(s)) = o\,(1 - o)$$

Putting it into the adjustment equation, we have:

$$\Delta w_i = x_i \cdot \text{error} \cdot o \cdot (1 - o)$$
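
To make the update concrete, here is one hand-worked step with made-up numbers (the starting weights are assumed for illustration only). Take Sample 1, inputs $[0, 0, 0, 1]$ with training output $t = 0$, and starting weights $[0.5, -0.2, 0.1, 0.3]$. The weighted sum is $s = 0.3$, so $o = \sigma(0.3) \approx 0.574$ and the error is $0 - 0.574 = -0.574$. Then $\Delta w_4 = 1 \times (-0.574) \times 0.574 \times (1 - 0.574) \approx -0.140$, while $\Delta w_1 = \Delta w_2 = \Delta w_3 = 0$ because those inputs are 0. So $w_4$ moves from $0.3$ to about $0.160$, in the right direction for predicting 0.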

Of course, there are other update rules with better performance; the advantage of this equation is that it's simple enough to understand.

 

Building your Origin C code

#define NUM_NEURONS 4
#define NUM_MAX_ITERATION 10000
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Main function

void Simple_Neural_Network_Main(vector vs) {
	vector<double> vsWeight;
	vsWeight = Neural_Network_Training(Get_Training_Matrix(), Get_Training_Result());  // Train the network to optimize the weight vector
	printf("New synaptic weights after training:\n[%f, %f, %f, %f]\n\n", vsWeight[0], vsWeight[1], vsWeight[2], vsWeight[3]);  // Print the optimized weights
	
	matrix mat(1, 4);
	mat.SetRow(vs, 0);
	vector<double> vsPred;
	vsPred = Neural_Network_Think(mat, vsWeight);  // Reuse the think function to predict the output for the new case
	printf("Considering new situation [%.0f, %.0f, %.0f, %.0f] -->\n%f\n\n", mat[0][0], mat[0][1], mat[0][2], mat[0][3], vsPred[0]);
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Training

static vector<double> Neural_Network_Training(matrix TrainingMat, vector TrainingResult) {
	// Initialize with random weights
	vector<double> vsWeight(NUM_NEURONS);
	vsWeight = Get_Random_Vector(NUM_NEURONS);  // Generate random starting weights
	printf("Random starting synaptic weights:\n[%f, %f, %f, %f]\n\n", vsWeight[0], vsWeight[1], vsWeight[2], vsWeight[3]);  // Print the initial weights
	
	for (int ii = 0; ii < NUM_MAX_ITERATION; ii++) {
		vector<double> vsPred, vsError, vsAdjustment;
		vsPred = Neural_Network_Think(TrainingMat, vsWeight);	// Get predictions for all training samples
		vsError = TrainingResult - vsPred;		// Compare with the training results to get the errors
		
		matrix mat(TrainingMat);
		mat.Transpose();  // Transpose so each weight's adjustment sums over all samples
		vsAdjustment = Matrix_Dot(mat, vsError * Sigmoid_Dev(vsPred));  // Delta Rule: input * error * slope
		vsWeight += vsAdjustment;
	}
	
	return vsWeight;
}

static vector<double> Neural_Network_Think(matrix mat, vector<double> vsWeight) {
	// Forward pass: Sigmoid of the weighted sum for every row of the matrix
	vector<double> vsResult;
	vsResult = Sigmoid(Matrix_Dot(mat, vsWeight));
	return vsResult;
}

static vector<double> Matrix_Dot(matrix mat, vector<double> vs) {
	// Origin C does not seem to have a built-in matrix-vector dot product,
	// but we can create one of our own. No big deal.
	vector<double> vsResult(0);
	for (int ii = 0; ii < mat.GetNumRows(); ii++) {
		vector v;
		mat.GetRow(v, ii);
		vsResult.Add(Vector_Dot(v, vs));  // One dot product per row
	}
	return vsResult;
}

static double Vector_Dot(vector<double> vs1, vector<double> vs2) {
	int nSize = min(vs1.GetSize(), vs2.GetSize());  // Guard against size mismatch
	double dResult = 0;
	for (int ii = 0; ii < nSize; ii++) {
		dResult += vs1[ii] * vs2[ii];
	}
	return dResult;
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Sigmoid

static vector<double> Sigmoid(vector<double> vx) {
	// Element-wise Sigmoid (vectorized so it can accept the vectors passed in above)
	vector<double> vsResult(vx.GetSize());
	for (int ii = 0; ii < vx.GetSize(); ii++)
		vsResult[ii] = 1 / (1 + exp(-vx[ii]));
	return vsResult;
}

static vector<double> Sigmoid_Dev(vector<double> vx) {
	// Slope of the Sigmoid in terms of its output o: o * (1 - o)
	vector<double> vsResult(vx.GetSize());
	for (int ii = 0; ii < vx.GetSize(); ii++)
		vsResult[ii] = vx[ii] * (1 - vx[ii]);
	return vsResult;
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Training Data

static matrix Get_Training_Matrix() {
	matrix mat = {
		{0, 0, 0, 1},
		{1, 1, 1, 1},
		{1, 0, 1, 0},
		{0, 1, 1, 0},
		{0, 0, 1, 1},
		{0, 1, 0, 1},
		{1, 0, 0, 1}
	};
	return mat;
}

static vector Get_Training_Result() {
	vector vs = {0, 1, 0, 1, 0, 1, 0};
	return vs;
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Random

static vector<double> Get_Random_Vector(int nNum) {
	// Return a vector of nNum uniform random numbers in [-1, 1] with mean 0
	vector v(nNum);
	int nRet = unifrnd(v, nNum, 0, 1);  // Fill v with uniform random numbers in [0, 1]
	return 2*v - 1;  // Rescale from [0, 1] to [-1, 1]
}

Epilogue

Compile the Origin C code above and call the main function in the Script Window as follows (you can change the input vector to any other 4-digit combination of 0s and 1s):

Simple_Neural_Network_Main({0,1,0,0})

You should be able to see results like:

Random starting synaptic weights:
[...]

New synaptic weights after training:
[...]

Considering new situation [0, 1, 0, 0] -->
0.999998

You did it! You've built a simple neural network in plain Origin C!

You can see that the network trained itself, considered the new case {0, 1, 0, 0}, and gave the prediction 0.999998. It's almost 1!
