First Post, NN

Hey all,
I’m trying to write my first neural network. I decided to go the Python way and use the available libraries.

I have read some books on the subject and am familiar with the overall drill.
I have my inputs: 12,000 arrays, each with 500 double values, all in the range [-1, 1], and all labeled.
My question is:
since my inputs contain negative values, does that mean I HAVE TO use the sigmoid function as the activation function (at least for the first layer)?
(Because the ReLU function kind of ignores negative inputs?)

thanks in advance

No, you don’t have to; the bias can adjust itself to an appropriate value so that negative inputs can still be used.

Note that you should not use an activation function directly on the inputs regardless.
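
To make that concrete, here is a minimal NumPy sketch (not tied to any particular library; the shapes and values are made up for illustration) showing that negative inputs still contribute to a ReLU layer, because ReLU is applied to the weighted sum plus bias, not to the raw inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(-1.0, 1.0, size=(4, 500))   # 4 sample inputs in [-1, 1]

W = rng.normal(scale=0.05, size=(500, 64))  # first-layer weights (can be negative)
b = np.full(64, 0.1)                        # bias can shift pre-activations

z = x @ W + b           # pre-activation: negative inputs still contribute here
h = np.maximum(z, 0.0)  # ReLU applied to the weighted sum, not to the raw inputs

print(h.shape)          # (4, 64) -- the negative inputs were not "ignored"
```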


@reCurse
I can choose to implement a NN that either classifies my targets or calculates a single double value.
I.e. the problem I have at hand can be solved as a classification.
BUT for some reason, I prefer to build it so that it gives a single end result (one float in the range 0 to 1).
Does that mean I shouldn’t use the softmax function in my last layer?

Example:
for each input, the NN outputs a float in the range [0, 1];
the label (target) is in the range [0, 1] as well.
If I don’t use softmax, how do I calculate the cost? (Should I write a cost function myself?)

Classification only makes sense if you have at least 2 classes to categorize (i.e. 2 outputs). If you are interested in a single scalar output, then it is a regression problem, not a classification one.

That means you should probably keep the output as a linear activation and use MSE as your loss/cost.
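
For example, here is a minimal sketch of that regression setup in plain NumPy (the values are hypothetical, just to show the idea; a framework like PyTorch or Keras would give you MSE out of the box):

```python
import numpy as np

def mse(predictions, targets):
    # Mean squared error: average of (prediction - target)^2
    return np.mean((predictions - targets) ** 2)

# Suppose the last layer produces one linear (no softmax) value per input:
predictions = np.array([0.12, 0.80, 0.55])  # network outputs
targets     = np.array([0.10, 0.90, 0.40])  # labels in [0, 1]

cost = mse(predictions, targets)
print(cost)  # single scalar cost to minimize
```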
