Jul 25, 2018

Cost functions and their optimal choices - possible?

The question is: how do we make the optimal choice of cost function?

Or: how do we define the loss function, objective function, or cost function for the different scenarios below (see the sketch after the list)?

1.   Determine which source files contain a software bug.
2.   Determine whether a set of symptoms is indicative of cancer.
3.   Determine the sequence of forces to apply to a set of servo motors in order to draw a picture.
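As a rough illustration (the pairing of scenario to loss below is my own assumption, not taken from any reference): scenarios 1 and 2 look like classification problems, where a cross-entropy loss on a predicted probability is the usual choice, while scenario 3 looks like a continuous regression/control problem, where a squared-error loss over the force trajectory is more natural. A minimal numpy-only sketch:

import numpy as np

# Scenario 2 (cancer symptoms, binary classification): binary cross-entropy
# between the true label y in {0, 1} and the predicted probability p.
def binary_cross_entropy(y, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Scenario 3 (servo motor forces, continuous control): mean squared error
# between the predicted force sequence and a target trajectory.
def mean_squared_error(target, predicted):
    target, predicted = np.asarray(target), np.asarray(predicted)
    return np.mean((target - predicted) ** 2)

print(binary_cross_entropy(1, 0.9))                 # small loss: confident, correct prediction
print(mean_squared_error([1.0, 2.0], [1.1, 1.8]))   # small loss: forces close to target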

And the cost function may also differ across different representations of the problem:

1.  RNN
2.   LSTM
3.   RBM
4.   FF
5.   GRU   

And the same holds for many other different NN configurations.


So what makes the L2 norm so special?   Is there a better cost function?   Or is there one that can be derived through the evolution of training?   I.e., can we learn to derive the best cost function for learning?
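For some intuition on why the choice matters, here is a minimal numerical comparison (numpy only; the data is made up) of the L2 loss against the L1 loss. L2 penalizes errors quadratically, so a single outlier dominates the cost, while L1 grows only linearly:

import numpy as np

def l2_loss(y, y_hat):
    return np.mean((y - y_hat) ** 2)        # squared error: outliers dominate

def l1_loss(y, y_hat):
    return np.mean(np.abs(y - y_hat))       # absolute error: more robust to outliers

y     = np.array([0.0, 0.0, 0.0, 10.0])     # one outlier target
y_hat = np.array([0.0, 0.0, 0.0, 0.0])

print(l2_loss(y, y_hat))   # 25.0 -> the single outlier dominates the cost
print(l1_loss(y, y_hat))   # 2.5  -> the outlier has a much smaller effect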

And there are so many different formulations of the cost function.


I.e., given a fixed cost function ---> use the input/output pairs in the dataset for training.

Next, fixing the transformation for the input/output pairs, can we iterate and derive better learning curves?
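A minimal sketch of the first half of that loop, assuming a toy one-weight model and MSE as the fixed cost function (this is only an illustration, not any particular framework):

import numpy as np

rng = np.random.default_rng(0)

# Input/output pairs for a toy problem: y = 2x + noise.
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 0.1 * rng.normal(size=100)

w = 0.0                                   # single-weight "network"
lr = 0.1
for step in range(200):
    err = w * x - y
    cost = np.mean(err ** 2)              # fixed cost function: L2 / MSE
    grad = np.mean(2 * err * x)           # gradient of the cost w.r.t. w
    w -= lr * grad

print("learned weight:", round(w, 3))     # should end up close to 2.0

The second half of the question would then hold this trained transformation fixed and ask whether a different cost function would have produced a better learning curve.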

So, as this question asked:


it is hard to choose the right cost function given so much variability.

So is there an automated mechanism for choosing the right cost function to use?   Assume the neural network is starting from the beginning, so initially the weights are of "baby" quality - and after many rounds of evolution, the weights will gain more confidence and achieve better quality.

So, the cost function should also evolve.   Initially, the uneducated cost function could be populated with minimal effort, e.g. using just random values.   And as the system gains higher resolution, the cost function should be re-evaluated and improved.
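As a very rough toy of that idea (nothing below is an established algorithm; the setup, names, and the L_p cost family are my own assumptions), one could parameterize the cost function itself, start it from a random value, and keep mutations that improve held-out performance:

import numpy as np

rng = np.random.default_rng(1)

# Toy regression data with a few outliers, split into train/validation.
x = rng.uniform(-1, 1, size=300)
y = 3.0 * x + 0.1 * rng.normal(size=300)
y[:10] += 5.0                                  # inject outliers into the training portion
x_tr, y_tr, x_va, y_va = x[:200], y[:200], x[200:], y[200:]

def train(p, steps=150, lr=0.05):
    """Fit y ~ w*x by gradient descent on the cost mean(|w*x - y|**p)."""
    w = 0.0
    for _ in range(steps):
        err = w * x_tr - y_tr
        w -= lr * np.mean(p * np.sign(err) * np.abs(err) ** (p - 1) * x_tr)
    return w

# "Evolve" the cost-function parameter p: start from a random value,
# mutate it each generation, and keep whatever scores better on held-out data.
p = rng.uniform(1.0, 3.0)                      # uneducated initial cost: just a random value
best_score = np.inf
for generation in range(20):
    candidate = np.clip(p + rng.normal(scale=0.3), 1.0, 3.0)
    w = train(candidate)
    score = np.mean(np.abs(w * x_va - y_va))   # held-out evaluation of the learned model
    if score < best_score:
        p, best_score = candidate, score       # keep the improved cost function

print("evolved cost exponent p:", round(p, 2), "validation MAE:", round(best_score, 3))

In this toy, the evolution tends to drift p toward smaller exponents, since the outliers in the training data make the more aggressive costs fit worse on clean held-out data - a small example of the cost function being "re-evaluated and improved" as training progresses.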

