A Simpler Alternative to Neural Nets

The fundamental problem neural nets try to solve is approximating a function given only its inputs and outputs, but not the function itself. For example, you may be given the following information:
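To make this concrete, here is a minimal sketch in Python. The hidden function f(x, y) = x² + y³ is a hypothetical choice of mine for illustration; in the real problem you would only ever see the sample pairs it produces, never the function itself:

def hidden_f(x, y):
    # Pretend this body is invisible to us; all we observe are samples.
    # f(x, y) = x**2 + y**3 is assumed purely for illustration.
    return x**2 + y**3

samples = [((x, y), hidden_f(x, y)) for (x, y) in [(1, 1), (2, 1), (1, 2), (3, 2)]]
for (x, y), out in samples:
    print(f"f({x}, {y}) = {out}")
# f(1, 1) = 2
# f(2, 1) = 5
# f(1, 2) = 9
# f(3, 2) = 17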

The problem with neural nets is that they are complicated, specifically in the choices one must make when using them: how many hidden layers there should be, how many neurons per layer, which activation function to use, and so on. This involves a lot of knob-tuning, which is annoying, especially if you just want to approximate a simple function. In this article, I present a simpler way to approximate a function that removes much of the complexity involved with neural nets.

Consider that each term of the function has three values associated with it: a base (in this case, x or y), a coefficient (all 1 in this case), and an exponent. Now consider another function where x and y are no longer the parameters but constants, and the coefficients and exponents become the parameters instead. This function looks like the following:
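For the hypothetical two-term function above, the parameterized version would be

    g(x, y) = c1·x^e1 + c2·y^e2

where the coefficients c1, c2 and the exponents e1, e2 are the parameters to be found, and x and y are filled in from each sample. As a minimal Python sketch (the names here are mine, not the article's):

def g(params, x, y):
    # params holds the coefficient and exponent of each term:
    # (c1, e1) for the x term, (c2, e2) for the y term.
    c1, e1, c2, e2 = params
    return c1 * x**e1 + c2 * y**e2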

This function will be our approximating function. The question now is: what should the values of the parameters be? (Remember, we're pretending we don't know what the actual function is.)
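One standard way to answer this, sketched here as an assumption rather than necessarily the article's method, is to treat it as a least-squares problem: pick the parameters that minimize the squared error between g and the observed samples, and hand that objective to a generic optimizer.

from scipy.optimize import minimize

def loss(params):
    # Sum of squared errors of g against the samples defined earlier.
    return sum((g(params, x, y) - out)**2 for (x, y), out in samples)

result = minimize(loss, x0=[1.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
print(result.x)  # ideally close to [1, 2, 1, 3] for the hypothetical f above

Starting all four parameters at 1.0 is arbitrary; other starting points or optimizers would work just as well.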
