Flaws of Neural Networks
  • It has been discovered that a single neuron's feature is no more interpretable as a meaningful feature than that of a random set of neurons. That is, if you pick a random set of neurons and find the images that produce the maximum output on that set, those images are just as semantically similar as in the single-neuron case (see the first sketch after this post).

    This means that neural networks do not "unscramble" the data by mapping features to individual neurons in, say, the final layer. The information that the network extracts is just as much distributed across all of the neurons as it is localized in any single neuron.

    Every deep neural network also has "blind spots", in the sense that there are inputs very close to correctly classified examples which are nevertheless misclassified (the second sketch after this post shows how such an input can be found).

    Since the very start of neural network research it has been assumed that networks have the power to generalize. That is, if you train a network to recognize a cat using a particular set of cat photos, the network will, as long as it has been trained properly, be able to recognize a cat photo it hasn't seen before.

    However, this isn't true.
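
A minimal sketch of the experiment behind the first point, assuming feature activations have already been extracted from some layer of a trained network. The tensor shapes, the random stand-in features, and the unit index 42 are illustrative assumptions, not part of the original claim:

```python
import torch
import torch.nn.functional as F

def top_activating_images(features, direction, k=9):
    """Indices of the k images whose features project most strongly
    onto `direction`, a unit vector in the layer's activation space."""
    scores = features @ direction          # (N,) projection per image
    return torch.topk(scores, k).indices

# Stand-in for activations of one layer over N dataset images.
N, D = 10_000, 512
features = torch.randn(N, D)

single_neuron = torch.eye(D)[42]                       # one-hot: neuron 42 alone
random_direction = F.normalize(torch.randn(D), dim=0)  # random mix of all neurons

print(top_activating_images(features, single_neuron))
print(top_activating_images(features, random_direction))
# The observation in the post: with real activations and real images, the two
# resulting image sets look about equally coherent, i.e. individual neurons
# are not privileged carriers of meaning.
```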
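
And a sketch of how such a "blind spot" can be found, using the fast gradient sign method as one simple gradient-based way to construct the perturbation (not necessarily the search procedure the post alludes to). The pretrained ResNet-18, a recent torchvision, and the eps value are assumptions made for illustration:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ImageNet classifier; any differentiable model would do.
model = models.resnet18(weights="DEFAULT").eval()

def find_blind_spot(image, true_label, eps=0.01):
    """Perturb a correctly classified `image` (a (1, 3, H, W) tensor,
    already normalised for the model) so that it may become misclassified,
    while staying within eps of the original in every pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss the fastest.
    return (image + eps * image.grad.sign()).detach()

# Usage (hypothetical): x is a normalised image tensor, y its true class.
# x_adv = find_blind_spot(x, y)
# print(model(x).argmax(1), model(x_adv).argmax(1))  # often differ
```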

  • Back in the days when I was researching automatic speech recognition, Artificial Neural Networks were kind of en vogue. But they ultimately performed significantly worse than ordinary stochastic models (like Hidden Markov Models), so at some point people lost interest in ANNs.

    I think that ANNs might be a suitable technology if you want to emulate the sometimes erratic behaviour of living beings - but if you want a reliable tool to work with, other technologies are just better.