- What is the difference between convolutional neural networks …
The idea is the same as with autoencoders or RBMs: translate many low-level features (e.g. user reviews or image pixels) into a compressed high-level representation (e.g. film genres or edges), but now the weights are learned only from neurons that are spatially close to each other.
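That locality constraint is easiest to see in parameter counts. Below is a minimal sketch (TensorFlow/Keras assumed, layer sizes arbitrary) contrasting a fully connected layer, where every unit is wired to every pixel, with a convolutional layer, where each unit only sees a small neighborhood and the same weights are shared across all positions.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))

# Fully connected: each of the 32 units sees all 784 pixels.
flat = tf.keras.layers.Flatten()(inputs)
dense = tf.keras.layers.Dense(32)(flat)

# Convolutional: each of the 32 filters sees only a 3x3 patch,
# and the same filter weights are reused at every spatial position.
conv = tf.keras.layers.Conv2D(32, kernel_size=3)(inputs)

print(tf.keras.Model(inputs, dense).count_params())  # 784*32 + 32 = 25120
print(tf.keras.Model(inputs, conv).count_params())   # 3*3*1*32 + 32 = 320
```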
- deep learning - When should I use a variational autoencoder as opposed …
- What are the differences between PCA and autoencoder?
Both PCA and autoencoders can do dimension reduction, so what are the differences between them? In what situations should I use one over the other?
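One way to make the comparison concrete: a linear autoencoder trained with MSE recovers the same subspace as PCA (though not necessarily the orthonormal principal axes), and nonlinear activations are precisely what lets an autoencoder go beyond PCA. A hedged sketch (TensorFlow/Keras and scikit-learn assumed, synthetic data):

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
X -= X.mean(axis=0)                                   # center the data, as PCA does

k = 3
pca = PCA(n_components=k).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))       # project onto top-k subspace

# Linear autoencoder: no activations, so it can only learn a linear projection.
inp = tf.keras.Input(shape=(20,))
code = tf.keras.layers.Dense(k, use_bias=False)(inp)  # encoder
out = tf.keras.layers.Dense(20, use_bias=False)(code) # decoder
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=300, batch_size=64, verbose=0)

# Both project onto a k-dimensional subspace; the errors should nearly match.
print(np.mean((X - X_pca) ** 2))
print(ae.evaluate(X, X, verbose=0))
```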
- What is the origin of the autoencoder neural networks?
The first clear autoencoder presentation featuring a feedforward, multilayer neural network with a bottleneck layer was presented by Kramer in 1991. He discusses dimensionality reduction and feature extraction, and applications such as noise filtering, anomaly detection, and input estimation. Variational autoencoders, referred to as "robust autoassociative neural networks", were …
- Choosing activation and loss functions in autoencoder
Here is the tutorial: https://blog.keras.io/building-autoencoders-in-keras.html. However, I am confused by the choice of activation and loss for the simple one-layer autoencoder (the first example in the link).
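For that first example, the usual reasoning goes: the MNIST pixels are scaled to [0, 1], so a sigmoid output matches the target range, and binary crossentropy then scores each pixel as an independent Bernoulli target, while ReLU on the 32-unit code leaves the representation unconstrained. A sketch of that setup (TensorFlow/Keras assumed; the tutorial's own optimizer choice may differ):

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(784,))
code = tf.keras.layers.Dense(32, activation="relu")(inp)      # encoder
out = tf.keras.layers.Dense(784, activation="sigmoid")(code)  # decoder, outputs in [0, 1]
autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```

If the inputs were instead standardized to arbitrary real values, a linear output layer with MSE would be the more natural pairing.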
- neural networks - Why do we need autoencoders? - Cross Validated
Recently, I have been studying autoencoders. If I understood correctly, an autoencoder is a neural network where the input layer is identical to the output layer. So, the neural network tries to pr…
- mse - Loss function for autoencoders - Cross Validated
I am experimenting a bit with autoencoders, and with TensorFlow I created a model that tries to reconstruct the MNIST dataset. My network is very simple: X, e1, e2, d1, Y, where e1 and e2 are the encoding …
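A sketch of that X, e1, e2, d1, Y topology (the layer widths here are assumptions): with a linear output layer, minimizing MSE amounts to assuming Gaussian reconstruction noise, a reasonable default for real-valued inputs.

```python
import tensorflow as tf

X = tf.keras.Input(shape=(784,))                        # flattened MNIST input
e1 = tf.keras.layers.Dense(128, activation="relu")(X)   # first encoding layer
e2 = tf.keras.layers.Dense(32, activation="relu")(e1)   # bottleneck code
d1 = tf.keras.layers.Dense(128, activation="relu")(e2)  # decoding layer
Y = tf.keras.layers.Dense(784)(d1)                      # linear output pairs with MSE
model = tf.keras.Model(X, Y)
model.compile(optimizer="adam", loss="mse")
```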
- When does my autoencoder start to overfit? - Cross Validated
I am working on anomaly detection using an autoencoder neural network with $1$ hidden layer. This is an unsupervised setting, as I do not have previous examples of anomalies. The input data has pat…
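A common way to answer that question empirically (a sketch; the data here is a synthetic stand-in): hold out a slice of the normal data, track validation reconstruction error, and treat the epoch where it stops improving, while training error keeps falling, as the onset of overfitting.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(2000, 16)).astype("float32")  # stand-in for the real data
n_features = X_normal.shape[1]

inp = tf.keras.Input(shape=(n_features,))
hidden = tf.keras.layers.Dense(8, activation="relu")(inp)  # the single hidden layer
out = tf.keras.layers.Dense(n_features)(hidden)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

# Stop once held-out reconstruction error no longer improves,
# and roll back to the best weights seen so far.
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                        restore_best_weights=True)
model.fit(X_normal, X_normal, validation_split=0.2,
          epochs=500, batch_size=32, callbacks=[stop], verbose=0)
```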