2 Comments
Bogdan Calin:

Are autoencoder systems with pretrained RBMs no longer used nowadays? To me, as a beginner in machine learning, it sounded like a good idea to train a model to reproduce the original input, forcing it to compress the original features from 33 down to 4 and then expand them back to 33. An FNN like the one you used doesn't do that, right? It just goes from 33 to 4 and is then trained to predict the 2 labels. Isn't the old technique more robust?

Quantitativo:

Nowadays we don't need to pretrain RBMs. Current technology allows us to train everything end-to-end in one pass, which is what I meant :)
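To make the contrast concrete, here is a minimal sketch (not the author's actual code, and PyTorch is only an assumed choice of framework) of the two approaches discussed above, using the layer sizes from the question (33 features, a 4-unit bottleneck, 2 labels): a feed-forward network trained end-to-end in one pass versus the older pretrain-an-autoencoder-then-fine-tune pipeline.

```python
# Sketch only: layer sizes (33 -> 4 -> 2) come from the comment above; everything
# else (framework, losses, data) is illustrative, not the author's implementation.
import torch
import torch.nn as nn

n_features, n_bottleneck, n_labels = 33, 4, 2

# --- Modern approach: one network, trained end-to-end in one pass ---
ffn = nn.Sequential(
    nn.Linear(n_features, n_bottleneck),
    nn.ReLU(),
    nn.Linear(n_bottleneck, n_labels),
)
x = torch.randn(64, n_features)            # dummy batch of features
y = torch.randint(0, n_labels, (64,))      # dummy labels
clf_loss = nn.CrossEntropyLoss()(ffn(x), y)  # backprop this directly on labels

# --- Older approach: pretrain an autoencoder, then fine-tune the encoder ---
encoder = nn.Sequential(nn.Linear(n_features, n_bottleneck), nn.ReLU())
decoder = nn.Linear(n_bottleneck, n_features)
autoencoder = nn.Sequential(encoder, decoder)

# Step 1: train the autoencoder to reconstruct its input (no labels needed).
recon_loss = nn.MSELoss()(autoencoder(x), x)

# Step 2: reuse the pretrained encoder with a small classification head,
# then fine-tune the whole thing on (features, labels).
classifier = nn.Sequential(encoder, nn.Linear(n_bottleneck, n_labels))
fine_tune_loss = nn.CrossEntropyLoss()(classifier(x), y)
```

Both routes end up with a 33-to-4 compression feeding a 2-class head; the difference is that the second route first forces the 4-unit layer to reconstruct the input before any labels are seen, while the first learns the compression and the classification together in a single training run.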
