I know Jeff Donahue worked on LSTM models using Caffe. He also gave a nice tutorial at CVPR 2015, and he has a pull request adding RNN and LSTM layers.

Update: there is a new PR by Jeff Donahue including RNN and LSTM. This PR was merged into master in June 2016.
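For reference, here is a minimal sketch of how an LSTM layer can be declared through `caffe.NetSpec()` once that merged PR is in your Caffe build. The shapes, hidden size, and filler settings below are illustrative, not taken from the PR itself; the layer expects the input sequence plus a per-timestep sequence-continuation indicator as a second bottom.

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
# Input sequence x: T x N x D (T time steps, N independent streams,
# D features) and cont: T x N sequence-continuation indicators
# (0 at the start of each new sequence, 1 elsewhere).
n.x, n.cont = L.Input(shape=[dict(dim=[10, 4, 32]),
                             dict(dim=[10, 4])],
                      ntop=2)
# The merged PR adds an "LSTM" layer configured via recurrent_param.
n.lstm = L.LSTM(n.x, n.cont,
                recurrent_param=dict(
                    num_output=64,  # hidden state size (illustrative)
                    weight_filler=dict(type='uniform',
                                       min=-0.08, max=0.08),
                    bias_filler=dict(type='constant', value=0)))
print(n.to_proto())
```

Running the snippet only prints the generated prototxt; you would still hook it up to your data and loss layers before training.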