Using pre-trained word2vec with LSTM for word generation

I’ve created a gist with a simple generator that builds on top of your initial idea: it’s an LSTM network wired to pre-trained word2vec embeddings, trained to predict the next word in a sentence. The data is a list of abstracts from the arXiv website. I’ll highlight the most important parts here. Gensim Word2Vec: Your …
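The central wiring step in such a setup is mapping each vocabulary index to its row in the pre-trained embedding matrix, which the LSTM consumes instead of learning embeddings from scratch. Here is a minimal sketch of that index-to-vector mapping using a toy hand-made vocabulary (the real gist would pull the rows from the gensim model and hand the matrix to an Embedding-style layer as frozen weights):

```python
# Toy stand-in for pre-trained word2vec vectors (word -> vector).
# In real code these rows would come from the gensim KeyedVectors.
pretrained = {
    "the": [0.1, 0.2, 0.3],
    "cat": [0.4, 0.5, 0.6],
    "sat": [0.7, 0.8, 0.9],
}

# Assign each word a stable integer index, reserving 0 for padding/unknown.
word_to_index = {word: i + 1 for i, word in enumerate(sorted(pretrained))}

# Build the embedding matrix: row i holds the vector of the word with index i.
dim = 3
embedding_matrix = [[0.0] * dim]  # row 0 is the padding/unknown vector
for word in sorted(pretrained):
    embedding_matrix.append(pretrained[word])

# Encode a sentence as indices, then look the vectors up -- this lookup is
# exactly what an embedding layer initialized with these weights performs
# inside the network before the LSTM sees the sequence.
sentence = ["the", "cat", "sat"]
indices = [word_to_index.get(w, 0) for w in sentence]
vectors = [embedding_matrix[i] for i in indices]
print(indices)
print(vectors[0])
```

The names `pretrained`, `word_to_index`, and `embedding_matrix` are illustrative, not from the gist; the point is only the index/row correspondence that ties the word2vec vectors to the network input.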

Convert word2vec bin file to text

I use this code to load the binary model and then save it as a text file:

```python
from gensim.models.keyedvectors import KeyedVectors

model = KeyedVectors.load_word2vec_format('path/to/GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('path/to/GoogleNews-vectors-negative300.txt', binary=False)
```

References: API and nullege.

Note: the code above is for newer versions of gensim. For older versions, I used this code:

```python
from gensim.models import word2vec

model = word2vec.Word2Vec.load_word2vec_format('path/to/GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('path/to/GoogleNews-vectors-negative300.txt', binary=False)
```

How to calculate the sentence similarity using word2vec model of gensim with python

This is actually a pretty challenging problem you are asking about. Computing sentence similarity requires building a grammatical model of the sentence, understanding equivalent structures (e.g. “he walked to the store yesterday” and “yesterday, he walked to the store”), and finding similarity not just in the pronouns and verbs but also in the proper nouns, finding …
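One common baseline that sidesteps full grammatical modeling is to average the word2vec vectors of each sentence’s words and compare the averages with cosine similarity. It deliberately ignores word order, which is exactly the limitation discussed above: the two equivalent sentences score as identical because they contain the same bag of words. A pure-Python sketch with a toy embedding (real code would look the vectors up in the gensim model):

```python
import math

# Toy word vectors; in practice these come from a trained word2vec model.
vectors = {
    "he": [0.9, 0.1], "walked": [0.2, 0.8],
    "store": [0.5, 0.5], "yesterday": [0.3, 0.7],
}

def sentence_vector(words):
    """Average the vectors of the in-vocabulary words of a sentence."""
    known = [vectors[w] for w in words if w in vectors]
    return [sum(dims) / len(known) for dims in zip(*known)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

s1 = sentence_vector(["he", "walked", "store", "yesterday"])
s2 = sentence_vector(["yesterday", "he", "walked", "store"])
similarity = cosine(s1, s2)  # ~1.0: same words, so order is invisible here
print(similarity)
```

This averaging baseline is often surprisingly strong, but it cannot distinguish “dog bites man” from “man bites dog”; that is where the grammatical modeling mentioned above becomes necessary.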

My Doc2Vec code, after many loops/epochs of training, isn’t giving good results. What might be wrong?

Do not call .train() multiple times in your own loop that tries to do the alpha arithmetic itself. It’s unnecessary, and it’s error-prone. Specifically, in the above code, decrementing the original 0.025 alpha by 0.001 forty times results in a final alpha of 0.025 - 40*0.001 = -0.015, which means alpha was negative for many of the training epochs. …