Fork of sentence_word2vec with small improvements.



# sentence_word2vec

word2vec with a context based on sentences, in C++.

This is based on the TensorFlow implementation of word2vec.

However, the context for the model is defined differently:

  • the context is defined in terms of sentences rather than a fixed-size window.
  • the context for a given word is all of the other words in the same sentence.

This is implemented in C++ in `sentence_word2vec_kernels.cc`.
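
To make the context definition concrete, here is a minimal Python sketch of the pair generation. This is only an illustration; the actual training logic is the C++ kernel above.

```python
# Minimal sketch of the sentence-level context (illustration only; the real
# implementation is the C++ kernel in sentence_word2vec_kernels.cc).
# Each word is paired with every other word in the same sentence.
def sentence_skipgram_pairs(sentence):
    """sentence: list of token ids. Returns (target, context) pairs."""
    return [(target, context)
            for i, target in enumerate(sentence)
            for j, context in enumerate(sentence)
            if i != j]

# A 'sentence' can be any bag of items, e.g. a playlist of song ids.
print(sentence_skipgram_pairs([7, 3, 9]))
# [(7, 3), (7, 9), (3, 7), (3, 9), (9, 7), (9, 3)]
```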

Why might this be useful? Sentence-level contexts can be used to model playlists, user histories for recommendation, or any other kind of 'bagged' data in which items co-occur as unordered groups.

## Usage

To compile the C++ ops used:

```
git clone https://github.com/altosaar/sentence_word2vec
cd sentence_word2vec
# pull the models repo submodule
git submodule update --init
./compile_ops.sh
```
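
Once compiled, the custom op can be loaded from Python with `tf.load_op_library`. A minimal sketch, assuming the shared library produced by `compile_ops.sh` is named `sentence_word2vec_ops.so` and sits next to the scripts (check what the script actually emits and what `word2vec.py` expects):

```python
# Sketch of loading the compiled op from Python. The shared library name
# sentence_word2vec_ops.so is an assumption; check what compile_ops.sh
# actually produces.
import os
import tensorflow as tf

lib_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                        'sentence_word2vec_ops.so')
word2vec_ops = tf.load_op_library(lib_path)
```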

To get the text8 data and split it into sentences for testing:

```
./get_data.sh
```
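
The training scripts expect one sentence per line with space-separated tokens. Since text8 is a single long line of text, one simple way to produce such a file is to chop it into fixed-size chunks; whether `get_data.sh` does exactly this is an assumption, and the sketch below is only illustrative:

```python
# Illustrative sketch only: produce a one-sentence-per-line training file from
# text8. text8 is a single long line, so here it is chopped into fixed-size
# chunks; whether get_data.sh does exactly this is an assumption.
CHUNK = 1000  # tokens per pseudo-sentence

with open('text8') as f:
    tokens = f.read().split()

with open('text8_split', 'w') as out:
    for start in range(0, len(tokens), CHUNK):
        out.write(' '.join(tokens[start:start + CHUNK]) + '\n')
```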

To run the code with a sentence-level context window:

```
python word2vec_optimized.py \
    --train_data text8_split \
    --eval_data questions-words.txt \
    --save_path /tmp \
    --sentence_level True
```
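
After training, the saved embeddings can be used for nearest-neighbor queries. A hedged sketch, assuming the embeddings have been exported as a NumPy array alongside a plain-text vocabulary file; the exact files written by `word2vec_optimized.py` under `--save_path` may differ:

```python
# Hedged sketch: nearest-neighbor lookup on trained embeddings. The file names
# /tmp/embeddings.npy and /tmp/vocab.txt are assumptions; adapt them to
# whatever word2vec_optimized.py writes under --save_path.
import numpy as np

embeddings = np.load('/tmp/embeddings.npy')    # assumed shape: (vocab_size, dim)
vocab = open('/tmp/vocab.txt').read().split()  # assumed: one token per line

def nearest(word, k=5):
    """Return the k words most similar to `word` by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = normed @ normed[vocab.index(word)]
    best = np.argsort(-scores)[1:k + 1]  # skip the query word itself
    return [vocab[i] for i in best]

print(nearest('king'))
```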

On a MacBook Air with the CPU below, the speed is around 17k words/second, up from around 2k words/second with a plain Python implementation.

```
➜  ~ sysctl -n machdep.cpu.brand_string
Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz
```

This directory contains models for unsupervised training of word embeddings using the model described in Mikolov et al., Efficient Estimation of Word Representations in Vector Space, ICLR 2013.

Detailed instructions and a description of this model are available in the TensorFlow tutorials.

| File | What's in it? |
| --- | --- |
| `word2vec.py` | A version of word2vec implemented using TensorFlow ops and minibatching. |
| `word2vec_optimized.py` | A version of word2vec implemented using C++ ops that does no minibatching. |
| `sentence_word2vec_kernels.cc` | Kernels for the custom input and training ops, including sentence-level contexts. |
| `sentence_word2vec_ops.cc` | The declarations of the custom ops. |