Word2vec is an amazing tool that automatically learns relationships between words, in other words their similarities. Unfortunately, only a CPU implementation is available from Google for now.

I ran the GoogleNews-2012 text corpus (1 GB) through the tool and it took about 12 hours to generate vectors. Running the same corpus on an Nvidia GPU with CUDA gives roughly a 20X speedup at 1500 words per second, completing in about 30-35 minutes on an Nvidia GTX 750 Ti card. There are a few GPU implementations of word2vec available, including:

https://github.com/whatupbiatch/cuda-word2vec
https://github.com/fengChenHPC/word2vec_cbow

I found fengChenHPC's to be the fastest, with a CBOW implementation in CUDA. You can install and run it like this:

git clone https://github.com/fengChenHPC/word2vec_cbow.git
cd word2vec_cbow
sudo make all
./demo-word.sh

or

./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 7 -negative 1 -hs 1 -sample 1e-3 -threads 1 -binary 1 -save-vocab voc
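The -save-vocab voc flag above writes the learned vocabulary to a plain-text file, one "word count" pair per line. Here is a minimal Python sketch for inspecting that file; the sample words and counts are made up for illustration:

```python
# Create a tiny sample vocabulary file in the word2vec -save-vocab format
# ("word count" per line). The filename "voc" matches the flag used above;
# the contents here are invented just to demonstrate the parser.
with open("voc", "w") as f:
    f.write("</s> 0\nthe 1061396\nof 593677\n")

def load_vocab(path):
    """Parse a word2vec vocabulary file into a {word: count} dict."""
    vocab = {}
    with open(path) as f:
        for line in f:
            word, count = line.split()
            vocab[word] = int(count)
    return vocab

vocab = load_vocab("voc")
print(len(vocab), "words;", "'the' occurs", vocab["the"], "times")
```

This is handy for checking that frequent words got reasonable counts before spending GPU time on training.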

After the vectors are generated and dumped, you can query cosine distances with the distance tool (copied from http://word2vec.googlecode.com/svn/trunk/):

./distance vectors.bin
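If you want to query the vectors from your own code instead of the distance tool, the binary file follows the standard word2vec layout: a text header "vocab_size dim\n", then for each word its string, a space, dim float32 values, and a newline. A minimal Python sketch, demonstrated on a tiny made-up vectors file rather than the real vectors.bin:

```python
import math
import struct

def load_vectors(path):
    """Read a word2vec-format binary vectors file into {word: tuple_of_floats}."""
    vecs = {}
    with open(path, "rb") as f:
        vocab_size, dim = (int(x) for x in f.readline().split())
        for _ in range(vocab_size):
            word = b""
            while True:               # word is terminated by a single space
                ch = f.read(1)
                if ch == b" ":
                    break
                word += ch
            vecs[word.decode("utf-8")] = struct.unpack("%df" % dim, f.read(4 * dim))
            f.read(1)                 # skip the trailing newline
    return vecs

def cosine(a, b):
    """Cosine similarity, as reported by the distance tool."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Write a toy 3-word, 2-dimensional file in the same format to demonstrate;
# in a real run you would point load_vectors at vectors.bin instead.
with open("toy_vectors.bin", "wb") as f:
    f.write(b"3 2\n")
    for word, vec in [(b"king", (1.0, 0.0)),
                      (b"queen", (0.9, 0.1)),
                      (b"banana", (0.0, 1.0))]:
        f.write(word + b" " + struct.pack("2f", *vec) + b"\n")

vecs = load_vectors("toy_vectors.bin")
print("king~queen:", cosine(vecs["king"], vecs["queen"]))
print("king~banana:", cosine(vecs["king"], vecs["banana"]))
```

With the toy data, "queen" ranks closer to "king" than "banana" does, which is exactly the kind of ranking the distance tool prints for real trained vectors.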