In this post I’m going to describe how to get Google’s pre-trained Word2Vec model up and running in Python to play with.
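As a quick taste of where that post ends up, here is a minimal sketch of loading the pre-trained vectors with the gensim library (an assumption on my part that you're using gensim; the file path is hypothetical, pointing at the `GoogleNews-vectors-negative300.bin` file Google distributes):

```python
# Minimal sketch: load Google's pre-trained Word2Vec vectors with gensim.
# Assumes gensim is installed and the (large) binary file has been
# downloaded; the path below is a placeholder.
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# Once loaded, you can query for nearest neighbors in the vector space.
print(model.most_similar('king', topn=5))
```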
NVIDIA puts on its GPU Technology Conference (GTC) each year to highlight work being done on GPUs outside of graphics, including machine learning.
This post is just intended to capture my notes on the PageRank algorithm as described in the Mining Massive Datasets course on Coursera.
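For reference, here is a minimal sketch of the power-iteration form of PageRank with teleportation that the course covers, using a tiny made-up three-page graph (the graph and the β value are just illustrative assumptions):

```python
# Power-iteration sketch of PageRank with teleportation:
#   r_new = beta * M @ r + (1 - beta) / N
# The 3-page link graph below is made up for illustration.
import numpy as np

beta = 0.85                      # probability of following a link
M = np.array([[0.0, 0.5, 1.0],   # column-stochastic link matrix:
              [0.5, 0.0, 0.0],   # M[i, j] = 1/out-degree(j) if j links to i
              [0.5, 0.5, 0.0]])
N = M.shape[1]

r = np.full(N, 1.0 / N)          # start from a uniform rank vector
for _ in range(100):
    r_new = beta * M @ r + (1 - beta) / N
    delta = np.abs(r_new - r).sum()
    r = r_new
    if delta < 1e-9:             # stop once the ranks have converged
        break

print(r)                         # importance scores summing to 1
```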
This post provides an overview of NVIDIA’s sample project ‘matrixMulCUBLAS’, which demonstrates fast matrix multiplication with cuBLAS. The example can be a little confusing, and I think it warrants some explanation.
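For orientation before diving into the sample: the core cuBLAS routine, `cublasSgemm`, computes the general operation C = α·A·B + β·C on column-major, single-precision matrices. Here is a rough NumPy sketch of just that arithmetic (the matrix sizes are made-up, and this only mirrors the math, not the GPU API):

```python
# Rough sketch of the operation cublasSgemm performs:
#   C = alpha * A @ B + beta * C
# cuBLAS itself runs this on the GPU with column-major storage;
# the shapes below are arbitrary illustrative choices.
import numpy as np

alpha, beta = 1.0, 0.0
A = np.random.rand(4, 3).astype(np.float32)
B = np.random.rand(3, 5).astype(np.float32)
C = np.zeros((4, 5), dtype=np.float32)

# With beta = 0 this reduces to a plain matrix product A @ B.
C = alpha * (A @ B) + beta * C
print(C.shape)  # (4, 5)
```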