Visual speech recognition (VSR) is the task of recognizing spoken language from video input alone, without any audio. VSR has many applications as an assistive technology, especially if it can be deployed on mobile devices and embedded systems. The need for intensive computational resources and a large memory footprint are two major obstacles to deploying neural network models for VSR in resource-constrained environments. We propose MobiVSR, a novel end-to-end deep neural network architecture for word-level VSR, with a design parameter that balances the trade-off between the model's accuracy and its parameter count. We use depthwise 3D convolution along with channel shuffling for the first time in the domain of VSR and show how they make our model efficient. MobiVSR achieves an accuracy of 70% on the challenging Lip Reading in the Wild dataset with 6 times fewer parameters and a 20 times smaller memory footprint than the current state of the art. MobiVSR can further be compressed to 6 MB by applying post-training quantization.
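To make the two named operations concrete, the following PyTorch sketch pairs a depthwise 3D convolution with a ShuffleNet-style channel shuffle. It is a minimal illustration of the technique, not the actual MobiVSR block: the module name, group count, layer ordering, and hyperparameters are assumptions for illustration. The parameter saving comes from factorization: a full 3D convolution costs roughly C·C'·k³ weights, while a depthwise k³ convolution plus a pointwise 1×1×1 convolution costs only about C·k³ + C·C'.

```python
import torch
import torch.nn as nn


def channel_shuffle(x, groups):
    """Interleave channels across groups (as in ShuffleNet).

    x has shape (batch, channels, frames, height, width).
    """
    b, c, d, h, w = x.size()
    # Reshape to (groups, channels_per_group), swap the two axes,
    # and flatten back so each group sees channels from every other group.
    x = x.view(b, groups, c // groups, d, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, d, h, w)


class DepthwiseShuffle3D(nn.Module):
    """Hypothetical block: depthwise 3D conv -> channel shuffle -> grouped 1x1x1 conv."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, groups=4):
        super().__init__()
        self.groups = groups
        # groups=in_ch makes the 3D convolution depthwise: one filter per channel.
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch,
                                   bias=False)
        self.bn1 = nn.BatchNorm3d(in_ch)
        # Grouped pointwise convolution mixes channels cheaply.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1,
                                   groups=groups, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        # Shuffling lets information flow between the groups of the
        # grouped pointwise convolution that follows.
        x = channel_shuffle(x, self.groups)
        return self.relu(self.bn2(self.pointwise(x)))


# Usage on a dummy clip: (batch, channels, frames, height, width).
block = DepthwiseShuffle3D(in_ch=16, out_ch=32)
video = torch.randn(2, 16, 8, 44, 44)
out = block(video)  # -> torch.Size([2, 32, 8, 44, 44])
```

Without the shuffle, a stack of grouped convolutions would keep each channel group isolated; the shuffle restores cross-group information flow at zero parameter cost, which is what makes the grouped/depthwise factorization viable for an efficiency-oriented model.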