Neural fields have emerged as a powerful paradigm for representing signals such as images, videos, and 3D shapes. Although they can capture fine details, their efficiency as a data representation has not been extensively studied. While a signal can be represented by a neural field and stored as neural network parameters, general-purpose optimization algorithms do not explicitly exploit the spatial and temporal redundancy of signals. Inspired by standard video compression algorithms, we propose a neural field architecture for representing and compressing videos that deliberately removes data redundancy: instead of storing raw RGB colors, it reconstructs video frames from motion information across frames and residuals. Because motion information is typically smoother and less complex than the raw signal, maintaining it requires far fewer parameters. Reusing redundant color values further improves parameter efficiency. Additionally, we propose using more than one reference frame for video frame reconstruction. Experimental results show that the proposed method outperforms the baseline methods by a significant margin.
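To make the flow-plus-residual idea concrete, the following is a minimal NumPy sketch of reconstructing a frame by backward-warping a reference frame with a per-pixel flow field and adding a residual. The function name, nearest-neighbor sampling, and border clamping are illustrative assumptions, not the paper's exact model (which predicts flow and residuals with a neural field).

```python
import numpy as np

def reconstruct_frame(ref, flow, residual):
    """Hypothetical sketch: reconstruct a frame from a reference frame,
    a per-pixel flow field (H, W, 2) in (dy, dx) order, and a residual.

    Backward warping: each output pixel (y, x) samples the reference at
    (y + dy, x + dx), with nearest-neighbor rounding and border clamping,
    then the residual corrects colors the warp cannot explain.
    """
    h, w = ref.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return ref[src_y, src_x] + residual
```

For a static scene, zero flow and zero residual reproduce the reference exactly, which is why smooth, near-zero motion fields and sparse residuals can be stored with far fewer parameters than raw colors.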