High-quality virtual views must be synthesized from adjacent available views in free viewpoint video (FVV) and multiview video coding (MVC) to provide users with a more realistic 3D viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion and by integer rounding errors during warping. To remove the holes in the virtual view, existing techniques exploit spatial and temporal correlation in intra-/inter-view images and depth maps. However, they still suffer quality degradation along the boundary between foreground and background areas due to the low spatial correlation in texture images and the low correspondence in inter-view depth maps. To overcome these limitations, our proposed technique uses multiple models in Gaussian mixture modelling (GMM) to separate background and foreground pixels. The missing pixels introduced by the warping process are recovered by an adaptive weighted average of the pixel intensities from the corresponding GMM model(s) and the warped image. The weights vary with time to accommodate changes due to a dynamic background and the motion of moving objects. We also introduce an adaptive strategy to reset the GMM if the contributions of its pixel intensities drop significantly. Our experimental results indicate that the proposed approach improves PSNR by 5.40 to 6.60 dB compared with relevant methods. To verify the effectiveness of the proposed view synthesis technique, we use the synthesized view as an extra reference frame in motion estimation for MVC. The experimental results confirm that the proposed view synthesis improves PSNR by 3.15 to 5.13 dB compared to the conventional three reference frames.
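The hole-recovery step above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes a precomputed per-pixel GMM background estimate `bg_mean`, a binary `hole_mask` marking missing warped pixels, and a fixed blending weight `w_bg` standing in for the paper's time-varying, model-dependent weights.

```python
import numpy as np

def fill_holes(warped, hole_mask, bg_mean, w_bg=0.6):
    """Fill warping holes by blending a GMM background estimate with
    nearby valid warped-image intensities (illustrative sketch only)."""
    out = warped.astype(np.float64).copy()
    ys, xs = np.nonzero(hole_mask)
    for y, x in zip(ys, xs):
        # Average the valid (non-hole) warped pixels in a 3x3 neighborhood.
        y0, y1 = max(y - 1, 0), min(y + 2, warped.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, warped.shape[1])
        patch = warped[y0:y1, x0:x1]
        valid = ~hole_mask[y0:y1, x0:x1]
        local = patch[valid].mean() if valid.any() else bg_mean[y, x]
        # Weighted average of the GMM background and the warped image,
        # standing in for the adaptive, time-varying weights in the paper.
        out[y, x] = w_bg * bg_mean[y, x] + (1.0 - w_bg) * local
    return out
```

In the proposed technique the weight on the GMM term would grow when the background model explains the pixel well and shrink near moving objects; the fixed `w_bg` here only marks where that adaptation plugs in.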