Identifying the source social network of a downloaded image is an important multimedia forensic task with significant cybersecurity implications, given the sheer volume of images and videos shared across social media platforms. The task has been shown to be feasible by exploiting distinctive traces that social networks (SNs) embed in image content. To further advance this area, we propose a novel framework, called FusionNET, that integrates two established convolutional neural networks (CNNs): the first (named 1D-CNN) learns discriminative features from the histogram of discrete cosine transform (DCT) coefficients, while the second (named 2D-CNN) infers unique attributes from the sensor-related noise residual of the images in question. FusionNET then fuses the separately learned features to inform the ensuing source-identification (source-oriented image classification) component. A series of experiments on image datasets spanning various SNs and instant messaging apps validated the feasibility of FusionNET, which was also compared against the standalone 1D-CNN and 2D-CNN, with encouraging results.
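To make the two-branch design concrete, the sketch below illustrates in plain NumPy the two kinds of inputs the abstract describes (a DCT-coefficient histogram for the 1D branch and a high-pass noise residual for the 2D branch) and a naive concatenation-style late fusion. All function names, bin counts, and the 3x3 mean-filter denoiser are illustrative assumptions, not the authors' implementation; in FusionNET each branch would be a trained CNN rather than a fixed histogram.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequency basis vectors)."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] /= np.sqrt(2.0)
    return M

def dct_hist_features(img, bins=41):
    """1D-branch input: histogram of block-wise 8x8 DCT coefficients."""
    D = dct_matrix(8)
    h, w = img.shape
    coeffs = []
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            coeffs.append((D @ img[i:i + 8, j:j + 8] @ D.T).ravel())
    c = np.concatenate(coeffs)
    # coefficient range and bin count are arbitrary choices for this sketch
    hist, _ = np.histogram(np.round(c), bins=bins, range=(-20.5, 20.5))
    return hist / max(hist.sum(), 1)

def residual_features(img, bins=32):
    """2D-branch input: noise residual (image minus a crude 3x3 mean-filter
    denoise), summarized here as a histogram for simplicity."""
    pad = np.pad(img, 1, mode="edge")
    den = sum(pad[di:di + img.shape[0], dj:dj + img.shape[1]]
              for di in range(3) for dj in range(3)) / 9.0
    res = img - den
    hist, _ = np.histogram(res, bins=bins, range=(-16, 16))
    return hist / max(hist.sum(), 1)

def fused_features(img):
    """FusionNET-style late fusion: concatenate the two branch features,
    ready to feed a downstream source-classification layer."""
    return np.concatenate([dct_hist_features(img), residual_features(img)])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in grayscale image
feat = fused_features(img)
print(feat.shape)  # (73,) = 41 DCT-histogram bins + 32 residual bins
```

A trained fusion network would replace the plain concatenation with learned layers, but the key design point survives: the two branches see complementary evidence (compression statistics vs. sensor noise), and fusing them gives the classifier both.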