Earth observation (EO) plays a crucial role in creating and sustaining a resilient and prosperous society, with far-reaching consequences for all life and the planet itself. Remote sensing platforms such as satellites, airborne platforms, and, more recently, drones and UAVs are used for EO. They collect large amounts of data that must be downlinked to Earth for further processing and analysis. The bottleneck for such high-throughput acquisition is the downlink bandwidth, and data-centric solutions to image compression are required to address this deluge. In this work, semantic compression is studied through a compressed learning framework that uses only fast and sparse matrix-vector multiplication to encode the data. Camera noise and a communication channel are the considered sources of distortion. The complete semantic communication pipeline then consists of a learned low-complexity compression matrix that acts on the noisy camera output to generate, onboard, a vector of observations; this vector is downlinked through a communication channel, processed through an unrolled network, and then fed to a deep learning model performing the necessary downstream task; image classification is studied here. Distortions are compensated by unrolling layers of NA-ALISTA with a wavelet sparsity prior. Decoding is thus a plug-and-play approach designed according to the camera/environment information and the downstream task. The deep learning model for the downstream task is jointly fine-tuned with the compression matrix and the unrolled network through the loss function in an end-to-end fashion. It is shown that the addition of a recovery loss alongside the task-dependent losses improves downstream performance in noisy settings at low compression ratios.
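As a rough sketch of the setting described above (the notation here is assumed for illustration, not taken from the paper): with a learned compression matrix $A \in \mathbb{R}^{m \times n}$, $m \ll n$, camera noise $n_{\mathrm{cam}}$, and channel noise $n_{\mathrm{ch}}$, the downlinked observations of an image $x$ can be modelled as
\[
y = A\,(x + n_{\mathrm{cam}}) + n_{\mathrm{ch}},
\]
and a generic unrolled ISTA-style decoding layer with a wavelet sparsity prior takes the form
\[
x^{(k+1)} = \Psi^{\top}\, \eta_{\theta_k}\!\Big( \Psi\big( x^{(k)} - \gamma_k A^{\top}\!\big(A x^{(k)} - y\big) \big) \Big),
\]
where $\Psi$ denotes the wavelet transform, $\eta_{\theta_k}$ is soft-thresholding with threshold $\theta_k$, and $\gamma_k$ is a step size. NA-ALISTA, the unrolled network referred to above, differs from this plain ISTA sketch in that the per-iteration step sizes and thresholds are predicted adaptively by a small neural network rather than fixed.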