Abstract: Historical aerial photographs provide salient information on the historical state of the landscape. The exploitation of these archives is often limited by their accessibility and by the time-consuming process of digitizing the analogue copies at high resolution and processing them with a proper photogrammetric workflow. Furthermore, these data carry limited spectral information, as they very often consist of a single band. Our work presents a first application of deep learning to the extraction of land cover from historical aerial panchromatic photographs of the African cities of Goma, Bukavu and Bujumbura. We evaluate the suitability of deep learning for land cover generation from a challenging dataset of photographs from the 1940s and 1950s that covers large geographical extents and exhibits radiometric variations between dates and locations. A fully convolutional approach is investigated by considering two network architectures with different strategies for exploiting contextual information: the first uses atrous convolutional layers without downsampling, whereas the second (U-Net) has both downsampling and learned upsampling convolutional layers. The networks are trained to detect three main classes, namely buildings, high vegetation, and a mixed class of bare land and low vegetation. High overall accuracies of >90% in Goma-Gisenyi and Bukavu, and >85% in Bujumbura, are obtained. This work provides a novel methodology, outperforming a baseline standard machine learning classifier, for the exploitation of the vast archives of historical aerial photographs that can aid long-term environmental baseline studies. Future work will entail developing domain adaptation strategies to make the trained network robust across different image mosaics.
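The two contextual strategies mentioned above can be illustrated with a minimal sketch, assuming PyTorch; the layer counts, channel widths and patch size below are illustrative placeholders, not the configurations used in the paper. Both toy networks take a single-band (panchromatic) patch and return a per-pixel logit map for the three classes (buildings, high vegetation, mixed bare land / low vegetation).

```python
# Hypothetical sketch of the two fully convolutional strategies: dilation
# rates, depths and channel widths are assumptions for illustration only.
import torch
import torch.nn as nn

class AtrousNet(nn.Module):
    """Strategy 1: atrous (dilated) convolutions, no downsampling.
    Context is enlarged by increasing the dilation rate while the
    spatial resolution of the feature maps stays constant."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=8, dilation=8), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        return self.classifier(self.features(x))

class MiniUNet(nn.Module):
    """Strategy 2: U-Net style encoder-decoder. Context is gathered by
    downsampling, and spatial detail is recovered with a learned
    (transposed-convolution) upsampling layer plus a skip connection."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)   # learned upsampling
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.classifier = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                          # full resolution features
        e2 = self.enc2(self.pool(e1))              # half-resolution context
        d1 = torch.cat([self.up(e2), e1], dim=1)   # upsample + skip connection
        return self.classifier(self.dec1(d1))

if __name__ == "__main__":
    patch = torch.randn(1, 1, 256, 256)            # one panchromatic patch
    print(AtrousNet()(patch).shape)                # torch.Size([1, 3, 256, 256])
    print(MiniUNet()(patch).shape)                 # torch.Size([1, 3, 256, 256])
```

In both cases the output retains the input's spatial dimensions, so a per-pixel land cover map is obtained directly; the difference lies in how the receptive field is grown (dilation versus pooling with learned upsampling).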