The recent emergence of single-photon avalanche diode (SPAD) arrays as imaging sensors, combining picosecond binning capability with single-photon sensitivity, has driven the rapid development of time-of-flight imaging systems. Used in conjunction with a synchronised light source, these sensors produce a 3D image. Here, we apply this 3D imaging capability to the problem of drone identification, orientation, and segmentation. The proliferation of semi-autonomous aerial multi-copters, i.e., drones, has raised concerns over the ability of existing aerial detection systems to accurately characterise such vehicles. We fuse the 3D imaging of SPAD sensors with the classification capabilities of a bespoke convolutional neural network (CNN) into a system capable of determining drone pose in flight. To overcome the lack of publicly available training data, we generate a photo-realistic dataset with which to train our network. After training, we predict the roll, pitch, and yaw of several different drone types with an accuracy greater than 90%.
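The pipeline summarised above (a depth image in, predicted roll/pitch/yaw out) can be illustrated with a minimal NumPy sketch. This is not the paper's bespoke CNN: the filter count, the 36-bin angle discretisation, and every function and weight name here are hypothetical stand-ins, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Naive 'valid' 2D convolution: (H, W) image, (K, kh, kw) kernels -> (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = image.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kernels[k])
    return out

def predict_pose(depth_image, weights):
    """Toy forward pass: conv -> ReLU -> global average pool -> three linear heads,
    each producing logits over discretised angle bins (roll, pitch, yaw)."""
    feats = np.maximum(conv2d(depth_image, weights["conv"]), 0.0)  # ReLU activation
    pooled = feats.mean(axis=(1, 2))                               # global average pooling
    logits = {axis: pooled @ weights[axis] for axis in ("roll", "pitch", "yaw")}
    # Map each head's argmax bin index back to an angle in degrees.
    n_bins = weights["roll"].shape[1]
    bin_width = 360.0 / n_bins
    return {axis: float(np.argmax(l) * bin_width) for axis, l in logits.items()}

# Hypothetical 32x32 depth image and untrained random weights
# (8 conv filters of size 3x3, 36 angle bins per pose axis).
weights = {
    "conv": rng.standard_normal((8, 3, 3)),
    "roll": rng.standard_normal((8, 36)),
    "pitch": rng.standard_normal((8, 36)),
    "yaw": rng.standard_normal((8, 36)),
}
depth = rng.random((32, 32))
pose = predict_pose(depth, weights)
```

Framing each angle as classification over bins (rather than direct regression) matches the abstract's report of pose "accuracy", which presupposes discrete predictions that can be scored as correct or incorrect.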