SONAR mapping of underwater environments produces dense point clouds. These maps have large memory footprints, are inherently noisy, and consist of raw data carrying no semantic information. This paper presents an approach to underwater semantic mapping in which known man-made structures appearing in multibeam SONAR data are automatically recognised. Given a set of SONAR images acquired by an Autonomous Underwater Vehicle (AUV) and a catalogue of a priori 3D CAD models of structures that may potentially be found in the data, our algorithm proceeds in two phases. First, we recognise objects using an efficient, rotation-invariant 2D descriptor combined with a histogram matching method. Then, we determine pose through a 6 degree-of-freedom registration of the 3D object to the local scene, initialised by a fast 2D correlation and refined with an iterative closest point (ICP)-based method. Once the structures are located and identified, we build a semantic representation of the world from the initial CAD models, resulting in a lightweight yet accurate world model. We demonstrate the applicability of our method on field data acquired by an AUV in Loch Linnhe, Scotland. Our method proves suitable for online semantic mapping of a partially man-made underwater environment such as a typical oil field.
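The ICP refinement mentioned in the second phase can be sketched as follows. This is a minimal 2D point-to-point ICP in Python/NumPy for illustration only: the registration described above is a full 6 degree-of-freedom alignment of a 3D CAD model to the local scene, seeded by the fast 2D correlation, and all function and variable names here are assumptions, not the paper's implementation.

```python
import numpy as np

def icp_2d(source, target, iters=30):
    """Illustrative point-to-point ICP in 2D.

    source, target: (N, 2) and (M, 2) arrays of points.
    Returns the accumulated rotation, translation, and aligned points.
    """
    src = source.copy()
    R_total = np.eye(2)
    t_total = np.zeros(2)
    for _ in range(iters):
        # 1. Nearest-neighbour correspondences (brute force).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # 2. Best rigid transform for these matches (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total, src
```

In the full pipeline this refinement step only needs to correct the small residual error left by the coarse 2D correlation, which is why a locally convergent method such as ICP is sufficient.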