Creating the 3D model

Collecting the data

In July 2015, in collaboration with a team led by Dr Rab Scott (NAMRC), we used a Leica ScanStation P20 3D scanner to capture a 3D point cloud of the charnel chapel. Seventeen scans were taken at different locations (see Figures 1 and 2) and registered (using Leica Cyclone) to produce a model containing 60 million points.

View the 3D visualisations and walkthroughs

Read more in this news article about the capture work

Figure 1. Setting up the Leica ScanStation in the charnel chapel.

Figure 2. The 17 scan locations shown on a plan view of the chapel. The blue numbers indicate the approximate position of a scan location. A ‘2’ indicates that two scans were taken at that location, each at different heights.

3D laser scanners rapidly fire highly directional laser beams from a base station (which can rotate both horizontally and vertically) and detect the reflection from any surface that is hit. The distance of the scanned surface from the base station can be calculated from the laser pulse’s journey time, known as the ‘time-of-flight’. The colour of the scanned surfaces can also be recorded using an associated digital camera. The resulting output from a laser scanner with an associated camera is a ‘3D point cloud’.
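
As a rough illustration of the time-of-flight principle (a sketch of our own, not the scanner’s actual firmware, and with a made-up timing value), the distance follows from halving the pulse’s round-trip time multiplied by the speed of light:

```python
# Minimal sketch of the time-of-flight principle described above.
# The scanner times a laser pulse's round trip; since the pulse travels to
# the surface and back, the one-way distance is half the journey.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the distance (in metres) to the reflecting surface."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example (hypothetical numbers): a pulse that returns after ~33.4 nanoseconds
# has travelled roughly 10 m there and back, i.e. the surface is about 5 m away.
print(distance_from_time_of_flight(33.4e-9))  # ~5.0
```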

Figures 3 to 10 show the views from different capture positions, looking in specific directions. Figure 3 is from scan position 1, at the entrance to the room, and shows the other capture positions as ‘mirror balls’ floating in the space. Figure 4 shows the equivalent view without the mirror balls. The grey area at the bottom of the figure is the area that cannot be seen from the scan position, ie the area directly below the scanner’s tripod.

Figure 3. The view from scan position 1 looking into the room. The floating mirror balls show the positions where other scans were taken. The inset in the top-right of the image shows the scan positions.

Figure 4. The view from scan position 1, looking into the room.

Figure 5. Scan position 2 looking at the shelf of skulls on the opposite wall.

Figure 6. Scan position 8 looking at the far wall of the space.

Figure 7. Scan position 8 looking towards scan position 2, which is to the left of the entrance to the room.

Figure 8. Scan position 8 looking towards the right of the entrance to the room. The graffiti on the upper wall can be seen in the large version of this image.

Figure 9. Scan position 16, looking over one of the collections of bones in the middle of the room.

Figure 10. Scan position 15, by the wall opposite the entrance to the room, looking at the set of skulls that are covered in clear plastic bags.

The 3D point cloud

For each measurement taken by the laser scanner, and its associated digital camera, a three-dimensional (x, y, z) position and an associated colour (r, g, b) are stored. The collection of all measurements is called a 3D point cloud. Multiple scans, once registered, produce a very large 3D point cloud. A range of software can be used to visualise this data.
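
As an illustration of what this data looks like in practice (a sketch of our own, not the project’s actual processing pipeline), a point cloud can be held as an N×6 array, one row per measurement:

```python
import numpy as np

# Illustrative only: each point is one row of (x, y, z, r, g, b).
# A registered multi-scan model such as the one described above would have
# tens of millions of rows; here a handful of points are fabricated.
points = np.array([
    # x (m),  y (m),  z (m),   r,    g,    b   (colours in 0-255)
    [ 0.12,   1.45,   0.30,   186,  170,  150],
    [ 0.13,   1.44,   0.31,   190,  172,  148],
    [ 2.01,   0.02,   1.10,    90,   85,   80],
], dtype=np.float64)

xyz = points[:, :3]          # 3D positions
rgb = points[:, 3:] / 255.0  # colours, normalised to 0-1 for most viewers

print(xyz.shape, rgb.min(), rgb.max())
```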

Figure 11 uses Autodesk ReCap 360 and Figure 12 uses Potree, a free web-based viewer. To create a real-time visualisation of a large point cloud model, the size of the model can be reduced by deleting some of the points, or by viewing only a subset of them. For example, the image in Figure 12 contains only four million points; however, the brain fills in the details, so the shape of the room and the bones can still be identified.
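
A minimal sketch of the point-reduction idea, assuming the N×6 array layout shown above; viewers such as Potree actually use octree-based level-of-detail structures rather than a single random subsample, so this is only the simplest possible version:

```python
import numpy as np

def random_subsample(points: np.ndarray, target: int, seed: int = 0) -> np.ndarray:
    """Keep a random subset of rows from an (N, 6) point array."""
    if len(points) <= target:
        return points
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(points), size=target, replace=False)
    return points[keep]

# e.g. reduce a 60-million-point model to roughly the four million points
# shown in Figure 12 (numbers taken from the text above):
# small_cloud = random_subsample(full_cloud, target=4_000_000)
```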

Try out real-time interaction with the model using Potree

The 3D data set has been published online and is available to download from ORDA, the University of Sheffield’s online data repository.

Figure 11. 3D point cloud view of the charnel chapel from the entrance, visualised using Autodesk ReCap 360.

Figure 12. The 3D point cloud visualised using Potree.

Adding surfaces

Our subsequent research work has investigated approaches to creating a surface mesh from the point cloud. Early work (Crangle et al., 2016) produced a noisy model, demonstrating the problems of capturing such a geometrically complex space (eg the difficulty of positioning the scanner in the space and dealing with complex lighting conditions).
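
For readers unfamiliar with meshing, the sketch below shows one generic way to turn a point cloud into a surface mesh, using the open-source Open3D library and Poisson reconstruction. This is not the method from the cited paper, and the file names are hypothetical:

```python
# Illustrative sketch only: the papers cited here describe their own
# reconstruction approaches; this shows the general point-cloud-to-mesh idea.
import open3d as o3d

# Hypothetical file name; the published data set would need converting to a
# format Open3D can read (e.g. PLY).
pcd = o3d.io.read_point_cloud("charnel_chapel.ply")

# Surface reconstruction needs per-point normals, estimated from neighbours.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
)

# Poisson reconstruction fits a watertight surface to the oriented points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("charnel_chapel_mesh.ply", mesh)
```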

A collaboration with Wuyang Shui, a visiting researcher from Beijing Normal University, China, investigated a semi-automatic approach for producing a simplified mesh by downsampling the data set and simplifying different areas of the model in different ways (Shui et al., 2016a; 2016b). Again, the results demonstrated the challenges in dealing with such a complex data set.
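
One building block of such area-by-area simplification is detecting large planar regions (walls, floor) so they can be treated differently from the intricate bone stacks. The sketch below uses a RANSAC plane fit from Open3D as a generic illustration; it is not the algorithm from the cited papers, and the file name is hypothetical:

```python
# Illustrative sketch only: split a point cloud into one dominant planar
# region (e.g. a wall or the floor) and everything else.
import open3d as o3d

pcd = o3d.io.read_point_cloud("charnel_chapel.ply")  # hypothetical file name

# Fit one plane ax + by + cz + d = 0 with RANSAC, keeping the indices of the
# points that lie within 2 cm of it.
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.02, ransac_n=3, num_iterations=1000
)

planar_region = pcd.select_by_index(inlier_idx)           # candidate wall/floor
remaining = pcd.select_by_index(inlier_idx, invert=True)  # everything else
print(plane_model, len(planar_region.points), len(remaining.points))
```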

In the summer of 2016, as part of a SURE project involving undergraduate student James Williams, an aggressive simplification approach was used to produce a model suitable for real-time interaction on a website. However, as Figure 13 shows, the more the data is simplified, the more the model loses its realistic appearance. Here, the surfaces of the crania on the nearest stack of bones have merged into a single surface and have lost all their identifying features.

Figure 13. An aggressively simplified, meshed 3D model, suitable for real-time walkthroughs on a website.
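
To illustrate why heavy simplification merges nearby surfaces, the sketch below implements a simple vertex-clustering scheme (an assumption for illustration, not the exact method used in the SURE project): vertices that fall into the same coarse grid cell are merged, so as the cell size grows, points from adjacent crania collapse into one, which is the loss of detail visible in Figure 13.

```python
import numpy as np

def cluster_vertices(vertices: np.ndarray, cell_size: float) -> np.ndarray:
    """Snap vertices to a coarse grid and merge those that share a cell.

    The larger cell_size is, the smaller the model becomes, but vertices from
    different nearby surfaces also collapse together and detail is lost.
    """
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # One representative vertex per occupied cell: the mean of its members.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    merged = np.zeros((inverse.max() + 1, 3))
    for axis in range(3):
        merged[:, axis] = np.bincount(inverse, weights=vertices[:, axis]) / counts
    return merged

# Example: three vertices within 5 cm of each other collapse to a single point.
v = np.array([[0.00, 0.00, 0.00], [0.03, 0.01, 0.00], [0.02, 0.04, 0.01]])
print(cluster_vertices(v, cell_size=0.05))  # one merged vertex
```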

Academic papers and publications

Jenny Crangle, Elizabeth Craig-Atkins, Dawn Hadley, Peter Heywood, Tom Hodgson, Steve Maddock, Robin Scott and Adam Wiles. The Digital Ossuary: Rothwell (Northamptonshire, UK). Proc. CAA2016, the 44th Annual Conference on Computer Applications and Quantitative Methods in Archaeology, Oslo, Norway, 29 March to 2 April 2016, Session 6: Computer tools for depicting shape and detail in 3D archaeological models.

Wuyang Shui, Steve Maddock, Peter Heywood, Elizabeth Craig-Atkins, Jennifer Crangle, Dawn Hadley and Rab Scott. Using semi-automatic 3D scene reconstruction to create a digital medieval charnel chapel. Proc. CGVC2016, 15-16 September 2016, Bournemouth University, United Kingdom.

Wuyang Shui, Jin Liu, Pu Ren, Steve Maddock and Mingquan Zhou. Automatic planar shape segmentation from indoor point clouds. Proc. VRCAI2016, 3-4 December 2016, Zhuhai, China.

The Digital Ossuary project was funded by a University of Sheffield Digital Humanities Development Grant.