Mixing Things Up in Three Dimensions: The Many Uses of Blender in Archaeology

It is safe to say that archaeology generally isn’t the most well-funded discipline. Many of you reading this probably know this all too well. This can make experimenting with new technologies and incorporating them into one’s workflow particularly difficult.

Finding relatively low (or at least lower) cost solutions to technological problems in archaeology has always been an interest of mine. My first major breakthrough in scratching this itch was building a photography rig out of things lying around my house to take better photos for photogrammetry – a method of creating 3D scans from digital photographs. I went on to publish this setup and workflow in Advances in Archaeological Practice.

Once you have 3D assets of objects or sites, there are a myriad of things you can do with them, from scientific uses such as quantitative artifact analysis to public engagement purposes such as creating digital exhibits and visits. As part of my own desire to expand my horizons in terms of these possibilities, a few years ago I began learning Blender, a free and open-source computer graphics program that incorporates tools for 3D model creation and sculpting, animation, rendering, and even video editing.

I’ve now integrated Blender into my archaeological practice in several ways. One has been teaching: it is a great tool for creating resources for introductory-level students. For example, in 2020 I worked with Anthropology graduate students to create animations for our Introduction to Human Evolution course. These animations used changes in color and bone position to highlight how certain anatomical features vary across species or sexes. They were useful for communicating complex morphological information clearly, which was especially needed in the context of asynchronous remote learning.

Still from an animation demonstrating sexual dimorphism in gorillas by University of Minnesota graduate student Samantha Gogol.

Blender has also been very valuable to our work at the site of Kisese II. Kisese II is part of a series of painted rock shelters in the Kondoa region of Tanzania which have been collectively recognized by UNESCO as a World Heritage Site. It was originally excavated in the 1950s by Louis and Mary Leakey, and later Ray Inskeep. Our team’s recent work, beginning in 2017, has focused on delineating the bounds of the original pits in order to identify in situ deposits. We have experimented with using Blender to create models and animations to communicate to team members and outside audiences the ways in which our modern excavation intersects with historic trenches.

Prior to fieldwork this summer, we also used the modeling features in Blender to create an updated model estimating the extent of the original excavations. We integrated this with a geo-referenced 3D model of the shelter, and used this information to plan the position of our test pits for the upcoming field season. While in a video call, team members in different locations were able to experiment with the size, shape, and position of possible trenches. This allowed us to work together in real time and have a clear visualization of possibilities, making it much easier to balance time, safety, and scientific objectives.

Screenshot of the Blender excavation planning file for the 2022 field season at Kisese II. It includes a scan of the rock face, scans of the trenches dug in 2017 and 2019, an estimated model of the historic trenches made using profile drawings (tan), and the approximate excavation area for 2022 (blue).

I’m particularly excited about my newest form of experimentation with Blender, which involves its integrated scripting feature: Blender lets users write and run Python code. Let’s say you want a 3D visualization of your point provenience data, similar to what one could do in ArcGIS Pro, for example. In the example shown below, I have programmed Blender to read a comma-separated value (CSV) file. The script creates a sphere with a 20 cm radius at the X, Y, and Z coordinates found in the 4th, 5th, and 6th columns of the dataset. It then color-codes each sphere based on a classification (in this case, archaeological layer) found in column 2. My own use of scripting is in its infancy, but I am excited for the ways this tool can be used not just for visualization, but hopefully also for analysis.
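A script along these lines can be sketched briefly. This is a minimal illustration rather than the exact code from the screenshot: the column layout (coordinates in columns 4–6, layer in column 2) follows the description above, but the file name, layer names, and colors here are assumptions, and the `bpy` import is kept inside the sphere-creation function so the CSV-parsing logic can also run outside Blender.

```python
import csv

# Hypothetical mapping of archaeological layers to RGBA colors (assumed values).
LAYER_COLORS = {
    "Layer 1": (0.9, 0.2, 0.2, 1.0),
    "Layer 2": (0.2, 0.9, 0.2, 1.0),
    "Layer 3": (0.2, 0.2, 0.9, 1.0),
}

def parse_points(rows):
    """Extract (x, y, z, layer) tuples from CSV rows.

    Coordinates sit in the 4th-6th columns (indices 3-5) and the layer
    classification in the 2nd column (index 1), matching the column
    layout described in the post.
    """
    points = []
    for row in rows:
        x, y, z = (float(value) for value in row[3:6])
        points.append((x, y, z, row[1]))
    return points

def plot_points(points, radius=0.2):
    """Create one color-coded UV sphere per point (runs inside Blender).

    radius=0.2 gives a 20 cm sphere if one Blender unit equals one meter.
    """
    import bpy  # Blender's Python API; only available inside Blender
    for x, y, z, layer in points:
        bpy.ops.mesh.primitive_uv_sphere_add(radius=radius, location=(x, y, z))
        obj = bpy.context.active_object
        material = bpy.data.materials.new(name=f"mat_{layer}")
        # Fall back to grey for layers missing from the color map.
        material.diffuse_color = LAYER_COLORS.get(layer, (0.5, 0.5, 0.5, 1.0))
        obj.data.materials.append(material)

# Inside Blender's scripting workspace, one might then run:
#   with open("provenience.csv", newline="") as f:  # hypothetical file name
#       reader = csv.reader(f)
#       next(reader)  # skip the header row
#       plot_points(parse_points(reader))
```

One design note: building a separate material per sphere keeps the example short, but for large datasets it would be more efficient to create one material per layer up front and reuse it.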

Screenshot of Python code in Blender set to read a CSV file and generate color-coded spheres based on artifact provenience data.

Interested in trying this out for yourself? Blender has a bit of a steep learning curve, but the good news is that there are a ton of free resources available online! For example, this excellent tutorial series introduces viewers to the fundamentals of the software by creating a 3D model of a donut from scratch. So the next time you feel like adding to your technological toolkit, sit back with a coffee, bake a digital donut, and give Blender a spin!


The associated article – A Simple Photogrammetry Rig for the Reliable Creation of 3D Artifact Models in the Field – is free to access until the end of 2022.


About the author

Samantha Thi Porter is an Advanced Imaging and Visualization Research Associate at the University of Minnesota, Twin Cities. She runs the Advanced Imaging Service for Objects and Spaces (www.aisos.umn.edu) which provides access to equipment and expertise in 3D scanning, high resolution 2D documentation, online exhibits, and other types of digital imaging to the University community and beyond. She also serves as an ambassador for the IF/THEN initiative of the American Association for the Advancement of Science and Lyda Hill Philanthropies, which works to promote positive role models of women in STEM to young audiences.
