In January 2017, the Getty Museum began a major overhaul of the Getty Villa, which contains an extensive collection of coins, gems, and rings from around 3000 B.C. to around 600 A.D.
As part of that overhaul, they commissioned Guidekick to develop a digital kiosk that would take advantage of the Getty's state-of-the-art imaging laboratories, letting visitors view these small objects in far greater detail than a regular display case allows. To this end, the Getty wanted to deploy a 12.9" iPad Pro next to each physical case so that visitors could naturally zoom into each item and see details normally invisible to the naked eye. Alongside these images, the app would display both descriptive and interpretive text, styled to match the physical labels.
At the time, I was subcontracting with Guidekick on a number of their tour guide apps. When they landed the Getty contract, they asked me to take on development duties, working alongside a producer and a designer as well as the Getty team. It was clear right away that the biggest challenge would be incorporating the Getty's high-resolution photographs, which ran up to 12k x 12k pixels. The first-generation iPad Pro had a maximum texture size of 4096 x 4096, which simply wasn't going to be sufficient, so I had to find a way to work around the hardware limitation.
The obvious solution seemed to be to break the images up into 4k tiles and then piece them back together again. However, there were some significant barriers to making that work nicely. Unity, for instance, has a “pixel-perfect” option on its canvas object, which allows images to be positioned at exact (integer) pixel coordinates. This is very useful for retro, pixel-art effects, as it prevents floating-point imprecision from leaving tiny gaps between tiles and minimizes the effects of texture filtering.
However, it presented a new problem for my digital label system, because we needed to pan and zoom smoothly around our huge images and to transition between different UI views. A pixel-perfect canvas locks images to exact integer pixel coordinates, which prevents smooth movement: you get a jitter effect when panning and zooming as the images jump between integer positions instead of gliding between them. The actual distances involved are small, but the rounding effect is extremely unpleasant and wasn't acceptable for our purposes at all. To get pixel-perfect coordinates and a perfectly smooth pan and zoom at the same time, I had to combine a number of discrete canvases, all of which had to be kept in sync with each other at all times.
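The production code isn't public, but a minimal sketch of the idea looks something like this: a single controller drives every canvas from one shared pan/zoom state, snapping to pixels only when the view is at rest. All the names here (CanvasSyncController, contentRoots) are hypothetical.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: one controller applies the same pan/zoom to every
// canvas in the same frame, so the layers can never drift out of sync.
public class CanvasSyncController : MonoBehaviour
{
    [SerializeField] private List<Canvas> canvases = new List<Canvas>();
    [SerializeField] private List<RectTransform> contentRoots = new List<RectTransform>();

    private Vector2 pan;
    private float zoom = 1f;
    private bool isMoving;

    // Called by the gesture-handling code (not shown) every frame.
    public void SetPanZoom(Vector2 newPan, float newZoom, bool moving)
    {
        pan = newPan;
        zoom = newZoom;
        isMoving = moving;
    }

    private void LateUpdate()
    {
        // While the user is panning or zooming, disable pixel snapping so
        // motion is smooth; re-enable it at rest so tiles sit on exact pixels.
        foreach (Canvas canvas in canvases)
            canvas.pixelPerfect = !isMoving;

        // Apply the identical transform to every layer.
        foreach (RectTransform root in contentRoots)
        {
            root.anchoredPosition = pan;
            root.localScale = Vector3.one * zoom;
        }
    }
}
```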
This alone was not sufficient, however. Even with images locked to integer coordinates, mipmaps and texture filtering still produce tiny seams between tiles, and some kinds of imagery make them far more noticeable than others. In a regular photograph, with natural variation in color throughout, the minute cracks between tiles are completely imperceptible, even when zoomed all the way in. However, the Getty often includes photographs of “impressions” of a gem or coin: very high-quality imprints created using a mold. Because the impression photos contain large blocks of flat white pixels, a seam of not-quite-white pixels running between two tiles is incredibly noticeable. So we also needed to completely disable mipmap generation and texture filtering to eliminate these ugly seams from our images.
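In Unity terms, that means making sure every tile texture is imported with no mip chain and no filtering. One way to enforce this across an editor pipeline is an AssetPostprocessor; this sketch assumes the tiles live under a folder named "Tiles", which is my invention rather than the shipped setup.

```csharp
using UnityEditor;
using UnityEngine;

// Illustrative editor script: force seam-free import settings on every
// texture under a (hypothetical) Tiles folder before it is packed into
// an assetbundle.
public class TileImportSettings : AssetPostprocessor
{
    private void OnPreprocessTexture()
    {
        if (!assetPath.Contains("/Tiles/")) return;

        var importer = (TextureImporter)assetImporter;
        importer.mipmapEnabled = false;           // no mipmaps: no downsampled edge bleed
        importer.filterMode = FilterMode.Point;   // no interpolation across tile seams
        importer.wrapMode = TextureWrapMode.Clamp;
        importer.npotScale = TextureImporterNPOTScale.None; // preserve exact tile dimensions
    }
}
```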
Of course, Unity doesn't provide the tools needed for this kind of image processing, so I had to find a third-party library. I ended up using the GDI+ API, which has a reasonably robust set of image-manipulation functions. At the time it was mostly Windows-only, but it was possible to get an equivalent set of functions running on Mac as well. This allowed me to cut our huge images into 4k tiles and write them out into Unity assetbundles. It also allowed me to create low-resolution impostors for the case view: it certainly wouldn't be practical to keep a 12k x 12k image in memory for every coin in a case, for instance.
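As a rough sketch, the tiling and impostor steps with System.Drawing (the .NET wrapper over GDI+) might look like the following; the class name, file layout, and impostor size are illustrative.

```csharp
using System.Drawing;
using System.Drawing.Imaging;

// Sketch of the offline preprocessing: cut a huge source image into
// 4096-pixel tiles and produce a small impostor for the case view.
public static class ImageTiler
{
    private const int TileSize = 4096;

    public static void CutIntoTiles(string sourcePath, string outputFolder)
    {
        using (var source = new Bitmap(sourcePath))
        {
            for (int y = 0; y < source.Height; y += TileSize)
            for (int x = 0; x < source.Width; x += TileSize)
            {
                // Edge tiles may be smaller than 4096 in either dimension.
                int w = System.Math.Min(TileSize, source.Width - x);
                int h = System.Math.Min(TileSize, source.Height - y);

                using (var tile = source.Clone(new Rectangle(x, y, w, h), source.PixelFormat))
                {
                    tile.Save($"{outputFolder}/tile_{x / TileSize}_{y / TileSize}.png",
                              ImageFormat.Png);
                }
            }
        }
    }

    // Low-resolution stand-in so a full case of items never needs the
    // 12k x 12k originals in memory at once.
    public static void CreateImpostor(string sourcePath, string outputPath, int maxSize)
    {
        using (var source = new Bitmap(sourcePath))
        {
            float scale = (float)maxSize / System.Math.Max(source.Width, source.Height);
            using (var impostor = new Bitmap(source,
                       (int)(source.Width * scale), (int)(source.Height * scale)))
            {
                impostor.Save(outputPath, ImageFormat.Png);
            }
        }
    }
}
```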
Another benefit of this kind of preprocessing was the ability to generate high-quality drop shadows for all of the items. I just needed to desaturate and threshold the source image to get a suitable silhouette, then apply an edge blur to soften it. Offset the result by a few pixels in each dimension and you have a nice drop shadow. Optionally, you can then bake the drop shadow into the parent image to reduce draw calls and make transitions much easier to deal with.
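A simplified version of that shadow pass might look like this, assuming source images with a transparent background (for flat-white backgrounds you would threshold on luminance instead). A production version would use LockBits rather than the slow GetPixel/SetPixel calls.

```csharp
using System.Drawing;

// Illustrative drop-shadow generator: threshold the source into a black
// silhouette, then soften it with a box blur. The caller composites the
// result under the item, offset by a few pixels.
public static class ShadowBaker
{
    public static Bitmap MakeShadow(Bitmap source, int blurRadius)
    {
        // 1. Threshold: any sufficiently opaque pixel becomes shadow.
        var silhouette = new Bitmap(source.Width, source.Height);
        for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
            silhouette.SetPixel(x, y,
                source.GetPixel(x, y).A > 32 ? Color.Black : Color.Transparent);

        // 2. Box blur on the alpha channel to soften the edge.
        var shadow = new Bitmap(source.Width, source.Height);
        for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
        {
            int sum = 0, count = 0;
            for (int dy = -blurRadius; dy <= blurRadius; dy++)
            for (int dx = -blurRadius; dx <= blurRadius; dx++)
            {
                int sx = x + dx, sy = y + dy;
                if (sx < 0 || sy < 0 || sx >= source.Width || sy >= source.Height)
                    continue;
                sum += silhouette.GetPixel(sx, sy).A;
                count++;
            }
            shadow.SetPixel(x, y, Color.FromArgb(sum / count, Color.Black));
        }
        return shadow;
    }
}
```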
The only real downside to preprocessing the images and storing them in assetbundles was that content could no longer be created without Unity, and a custom method was required to deploy it. Since we absolutely needed texture compression to support so many large images, though, it was really the only viable option. I created a deployment system that built one assetbundle for the scene and one assetbundle per item, containing all of that item's images (tiles, impostors, etc.). It then deployed everything to a folder on Amazon Web Services and created a text file with the URL of that folder. We could then supply the Getty with just the text file, and the app would automatically retrieve and import all of the content for the given case.
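The bundle-building half of that system rests on Unity's BuildPipeline API; a stripped-down editor sketch follows, with the manifest file name as an illustrative stand-in (the actual upload to AWS happened outside Unity).

```csharp
#if UNITY_EDITOR
using System.IO;
using UnityEditor;
using UnityEngine;

// Sketch of the deployment step: build one bundle for the scene plus one
// per item (bundle membership is assigned via each asset's AssetBundle
// name in the inspector), then write the manifest text file.
public static class CaseDeployer
{
    public static void BuildCase(string outputFolder, string remoteBaseUrl)
    {
        Directory.CreateDirectory(outputFolder);

        BuildPipeline.BuildAssetBundles(
            outputFolder,
            BuildAssetBundleOptions.ChunkBasedCompression,
            BuildTarget.iOS);

        // The tiny text file handed over: just the URL of the remote folder.
        // The app downloads this and pulls everything else automatically.
        File.WriteAllText(Path.Combine(outputFolder, "case.txt"), remoteBaseUrl);

        Debug.Log($"Bundles built to {outputFolder}; upload the folder to {remoteBaseUrl}");
    }
}
#endif
```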
In order to be displayed at the Getty, everything in the project had to meet rigorous standards, and attention to detail was essential. To this end, we went through numerous revisions to ensure that the text, fonts, touchscreen controls, gestures, icons, and layouts were all perfect. Finally, when the ten display cases were complete, I was asked to document the tools, record a series of video tutorials, and hand the Unity project off to the Getty so that they could continue creating more case layouts themselves.
Early analytics from the project showed that the average user who engaged with the app viewed 50% of the items within a display case, a very high engagement rate. By 2021, user engagements had passed 1,000,000.
In 2021, I began work on Digital Label System 2.0 (also known as ViDiC), building on the strengths of the original and addressing some of the flaws that had become evident over the years the DLS had been in operation at the Villa. The Getty officially adopted DLS 2.0 in 2022 and has been using it for new exhibits as well as to replace the ten cases in the Villa's permanent collection.
One of the headline features of DLS 2.0 is the use of “themes” to completely customize the visual appearance of the interactive. If your museum or gallery might benefit from my Digital Label System, please get in touch with me at phil@nobullintentions.com.