A richly interactive film of a key Zoroastrian religious ritual drawing upon multiple sources of data.
The Multimedia Yasna is a wide-ranging project at the School of Oriental and African Studies (SOAS) to explore the Yasna, the core ritual of the Zoroastrian religious tradition. The project covers both the written text, and the manuscripts that contain it, and the performance of the ritual itself. For this part of the project, we were asked to produce a subtitled interactive film for the entire five-hour Yasna ritual.
The interactive film drew upon multiple sources of data produced by the Multimedia Yasna team at the School of Oriental and African Studies and their collaborators and partners elsewhere. These sources included:
Digirati's task was to bring all of this data together in an easy-to-use interactive tool that allows users to view the video, bounding boxes, encyclopaedia, transcripts and ritual actions in a single user interface.
Based on the initial visual designs and the data, we knew there were six main areas of user experience that we had to address:
To deliver this user experience there were three core technical challenges that needed to be met:
Our initial thinking was that we could, potentially, deliver a site driven from a combination of:
While the React application worked well, static JSON data was not suitable for driving the React client application, largely for reasons of scale. The total number of eventual frame records (more on this below) was over 615,000, so the result would have been either a huge number of files, files of huge size, or both.
Click here to preview the Yasna interactive film
Our eventual solution was a combination of:
Using a set of APIs allowed us to do all the heavy lifting of preparing and normalising the data on the backend in advance, effectively plotting all of the data for the five hours of video on a unified virtual timeline. The API accepts requests using either frame identifiers or timestamps and returns all of the relevant data.
Whenever a user pauses the video or jumps to a different section of the timeline via the scrubber or structural navigation, a simple API request with a lightweight JSON payload returns the data required to populate the annotations on the video, keeping the frontend application lightweight, fast and responsive.
Our virtual timeline was smart in the sense that we compressed the data: wherever a single object or bounding box was static over multiple frames of the video, we stored a single point of data rather than one for every frame. In areas where the video was relatively static at the level of an individual object, this let us economise both on the amount of data we needed to store in our source data files (in this case, highly compressed Apache Avro files, chosen for speed and storage efficiency) and on the amount of data that needed to be held in the backend for delivery to the client application.
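This "store once while static" approach is essentially run-length encoding per object. The sketch below shows the idea; the function and record shapes are illustrative assumptions, not the project's actual Avro schema.

```python
def compress_track(frames: list[tuple]) -> list[dict]:
    """Collapse consecutive frames where an object's bounding box is
    unchanged into a single record spanning a frame range."""
    runs = []
    for frame, box in frames:
        if runs and runs[-1]["box"] == box and runs[-1]["end"] == frame - 1:
            runs[-1]["end"] = frame  # box unchanged: extend the current run
        else:
            runs.append({"start": frame, "end": frame, "box": box})
    return runs

# An object that sits still for frames 0-4, then shifts for frames 5-6:
track = [(f, (10, 20, 30, 40)) for f in range(5)] + \
        [(f, (12, 20, 30, 40)) for f in range(5, 7)]

runs = compress_track(track)
# Seven per-frame records collapse to two range records.
assert runs == [
    {"start": 0, "end": 4, "box": (10, 20, 30, 40)},
    {"start": 5, "end": 6, "box": (12, 20, 30, 40)},
]
```

Over 615,000 frames, long static passages of the ritual compress dramatically under this scheme, while fast-moving passages fall back gracefully to near one record per frame.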