Visual Storytelling in Digital Libraries: Design Strategies for Preserving and Accessing Intangible Cultural Heritage

Authors

  • Siwei Wu, Faculty of Fine and Applied Arts, Burapha University, Thailand
  • Jian Liu, Academy of Arts and Philosophy, Shinawatra University, Thailand
  • Heng Zou, Department of Film Media Content, Cheongju University, South Korea
  • Ting Yu, Faculty of Management, Shinawatra University, Thailand
  • Zhuoying Jiang, History of Arts, Saint Petersburg State University, Russia

DOI:

https://doi.org/10.5281/zenodo.17851419

Keywords:

Visual Storytelling, Digital Libraries, Intangible Cultural Heritage, Heritage Building Information Modelling (HBIM), Bacterial Colony-Tuned Lightweight Convolutional with Recurrent Neural Networks (BC-LWC-RNN), Heritage Preservation

Abstract

The advancement of digital technologies has substantially transformed the ways in which cultural heritage is conserved, documented, and made accessible. Visualisation techniques and Heritage Building Information Modelling (HBIM) provide essential tools for safeguarding both tangible and intangible heritage, while also facilitating the creation of interactive experiences. This research examines the application of visual storytelling within digital libraries, employing deep learning methods to design strategies for the preservation and accessibility of intangible cultural heritage. The proposed model processes and analyses diverse data types, including 3D scans, photographic images, textual narratives, and audio-visual recordings. Specifically, the Bacterial Colony-Tuned Lightweight Convolutional with Recurrent Neural Networks (BC-LWC-RNN) framework is used to automate the generation of visual narratives by establishing semantic links across heritage datasets. The resulting information is integrated into HBIM environments and delivered via immersive virtual reality interfaces, enhancing user engagement and comprehension through interactive storytelling. The lightweight convolutional neural network (LWCNN) component extracts critical spatial and visual features from images and 3D scans, while the recurrent neural network (RNN) component captures sequential and temporal patterns in textual or spoken narratives, enabling the construction of coherent storylines. The BC-LWC-RNN model achieved an identification accuracy of 98.03% for elements of intangible cultural heritage, with a precision of 97.73%, a recall of 98.23%, and an F1-score of 97.85%. Experimental findings indicate that the model is highly reliable in generating meaningful visual narratives. Overall, the approach integrates deep learning with HBIM to support the reinterpretation, preservation, and dissemination of intangible cultural heritage within digital library platforms.
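
The abstract describes a dual-branch design: a lightweight CNN extracts spatial features from images or 3D-scan renderings, an RNN models the sequential structure of narrative text, and the two feature streams are fused to identify intangible-heritage elements. The paper's own implementation is not reproduced here; the sketch below is only a minimal PyTorch illustration of that general idea. The class name LWCRNNSketch, all layer sizes, the GRU choice, and the concatenation-based fusion are illustrative assumptions, and the bacterial-colony hyperparameter-tuning stage is omitted.

    # Hypothetical sketch of a lightweight-CNN + RNN hybrid classifier,
    # loosely following the dual-branch idea in the abstract. Not the
    # authors' BC-LWC-RNN implementation.
    import torch
    import torch.nn as nn

    class LWCRNNSketch(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=128, num_classes=10):
            super().__init__()
            # Lightweight CNN branch: depthwise-separable convolutions
            # keep the parameter count small (the "lightweight" idea).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),  # depthwise
                nn.Conv2d(32, 64, kernel_size=1),                        # pointwise
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # RNN branch: a GRU over embedded narrative tokens captures
            # the sequential/temporal structure of textual stories.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.rnn = nn.GRU(embed_dim, 64, batch_first=True)
            # Fusion: concatenate both 64-d feature vectors and classify.
            self.head = nn.Linear(64 + 64, num_classes)

        def forward(self, image, tokens):
            spatial = self.cnn(image).flatten(1)      # (B, 64) visual features
            _, hidden = self.rnn(self.embed(tokens))  # final GRU hidden state
            temporal = hidden.squeeze(0)              # (B, 64) narrative features
            return self.head(torch.cat([spatial, temporal], dim=1))

    # Smoke test with dummy inputs: a batch of 2 images and 2 token sequences.
    model = LWCRNNSketch()
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 40)))
    print(logits.shape)  # torch.Size([2, 10])

Running the smoke test prints torch.Size([2, 10]), confirming that the visual and narrative branches fuse into a single per-class logit vector, from which the reported accuracy, precision, recall, and F1 metrics would be computed against labelled heritage elements.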

Published

2025-12-08
