Semantic Annotation of Images with Text and Sound for Visually Impaired
Accessing appropriate information is a major challenge for people in general and for visually impaired people in particular, and the problem grows when the information is multimedia. Annotating (labelling) images with text and sound makes it easier for visually impaired people to understand what an image is about. Beyond ease of understanding, semantic annotation of images simplifies semantic image search, aids visualization, provides a platform for educational toolkits, supports tourism by making historical pictures self-describing, and enables many other applications. Semantic annotation binds ontologies to documents via metadata. A variety of tools have been developed to help visually impaired individuals interact with the world around them, but accessibility and cost put them out of reach for many. HELPI has been developed as a tool for annotating pictures using ontology-based multimedia concepts. Text and sound are added to pictures using a multimedia ontology (M-OWL), and the metadata thus created is converted into MPEG-7 format. HELPI makes pictures "speak", helping visually impaired people understand and enjoy the content of pictures in a better way.
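The workflow the abstract describes, binding a text caption and a spoken description to an image and serializing the result as MPEG-7 metadata, can be sketched as follows. This is an illustrative sketch only: the element names loosely echo MPEG-7 vocabulary (`MediaLocator`, `FreeTextAnnotation`) but the output is not a schema-valid MPEG-7 document, and the file names are hypothetical.

```python
import xml.etree.ElementTree as ET

def annotate_image(image_uri, caption_text, audio_uri):
    """Build a minimal MPEG-7-style description that binds a text
    caption and an audio locator to an image.

    Illustrative sketch: element names loosely follow MPEG-7
    conventions but this is not a schema-valid MPEG-7 document.
    """
    mpeg7 = ET.Element("Mpeg7")
    desc = ET.SubElement(mpeg7, "Description")
    image = ET.SubElement(desc, "Image")
    ET.SubElement(image, "MediaLocator").text = image_uri
    # Free-text annotation, usable by a screen reader or semantic search
    annot = ET.SubElement(image, "TextAnnotation")
    ET.SubElement(annot, "FreeTextAnnotation").text = caption_text
    # Spoken description of the picture, referenced by its locator
    audio = ET.SubElement(image, "AudioAnnotation")
    ET.SubElement(audio, "MediaLocator").text = audio_uri
    return ET.tostring(mpeg7, encoding="unicode")

# Hypothetical example: a historical picture made "self-speaking"
xml = annotate_image(
    "taj_mahal.jpg",
    "The Taj Mahal, a white marble mausoleum in Agra, India.",
    "taj_mahal_description.mp3",
)
print(xml)
```

In a full system, the caption and audio would be derived from concepts in the multimedia ontology rather than supplied by hand; the sketch only shows the final metadata-binding step.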