UM E-Theses Collection (澳門大學電子學位論文庫)
- Title
- Mining the Web to support Web image retrieval and image annotation
- English Abstract
Nowadays, a huge number of images on almost any conceivable topic are available on the Web. How to reuse these valuable images has drawn much attention in recent years. However, it is hard, if not impossible, to use them effectively and efficiently unless they are well indexed and organized. Therefore, this thesis seeks effective and efficient ways to manage and access Web images, both from the aspect of image semantics and from the aspect of visual features.

First, a multiplied refinement model is proposed to enhance the performance of text-based retrieval of Web images by using their visual features. There are two basic methods for Web image retrieval: text-based and visual content-based. The former relies on the text associated with Web images, while the latter uses visual features such as color, texture, and shape as image descriptions. Each method has its limitations when used independently, so our approach combines the two: users start their searches with keywords, and visual features are then used to refine the results according to the multiplied refinement model. The thesis also compares three models for integrating text-based and visual-feature-based image retrieval: the multiplied refinement model, the linear refinement model, and the expansion model. Experiments show that the proposed multiplied refinement model yields better performance than the other two.

Web-based automatic image annotation is the other objective of this thesis. Traditional image annotation models typically rely on learning algorithms trained on a set of manually labeled sample images. Weaknesses of such models include (1) a limited annotation vocabulary, (2) the labor-intensive work of manually annotating sample images, and (3) a domain-specific scope. For these reasons, this thesis proposes a Web-based automatic image annotation model that annotates unlabeled images using Web images. In this model, the terms appearing in the text associated with Web images are extracted and filtered as semantic descriptions of the corresponding images. However, considerable noise exists, both in the words and in the images, when these relationships are constructed. To alleviate the effects of this noise, techniques from data mining and information retrieval are employed, including visual feature clustering, local relevance analysis, and an entropy weighting strategy. In this way, the relevance of a given term to the images is re-weighted using visual feature clustering, term co-occurrences, and the term's distribution in the database. Finally, an unlabeled image is annotated by summarizing the Web images that are closest to it in visual similarity. Experiments show that the annotation model achieves satisfactory performance.
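The integration step described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal sketch, assuming one keyword-retrieval score per image and one visual feature vector per image; the function names, the cosine similarity measure, and the dictionary-based inputs are illustrative assumptions, not the exact formulation used in the thesis. The commented-out linear form corresponds to the linear refinement model the thesis compares against.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two visual feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def multiplied_refinement(text_scores, visual_features, example_feature):
    """Re-rank keyword-retrieved images by multiplying each image's text
    relevance score with its visual similarity to an example image."""
    refined = {}
    for img_id, t_score in text_scores.items():
        v_sim = cosine_similarity(visual_features[img_id], example_feature)
        refined[img_id] = t_score * v_sim  # multiplied refinement model
        # linear refinement would instead use: a * t_score + (1 - a) * v_sim
    return sorted(refined.items(), key=lambda kv: kv[1], reverse=True)
```

The annotation half of the model can be sketched in the same spirit: an unlabeled image borrows terms from its visually nearest Web images, with terms that are common across the whole collection down-weighted. The IDF-style factor below is a simplified stand-in for the entropy weighting and local relevance analysis mentioned in the abstract, and the `(feature_vector, term_list)` input layout is an assumption for illustration only.

```python
from collections import defaultdict
import math
import numpy as np

def annotate_image(query_feature, web_images, k=20, top_n=5):
    """Suggest annotation terms for an unlabeled image by summarizing the
    terms of its k visually nearest Web images, down-weighting terms that
    occur in many images collection-wide."""
    # web_images: list of (feature_vector, term_list) pairs mined from the Web
    neighbours = []
    for feature, terms in web_images:
        denom = np.linalg.norm(feature) * np.linalg.norm(query_feature)
        sim = float(feature @ query_feature / denom) if denom else 0.0
        neighbours.append((sim, terms))
    neighbours.sort(key=lambda x: x[0], reverse=True)

    # document frequency of each term over the whole image collection
    df = defaultdict(int)
    for _, terms in web_images:
        for term in set(terms):
            df[term] += 1
    n = len(web_images)

    # aggregate term scores over the k nearest neighbours, weighting by
    # visual similarity and by rarity of the term in the collection
    scores = defaultdict(float)
    for sim, terms in neighbours[:k]:
        for term in terms:
            scores[term] += sim * math.log(n / (1 + df[term]))

    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [term for term, _ in ranked[:top_n]]
```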
- Issue date
- 2007
- Author
- Liu, Qian
- Faculty
- Faculty of Science and Technology
- Department
- Department of Computer and Information Science
- Degree
- M.Sc.
- Subject
- Image processing -- Digital techniques
- Data mining
- Image analysis
- Supervisor
- Gong, Zhi Guo
- Location
- 1/F Zone C
- Library URL
- 991000563529706306