A Multimodal Learning to Rank model for Web Pages
Nikhila T Bhuvan1, M Sudheep Elayidom2
1Nikhila T Bhuvan*, Department of Information Technology, Rajagiri School of Engineering & Technology, Rajagiri Valley, Kakkanad, Kochi, Kerala, India.
2M Sudheep Elayidom, Division of Computer Science, School of Engineering, CUSAT, Kerala, India.
Manuscript received on July 02, 2020. | Revised Manuscript received on July 10, 2020. | Manuscript published on August 30, 2020. | PP: 308-313 | Volume-9 Issue-6, August 2020. | Retrieval Number: F1442089620/2020©BEIESP | DOI: 10.35940/ijeat.F1442.089620
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: “Learning to Rank” (LTR) applies machine learning techniques to optimally combine many features for the problem of ranking. Web search is one of the prominent applications of LTR. To improve the ranking of web pages, a multimodality-based Learning-to-Rank model is proposed and implemented. Multimodality is the process of fusing multiple unimodal representations into one compact representation. A central problem in web search is that the links appearing at the top of the result list may be irrelevant, or less relevant to the user than links ranked lower. Research has shown that a multimodality-based search improves the populated rank list. The modalities considered here are the text of a web page and the images it contains. The textual features of the web pages are taken from the LETOR dataset, while the image features are extracted from the images inside the web pages using transfer learning: a VGG-16 model pre-trained on ImageNet serves as the image feature extractor. A baseline model trained only on textual features is compared against the multimodal LTR. The multimodal LTR, which integrates the visual and textual features, improves web search accuracy by 10-15%.
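The transfer-learning step described in the abstract can be sketched as follows. This is a minimal illustration using the Keras implementation of VGG-16, not the authors' code; the function names are hypothetical, and the choice of global average pooling to obtain a fixed-length 512-dimensional vector per image is an assumption about how the per-page image features might be produced.

```python
# Illustrative sketch (not the paper's implementation): using VGG-16
# pre-trained on ImageNet as a fixed image feature extractor.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

def build_extractor(weights="imagenet"):
    # include_top=False drops the ImageNet classifier head;
    # pooling="avg" collapses the final 7x7x512 feature maps
    # into a single 512-dimensional vector per image.
    return VGG16(weights=weights, include_top=False, pooling="avg")

def extract_image_features(images, model):
    # images: array of shape (n, 224, 224, 3), RGB, values in 0-255.
    # preprocess_input applies the channel-wise normalization VGG-16 expects.
    x = preprocess_input(images.astype("float32"))
    return model.predict(x, verbose=0)  # shape: (n, 512)

if __name__ == "__main__":
    model = build_extractor()
    batch = np.random.randint(0, 256, size=(2, 224, 224, 3))
    feats = extract_image_features(batch, model)
    print(feats.shape)  # (2, 512)
```

The resulting visual vectors could then be concatenated with the LETOR textual features of the corresponding query-document pair before training the ranking model, which is one common way to realize the text-image fusion the abstract refers to.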
Keywords: Learning to Rank, LETOR, LTR, transfer learning.