Overview: This workshop is about reinforcement learning in large state/action spaces, learning to optimize search, and the relationship between the two.
Content-based information retrieval with relevance feedback is a multi-stage process: at each stage the user selects, from a set of presented items, the item closest to the requested information, until that information is found. The task of the search engine is to present items so that the search terminates in as few stages as possible. More generally, interactive search concerns multi-stage processes in which a search engine presents some information and receives feedback in response, which may be partial and noisy. Since the reward for finding the requested information is delayed, learning a good search engine from data can be modeled as a reinforcement learning problem, but the special structure of the problem needs to be exploited.
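To make this framing concrete, below is a minimal, purely illustrative Python sketch of a search session as an episodic reinforcement learning problem with a delayed terminal reward. All names (user_feedback, episode, the scalar item features) are hypothetical, and the candidate-narrowing heuristic merely stands in for the search-engine policy that one would want to learn.

import random

random.seed(0)

NUM_ITEMS = 1000       # toy collection size
ITEMS_PER_PAGE = 5     # items presented to the user at each stage

# Each item is a scalar feature; the user seeks a hidden target item.
items = [random.random() for _ in range(NUM_ITEMS)]
target = random.choice(items)

def user_feedback(shown):
    # The user selects the presented item closest to the target;
    # partial or noisy feedback could be modeled by perturbing this choice.
    return min(shown, key=lambda x: abs(x - target))

def episode(max_steps=50):
    # One search session: the reward of +1 arrives only when the target
    # is found, so the learning signal is delayed.
    candidates = list(items)
    for step in range(1, max_steps + 1):
        shown = random.sample(candidates, min(ITEMS_PER_PAGE, len(candidates)))
        choice = user_feedback(shown)
        if choice == target:
            return 1.0, step
        # A learned policy would decide what to present next; this simple
        # heuristic keeps the half of the candidates closest to the
        # user's latest choice.
        candidates.sort(key=lambda x: abs(x - choice))
        candidates = candidates[:max(ITEMS_PER_PAGE, len(candidates) // 2)]
    return 0.0, max_steps

reward, steps = episode()
print(f"reward={reward}, steps={steps}")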
Since the state space in realistic search applications is enormous, this learning problem is difficult. Although the reinforcement learning literature offers many powerful algorithms that have succeeded in a variety of challenging applications, there is still relatively little understanding of when reinforcement learning might succeed in a realistic application, or of what makes it succeed in such an application. Furthermore, little work has been done on applying reinforcement learning to optimize interactive search.
This workshop therefore addresses, in particular but not exclusively, the following two questions: