People often face the difficult task of having to select the best option from a large set of alternatives, such
as choosing an apartment to rent, a notebook computer to buy, or financial products in which to invest.
We call this task preference-based search.

Most sites require the user to specify a fixed set of criteria and then retrieve the most preferred items from a database. However, due to various shortcomings of human decision making, people are generally unable to state their preferences accurately up front. Studies show that only a small portion of users actually manage to find their most preferred options, and many end up with suboptimal results. We have developed new mixed-initiative tools based on example-critiquing and suggestions, and shown through user studies that they dramatically increase decision accuracy.


Over the past years we have investigated the use of interactive search tools for helping users make accurate decisions in preference-based search. In particular, we examined example-critiquing [1,2,3,4,5,6,7], a technique that enables users to incrementally construct preference models by critiquing example options presented to them.

The traditional way to obtain the user's preferences is to ask her to fill in a form or to answer a set of elicitation questions. However, when a user starts a search, she typically has only a few well-defined preferences. Moreover, when filling in a form, users tend to state incorrect preferences based on means objectives [2,7].

Our user studies [2,7] showed that the form-filling approach is not an effective way of eliciting preferences: only 25% of users found their target item, compared with 45% for example-critiquing. This supports the intuition, backed by behavioral decision theory, that more accurate preference models are obtained when preferences are expressed on the user's own initiative.


To further enhance preference elicitation, we developed strategies for generating suggestions [1] that stimulate users to respond. In our experience, most preferences (79%) emerge from positive critiques that identify an opportunity the user had not considered before [3].

Our model-based suggestions are produced by analyzing the user's current preference model together with her potential hidden preferences. The intuition is that suggestions should be options with a high probability of becoming optimal once a new preference is stated. The uncertainty over the user model is represented by probability distributions over the preferences the user might hold. We showed that model-based suggestions indeed motivate users to express more preferences and help them achieve a much higher level of decision accuracy without additional effort.

Model-based suggestions are highly attractive to users, stimulating them to express more preferences and improving decision accuracy: in our user studies, the chance of identifying the user's most preferred item improved by up to 78% [1], even without prior knowledge (a uniform distribution over possible preferences was used in the tests).


Recommendation accuracy can be increased further by refining the uncertainty over users' preferences and making suggestions that adapt to users' reactions using Bayesian reasoning [6,7]. We evaluated the decision accuracy of this approach with both simulations and surveys with real users.

In an unsupervised setting (where accuracy tends to be lower because users may not understand some technical aspects of the interface), example-critiquing with adaptive suggestions achieved 66% accuracy, compared to 56% for example-critiquing with standard model-based suggestions. Moreover, users of adaptive suggestions interacted with the system for less time, so less effort was required, and they found the suggestions more useful on average [7]. This experiment shows that the technique can be deployed in real e-commerce applications and can be expected to perform well.
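The Bayesian refinement behind adaptive suggestions can be sketched generically: maintain a belief over candidate hidden preferences and update it whenever the user reacts to a suggestion. The candidate preferences and the likelihood numbers below are hypothetical, chosen only to make the update concrete; they are not the model of [6,7].

```python
def bayes_update(prior, likelihoods):
    """Posterior belief: posterior[h] is proportional to
    prior[h] * P(observed reaction | hidden preference h)."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Uniform prior over four hypothetical hidden preferences.
prior = {"light": 0.25, "cheap": 0.25, "fast": 0.25, "durable": 0.25}

# Observation: the user reacted positively to a lightweight but pricey
# suggestion. Hypothetical likelihoods: such a reaction is probable if
# she values low weight, improbable if she mainly values low price.
reaction_likelihood = {"light": 0.9, "cheap": 0.1, "fast": 0.4, "durable": 0.4}

posterior = bayes_update(prior, reaction_likelihood)
```

After the update the belief mass shifts toward "light", so the next round of suggestions can target that preference, which is how adaptive suggestions converge faster than suggestions drawn from a static uniform prior.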

Practical Implementation

We also considered how to implement preference-based search in practical settings. To make our approach scale to large databases, we developed several approximations of our model-based suggestions built on top of top-k retrieval algorithms [4].
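The scaling idea can be sketched as follows: instead of evaluating the expensive suggestion score over the whole database, first retrieve the top-k items under the stated preferences and run the suggestion computation only on that candidate set. The database, utility function, and cutoff are hypothetical; this is a sketch of the approximation strategy, not the algorithm of [4].

```python
import heapq
import random

random.seed(0)

# Hypothetical database of 100,000 items with two numeric attributes.
db = [{"id": i,
       "price": random.uniform(300, 2000),
       "battery": random.uniform(2, 12)} for i in range(100_000)]

def stated_utility(item):
    """Score under the stated preferences only (here: cheaper is
    better; a hypothetical single-attribute utility)."""
    return -item["price"]

# Approximation: restrict the costly suggestion computation to the
# K best items under the stated preferences.
K = 50
candidates = heapq.nlargest(K, db, key=stated_utility)
# ...suggestion scoring would now run over `candidates` only.
```

`heapq.nlargest` scans the database once in O(n log k), so the candidate set can be extracted cheaply even when the suggestion scoring itself is expensive.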

We further considered preference-based search for configurable products [5], where each option is a solution of a constraint satisfaction problem (CSP), so that optimal solutions and suggestions can be computed without generating the entire set of solutions.

Selected Publications

  1. P. Viappiani, B. Faltings and P. Pu. Preference-based Search using Example-Critiquing with Suggestions. Journal of Artificial Intelligence Research (JAIR), 27, 2006, pp. 465-503.
  2. P. Viappiani, B. Faltings and P. Pu. Evaluating Preference-based Search Tools: a Tale of Two Approaches. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06), pp. 205-211. AAAI Press, 2006.
  3. P. Viappiani, B. Faltings and P. Pu. The Lookahead Principle for Preference Elicitation: Experimental Results. In Proceedings of the Seventh International Conference on Flexible Query Answering Systems (FQAS), Milan, Italy, June 2006, pp. 378-389.
  4. P. Viappiani and B. Faltings. Design and Implementation of Preference-based Search. In Proceedings of the 7th International Conference on Web Information Systems Engineering (WISE), LNCS 4255, Wuhan, China, October 2006, pp. 72-83.
  5. P. Viappiani and B. Faltings. Preference-based Search for Configurable Catalogs. In AAAI 2007 Workshop on Configuration, Vancouver, July 2007.
  6. P. Viappiani, P. Pu and B. Faltings. Conversational Recommenders with Adaptive Suggestions. In Proceedings of the ACM Conference on Recommender Systems (RecSys 2007), Minneapolis, Minnesota, USA, October 2007, pp. 89-96.
  7. P. Viappiani, P. Pu and B. Faltings. Preference-based Search with Adaptive Recommendations. AI Communications, 2008 (to appear).