Area: Explainable AI.
(Early) RQ: What is it that needs explaining in the context of explainable AI, and who is explaining it to whom?
PICO elements
P(roblem): Many machine learning algorithms – especially the deep learning variants – are very opaque in the sense that it is not clear to either the user or the developer why they arrive at certain, often surprising, answers. This has created the demand for so-called ‘explainable AI’, where at least developers, but ideally also users, can “interrogate” an AI-powered system to determine how it arrived at a certain result or course of action.
I(ntervention): Unknown at this early stage, and most likely not applicable to this early type of RQ. In later RQs, however, the intervention would most likely be some specific approach or algorithm that generates the sought-after explanations, compared against some other, dispreferred ones. Which algorithm or approach is preferred depends on you, the researcher(s), and you are expected to provide an argument that justifies your choice.
C(omparison): Unknown at the early stage. Could have two interpretations. Interpretation 1: Halfway into the literature search, it may crystallize that there are several interpretations of explainable AI in terms of what precisely is to be explained, as well as differing views on who ought to be the addressee of the explanations (for example, developers vs. end users). One potential comparison may therefore involve different factions of researchers, each of which subscribes to a different interpretation. This would be an example of a research question that can, in principle, be answered by a literature search.
Interpretation 2: For a late RQ, the comparison might involve different “explaining approaches” that researchers have produced. This would typically imply that one of the interpretations mentioned under ‘Interpretation 1’ had been chosen and fixed prior to conducting this more specific and technical comparison.
O(utcome): In the case of the early RQ above, there would not be one singular outcome but several outcomes that together answer the RQ: (a) a detailed description of what precisely the to-be-explained phenomenon is, and (b) a description of both who the explaining party is and who the addressee of the explanation is.
In the case of a late RQ, and given the above-mentioned comparison (Interpretation 2), the outcome would involve some quantification of the quality of the generated explanations, the failure rate of the system (i.e., the number of cases where no explanation can be generated), or the like.
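As a toy illustration of such an outcome measure, the failure rate in the sense above could be computed as follows. Note that `generate_explanation` here is a hypothetical stand-in for whatever explanation approach is under study, not a real library function:

```python
# Minimal sketch: quantifying the failure rate of a hypothetical
# explanation generator over a batch of model predictions.

def generate_explanation(prediction):
    """Hypothetical explainer: returns an explanation string, or None
    when no explanation can be generated for this prediction."""
    if prediction == "unknown":
        return None
    return f"feature importances supporting '{prediction}'"

predictions = ["cat", "dog", "unknown", "cat"]
explanations = [generate_explanation(p) for p in predictions]

# Failure rate: fraction of cases where no explanation was produced.
failures = sum(1 for e in explanations if e is None)
failure_rate = failures / len(predictions)
print(f"failure rate: {failure_rate:.2f}")  # 1 of 4 predictions unexplained
```

A real study would of course pair such a count with some measure of explanation *quality* (e.g., human ratings), since a system that always emits a trivial explanation would otherwise score a perfect failure rate.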
Topic: Machine Learning and Online Shopping.
RQ: How can AI help with online product recommendations?
(P)roblem: With consumers spending increasing time and money shopping online, online stores aim to personalise product recommendations in an attempt to promote products that they believe the customer will be interested in purchasing, and to optimise the shopping experience. Understanding which models to use, or which features to include in such an algorithm, is an open problem.
(I)ntervention: Initially unknown, but it could be one of several things. Given the problem, the interest lies in improving product recommendations in some way. So you might be interested in comparing different types of algorithms to each other, or an ML algorithm vs. human performance. You may also be interested in different features: some approaches might predict using individual features (age, gender, location), others might use historical features (shopping history, purchase frequency, etc.). This will become clearer as you begin the literature search.
(C)omparison: This depends on the intervention. One ideal comparison would assess the performance of algorithm/technique X vs. algorithm/technique Y on dataset Z. Alternatively, one technique could be compared across two different datasets: algorithm X on datasets Y and Z.
(O)utcome: This comes back to the intervention. If you are comparing ML techniques, then you can compare reported performance (in this example, good "prediction" might be reported in terms of how many times a product was clicked on, or how many products were purchased). Or, if you are interested in comparing datasets/features, your outcome might be a list (or typology) of features (e.g., individual vs. historical features, using the above example).
Some examples of how this (earlier) RQ might evolve into a more focused RQ that may be answered with a literature review:
What type of machine learning algorithms can improve product recommendations for online stores?
What types of features can be used by artificial intelligence techniques to predict shopping behaviours for online customers?
Do deep neural networks improve online store product recommendations when compared to other techniques?
Additional (early-ish) questions:
How safe is cloud computing?
What are the ethical concerns around artificial intelligence?