Utilizing Artificial Intelligence Improves Planning and Decision Making
Post by Lincoln Tracy
The takeaway
Benjamin Franklin once said, “Failing to plan is planning to fail.” Artificial intelligence can teach people to improve their planning and decision-making strategies by delivering optimal feedback, thereby helping them avoid suboptimal outcomes.
What's the science?
Decision-making is an important part of everyday life, but it is often plagued by errors that can have serious consequences. In many cases, these consequences could be avoided if proper planning strategies were implemented. A crucial part of developing optimal planning strategies is reliable, valid, and timely feedback. However, many real-world settings do not provide enough high-quality feedback to help people discover the optimal strategies on their own. This week in PNAS, Callaway and colleagues developed an artificial intelligence tutor to help people quickly discover the best possible decision-making strategies, then tested its effectiveness across several experiments in different settings.
How did they do it?
First, the authors used artificial intelligence to develop a virtual tutor to teach people optimal decision-making processes. The tutor was designed to provide metacognitive feedback (to help participants learn the optimal strategies for themselves) rather than direct feedback (e.g., “you should have gone left”) during an initial training phase before the testing phase began. The authors then recruited over 2,500 participants across six online experiments hosted on Amazon Mechanical Turk or Prolific to test the effectiveness of the intelligent tutor (against direct feedback or no feedback) in six different settings:
· Experiment 1 introduced participants to the Web of Cash game, where they were required to navigate a spider through a web from its center to an outer edge. Each space on the web contained a reward (or a loss), and participants aimed to collect as many rewards as possible. All rewards and losses were hidden initially, meaning participants did not know the optimal path to obtain the most rewards. However, participants could pay a small fee to uncover the reward on each space on the web. Participants undertook the training phase with metacognitive, direct, or no feedback before completing the testing phase. The authors quantified participants’ performance relative to how often they used the optimal strategy to navigate the spider through the web.
· Experiment 2 tested whether the metacognitive feedback training was effective in a more complicated alteration of the Web of Cash game than Experiment 1 (routing an airplane through a larger series of airports).
· Experiment 3 tested whether the benefits of metacognitive feedback training were retained by adding a 24-hour delay between the training and testing phases of the Web of Cash game.
· Experiment 4 tested whether metacognitive feedback training was effective in a less structured version of Experiment 1.
· Experiment 5 tested metacognitive feedback in a real-world context—planning an inexpensive road trip (the Road Trip paradigm). Rather than navigating a spider through a web, participants were required to plan a road trip across a country, stopping at several hotels, and end up at a city with an airport. A fourth training condition (watching a video about if-then plans) was added.
· Experiment 6 explored which aspect of the metacognitive feedback (i.e., a time-based penalty for selecting a suboptimal move or a message describing what move the participant should have made) made the largest contribution to the improved scores on the Web of Cash game.
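The core dilemma in the Web of Cash game—pay a small fee to reveal information, or act on what you already know—can be illustrated with a toy simulation. This is only a sketch of the trade-off, not the authors' actual task or model: the number of paths, reward values, and click cost below are all hypothetical.

```python
import random

def simulate(policy, num_paths=3, values=(-10, -5, 5, 10), click_cost=1, rng=None):
    """One round of a toy Web-of-Cash-style game.

    Each of num_paths hides one reward drawn from `values`.
    policy "inspect": pay click_cost per path to reveal every reward, then take the best.
    policy "blind":   pick a path at random without paying to look.
    """
    rng = rng or random.Random()
    rewards = [rng.choice(values) for _ in range(num_paths)]
    if policy == "inspect":
        # Net score: best visible reward minus the total cost of looking.
        return max(rewards) - click_cost * num_paths
    # Acting without planning: expected value is just the mean reward (0 here).
    return rng.choice(rewards)

def average_score(policy, trials=10_000, seed=0):
    """Average score of a policy over many simulated rounds."""
    rng = random.Random(seed)
    return sum(simulate(policy, rng=rng) for _ in range(trials)) / trials
```

With these (made-up) numbers, paying to inspect first yields a clearly higher average score than choosing blindly, which is the kind of strategy the tutor's feedback is meant to teach participants to discover; shrink the reward spread or raise the click cost and the advantage of inspecting disappears.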
What did they find?
In Experiment 1, the authors found participants performed better on the Web of Cash game after receiving metacognitive feedback compared to the other two feedback conditions. This suggests metacognitive feedback increased participants’ ability to make better decisions without having to think harder. Participants who received metacognitive feedback also performed better in a more complicated version of the Web of Cash game (Experiment 2), when there was a 24-hour delay between training and testing (Experiment 3, suggesting training effects were retained over time), and in a less structured version of the game (Experiment 4). Metacognitive feedback resulted in better performance on the more naturalistic Road Trip paradigm compared to the video-only and no-feedback groups, suggesting metacognitive training can transfer to new situations (Experiment 5). Metacognitive feedback with both the delay penalties and information about the optimal choice improved performance more than either component individually, and neither individual component improved performance more than receiving no training (Experiment 6). This suggests both aspects of metacognitive feedback are critical to the improvements in decision-making and planning.
What's the impact?
This study found that metacognitive feedback provided by an artificially intelligent tutor helped people quickly learn effective decision-making strategies. The novel feedback method outperformed conventional feedback approaches at improving decision-making performance. These findings represent the first steps toward using artificial intelligence tutors in increasingly realistic situations to improve decision-making processes and lead to more optimal outcomes.