People often hesitate to rely on algorithmic advice, even when it is objectively more accurate than human input—a phenomenon
known as algorithm aversion. In two experiments, we investigated the cognitive mechanisms underlying this effect in a clinical
decision-making context. Participants evaluated X-rays for bone fractures, with each image accompanied by advice purportedly
from either an algorithm or a human source. Across experiments, response times were longer when the advice was attributed to an algorithm,
indicating increased deliberation. Evidence accumulation modeling revealed that participants set higher decision thresholds
when evaluating algorithmic advice, reflecting a more cautious decision strategy. This hesitancy, observed when the human
advice was attributed to lay participants (Experiment 1), persisted when it was attributed to expert radiologists
(Experiment 2). Accumulation rates and prior preferences did not differ across advisor types, suggesting that algorithm aversion
stems specifically from increased caution rather than reduced perceived reliability. These findings advance the theoretical
understanding of algorithm aversion by identifying response caution as a core mechanism and demonstrate that the aversion
manifests as a strategic shift in decision-making. More broadly, the results show how formal models of decision-making can
clarify the cognitive architecture of trust in automated systems, offering a foundation for future work on optimizing
human–algorithm collaboration.
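
For readers less familiar with evidence accumulation models, the parameter mapping can be illustrated with a generic drift-diffusion formulation; this is a standard sketch of the model class, not necessarily the exact specification fitted in the experiments. Accumulated evidence $x(t)$ evolves as

\[
dx(t) = v\,dt + s\,dW(t), \qquad x(0) = z, \qquad \text{a response is made when } x(t) \text{ first reaches } 0 \text{ or } a,
\]

where $v$ is the accumulation (drift) rate, $z$ the starting point capturing any prior preference between response options, $s$ the diffusion noise, and $a$ the decision threshold. A larger $a$ means more evidence is required before responding, i.e., greater response caution; in the present results, $a$ is the parameter that differed between algorithmic and human advisors, whereas $v$ and $z$ did not.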