We examine whether humanizing artificial intelligence (AI) systems enhances auditor reliance on the specialist evidence they produce. Auditors’ use of specialist evidence continues to be a source of regulatory concern, and audit firms are making substantial investments in AI systems that will generate specialist evidence. The firms hope that these systems, like human specialists, will provide evidence that auditors will use and that will improve audit outcomes. However, even the most reliable AI systems will not be perfect, and auditors will inevitably encounter AI systems that make errors. Consistent with prior research, we demonstrate that when the specialist is known to have made an error, auditors discount the evidence from the AI system more heavily than identical evidence from the human specialist. This tendency to discount computer-based advice more heavily than identical human advice is referred to as “algorithm aversion,” and auditor susceptibility to algorithm aversion can negatively impact audit quality. Accordingly, we investigate whether humanizing the AI system mitigates the effects of algorithm aversion on auditors’ judgments. We find that adding humanizing elements to an AI system facilitates auditors’ reliance on the evidence these systems produce, particularly after observing the system err. Our findings suggest that design choices around the AI systems being implemented can affect their usefulness and, ultimately, audit quality.