For the past ninety years, lie detection in all of its technological forms has been routinely excluded from American courtrooms. A variety of explanations have been offered to justify this exclusion: a lack of adequate scientific underpinnings, inconsistent published error rates, and the notion that lie detection would usurp the jury's function as the ultimate fact finder and arbiter of credibility. Proponents have attempted to rehabilitate the practice, arguing that the evidence (whatever the technology) meets the standards for admissibility and concluding that the polygraph is held to a higher bar than other forensic evidence. These scholars, however, have not dug deeper to ask why lie detection faces such a high bar. A systematic review of the exclusionary opinions on the polygraph, comparing both the technologies and the legal justifications to routinely admitted forensic sciences such as latent fingerprint identification, bite-mark analysis, and handwriting analysis, shows that the arguments for exclusion apply equally, if not more aptly, to those forensic sciences. This paper takes that previously missing step and asks: Why is lie detection held to so different a standard?