Objective: Clinical prediction models trained on electronic health records are routinely evaluated for fairness on observed feature values, but whether the pattern of which measurements are absent is itself informative remains unaudited. We developed the Missingness Demographic Leakage Audit (MDLA), a reproducible four-step informatics framework that tests whether patterns of clinical measurement absence function as latent demographic proxies, constituting a bias pathway invisible to standard fairness audits.

Materials and Methods: We applied MDLA to a development cohort (MIMIC-IV v2.2; n=50,827; mortality 10.2%) and an external validation cohort (eICU-CRD v2.0; n=137,773; mortality 9.5%), following TRIPOD+AI reporting standards. XGBoost, random forest, and logistic regression models were trained on 43 clinical features and 44 binary missingness indicators. MDLA quantified how well demographic group membership could be predicted from missingness alone, tested feature-level missingness-demographic associations with Bonferroni correction, and verified model reliance on missingness via ablation. A calibration-aware fairness audit evaluated five criteria across four demographic axes, and six post-hoc recalibration strategies were compared on a fairness-utility Pareto frontier.

Results: Missingness indicators alone predicted racial group membership above chance (AUROC=0.543; 95% CI, 0.540-0.546), with 18 of 43 features showing Bonferroni-significant race-missingness associations (all Cramér's V
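The core leakage test described above, predicting demographic group membership from binary missingness indicators alone and reporting a cross-validated AUROC, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the cohort size, number of indicators, and effect sizes here are hypothetical, chosen only so that a small, detectable missingness-demographic dependence exists.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000  # hypothetical cohort size (the paper's cohorts are far larger)

# Binary demographic label for illustration; MDLA audits multiple axes.
group = rng.integers(0, 2, size=n)

# Simulate 44 binary missingness indicators (1 = measurement absent).
# A handful depend weakly on group, mimicking the leakage MDLA detects.
X = (rng.random((n, 44)) < 0.30)
for j in range(5):
    X[:, j] = rng.random(n) < (0.25 + 0.10 * group)
X = X.astype(int)

# Leakage test: predict group membership from missingness indicators
# alone; cross-validated AUROC above 0.5 signals a demographic proxy.
clf = LogisticRegression(max_iter=1000)
auroc = cross_val_score(clf, X, group, cv=5, scoring="roc_auc").mean()
print(f"missingness-only AUROC: {auroc:.3f}")
```

In practice the indicators would be derived from the clinical feature matrix (e.g., `df[features].isna().astype(int)`), and the AUROC would be compared against a permutation-based chance distribution rather than a bare 0.5 threshold.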