MarkTechPost@AI, December 18, 2024
Self-Calibrating Conformal Prediction: Enhancing Reliability and Uncertainty Quantification in Regression Tasks

Self-Calibrating Conformal Prediction (SC-CP) is a new method that combines Venn-Abers calibration with conformal prediction to improve the reliability and uncertainty quantification of predictions in regression tasks. By extending Venn-Abers calibration to regression, it produces calibrated point predictions together with prediction intervals that enjoy finite-sample validity. SC-CP uses iterative calibration with isotonic regression to generate a prediction set guaranteed to contain a perfectly calibrated point prediction, and then constructs intervals centered on the calibrated output, effectively balancing calibration and predictive performance, especially in small samples. Experimental results show that SC-CP performs well on a medical-utilization dataset, achieving narrower intervals, better calibration, and fairer predictions, while adapting to heteroscedasticity and preserving fairness across subgroups.

🎯 SC-CP combines Venn-Abers calibration with conformal prediction to provide regression tasks with calibrated point predictions and prediction intervals that have finite-sample validity.

🧪 Venn-Abers calibration is extended to regression: iterative calibration with isotonic regression produces a prediction set containing a perfectly calibrated point prediction, improving predictive accuracy.

📊 SC-CP builds intervals centered on the calibrated output, using adaptive intervals that account for overfitting and uncertainty and thereby balance calibration and predictive performance.

📈 Experiments on the Medical Expenditure Panel Survey (MEPS) dataset show that SC-CP achieves narrower prediction intervals, better calibration, and fairer predictions, while adapting to heteroscedasticity.

In machine learning, reliable predictions and uncertainty quantification are critical for decision-making, particularly in safety-sensitive domains like healthcare. Model calibration ensures predictions accurately reflect true outcomes, making them robust against extreme over- or underestimation and facilitating trustworthy decision-making. Predictive inference methods, such as Conformal Prediction (CP), offer a model-agnostic and distribution-free approach to uncertainty quantification by generating prediction intervals that contain the true outcome with a user-specified probability. However, standard CP only provides marginal coverage, averaging performance across all contexts. Achieving context-conditional coverage, which accounts for specific decision-making scenarios, typically requires additional assumptions. As a result, researchers have developed methods to provide weaker but practical forms of conditional validity, such as prediction-conditional coverage.
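
For readers unfamiliar with the baseline, the sketch below shows standard split conformal prediction for regression, the procedure behind the marginal coverage guarantee mentioned above. The predictor `model` (any scikit-learn-style regressor), the calibration arrays, and all variable names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of split conformal prediction for regression (marginal coverage).
# Assumes a pre-trained point predictor `model` with a .predict() method and a
# non-empty held-out calibration set.
import numpy as np

def split_conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1):
    # Absolute-residual conformity scores on the calibration split.
    scores = np.abs(y_calib - model.predict(X_calib))
    n = len(scores)
    # Finite-sample-corrected empirical quantile: the ceil((n+1)(1-alpha))-th
    # smallest score (clipped to the largest score when n is small).
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = np.sort(scores)[k - 1]
    preds = model.predict(X_test)
    # Intervals cover the true outcome with probability >= 1 - alpha, on average
    # over contexts (marginal coverage only).
    return preds - q, preds + q
```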

Recent advancements have explored different approaches to conditional validity and calibration. Techniques like Mondrian CP apply context-specific binning schemes or regression trees to construct prediction intervals, but they often lack calibrated point predictions and self-calibrated intervals. SC-CP addresses these limitations by using isotonic calibration to discretize the predictor adaptively, achieving improved conformity scores, calibrated predictions, and self-calibrated intervals. Additionally, methods like Multivalid-CP and difficulty-aware CP further refine prediction intervals by conditioning on class labels, prediction set sizes, or difficulty estimates. While approaches like Venn-Abers calibration and its regression extensions have been explored, challenges persist in balancing model accuracy, interval width, and conditional validity without increasing computational overhead.
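
As a rough illustration of the isotonic-calibration ingredient mentioned above, the hedged sketch below post-hoc calibrates a regression predictor with scikit-learn's `IsotonicRegression`; the helper name and its inputs are assumptions for exposition, not the authors' code.

```python
# Hedged sketch: isotonic (post-hoc) calibration of a regression predictor.
# `model`, `X_calib`, `y_calib` are placeholder names; scikit-learn is assumed.
from sklearn.isotonic import IsotonicRegression

def isotonic_calibrate(model, X_calib, y_calib):
    # Fit a monotone map from raw predictions to observed outcomes.
    raw = model.predict(X_calib)
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(raw, y_calib)
    # The fitted map is piecewise constant, so it also induces a data-adaptive
    # binning of the prediction space (the "discretization" referred to above).
    return lambda X: iso.predict(model.predict(X))
```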

Researchers from the University of Washington, UC Berkeley, and UCSF have introduced Self-Calibrating Conformal Prediction. This method combines Venn-Abers calibration and conformal prediction to deliver both calibrated point predictions and prediction intervals with finite-sample validity conditional on these predictions. Extending the Venn-Abers method from binary classification to regression enhances prediction accuracy and interval efficiency. Their framework analyzes the interplay between model calibration and predictive inference, ensuring valid coverage while improving practical performance. Real-world experiments demonstrate its effectiveness, offering a robust and efficient alternative to feature-conditional validity in decision-making tasks requiring both point and interval predictions.

Self-Calibrating Conformal Prediction (SC-CP) is a modified version of CP that ensures both finite-sample validity and post-hoc applicability while achieving perfect calibration. It introduces Venn-Abers calibration, an extension of isotonic regression, to produce calibrated predictions in regression tasks. Venn-Abers generates prediction sets that are guaranteed to include a perfectly calibrated point prediction by iteratively calibrating over imputed outcomes and leveraging isotonic regression. SC-CP further conformalizes these predictions, constructing intervals centered around the calibrated outputs with quantifiable uncertainty. This approach effectively balances calibration and predictive performance, especially in small samples, by accounting for overfitting and uncertainty through adaptive intervals.
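
The following is a simplified, hedged sketch of the recipe described in this paragraph: Venn-Abers calibration is emulated by re-running isotonic calibration once per imputed test outcome, and the calibrated prediction is then conformalized within its isotonic bin. The grid of imputed outcomes, the use of the set's median as the point prediction, and all variable names are illustrative assumptions, not the authors' exact implementation.

```python
# Simplified sketch of the SC-CP idea under the assumptions stated above.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def sc_cp_interval(model, X_calib, y_calib, x_test, alpha=0.1, n_grid=50):
    f_calib = model.predict(X_calib)
    f_test = float(model.predict(x_test.reshape(1, -1))[0])  # x_test: 1-D feature vector

    # (1) Venn-Abers-style set: one calibrated prediction per imputed test outcome.
    grid = np.linspace(y_calib.min(), y_calib.max(), n_grid)
    venn_preds = []
    for y_imputed in grid:
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(np.append(f_calib, f_test), np.append(y_calib, y_imputed))
        venn_preds.append(iso.predict([f_test])[0])
    point = float(np.median(venn_preds))  # assumed summary of the Venn-Abers set

    # (2) Conformalize around the calibrated output, conditional on the isotonic
    # bin: score only calibration points whose calibrated prediction matches the
    # test point's calibrated prediction.
    iso0 = IsotonicRegression(out_of_bounds="clip").fit(f_calib, y_calib)
    calib_preds = iso0.predict(f_calib)
    in_bin = np.isclose(calib_preds, iso0.predict([f_test])[0])
    scores = np.abs(y_calib[in_bin] - calib_preds[in_bin])
    n = len(scores)
    if n == 0:
        return point, (-np.inf, np.inf)  # degenerate bin: no calibration data
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = np.sort(scores)[k - 1]
    return point, (point - q, point + q)
```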

Experiments on the MEPS dataset predict healthcare utilization while evaluating prediction-conditional validity across sensitive subgroups. The dataset comprises 15,656 samples with 139 features, with race as the sensitive attribute. The data is split into training, calibration, and test sets, and an XGBoost model serves as the initial predictor under two settings: poorly calibrated (untransformed outcomes) and well calibrated (transformed outcomes). SC-CP is compared against Marginal, Mondrian, CQR, and Kernel baselines. Results show that SC-CP achieves narrower intervals, improved calibration, and fairer predictions with reduced subgroup calibration errors. Unlike the baselines, SC-CP adapts to heteroscedasticity while attaining the desired coverage and interval efficiency.
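
A minimal sketch of the evaluation quantities referenced here: empirical coverage and average interval width, overall and per subgroup (e.g., by the sensitive race attribute). The NumPy arrays and their names are illustrative assumptions.

```python
# Coverage and interval-width metrics, overall and by subgroup.
import numpy as np

def coverage_and_width(y_test, lower, upper, groups=None):
    covered = (y_test >= lower) & (y_test <= upper)
    width = upper - lower
    report = {"coverage": covered.mean(), "avg_width": width.mean()}
    if groups is not None:
        # Subgroup breakdown used to assess fairness of coverage and efficiency.
        for g in np.unique(groups):
            mask = groups == g
            report[f"coverage[{g}]"] = covered[mask].mean()
            report[f"avg_width[{g}]"] = width[mask].mean()
    return report
```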

In conclusion, SC-CP effectively integrates Venn-Abers calibration with Conformal Prediction to deliver calibrated point predictions and prediction intervals with finite-sample validity. By extending Venn-Abers calibration to regression tasks, SC-CP ensures robust prediction accuracy while improving interval efficiency and coverage conditional on forecasts. Experimental results, particularly on the MEPS dataset, highlight its ability to adapt to heteroscedasticity, achieve narrower prediction intervals, and maintain fairness across subgroups. Compared to traditional methods, SC-CP offers a practical and computationally efficient approach, making it particularly suitable for safety-critical applications requiring reliable uncertainty quantification and trustworthy predictions.


