Robust binary and multinomial logit models for classification with data uncertainties


Publication Type
Journal Article
Year of Publication
2025
Journal
European Journal of Operational Research
Abstract
Binary logit (BNL) and multinomial logit (MNL) models are the two most widely used discrete choice models for travel behavior modeling and prediction. However, in many scenarios, the data collected for these models are subject to measurement errors. Previous studies on measurement errors mostly focus on "better estimating model parameters" from training data. In this study, we focus on using BNL and MNL for classification problems, that is, on "better predicting the behavior of new samples" when measurement errors occur in the testing data. To this end, we propose a robust BNL and MNL framework that accounts for data uncertainties in both features and labels. The models are based on robust optimization theory, minimizing the worst-case loss over an uncertainty set of data scenarios. Specifically, for feature uncertainties, we assume that the norm of the measurement errors in the features is smaller than a pre-established threshold. We model label uncertainties by limiting the number of mislabeled choices to at most a pre-specified budget. Under these assumptions, we derive a tractable robust counterpart. The derived robust-feature BNL and robust-label MNL models are exact, whereas the formulation for the robust-feature MNL model is an approximation of the exact robust optimization problem; an upper bound on the approximation gap is provided. We prove that the robust estimators are inconsistent but have a higher trace of the Fisher information matrix. They are preferred when out-of-sample data contain errors, owing to the shrunken scale of the estimated parameters. The proposed models are validated on a binary choice data set and a multinomial choice data set. Results show that the robust models (for both features and labels) can outperform the conventional BNL and MNL models in prediction accuracy and log-likelihood. We show that the robustness works like "regularization" and thus yields better generalizability.
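
For concreteness, the following is a minimal sketch, not the paper's own formulation, of how such a robust-feature binary logit objective can be written under a per-sample norm-bounded error assumption with labels coded as y_n ∈ {−1, +1}; the symbols δ_n, ρ, Γ, and the dual norm ‖·‖_* are introduced here for illustration only:

\[
\min_{\beta} \; \sum_{n=1}^{N} \max_{\|\delta_n\| \le \rho}
  \log\!\bigl(1 + e^{-y_n \beta^{\top}(x_n + \delta_n)}\bigr)
  \;=\;
\min_{\beta} \; \sum_{n=1}^{N}
  \log\!\bigl(1 + e^{-y_n \beta^{\top} x_n + \rho \,\|\beta\|_{*}}\bigr),
\]

where the inner maximum is attained in closed form because the logistic loss is increasing in its argument and \(\max_{\|\delta\| \le \rho} (-y\,\beta^{\top}\delta) = \rho \|\beta\|_{*}\). A label-robust counterpart in the same hypothetical notation, with at most \(\Gamma\) mislabeled observations, would take the form

\[
\min_{\beta} \; \max_{S \subseteq \{1,\dots,N\},\, |S| \le \Gamma} \;
\Bigl[ \sum_{n \notin S} \ell(\beta; x_n, y_n) + \sum_{n \in S} \ell(\beta; x_n, -y_n) \Bigr].
\]

The dual-norm term in the first display penalizes large ‖β‖, which is consistent with the abstract's observations that robustness behaves like regularization and shrinks the scale of the estimated parameters.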