Executive Summary: This project explores key research problems in algorithmic decision-making with strategic agents, at the intersection of game theory and machine learning. The focus is on machine learning algorithms that act on data generated by human choices (credit card usage, coaching class attendance, health plan selection) to make consequential decisions such as bank loan approval, college admission, and health insurance pricing. Because the agents who supply this data can adapt their behavior to influence outcomes, the project evaluates the resulting loss in efficiency of these algorithms, as well as the social aspects of algorithmic decisions, in particular downstream human-in-the-loop learning.

The project pursues three directions. First, it aims to make theoretical frameworks for strategic learning more practical by relaxing assumptions in the current strategic classification framework, and it proposes incentive-compatible and strategy-aware learning frameworks for classical machine learning objectives such as regression, clustering, and multi-armed bandits. Second, it studies the fairness and explainability of downstream human-in-the-loop learning: evaluating machine learning algorithms in their social contexts, defining fairness notions appropriate to the problem and context, and quantifying the cost-of-fairness of an algorithm. Third, it experimentally evaluates the proposed algorithms against state-of-the-art methods on publicly available datasets.

In conclusion, this project aims to develop practical and efficient algorithmic decision-making frameworks that comply with distributive-justice norms, such as fairness and explainability, while addressing the challenges posed by strategic agents in finance, healthcare, education, and governance.
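The core phenomenon motivating this research, agents adapting their reported data to influence a decision rule, can be illustrated with a minimal toy sketch. Everything below (the loan-approval threshold, the linear manipulation cost, the uniform score distribution) is a hypothetical illustration, not a model from the proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each agent has a true qualification score x in [0, 1]; the bank
# approves a loan if the *reported* score is at least theta. An agent can
# inflate its reported score at a linear cost c per unit of inflation.
theta, c, benefit = 0.5, 1.5, 1.0
x = rng.uniform(0.0, 1.0, size=1000)   # true scores
labels = x >= theta                     # ground truth: who is truly qualified

def best_response(x, theta, c, benefit):
    """Each sub-threshold agent inflates exactly to theta iff the benefit
    of approval exceeds the manipulation cost; others report honestly."""
    gap = np.maximum(theta - x, 0.0)
    manipulate = (x < theta) & (benefit - c * gap > 0)
    return np.where(manipulate, theta, x)

reported = best_response(x, theta, c, benefit)
honest_accuracy = np.mean((x >= theta) == labels)         # no gaming: 1.0
gamed_accuracy = np.mean((reported >= theta) == labels)   # strictly lower
print(honest_accuracy, gamed_accuracy)
```

A decision rule that is perfectly accurate on honest data loses accuracy once agents best-respond to it; the "loss in efficiency" studied in the project is precisely this gap, and incentive-compatible designs aim to shrink it.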