Recalibrating Human–Machine Relations through Bias-Aware Machine Learning: Technical Pathways to Fairness and Trust
DOI: https://doi.org/10.63332/joph.v5i4.1091

Keywords: Bias-Aware Machine Learning, Algorithmic Fairness, Trust in AI Systems, Intersectional Bias, Explainable AI (XAI)

Abstract
Given the growing role of artificial intelligence (AI) in decision-making across fields such as healthcare, law, and finance, concerns about the bias and fairness of automated decisions have increased. This paper presents an extensive discussion of bias-aware machine learning (ML), covering fairness-aware modeling, bias detection, and bias mitigation. It examines notions of fairness, different forms of algorithmic bias including intersectional bias, and the societal impact of biased systems. The paper then turns to trust dynamics, legal and regulatory frameworks, and the role of explainable AI (XAI) in promoting transparency. It surveys current advances in combatting bias, including pre-processing, in-processing, and post-processing methods, drawing on examples from major application domains. Despite these improvements, remaining challenges include limited attention to the interplay among different identities, weak frameworks for implementation and operation in other parts of the world, and inadequate abuse-detection mechanisms. Accordingly, we present research directions centered on transparency, privacy-preserving fairness audits, and shared control, with the aim of guiding the development of fair, responsible, and competent AI systems.
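To make one of the fairness notions the abstract refers to concrete, the sketch below computes the demographic parity difference, a common group-fairness metric: the gap in favorable-outcome rates between two groups. The data, group labels, and function name are hypothetical illustrations, not drawn from the paper itself.

```python
# Hypothetical sketch of a group-fairness check (demographic parity).
# A gap near 0 suggests the model treats both groups' base rates similarly.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between groups 'a' and 'b'."""
    rate = {}
    for g in ("a", "b"):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate["a"] - rate["b"])

# Hypothetical binary predictions (1 = favorable outcome) and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

In practice such a metric would be one input to the pre-, in-, or post-processing mitigation pipelines the abstract surveys, e.g. as an audit statistic computed before and after reweighting the training data.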
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.