Technology, AI, and Ethics

Martin, K. Johnson’s Algorithmic Accountability and Platform Responsibility. In Computer Ethics Across Disciplines: Applying Deborah Johnson’s Philosophy to Algorithmic Accountability and AI. Springer.

In this chapter I illustrate the impact of Johnson’s scholarship on the study of corporate responsibility and extend her accountability-as-practice to begin to scope (a) the normative grounding for why tech firms are accountable to their stakeholders, (b) relatedly, what tech firms are accountable for, and (c) to whom firms are accountable. Firms are accountable for their design and deployment decisions about AI because firms have the power to make different design and deployment decisions that elicit different moral implications in use. Firms are accountable for the decisions they make that impact others, whether those impacts are positive, as when creating value for stakeholders, or negative, as when destroying value for other stakeholders. Currently, firms exhibit a form of accountability dissonance: scholars and firms take credit for the ability to design algorithms that create value and a positive impact on key stakeholders while simultaneously shying away from the negative consequences, rules broken, value destroyed, or rights diminished by those same decisions. I use online platforms to illustrate the importance of Johnson’s approach to algorithmic accountability in piercing through a fog of accountability. In each case, firms have been slow to embrace accountability for the moral implications of their decisions, and attribution is made more complicated by the use of AI on a platform.

Martin, K., H. Guo, & R. Easley. 2023. When Platforms Act Opportunistically: Ethics of Platform Governance. Working Paper.

As platforms become more dominant in the marketplace, they face increased scrutiny from the press, regulators, and academics regarding the policy decisions they make to govern participants in the exchange. Opportunistic policies may make transactions more difficult for exchange participants and even harm actors on the platform. The goal of this paper is to delineate the boundaries of legitimate platform governance and to normatively ground why certain platform governance policies are unethical. We argue that the legitimacy of a platform company’s governance policy depends not only on the market power of the firm but also on the beneficiary of the policy intervention. Platforms exist to create an efficient exchange for other market actors and have a dual purpose: benefiting the efficiency of the exchange as well as the traditional long-term value of the firm. While in most cases these purposes are aligned, some platforms will face opportunities where an opportunistic policy would harm the efficiency of the exchange by increasing the transaction costs of participants while benefiting the firm. While platforms with low market power enjoy the flexibility of being opportunistic in their policies, we argue that platforms with high market power have a duty to the exchange as the primary beneficiary of their policies. We provide boundary conditions for determining whether a platform company’s interventions violate its obligation to maintain the integrity of the market and the efficiency of the participants in that market.

Martin, K. 2023. Predatory Predictions and the Ethics of Predictive Analytics. Journal of the Association for Information Science and Technology (JASIST).  

In this paper I critically examine ethical issues introduced by predictive analytics. I argue that firms can have a market incentive to construct deceptively inflated true-positive outcomes: individuals are over-categorized as requiring a penalizing treatment, and the treatment itself leads to mistakenly concluding the label was correct. I show that differences in power between the firms developing and using predictive analytics and the subjects of those programs lead to firms reaping the benefits of predatory predictions while subjects bear the brunt of the costs. While profitable, the use of predatory predictions can deceive stakeholders by inflating the measurement of accuracy, diminish the individuality of subjects, and exert arbitrary power. I then argue that firms have a responsibility to distinguish between the treatment effect and the predictive power of a predictive analytics program, to better internalize the costs of categorizing someone as needing a penalizing treatment, and to justify both the predictions about subjects and the general use of predictive analytics. Subjecting individuals to predatory predictions solely for a firm’s efficiency and benefit is unethical and an arbitrary exertion of power. Firms developing and deploying a predictive analytics program can benefit from constructing predatory predictions while the cost is borne by the less powerful subjects of the program.
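
The self-confirming mechanism described above can be made concrete with a small simulation. The sketch below is mine, not the paper’s: a model with no real predictive power flags subjects, a penalizing treatment raises the probability of the predicted bad outcome, and the measured precision of the prediction inflates accordingly. All parameters are illustrative assumptions.

```python
# A minimal simulation (not from the paper) of a predatory prediction:
# when the penalizing "treatment" itself causes the predicted outcome,
# measured accuracy is inflated. Parameters are illustrative assumptions.
import random

random.seed(0)

N = 100_000
BASE_RATE = 0.10         # probability of a bad outcome absent any treatment
TREATMENT_EFFECT = 0.25  # extra probability of a bad outcome caused by the penalty
FLAG_RATE = 0.30         # fraction of subjects the model labels "high risk"

flagged = bad_without = bad_with = 0
for _ in range(N):
    is_flagged = random.random() < FLAG_RATE  # model with no real predictive power
    outcome_base = random.random() < BASE_RATE
    # The penalizing treatment (e.g., denial of credit) makes the bad
    # outcome more likely for flagged subjects.
    outcome_treated = outcome_base or (is_flagged and random.random() < TREATMENT_EFFECT)
    if is_flagged:
        flagged += 1
        bad_without += outcome_base
        bad_with += outcome_treated

print(f"Apparent precision without treatment effect: {bad_without / flagged:.2f}")
print(f"Apparent precision with treatment effect:    {bad_with / flagged:.2f}")
# The second number is far higher even though the model is pure noise:
# the firm can mistake the treatment effect for predictive power.
```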

Martin, K. & B. Parmar. 2024. AI and the Creation of the Knowledge Gap: The Ethics of AI Transparency. Working Paper.

Firms have obligations to stakeholders that do not disappear when managers adopt AI decision systems. We introduce the concept of the AI knowledge gap, where AI provides limited information about its operations while stakeholder demands for information justifying firm decisions increase. We develop a framework of what firms must know about an AI model in the procurement process to ensure they understand how the model allows the firm to meet its existing obligations, including understanding the anticipated risks of using the AI decision system, preventing foreseeable risks, and having a plan for resilience. We argue there are no conditions under which it is ethical to unquestioningly adopt recommendations from a black box AI program within an organization. According to this argument, adequate comprehension and knowledge of an AI model is not a negotiable design feature but a strategic and moral requirement.
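
As one hypothetical illustration (the names and fields below are mine, not the paper’s framework), the procurement knowledge requirements described above could be operationalized as a record that blocks adoption until risks, prevention, and resilience are documented.

```python
# A hypothetical sketch of the procurement idea above: adoption is blocked
# unless the firm can document anticipated risks, prevention measures, and
# a resilience plan for the model. Names are assumptions, not the paper's.
from dataclasses import dataclass, field

@dataclass
class AIProcurementRecord:
    model_name: str
    anticipated_risks: list[str] = field(default_factory=list)
    prevention_measures: list[str] = field(default_factory=list)
    resilience_plan: str = ""  # how the firm detects and recovers from failures

    def ready_to_adopt(self) -> bool:
        # The paper's claim, operationalized: no black-box adoption. If the
        # firm cannot articulate risks, prevention, and resilience, it does
        # not know enough about the model to adopt it.
        return bool(self.anticipated_risks
                    and self.prevention_measures
                    and self.resilience_plan)

record = AIProcurementRecord(
    model_name="vendor-credit-scoring-v2",  # hypothetical vendor model
    anticipated_risks=["disparate error rates across applicant groups"],
    prevention_measures=["pre-deployment audit of group-wise error rates"],
    resilience_plan="appeal channel plus quarterly re-audit",
)
print(record.ready_to_adopt())  # True only when all three are documented
```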

Villegas-Galaviz, C. and K. Martin. Moral Approaches to AI: Missing Power and Marginalized Stakeholders. Working Paper.

The introduction of AI to augment business decisions has strained the standard ethical approaches in business ethics, where the firm is to focus on the interests of stakeholders (SHs). Unique attributes of AI and AI research – reinforcing systems of power, surreptitious yet pervasive data collection, and marginalizing vulnerable SHs – can be better addressed through specific normative approaches that raise the voice of the marginalized SHs either by focusing on the power dynamics of the larger socio-technical system or by prioritizing the relationships between actors and their unique vulnerabilities.

The goal of this article is to examine the prominent moral approaches to the ethics of artificial intelligence (AI) in business ethics, identify the strengths and limitations of each approach, and propose normative approaches focused on power and vulnerable SHs as needed for the examination of AI in business ethics.

Villegas-Galaviz, C. and K. Martin. 2023. Moral Distance, AI, and the Ethics of Care. AI & Society.

This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision-making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision-making.

Waldman, A. and K. Martin. 2022.  Governing algorithmic decisions:  The role of decision importance and governance on perceived legitimacy of algorithmic decisions.  Big Data & Society.  Jan-June, 1-16.

The algorithmic accountability literature to date has primarily focused on procedural tools to govern automated decision-making systems. That prescriptive literature elides a fundamentally empirical question: whether and under what circumstances, if any, is the use of algorithmic systems to make public policy decisions perceived as legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the relative importance of the type of decision, the procedural governance, the input data used, and outcome errors on perceptions of the legitimacy of algorithmic public policy decisions as compared to similar human decisions. Among other findings, we find that the type of decision, low importance versus high importance, impacts the perceived legitimacy of automated decisions. We find that human governance of algorithmic systems (aka human-in-the-loop) increases perceptions of the legitimacy of algorithmic decision-making systems, even when those decisions are likely to result in significant errors. Notably, we also find the penalty to perceived legitimacy is greater when human decision-makers make mistakes than when algorithmic systems make the same errors. The positive impact on perceived legitimacy from governance, such as human-in-the-loop, is greatest for highly pivotal decisions such as parole, policing, and healthcare. After discussing the study’s limitations, we outline avenues for future research.
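
For readers unfamiliar with the method, the sketch below shows how a factorial vignette design is typically analyzed; the factor names, effect sizes, and data are assumptions of mine, not the study’s data or code.

```python
# A hedged sketch (not the authors' code) of factorial vignette analysis:
# simulate ratings of vignettes whose factors vary, then regress the
# rating on the factor levels to recover each factor's relative weight.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

df = pd.DataFrame({
    "importance": rng.choice(["low", "high"], n),            # decision importance
    "governance": rng.choice(["none", "human_in_loop"], n),  # procedural governance
    "outcome":    rng.choice(["correct", "error"], n),       # decision outcome
})

# Simulated legitimacy rating on a 0-100 scale with assumed effects.
df["legitimacy"] = (
    50
    - 8 * (df["importance"] == "high")            # high-stakes decisions rated lower
    + 10 * (df["governance"] == "human_in_loop")  # governance raises perceived legitimacy
    - 12 * (df["outcome"] == "error")             # errors lower it
    + rng.normal(0, 10, n)
)

# OLS on the manipulated factors recovers the assumed effects.
model = smf.ols("legitimacy ~ C(importance) + C(governance) + C(outcome)", data=df).fit()
print(model.params.round(2))
```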

Martin, K. and A. Waldman. 2022. Are Algorithmic Decisions Perceived as Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of Algorithmic Decisions. Journal of Business Ethics.

Firms use algorithms to make important business decisions. To date, the algorithmic accountability literature has elided a fundamentally empirical question important to business ethics and management: Under what circumstances, if any, are algorithmic decision-making systems considered legitimate? The present study begins to answer this question. Using factorial vignette survey methodology, we explore the impact of decision importance, governance, outcomes, and data inputs on perceptions of the legitimacy of algorithmic decisions made by firms. We find that many of the procedural governance mechanisms in practice today, such as notices and impact statements, do not lead to algorithmic decisions being perceived as more legitimate in general, and, consistent with legitimacy theory, that algorithmic decisions with good outcomes are perceived as more legitimate than bad outcomes. Yet, robust governance, such as offering an appeal process, can create a legitimacy dividend for decisions with bad outcomes. However, when arbitrary or morally dubious factors are used to make decisions, most legitimacy dividends are erased. In other words, companies cannot overcome the legitimacy penalty of using arbitrary or morally dubious factors, such as race or the day of the week, with a good outcome or an appeal process for individuals. These findings add new perspectives to both the literature on legitimacy and policy discussions on algorithmic decision-making in firms.

Martin, K. 2022.  Algorithmic Bias and Corporate Responsibility: How companies hide behind the false veil of the technological imperative.  Forthcoming in Ethics of Data and Analytics.  Taylor & Francis.  

In this chapter, I argue that acknowledging the value-laden biases of algorithms as inscribed in design allows us to identify the associated responsibility of the corporations that design, develop, and deploy algorithms. Put another way, claiming that algorithms are neutral, or that the design decisions of computer scientists are neutral, obscures the morally important decisions of computer and data scientists. I focus on the implications of making technological imperative arguments: framing algorithms as evolving under their own inertia, as providing more efficient, accurate decisions, and as outside the realm of any critical examination or moral evaluation. I argue specifically that judging AI on efficiency and pretending algorithms are inscrutable produces a veil of the technological imperative that shields corporations from being held accountable for the value-laden decisions made in the design, development, and deployment of algorithms. While there is always more to be researched and understood, we know quite a lot about testing algorithms. I then outline how the development of algorithms should be critically examined to elucidate the value-laden biases encoded in design and development. The moral examination of AI pierces the (false) veil of the technological imperative.
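
As a minimal illustration of the kind of testing the chapter says is well understood (my sketch with made-up data, not the chapter’s code), an audit can compare error rates across groups to surface value-laden design choices.

```python
# A minimal audit sketch: compare false positive rates across groups for a
# hypothetical model that is systematically harsher on group B. All data
# and thresholds are illustrative assumptions, not from the chapter.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], n)
actual = rng.random(n) < 0.2                       # true outcomes, same base rate
# Assumed model: adds a penalty to group B's risk scores.
score = rng.random(n) + np.where(group == "B", 0.15, 0.0)
predicted = score > 0.8

for g in ("A", "B"):
    mask = (group == g) & ~actual
    fpr = predicted[mask].mean()                   # false positive rate for group g
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A gap between the two rates is a design outcome, not a neutral fact:
# someone chose the features, threshold, and objective that produced it.
```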

Martin, K.  2022.  Manipulation, Privacy, and Choice.  North Carolina Journal of Law & Technology.

The phenomenon of interest in this article is targeted manipulation: the covert leveraging of a specific target’s vulnerabilities to steer their decisions toward the manipulator’s interests. I position online targeted manipulation as undermining the core economic assumptions of authentic choice in the market. I then explore how important choice is to markets and economics, how firms gained positions of power to exploit the vulnerabilities and weaknesses of individuals without the requisite safeguards in place, and how to govern firms in the position to manipulate. The power to manipulate is the power to undermine choice in the market. As such, firms in the position to manipulate threaten the autonomy of individuals, diminish the efficiency of transactions, and undermine the legitimacy of markets.

The goal of this paper is to argue that firms merely in the position to manipulate, with knowledge of individuals’ weaknesses and access to their decision making, should be regulated to ensure their interests are aligned with those of the target. The economic oddity is not that firms have data that render another market actor vulnerable; rather, the oddity is that so many firms have data to covertly manipulate others without safeguards in place. Market actors regularly share information about their concerns, preferences, weaknesses, and strengths within contracts, joint ventures, or relationships carrying professional duties. Online, companies have collected preferences and concerns without such safeguards in place.

Martin, K. and Carolina Villegas-Galaviz.  2022. AI and Corporate Responsibility: How and why firms are responsible for AI. In: Poff D.C., Michalos A.C. (eds) Encyclopedia of Business and Professional Ethics. Springer, Cham. https://doi.org/10.1007/978-3-319-23514-1_1297-1

When companies develop and use technology, who is responsible for the moral implications, both as designed during development and as they impact stakeholders during use, can be contested. This chapter explains how we think about corporate responsibility around the design, development, and use of AI.

Martin, K. and Parmar, B.  2021.  Designing Ethical Technology Requires Systems for Anticipation and Resilience. MIT Sloan Management Review.   https://sloanreview.mit.edu/article/designing-ethical-technology-requires-systems-for-anticipation-and-resilience/.

To avoid ethical lapses, organizations need to build systems that help to protect against preventable errors and to recover from ones that are unforeseeable.

Martin, K., Shilton, K., and Smith, J. 2019. Business and the Ethical Implications of Technology: Introduction to the Symposium. Journal of Business Ethics.

This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we first identify themes the symposium articles share and discuss how the set of articles illuminates diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally, we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

Martin, K. 2019.  Designing Ethical Algorithms. MISQ Executive. June 2019.

In this paper, targeting both practitioners and academics, I focus on algorithms as active, opinionated participants in algorithmic decisions, which, like all decisions, produce mistakes. I leverage what we know about effective decision making in firms to highlight the types of mistakes we can expect from algorithms and how to better identify, judge, and correct those inevitable mistakes. In effect, all algorithmic decisions will produce mistakes, but ethical algorithms will offer a mechanism to identify, judge, and correct them. Here, the onus shifts to the algorithm’s developer to design who is responsible for identifying mistakes, judging mistakes as appropriate (or not), and correcting those mistakes. Importantly, by creating inscrutable, autonomous algorithms, firms may voluntarily take on accountability for the role of the algorithm in the decision, including the ability to govern the inevitable mistakes.
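
One hypothetical way (my sketch, not the paper’s design) to build the mechanism the abstract calls for: log every algorithmic decision, route low-confidence cases to a human judge, and keep an appeal path so mistakes can be identified, judged, and corrected.

```python
# A hypothetical sketch of a mistake-handling wrapper around an
# algorithmic decision. All names and thresholds are my assumptions.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    corrected_outcome: Optional[str] = None  # filled in if an appeal succeeds

@dataclass
class AccountableDecider:
    model: Callable[[dict], tuple[str, float]]  # returns (outcome, confidence)
    review_threshold: float = 0.7               # below this, a human decides
    log: list[Decision] = field(default_factory=list)

    def decide(self, subject_id: str, features: dict) -> Decision:
        outcome, confidence = self.model(features)
        if confidence < self.review_threshold:
            outcome = self.human_review(subject_id, features)  # judge borderline cases
        decision = Decision(subject_id, outcome, confidence)
        self.log.append(decision)  # identify mistakes later via audits of the log
        return decision

    def appeal(self, subject_id: str, corrected: str) -> None:
        # Correct a mistake: the record keeps both outcomes for auditability.
        for d in self.log:
            if d.subject_id == subject_id:
                d.corrected_outcome = corrected

    def human_review(self, subject_id: str, features: dict) -> str:
        # Placeholder for an actual human-in-the-loop step.
        return "needs_review"
```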

Martin, K. 2019. Ethical Implications and Accountability of Algorithms. Journal of Business Ethics.

Algorithms silently structure our lives and can determine whether someone is hired, promoted, offered a loan, or provided housing, as well as determine which political ads and news articles consumers see. Yet the responsibility for algorithms in these important decisions is not clear. In this article, I identify whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms create moral consequences, reinforce or undercut ethical principles, and enable or diminish stakeholder rights and dignity. In addition, algorithms are an important factor in ethical decisions and influence the delegation of roles and responsibilities within those decisions. As such, firms should be responsible not only for the value-ladenness of an algorithm but also for designing who does what within the algorithmic decision. Accordingly, firms developing algorithms are accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision. Counter to current arguments, I find that if an algorithm is designed to preclude individuals from taking responsibility within a decision, then the designer of the algorithm should be held accountable for the ethical implications of the algorithm in use.

Martin, K. 2018.  Commentary: Trust and the Online Market-Maker:  A comment on Etzioni’s Cyber Trust. Journal of Business Ethics, 156(1): 21-24.

Etzioni’s article “Cyber Trust” highlights the importance of trust for advancing economic transactions, including those online between market strangers. In this comment, I highlight the importance of acknowledging the role of online market makers in fostering trust between strangers and the responsibility of developers as the designers of systems that foster (or destroy) trust.

Martin, K. 2016. Data Aggregators, Big Data, and Responsibility Online: Who is tracking us online and should they stop? The Information Society, 32(1): 51-63.

The goal of this paper is to examine the strategic choices of firms collecting consumer data online and to identify the roles and obligations of the actors within the current network of online tracking. In doing so, the focus shifts from placing the onus on individuals to make an informed choice to justifying the roles and responsibilities of firms when gathering, aggregating, and using consumers’ interests or behavior online.

Martin, K. 2015. Ethical Issues in the Big Data Industry. MIS Quarterly Executive.

Big Data combines information from diverse sources to create knowledge, make better predictions and tailor services. This article analyzes Big Data as an industry, not a technology, and identifies the ethical issues it faces. These issues arise from reselling consumers’ data to the secondary market for Big Data. Remedies for the issues are proposed, with the goal of fostering a sustainable Big Data Industry.

Martin, K. Forthcoming. Role of Business in Responsibility to Protect. In Responsibility to Protect and Private Actors. Cambridge University Press.

The UN’s Responsibility to Protect (R2P) focuses attention on the responsibilities of the global community to intervene and prevent human rights violations. Introduced in 2001 and gaining in popularity, the Responsibility to Protect suggests two sets of responsibilities: “(1) the responsibility of a state to protect its citizens from atrocities, and (2) the responsibility of the international community to prevent and react to massive human rights violations.” ...

This chapter seeks to better understand how private actors can contribute to the prevention, cessation, and aftermath of R2P events such as the violation of human rights. Specifically, I focus on firms in the information and communication technology (ICT) industry, such as telecommunications and Internet communication firms, that provide products and services normally provided by state actors and that impact the ability of human rights abuses to occur.

The goal of this paper is to develop a framework for the ethical analysis of global information technologies with an understanding of firms’ obligations within R2P. The introduction of Internet and telecommunication technologies to countries with authoritarian governments has facilitated the imprisonment of dissidents and the surveillance of citizens while also empowering users and protestors facing human rights violations. When established information technologies are introduced to new communities, such as when Google introduced its search technology to China or when Twitter was introduced in Iran, new patterns of use prove difficult to analyze.

Martin, K. 2014. Regulating Code. Book Review. Business Ethics Quarterly, 24(4):  624-627.

In Regulating Code: Good Governance and Better Regulation in the Information Age, Ian Brown and Christopher Marsden tackle the governance of Internet technology and examine how technologies interact with regulation, broadly construed, in order to identify “more economically effective and socially just regulation.”

Martin, K. 2013. Ethics Issues in Technology, in 3rd Edition of the Wiley (Blackwell) Encyclopedia of Management – Business Ethics Volume.

Martin, K. 2008. Internet Technologies in China: Insights on the Morally Important Influence of Managers. Journal of Business Ethics 83: 489-501.

Within Science and Technology Studies, much work has been accomplished to identify the moral importance of technology in order to clarify the influence of scientists, technologists, and managers. However, similar studies within business ethics have not kept pace with the nuanced and contextualized study of technology within Science and Technology Studies....

In this article, I analyze current arguments within business ethics as limiting both the moral importance of technology and the influence of managers. As I argue, such assumptions serve to narrow the scope of business ethics in the examination of technology. To reinforce the practical implications of these assumptions and to further illustrate the current arguments, I leverage the recent dialog around U.S. Internet technologies in China. The goal of this article is to broaden that which is morally salient and relevant to business managers and business ethicists in the analysis of technology by highlighting key lessons from seminal STS scholars. This article should be viewed as part of a nascent yet burgeoning dialog between business ethics and Science and Technology Studies, a dialog that benefits both fields of study.

Martin, K. & Freeman, R.E. 2004. The Separation of Technology and Ethics in Business Ethics. Journal of Business Ethics 53: 353-364.

The purpose of this paper is to draw out and make explicit the assumptions made in the treatment of technology within business ethics. Drawing on the work of Freeman (1994, 2000) on the assumed separation between business and ethics, we propose a similar separation exists in the current analysis of technology and ethics....

After first identifying and describing the separation thesis assumed in the analysis of technology, we will explore how this assumption manifests itself in the current literature. A different stream of analysis, that of science and technology studies (STS), provides a starting point in understanding the interconnectedness of technology and society. As we will demonstrate, business ethicists are uniquely positioned to analyze the relationship between business, technology, and society. The implications of a more complex and rich definition of ‘technology’ ripple through the analysis of business ethics. Finally, we propose a pragmatic approach to understanding technology and explore the implications of such an approach to technology. This new approach captures the broader understanding of technology advocated by those in STS and allows business ethicists to analyze a broader array of dilemmas and decisions.