During Apple's annual shareholders meeting, held virtually on February 28, 2024, a significant proposal, Proposal No. 7, submitted by the AFL-CIO Equity Index Funds and presented by their representative, Segal Marco Advisors, sparked notable discussion. The proposal called for Apple Inc. to prepare a transparency report detailing the company’s use of Artificial Intelligence (AI) in its business operations and to disclose any ethical guidelines the company has adopted concerning its use of AI technology.
The proposal emphasized the need for Apple to adopt an ethical framework for AI technology usage, suggesting that “adopting an ethical framework for the use of AI technology will strengthen our company’s position as a responsible and sustainable leader in its industry.” The proposal underscored the potential social policy issues raised by the integration of AI into business operations, such as discrimination, privacy violations, and the impact on employment due to automation.
In response, Apple’s Board recommended voting against Proposal No. 7, arguing that the company is “committed to responsibly advancing our products and services that use artificial intelligence” and that the scope of the requested report was overly broad, potentially encompassing disclosure of strategic plans and initiatives harmful to Apple’s competitive position. Apple’s statement highlighted its existing efforts in addressing ethical considerations, stating, “Apple has a robust approach to addressing ethical considerations across our business operations.”
Apple detailed its existing policies and practices, including the publication of a Civil Rights Assessment report, efforts in transparency through its machine learning research website, and inclusive design in product development. The company also emphasized its commitment to privacy as a fundamental human right, showcasing a proactive approach to user data transparency and control.
Consequences for Apple and the AI Industry
Apple’s response to Proposal No. 7 has broader implications for both the company and the wider AI industry. By outlining its current practices and ethical considerations, Apple sets a precedent for how major tech companies might navigate the complex landscape of AI ethics and transparency. Apple’s emphasis on existing guidelines and the breadth of its efforts in ethical AI deployment reflects a deliberate and thoughtful approach to technology development, a stance that could influence industry standards and expectations.
However, the Board’s opposition to the proposal also raises questions about the balance between competitive secrecy and the increasing demand from shareholders and the public for transparency in AI operations. This tension highlights a significant challenge for the AI industry: how to maintain innovation and competitive advantage while addressing ethical concerns and ensuring transparency.
For Apple, the consequences include continued scrutiny from stakeholders interested in more detailed disclosures regarding AI use and ethical considerations. The company’s approach to navigating these demands while protecting its competitive interests will be closely watched and may serve as a benchmark for other companies in the tech sector.
The broader AI industry faces the challenge of developing and adhering to ethical guidelines that balance innovation with social responsibility. Apple’s stance and the shareholder proposal reflect a growing conversation within the industry about the role of ethics in AI development, a discussion that will likely shape the future trajectory of AI technology and its integration into business and society.
Excerpt of the Notice of 2024 Annual Meeting of Shareholders and Proxy Statement, page 93ff. Copyright © 2024 Apple Inc. All rights reserved. Apple and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries and regions.

Apple has been advised that the AFL‑CIO Equity Index Funds, represented by Segal Marco Advisors, intends to submit the following proposal at the Annual Meeting:

Report on Use of AI

Shareholders request that Apple Inc. prepare a transparency report on the companyʼs use of Artificial Intelligence (“AI”) in its business operations and disclose any ethical guidelines that the company has adopted regarding the companyʼs use of AI technology. This report shall be made publicly available to the companyʼs shareholders on the companyʼs website, be prepared at a reasonable cost, and omit any information that is proprietary, privileged, or violative of contractual obligations.

Supporting Statement

If adopted, this proposal asks our company to issue a transparency report on the companyʼs use of AI technology and to disclose any ethical guidelines that the company has adopted regarding AI technology. We believe that adopting an ethical framework for the use of AI technology will strengthen our companyʼs position as a responsible and sustainable leader in its industry. By addressing the ethical considerations of AI in a transparent manner, we can build trust among our companyʼs stakeholders and contribute positively to society.

The adoption of AI technology into business raises a number of significant social policy issues. For example, the use of AI in human resources decisions may raise concerns about discrimination or bias against employees. The use of AI to automate jobs may result in mass layoffs and the closing of entire facilities. AI may be used in ways that violate the privacy of customers and members of the public.
AI technology may be used to generate “deep fake” media content that may result in the dissemination of false information in political elections.

The White House Office of Science and Technology Policy has developed a set of ethical guidelines to help guide the design, use, and deployment of AI. These five principles for an AI Bill of Rights are: 1. safe and effective systems, 2. algorithmic discrimination protections, 3. data privacy, 4. notice and explanation, and 5. human alternatives, consideration, and fallback. (White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” October 2022, available at https://www.whitehouse.gov/ostp/ai‑bill‑of‑rights).

We believe that the adoption of ethical guidelines for the use of AI can help improve our companyʼs bottom line by avoiding costly labor disruptions. In 2023, writers and performers went on strike against the Alliance of Motion Picture and Television Producers in part over concerns that the use of AI technology to create media content will infringe on the intellectual property and publicity rights of writers and performers and potentially displace human creators. (Wall Street Journal, “Hollywoodʼs Fight: How Much AI Is Too Much?,” July 31, 2023, available at https://www.wsj.com/articles/at‑the‑core‑of‑hollywoods‑ai‑fight‑how‑far‑is‑too‑far‑f57630df). In our view, AI systems should not be trained on copyrighted works, or the voices, likenesses and performances of professional performers, without transparency, consent and compensation to creators and rights holders. We also believe that AI should not be used to create literary material, to replace or supplant the creative work of professional writers.

The Board recommends a vote AGAINST Proposal No. 7 because:

- we are committed to responsibly advancing our products and services that use artificial intelligence and already provide resources and transparency on our approach to artificial intelligence and machine learning, all under the active oversight of our Board; and
- the scope of the requested report is extremely broad, could encompass disclosure of strategic plans and initiatives harmful to our competitive position, and would be premature in this developing area.

At Apple, we believe the measure of any great innovation is the positive impact it has on peopleʼs lives. Itʼs why we work every day to make our technology an even greater force for good. Today, our teams around the world work to infuse Appleʼs deeply held values into everything we make. That work can take many forms. But whether weʼre protecting the right to privacy, designing technology that is accessible to all, or using more recycled materials in our products than ever, we are always working to make a difference for the people we serve and the planet we inhabit.

Apple has a robust approach to addressing ethical considerations across our business operations, one that addresses the issues raised in the proposal. We believe itʼs important to be deliberate and thoughtful in the development and deployment of artificial intelligence, and that companies think through the consequences of new technology before releasing it, something weʼve always been deeply committed to at Apple. Social issues raised in the proposal, like discrimination, bias, and privacy, may be implicated by AI technologies, but are not unique to the application of AI. Accordingly, our existing guidelines, policies, and procedures already address the social issues raised, as described below.

Our approach to human rights: In all of our work, Appleʼs values are a driving force.
Weʼre deeply committed to respecting internationally recognized human rights in our business operations, and align our efforts with the business and human rights due diligence process set forth in the United Nations Guiding Principles on Business and Human Rights. As part of our processes, we conduct due diligence to identify risks and work to mitigate them. This includes identification of salient human rights risks across our organization through internal risk assessments and external industry‑level third‑party audits, as well as through communication channels maintained with rights holders and other stakeholders. In 2023, we published a Civil Rights Assessment report prepared by former U.S. Attorney General Eric Holder and his team at Covington & Burling LLP. The report reviews Appleʼs extensive efforts to respect civil rights and promote equity, diversity, and inclusion, and live by its core values, including accessibility, inclusion and diversity, and privacy. These efforts, many of which began years ago, are reflected in Appleʼs current policies and practices, which are detailed in Covingtonʼs report.

Our approach to transparency: Appleʼs world‑class machine learning and AI research team, led by our Senior Vice President of Machine Learning and AI Strategy, collaborates with teams across Apple to drive breakthrough advancements in machine learning. We have a dedicated Apple Machine Learning Research website where we provide meaningful visibility into our machine learning research and aim to make our products and services incorporating machine learning easy to understand. We also publish Human Interface Guidelines, including dedicated sections on Inclusion, Accessibility, Privacy, and Machine Learning, among others, to support developers in their work to build inclusive apps that put people first by prioritizing respectful communication and presenting content and functionality in ways that everyone can access and understand.
Further, we provide tools, documentation, sample code, and design best practices to help developers make their apps more accessible. In addition to our machine learning research website and our Human Interface Guidelines, Apple also reports extensively on our commitment to inclusion and diversity and privacy throughout our business. We believe these disclosures provide a robust level of transparency to assure stakeholders of our commitment to values‑driven development while balancing the need to protect the proprietary information that is foundational to our business.

Our approach to inclusive design: We strive to build products and services aligned with our values and human rights commitments, as noted in the Civil Rights Assessment report. To cite just a few examples, Apple is engaged in an ongoing process to make Siri a more inclusive and accessible feature, including by engaging socio‑linguist experts to improve speech recognition accuracy rates for users of different ethnic/racial, gender, and geographic backgrounds and working in partnership with Black and African American Vernacular English‑speaking volunteers. Additionally, in developing Face ID, Apple appreciated from the outset that facial recognition algorithms had been associated with divergent error rates across demographic groups and so worked with volunteer participants from around the world to include a representative group of people accounting for gender, age, ethnicity, disability, and other factors. And weʼre taking steps to advance equity in our camerasʼ person recognition features. As a result, the machine learning models used in camera technology aim to show similar performance across various age groups, genders, ethnicities, skin tones, and other attributes.

Our approach to privacy: We also believe that privacy is a fundamental human right, and weʼre constantly innovating to give users more transparency and control over their data.
We regularly engage with civil society representatives globally on various privacy and freedom of expression issues, including privacy by design and encryption, and our Privacy Policy and service‑specific privacy notices reflect our belief that privacy must remain a top priority in all that we do. Our management Privacy Steering Committee sets privacy standards for teams across Apple and acts as an escalation point for addressing privacy compliance issues. The committee is chaired by Appleʼs General Counsel, and its members include Appleʼs Senior Vice Presidents of Machine Learning and AI Strategy, Software, and Services, and a cross‑functional group of senior representatives from across the business.

The scope of the requested report is overly broad and could encompass disclosure of strategic plans and initiatives harmful to our competitive position. This proposal addresses our use of artificial intelligence across our “business operations.” The proposal does not focus on any specific novel use of AI at Apple and, in fact, references well‑established applications of software such as automation of systems. Broadly defined, the requested report could encompass every aspect of our business, including whether and how we use automated systems in, for example, product development and research, supply chain management, financial management and planning, efficient management of energy use throughout our physical plant and buildings, monitoring of cyber and physical security at our facilities, and coordination of employee benefit or other personnel programs, among a wide range of other aspects of our business operations. Beyond our business operations, due to its broad nature, the requested report would cover virtually every product and service Apple currently offers.
To cite a few examples, Siri™, which has been available for more than a decade, Personal Voice and Live Voicemail included in iOS 17, and life‑saving features like Fall Detection, Crash Detection, and ECG would simply not be possible without the use of artificial intelligence and machine learning. The broad scope of the proposal therefore weighs against preparing the requested report, especially as the proponent does not point to any specific use of AI at Apple that raises concerns. Not only would the report encompass every aspect of our operations and nearly every product or service we build, it could also require competitively harmful disclosure of confidential research and development activities.

The AI regulatory landscape is rapidly evolving. Apple has been and will continue to be deliberate and thoughtful in the development and deployment of artificial intelligence. However, this proposal is premature in asking for a dedicated report when the landscape is just starting to emerge and regulators around the world are actively engaged in new rulemaking.