Why You Need a Comprehensive AI Policy
Artificial intelligence (AI) is becoming integral to many industries in Aotearoa, including social and health services. While AI offers benefits, it also poses significant risks that need to be addressed through comprehensive AI policies. That’s why we at the Policy Place have recently released our new AI policy for our online policy clients.
In this blog, we consider the importance of having an AI policy in social and health service agencies, the risks of going without one, and some of the key things to cover in an AI policy for community, social and health services. For our previous post on AI use in social and health services, see here.
The Rise of AI in Workplaces
Artificial intelligence is no longer a futuristic concept; it is actively shaping how organisations operate.
The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn, released in May this year, found that AI use is now pervasive in workplaces worldwide and that it delivers real benefits: time savings, efficiency gains and greater enjoyment of work.
However, the Report also identified a pervasive risk: in workplaces without an AI policy or other guidance, 78% of employees had taken things into their own hands and were bringing and using their own AI tools at work.
The risks of AI use without policies and guidance include:
- Data Security Risks: AI systems can be vulnerable to cyber-attacks, which can lead to data breaches and loss of sensitive information. Without an AI policy, staff may input personal information and sensitive organisational data.
- Ethical and Legal Risks: AI use can lead to ethical dilemmas and legal issues, such as unauthorised use of personal data, breach of copyright and AI-driven decisions that are biased and breach human rights.
- Operational Risks: Relying on AI without proper oversight can lead to operational inefficiencies, errors, and potential harm to clients.
- Cultural Risks: AI outputs may not be sufficiently responsive to diverse cultural contexts and the needs of different communities. Without proper AI policies and guidance, AI use risks undermining important cultural practices and values, particularly those protected by Te Tiriti o Waitangi.
The Importance of an AI Policy
An AI policy is the minimum starting point for a workplace wanting to address these risks:
- Ensuring Ethical Use of AI: An AI policy helps ensure that AI tools are used ethically and responsibly. This is crucial in social, community and health services, where decisions made by AI can significantly impact individuals’ lives and well-being.
- Protecting Client Privacy: An AI policy guides how staff should use AI in alignment with the Privacy Act 2020 and privacy policies. This is particularly important for social, health and community services dealing with highly sensitive and confidential data.
- Maintaining Accountability: Clear guidelines within an AI policy guide staff on how they may use AI in their decisions and their duty of reasonable care. This is particularly important in health and social services, where transparency and trust are paramount.
- Preventing Discrimination: An AI policy will set out the checks that staff must perform on AI-generated content before relying on it, and prohibit reliance on biased or unverified data.
- Honouring Te Tiriti o Waitangi: AI policies must recognise and protect Te Tiriti o Waitangi rights. This includes ensuring that AI use does not disadvantage the iwi and whānau Māori that health and community services work with, and that data sovereignty and cultural considerations are respected.
Strategies to Support an AI Policy
An AI policy is just the beginning for a workplace wanting to use AI. Like any policy, your AI policy needs to be backed by a strong implementation strategy that includes the following:
- Regular Audits and Assessments: Conduct regular audits of AI systems to ensure they operate as intended and comply with ethical standards.
- Training and Awareness: Provide training for staff on the responsible use of AI and raise awareness about potential risks and ethical considerations.
- Bias Mitigation Strategies: Implement strategies to identify and reduce biases in AI systems, e.g. data checking, surveys and, if affordable, bias detection algorithms.
- Robust Security Measures: Apply strong cybersecurity protocols to protect AI systems from threats and ensure the integrity of data.
- Transparent Decision-Making: Ensure through training and policy that staff responsibilities for AI use are clearly articulated, and AI-driven decisions are transparent and explainable.
- Cultural Safety and Te Tiriti: Use strategies like training, bias detection systems and iwi/community consultation to ensure that the rights of tangata whenua under Te Tiriti o Waitangi are respected and protected when AI is used.
Conclusion
AI brings benefits as well as risks, especially for the social, community and health services we work with. To get the most out of AI and help protect against the risks, an AI policy is a "must". It is arguably the beginning of a new policy era in which, in response to rapidly evolving technology, we need to revise and evolve policies at an equally fast pace.