AI Policy and Procedures – issues for social and health services
As Artificial Intelligence (AI) rapidly evolves, health and social service organisations need AI policy and procedures to guide their ethical and responsible use of AI.
We derive many benefits from AI. In the social service and health sectors, these include support with decision-making and diagnostics, efficiency gains, improved record management and evidence-based practice.
But AI use also carries ethical risks, which vary across different AI applications.
If not addressed in organisational policies and procedures, these risks could threaten the heart and soul of social and health services. We therefore need fit-for-purpose AI policies and procedures to guide us.
AI policy and procedures
Key issues to think about and canvass in your AI policy include data management, transparency, roles and responsibilities, misinformation, and legal and regulatory compliance.
Used in health and other service areas, AI systems may collect large amounts of highly sensitive information about a person, which could be misused or used for non-consented or malicious purposes. The security of information collected and used through AI must therefore be addressed in policies and procedures, along with access to and sharing of that information.
Transparency in your AI policy
If we’re using generative AI for advice, diagnosis and to help make decisions that affect people’s lives, then it’s important that we understand the basis of the advice and information it provides.
The right to give or refuse informed consent is integral to quality care and service. We also want to provide person-centred care. Before relying on AI to help us with service provision, we should therefore understand, and be able to explain to those we serve, the criteria and information on which the AI is based. This should be reflected in our policies and procedures.
Other issues of transparency to address in your AI policy concern responsibilities for using AI, how the organisation uses AI in its services and activities, and the associated risks.
Roles and Responsibilities
AI isn’t for everyone. It’s unrealistic to expect everyone in your organisation to have a good grasp of it. However, with AI likely to play an increasingly important role in your organisation, it’s important to think about AI responsibilities and to include these in your AI policy. Key responsibilities to cover include AI policy review and AI risk management.
Misinformation safeguards in your AI policy and procedures
As we already know, it is becoming harder to distinguish fact from fiction as the technology to generate misinformation evolves so rapidly.
The risks are potentially disastrous. They range from reputational damage and loss of trust in individuals and organisations through to grievous harm when misinformation is relied on as fact.
AI policy addressing data security is crucial. But it’s not enough.
If we’re going to rely on generative AI, we need safeguards in our AI policy for AI output to be checked and verified.
The strategies will vary, depending on the nature of the AI information, but can include:
- monitoring and tracking to check that AI achieves its intended purpose
- monitoring error rates
- review of AI-generated advice by subject matter experts
- checking AI information against other sources known to be reliable and credible
- monitoring and evaluating feedback from clients about the impacts of AI
Without these checks, your organisation becomes highly vulnerable to the serious impacts of misinformation.
Keeping your AI policy and procedures fit for purpose
AI technology evolves quickly, which makes it challenging to keep our policies and procedures fit for purpose.
Social and health services are required to regularly review and update their policies. At the Policy Place we support our members through regular reviews and updates of their policies on a two- or three-year cycle.
However, with your AI policy and procedures, it may be necessary to prescribe more frequent reviews and updates.
Given the rapid rate of change, we could easily think we should just wait and see before we launch into an AI policy. But don’t be fooled. There’s nothing to wait for.
We might not get it right the first time, or even the second time, with our policy. But we can change and evolve our policies and procedures as we gain more understanding of AI systems and their risks.