Policy document

Artificial Intelligence Policy

Published 10 July 2025

1. Policy Statement

Maidstone Borough Council’s approach to the use of AI is governed by five key principles. These principles underpin how the Council will use AI, ensuring that it contributes positively to innovation and does not directly or indirectly contribute to discriminatory practices. Its use must be transparent, and AI must not be used without human accountability.

The purpose of this policy document is to provide a framework for the use of several types of Artificial Intelligence (AI), including decision-making AI and generative AI products such as Microsoft Copilot, ChatGPT, Google Gemini or other similar tools that may be used by Council employees, contractors, developers, vendors, temporary staff, and consultants.

This policy is designed to ensure that the Council’s use of AI is responsible and ethical, complies with all applicable laws and regulations, and complements the Council’s existing information and security policies. The pace of development and application of generative AI (GenAI) is such that this policy will be in a continual state of development.

2. Principles

The Council’s AI Policy and use are governed by five key principles. The principles clarify the Council’s expectations around responsible AI use and describe good governance at all stages of its use – from project planning to ongoing development and use.

  1. Safety, Security and Robustness
    The Council’s use of AI should be robust, secure and safe. This policy and its supporting processes set the conditions of use and put in place safeguards to ensure AI is used appropriately and does not pose unreasonable safety and/or security risks.
  2. Transparency and Explainability
    The Council is committed to transparency and responsible disclosure regarding its use of AI. To this end, it should provide meaningful information, appropriate to the context of use. This should demonstrate and help foster a general understanding among staff of the benefits of AI as well as the associated risks, and provide reassurance that the Council’s decision-making processes retain human agency and oversight.
  3. Fairness and Inclusivity
    The use of AI must reflect the Council’s legislative and democratic responsibilities. This includes ensuring privacy and data protection, and that its use is non-discriminatory and contributes positively to equality, diversity and inclusion.
  4. Accountability and Governance
    All staff have a level of accountability with regard to the use of AI, as outlined in this policy, its supporting processes, its principles and the governance structure.
  5. Contestability and Redress
    Where appropriate, staff and impacted third parties (e.g., residents) should be able to contest the use of AI where it directly contributes to an outcome that is harmful or creates a material risk of harm. Clear routes (including informal channels) should be easily available and accessible.

3. Governance

Good governance ensures that AI technologies can be used in a way that aligns with ethical principles, values and legal requirements. It provides a framework for decision-making, accountability, and transparency, mitigating potential risks and maximising the benefits of AI. Robust governance structures help to foster public trust, promote responsible innovation and safeguard users.

To mitigate potential risks and maximise the benefits of AI, it is essential to prioritise data protection and privacy through robust governance.

AI Impact Assessment

Before using any AI tool, staff must first complete the Responsible Artificial Intelligence Impact Assessment (RAIIA).

Data Protection

If any kind of personal or confidential information will be entered into an AI tool or system, staff must first complete a Data Protection Impact Assessment (DPIA). Any DPIA should describe the nature, scope, context and purposes of any processing of personal data. It will need to make clear how and why AI will be used to process data, including:

  • How you will collect, store and use data.
  • The volume, variety and sensitivity of data.
  • The nature of your relationship with individuals.
  • The intended outcomes for individuals and wider society, as well as for you.

The DPIA should show evidence that alternative solutions presenting less risk (if any exist) but achieving the same purpose have been considered. The reasons why they were not chosen should be fully documented.

Governance measures should also be in place to ensure effective oversight of the use of AI systems, with clear lines of accountability established. Responsible service areas and/or boards should take steps to consider, incorporate and adhere to the AI principles and introduce the measures necessary for their effective implementation. AI should be explicitly referenced and included (as applicable) in all new services and service reviews.


Roles, Responsibilities and Tasks

Role: Deciders (WLT)

Responsibilities: Strategic Planning; Policy Development

Tasks:

  • Define the strategic direction of AI initiatives
  • Ensure alignment with the organisation’s goals and objectives
  • Set the core principles
  • Make final decisions on AI governance policies

Role: Advisors (Project lead; Data Protection Officer; Legal; Equalities lead; Risk management lead)

Responsibilities: Compliance Oversight; Ethical Assessment; Risk Assessment

Tasks:

  • Advise the board on compliance issues and potential legal implications
  • Provide guidance on ethical implications related to AI technologies
  • Identify risks, oversee controls and mitigations

Role: Recommenders (Information Governance Board; Digital and Transformation Board)

Responsibilities: Technical Expertise; Implementation Strategies

Tasks:

  • Provide technical expertise
  • Advise on AI technologies to use, methodologies and best practice
  • Recommend implementation strategies
  • Align technical aspects of AI projects to the organisation’s goals and capabilities

Role: Execution Stakeholders (Project lead; Information Governance Board; Digital and Transformation Board)

Responsibilities: Project Implementation; Continuous Improvement; Stakeholder Engagement

Tasks:

  • Deliver AI projects within the framework of AI governance
  • Manage feedback and insights, and ensure continuous improvement
  • Engage end users and the community to ensure that AI meets requirements

4. Use

This policy applies to all staff using any AI tools, whether through Council owned devices or personal devices used for Council activities (refer to the Council’s Acceptable Use Policy regarding use of personal devices for Council work purposes).

These tools can be embedded in applications such as email clients, productivity tools or video conferencing for transcription and summarisation.

AI must be used in a manner that promotes fairness and avoids bias, to prevent discrimination and promote equal treatment. Specifically, AI must not be allowed to solely determine which customers should have access to services. Humans must be involved in such decision-making processes where needed, and there must be an appeal process for any automated or AI-informed decisions. This process will be undertaken by the Information Governance Team.

Staff may use AI for work-related purposes if they adhere to this policy. This includes tasks such as generating text or content for reports, emails, presentations, images and customer service communications.

Particular attention should be given to Governance, Vendor practices, Copyright, Accuracy, Confidentiality, Disclosure and Integration with other tools.

5. Vendors

Any use of AI technology in pursuit of Council activities should be done with full knowledge of the policies, practices, terms and conditions of its vendors. This includes understanding the vendor’s approach to data privacy, security, ethics, and environmental and social impacts.

The Council should consider the vendor’s:

  • Ability to provide clear and understandable explanations for the AI model’s decisions where appropriate.
  • Approach to addressing potential biases in any AI data and algorithms it uses.
  • Commitment to protecting sensitive data and ensuring the security of any AI system.
  • Adherence to ethical principles and guidelines for AI deployment.
  • Commitment to reducing the environmental impact of AI deployment.
  • Efforts to address social inequalities and promote inclusive growth through AI, such as ensuring AI systems are not used to perpetuate discrimination or disadvantage marginalised groups.
  • Ability to contribute to the regeneration of ecosystems and natural resources through AI-enabled solutions, such as optimising energy consumption or improving resource management.

By incorporating these considerations into the procurement process, the Council can ensure that vendors contribute to a more equitable, sustainable and inclusive future.

The LGA has produced a Guide to responsible AI procurement.

Responsible service area/process: Procurement, Project Lead, Information Governance, Risk Assessment.

7. Accuracy

AI can generate text that appears highly realistic, even when it contains inaccuracies or biases. This is because systems are trained on huge datasets that may include both fictional and factual information. As a result, it is important to fact-check any content produced by AI.

All information generated by AI must be reviewed and edited for accuracy, bias, and ethical considerations prior to use. Users of AI are responsible for reviewing output and are accountable for ensuring the accuracy and ethical soundness of AI-generated output before use and/or release. Where response-generation options are provided and it is relevant, staff must ensure that accuracy is selected over creativity.

The Council must ensure, via its governance of AI use, that it is regularly reviewing and assessing the output of AI for signs of bias and taking steps to mitigate it.

If staff have any doubts about the accuracy or ethical implications of information generated by AI, they should consult with the appropriate service area for further investigation.

8. Confidentiality

When using AI, staff should prioritise the confidentiality and privacy of personal and sensitive information. Any data entered into an AI tool must be handled in accordance with applicable data privacy laws and regulations, including GDPR.

Confidential and personal information must not be entered into a public AI tool. This is because the information will then enter the public domain and may be used for further training of the publicly available tool. This would amount to a data breach. Staff must follow all applicable data privacy laws and organisational policies when using AI. For example:

  • Staff must not use an unauthorised AI tool to write a letter to a customer containing any personal details, for example ‘Mr A N Other at 123 Acacia Avenue’, as that data could be ingested and kept by the AI for reuse.
  • Staff must not use AI apps on personal phones to record and summarise work meetings, or to use translation services, unless within the managed area of a personal device that is controlled by the Council.
  • Staff must not upload spreadsheets full of customer data for AI analysis.

If staff have any doubt about the confidentiality of information or what will happen to the data they enter, they should not use that AI tool. Confidential or personal data should only be entered into an AI tool that has been built or procured specifically for the Council’s use, where the data entered is confined to the Council’s sole use and use of that tool has been specifically sanctioned for that purpose by the Information Governance Team. For example, using Microsoft Teams with a Council login to transcribe and summarise meetings is authorised. However, using a free tool downloaded to a personal phone or added to a Teams meeting as a plug-in to transcribe a work meeting may not be authorised and could constitute a data breach (refer to the Council’s Acceptable Use Policy regarding use of personal devices for Council work purposes).

9. Social Impact and Equality

Artificial intelligence is a fast-developing technology. While there are significant opportunities, there are also significant risks of discrimination. Staff must consider the potential equality benefits of using AI, the risks it may pose to equality, and how they can reduce any risks. In doing so, the Public Sector Equality Duty (PSED) should be considered at all times. The PSED applies even if staff are:

  • Commissioning someone outside of the organisation to develop the AI for them
  • Buying an existing product
  • Commissioning a third party to use AI on their behalf.

Completion of an Equality Impact Assessment (EqIA) will evaluate the systems in place and consider whether any historic biases are influencing the data sets used to train the AI and, therefore, the output of the AI decision-making process.

When conducting an EqIA, staff should consider how the use of AI may affect people with different protected characteristics.


10. Ethical Use and Disclosure

AI must be used ethically and in compliance with all applicable legislation, regulations and Council policies. Staff must not use AI to generate content that is discriminatory, offensive, or inappropriate. Staff must be aware and informed that any information and data brought together for research purposes forms part of an evidence base that supports the Council’s decision-making process and must not be taken at face value. All assumptions or conclusions must be checked and made independently of AI by the responsible officer to ensure that they are non-discriminatory and without political bias.

If there are any doubts about the appropriateness of using AI in a particular situation, staff should consult with their manager or Information Governance Team.

Content produced via AI must be identified and disclosed as containing AI-generated information.

Footnote example:

Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content.

11. Risks

Use of AI carries inherent risks for the Council and any third-party vendors. These associated risks are outlined in the above sections of this policy: copyright; accuracy; confidentiality; social impact and equality; and ethical use and disclosure.

A comprehensive risk assessment should be conducted, via a Data Protection Impact Assessment and DDaT assessments, for any project or process where use of AI is proposed. Risk assessments should consider potential impacts, including legal compliance, bias and discrimination, security (including technical protections and security certifications), and data sovereignty and protection.

AI may store sensitive data and information, which could be at risk of being breached or hacked. The Council must assess the technical protections and security certifications of an AI tool before use. If staff have any doubt about the security of information input into AI, they should not use AI.

Human accountability for any project or decision that has been developed or informed by the use of AI is paramount, and this must be made clear across all associated and supporting processes.

13. Data sovereignty and protection

While an AI platform may be hosted internationally, under data sovereignty rules information created or collected in the originating country will remain under jurisdiction of that country’s laws. The reverse also applies. If information is sourced from AI hosted overseas, the laws of the source country regarding its use and access may apply. AI service providers should be assessed for data sovereignty practice by any organisation wishing to use their AI.

14. Compliance

Any violations of this policy should be reported to the Council’s Information Governance Team or senior management. Failure to comply with this policy may result in disciplinary action, in accordance with the Council’s Human Resources policies and procedures.

15. Review

This policy will be reviewed periodically and updated as necessary to ensure continued compliance with all applicable legislation, regulations and organisational policies.

16. Acknowledgement

By using AI, staff acknowledge that they have read and understood these guidelines, including the risks associated with the use of AI.