The development and widespread use of AI bring both risks and new business opportunities in the private and public sectors alike. Preparing proactively for risks and threats helps organisations implement their strategies more effectively and adapt flexibly to changing operating environments.
Successful adoption of AI can be supported by an efficient, organisation-wide governance model that focuses on identifying and minimising risks through various controls. The most significant challenges in using AI relate to privacy and transparency, bias in automated decision-making, copyright issues, manipulation and security threats. The aim of these controls is to ensure that AI is aligned with the organisation's goals and is developed and used responsibly, ethically and legally.
As part of its digital strategy, the EU has adopted the Artificial Intelligence Act (EU AI Act, AIA) to ensure that this innovative technology is used and developed responsibly and effectively. AI brings significant benefits to many sectors, such as healthcare, transportation, manufacturing and the energy industry, improving safety and efficiency. Under the regulation, which entered into force in August 2024, AI systems are assessed and classified according to the risk they pose to their users, and this classification determines the level of regulation they are subject to.
The goal of the regulation is to ensure that AI systems used in the EU are safe, transparent, traceable, equitable and environmentally friendly. The development and use of AI must also comply with the General Data Protection Regulation (GDPR), the General Product Safety Regulation (GPSR) and the Product Liability Directive (PLD). In addition, human rights principles such as non-discrimination must be taken into account, along with the Copyright Directive (DSM) and national legislation, to ensure that AI is used ethically and lawfully.
PwC's approach provides a comprehensive and practical solution for responsible AI governance, enabling organisations to harness the potential of AI safely and ethically. It is based on a comprehensive management framework that spans the areas described below.
A functioning governance model requires commitment from management, collaboration between experts and adequate resources across different areas of expertise. Below are some considerations for different organisational functions when developing AI or implementing AI applications; they should be adapted to each organisation's specific context and requirements.
The role of the organisation's management is to ensure that the development and implementation of artificial intelligence (AI) are aligned with the organisation's strategy and supported by sufficient resources, managed risks, stakeholder engagement, effective communication and training, and consideration of ethical and legal aspects.
It is important to understand how AI can support business needs such as automation, prediction and improving the customer experience. Business functions should ensure the ethical use of AI and manage the risks and impacts associated with its implementation. They should also manage the change, making sure that employees understand the benefits of AI and can use it safely and effectively.
The CISO should ensure that AI is developed and used safely and responsibly. They are responsible for the organisation's level of information security, vulnerability management, data protection and privacy, staff training, and compliance with relevant rules and standards.
The role of the CIO is to ensure that the development and implementation of AI are aligned with strategic objectives and supported by sufficient resources, adequate levels of information security and data protection, close collaboration, and continuous monitoring of progress.
The DPO should ensure that the development and implementation of AI comply with data protection legislation, and that the organisation's activities meet the requirements of the General Data Protection Regulation (GDPR).
The compliance function should ensure that the development and implementation of AI comply with applicable legislation and the organisation's internal rules. It is responsible for legal compliance, creating internal guidelines, contributing to risk management, reporting and documentation, and audits and inspections.
The internal auditor should develop new audit methods to verify the compliance of AI systems and AI-driven processes. This requires training and familiarity with new tools.
The CFO should consider the following aspects in AI development and implementation: costs and financial benefits, business strategy, risk management, financial forecasting and analytics, as well as regulations and compliance.
All employees who use AI applications should follow the guidelines for their use, understand the limitations of AI, and recognise that the content generated by an AI application may be inaccurate. Users should report any problems and bugs they encounter.
We help organisations grow, operate efficiently and responsibly, and report reliably in a constantly changing environment – whether you are a listed company, a family-owned business, a startup, a public sector actor or a non-profit organisation.
By investing in good governance of AI development and use, an organisation can realise the full potential of AI without compromising the ethics and safety of its activities. We offer comprehensive, tailored solutions that meet the specific needs of each organisation. As our client, you are supported by our extensive expertise in legal, risk management, data protection and information security services – globally, when needed.
Partner, Cybersecurity & Privacy Assurance Services, PwC Finland
Tel: +358 (0)20 787 7489