Nordia News

The AI Act continues its gradual entry into force – a practical checklist for employers

By Annamari Männikkö
Published: 20.01.2026 | Posted in Insights

The use of artificial intelligence (“AI”) in recruitment and employment management has grown significantly in recent years, and with it the need for clear regulation. Against this background, the European Union’s AI Act introduces new obligations for employers that apply on a phased basis. Although most of the obligations will not apply until August 2026, employers should start bringing their operations into line with the regulation’s requirements now.

Find out where AI is used in the workplace

The first step is to get an overview. AI is no longer used only in recruitment, but also in employee evaluations, performance monitoring, work shift planning, working time monitoring and other employment-related decision-making. Because AI systems are developing at a rapid rate, the range of applications can be very broad. It is important for employers to identify all the systems in use, understand the purposes for which they are used, and recognize what kind of impact they may have on employees.

Assess whether the AI systems in question are high-risk

According to the AI Act, the majority of AI systems used in working life fall into the category of high-risk systems. These include, in particular, systems that affect employees’ career development, livelihood, or rights. For example, in decisions on recruitment or promotion, AI can produce discriminatory outcomes if the input data is incorrect or biased. The power imbalance between employer and employee further underlines the need for particular caution.

Inform and involve staff in good time

Existing Finnish cooperation legislation may already require technological changes to be discussed with staff. From August 2026, the AI Act will also introduce an explicit obligation to inform staff representatives and affected employees before a high-risk AI system is introduced. Because this information is provided within the framework of national legislation, in practice it may also mean initiating change negotiations in some situations.

Ensure and maintain AI literacy

Since February 2025, employers have been obliged to ensure that employees who use AI systems have sufficient AI literacy. This is not a matter of in-depth technical knowledge, but rather an understanding of how AI works, the risks and limitations associated with its use, and the ethical and legal issues it raises. The required skills depend on each employee’s tasks and must also be maintained appropriately.

Take other relevant legislation into account

The use of AI does not take place in a regulatory vacuum. Employers must also take into account, among other things, equality and non-discrimination legislation and workplace data protection rules. Automated decision-making is subject to specific restrictions, and in many cases it is advisable to carry out a data protection impact assessment (DPIA). Transparency and lawful data processing, including an appropriate legal basis for processing personal data, are key principles.

Build effective internal processes for the use of AI
    1. Is oversight in place? The use of AI systems requires an effective governance model and human oversight. The employer must appoint responsible individuals who have sufficient expertise and the capability to monitor the operation of AI systems and to intervene in the results they generate. In addition, the employer must ensure that AI is used in accordance with the system supplier’s instructions and that appropriate technical and organizational safeguards are in place.
    2. Does the AI system have access to the necessary data? AI works on the basis of the data it uses, which must be of high quality, appropriate and up to date. Incomplete or incorrect data can skew the algorithm and lead to inappropriate decisions. The employer must therefore ensure the quality of the data and evaluate it regularly.
    3. Is the documentation in order? Careful documentation is a key means of demonstrating the legality of the use of AI. Employers should keep, among other things, impact assessments, documents relating to dialogue with staff, agreements with AI suppliers, training materials, and information on usage monitoring. Data logs generated by AI systems must be retained for at least six months.
    4. Is risk management properly organized? The operation of AI systems must be monitored and tested regularly to ensure that they do not produce discriminatory or incorrect results. Employers should draw up written guidelines for the use and monitoring of AI and for the archiving of documentation. The employer should also ensure that there are mechanisms in place for reporting anomalies, as serious incidents must be reported immediately to the AI system provider and the competent authority.

Remember that the employer has the ultimate responsibility

AI does not transfer responsibility away from the employer. For example, if the algorithm used in work shift planning systematically favors certain employees by giving them better shifts, if seemingly neutral selection criteria in practice discriminate against employees with families, or if the resulting schedules do not meet the requirements of working time legislation, the employer remains responsible for the decisions made. Simply pointing to how the algorithm works is not enough; the employer must be able to demonstrate, if necessary, that it has acted in accordance with the law. In practice, this is impossible without a sufficient understanding of how the systems in use work.

In conclusion

The AI Act will require many employers to make both technical and organizational changes in the workplace. At the same time, it challenges employers to rethink the balance between management rights, efficiency, and employee privacy. The August 2026 deadline is approaching fast, and proactive preparation will significantly reduce legal and operational risks. With expert support, the introduction and practical application of AI can be carried out in a controlled and lawful manner. We are happy to assist employers in all matters related to AI, employment law, and data protection.

Read more about our services for Employment Law.

Contact us

Annamari Männikkö
Attorney, Partner, Helsinki
annamari.mannikko@nordialaw.com
+358 40 164 8989
