Imagine a situation where an AI tool screens hundreds of job applications in minutes and suggests the strongest candidates for interview, or optimises a complex shift rota while taking into account the needs and preferences of thousands of employees. This is no longer wishful thinking on the horizon; it is already reality in many organisations. According to research, 52% of Finnish organisations use AI solutions in HR tasks. While adoption is still fragmented, the potential is enormous. AI can streamline recruitment, onboarding and offboarding, ease shift planning and task allocation, support performance management, and produce better workforce analytics.
But AI is only as strong as the people who use it. To realise the benefits responsibly and sustainably, HR teams and managers need training and clear ground rules. At the same time, the bar for employers is rising. The use of AI raises legal, ethical and practical questions that cannot be solved with an AI strategy slide deck alone. The law is adding obligations to the use of AI, most recently the EU’s AI Act, which sets requirements both for providers of AI applications and for those who use them.
Bringing AI into use: needs, risks and cooperation
Before deploying an AI system in HR, employers must take care of several important issues. The first step is to identify the need the AI is intended to address and to assess the related risks. Labour and data protection laws in particular require that the impacts of new technology on employees are assessed in advance. If, for example, an AI-based recruitment support tool is introduced, the processing of job applicants’ personal data sits at the very core of the system. The employer must identify and justify why personal data is processed, how and to what extent it is used, and what changes the AI system may require to current practices or to how applicants are informed. Data protection law requires a risk assessment of personal data processing (a data protection impact assessment) before new technology is introduced, whether or not AI is involved. In practice, HR AI systems almost automatically mean that risks to employees’ privacy must be identified and appropriate safeguards defined in advance. Fully automated decision making in recruitment, such as screening applications without any human involvement in the final decision, is generally prohibited under data protection rules, so a human recruiter must remain part of the process.
Employers also have cooperation obligations when developing their operations by introducing new technology. Every organisation with at least 20 employees must engage in dialogue with staff to safeguard employees’ opportunities to influence matters that concern them. Deploying AI-based solutions in HR clearly falls into this category and should be discussed with staff in good time. Employers with more than 50 employees are also subject to a specific, lightened change negotiation obligation when new technology is introduced. If AI is expected to reduce the need for labour or materially change job duties, broader change negotiations must be conducted with staff before deployment. All these cooperation processes must take place at the right time, in other words before procurement decisions are made, so that the requirements of the Act on Co-operation within Undertakings are met.
Occupational safety law also applies to the introduction of AI in workplaces. The core idea is that the employer must identify work-related hazards and harms and respond to them in advance. AI brings new dimensions to traditional safety thinking: what hazards and burdens might follow from deploying AI, and how should they be prevented? For example, learning to use new technology can place mental strain on employees, and concerns about one’s rights can cause stress when machine intelligence is involved in HR and supervisory work. The employer must assess these risks as well and ensure appropriate induction, wellbeing and support through the change.
It is widely recognised that AI use challenges non-discrimination at work. Under the Non-Discrimination Act, employers must not treat employees or applicants differently on discriminatory grounds such as age, gender or other personal characteristics. Because AI learns and draws inferences from the data it is fed, it can internalise biases that lurk in that data. This could mean, for instance, that a recruitment algorithm favours candidates based on gender rather than merit. If most previous hires have belonged to a particular gender, AI may treat that as a model of “success”. Eliminating discriminatory bias in AI is difficult because algorithms are often opaque to users and their decision making is hard to explain. In line with the current Government Programme, a public-sector research project has been launched in Finland to identify and prevent discrimination risks related to AI.
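To make the mechanism concrete, the sketch below shows one simple way an organisation might surface skewed outcomes in historical screening data: comparing selection rates between groups. This is a minimal illustration, not a method required by Finnish or EU law; the data is invented, and the four-fifths threshold is a US heuristic used here purely as an example flag.

```python
# A minimal sketch: compare selection rates between groups in
# hypothetical historical screening data. The 0.8 ("four-fifths")
# threshold is a US heuristic used only as an illustrative flag.
from collections import Counter

# Hypothetical records: (group, advanced_to_interview)
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

applicants = Counter(group for group, _ in records)
advanced = Counter(group for group, ok in records if ok)

rates = {g: advanced[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

A check like this does not prove or disprove discrimination, but it can reveal where historical data might teach an algorithm the wrong lesson.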
The EU AI Act tightens requirements step by step
The European Union has stepped firmly into the regulation of AI. The EU AI Act entered into force in the summer of 2024 and brings new requirements to AI use. Since February 2025, the Act has required organisations to build AI literacy among their staff. AI literacy means that employees are able to critically evaluate AI outputs and to use AI responsibly and appropriately.
More is coming from August 2026, when the Act’s core risk-based obligations take effect. There are four risk categories: prohibited, high risk, limited risk and minimal risk. This is a significant milestone for employers using AI because many systems acquired to support HR processes are classified as high-risk AI systems under the Act. High risk brings higher legal requirements for employers who use such systems. Compliance with the AI Act is backed by hefty administrative fines that can reach into the millions depending on company size.
Certain applications are entirely prohibited. These include tools used for recognising employees’ emotions that analyse, for example, intentions or job satisfaction. Likewise, using biometric identifiers to classify people based on ethnicity, political opinion, religion or sexual orientation is not allowed. Nor is social scoring based on personal characteristics if it leads to detrimental treatment for the person, such as limiting or preventing career progression. Covert manipulation or the exploitation of vulnerabilities is also prohibited.
In the terminology of the Act, an employer who procures a ready-made AI solution for HR is typically the deployer. It is also possible, however, for an organisation to modify a general-purpose AI system for its own purpose, in which case the employer may become a provider under the Act. The distinction matters: providers have far broader legal duties than deployers, including ongoing quality assurance, technical documentation, conformity assessment and detailed regulatory reporting. For this reason, employers should as a rule prefer ready-made applications designed for HR use and operate them precisely in line with the manufacturer’s instructions and intended purpose.
Big risks, bigger responsibilities
What kinds of AI use are considered high risk in an HR context? Automated decision making and profiling based on personal characteristics are always high risk. According to the Act, high-risk tools include systems that affect access to work, employment conditions or career progression, and decision making about job performance. Systems used in recruitment, in decisions on employment terms and career development, or in termination of employment are classified as high risk. In the same category are systems that allocate tasks based on behaviour, personality or other personal traits, as well as systems for monitoring and evaluating employee performance during employment. These are all situations in which AI directly affects people and how they are treated at work, which makes careful risk identification essential.
The line is not always straightforward, however. The uses listed above can be considered minimal risk if AI use does not cause significant harm or danger to employees’ health, safety or fundamental rights and does not materially influence decisions about them. A system that screens applications and recommends suitable candidates to a recruiter would most likely be a high-risk application. A tool that only classifies and transfers applications between systems without influencing who advances, or that detects anomalies in decision making without intervening in the decision itself, would fall into the minimal risk category. Interactive AI tools such as HR chat assistants fall into the limited risk category, where employers must inform users about the use of AI.
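To illustrate how these tiers relate to the examples above, here is a deliberately simplified, hypothetical triage helper. The questions and logic only approximate the cases discussed in this article; actual classification under the AI Act always requires a case-by-case legal assessment.

```python
# A hypothetical, heavily simplified triage of HR AI use cases into
# the AI Act's risk tiers. For illustration only; not legal advice.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

def triage_hr_ai_use(
    emotion_recognition: bool,              # e.g. inferring job satisfaction
    influences_employment_decisions: bool,  # recruitment, terms, promotion, termination
    significant_impact: bool,               # material effect on health, safety or rights
    interacts_with_users: bool,             # e.g. an HR chat assistant
) -> RiskTier:
    if emotion_recognition:
        return RiskTier.PROHIBITED
    if influences_employment_decisions and significant_impact:
        return RiskTier.HIGH
    if interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV screener that recommends candidates to a recruiter
print(triage_hr_ai_use(False, True, True, False))    # RiskTier.HIGH
# A tool that only routes applications between systems
print(triage_hr_ai_use(False, False, False, False))  # RiskTier.MINIMAL
```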
When an employer deploys a high-risk AI system, the Act requires several things. First, the system must be used properly, in accordance with its instructions and intended purpose. Second, a responsible person or team must be appointed to oversee the AI. Those responsible must have appropriate competence, training and authority, together with the resources to do the job. Employers must pay particular attention to data governance, ensuring that the information the organisation feeds in is relevant and sufficiently representative for the purpose, so that the system does not produce misleading results. The user organisation must also react to risks during use, report observed errors or biased decision making to the system’s provider and to the regulator, and cooperate with the supervisory authority where necessary.
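As one illustration of what data governance can mean in practice, the sketch below checks whether input data roughly matches a known reference population, for example the organisation’s actual workforce composition. The field names, shares and tolerance are assumptions made for this example.

```python
# A minimal sketch of a representativeness check on input data.
# Fields, reference shares and the tolerance are hypothetical.
from collections import Counter

def representation_gaps(records, reference, field, tolerance=0.10):
    """Flag groups whose share in the data deviates from the
    reference population by more than `tolerance` (absolute)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (expected, actual)
    return gaps

# Hypothetical example: input data vs. known workforce shares
data = [{"dept": "sales"}] * 70 + [{"dept": "engineering"}] * 30
workforce = {"sales": 0.5, "engineering": 0.5}
print(representation_gaps(data, workforce, "dept"))
# -> {'sales': (0.5, 0.7), 'engineering': (0.5, 0.3)}
```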
Transparency is also emphasised in the Act. Employees must be informed when an AI system is introduced and, if they are subject to AI-driven decisions, they have the right to an explanation of a decision that affects them. Last but not least, employers must retain the system’s logs for at least six months where these logs are under their control. Logs record, for example, how the system has made decisions. Retention and traceability are essential if disputes need to be resolved later.
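As a rough illustration of the retention duty, the sketch below assumes decision logs are stored as timestamped files under the deployer’s control and never deletes anything younger than the six-month floor. The path and file format are hypothetical, and many organisations will want to keep logs considerably longer, for instance while a dispute remains possible.

```python
# A minimal sketch of a retention routine for AI decision logs.
# The six-month minimum comes from the AI Act; the path and the
# JSONL format are assumptions for this example.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)  # at least six months; keep longer if needed
LOG_DIR = Path("/var/log/hr-ai/decisions")  # hypothetical location

def prune_expired_logs(now=None):
    now = now or datetime.now(timezone.utc)
    for log_file in LOG_DIR.glob("*.jsonl"):
        modified = datetime.fromtimestamp(log_file.stat().st_mtime, timezone.utc)
        if now - modified > RETENTION:
            log_file.unlink()  # only ever delete past the retention floor
```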
As noted, many laws already require similar action from employers. For example, employee information can be handled appropriately through the cooperation process, and data protection legislation already requires appropriate handling of personal data. The EU’s AI regulation complements this by introducing entirely new, concrete AI obligations. For high-risk systems, employers must secure log retention and ongoing, adequate human oversight throughout the system’s lifecycle.
Users and data at the heart of your AI strategy
While legislation sets the framework for responsible AI use, success is ultimately determined by people. A company can craft an ambitious AI strategy, but if employees lack the skills, willingness or confidence to use the system, the benefits may fail to materialise. Trust in the system is decisive and, in decisions affecting personnel, it is downright critical. Studies show that concerns about one’s rights or about the intended use of the system directly affect whether employees accept AI as part of their daily work. Trust is also eroded by a perceived loss of influence and by scepticism about the system’s true capabilities.
Employers should therefore invest in the most open and transparent possible deployment and in comprehensive staff communication. Integrating AI into HR is a change process that calls for traditional change management and learning. New technology can initially burden both the users and those affected by its decisions, depending on roles and readiness. That is why leadership should listen, provide support and, above all, communicate the change. Later disagreements and potential disputes are best prevented by investing in competence and AI literacy.
Usability is a key factor in a successful AI investment. The so-called “shadow AI” phenomenon is well known. If the employer’s tools are seen as cumbersome or ineffective, employees may start using external AI solutions instead of the organisation’s own tools. This can squander the benefits of the investment and create real data protection and information security risks if data is handled in uncontrolled channels. Alongside AI literacy, organisations should invest in proper user training so that everyone can make the most of the new systems. It is also important to create clear rules for the use of external AI tools, such as general chatbots or other services, in work. This protects personal data, trade secrets and other confidential information.
The central truth of the AI era is that AI is only as reliable as the data it uses. Alongside big data, the focus must be on your own HR data. If it is incomplete or inaccurate, AI will inevitably draw wrong or imprecise conclusions. In the worst case, an employer may inadvertently discriminate against an employee or applicant if structural bias has crept into the data over time. The EU AI Act explicitly requires data quality and reliability in high-risk HR systems. In practice this means that collecting, updating and cleaning HR data needs constant attention so that AI-based decisions are grounded in representative, truthful and meaningful information. This is not an entirely new principle, because for more than twenty years Finnish employers have been allowed to process only employee data that is necessary for the employment relationship under the Act on the Protection of Privacy in Working Life.
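As an illustration of what continuous attention to data quality might look like in practice, the sketch below flags incomplete and stale HR records. The schema, field names and one-year review threshold are assumptions made for the example, not legal requirements.

```python
# A minimal sketch of routine HR data quality checks on records
# represented as dictionaries. The fields and the one-year review
# threshold are hypothetical.
from datetime import date

REQUIRED_FIELDS = ("employee_id", "role", "start_date")  # hypothetical schema
MAX_STALENESS_DAYS = 365  # assumption: records reviewed at least yearly

def quality_issues(record, today=None):
    today = today or date.today()
    issues = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    last_reviewed = record.get("last_reviewed")
    if last_reviewed and (today - last_reviewed).days > MAX_STALENESS_DAYS:
        issues.append("stale: not reviewed in over a year")
    return issues

record = {"employee_id": "E-1024", "role": "", "start_date": date(2021, 3, 1),
          "last_reviewed": date(2023, 5, 10)}
print(quality_issues(record, today=date(2025, 6, 1)))
# -> ['missing role', 'stale: not reviewed in over a year']
```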
In closing
AI’s vast potential will surely be harnessed more broadly in support of HR in the years ahead. Although its use can raise legal and ethical challenges, AI can at its best make work more meaningful and HR and supervisory work more equal, more individualised and more efficient. The key is to implement AI in a planned way so that the full benefit is realised while employees’ rights are safeguarded. Ultimately, people are at the centre of it all: the users who turn strategy into reality in day-to-day work. Organisations that invest genuinely in their people’s technical and ethical competence, and in easy-to-use tools, will be in a strong position in the age of AI.

Kaisa Salo, Counsel
040 168 1418
