Artificial Intelligence (AI) is the order of the day and, in its many forms, is already shaping work processes and people management in organisations. Anyone who has followed its evolution in recent months has witnessed rapid and surprising progress.
In the field of recruitment, the use of AI has become widespread: writing job descriptions, interacting with candidates through chatbots, screening applications, and analysing and evaluating candidates' responses and behaviour. AI is also frequently used in performance management, whether in defining metrics and objectives, identifying development opportunities, designing reward programmes or selecting employees whose profile suits them for promotion, among other uses.
Faced with this rapid and inevitable irruption of AI into companies and society at large, two opposing strands of opinion and reaction have emerged in the media and on social networks.
One, the optimistic view, holds that the productivity gains from AI will benefit humanity as a whole. For example, the routine workload taken over by AI will free up time for organisations' people management teams to devote to more valuable and ultimately more human activities, such as monitoring and developing their employees' talents. Employees, for their part, will also have more time for more rewarding tasks and for developing their own talents.
At the opposite end of the spectrum is a pessimistic view, which fears the loss of many jobs that will not be fully replaced by others, or at least not filled by the same people. What should be done with these workers? Retraining will be possible for some of them, but hardly for all.
In addition to these concerns, the increasingly widespread use of AI raises pressing ethical questions, including in the context of talent management in organisations:
- Fully automated and opaque decision-making systems governing hiring and career progression;
- Biases in algorithms trained with historical data that perpetuate stereotypes and discrimination. For example, if the algorithm is trained on the basis of previous hiring decisions, it may systematically select candidates of a certain gender, ethnicity or age;
- Data protection and privacy: AI models rely on large volumes of data, which may be personal and sensitive and may be collected and used in opaque ways, or even without the consent of the people to whom they relate;
- Manipulating behaviour: AI can be used to influence attitudes and behaviour, including by creating disinformation and harmful content, such as manipulated images and videos (deepfakes), jeopardising people's autonomy.
AI technologies must therefore meet transparency criteria. To that end, the Labour Code enshrines a duty of information regarding AI systems, owed to the worker (and, where they exist, to the workers' committee and trade union delegates): information must be provided on the parameters, criteria, rules and instructions underlying algorithms or other AI systems that affect decisions on access to and retention of employment, as well as working conditions.
Where hiring or promotion decisions are concerned, the selection criteria and instruments used must be defined and documented from the outset, so that they are easy to explain and to audit. Using valid assessment techniques that relate to the requirements of the job is fundamental to guaranteeing objectivity and fairness between candidates. The Labour Code stipulates that decision-making based on algorithms and other AI systems cannot prejudice the right to equal opportunities and equal treatment with regard to access to employment, training, promotion or professional careers, and working conditions.
Studies show that using tests, questionnaires, exercises and interviews that assess skills and competences proven to be required for the job combats biases and helps to build a more competent and diverse workforce.
In addition to the obligations arising from national legislation, compliance with the Artificial Intelligence Regulation must also be considered. Under that Regulation, most AI systems used by companies in the employment context are classified as high risk, given their potential for harm, which underlines the need for caution when using them.
It is also essential that the organisation defines who is responsible for the decisions made and who is liable for any errors. In other words, there must be human oversight and contingency plans for errors and failures.
As AI develops and its use becomes more widespread, ethical issues become more complex and companies must take steps to avoid bias and discriminatory practices.
Isabel Paredes, Partner and Chief Psychologist, SHL Portugal
Helena Manoel Viana, Associate Lawyer, VdA
Published in Human Magazine on 29/11/2025