
Algorithms at Work

  • Writer: Nikolaos Papageorgiou
  • Nov 9
  • 8 min read

Once, while hiking with a friend on Mount Hymettus in Athens, I asked him: "Aren't you afraid of all the 'consents' that the apps we use every day ask for? Of all the personal information that people voluntarily put on social networking sites?"

His answer puzzled me: "No, I have nothing to hide, why should I be afraid?" This argument, however reasonable it may sound, has also been the basis of many authoritarian regimes throughout history. We all have something to hide, from the most innocent details to our deepest secrets; privacy has been a fundamental human need since the dawn of civilisation.


The huge tech companies that developed and nurtured algorithmic and AI-enabled systems – including all software incorporating machine learning techniques, logic-based systems and other statistical computational methods (Adams-Prassl, 2022) – use our personal and behavioural data in exchange for their services and the comfort they offer. However, the consent button, with all its unintelligible terms in small print, is more an illusion than a real option; refusal automatically means exclusion from the means we use daily to navigate our modern lives, from paying the bills to finding that nice café and contacting other people. The technology itself is miraculous and has revolutionised the way we experience our world; from the introduction of the first iPad to today's ChatGPT, the evolution is beyond imagination and the possibilities unlocked seem limitless.




At TUI there was a board initiative to make the company digital. The desired outcome for our customers could be described as follows: "At the touch of a button, you can now book your 360° holiday experience, including flight, accommodation and holiday activities from local providers in the destination of your choice - all easily via your mobile device", and "During your holiday, you can use the same app to contact your representative, share your whereabouts, or get assistance in the unfortunate event of an emergency". Moreover, the application would provide the customer with information on the logistics of the trip, all arranged by the company. This approach was uniform across all countries, on all continents, where the company operated.

As an HR specialist, I was responsible for a project covering all of the company's offices in Greece - islands and mainland - as well as many subsidiaries and external partners in the UK and Europe. Apart from my colleagues in Athens, with very few exceptions, I never met in person any of the dozens of people I worked with on a daily basis. All the coordination, training and material sharing was done using digital communication tools and work platforms. Different groups had access to specific folders, and every action was registered by surname, so we knew who was doing what. I had total control of the project, and I could exercise it from the company's premises in Athens, from the comfort of my home, from the hotel where I was vacationing in Crete, or even from my mobile phone on a sunbed.

Technology has enabled decentralised HR structures and remote working. It has allowed fewer managers to manage more people and has 'elevated' the workplace from the desk to the cloud and from there to anywhere, redefining the boundaries between domicile and work. The dawn of algorithms is here, and traditional HR processes are being digitised (Adams-Prassl, 2022), leading to a hybrid model of 'human-machine' or, in many cases, 'machine-only' decision making. AI systems enable data-driven initiatives that increase an organisation's efficiency, adaptability and ultimately productivity (Shah et al., 2017). At the same time, however, concerns are being raised about the digitisation and commodification of our personalities (Zuboff, 2019), the new possibilities for tight control over the workforce, and the ever-increasing involvement of algorithms in critical management processes, which can lead to the dehumanisation of the production process (Ajunwa et al., 2017; Rosenblat et al., 2014) and the automation of pre-existing racial discrimination (Benjamin, 2019).


Capitalism evolves with technology; the industrial revolution was followed by the digital revolution, and now we are living in the age of surveillance capitalism, as Shoshana Zuboff aptly describes in her book of the same title. Our lives are monitored and, through datafication, our personalities are treated as commodities - assets that do not belong to us but to other people, who make a fortune from them. Tellingly, The New York Times is suing OpenAI and Microsoft over copyright, because millions of its articles were used, without the newspaper's consent, to train chatbots that now compete with the Times as a source of reliable information (The New York Times, 2023).

Silicon Valley trades in, shapes and nurtures our very existence; through governmentality and discipline it encodes norms and imposes uniformity and ownership; it mines our personal data to train its algorithmic and AI-enabled systems, which then come after us with aggressive marketing initiatives and predictive capabilities, anticipating our next move and shaping our behaviour, ultimately creating an era of certainty to which it holds most of the keys (Zuboff, 2019).

If nothing else, the Cambridge Analytica scandal has shown that our behavioural data has been traded in the shadows and then used for purposes that affect the functioning of democracy itself (Hu, 2020). The difference between us as individuals and the NYT as a prosperous organisation is that we do not have the same means and resources to protect our interests, privacy and freedom; however, individual actions such as algo-activism, together with legislative acts including the EU's GDPR and AI Act, can provide a shield against the intrusion of algorithmic surveillance into our personal and professional lives (Kellogg et al., 2020; Adams-Prassl, 2022).

Surveillance has a long history, from the era of slavery and colonialism to the technical and bureaucratic control of the 20th and 21st centuries; employers innovate to maximise the value extracted from workers, and workers resist to maintain their autonomy, dignity and identity (Kellogg et al., 2020). In Foucault's panopticism, all relevant parties have their own distinctive role and the smallest movements are monitored; all events are recorded in reports that pass from the syndics to the intendants to the mayor, forming a continuous hierarchical model that imposes discipline on the masses. Modern devices and tools offer new possibilities for a tighter, constant, ever-present form of control that tirelessly monitors the physical and digital trail of the workforce.

This strict form of control is a prevalent reality in the gig economy, where algorithmic management practices have replaced traditional HR processes, using self-learning algorithms to monitor, manage and control workers and the labour process (Duggan et al., 2020). On app-work platforms such as Deliveroo and Uber, HR professionals tend to be replaced by systems designers, data scientists, programmers and marketing specialists; HRM processes such as performance management have been replaced by online job rankings, and work assignments are made without face-to-face interaction (Duggan et al., 2020).

However, this approach raises critical questions about the relationship between employer and employee. The absence of a human manager, and of the positive impact such a relationship could have on the psychology of the employee, can seriously affect workers' morale and general wellbeing; as McAdam et al. (2009) describe, an appreciative approach to marginalised young people highlighted their potential, boosted their self-confidence and led them to achieve many of their professional goals, generating feelings of happiness and joy. On the contrary, the algorithmic nudges and penalty systems used to direct and punish - such as the deactivation period on Uber's platform in the event of 'wrong behaviour' - push workers to maintain above-average ratings at the expense of their mental and physical health (Duggan et al., 2020). Furthermore, the replacement of human managers and the pervasive control enabled by algorithmic management can lead to workers losing trust and confidence (Duggan et al., 2020; Rosenblat et al., 2014) and generate feelings of sadness and anxiety.
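To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of rating-threshold rule described above. The thresholds and names are hypothetical illustrations of the pattern reported by Duggan et al. (2020), not any platform's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical parameters - real platforms do not publish their thresholds.
DEACTIVATION_THRESHOLD = 4.6   # minimum acceptable average rating (assumed)
NUDGE_THRESHOLD = 4.8          # below this, the app starts sending warnings (assumed)

@dataclass
class Driver:
    name: str
    ratings: list[float]

    @property
    def average_rating(self) -> float:
        return sum(self.ratings) / len(self.ratings)

def evaluate(driver: Driver) -> str:
    """Apply a simple threshold rule: nudge first, then deactivate.

    No human reviews the decision; the worker only sees the outcome.
    """
    avg = driver.average_rating
    if avg < DEACTIVATION_THRESHOLD:
        return f"{driver.name}: deactivated (avg {avg:.2f})"
    if avg < NUDGE_THRESHOLD:
        return f"{driver.name}: warning nudge sent (avg {avg:.2f})"
    return f"{driver.name}: in good standing (avg {avg:.2f})"

for d in [Driver("A", [5, 5, 4, 5]), Driver("B", [5, 4, 4, 4])]:
    print(evaluate(d))
```

The sketch illustrates the problem rather than any solution: a single scalar drives the outcome, with no channel for context, explanation or appeal.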

Even in more conventional companies, we are seeing the realisation of algorithmic HRM outcomes such as CV screening based on text mining, compensation and appraisal driven by data processing of worker performance, and learning and development through algorithmic prediction of skills gaps (Meijerink, 2023). Amazon monitors its warehouse workers by tracking their movements and the time it takes to load and unload items from the docks via sensors in their tablets, and consistently uses this data for performance reviews and shift assignments (Rosenblat et al., 2014); LinkedIn's algorithm recommends the best potential matches to recruiters based on their organisation's needs, while employers give wellness companies access to, and deep insights into, their employees' personal data. Walmart, for example, pays Castlight Health, Inc. to assess employee data and nudge employees towards weight-loss programmes or suggest physical therapy instead of expensive surgery (Ajunwa et al., 2017).
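As a concrete illustration of the first of these outcomes, the following is a minimal sketch of CV screening based on text mining: it ranks candidates by the TF-IDF cosine similarity between each CV and a job description. The job text, the CVs and the choice of scikit-learn are invented for illustration, not a description of any vendor's system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented job description and CVs, for illustration only.
job_description = "data analyst with python sql and dashboard experience"
cvs = {
    "candidate_1": "python developer, sql databases, built reporting dashboards",
    "candidate_2": "warehouse supervisor, logistics and inventory management",
    "candidate_3": "data analyst experienced in sql, python and visualisation",
}

# Represent all texts as TF-IDF vectors over a shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *cvs.values()])

# Score each CV by cosine similarity to the job description (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for (name, _), score in sorted(zip(cvs.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Note what such a screener actually measures: vocabulary overlap, not competence. A strong candidate who describes the same skills in different words simply scores lower.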

What are the implications for the psychological contract of trust between employer and employee, and for employee autonomy and wellbeing, in the context of such initiatives? As described by Kellogg, Valentine and Christin, employers use algorithms to direct workers by restricting and recommending; to evaluate them by recording and rating; and to discipline the workforce by replacing and rewarding (Kellogg et al., 2020). However, it is important here to examine algorithmic management as an alternative to traditional HR practices and to theorise about the outcomes of both approaches for the productivity and mental health of the workforce. As Kellogg suggests, the above initiatives could be perceived by workers as disempowerment, manipulation, surveillance, discrimination, precariousness and stress; what combination of human-algorithm intervention could prevent such negative outcomes?

Algorithmic systems and their subsequent HRM outcomes (Meijerink et al., 2023) are inevitably associated with big data - large, unstructured and often unrelated data sets. Big data is complex to analyse (Kennedy, 2016), but once this is successfully done using statistical and machine learning techniques, it allows for the prediction of trends and the identification of patterns, enabling organisations to make quick business decisions and increase their productivity (Shah et al., 2017).
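A minimal sketch of this kind of pattern identification, on synthetic data: a logistic regression that 'discovers' an attrition pattern deliberately planted in invented workforce features (overtime and tenure). Everything here - the features, effect sizes and labels - is fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic workforce data: the features and the planted pattern are invented.
overtime_hours = rng.normal(5, 3, n).clip(0)
tenure_years = rng.exponential(4, n)
# Planted pattern: heavy overtime and short tenure raise attrition risk.
risk = 0.4 * overtime_hours - 0.5 * tenure_years + rng.normal(0, 1, n)
left_company = (risk > np.median(risk)).astype(int)

X = np.column_stack([overtime_hours, tenure_years])
X_train, X_test, y_train, y_test = train_test_split(
    X, left_company, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"coefficients (overtime, tenure): {model.coef_.round(2)}")
```

Because the pattern was planted, the model recovers it easily; on real HR data the same pipeline will just as readily 'recover' whatever regularities - legitimate or not - the historical records contain, which is exactly the concern taken up next.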

Big data in the HR context takes into account the different life experiences and values, socio-demographic characteristics, motivation levels and behavioural patterns of each employee, and feeds algorithmic systems that produce or recommend HRM actions. However, concerns have been raised regarding the reproduction of pre-existing racial inequalities through this bilateral relationship between big-data mining and algorithmic and AI-enabled systems. As Ruha Benjamin points out in her book Race After Technology, automation, despite appearing neutral or objectively correct, has the potential to perpetuate discrimination - an argument that goes against the prevailing notion that technology is incapable of bias.
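The mechanism Benjamin describes can be demonstrated in a few lines. In this sketch (entirely synthetic data, invented variable names), the protected attribute is removed from the training features, yet the model reproduces the historical bias through a correlated proxy such as a postcode:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Synthetic data: a protected group attribute and a correlated proxy
# (think postcode), plus genuine skill. All values are invented.
group = rng.integers(0, 2, n)            # protected attribute: 0 or 1
proxy = group + rng.normal(0, 0.3, n)    # strongly tracks group membership
skill = rng.normal(0, 1, n)

# Biased historical labels: past hiring penalised group 1, not just low skill.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the protected attribute - only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The 'neutral' model still hires the two groups at very different rates.
preds = model.predict(X)
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
```

Dropping the sensitive column is not enough; as long as the training labels encode past discrimination and any feature correlates with group membership, the automation faithfully perpetuates it.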

As briefly mentioned above, comprehensive legal frameworks are an effective safeguard against the digital commodification and surveillance of our personal and professional lives. The EU has been at the forefront of this: after the GDPR, which focused on the use and protection of personal data, the AI Act followed, in an attempt to examine and legally frame the use of algorithm-enabled systems and AI software. The AI Act classifies the use of AI systems for critical HR functions - such as recruitment and selection, CV screening and filtering, candidate assessment, and decision making on promotion or termination of employment relationships - as high risk (Adams-Prassl, 2022).

But when similar regulatory frameworks remain vaguer on other continents, concerns arise about the legal fortification of not only the workforce but humanity itself against the capabilities, and the practices, of AI and algorithm-driven systems and of those behind this technology. These concerns are only heightened when we see the world's most powerful nations competing in a race over who can integrate AI into their armed forces better, faster and more efficiently (The Economist, 2023). The war between Ukraine (backed by the EU and the US) and Russia is already being fought in terms of AI, robotics and satellite internet (The Guardian, 2023).

Whoever conquers the field of AI will have leverage over competing parties, and combined with the lack of commonly agreed legislation, the situation could derail into uncharted territory; ambition must be guided by a moral compass that allows humanity to flourish under the emerging possibilities of new technologies.


Nikos Papageorgiou

11.09.25
