The TUC has launched a new taskforce to look at the “creeping role” of artificial intelligence (AI) in managing people at work. The launch comes as a new TUC report, Technology managing people: the worker experience, reveals that many workers have concerns over the use of AI and technology in the workplace:
- One in seven (15%) say that monitoring and surveillance at work has increased since COVID-19.
- Six in ten (60%) say that unless carefully regulated, using technology to make decisions about people at work could increase unfair treatment in the workplace.
- Fewer than one in three (31%) say they are consulted when any new forms of technology are introduced.
- More than half of workers (56%) say introducing new technologies to monitor the workplace damages trust between workers and employers.
Management by algorithm
The report says the use of new workplace technologies, including AI, has accelerated during the pandemic. The AI recruitment market is now forecast to be worth nearly $400 million by 2027, and AI is increasingly being used as a tool to manage people. This includes selecting candidates for interview, day-to-day line management, performance ratings, shift allocation and deciding who is disciplined or made redundant.
The TUC says that AI-powered technologies are currently being used to analyse facial expressions, tone of voice and accents to assess candidates’ suitability for roles, and that employers are using AI to analyse team dynamics and personality types when making restructuring decisions. The union body warns that, left unchecked, AI could lead to greater work intensification, isolation and questions around fairness.
When the TUC surveyed workers about their experience of being managed by AI, many described a sense of loneliness and pressure. One worker reported their working life had become “increasingly robotic, alienating, monotonous and lonely”. Another said that “going to work is not enjoyable anymore as you are scrutinised and watched over constantly”.
New taskforce
The taskforce will bring together experts from trade unions and the legal world to develop new proposals for protecting workers from “punitive” forms of performance management by AI and other types of new technology.
The group’s objectives will include:
- Enabling collective bargaining on the use of technology and data at work.
- Achieving more worker consultation on the development, introduction, and operation of new technologies.
- Empowering workers and trade unions with technical knowledge and understanding to organise and negotiate better deals for workers.
Early next year, in partnership with AI law experts Robin Allen QC and Dee Masters, the taskforce will publish a legal report on how the needs of workers and trade unions should be recognised in the use of AI at work.
Treating workers with dignity
The TUC says workers must share in the gains of new technology and not be robbed of their dignity at work, but the new report warns that global corporations such as Amazon and Uber are driving advances in the use of AI to monitor workers and set them more demanding targets.
The study shows that a third (33%) of those employed on insecure contracts feel that their activities at work are monitored at all times.
TUC General Secretary Frances O’Grady said:
“Worker surveillance tech has taken off during this pandemic as employers have grappled with increased remote working. Big companies are investing in intrusive AI to keep tabs on their workers, set more demanding targets – and to automate decisions about who to let go. And it’s leading to increased loneliness and monotony.

“Workers must be properly consulted on the use of AI, and be protected from punitive ways of working. Nobody should have their livelihood taken away by an algorithm. As we emerge from this crisis, tech must be used to make working lives better – not to rob people of their dignity.”