AI – the problem child of the working world? (BBC Worklife)


Transparency, predictability and traceability - values that play a weighty role in the relationship between employers and employees: Who gets the job? Who does it best? Who gets paid what for their performance? Who has to go? Transparency is also a seal of quality for a modern corporate culture, and leadership must model it both internally and externally.

What does my study Werte in der Arbeitswelt 2020 (Values in the Working World 2020) say about this? Employees from the DACH region value transparency highly, yet they feel that transparency carries no weight with their employers. An average corporate culture, accordingly, is not characterised by transparency (the reasons for this were not surveyed in this quantitative study). Recent financial scandals such as Wirecard and Commerzialbank Mattersburg, or cases of mismanagement in the public sector (e.g. the Abfallwirtschaftsverband Liezen), seem to confirm the study participants' verdict: our companies still have a long way to go to establish a culture of transparency.

The working world currently has to come to terms with a new player: artificial intelligence. How disruptive this new technology is, how it affects employment relationships, and what it has to do with values such as transparency is examined in the following article by the BBC Worklife editorial team (25 March 2021).

 

AI at work: Staff 'hired and fired by algorithm'

The Trades Union Congress (TUC) has warned about what it calls “huge gaps” in UK employment law over the use of artificial intelligence at work.

 

The TUC said workers could be “hired and fired by algorithm”, and new legal protections were needed. Among the changes it is calling for is a legal right to have any “high-risk” decision reviewed by a human.

TUC general secretary Frances O’Grady said the use of AI at work stood at “a fork in the road”.

“AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work - like who gets hired and fired.

“Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy,” she warned.

Many workplaces already use automated decision making for simple tasks. For example, Uber assigns driving jobs to its drivers automatically, by computer, and Amazon is known to use AI monitoring systems to watch its staff in its warehouses. And many firms already use an automated system with no human oversight in the first stage of the hiring process, to narrow the field.


But as AI becomes more sophisticated, the fear is that it will be entrusted with more serious, high-risk decisions, such as analysing performance metrics to figure out who should be first in line for promotion – or for being let go.

That can happen even when a human is involved, a TUC report warns, thanks to automated decision making.

“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the authors write. “Sometimes the human decision making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision what to do with the output from the machine.”

The TUC’s report, written with the aid of employment rights lawyers and the AI Law Consultancy, argues that the law has failed to keep pace with rapid progress in AI in recent years.


Discrimination by algorithm has been well-documented in recent years, often as an unintentional side-effect of using systems that fail to account for racial bias. One high-profile example is facial recognition technology, which has in the past been trained to recognise white faces more easily than those from other backgrounds. Such problems led IBM to abandon some of its efforts with the technology last year, labelling it as “biased”.

The TUC also pointed to recent reports of allegations from delivery drivers for Uber Eats, who claimed they had been fired because the facial recognition software was unable to recognise their faces. That led to drivers with 100% ratings and thousands of deliveries under their belts being fired for failing to complete an ID check, the affected drivers claimed. Uber denies this, saying a human review is always involved before it drops drivers from its platform.

The authors of the report for the TUC, Robin Allen and Dee Masters from Cloisters law firm, said while AI could be beneficial, “used in the wrong way it can be exceptionally dangerous”. “Already important decisions are being made by machines,” the pair said in a joint statement.

“Accountability, transparency and accuracy need to be guaranteed by the legal system through the carefully crafted legal reforms we propose. There are clear red lines, which must not be crossed if work is not to become dehumanised.”

 

Click here for the original article, including a video.

Want to read more about AI recruiting? Click here.

Want to read more about transparency? Click here.