
Are workers being punished financially for ignoring AI advice?

Workers are punished for ignoring AI recommendations and making their own decisions, even if the decision is justified and leads to a better outcome, according to research by Frankfurt School of Finance and Management and ESMT Berlin.

In fact, the researchers found that managers punished workers with lower bonus payments when they deviated from AI recommendations, showing that many organisations over-rely on AI and algorithms.

These findings come from research by Prof Dr Mirko Kremer, Professor of Supply Chain Management at the Frankfurt School, Francis De Véricourt, Academic Director of the Institute for Deep Tech Innovation (DEEP) at ESMT Berlin, and Hossein Nikpayam, a Postdoc in Operations Management at ESMT Berlin and Frankfurt School. 

The researchers wanted to examine how organisations are implementing AI-based systems to improve and speed up work processes, and the unintended consequences that come with regularly delegating tasks to AI and algorithms.

To do so, the researchers created a controlled lab experiment in which a model generated an AI recommendation and decision-makers had to choose whether to follow it or use their own judgement to make a different decision. Managers then paid decision-makers a bonus based on their decision; depending on the condition, managers could see all of the AI's recommendations, only one, or none.
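The incentive dynamic the experiment uncovers can be sketched in a few lines of code. This is a minimal illustrative model, not the researchers' actual experimental design: the function name, the base bonus, and the deviation penalty are all hypothetical numbers chosen to show how a flat penalty for overriding the AI can leave a worker worse off even when their override produced the better outcome.

```python
def manager_bonus(followed_ai: bool, outcome_good: bool) -> int:
    """Hypothetical bonus rule mirroring the reported pattern:
    deviating from the AI is penalised regardless of the outcome.
    All amounts are illustrative, not taken from the study."""
    base = 100            # assumed base bonus for a good outcome
    bonus = base if outcome_good else base // 2
    if not followed_ai:
        bonus -= 30       # flat penalty for overriding the AI's advice
    return bonus

# A worker who deviates and achieves a good outcome still earns less
# than one who follows the AI to the same good outcome.
print(manager_bonus(followed_ai=True, outcome_good=True))   # 100
print(manager_bonus(followed_ai=False, outcome_good=True))  # 70
```

Under a rule like this, the rational worker defers to the algorithm even when private knowledge says otherwise, which is exactly the over-reliance the study describes.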

The results revealed a heavy over-reliance on AI-supported systems when decision-makers used them for recommendations. Managers wanted decision-makers to follow the algorithms' advice even when that advice was sub-par or less likely to produce a positive outcome.

When decision-makers deviated from the AI's recommendations, even when their own knowledge led to a better decision and outcome, managers still punished them with lower bonus payments, despite the extra effort and critical thinking they had put into making the right call.

This behaviour from managers causes employees to overly rely on the AI’s recommendations and ignore their own expertise and intuition, even when these are superior to the AI, out of fear of being punished. The consequence of this is that decision-making is not improved under AI algorithms – it actually deteriorates – and workers simply shift all decision-making power to an algorithm or risk negative personal consequences.

“While businesses across industries are racing to implement AI algorithms to enhance efficiency and decision-making, it’s crucial to remember that these systems should not be relied upon completely,” says Dr Kremer. “Human expertise remains essential to navigate complexities, make nuanced judgements, and adapt to unexpected situations that AI might miss – especially in sectors like healthcare, aviation or autonomous vehicles, where blindly following AI advice can have serious consequences.”

The researchers say this work offers important insights for organisations that want to integrate AI systems into their decision-making processes. Firstly, transparency is crucial: managers need to be able to understand and evaluate the AI's recommendations in order to make informed decisions.

Secondly, there should be a higher level of trust in human expertise. Organisations should emphasise the importance of human expertise and intuition, and encourage employees to deviate from AI recommendations when those recommendations conflict with their own judgement.

And finally, clear communication structures are needed in organisations, so that employees can freely express their concerns and experiences with AI systems.
