Bosses Are Using AI to Decide Who to Fire

Though most signs indicate that artificial intelligence isn't actually taking anyone's jobs, employers are still using the tech to justify layoffs, outsource work to the global South, and scare workers into submission. But that's not all: a growing number of bosses are using AI not just as an excuse to downsize, but giving it the final say in who gets the axe.
That's according to a survey of 1,342 managers by ResumeBuilder.com, which runs a blog about human resources. Of those surveyed, 6 in 10 admitted to consulting a large language model (LLM) when making major HR decisions affecting their employees.
According to the report, 78 percent said they consulted a chatbot on whether to give an employee a raise, while 77 percent said they used one to determine promotions.
A staggering 66 percent said an LLM like ChatGPT helped them make decisions about layoffs, and 64 percent said they'd turned to AI for advice on terminations.
To make matters worse, the survey found that roughly 1 in 5 managers frequently let the LLM have the final say on such decisions, without any human input.
More than half of the surveyed managers used ChatGPT, with Microsoft's Copilot and Google's Gemini coming in second and third, respectively.
The numbers paint a bleak picture, especially given the well-documented problem of LLM sycophancy, the tendency of these models to generate flattering responses that reinforce whatever the user already believes. OpenAI's ChatGPT is notorious for this, to the point that the company was forced to address the problem with a special update.
Sycophancy is an especially glaring problem when ChatGPT alone is making decisions that can upend someone's livelihood. Consider a scenario in which a manager is looking for an excuse to fire an employee: the LLM can simply confirm the manager's preconceptions, effectively letting them pass the buck to the chatbot.
AI brownnosing is already having devastating social consequences. Some people, for example, have become convinced that LLMs are genuinely sentient, a belief that may be fueled by the "AI" branding, and have developed the so-called "ChatGPT psychosis."
People have suffered severe mental health crises involving ChatGPT, characterized by delusional breaks with reality. Though ChatGPT has been on the market for just under three years, it's already being blamed for divorces, lost jobs, homelessness, and, in some cases, involuntary commitment to psychiatric care facilities.
That's all without mentioning LLMs' talent for hallucination, a seemingly intractable problem in which chatbots spit out made-up information in order to provide an answer, even if it's completely wrong. And as LLM chatbots consume more data, they're becoming more prone to these hallucinations, meaning the issue is likely to get worse over time.
When it comes to potentially life-changing choices like who gets fired and who gets promoted, you'd be better off rolling dice; unlike with LLMs, at least you'd know the odds.
More on LLMs: OpenAI Admits That Its New Model Still Hallucinates More Than a Third of the Time