More and more companies are making generative AI mandatory in fundamental aspects of their operations, such as hiring and performance reviews, and are building team sizes and roles around what AI can and cannot do.
According to reporting from The Washington Post, some organizations hide their AI use for fear of backlash, while others advertise it.
Generative AI usage in the workforce seems to be here to stay, for now at least, so let’s take a closer look at how this tech is impacting hiring and business strategies, as well as an important question leaders should be pondering.
Snapshot of AI’s impact on the hiring process
Jobs most likely to be impacted by AI
What work is left for humans? Jobs built on repetitive tasks, such as data entry clerk, telemarketer, and cashier, are most likely to disappear in favor of AI. Jobs that require adaptability within a highly specific skill set, like electrical work or nursing, are safer.
In addition, AI agents are transforming software development work.
“In engineering, we’re probably in the final generation where you can go into a company with no AI coding expertise,” Box CEO and co-founder Aaron Levie said in an interview with The Washington Post.
AI is writing resumes and conducting job interviews
Job seekers encounter AI long before they’re hired. Some companies use generative AI avatars to screen applicants and conduct job interviews, while hiring managers are flooded with poor-quality resumes written by AI. Job seekers who are already disheartened by a faceless process may find themselves facing even more automation.
Companies that use AI hiring tools should keep an eye on the lawsuit that claims Workday’s AI-driven hiring software unfairly filters out applicants over 40.
AI-first strategies leading to staffing questions
In April, Duolingo CEO Luis von Ahn said the company would replace contractors with AI wherever feasible, evaluate AI usage during performance reviews, prioritize AI fluency in hiring, and open new roles only when automation couldn’t meet the need. His LinkedIn post about the initiatives sparked a backlash, with users cancelling subscriptions and saying the company was prizing AI over its human workers.
A few weeks later, von Ahn acknowledged, “AI is creating uncertainty for all of us,” but did not change the policies.
Executives are increasingly reluctant to hire people to perform tasks that generative AI can handle. Still, von Ahn told The Washington Post that Duolingo’s hiring pace remains unchanged despite the company’s expanded AI use.
Shopify has asked teams to demonstrate why AI could not fill an open role before opening it up to applicants. Employees in all roles are being asked how they can use AI to increase efficiency.
Some AI risks for business leaders to consider
Companies that lean too heavily on AI risk operational setbacks. For instance, fintech company Klarna recently began hiring gig workers to bolster a workforce that had been gutted in favor of AI. After a 34% headcount reduction, the company recognized it had overextended and brought back some human roles to improve customer service, according to Klarna spokesperson Clare Nordstrom.
In addition, generative AI still tends to produce inaccurate information and has a massive environmental impact.
How much responsibility should generative AI ‘employees’ have?
Generative AI is no longer new; organizations have had years to explore it, and college graduates entering the workforce today may have spent those years steeped in generative AI content and workflows in their personal and educational lives.
The question leaders should be considering is: How much responsibility will companies grant generative AI ‘employees’?