If I had the ear of one of our executives, here are some of the points I would make…
One of the first points I would make is that hiring an AI engine to replace employees is bad for morale. It disrupts the relationships and rapport the team has built. The group I am in has great synergy, and we fill in the holes in each other's presentations. We are more together than we are separately, and replacing one or two of us with mandated AI use would send part of our enthusiasm out the door with them. Going out for drinks after work would not be the same once many of us are gone.
I have also noticed that when one employee uses AI and another worker is forced to correct the lies and "hallucinations" it produces, the worker doing those remedial edits resents the coworker for turning in a poor-quality product. It creates a disconnect between our actual skills and what an AI ends up foisting on us. I aspire to be the best engineer I can, and when my work is evaluated rigidly and I become a clerical worker rather than a creative team member, I no longer see that aspiration as possible.

Something else will happen when we need to hire another worker. Any real person we consider won't be chosen for experience in our business but for whether they are an LLM whisperer who can coax (usually) adequate work out of the AI engine. That leads to another concern: the AI will not be aware of the proprietary skills we've developed. It might give me something that looks good from an outsider's point of view but is different from what we need.
Right now, using AI tools to replace employees is a fad that is hard to resist. Will we evaluate those tools as if they were a regular candidate? What will be our criteria for choosing one vendor over another? An AI may write much faster than we can, but our engineers do more than write reports and put together slide shows. Some of its explanations may be plausible, but correcting plausible English text into a quality product can be harder than getting it right the first time. It might take an expert and help make them a virtuoso, but it won't take an average worker and make them a superstar.
Although AI proponents talk about the impending creation of a superintelligence, or "artificial general intelligence," their boosterism obscures the fact that it doesn't exist, and that deception gives us unrealistic expectations. Would we want a coworker who lies and who isn't as intelligent as our combined team? After all, its intelligence is a convincing parlor trick: it can sound intelligent, use the right jargon, and look good in the number of words it can produce per day, but it won't pass other measures of value. It can also have a reverse Pygmalion effect on the employees who remain: supervisors who treat their people as less skillful than the AI will make them less capable workers. Our former synergy would be transformed into a defect rather than remaining what made us better than our competition.
Another risk is that we should not be surprised if the AI reveals our trade secrets and how we solve problems. It has no loyalty and can make as many harmful decisions as beneficial ones. The tendency will be for the AI's easily produced content to escape proper vetting: editing fully formed text becomes impractical, and the defects get passed on. The apparent productivity boost obscures the actual work needed to stay at the front. Releasing lies and chaos will harm the company's reputation.
Although talk of superintelligence may make it seem that we must start using it before the competition does, such a fad will not serve the company well. The media may be enamored with the big talk of AI proponents, but they are just salesmen like those in any other business, and what they promise is coming soon is not what they actually offer.