Candidates Are Using AI to Cheat Interviews in Real Time, and Most Companies Are Not Ready
Candidates are using AI to fake interview answers in real time. Most hiring processes were not built to catch it. Here is what actually works now.
A recruiter at RedBalloon recently shared what her screening calls look like in 2025. Candidates showing up in bathrobes. Wet hair. One person started doing yoga mid-interview. And the ones who looked polished? Some of them were outsourcing their answers to AI tools in real time.
This is not a one-off horror story. It is a signal. And if you run a hiring pipeline of any meaningful size, you need to take it seriously.
Fortune reported the trend as part of a growing pattern that recruiters across industries are flagging. Nearly five years into the hybrid and remote work era, remote interview professionalism has not just plateaued. It has degraded. But the bathrobe problem is the easy part. The AI problem is the one that should keep you up at night.
The Real Problem Is Not Pajamas. It Is Fabricated Competence.
Let's separate the two issues. Sloppy interview etiquette is a filtering problem. If someone shows up looking like they rolled out of bed, you move on. That costs you five minutes.
But a candidate who sounds articulate, gives structured answers, and hits every keyword on your scorecard while quietly feeding your questions into ChatGPT or a real-time coaching tool? That is a different animal entirely. That candidate might make it through your phone screen. Your second round. Maybe even your final interview if it is also remote.
You will not know until they are 60 days into the role and cannot perform without the crutch.
For sales and operations leaders running high-volume hiring, this is an integrity problem baked into the top of the funnel. Most companies have not updated their process to account for it.
Your Interview Process Was Designed for a Pre-AI World
Think about how most enterprise hiring still works. Recruiter screen over Zoom. Hiring manager interview over Zoom. Maybe a panel. Maybe a case study sent in advance with a presentation over Zoom.
Every single one of those touchpoints is vulnerable to AI assistance. A candidate can have a second screen open. They can use an earpiece. They can paste questions into a tool that generates polished responses in under three seconds.
The infrastructure of remote interviewing was built for convenience and speed. It was not built for verification. The evaluation frameworks most companies rely on are behavioral questions, STAR format responses, and competency rubrics. These are exactly the kind of structured output that large language models are optimized to produce.
You are essentially running a test that AI was trained to pass.
What Actually Works Now
Some companies are already adapting. Here is what the smarter ones are doing.
Live problem solving with no prep time. Give candidates a scenario in real time and ask them to walk through their thinking out loud. Not a rehearsed pitch. Not a take-home assignment. A live, messy, unscripted problem. AI can generate polished answers, but it cannot replicate a genuine thought process unfolding in conversation.
Mandatory camera on, screen share off. Watch for eye movement patterns. Candidates reading from a second screen have a tell. Their gaze drifts consistently to the same spot. Trained interviewers can catch this.
In-person final rounds. Yes, it is more expensive. Yes, it is less convenient. But for roles where competence matters, the cost of a bad hire dwarfs the cost of a plane ticket. Some companies are mandating at least one face-to-face interaction before extending offers.
AI detection in recruiting tech. This is early but growing fast. Tools like HireVue and others are exploring features that flag anomalous response patterns, unusual pauses consistent with text generation, and other behavioral signals. The tech is not perfect yet. But the market is moving.
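To make the "unusual pauses" signal concrete, here is a minimal sketch of one such heuristic. Everything in it is an assumption for illustration: the function name, the thresholds, and the logic are invented for this example, not taken from HireVue or any vendor. The idea is simply that pauses before answers which are both long and unusually uniform look more like waiting on generated text than like human thinking, which tends to vary a lot from question to question.

```python
from statistics import mean, stdev

def flag_suspicious_pauses(pauses, min_mean=2.5, max_cv=0.25):
    """Illustrative heuristic: flag a pause pattern as anomalous when
    pauses are both long and unusually uniform across questions.

    `pauses` is a list of seconds between each question ending and the
    candidate starting to answer. The thresholds are placeholder values
    for this sketch, not validated cutoffs from any real product.
    """
    if len(pauses) < 3:
        return False  # too few data points to judge a pattern
    m = mean(pauses)
    # Coefficient of variation: spread relative to the mean. Human
    # pause lengths usually vary widely; near-constant delays are odd.
    cv = stdev(pauses) / m
    return m >= min_mean and cv <= max_cv

# Consistently long, near-identical delays -> flagged
print(flag_suspicious_pauses([3.1, 3.0, 3.2, 2.9]))  # True
# Varied, mostly short delays -> not flagged
print(flag_suspicious_pauses([0.8, 2.5, 1.1, 4.0]))  # False
```

A real detector would combine many signals (gaze, audio, typing sounds, answer phrasing) and be validated against labeled interviews; a single timing heuristic like this would produce plenty of false positives on thoughtful candidates.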
This Is a Process Problem, Not a Culture Problem
It is tempting to frame this as a generational thing or a remote work culture failure. That misses the point entirely. Candidates are rational actors responding to incentives. If the system allows AI-augmented answers with no detection mechanism, some percentage of candidates will use them. That percentage is going up, not down.
The fix is not to complain about professionalism. The fix is to redesign the process so that authentic capability is what gets measured. That means changing the format of interviews. Updating evaluation criteria. Training hiring managers on what AI-assisted responses look and sound like. And yes, being willing to spend more per hire to protect the quality of your pipeline.
The companies that figure this out first will not just hire better. They will build a structural advantage in talent quality that compounds over years. The ones that do not will keep wondering why their new hires look great on paper and fall apart in practice.
That gap is already widening. The question is which side of it you are building on.