Cybercriminals have developed a sophisticated two-stage attack methodology that eliminates malicious links and attachments entirely from their initial contact. Recently identified by Fortra researchers, these attacks use email threads fabricated entirely by AI, mimicking internal communication styles and drawing on details readily available on the open web. Most importantly, they bypass security controls undetected.
By removing traditional indicators of compromise, attackers circumvent the signature-based detection, URL filtering, and sandboxing technologies that organizations rely on. The first stage deploys a fabricated email thread with one goal: establishing communication and bypassing technical controls through pure social engineering. Once employees respond to what appears to be legitimate internal correspondence, they are prompted to pay a fraudulent invoice.
The invoice looks legitimate. A senior team member from a mid-sized technology firm appears to be corresponding with a billing coordinator about an IT services invoice, complete with a 10% early payment discount. The email thread follows the natural cadence of business communication: an initial invoice, some clarifying questions, and a final payment authorization.
But this entire conversation never happened. The thread was constructed from scratch using publicly available employee information and AI-generated communication that perfectly mimics internal company correspondence. At Fortra, we've identified this new tactic in active attacks where LLMs produce error-free, natural communication that operates entirely outside traditional email security detection mechanisms. These attacks exploit the gap between technical filters and human judgment, delivering social engineering payloads that leave no technical footprint for security tools to analyze.
The Anatomy of a Conversation Hijacking Attack
1. Intelligence Gathering
These attacks begin with reconnaissance. Public platforms such as LinkedIn often offer a blueprint of a company's hierarchy, revealing who likely approves payments and who could plausibly make the requests. Threat actors study communication patterns and team structures, and identify the workflows they can imitate.
Open-source intelligence can be expanded through press releases, financial filings, and industry announcements that highlight vendor relationships and key projects. Past data breaches can even provide writing samples to help attackers mirror internal communication styles. The result is enough information to construct believable email threads that follow known processes.
2. Conversation Crafting
With reconnaissance complete, attackers script the fake thread. Emails are spaced over realistic timelines, with delays that match typical business response cycles. The tone of each participant is tailored to their role: executives use assertive, decision-focused language, while support staff communicate in longer, explanatory sentences. Where details are known, attackers even weave in references to company initiatives, policy changes, or vendor milestones, making the messages feel authentic. Email signatures, legal disclaimers, and formatting all mirror the company's brand.
3. Psychological Manipulation
The attackers exploit a combination of authority bias, confirmation bias, and social proof. Authority bias compels employees to follow requests from apparent superiors without question. Confirmation bias leads people to focus on familiar details such as names, project references, and internal lingo, all while overlooking potential red flags. Meanwhile, the presence of multiple individuals in the email chain creates artificial social proof, suggesting the decision has already been vetted. These messages succeed because they align with internal processes: when a fabricated thread shows a clear chain of approval, the transaction is processed like any other routine business request. External deception is transformed into what appears to be internal authorization.
The Technology Powering the Deception
Today's attackers use large language models to generate business-grade communication that mimics an organization's tone and workflow. These systems adapt to the style of a company's communications using publicly available data. AI-generated emails can maintain conversation continuity, replicate organizational jargon, and carry realistic timestamps that follow typical work patterns. This automation lets attackers target many organizations at once, each with a unique, customized thread.
In conjunction with this, automation tools can scrape websites, monitor social media profiles, and track organizational changes. Email signatures, branding, and formatting are reverse-engineered and cloned. Some recently spotted attacks have included highly customized metadata or email headers that further reinforce legitimacy. The result is a convincing forgery of internal communication, accomplished with minimal effort.
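One heuristic defenders can experiment with targets thread continuity: a genuine reply chain references Message-IDs the mail system has already seen, while a fabricated thread typically arrives as a single message quoting a history that was never actually delivered. The sketch below illustrates that idea in Python; the `seen_message_ids` set stands in for whatever message index a real mail platform exposes, and the heuristic is offered as one detection idea, not a description of Fortra's tooling.

```python
# Minimal sketch: flag a "reply" whose referenced Message-IDs were never seen
# by the local mail store -- one heuristic for spotting fabricated threads.
from email import message_from_string


def is_suspicious_thread(raw_email: str, seen_message_ids: set[str]) -> bool:
    """Return True if the message claims to continue a thread we never saw."""
    msg = message_from_string(raw_email)

    # Collect every Message-ID this email claims to be replying to.
    referenced: set[str] = set()
    for header in ("In-Reply-To", "References"):
        value = msg.get(header, "")
        referenced.update(tok for tok in value.split() if tok.startswith("<"))

    if not referenced:
        return False  # a genuinely new message, not a claimed reply

    # A legitimate internal reply chain should reference IDs we have on record.
    return referenced.isdisjoint(seen_message_ids)


# Example: an inbound "reply" that references a Message-ID we never stored.
raw = (
    "From: billing@vendor.example\r\n"
    "To: coordinator@corp.example\r\n"
    "Subject: RE: IT services invoice\r\n"
    "In-Reply-To: <fabricated-123@corp.example>\r\n"
    "\r\n"
    "Approved per the thread below, please process payment.\r\n"
)
print(is_suspicious_thread(raw, {"<real-456@corp.example>"}))  # True
```

On its own this check produces false positives (external senders legitimately start threads your system has never seen), so it is best treated as one signal among many rather than a blocking rule.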
How Organizations Can Defend Themselves
Mitigation starts with multi-channel verification. Organizations should establish procedures that require employees to confirm high-risk actions, such as financial transfers or password resets, through communication methods outside of email. A quick phone call, secure chat message, or internal ticketing system can introduce just enough friction to stop an attack. Time delays for large transactions and dual-approval policies provide further safeguards.
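As a rough illustration of how such a policy might be encoded, the sketch below gates payments on out-of-band approvals and requires dual approval above a threshold. The `Approval` record, channel names, and threshold are hypothetical placeholders, not any particular product's API.

```python
# Minimal sketch of a dual-approval gate for high-risk payments.
from dataclasses import dataclass

OUT_OF_BAND = {"phone", "secure_chat", "ticketing"}  # anything except email
DUAL_APPROVAL_THRESHOLD = 10_000  # illustrative cutoff for "large" transfers


@dataclass(frozen=True)
class Approval:
    approver: str  # who confirmed the request
    channel: str   # how they confirmed it: "phone", "email", ...


def may_process(amount: float, approvals: list[Approval]) -> bool:
    """Allow a payment only when verified outside email, with two distinct
    approvers for large amounts."""
    verified = [a for a in approvals if a.channel in OUT_OF_BAND]
    distinct_approvers = {a.approver for a in verified}

    if amount > DUAL_APPROVAL_THRESHOLD:
        return len(distinct_approvers) >= 2  # two different people, out of band
    return len(distinct_approvers) >= 1


# An email-only "approval chain" -- exactly what a fabricated thread supplies --
# is never sufficient, no matter how complete it looks.
print(may_process(25_000, [Approval("cfo", "email"),
                           Approval("ap_clerk", "email")]))        # False
print(may_process(25_000, [Approval("cfo", "phone"),
                           Approval("ap_clerk", "secure_chat")]))  # True
```

The design point is that email never counts as a verification channel: even a flawless-looking thread cannot satisfy the gate by itself.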
Information exposure management is also key. Where possible, organizations should audit employees' professional social media profiles, limit disclosure of team structures, and reconsider the level of operational detail included in public communications. Managing the visibility of approval hierarchies and limiting references to active projects can significantly reduce the intelligence available to attackers.
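A practical starting point is to audit your own public footprint the way an attacker would read it. The sketch below scans an organization's own pages for role- and workflow-related keywords; the URL list and keyword set are placeholders to be tailored to your organization, and example.com is used purely for illustration.

```python
# Minimal self-audit sketch: scan your own public pages for the role and
# workflow detail attackers harvest during reconnaissance.
import re
import urllib.request

PUBLIC_PAGES = [
    "https://www.example.com/about",
    "https://www.example.com/leadership",
]
EXPOSURE_KEYWORDS = re.compile(
    r"\b(approves?|accounts payable|invoice|procurement|controller|reports to)\b",
    re.IGNORECASE,
)


def audit_page(url: str) -> list[str]:
    """Return the exposure keywords found on a single public page."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return sorted({m.group(0).lower() for m in EXPOSURE_KEYWORDS.finditer(html)})


if __name__ == "__main__":
    for page in PUBLIC_PAGES:
        hits = audit_page(page)
        if hits:
            print(f"{page}: review wording around {', '.join(hits)}")
```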
Building Security-Conscious Cultures
True resilience against conversation hijacking does not come just from technology, but from cultural change. Organizations should normalize verification and foster an environment where challenging authority in the name of security is not simply allowed, but expected.
Standard security training should include simulations that go beyond generic phishing exercises. Employees need exposure to fabricated threads that look and feel like real internal emails. These exercises should address not only technical red flags but psychological triggers like trust, familiarity, and routine. Cross-functional awareness is also essential: a message that appears routine to one department may raise concerns when shared across teams. Encouraging open discussion about unusual requests can uncover discrepancies before damage is done.
The Path Forward
AI-amplified conversation hijacking is a powerful evolution in social engineering. These attacks are successful not because of obvious red flags, but because they exploit familiarity, process, and internal authority. Attackers are inserting themselves into trusted workflows and asking for nothing more than routine compliance.
Organizations must respond with layered defenses: enforcing multi-channel verification, training employees to recognize even subtle manipulation, reducing corporate intelligence exposure, and building a culture where additional security questions are encouraged. As AI continues to advance, so too will the ability to forge realistic conversation threads. The proliferation of writing assistance tools, from spell checkers to grammar platforms, is also creating increasingly uniform communication patterns, making it harder to distinguish authentic personal writing styles from sophisticated forgeries.
Organizations positioning themselves for success will be those that grasp these dual trends in threat evolution and internal tool adoption, treating communication integrity as a core operational risk alongside data protection and business continuity.