Monday, 23 Mar 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • House
  • ScienceAlert
  • VIDEO
  • White
  • man
  • Trumps
  • Season
  • star
  • Watch
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > Cisco Warns: Fine-tuning turns LLMs into threat vectors
Tech and Science

Cisco Warns: Fine-tuning turns LLMs into threat vectors

Last updated: April 6, 2025 7:44 am
Share
Cisco Warns: Fine-tuning turns LLMs into threat vectors
SHARE

Weaponized large language models (LLMs) that have been fine-tuned with offensive tradecraft are revolutionizing cyberattacks, prompting CISOs to rethink their strategies. These advanced models are capable of automating reconnaissance, impersonating identities, and bypassing real-time detection, thus enabling large-scale social engineering attacks.

Popular models like FraudGPT, GhostGPT, and DarkGPT are now available for as little as $75 a month and are specifically designed for malicious activities such as phishing, exploit generation, and credit card validation. Cybercriminal groups, as well as nation-states, are capitalizing on the revenue potential of these weaponized LLMs by offering them as platforms, kits, and leasing options. These models are increasingly being packaged and sold in a similar manner to legitimate SaaS applications, complete with dashboards, APIs, regular updates, and even customer support.

The rise of weaponized LLMs has blurred the lines between legitimate models and malicious tools, putting legitimate LLMs at risk of being compromised and incorporated into cybercriminal toolchains. Fine-tuning an LLM increases the likelihood of it generating harmful outputs, making it susceptible to attacks such as jailbreaks, prompt injections, and model inversion. Without robust security measures in place, fine-tuned models can quickly become liabilities for organizations, providing attackers with an opportunity to infiltrate and exploit them.

Research conducted by Cisco’s security team has shown that fine-tuning LLMs can compromise their alignment, particularly in industries like healthcare and finance where compliance and safety are paramount. Jailbreak attempts against fine-tuned models have been successful at much higher rates compared to base models, highlighting the increased attack surface that comes with fine-tuning.

See also  Son turns on his lawmaker dad for using 'Epstein loophole' to avoid jail for allegedly touching kids: 'Inexplicable trauma'

Malicious LLMs are now available on the black market for as little as $75 a month, providing cybercriminals with plug-and-play tools for various malicious activities. These models lack the built-in safety features of mainstream LLMs and offer APIs, updates, and dashboards that resemble legitimate SaaS products.

Additionally, the ease with which attackers can poison open-source training datasets for AI models poses a significant threat to AI supply chains. By injecting malicious data into widely used training sets, adversaries can influence the outputs of LLMs in impactful ways, leading to potential security breaches and vulnerabilities.

Furthermore, decomposition attacks can quietly extract copyrighted and regulated content from LLMs without triggering any guardrails. This poses a significant challenge for enterprises, especially those in regulated sectors like healthcare and finance, as it introduces a new compliance risk that extends beyond traditional regulations.

In conclusion, the evolving landscape of weaponized LLMs underscores the need for enhanced security measures and real-time visibility across IT infrastructures. Security leaders must recognize that LLMs are not just tools but represent the latest attack surface that requires proactive defense strategies to mitigate risks effectively.

TAGGED:CiscoFinetuningLLMsThreatTurnsvectorsWarns
Share This Article
Twitter Email Copy Link Print
Previous Article CBO says Medicare’s main trust fund to last until 2052 CBO says Medicare’s main trust fund to last until 2052
Next Article Leg Makeup Tips For A Swell Date Night Experience Leg Makeup Tips For A Swell Date Night Experience
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Popular Posts

KFC chickens suffering ‘on rise’

KFC urged to stop using 'Frankenchicken' breeds by animal welfare charity In a recent animal…

November 12, 2025

Tyler Perry Donates To Support Families Impacted By SNAP Cuts

Leading the pack is Kendrick Lamar, who received a staggering 11 nominations, including Album of…

November 10, 2025

Trump rips into the country’s past leaders and makes sweeping promises

By JENNIFER SMITH, DAVID ROBERTS, and EMILY WATSONNEW YORK (AP) — During President Donald Trump's…

January 20, 2025

Best CD rates today, January 16, 2026 (up to 4% APY return)

Are you looking for a secure place to store your savings and earn a higher…

January 17, 2026

20 Best Black Leggings for Women in 2026, Endorsed by Vogue Editors | Vogue

Silhouette: FlaredFabric: Nylon, elastaneChoosing the best black leggings is crucial for building a versatile wardrobe.…

December 24, 2025

You Might Also Like

Xiaomi 17 vs 17 Ultra Review: 3 Reasons to Buy the Regular
Tech and Science

Xiaomi 17 vs 17 Ultra Review: 3 Reasons to Buy the Regular

March 23, 2026
AI Use Cases Across Industries Guide in 2026
Tech and Science

AI Use Cases Across Industries Guide in 2026

March 23, 2026
Magnetic Fluid Injected Into The Heart May Prevent Strokes, Scientists Think : ScienceAlert
Tech and Science

Magnetic Fluid Injected Into The Heart May Prevent Strokes, Scientists Think : ScienceAlert

March 23, 2026
Can future astronauts be put into comas for space travel like in Project Hail Mary?
Tech and Science

Can future astronauts be put into comas for space travel like in Project Hail Mary?

March 22, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?