Thursday, 20 Nov 2025
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • VIDEO
  • House
  • White
  • ScienceAlert
  • Trumps
  • Watch
  • man
  • Health
  • Season
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > Cisco Warns: Fine-tuning turns LLMs into threat vectors
Tech and Science

Cisco Warns: Fine-tuning turns LLMs into threat vectors

Last updated: April 6, 2025 7:44 am
Share
Cisco Warns: Fine-tuning turns LLMs into threat vectors
SHARE

Weaponized large language models (LLMs) that have been fine-tuned with offensive tradecraft are revolutionizing cyberattacks, prompting CISOs to rethink their strategies. These advanced models are capable of automating reconnaissance, impersonating identities, and bypassing real-time detection, thus enabling large-scale social engineering attacks.

Popular models like FraudGPT, GhostGPT, and DarkGPT are now available for as little as $75 a month and are specifically designed for malicious activities such as phishing, exploit generation, and credit card validation. Cybercriminal groups, as well as nation-states, are capitalizing on the revenue potential of these weaponized LLMs by offering them as platforms, kits, and leasing options. These models are increasingly being packaged and sold in a similar manner to legitimate SaaS applications, complete with dashboards, APIs, regular updates, and even customer support.

The rise of weaponized LLMs has blurred the lines between legitimate models and malicious tools, putting legitimate LLMs at risk of being compromised and incorporated into cybercriminal toolchains. Fine-tuning an LLM increases the likelihood of it generating harmful outputs, making it susceptible to attacks such as jailbreaks, prompt injections, and model inversion. Without robust security measures in place, fine-tuned models can quickly become liabilities for organizations, providing attackers with an opportunity to infiltrate and exploit them.

Research conducted by Cisco’s security team has shown that fine-tuning LLMs can compromise their alignment, particularly in industries like healthcare and finance where compliance and safety are paramount. Jailbreak attempts against fine-tuned models have been successful at much higher rates compared to base models, highlighting the increased attack surface that comes with fine-tuning.

See also  Elon Musk Turns on Starlink for Iranian Public After Islamic Regime Cuts Off Internet Service |

Malicious LLMs are now available on the black market for as little as $75 a month, providing cybercriminals with plug-and-play tools for various malicious activities. These models lack the built-in safety features of mainstream LLMs and offer APIs, updates, and dashboards that resemble legitimate SaaS products.

Additionally, the ease with which attackers can poison open-source training datasets for AI models poses a significant threat to AI supply chains. By injecting malicious data into widely used training sets, adversaries can influence the outputs of LLMs in impactful ways, leading to potential security breaches and vulnerabilities.

Furthermore, decomposition attacks can quietly extract copyrighted and regulated content from LLMs without triggering any guardrails. This poses a significant challenge for enterprises, especially those in regulated sectors like healthcare and finance, as it introduces a new compliance risk that extends beyond traditional regulations.

In conclusion, the evolving landscape of weaponized LLMs underscores the need for enhanced security measures and real-time visibility across IT infrastructures. Security leaders must recognize that LLMs are not just tools but represent the latest attack surface that requires proactive defense strategies to mitigate risks effectively.

TAGGED:CiscoFinetuningLLMsThreatTurnsvectorsWarns
Share This Article
Twitter Email Copy Link Print
Previous Article CBO says Medicare’s main trust fund to last until 2052 CBO says Medicare’s main trust fund to last until 2052
Next Article Leg Makeup Tips For A Swell Date Night Experience Leg Makeup Tips For A Swell Date Night Experience
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Popular Posts

Brazil vs Indiana Fever game player stats and box score for May 4

Indiana Fever continued their preseason engagements with a highly anticipated game that marked Caitlin Clark's…

May 4, 2025

A Tribute to Feminine Scents

The world of perfume is a captivating realm that holds the power to evoke memories,…

June 20, 2025

Feds arrest freed Jan. 6 defendant on gun charges from nearly two years ago — one day after Capitol riot charges were dismissed

Florida Man Pardoned by Trump for Capitol Riot Charges Arrested on Federal Gun Charges A…

January 22, 2025

Truth Behind Tom Cruise’s Face, Treatments and Single Life Revealed

Tom Cruise's Health and Wellness Routine Revealed Tom Cruise's appearance has been the subject of…

March 3, 2025

Princess Beatrice, Edoardo Mapelli Mozzi’s Relationship Timeline

Princess Beatrice and Edoardo Mapelli Mozzi's Fairytale Romance Princess Beatrice found her happily ever after…

July 17, 2025

You Might Also Like

Grok says Elon Musk is better than basically everyone, except Shohei Ohtani
Tech and Science

Grok says Elon Musk is better than basically everyone, except Shohei Ohtani

November 20, 2025
Lions have a second roar that no one noticed until now
Tech and Science

Lions have a second roar that no one noticed until now

November 20, 2025
Moss Survived 9 Months in The Vacuum of Space : ScienceAlert
Tech and Science

Moss Survived 9 Months in The Vacuum of Space : ScienceAlert

November 20, 2025
Lost Planet Theia that Created the Moon Came From the Inner Solar System
Tech and Science

Lost Planet Theia that Created the Moon Came From the Inner Solar System

November 20, 2025
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?