
    Claude AI Exploited In Extortion Attacks Targeting Hospitals & Governments: Here's What Went Down


Anthropic has disclosed that its generative AI assistant, Claude, was used in one of the most egregious instances of AI being weaponised by criminals. The company says it identified a hacker who ran a "vibe hacking" campaign targeting at least 17 organisations, including healthcare providers, emergency services, and government agencies.

    What Actually Happened?

Allegedly, the attacker relied on Claude Code, Anthropic's agentic coding assistant, to automate reconnaissance, steal credentials, and infiltrate networks.

Claude was also used to advise on which stolen data to prioritise and even to draft visually alarming ransom notes designed to pressure victims into paying six-figure sums.

Other victims were threatened with having their confidential information leaked to the public.

Anthropic says it acted promptly to shut down the operation, banning the accounts involved, notifying law enforcement, and deploying faster automated screening systems.

The company did not share full details of the new safeguards, but emphasised that it is still working to improve its defences and stay ahead of abuse.


    AI and the Cybercrime Threat

The report arrives at a time when the use of AI in cybercrime is a growing concern.

Last year, OpenAI revealed that its own tools had been abused by Chinese and North Korean hackers to debug malware, craft phishing campaigns, and research targets. Microsoft also stepped in to block the groups' access to its AI services.

The report also highlighted two further alarming cases: the use of Claude in a fraudulent North Korean employment-application scam, and the role of AI in ransomware production.

The company cautioned that modern AI is lowering the barrier to entry for cybercrime, allowing even small groups or individuals to operate like highly proficient teams.

    “Criminals are adapting fast,” Anthropic said, adding that it is committed to developing stronger safeguards.
