Using AI to protect against AI image manipulation

2 August 2023, 08:20 | AI

“PhotoGuard,” developed by MIT CSAIL researchers, prevents unauthorized image manipulation, safeguarding authenticity in the era of advanced generative models

As we enter a new era in which technologies powered by artificial intelligence can craft and manipulate images with a precision that blurs the line between reality and fabrication, the specter of misuse looms large. Recently, advanced generative models such as DALL-E and Midjourney, celebrated for their impressive precision and user-friendly interfaces, have made the production of hyper-realistic images relatively effortless. With the barriers to entry lowered, even inexperienced users can generate and manipulate high-quality images from simple text descriptions, ranging from innocent alterations to malicious changes. Techniques like watermarking offer a promising solution, but countering misuse requires a preemptive (rather than merely post hoc) measure.

In the quest to create such a new measure, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed “PhotoGuard,” a technique that uses perturbations — minuscule alterations in pixel values invisible to the human eye but detectable by computer models — that effectively disrupt the model’s ability to manipulate the image.

PhotoGuard uses two different “attack” methods to generate these perturbations. The more straightforward “encoder” attack targets the image’s latent representation in the AI model, causing the model to perceive the image as a random entity. The more sophisticated “diffusion” attack defines a target image and optimizes the perturbations so that the final image resembles the target as closely as possible.

“Consider the possibility of fraudulent propagation of fake catastrophic events, like an explosion at a significant landmark. This deception can manipulate market trends and public sentiment, but the risks are not limited to the public sphere. Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” says Hadi Salman, an MIT graduate student in electrical engineering and computer science (EECS), affiliate of MIT CSAIL, and lead author of a new paper about PhotoGuard. 

“In more extreme scenarios, these models could simulate voices and images for staging false crimes, inflicting psychological distress and financial loss. The swift nature of these actions compounds the problem. Even when the deception is eventually uncovered, the damage — whether reputational, emotional, or financial — has often already happened. This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation.”

PhotoGuard in practice
An AI model views an image differently from how humans do. It sees the image as a complex set of mathematical data points that describe every pixel’s color and position: this is the image’s latent representation. The encoder attack introduces minor adjustments into this mathematical representation, causing the AI model to perceive the image as a random entity. As a result, any attempt to manipulate the image using the model becomes nearly impossible. The changes introduced are so minute that they are invisible to the human eye, thus preserving the image’s visual integrity while ensuring its protection.
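The encoder attack can be sketched as projected gradient descent on the perturbation. The toy below is a minimal illustration, not PhotoGuard’s actual code: a fixed random linear map stands in for the real (nonlinear) image encoder, and the loop nudges the immunized image’s latent toward an arbitrary target while clipping every pixel change to an invisibly small budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed random linear map.
# (A real encoder, such as a diffusion model's VAE, is nonlinear;
# this only illustrates the shape of the optimization loop.)
W = rng.normal(size=(16, 64))

def encode(x):
    return W @ x

image = rng.uniform(0.0, 1.0, size=64)   # flattened toy "image" in [0, 1]
target_latent = rng.normal(size=16)      # arbitrary latent to push toward

eps = 0.03   # L-infinity budget: keeps the change imperceptible
lr = 0.01
delta = np.zeros_like(image)

for _ in range(200):
    latent = encode(image + delta)
    # gradient of 0.5 * ||encode(image + delta) - target_latent||^2 w.r.t. delta
    grad = W.T @ (latent - target_latent)
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)                  # project into the budget
    delta = np.clip(image + delta, 0.0, 1.0) - image   # keep pixels valid

immunized = image + delta
```

After the loop, the immunized image’s latent sits measurably closer to the arbitrary target than the original’s did, yet no pixel has moved by more than `eps`; that is the sense in which the model now “perceives the image as a random entity.”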

The second and decidedly more intricate “diffusion” attack strategically targets the entire diffusion model end-to-end. This involves determining a desired target image, and then initiating an optimization process with the intention of closely aligning the generated image with this preselected target.

In their implementation, the team created perturbations within the input space of the original image. These perturbations are then applied to the images during the inference stage, offering a robust defense against unauthorized manipulation.
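The end-to-end idea can be sketched in the same way, with the whole editing pipeline in the loss instead of just the encoder. In this hypothetical toy, a single tanh layer stands in for the entire diffusion pipeline (the real attack backpropagates through many denoising steps, which is what makes it expensive); the perturbation is optimized so that the pipeline’s output drifts toward a preselected target.

```python
import numpy as np

rng = np.random.default_rng(42)

# One nonlinear layer as a stand-in for a full image-editing pipeline.
A = rng.normal(size=(16, 64)) / 8.0

def edit(x):
    return np.tanh(A @ x)

image = rng.uniform(0.0, 1.0, size=64)
target_output = np.tanh(rng.normal(size=16))  # preselected target for the edits

eps, lr = 0.05, 0.05
delta = np.zeros_like(image)

for _ in range(300):
    out = edit(image + delta)
    # chain rule through tanh: gradient of 0.5 * ||edit(x + delta) - target||^2
    grad = A.T @ ((out - target_output) * (1.0 - out ** 2))
    delta -= lr * grad
    delta = np.clip(delta, -eps, eps)  # stay within the invisibility budget

protected = image + delta
```

The structure mirrors the encoder attack; only the function being differentiated through changes, which is why the diffusion variant costs so much more in practice.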

“The progress in AI that we are witnessing is truly breathtaking, but it enables beneficial and malicious uses of AI alike,” says MIT professor of EECS and CSAIL principal investigator Aleksander Madry, who is also an author on the paper. “It is thus urgent that we work towards identifying and mitigating the latter. I view PhotoGuard as our small contribution to that important effort.”

The diffusion attack is more computationally intensive than its simpler sibling, and requires significant GPU memory. The team says that approximating the diffusion process with fewer steps mitigates the issue, thus making the technique more practical.

To better illustrate the attack, consider an art project. The original image is a drawing, and the target image is another, completely different drawing. The diffusion attack is like making tiny, invisible changes to the first drawing so that, to an AI model, it begins to resemble the second. To the human eye, however, the original drawing remains unchanged.

By doing this, any AI model attempting to modify the original image will now inadvertently make changes as if dealing with the target image, thereby protecting the original image from intended manipulation. The result is a picture that remains visually unaltered for human observers, but protects against unauthorized edits by AI models.

For a real example with PhotoGuard, consider an image with multiple faces. You could mask any faces you don’t want modified, then prompt with “two men attending a wedding.” Upon submission, the system will adjust the image accordingly, creating a plausible depiction of two men participating in a wedding ceremony.

Now, consider safeguarding the image from being edited; adding perturbations to the image before upload can immunize it against modifications. In this case, the final output will lack realism compared to the original, non-immunized image.

All hands on deck
Key allies in the fight against image manipulation are the creators of the image-editing models, says the team. For PhotoGuard to be effective, an integrated response from all stakeholders is necessary. “Policymakers should consider implementing regulations that mandate companies to protect user data from such manipulations. Developers of these AI models could design APIs that automatically add perturbations to users’ images, providing an added layer of protection against unauthorized edits,” says Salman.

Despite PhotoGuard’s promise, it’s not a panacea. Once an image is online, individuals with malicious intent could attempt to reverse engineer the protective measures by applying noise, cropping, or rotating the image. However, there is plenty of previous work from the adversarial examples literature that can be utilized here to implement robust perturbations that resist common image manipulations.
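One standard tool from that literature is expectation over transformation (EOT): while optimizing the perturbation, average the gradient over random transformations so that the protection survives them. A toy numpy sketch, with a random linear map again standing in for the encoder and additive pixel noise as the only “transformation”:

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(8, 32))           # toy stand-in encoder
x = rng.uniform(0.0, 1.0, size=32)     # flattened toy "image"
target = rng.normal(size=8)            # latent to push toward
eps, lr, K = 0.05, 0.005, 8            # budget, step size, samples per step

delta = np.zeros_like(x)
for _ in range(200):
    grad = np.zeros_like(x)
    for _ in range(K):
        # sample a random transformation (here: small additive noise;
        # real EOT would also sample crops, rotations, compression, ...)
        t = x + delta + rng.normal(scale=0.01, size=32)
        grad += W.T @ (W @ t - target)
    delta -= lr * grad / K             # average gradient over transformations
    delta = np.clip(delta, -eps, eps)

robust = x + delta
```

Extending the sampled transformations to crops and rotations is what would let such a perturbation resist the circumventions described above; this toy only shows the averaging pattern.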

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation. Working on this pressing issue is of paramount importance today,” says Salman. “And while I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools. As we tread into this new era of generative models, let’s strive for potential and protection in equal measures.”

“The prospect of using attacks on machine learning to protect us from abusive uses of this technology is very compelling,” says Florian Tramèr, an assistant professor at ETH Zürich. “The paper has a nice insight that the developers of generative AI models have strong incentives to provide such immunization protections to their users, which could even be a legal requirement in the future. However, designing image protections that effectively resist circumvention attempts is a challenging problem: Once the generative AI company commits to an immunization mechanism and people start applying it to their online images, we need to ensure that this protection will work against motivated adversaries who might even use better generative AI models developed in the near future. Designing such robust protections is a hard open problem, and this paper makes a compelling case that generative AI companies should be working on solving it.”

Salman wrote the paper alongside fellow lead authors Alaa Khaddaj and Guillaume Leclerc MS ’18, as well as Andrew Ilyas ’18, MEng ’18; all three are EECS graduate students and MIT CSAIL affiliates. The team’s work was partially done on the MIT Supercloud compute cluster, supported by U.S. National Science Foundation grants and Open Philanthropy, and based upon work supported by the U.S. Defense Advanced Research Projects Agency. It was presented at the International Conference on Machine Learning this July.

Source: MIT

newshub © 2023-2025 MSTRpay AB
