
Trust, Safety and Security due to AI

  • Writer: Manjula Sridhar
  • 4 days ago
  • 2 min read

(This blog was written with lots of AI tools for the pics and edited by an expert human :-))

AI has inherited all of the security issues that technology has accumulated so far. On top of that, it has introduced a few new ones, and they are evolving day by day.


P.C.: Canva


Direct Security Threats

Adversarial Attacks - AI systems can be fooled by carefully crafted inputs. For example, adding imperceptible noise to images can cause misclassification, or subtle changes to text can manipulate language models into producing harmful outputs.
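
To make the "imperceptible noise" idea concrete, here is a minimal sketch of the classic fast-gradient-sign trick, which nudges every pixel slightly in the direction that most increases the model's loss. The tiny untrained PyTorch classifier below is just a stand-in for a real image model; the same few lines fool trained networks in practice.

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch.
# "model" is a hypothetical stand-in classifier, not a real trained network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28)   # a "clean" input
label = torch.tensor([3])          # its true class
epsilon = 0.05                     # small perturbation budget (imperceptible)

image.requires_grad_(True)
loss = loss_fn(model(image), label)
loss.backward()

# Step each pixel slightly in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```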

Data Poisoning - Attackers can inject malicious data into training sets, causing AI models to learn incorrect patterns or create backdoors that trigger specific behaviours when certain inputs are encountered.
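
A rough sketch of what a backdoor-style poisoning step can look like: a small fraction of the training samples gets a tiny trigger patch and is relabelled to an attacker-chosen class, so a model trained on the data behaves normally until the trigger appears. The arrays below are hypothetical stand-ins for a real dataset.

```python
# Backdoor-style data poisoning sketch on a hypothetical image dataset.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))        # stand-in training images
labels = rng.integers(0, 10, size=1000)    # stand-in labels
target_class = 7                           # class the backdoor should trigger
poison_fraction = 0.02                     # only 2% of samples are touched

poison_idx = rng.choice(len(images), int(poison_fraction * len(images)), replace=False)
for i in poison_idx:
    images[i, -3:, -3:] = 1.0              # 3x3 white corner patch acts as the trigger
    labels[i] = target_class               # flip the label to the target class

print(f"poisoned {len(poison_idx)} of {len(images)} samples")
```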

Model Theft - Proprietary AI models can be stolen through query-based attacks, where attackers repeatedly probe a model to reverse-engineer its functionality, or through direct theft of model weights.
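
A toy illustration of query-based extraction, assuming the attacker can only call the victim's prediction API: probe it with attacker-chosen inputs, record its answers, and train a local surrogate that mimics it. The scikit-learn models below are placeholders for a real deployed model.

```python
# Query-based model extraction sketch with placeholder victim and surrogate models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Victim: imagine this lives behind an API the attacker can only query.
X_private = rng.normal(size=(500, 5))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_private, y_private)

# Attacker: generate probe inputs, record the victim's answers, train a copy.
X_probe = rng.normal(size=(2000, 5))
y_stolen = victim.predict(X_probe)
surrogate = DecisionTreeClassifier(max_depth=5).fit(X_probe, y_stolen)

agreement = (surrogate.predict(X_probe) == y_stolen).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of probe queries")
```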

AI-Enabled Threats

Sophisticated Social Engineering - AI can generate highly convincing phishing emails, deepfake audio/video for impersonation, and personalized scam content at scale, making traditional fraud detection much harder.

Automated Vulnerability Discovery - AI tools can accelerate the discovery of software vulnerabilities and help generate exploit code, lowering the barrier for cyberattacks.

Disinformation Campaigns - Large language models can create vast amounts of convincing false content, including fake news articles, social media posts, and manipulated media, making information warfare more scalable.

Systemic Risks

Privacy Violations - AI models can memorize and leak sensitive training data. They can also be used to de-anonymize datasets or infer private information from public data.
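
One simple way memorization leaks show up is a loss-threshold membership-inference test: records the model was trained on tend to get suspiciously low loss, which can reveal whether a specific person's data was in the training set. The data and model below are hypothetical stand-ins, just to show the idea.

```python
# Loss-threshold membership-inference sketch on hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)
X_out = rng.normal(size=(200, 10))          # records never seen in training
y_out = (X_out[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def per_sample_loss(model, X, y):
    p = model.predict_proba(X)
    return np.array([log_loss([yi], [pi], labels=[0, 1]) for yi, pi in zip(y, p)])

threshold = 0.3                              # attacker-tuned cutoff (assumed value)
in_losses = per_sample_loss(model, X_train, y_train)
out_losses = per_sample_loss(model, X_out, y_out)
print("flagged as members (training records):", (in_losses < threshold).mean())
print("flagged as members (unseen records):  ", (out_losses < threshold).mean())
```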

Supply Chain Vulnerabilities - Dependencies on third-party AI services, pre-trained models, or datasets can introduce hidden risks, including embedded biases, backdoors, or data breaches.
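
A basic mitigation sketch for the model-artifact part of this problem: pin a downloaded checkpoint to a known-good SHA-256 digest before loading it. The file name and the expected digest below are placeholders, not real values.

```python
# Verify a downloaded model artifact against a pinned SHA-256 hash before use.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64                   # placeholder digest, not a real value

def verify_artifact(path: Path, expected: str) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

weights = Path("pretrained_model.bin")       # hypothetical downloaded checkpoint
if weights.exists() and verify_artifact(weights, EXPECTED_SHA256):
    print("checksum OK, safe to load")
else:
    print("checksum mismatch or file missing, refuse to load")
```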

Autonomous Weapons - AI-powered weapons systems raise concerns about decision-making in warfare, accountability, and potential for unintended escalation.

Critical Infrastructure Dependence - As AI becomes embedded in healthcare, finance, transportation, and energy systems, failures or attacks on these AI systems could have cascading consequences.

Emerging Concerns

The rapid pace of AI development also creates its own challenges: safety research struggles to keep up with capabilities, misuse by bad actors gets easier as the tools become more accessible, and increasingly complex AI systems are harder to secure.


For a deep dive into this space, I highly recommend reading the research blogs at https://www.irregular.com/

 
 
 
