
The Contribution of Artificial Intelligence (Generative AI) to Cybersecurity

Group: Trovia Dimitra, Fatsea Anthippi


Introduction

Cybersecurity is a critical issue in the digital age, as threats in cyberspace grow increasingly complex. Generative AI, a subset of AI, offers new capabilities to enhance defensive mechanisms through:

  • Process automation
  • Attack simulation
  • Vulnerability detection

However, its use raises ethical concerns and risks, as it can be exploited by malicious actors.

Key Points

1. Generative AI as a Defense Tool

  • Process Automation:
    • Optimizes time-consuming tasks (e.g., log file analysis, threat hunting).
  • Example:
    • IBM Security Guardium for cloud data protection.
  • Attack Simulation:
    • Generates realistic scenarios (e.g., phishing emails) to train systems.
    • Techniques like CrowdCanary for phishing website detection.
  • Vulnerability Detection:
    • Application of SAST (Static Application Security Testing) and SMT-based models.
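The automation ideas above (e.g., triaging suspected phishing emails before human review) can be illustrated with a minimal keyword-scoring classifier. This is a sketch only; the patterns and weights are illustrative assumptions, not values or methods from the paper, and real systems would use trained models rather than fixed rules.

```python
import re

# Illustrative suspicious phrases and weights (assumptions for this sketch)
SUSPICIOUS_PATTERNS = {
    r"verify your account": 0.4,
    r"urgent(ly)?": 0.3,
    r"click (the )?link": 0.3,
    r"password": 0.2,
}

def phishing_score(email_text: str) -> float:
    """Sum the weights of suspicious patterns found in the email body."""
    text = email_text.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS.items() if re.search(pat, text))

def is_phishing(email_text: str, threshold: float = 0.5) -> bool:
    """Flag the email for review when the combined score crosses a threshold."""
    return phishing_score(email_text) >= threshold

print(is_phishing("URGENT: verify your account via the link"))  # True
print(is_phishing("Meeting moved to 3pm tomorrow"))             # False
```

In practice such a rule-based filter would only be a pre-screening stage; a generative model could then explain *why* a message was flagged, which is where the time savings come from.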

2. Challenges & Risks

  • Malicious Use:
    • Creation of polymorphic malware (e.g., ransomware) and automated hacking.
    • Enhanced phishing attacks via NLG (Natural Language Generation).
  • Ethical Issues:
    • Deepfakes: Identity forgery for social engineering (e.g., a $25M scam using deepfake video calls).
    • Data Leaks: Risks from training LLMs (Large Language Models) on poisoned datasets.
    • Jailbreaking: Bypassing AI model restrictions (e.g., DAN, SWITCH methods).

3. Practical Applications

  • Experiments with ChatGPT 3.5 & Gemini:
    • Jailbreaking: Revealing social engineering details via roleplay techniques.
    • Phishing Email Generation: ChatGPT produced sample phishing emails, while Gemini refused to generate malicious content.
    • Findings: Models recognize ethical boundaries but remain vulnerable to targeted prompts without "trigger words."
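Experiments like these need a way to score whether a model refused or complied with a prompt. A minimal sketch of such scoring is shown below; the refusal markers are assumptions for illustration, and a crude keyword heuristic like this would normally be backed by human review.

```python
# Phrases commonly seen in model refusals (illustrative assumptions)
REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm sorry", "i am unable", "as an ai",
)

def classify_response(reply: str) -> str:
    """Label a model reply as 'refused' or 'complied' by keyword matching.

    A deliberately crude heuristic: a real evaluation would combine this
    with manual or model-assisted review of each transcript.
    """
    lowered = reply.lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        return "refused"
    return "complied"

print(classify_response("I'm sorry, but I can't help with that."))  # refused
print(classify_response("Sure, here is a sample email: ..."))       # complied
```

Note the finding above: because targeted prompts avoid "trigger words", compliance without obvious refusal markers is exactly the failure mode such a harness is meant to surface.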

4. Solutions & Future Prospects

  • Ethical Design:
    • Transparency, legal data collection, and privacy protection.
  • Regular Retraining:
    • To address emerging threats and maintain accuracy.
  • Defensive Applications:
    • Anomaly detection, threat prioritization, and automated data analysis.
  • Defense vs. Offense:
    • 70% of CISOs believe Generative AI is more useful for attacks, but its defensive use is rising.
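The anomaly-detection application mentioned above can be sketched with a simple statistical baseline: flag any metric (e.g., hourly login attempts) whose z-score exceeds a threshold. This is a minimal illustration, not the paper's method; production systems would use richer features and learned models.

```python
from statistics import mean, stdev

def anomalous_counts(counts: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return values whose z-score against the sample exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # all values identical: nothing stands out
        return []
    return [c for c in counts if abs(c - mu) / sigma > z_threshold]

# Hourly login attempts; the spike to 95 is the kind of event to surface
logins = [12, 15, 11, 14, 13, 12, 95]
print(anomalous_counts(logins, z_threshold=2.0))  # [95]
```

Threat prioritization then becomes a ranking over such flagged events, with automated data analysis supplying the context an analyst needs.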

Conclusions

Generative AI is a double-edged sword in cybersecurity:

  • ✅ Benefits: Enhanced threat detection, automation.
  • ❌ Risks: Evolution of malicious techniques (e.g., deepfakes, automated hacking).

Its future impact depends on:

  1. Ethical alignment in model development.
  2. Data protection and continuous updates against emerging threats.
  3. Cross-disciplinary collaboration to address security gaps.

References

The paper cites 18 sources. For the full paper and complete reference list, refer to the original work: [P3200203_P3190209.pdf].