Group: Trovia Dimitra, Fatsea Anthippi
Cybersecurity is a critical issue in the digital age, as threats in cyberspace grow increasingly complex. Generative AI, a subset of AI, offers new capabilities to enhance defensive mechanisms through:
- Process automation
- Attack simulation
- Vulnerability detection
However, its use raises ethical concerns and risks, as it can be exploited by malicious actors.
- Process Automation:
- Optimizes time-consuming tasks (e.g., log file analysis, threat hunting).
- Example:
- IBM Security Guardium for cloud data protection.
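Log-file triage of this kind can be sketched in a few lines. The log format, regex, and threshold below are illustrative assumptions for a generic SSH log, not part of IBM Security Guardium:

```python
import re
from collections import Counter

# Hypothetical auth-log lines; a real deployment would stream these from syslog.
LOG_LINES = [
    "Jan 10 10:01:02 sshd: Failed password for root from 203.0.113.7",
    "Jan 10 10:01:05 sshd: Failed password for admin from 203.0.113.7",
    "Jan 10 10:01:09 sshd: Failed password for root from 203.0.113.7",
    "Jan 10 10:02:11 sshd: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def suspicious_ips(lines, threshold=3):
    """Flag IPs with at least `threshold` failed logins (illustrative rule)."""
    fails = Counter(m.group(1) for line in lines if (m := FAILED.search(line)))
    return {ip for ip, n in fails.items() if n >= threshold}
```

A generative model could assist by proposing such extraction patterns or summarizing the flagged events, but the filtering itself remains a deterministic pipeline.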
- Attack Simulation:
- Generates realistic scenarios (e.g., phishing emails) to train systems.
- Techniques like CrowdCanary for phishing website detection.
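To make the detection side concrete, here is a minimal lexical scorer for suspicious URLs. The keyword list and red-flag rules are illustrative assumptions, not CrowdCanary's actual features:

```python
from urllib.parse import urlparse

# Lexical red flags commonly associated with phishing URLs (assumed list).
SUSPICIOUS_WORDS = {"login", "verify", "update", "secure", "account"}

def phishing_score(url: str) -> int:
    """Count lexical red flags in a URL (higher = more suspicious)."""
    host = urlparse(url).hostname or ""
    score = sum(1 for w in SUSPICIOUS_WORDS if w in url.lower())
    score += host.count("-")                    # hyphen-heavy hostnames
    score += 1 if host.count(".") > 3 else 0    # deep subdomain nesting
    score += 1 if any(c.isdigit() for c in host) else 0
    return score
```

Real systems replace such hand-written rules with learned classifiers, but the feature-extraction step looks much like this.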
- Vulnerability Detection:
  - Application of SAST (Static Application Security Testing) and SMT (Satisfiability Modulo Theories)-based models.
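A toy SAST pass can be written against Python's own syntax tree: walk the AST and flag calls to risky functions. The rule set below is a minimal assumption; real SAST tools add dataflow and taint analysis on top:

```python
import ast

# Function names treated as risky sinks (illustrative subset).
RISKY_CALLS = {"eval", "exec", "system"}

def find_risky_calls(source: str):
    """Return (line, name) pairs for calls to functions in RISKY_CALLS."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Plain calls carry .id (Name); attribute calls carry .attr.
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings
```

Generative models are increasingly used to triage or explain such findings; the detection itself stays rule-based here.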
- Malicious Use:
- Creation of polymorphic malware (e.g., ransomware) and automated hacking.
- Enhanced phishing attacks via NLG (Natural Language Generation).
- Ethical Issues:
- Deepfakes: Identity forgery for social engineering (e.g., a $25M scam using deepfake video calls).
- Data Leaks: Risks from training LLMs (Large Language Models) on poisoned datasets.
- Jailbreaking: Bypassing AI model restrictions (e.g., DAN, SWITCH methods).
- Experiments with ChatGPT 3.5 & Gemini:
- Jailbreaking: Revealing social engineering details via roleplay techniques.
- Phishing Email Generation: ChatGPT produced sample phishing emails, while Gemini refused to generate malicious content.
- Findings: Models recognize ethical boundaries but remain vulnerable to targeted prompts without "trigger words."
- Ethical Design:
- Transparency, legal data collection, and privacy protection.
- Regular Retraining:
- To address emerging threats and maintain accuracy.
- Defensive Applications:
- Anomaly detection, threat prioritization, and automated data analysis.
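The anomaly-detection application above can be sketched with a simple z-score test over a metric stream (e.g., requests per minute). The 3-sigma threshold is a common but illustrative choice, not a claim about any specific product:

```python
from statistics import mean, stdev

def anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

Production systems favor models robust to seasonality and drift, but this captures the core idea of flagging statistical outliers for analyst review.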
- Defense vs. Offense:
- 70% of CISOs believe Generative AI is more useful for attacks, but its defensive use is rising.
Generative AI is a double-edged sword in cybersecurity:
- ✅ Benefits: Enhanced threat detection, automation.
- ❌ Risks: Evolution of malicious techniques (e.g., deepfakes, automated hacking).
Its future impact depends on:
- Ethical alignment in model development.
- Data protection and continuous updates against emerging threats.
- Cross-disciplinary collaboration to address security gaps.
The paper includes 18 sources.
For the full paper, refer to the original work: [P3200203_P3190209.pdf].