Abdulla
Cybersecurity Specialist | Securing AI Systems in High-Trust Environments
Abu Dhabi, UAE
AI is being deployed faster than it is being secured. I build the systems and methods that keep it in check.
50+
Vulnerabilities Identified & Remediated
7+
Security Systems Delivered to Production
100s
Endpoints Under Encrypted Architecture
1
On-Premise AI R&D Lab (Zero External Dependency)
Selected Work Signals
- Promoted from trainee to specialist on the strength of delivery in my first month
- Every project I’ve delivered is still running in production
- Proposed the AI R&D lab, got it approved, built it, and it’s been running since
- Recognized as Employee of the Year — Future Leaders (2024–2025)
- Wrote a security policy that the organization now uses daily
Featured Projects
Career Trajectory
From digital forensics to securing AI systems in government environments — a progression driven by execution, not titles.
Digital Forensics Intern
Abu Dhabi Police — Emirates Forensics Lab
- Supported forensic investigations and evidence handling in controlled environments
Delivery & Service Project Manager
Huawei Technologies — Abu Dhabi
- Managed delivery of UAE–Oman Smart Gate border systems
- Coordinated 4 concurrent government infrastructure projects across stakeholders
Trainee → Cybersecurity Specialist
Abu Dhabi government authority
- Assigned to Red Team; executed 5 internal pentests within first month
- Identified and remediated 50+ vulnerabilities across staging and production systems
- Promoted to Cybersecurity Specialist based on performance
Cybersecurity Specialist — Security Delivery
Abu Dhabi government authority
- Project-managed and delivered 7+ security systems into production
- Designed and implemented encrypted network architecture across multiple environments
- Built secure data extraction platform with audit controls and policy enforcement
Cybersecurity Specialist — AI Transformation
Abu Dhabi government authority
- Called into the AI Transformation initiative based on track record and manager’s trust
- Built an 11-agent AI system on my own initiative in response to a chairman-identified use case; earned the Technical AI Team lead role
- Architected and deployed fully on-premise AI R&D lab (LLMs/VLMs, zero external dependency)
- Continue leading cybersecurity operations alongside AI system development
How I Think
On Security
I started in offensive security and that has never stopped. Pentesting internal systems, investigating incidents, remediating vulnerabilities — this is still my day-to-day alongside everything else. Understanding how systems break is how I design systems that hold. Security is not something I moved past. It is how I think about everything I build.
On AI Risk
Organizations are deploying AI faster than they understand the attack surface. I build AI systems inside government environments where there is no margin for error — air-gapped, fully local, no external API calls. When you work under those constraints, you learn quickly what a system actually needs versus what just sounds good. The AI R&D lab exists because I needed to prove that secure AI deployment was possible before recommending it to leadership.
On Working Under Constraints
Government environments have rules that do not bend. No cloud. No external data transfer. Approval chains for everything. I treat constraints as design requirements, not obstacles. The data extraction kiosk, the local AI lab, the encrypted network tunnels — each exists because a policy said “no” and I found a way to deliver “yes” without compromising the policy.
On Decisions
I would rather ship something that works within the rules than propose something impressive that will never get approved. Every infrastructure and security project I have delivered is still running in production.
Awards & Recognition
Employee of the Year
Future Leaders (2024–2025)
1st Place
CIS Showdown Competition
3rd Place
CIS Cyber Olympics 2022
AISAF
AI System Assurance Framework
An independent framework for evaluating AI system security in government and enterprise environments.
Why AISAF exists
AI systems are being deployed across government and enterprise environments without a consistent way to assess their security in practice. Existing frameworks — NIST AI RMF, ISO/IEC 42001, and OWASP AI guidance — define what should be governed, but stop short of how to evaluate a real system in front of you.
Governance tells you what matters. AISAF helps you evaluate whether a real system actually meets those expectations.
What AISAF is
AISAF is a practitioner-driven framework for evaluating AI system security, built from direct experience deploying and securing AI in environments where failure is not an option. It has been tested against realistic AI systems and has produced real findings, not theoretical observations.
Framework at a Glance
10
Threat Domains
8
Assurance Objectives
5
Assessment Phases
4
Architectural Layers
✓
Validated Assessment Output
✓
Standards Alignment
AISAF turns governance policy into something a practitioner can actually use when evaluating a live AI system.
4-Layer Architecture
NIST AI RMF · ISO/IEC 42001 · OWASP · UAE AI Strategy
What AISAF enables
- Structured evaluation of AI systems before and during production deployment
- Evidence-based alignment with governance frameworks
- Identification of risks specific to agentic and AI-driven architectures
- Support for organizations preparing for ISO/IEC 42001 or NIST-aligned environments
AISAF is complementary to existing standards. It operationalizes them — it does not replace or compete with them.
AISAF has been validated through structured assessment scenarios, producing actionable findings. The framework continues to evolve through real-world application.
The methodology is intentionally not fully published. AISAF is designed to be applied in real environments — not just read about.
Working on AI governance in your organization? I would welcome the conversation.
Get in Touch
Open to cybersecurity and AI security roles, advisory work, and research collaboration.
Interested in discussing an opportunity? I’d be happy to share a detailed CV directly.