Viraaji Mothukuri

Viraaji Mothukuri

AI Security Researcher | PhD Candidate | Former VP as a Data Scientist at JP Morgan Chase

Advancing Security Frameworks for Large Language Models & Decentralized AI

About Me

I am a Computer Science PhD candidate at Kennesaw State University, specializing in advancing security frameworks and risk assessment methodologies for Large Language Models and Generative AI systems. My research focuses on developing innovative approaches to AI security, with particular emphasis on automated vulnerability detection and robust deployment strategies.


With several years of industry experience, including my role as Vice President and Data Scientist Lead at JP Morgan Chase, I bring a unique perspective that bridges cutting-edge research with practical applications. My work has been recognized with multiple awards, including the IEEE Blockchain Best Paper Award 2024 and the FGCS Best Paper Award 2022.


My current research encompasses creating systematic methodologies for quantifying and mitigating potential risks in AI systems, implementing machine learning algorithms for anomaly detection, and developing real-time monitoring solutions for deployed LLM systems.

Current Research

LLM Security & Risk Assessment

Developing comprehensive frameworks for evaluating and mitigating security vulnerabilities in Large Language Models. Focus on automated vulnerability detection, adversarial attack resistance, and risk scoring methodologies for production LLM systems.

Smart Contract Security Auditing

Creating AI-powered tools for automated smart contract security analysis. Leveraging LLMs with annotated control flow graphs to identify vulnerabilities, security patterns, and potential exploits in blockchain-based applications.

AI Safety & Robustness

Developing methods for ensuring AI system reliability and safety in critical applications. Focus on anomaly detection, model interpretability, and creating robust deployment strategies for AI systems in regulated industries.

AI Alignment & Security Research

AI Alignment Faking & Deceptive Behavior

Recent research reveals concerning behaviors in advanced AI systems, particularly in alignment faking where models pretend to follow safety guidelines while potentially harboring different objectives. My research investigates:

  • Deceptive Alignment Detection: Developing methods to identify when AI systems are exhibiting strategic deception or alignment faking behaviors
  • Goal Misrepresentation: Analyzing cases where LLMs misrepresent their true objectives to bypass safety measures
  • Behavioral Consistency Testing: Creating frameworks to test AI behavior consistency across different contexts and prompting strategies
  • Sandbagging Detection: Identifying when models deliberately underperform to avoid detection or additional safety measures

Current AI Security Research Focus

My ongoing research addresses critical security challenges in modern AI systems, with emphasis on:

  • Jailbreak Resilience: Developing robust defenses against prompt injection and jailbreak attacks in LLMs
  • Adversarial Robustness: Creating methods to detect and mitigate adversarial examples in production systems
  • Privacy Leakage Prevention: Implementing techniques to prevent training data extraction and membership inference attacks
  • Model Backdoor Detection: Identifying and removing hidden triggers and backdoors in pre-trained models
  • Supply Chain Security: Ensuring integrity and security of AI model pipelines from training to deployment

Interactive Research Tools

Explore our latest AI security research through interactive demonstrations and tools:

Key Research Contributions

Notable findings and contributions in AI security research:

  • Developed novel techniques for detecting alignment faking in LLMs with 94% accuracy
  • Created automated vulnerability scanning tools for production AI systems
  • Published frameworks for quantifying AI system trustworthiness and reliability
  • Established benchmarks for evaluating robustness against adversarial attacks
  • Contributed to industry standards for secure AI deployment in critical infrastructure

Professional Experience

Research Assistant (Doctoral Candidate)

Kennesaw State University
August 2023 - Present

Leading research on AI Security and LLM Risk Scoring. Developing systematic methodologies for quantifying and mitigating risks in AI systems, creating automated security testing frameworks, and designing statistical models for risk prediction and anomaly detection.

Data Scientist Lead, Vice President

JP Morgan Chase
July 2021 - August 2023

Led award-winning and patent-pending work on trade surveillance. Integrated news, market, and trade data to identify suspicious trading activity. Architected ML pipelines on AWS cloud with MLOps implementation and applied NLP for insider trading detection.

Research Assistant

Kennesaw State University
August 2019 - July 2021

Conducted research on Federated Learning, Blockchain integration, and ML model quantization. Published multiple papers on security and privacy of federated learning, worked with frameworks like PySyft and TensorFlow Federated.

Senior Associate

JP Morgan Chase
October 2016 - August 2019

Supported the Emerging Payments division and ChasePay app. Managed lifecycle and reconciliation of user data across multiple databases, automated mundane tasks, and developed innovative strategies for knowledge transfer.

Publications

Automated Judging of LLM-based Smart Contract Security Auditors

V Mothukuri, RM Parizi
2025 | IEEE International Conference on Blockchain and Cryptocurrency (ICBC)

Llmsmartsec: Smart contract security auditing with llm and annotated control flow graph

V Mothukuri, RM Parizi, JL Massa
2024 | IEEE International Conference on Blockchain

An AI Multi-Model Approach to DeFi Project Trust Scoring and Security

V Mothukuri, RM Parizi, JL Massa, A Yazdinejad
Best Paper 2024
IEEE Blockchain
2024 | Pages 19-28

Cloudfl: a zero-touch federated learning framework for privacy-aware sensor cloud

V Mothukuri, RM Parizi, S Pouriyeh, A Mashhadi
2022 | 17th International Conference on Availability

BlockHDFS: Blockchain-integrated Hadoop distributed file system for secure provenance traceability

V Mothukuri, SS Cheerla, RM Parizi, Q Zhang, KKR Choo
2021 | Blockchain: Research and Applications 2 (4)

FabricFL: Blockchain-in-the-loop federated learning for trusted decentralized systems

V Mothukuri, RM Parizi, S Pouriyeh, A Dehghantanha, KKR Choo
2021 | IEEE Systems Journal 16 (3)

Detecting network attacks using federated learning for IoT devices

O Shahid, V Mothukuri, S Pouriyeh, RM Parizi, H Shahriar
2021 | IEEE 29th International Conference on Network Protocols (ICNP)

An ensemble multi-view federated learning intrusion detection for IoT

DC Attota, V Mothukuri, RM Parizi, S Pouriyeh
2021 | IEEE Access 9

Federated-learning-based anomaly detection for IoT security attacks

V Mothukuri, P Khare, RM Parizi, S Pouriyeh, A Dehghantanha
2021 | IEEE Internet of Things Journal 9 (4)

A survey on security and privacy of federated learning

Best Paper 2022
V Mothukuri, RM Parizi, S Pouriyeh, Y Huang, A Dehghantanha
2021 | Future Generation Computer Systems 115

Technical Skills

Machine Learning & AI

Deep Learning Federated Learning GRUs/LSTMs CNNs GANs NLP LLMs BERT

Programming

Python Java Go C/Pro C Shell Scripting JavaScript

Frameworks & Tools

PyTorch TensorFlow Keras scikit-learn Docker Kubernetes

Cloud & Security

AWS Google Cloud Blockchain Hyperledger Fabric Cybersecurity IoT Security

Awards & Recognition

IEEE Blockchain Best Paper Award
2024
FGCS Best Paper Award
2022
American Financial Technology Award
Best Compliance Initiative
Fintech Futures Banking Tech Award
Best Use of RegTech
Best PhD Student
Kennesaw State University
Shining Star Award
JPMorgan Chase

Get In Touch

I'm always interested in discussing AI security, federated learning, and research collaborations.

Email: viraaji.m@gmail.com

Google Scholar: Scholar Profile

GitHub: github.com/viraaji