AI Security Researcher | PhD Candidate | Former VP and Data Scientist Lead at JP Morgan Chase
Advancing Security Frameworks for Large Language Models & Decentralized AI
I am a Computer Science PhD candidate at Kennesaw State University, specializing in advancing security frameworks and risk assessment methodologies for Large Language Models and Generative AI systems. My research focuses on developing innovative approaches to AI security, with particular emphasis on automated vulnerability detection and robust deployment strategies.
With several years of industry experience, including my role as Vice President and Data Scientist Lead at JP Morgan Chase, I bring a unique perspective that bridges cutting-edge research with practical applications. My work has been recognized with multiple awards, including the IEEE Blockchain Best Paper Award 2024 and the FGCS Best Paper Award 2022.
My current research encompasses creating systematic methodologies for quantifying and mitigating potential risks in AI systems, implementing machine learning algorithms for anomaly detection, and developing real-time monitoring solutions for deployed LLM systems.
Developing comprehensive frameworks for evaluating and mitigating security vulnerabilities in Large Language Models. Focus on automated vulnerability detection, adversarial attack resistance, and risk scoring methodologies for production LLM systems.
Creating AI-powered tools for automated smart contract security analysis. Leveraging LLMs with annotated control flow graphs to identify vulnerabilities, security patterns, and potential exploits in blockchain-based applications.
Developing methods for ensuring AI system reliability and safety in critical applications. Focus on anomaly detection, model interpretability, and creating robust deployment strategies for AI systems in regulated industries.
Recent research reveals concerning behaviors in advanced AI systems, particularly alignment faking, where models appear to follow safety guidelines while potentially harboring different objectives. My research investigates how such behaviors can be detected and mitigated.
My ongoing research addresses critical security challenges in modern AI systems, with emphasis on automated vulnerability detection, risk quantification, and real-time monitoring of deployed models.
Explore our latest AI security research through interactive demonstrations and tools:
Access our comprehensive collection of AI security research, vulnerability assessments, and defensive techniques through our interactive platform.
Notable findings and contributions in AI security research:
Leading research on AI Security and LLM Risk Scoring. Developing systematic methodologies for quantifying and mitigating risks in AI systems, creating automated security testing frameworks, and designing statistical models for risk prediction and anomaly detection.
Led award-winning, patent-pending work on trade surveillance, integrating news, market, and trade data to identify suspicious trading activity. Architected ML pipelines on AWS with MLOps practices and applied NLP to insider trading detection.
Conducted research on federated learning, blockchain integration, and ML model quantization. Published multiple papers on the security and privacy of federated learning, and worked with frameworks such as PySyft and TensorFlow Federated.
Supported the Emerging Payments division and the ChasePay app. Managed the lifecycle and reconciliation of user data across multiple databases, automated routine tasks, and developed new strategies for knowledge transfer.
I'm always interested in discussing AI security, federated learning, and research collaborations.
Email: viraaji.m@gmail.com
Google Scholar: Scholar Profile
GitHub: github.com/viraaji