Hi, I’m Harshavardhan Darekar, an AI Security Researcher focused on empirical evaluation of security risks in large language models and agentic systems. I specialize in prompt injection, data poisoning, and misuse scenarios, with a strong background in penetration testing and vulnerability research. I enjoy translating adversarial findings into practical safety insights to help build more reliable and secure AI systems. Currently I design and run empirical adversarial experiments for LLM-powered agent systems at Mindrift, creating automated Python workflows to explore failure modes across multiple interaction channels. I explore how adversarial inputs propagate through retrieval-augmented, tool-using, and environment-aware agents, documenting failure cases and contributing to safer, more robust AI deployments. I also collaborate with engineering teams to prioritize mitigations and improvements.

Harshavardhan Darekar

Hi, I’m Harshavardhan Darekar, an AI Security Researcher focused on empirical evaluation of security risks in large language models and agentic systems. I specialize in prompt injection, data poisoning, and misuse scenarios, with a strong background in penetration testing and vulnerability research. I enjoy translating adversarial findings into practical safety insights to help build more reliable and secure AI systems. Currently I design and run empirical adversarial experiments for LLM-powered agent systems at Mindrift, creating automated Python workflows to explore failure modes across multiple interaction channels. I explore how adversarial inputs propagate through retrieval-augmented, tool-using, and environment-aware agents, documenting failure cases and contributing to safer, more robust AI deployments. I also collaborate with engineering teams to prioritize mitigations and improvements.

Available to hire

Hi, I’m Harshavardhan Darekar, an AI Security Researcher focused on empirical evaluation of security risks in large language models and agentic systems. I specialize in prompt injection, data poisoning, and misuse scenarios, with a strong background in penetration testing and vulnerability research. I enjoy translating adversarial findings into practical safety insights to help build more reliable and secure AI systems.

Currently I design and run empirical adversarial experiments for LLM-powered agent systems at Mindrift, creating automated Python workflows to explore failure modes across multiple interaction channels. I explore how adversarial inputs propagate through retrieval-augmented, tool-using, and environment-aware agents, documenting failure cases and contributing to safer, more robust AI deployments. I also collaborate with engineering teams to prioritize mitigations and improvements.

See more

Experience Level

Expert
Expert
Expert
Expert
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
See more

Work Experience

Add your work experience history here.

Education

MSc Cyber Security at King’s College London
January 1, 2023 - January 1, 2024
BE Computer Science at AISSMS College of Engineering, Pune
January 1, 2019 - January 1, 2022

Qualifications

Certified Ethical Hacker (CEH)
January 11, 2030 - January 8, 2026
Microsoft SC-900: Security, Compliance, and Identity Fundamentals
January 11, 2030 - January 8, 2026
CompTIA Security+
January 11, 2030 - January 8, 2026

Industry Experience

Software & Internet, Professional Services, Government, Other

Experience Level

Expert
Expert
Expert
Expert
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
Intermediate
See more