Hi, I’m JHANSI KANDADIAI, a versatile AI/ML engineer with over 10 years of experience designing and deploying AI-powered applications, data pipelines, and scalable cloud-native solutions. I specialize in full-stack development, data engineering, and Generative AI, delivering measurable business value across banking, healthcare, and retail.
I routinely lead cross-functional teams to architect multi-cloud ML platforms, fine-tune LLMs, build semantic search and RAG pipelines, and deploy scalable MLOps workflows. I’m passionate about mentoring engineers and driving enterprise adoption of responsible AI.
Work Experience
• Designed and deployed AI agent testing frameworks integrating GPT-4, BERT, Whisper, and domain-specific LLMs to validate multi-agent workflows and ensure accurate, context-aware responses.
• Engineered RAG pipelines using LangChain, LangGraph, and LlamaIndex with vector databases (Pinecone, Qdrant, Weaviate, Chroma, FAISS) for semantic search, knowledge retrieval, and functional validation of LLM-powered agents.
• Generated and fine-tuned embeddings for banking knowledge bases, improving retrieval accuracy by 45% and enabling testing of context-aware AI responses.
• Developed backend microservices in FastAPI and Flask with REST and GraphQL APIs to simulate agent interactions with internal systems and support automated testing scenarios.
• Implemented interactive dashboards using Streamlit and React.js with Tailwind CSS and Material UI to monitor AI agent performance, conversation logs, and test results.
• Applied advanced prompt engineering and LLM fine-tuning using QLoRA, PEFT, DeepSpeed, and Hugging Face Transformers, reducing AI hallucinations by 20% and improving response relevance and reliability.
• Ensured AI safety, compliance, and auditability with Guardrails.ai, differential privacy, and secure handling of sensitive banking data during testing workflows.
• Managed multi-cloud deployment (AWS, Azure OpenAI, GCP Vertex AI) and containerized testing environments with Docker, Kubernetes, and Helm for reproducible agent testing.
• Implemented CI/CD pipelines with GitHub Actions, Jenkins, GitLab CI, and Terraform to automate deployment, rollback, and version control of LLM-based testing frameworks.
• Built analytics and observability pipelines capturing test case execution, conversation metrics, and model performance using Python, Pandas, NumPy, Streamlit, Plotly, ELK Stack, Prometheus, and Grafana.
• Developed escalation and validation workflows to route complex or failed test scenarios to human review, improving QA coverage and reducing error handling overhead by 35%.
• Conducted LLM benchmarking, functional testing, and performance optimization for latency, accuracy, and cloud efficiency, achieving ~30% improvement in processing speed.
• Mentored junior engineers and collaborated with stakeholders via JIRA and Confluence, providing guidance on Python, FastAPI, Streamlit, embeddings, multi-agent testing, and AI validation best practices.
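The retrieval step behind the RAG pipelines listed above can be illustrated with a minimal sketch. This is a toy, not the production setup: a bag-of-words cosine similarity stands in for a real embedding model, and an in-memory list stands in for a vector database such as Pinecone or FAISS; the documents and query are invented examples.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model or hosted API here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k,
    # the same shape of operation a vector DB performs at scale.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "wire transfer limits for business accounts",
    "resetting online banking passwords",
    "mortgage rate lock policies",
]
print(retrieve("reset my online banking password", docs, k=1))
# → ['resetting online banking passwords']
```

In a real deployment the retrieved passages would then be injected into the LLM prompt for grounded answering and validation.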
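The escalation-and-validation workflow described above (routing complex or failed test scenarios to human review) can be sketched as a simple triage rule. The threshold, dataclass names, and scenario labels are illustrative assumptions, not the actual framework.

```python
from dataclasses import dataclass, field

# Hypothetical confidence floor; a real system would tune this per project.
CONFIDENCE_FLOOR = 0.75

@dataclass
class TestResult:
    scenario: str
    passed: bool
    confidence: float

@dataclass
class Triage:
    auto_closed: list = field(default_factory=list)
    human_review: list = field(default_factory=list)

def triage(results: list[TestResult]) -> Triage:
    out = Triage()
    for r in results:
        # Failed scenarios, or passes the model is unsure about,
        # get escalated to a human reviewer; the rest auto-close.
        if not r.passed or r.confidence < CONFIDENCE_FLOOR:
            out.human_review.append(r.scenario)
        else:
            out.auto_closed.append(r.scenario)
    return out

results = [
    TestResult("balance-inquiry happy path", True, 0.95),
    TestResult("ambiguous dispute request", True, 0.55),
    TestResult("wire-transfer edge case", False, 0.90),
]
t = triage(results)
print(t.human_review)
# → ['ambiguous dispute request', 'wire-transfer edge case']
```

Separating "failed" from "low-confidence pass" is what lets a workflow like this raise QA coverage while cutting manual error handling.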
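The latency side of the LLM benchmarking mentioned above reduces to repeated timed calls plus summary statistics. A minimal stdlib sketch, where the sleeping lambda is a stand-in for whatever client actually invokes the model:

```python
import statistics
import time

def benchmark(fn, payloads, runs_per_payload=3):
    """Measure wall-clock latency of fn over each payload."""
    latencies = []
    for p in payloads:
        for _ in range(runs_per_payload):
            t0 = time.perf_counter()
            fn(p)
            latencies.append(time.perf_counter() - t0)
    return {
        "mean_s": statistics.mean(latencies),
        # Last cut point of 20 quantiles approximates the 95th percentile.
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
        "max_s": max(latencies),
    }

# Dummy "model call" that just sleeps; replace with a real client call.
stats = benchmark(lambda p: time.sleep(0.001), ["q1", "q2"], runs_per_payload=5)
print(sorted(stats))
# → ['max_s', 'mean_s', 'p95_s']
```

Tracking p95 and max rather than only the mean is what surfaces the tail-latency regressions that matter for interactive agents.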