Trisha Kundur

Available to hire

I am an AI/ML engineer with 4 years of experience building production-grade machine learning and Generative AI solutions across customer service automation, fraud detection, and recommendation systems. I specialize in large language models (BERT, GPT, LLaMA), prompt engineering, and orchestration with LangChain and RAG pipelines for scalable NLP applications. I also focus on end-to-end MLOps, ensuring reliable deployment, monitoring, and continuous improvement of enterprise AI systems using Docker, Kubernetes, MLflow, and cloud platforms like AWS and Azure.

I am passionate about delivering impactful AI that drives business outcomes. I design and deploy scalable AI solutions, collaborate with cross-functional teams, and continually refine models through feedback loops and robust monitoring to maximize accuracy and efficiency.


Experience Level

Expert (×11)
Intermediate (×1)

Language

English
Fluent

Work Experience

AI/ML Engineer at AT&T
August 1, 2024 - Present
- Developed LLM-powered customer service assistants (BERT, GPT, LangChain, RAG) to automate and optimize customer support, reducing average handling time by 40% and automating 65% of Tier-1 support.
- Implemented NLP pipelines for intent classification and contextual response generation, improving first-response accuracy by 35% across high-volume channels.
- Built generative-AI response synthesis modules that produce context-aware replies, increasing customer satisfaction by 28%.
- Deployed AI microservices with Docker and Kubernetes on AWS (EC2, S3, SageMaker), ensuring 99.9% uptime and scalable inference for 10,000+ concurrent users.
- Developed continuous-training and human-in-the-loop validation workflows (MLflow), enabling ongoing model refinement and reducing false positives by 18%.
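The retrieval step behind a RAG assistant like the one described above can be sketched in a few lines of dependency-free Python. This is a minimal illustration only: a production pipeline would use dense embeddings and a vector store rather than bag-of-words cosine similarity, and the knowledge-base entries and function names here are hypothetical.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term frequencies; real RAG pipelines use dense embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank knowledge-base documents by similarity to the user query.
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the LLM call by injecting the retrieved context into the prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Customers can reset a router by holding the reset button for 10 seconds.",
    "Billing disputes are handled by the accounts team within 5 business days.",
    "International roaming can be enabled from the account settings page.",
]
print(build_prompt("How do I reset my router?", kb))
```

The prompt produced this way would then be sent to the generation model; keeping retrieval and generation separate is what lets the assistant stay grounded in the support knowledge base.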
Machine Learning Engineer at Mu Sigma
June 1, 2021 - July 1, 2023
- Designed and deployed a real-time fraud detection system using XGBoost and Azure Machine Learning, reducing fraud losses by 30% within the first quarter.
- Engineered 20+ features and addressed class imbalance with SMOTE and undersampling, achieving an F1-score of 87% on datasets with a <1% fraud rate.
- Improved model recall to 84% while maintaining high precision by optimizing thresholds and ensemble strategies, reducing false positives in production.
- Built low-latency (<200 ms) scoring APIs with Flask and AKS for real-time transaction evaluation.
- Developed distributed data processing pipelines (Azure Databricks, PySpark) for scalable feature computation and model training.
- Integrated model monitoring and performance tracking (Azure Monitor, MLflow) to detect degradation and support automated retraining via Azure DevOps.
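The threshold-optimization step mentioned above can be sketched with a small, self-contained helper. This is illustrative only: in the production system the scores would come from the XGBoost model, and the sample scores and labels below are made up to show the mechanics of trading precision against recall on an imbalanced dataset.

```python
def f1_at_threshold(scores, labels, thr):
    # Classify as fraud when score >= thr, then compute F1 against labels.
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    # Sweep candidate thresholds at each observed score rather than a fixed grid.
    candidates = sorted(set(scores))
    return max(candidates, key=lambda t: f1_at_threshold(scores, labels, t))

scores = [0.1, 0.2, 0.3, 0.8, 0.9, 0.4]  # classifier scores (illustrative)
labels = [0, 0, 0, 1, 1, 0]              # 1 = fraud
print(best_threshold(scores, labels))    # → 0.8
```

On highly imbalanced data the default 0.5 cutoff is rarely optimal, which is why sweeping thresholds against a validation set (and re-checking after each retrain) matters for keeping false positives down.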

Education

Master of Science, Information Systems at Saint Louis University
January 11, 2030 - February 16, 2026

Qualifications

AWS Certified AI Engineer – Practitioner
January 11, 2030 - February 5, 2026
AWS Certified Machine Learning – Specialty
January 11, 2030 - February 5, 2026
Databricks Certified Generative AI Fundamentals
January 11, 2030 - February 5, 2026

Industry Experience

Software & Internet, Professional Services, Telecommunications, Computers & Electronics, Media & Entertainment