Amiel Durant

Available to hire

I’m Amiel Durant, a machine learning engineer specializing in large language models (LLMs). I have 8+ years of experience customizing, refining, and deploying LLMs for text pattern analysis, summarization, and classification tasks. I’m proficient in Python, PyTorch, TensorFlow, and model fine-tuning frameworks such as Hugging Face, LoRA, and PEFT. I’ve prioritized Gemini models and built scalable inference workflows on Vertex AI and GCP.

I collaborate closely with product, research, platform, and QA teams, following Agile/Scrum practices and using Git, Jira, CI/CD with GitHub Actions and Docker. I focus on model evaluation, reducing hallucinations, dataset curation, and ethical alignment for sensitive domains, and I’ve helped deploy real-time editorial tools with latency under 300ms.



Language

French - Fluent
English - Fluent

Work Experience

Machine Learning Engineer (LLM / NLP) at Google France
July 1, 2023 - March 1, 2025
- Architected fine-tuning pipelines for Gemini models to detect sentiment, thematic patterns, and tone variation across multilingual corpora (~50M documents), improving pattern recognition accuracy by 24%.
- Refined training datasets using weak-supervision labeling and synthetic augmentation, reducing hallucination rates by 18% in production inference workloads.
- Customized PEFT-based LoRA adapters for domain-specific compliance text, allowing model updates without full retraining and cutting inference latency by 11%.
- Evaluated model behavior using ethical alignment scoring and custom hallucination-risk metrics.
- Deployed LLM checkpoints to Vertex AI with autoscaling batch and streaming inference for downstream analytics platforms.
- Coordinated with product leads, infrastructure teams, and QA analysts to integrate model outputs into real-time editorial and review tools with under 300 ms response latency.
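The LoRA adapters mentioned above rest on one idea: keep the pretrained weight W frozen and learn only a low-rank update B·A added on top. The sketch below is a minimal, self-contained illustration of that idea in NumPy, not the production pipeline (which would use a fine-tuning framework such as Hugging Face PEFT); all names here are illustrative.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update (B @ A) * scale."""

    def __init__(self, w, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w                                     # frozen, shape (out, in)
        out_dim, in_dim = w.shape
        self.a = rng.normal(0.0, 0.01, (rank, in_dim)) # trainable down-projection
        self.b = np.zeros((out_dim, rank))             # trainable up-projection, zero at init
        self.scale = alpha / rank

    def __call__(self, x):
        # y = x W^T + scale * (x A^T) B^T
        # Because B starts at zero, the adapted layer initially matches the base model.
        return x @ self.w.T + (x @ self.a.T) @ self.b.T * self.scale

# Only A and B (rank * (in + out) values) are updated during fine-tuning,
# which is why adapters can be swapped per domain without full retraining.
base_w = np.arange(6.0).reshape(2, 3)
layer = LoRALinear(base_w)
x = np.ones((1, 3))
adapted = layer(x)
```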
NLP Research & Model Adaptation Developer at Janssen Sports Leadership Center
March 1, 2021 - April 1, 2023
- Generated automated player-performance narrative summaries using transformer-based text models trained on coaching transcripts and behavioral observation notes.
- Adapted pre-trained sentence-embedding models to sports psychology terminology, improving semantic clustering coherence by 21%.
- Trained classification models for motivation-type recognition, reaching 92% F1 across multilingual interview datasets.
- Validated model outputs with sports psychologists and program directors to ensure domain accuracy and ethical tone.
- Consolidated distributed annotation workflows and metadata storage using PostgreSQL + S3, improving dataset consistency.
- Mapped deployment flows into internal applications, enabling non-technical staff to interpret outputs through dashboards.
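The semantic clustering step above typically works on cosine similarity between sentence embeddings. As a hedged sketch (the actual embedding model and grouping strategy are not specified in the profile), here is a minimal greedy grouping over precomputed embedding vectors: a document joins the first cluster whose seed is within a cosine threshold.

```python
import numpy as np

def cosine_matrix(emb):
    """Pairwise cosine similarity for row-vector embeddings."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    unit = emb / np.clip(norms, 1e-12, None)
    return unit @ unit.T

def threshold_clusters(emb, threshold=0.8):
    """Greedy single-pass grouping: each row joins the first cluster whose
    seed embedding has cosine similarity >= threshold, else starts a new one."""
    sims = cosine_matrix(emb)
    seeds, labels = [], []
    for i in range(len(emb)):
        for cluster_id, seed in enumerate(seeds):
            if sims[i, seed] >= threshold:
                labels.append(cluster_id)
                break
        else:
            seeds.append(i)
            labels.append(len(seeds) - 1)
    return labels

# Two near-parallel vectors cluster together; the orthogonal one stands alone.
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
labels = threshold_clusters(emb, threshold=0.9)
```

In practice the embeddings would come from a domain-adapted sentence-embedding model, and "coherence" would be measured with a metric such as silhouette score; this sketch only shows the grouping mechanics.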
NLP / Text Analytics Developer at FoxAudit
June 1, 2017 - January 1, 2021
- Constructed text analytics pipelines for regulatory compliance document review across financial, insurance, and legal sectors.
- Configured transformer-based models for policy clause anomaly detection, enhancing flagging precision by 17%.
- Maintained domain lexicons and rule-based filters integrated into high-volume document import workflows.
- Analyzed narrative risk patterns and report scoring outputs to support audit analysts and compliance officers.
- Converted PDF/scan-based documents into structured representations using OCR + NLP parsing, improving classification throughput.
- Extracted interpretability signals for stakeholder reporting to ensure transparency and reduce false-positive review load.
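The OCR + NLP parsing step above converts scanned policy text into structured clause records that downstream filters can flag. As an illustrative sketch only (the clause numbering convention and risk lexicon here are assumptions, not FoxAudit's actual rules), a minimal version splits numbered clauses and checks them against a domain lexicon:

```python
import re

# Assumed convention: clauses begin a line with a dotted number, e.g. "1.2 ...".
CLAUSE_RE = re.compile(r"(?m)^(?P<num>\d+(?:\.\d+)*)\s+(?P<body>.+)$")

# Hypothetical risk lexicon; a real pipeline would load a maintained domain list.
RISK_TERMS = ("indemnif", "terminat", "penalty", "liabilit")

def parse_clauses(ocr_text):
    """Split OCR'd policy text into numbered clauses and flag risk terms."""
    clauses = []
    for match in CLAUSE_RE.finditer(ocr_text):
        body = match.group("body").strip()
        clauses.append({
            "clause": match.group("num"),
            "text": body,
            "flags": [term for term in RISK_TERMS if term in body.lower()],
        })
    return clauses

sample = "1.1 The insurer may terminate coverage.\n1.2 Premiums are due monthly."
parsed = parse_clauses(sample)
```

Surfacing the matched lexicon terms alongside each clause is one simple interpretability signal of the kind mentioned above: reviewers can see why a clause was flagged.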

Education

Bachelor's Degree in Computer Science at University of Angers
Graduated June 1, 2017

Industry Experience

Software & Internet, Healthcare, Professional Services
GitHub: https://github.com/expert-main