Available to hire
I am Nikita Agarwal, an ML engineer and researcher focusing on generative AI, LLMs, and responsible AI governance. I build production-grade multilingual AI systems, synthetic data pipelines, and scalable inference infrastructure to meet real-world needs in healthcare and beyond.
My work spans mechanistic interpretability, safety, and governance, with ongoing collaborations on applied RL, multimodal NLP, and data efficiency. I’m passionate about turning cutting-edge research into reliable, ethical AI that scales.
Language
English: Fluent
Hindi: Fluent
Spanish (Castilian): Beginner
Work Experience
Independent Machine Learning Researcher at CogniX
August 31, 2024 - Present
Leading multilingual large language model capabilities for the CogniXpert AI mental health platform. Developed a synthetic data generation pipeline producing over 3 million tokens in African languages, built a production inference engine, and implemented comprehensive evaluation frameworks. Delivered an English model to production, with a multilingual version nearing deployment under Meta’s Llama Impact Accelerator program. Conducting FIG fellowship research on consciousness quantification in LLMs, mechanistic interpretability, and steering vectors. Previously designed ML models to streamline prior authorization at a healthcare AI startup, improving clinical information extraction through RAG, prompt engineering, and model fine-tuning.
Researcher at Supervised Program for Alignment Research
June 4, 2024 - September 9, 2025
1. Developed real-time machine learning pipelines simulating dynamic agent behaviors using time-series analysis and reinforcement learning to enhance algorithm robustness in noisy environments.
2. Created high-fidelity simulations of human attitudes and behaviors in response to manipulation threats. Facilitated experiments to test defenses against disinformation from human and AI threat actors.
Machine Learning Researcher at Mayo Clinic, Center for Individualized Medicine
August 31, 2024 - August 27, 2025
Adapted medical-specific CLIP and PLIP models to generate meaningful representations for tumor subtyping from multi-modal pathology data. Developed deep learning models to analyze gigapixel histopathology images and segment pathological features in ovarian cancer, leveraging Argonne National Lab’s computing cluster for large-scale fine-tuning. Collaborated with engineers to deploy optimized models on GCP for scalable, latency-sensitive inference. Conducted comprehensive reviews of large language models for survival analysis, developed open-source evaluation frameworks, and analyzed modeling strategies to enhance medical risk prediction from unstructured data. Surveyed causal representation learning models and proposed integrating them with reinforcement learning for dynamic causal evolution. Designed quantum-optimized algorithms for large-scale causal inference and studied quantum-assisted causal algorithmic recourse. Implemented disentanglement metrics for BERT-based embeddings trained on clinical data and mentored projects on bias detection and mitigation in breast cancer datasets.
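For illustration only, a minimal sketch of the zero-shot CLIP-style scoring this work builds on; the public openai/clip-vit-base-patch32 checkpoint, the subtype labels, and the blank placeholder image are assumptions, not the fine-tuned PLIP pipeline used in this role.

```python
# Hedged sketch: score a pathology patch against candidate subtype labels with CLIP.
# Checkpoint, labels, and the blank placeholder image are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["high-grade serous carcinoma", "clear cell carcinoma", "benign ovarian tissue"]
patch = Image.new("RGB", (224, 224))  # placeholder; a real input would be a WSI tile

inputs = processor(text=labels, images=patch, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))  # label -> similarity-based probability
```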
Junior Research Fellow at National Institute of Advanced Studies, IISc, India
September 29, 2018 - September 20, 2019
1. Led research on causality testing in cognitive neuroscience, focusing on novel measures of consciousness. Reviewed philosophical theories of consciousness to support ongoing research initiatives.
2. Developed the Network Causal Activity measure to quantify causal neural activity across brain regions in different states of consciousness using electrocorticographic data in monkeys.
Graduate Researcher at Cyber Valley Tübingen, Germany
July 1, 2021 - April 29, 2022
Led projects on learning the topology of constraint-based causal discovery methods. Investigated the statistical validity and satisfiability of invariant causal learning algorithms. Proposed new topologically informed definitions to refine causal discovery approaches.
Student Research Assistant - Empirical Inference Group at Max Planck Institute of Intelligent Systems
September 1, 2020 - February 28, 2021
Led research on counterfactual bounds for algorithmic recourse to enhance post-hoc explainability in decision-making. Surveyed explainable and interpretable ML models, including LIME and SHAP. Explored the philosophical foundations of algorithmic and causal recourse approaches (CFE, MINT) and formulated a causality-assisted explainability model that recommends actions under weak causal assumptions.
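For context, a small example of the standard SHAP attribution API covered in that survey, shown on a public scikit-learn dataset and model; this is illustrative tooling, not the counterfactual-bounds method itself.

```python
# Hedged sketch: post-hoc feature attributions with SHAP on a public tabular dataset.
# Dataset and model choice are illustrative assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer over the predicted probabilities, masked with background data.
explainer = shap.Explainer(clf.predict_proba, X.iloc[:100])
attributions = explainer(X.iloc[:5])
print(attributions.values.shape)  # (n_samples, n_features, n_classes)
```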
LLM Engineer at CogniX LTD
July 1, 2025 - Present
Spearheading multilingual LLM capabilities for the CogniXpert AI mental health platform, including a synthetic data generation pipeline (30M+ tokens of African-language conversations), a production inference engine, and a comprehensive evaluation framework. Delivered the English LLM to production and advanced the multilingual version toward deployment as part of Meta’s Llama Impact Accelerator, supporting real-world clinical applications at scale. Scaled platform infrastructure from 50 users to 10,000+ concurrent users within one month.
Research Associate at Future Impact Group
July 1, 2025 - September 29, 2025
Leveraging theories of consciousness to quantify their impact on integrated information scores in frontier LLMs. Designing mechanistic interpretability and ablation pipelines that connect architectural design choices to computational markers of consciousness, advancing explainability in large-scale AI systems.
Independent Machine Learning Researcher (Self-employed)
November 1, 2024 - Present
Vector steering in LLMs: exploring the controllability of large language models by steering along single and multiple vector directions, analyzing robustness and performance trade-offs.
AI governance: researching frameworks and best practices in AI governance, safety, and policy, with a focus on integrating responsible AI principles into deployment pipelines.
Mechanistic interpretability: reproducing selected experiments from recent mechanistic interpretability papers and experimenting with evaluation frameworks to understand circuit-level behaviors in LLMs.
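As a rough illustration of the vector-steering setup described above, here is a minimal sketch using GPT-2 as a stand-in model; the layer index, contrast prompts, and scaling factor are arbitrary assumptions, not the actual experimental configuration.

```python
# Hedged sketch: add a contrastive steering vector to one block's residual stream during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM with accessible blocks works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_residual(prompt: str, layer: int) -> torch.Tensor:
    """Mean residual-stream activation at a given transformer block."""
    captured = {}
    def hook(_, __, output):
        captured["h"] = output[0].detach()  # (batch, seq, d_model)
    handle = model.transformer.h[layer].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return captured["h"].mean(dim=1).squeeze(0)

layer = 6  # arbitrary choice
# Steering vector = difference of mean activations for two contrasting prompts.
v = mean_residual("I feel calm and supported.", layer) - mean_residual("I feel anxious and alone.", layer)

def steer(_, __, output, alpha=4.0):
    # Shift the block's hidden states along the steering direction.
    return (output[0] + alpha * v,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(steer)
out = model.generate(**tok("Today I", return_tensors="pt"), max_new_tokens=20)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```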
ML Engineer at Stealth Startup
October 1, 2024 - September 29, 2025
Designed and implemented ML models to automate and streamline the prior authorization process. Enhanced clinical information extraction from patient medical records by applying RAG pipelines, prompt engineering, and fine-tuning LLaMA family models, improving accuracy and efficiency in healthcare workflows.
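A minimal sketch of the retrieve-then-prompt pattern referenced above, with a toy TF-IDF retriever and invented record snippets; the production system's chunking, retriever, and fine-tuned LLaMA models are not reproduced here.

```python
# Hedged sketch: retrieve relevant record chunks, then assemble an extraction prompt.
# Record snippets, the query, and the TF-IDF retriever are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

record_chunks = [
    "Patient has a history of type 2 diabetes, on metformin 500 mg twice daily.",
    "MRI of the lumbar spine ordered for chronic lower back pain, 6 months duration.",
    "Physical therapy completed for 8 weeks with no improvement in symptoms.",
]
query = "Has conservative treatment been tried before the requested imaging?"

vec = TfidfVectorizer().fit(record_chunks + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(record_chunks))[0]
top = [record_chunks[i] for i in sims.argsort()[::-1][:2]]  # two most similar chunks

prompt = (
    "Using only the context below, answer the prior-authorization question.\n"
    "Context:\n- " + "\n- ".join(top) + f"\nQuestion: {query}\nAnswer:"
)
print(prompt)  # in the real pipeline, this prompt would be sent to a fine-tuned LLaMA model
```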
Education
Master of Science (by research) in Neural Information Processing at International Max Planck Research School, Universität Tübingen (UoTÜ), Germany
September 1, 2019 - December 31, 2021
Bachelor of Technology in Electrical Engineering at Maulana Azad National Institute of Technology (MANIT), India
July 1, 2014 - June 30, 2018
Qualifications
Fellow at Future Impact Group (FIG)
Alan Turing Data Study Group
AI Safety, Ethics and Society Course
SPAR Mentee under Kellin Pelrine
AI Safety Fundamentals: Alignment Course
Industry Experience
Healthcare, Life Sciences, Software & Internet, Education, Professional Services, Media & Entertainment