Nikhil Vempati

Available to hire

Hi, I’m Nikhil Vempati. I specialize in designing, developing, and maintaining scalable ETL pipelines using Azure Data Factory and Azure Databricks. My work involves transforming and loading complex data from various sources into cloud environments such as Azure SQL Database, Azure Synapse Analytics, and Data Lake Storage. I enjoy optimizing data workflows, implementing data validation techniques, and using technologies like PySpark to ensure data is ready for analytics and reporting.
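As a flavor of the row-level data validation mentioned above, here is a minimal sketch in plain Python (no Spark dependency, so it runs anywhere); the column names (`order_id`, `amount`) and rules are illustrative assumptions, not from a real schema:

```python
# Hypothetical row-level validation of the kind used to gate records
# before loading; column names and rules are illustrative only.

def validate_row(row):
    """Return a list of rule violations for one record (empty list = clean)."""
    errors = []
    if not row.get("order_id"):
        errors.append("missing order_id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

def split_valid_invalid(rows):
    """Partition records so bad rows can be quarantined instead of loaded."""
    valid, invalid = [], []
    for row in rows:
        (valid if not validate_row(row) else invalid).append(row)
    return valid, invalid
```

In a real PySpark pipeline the same rules would typically be expressed as DataFrame filters or column expressions rather than per-row Python functions.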

I have experience working closely with cross-functional teams to deliver data engineering solutions that meet business needs. Staying current with the latest Azure technologies and best practices is important to me, and I take pride in creating comprehensive documentation and automating data processing tasks. I’m passionate about leveraging cloud and big data tools to build reliable, efficient pipelines that drive meaningful insights.

Language

English: Fluent
Hindi: Intermediate

Work Experience

Azure Data Engineer at Citadel
February 1, 2025 - June 28, 2025
- Gathered requirements and took part in system analysis, design, development, testing, and deployment.
- Designed and implemented migration strategies using Azure SQL Database, Azure Data Factory, Azure Key Vault, and Azure Blob Storage.
- Created end-to-end data pipelines loading data from on-premises sources to Azure SQL Server and Azure Data Warehouse with ADF and Databricks notebooks.
- Built scalable ETL pipelines, automated data extraction and transformation with Python, and implemented copy behaviors and error handling in Azure Data Factory.
- Managed data workflows using diverse ADF activities and configured Logic Apps for email notifications.
- Worked on Spark SQL transformations, job tuning, and optimized data processing.
- Used Agile and JIRA for project management, maintained GitHub repositories, and contributed to robust data pipelines within Palantir Foundry with data quality checks and regulatory reporting support.
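The error handling described in this role can be illustrated with a small retry-with-backoff sketch; this is a generic pattern in plain Python, not the actual ADF configuration (which would be set on the activity itself), and the function names are hypothetical:

```python
# Hypothetical retry-with-backoff wrapper of the kind a pipeline might use
# around a flaky extraction or copy step; names and defaults are illustrative.
import time

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    """Run `step`, retrying on failure with exponential backoff.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In Azure Data Factory itself, the equivalent is usually configured declaratively via the activity's retry count and retry interval settings rather than in code.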
Azure Data Developer at SMS Techsoft India Limited, Bangalore, India
January 1, 2021 - December 31, 2022
- Gathered requirements and designed, developed, tested, and deployed data pipelines loading data from on-premises systems to Azure SQL Server and Azure Data Warehouse using Azure Data Factory and Databricks notebooks.
- Created complex ETL jobs with data flows and Spark SQL transformations.
- Implemented Azure copy activities, monitored pipelines, configured alerts, and handled real-time data processing using Azure Stream Analytics and Event Hubs.
- Worked with ARM templates and CI/CD pipelines using Jenkins and Azure DevOps.
- Developed Python scripts and used AWS services such as Redshift, Athena, and Kinesis alongside Azure tools.
- Managed project repositories via GitHub and followed Agile methodologies, managing sprints with JIRA.
- Used big data tools including Hive, MapReduce, Oozie, Sqoop, Spark, and Pig to ingest, transform, and process large datasets; automated workflows and cluster resource sharing.
- Supported financial reporting and regulatory compliance projects using Palantir Foundry.
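One common pattern behind pipelines that load changing source data, as in the roles above, is watermark-based incremental extraction: each run pulls only records newer than the last stored high-water mark. The sketch below is a generic illustration in plain Python under that assumption, not the specific pipeline logic used in these projects:

```python
# Hypothetical watermark-based incremental extraction: only records with a
# timestamp past the stored watermark are pulled on each run. Field names
# ("updated_at") are illustrative.

def incremental_extract(records, last_watermark):
    """Return (new_records, next_watermark) for one incremental run."""
    new = [r for r in records if r["updated_at"] > last_watermark]
    next_wm = max((r["updated_at"] for r in new), default=last_watermark)
    return new, next_wm
```

In Azure Data Factory this pattern is typically built with a Lookup activity reading the stored watermark, a Copy activity filtering the source query on it, and a final step writing the new watermark back.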

Education

Bachelor of Technology at JNTU, India
January 1, 2017 - December 31, 2021
Master's in Computer Science at Rivier University, United States
January 1, 2022 - December 31, 2024

Industry Experience

Software & Internet, Financial Services, Professional Services, Computers & Electronics