Bhavitha

Available to hire

I’m Bhavitha, a data engineer with 5+ years of experience building scalable, cloud-native data pipelines across banking, healthcare, and telecom. I specialize in Snowflake, SQL, and Python, with hands-on experience in large-scale migrations and dimensional modeling (star/snowflake).

I focus on data quality, governance, and compliance, including automated validations using Great Expectations. I also work with modern ELT patterns (dbt-style transformations) and orchestration with Airflow, and I contribute to CI/CD and IaC to improve reliability and release efficiency.
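
The automated validations mentioned above can be illustrated with a small hand-rolled check in the same spirit as Great Expectations' column expectations. This is a plain-Python sketch, not the Great Expectations API itself, and the column names and thresholds are made up for the example.

```python
# Minimal data-quality checks in the spirit of Great Expectations'
# column expectations. Column names and bounds are illustrative.

def expect_column_values_not_null(rows, column):
    """Return (passed, failing_row_indexes) for a not-null expectation."""
    failures = [i for i, row in enumerate(rows) if row.get(column) is None]
    return len(failures) == 0, failures

def expect_column_values_between(rows, column, low, high):
    """Return (passed, failing_row_indexes) for a range expectation."""
    failures = [
        i for i, row in enumerate(rows)
        if row.get(column) is None or not (low <= row[column] <= high)
    ]
    return len(failures) == 0, failures

# Example: validate a small batch before loading it downstream.
batch = [
    {"account_id": "A1", "balance": 120.0},
    {"account_id": None, "balance": 50.0},
    {"account_id": "A3", "balance": -10.0},
]

ok_ids, bad_ids = expect_column_values_not_null(batch, "account_id")
ok_bal, bad_bal = expect_column_values_between(batch, "balance", 0, 1_000_000)
```

In a real pipeline these checks would run as a validation step before publish, with failing row indexes routed to a quarantine table rather than blocking the whole load.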

Experience Level

Expert

Language

English
Fluent

Work Experience

Data Engineer at BMO Harris Bank
March 1, 2024 - Present
Built and optimized ETL pipelines in PySpark and AWS Glue 4.0, improving reporting reliability by 30%, enhancing data quality, and cutting manual fixes by half.

Data Engineer at Trinity Health
November 1, 2020 - December 1, 2022
Designed and maintained data pipelines with ADF v2 and PySpark, integrating data from 8 EMR systems with high reliability. Reduced Synapse ETL runtime by 60% with incremental loading, achieving nightly load completion and eliminating duplicate records.
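
The incremental-loading pattern behind that runtime reduction can be sketched as a watermark-plus-upsert load: only rows newer than the last watermark are read, and the upsert keeps re-runs from creating duplicates. Here sqlite3 stands in for Synapse, and all table and column names are hypothetical.

```python
# Watermark-based incremental load with duplicate elimination.
# sqlite3 is a stand-in for the warehouse; schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER, updated_at TEXT, value TEXT)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, updated_at TEXT, value TEXT)")

# Source holds one old row and two versions of the same record.
conn.executemany(
    "INSERT INTO source VALUES (?, ?, ?)",
    [(1, "2022-01-01", "a"), (2, "2022-01-02", "b"), (2, "2022-01-03", "b2")],
)

def incremental_load(conn, watermark):
    """Copy only rows newer than the watermark, keeping the latest
    version per id and upserting so re-runs stay duplicate-free."""
    conn.execute(
        """
        INSERT INTO target (id, updated_at, value)
        SELECT id, MAX(updated_at), value FROM source
        WHERE updated_at > ?
        GROUP BY id
        ON CONFLICT(id) DO UPDATE SET
            updated_at = excluded.updated_at,
            value = excluded.value
        """,
        (watermark,),
    )
    # The new watermark is the max timestamp now present in target.
    return conn.execute("SELECT MAX(updated_at) FROM target").fetchone()[0]

wm = incremental_load(conn, "2022-01-01")
```

Because the load is keyed on the watermark and the target's primary key, a failed night can simply be re-run from the last committed watermark.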
Big Data Engineer at AT&T
February 1, 2019 - November 1, 2020
Processed ~12TB of daily telecom data with Spark and Hive, optimized queries, built Spark Streaming + Kafka pipelines for outage detection, and migrated legacy Informatica jobs to Spark to reduce processing time and costs.
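
The outage-detection logic above can be sketched without a Spark cluster: count failure events per cell site within a time window and flag sites that cross a threshold, which is the same shape the Spark Streaming + Kafka job would compute per micro-batch. Site names, the window, and the threshold here are made up for the example.

```python
# Illustrative (non-Spark) version of windowed outage detection:
# count FAIL events per site in a window and flag threshold breaches.
from collections import Counter

def detect_outages(events, window_start, window_end, threshold):
    """events: iterable of (timestamp, site_id, status) tuples.
    Returns the sites whose FAIL count in [window_start, window_end)
    meets or exceeds the threshold."""
    counts = Counter(
        site for ts, site, status in events
        if status == "FAIL" and window_start <= ts < window_end
    )
    return {site for site, n in counts.items() if n >= threshold}

events = [
    (100, "cell-7", "FAIL"),
    (101, "cell-7", "FAIL"),
    (102, "cell-7", "FAIL"),
    (103, "cell-9", "OK"),
    (250, "cell-7", "FAIL"),  # outside the window below
]

flagged = detect_outages(events, window_start=100, window_end=200, threshold=3)
```
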

Education

Master of Science (MS), Computer Technology at Eastern Illinois University
January 1, 2023 - December 1, 2024

Qualifications

Microsoft Certified: Azure Data Engineer Associate (DP-203)
Data Engineering with Python and PySpark
Data Engineering Essentials Hands-on – SQL, Python, Spark, Airflow
Data Engineering with AWS

Industry Experience

Financial Services, Healthcare, Telecommunications, Software & Internet, Professional Services