Suvarna Chandrika Reddy

Available to hire

I’m Suvarna Chandrika Reddy, a data engineering professional with 8+ years of experience building scalable cloud data platforms and high-performance ETL pipelines across AWS, GCP, and Azure. I have hands-on expertise with Snowflake, BigQuery, Synapse, Databricks, and modern data lake architectures, and I love turning complex data into actionable analytics using Python, PySpark, and a range of web technologies.

I’ve led engineering teams, improved SDLC processes, and implemented DevOps practices to deliver robust data solutions. My work includes migrating on-prem ETL tools to the cloud, designing end-to-end data and ML pipelines, and enabling enterprise reporting and data governance across hybrid environments.

Work Experience

Sr Data Engineer at TD Bank
January 1, 2025 - Present
Managed end-to-end ETL pipelines on GCP, coordinating data movement from multiple sources into Cloud Storage and loading it into BigQuery for enterprise analytics. Built scalable batch and streaming ingestion workflows using Python, Dataflow, Pub/Sub, and Compute Engine to streamline data imports into BigQuery warehouses; migrated legacy cron jobs to Airflow for improved scheduling and monitoring. Designed staging layers in Cloud Storage, performed data validation and transformation with Dataflow and Dataproc, and implemented IAM-based access control and query optimization to improve performance and cost efficiency. Led real-time data processing using Pub/Sub and Spark on GKE; developed AI/ML data pipelines with model monitoring and tracking in Vertex AI.
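To give a flavor of the staging-layer validation described above, here is a minimal, self-contained sketch in plain Python. The schema, field names, and function names are hypothetical illustrations, not code from any client project; in production this logic would run inside a Dataflow or Dataproc job before loading to BigQuery.

```python
# Hypothetical record schema: field name -> expected Python type.
SCHEMA = {"order_id": int, "amount": float, "region": str}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors

def stage_records(records: list[dict]):
    """Split a batch into load-ready rows and rejected rows with reasons."""
    ready, rejects = [], []
    for record in records:
        errors = validate_record(record)
        if errors:
            rejects.append((record, errors))  # quarantined for review
        else:
            ready.append(record)             # safe to load to the warehouse
    return ready, rejects
```

Rejected rows are kept alongside their error reasons rather than dropped, so a downstream data-quality report can explain every exclusion.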
Sr Data Engineer at Lowe's
November 1, 2023 - December 1, 2024
Migrated on-prem Hadoop to GCP; built scalable batch and streaming data pipelines using Python, PySpark, Hive SQL, and Presto; implemented configurable data delivery pipelines for customer-facing stores; built dashboards in Tableau; optimized data ingestion from relational and non-relational sources; built stored procedures in MS SQL for enterprise reporting.
Sr Data Engineer at Deutsche Bank
January 1, 2022 - October 1, 2023
Built end-to-end ETL and big data pipelines using AWS Glue and EMR; implemented real-time streaming with Kinesis and Kafka; preprocessed data for ML with Spark on EMR; migrated production infrastructure to AWS using serverless patterns (Lambda, Kinesis); set up CI/CD with CodePipeline; delivered data warehousing with Redshift; integrated with SageMaker for ML models; built data quality frameworks and validation.
Data Engineer at Verizon
February 1, 2020 - December 1, 2021
Performed distributed data processing using Hadoop/Spark on AWS EMR; built batch and real-time pipelines; ingested data from Teradata, Oracle, and SQL via Attunity; worked with Hive and HBase; authored Airflow DAGs to orchestrate Spark workflows; deployed ML models using Spark MLlib and TensorFlow; delivered dashboards via Tableau/Kibana; supported GDPR compliance and data governance.
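The Airflow DAGs mentioned above follow a simple principle: each task runs only after its upstream dependencies complete. A minimal sketch of that ordering, using only the standard library's `graphlib` (task names are hypothetical, and the `actions` stand in for what would be Airflow operators launching Spark jobs):

```python
from graphlib import TopologicalSorter

def run_dag(tasks: dict, actions: dict) -> list:
    """Execute tasks in dependency order; return the order actually used."""
    order = list(TopologicalSorter(tasks).static_order())
    for name in order:
        actions[name]()  # in Airflow, this would be an operator execution
    return order

log = []
tasks = {                 # task -> list of upstream dependencies
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
}
actions = {name: (lambda n=name: log.append(n)) for name in tasks}
```

`TopologicalSorter` raises on cyclic dependencies, which mirrors how an orchestrator rejects an invalid DAG at parse time.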
Data Engineer at I-Flex Solutions
August 1, 2017 - November 1, 2019
Designed a scalable chatbot architecture; implemented NLP with NLTK, spaCy, and Rasa; integrated with HR IT systems via REST APIs; deployed Flask- and Django-based services; set up CI/CD with Azure DevOps; trained models and built feedback loops for continuous learning.
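At its core, intent resolution in a chatbot maps a user utterance to the closest known intent. As a simplified stand-in for the NLTK/spaCy/Rasa pipeline above, here is a keyword-overlap sketch; the intents and phrases are made up for illustration.

```python
# Hypothetical HR-bot intents: intent name -> keyword set.
INTENTS = {
    "leave_balance": {"leave", "vacation", "balance", "days"},
    "payslip": {"payslip", "salary", "pay", "slip"},
}

def classify(utterance: str) -> str:
    """Pick the intent whose keywords overlap the utterance most; else fall back."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

A real pipeline replaces the token-overlap score with embeddings or a trained classifier, but the routing structure, including the explicit fallback intent, stays the same.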


Industry Experience

Software & Internet, Financial Services, Professional Services