Ashok Kumar Gurram

I am a results-driven data engineer specializing in Databricks and lakehouse architectures. I build scalable data pipelines, optimize Spark workloads, and implement Medallion Architecture for clean data layers. I collaborate with cross-functional teams to deliver reliable data products and ensure data quality and governance. I strive to continuously tune performance from ingestion to analytics-ready layers, support cloud migrations, and create robust, scalable solutions that empower data-driven decisions.

Available to hire

Language

English (Fluent)

Work Experience

Databricks Data Engineer at Uber
March 1, 2025 - Present
- Designed and built scalable end-to-end ETL pipelines using PySpark in Databricks, processing 7M+ daily transactional records.
- Implemented Medallion Architecture (Bronze–Silver–Gold); built ingestion pipelines loading multi-source data into Bronze using Auto Loader.
- Developed Silver layer transformations for cleansing, normalization, deduplication, and schema validation.
- Created Gold analytical datasets powering BI and operational analytics; optimized materialized views for reporting performance.
- Implemented incremental pipelines using Delta MERGE and CDC patterns; designed SCD Type 2 frameworks preserving historical data.
- Improved runtime by 40% through partition tuning, query optimization, and resolving Spark bottlenecks such as data skew; optimized file sizes with OPTIMIZE and Z-ORDER.
- Governed enterprise data with Unity Catalog and automated orchestration with Databricks Workflows.
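The SCD Type 2 frameworks mentioned above can be sketched in plain Python. This is a minimal, Spark-free illustration of the merge logic only; in Databricks it would be expressed as a Delta MERGE, and the table schema and field names here are hypothetical:

```python
from datetime import date

# Minimal SCD Type 2 merge: expire changed rows, insert new versions.
# Each dimension row: {"key", "attr", "start", "end", "current"} (hypothetical schema).
def scd2_merge(dim, updates, today):
    by_key = {r["key"]: r for r in dim if r["current"]}
    out = list(dim)
    for u in updates:
        cur = by_key.get(u["key"])
        if cur is not None and cur["attr"] == u["attr"]:
            continue  # no change: keep the existing current row
        if cur is not None:
            cur["end"] = today      # close out the old version
            cur["current"] = False
        # insert the new current version, preserving history
        out.append({"key": u["key"], "attr": u["attr"],
                    "start": today, "end": None, "current": True})
    return out

dim = [{"key": 1, "attr": "NY", "start": date(2024, 1, 1), "end": None, "current": True}]
dim = scd2_merge(dim, [{"key": 1, "attr": "SF"}], date(2025, 3, 1))
# key 1 now has two rows: the expired NY version and the current SF version
```

The same compare-expire-insert flow is what a Delta `MERGE INTO ... WHEN MATCHED ... WHEN NOT MATCHED` statement performs at scale.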
Data Engineer at Wipro LTD
October 1, 2021 - December 1, 2022
- Developed large-scale ETL pipelines processing 500K+ daily enterprise records.
- Transitioned legacy full-load pipelines to incremental processing.
- Built transformation logic using joins, aggregations, window functions, and validations.
- Designed dimensional data models supporting enterprise reporting.
- Optimized SQL queries, reducing reporting latency.
- Implemented automated data quality checks and supported cloud migration initiatives.
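The move from full-load to incremental processing described above typically follows a watermark pattern: process only records newer than the last loaded timestamp, then advance the watermark. A minimal pure-Python sketch (field names are hypothetical):

```python
# Watermark-based incremental load: filter to records newer than the last
# successfully processed timestamp, then advance the watermark for the next run.
def incremental_load(records, watermark):
    new = [r for r in records if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new), default=watermark)
    return new, new_watermark

records = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 25}]
batch, wm = incremental_load(records, watermark=10)
# batch contains only id 2; the watermark advances to 25
```

In a warehouse setting the watermark is persisted between runs (e.g. in a control table), so each execution touches only the delta rather than re-scanning the full history.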
Data Engineering Intern at Adani
October 1, 2020 - October 1, 2021
- Built ingestion and transformation pipelines for behavioral analytics datasets.
- Designed aggregation workflows enabling analytics dashboards.
- Optimized schema design to improve query performance.
- Automated processing workflows, reducing manual monitoring effort.

Education

Master of Science — Information Technology at St. Francis College, USA
Bachelor of Technology at Centurion University of Technology and Management

Industry Experience

Software & Internet, Professional Services