Mounika Ullam

Available to hire

I am an experienced Data Engineer with over 4 years of expertise in designing, building, and optimizing data pipelines and architectures across cloud platforms such as AWS, Azure, and GCP. I am passionate about leveraging big data frameworks and cloud technologies to drive data-driven decision-making and deliver scalable, efficient data solutions. My background includes extensive experience with ETL processes, real-time data integration, and cloud infrastructure automation. I thrive in dynamic environments, actively participating in software development life cycles and collaborating within Agile teams to implement innovative data engineering solutions.

With a solid foundation in computer science and a master’s degree, I have developed strong skills in cloud services, big data tools, and CI/CD pipelines. I enjoy working with a variety of technologies like Apache Spark, Kafka, Snowflake, and Kubernetes, aiming to build resilient and scalable data infrastructures. My goal is to continue growing as a cloud-based data engineer and contribute to projects that make impactful data transformations and insights accessible to business stakeholders.


Work Experience

AWS Data Engineer at Evernorth Health Services, St. Louis, Missouri, USA
August 1, 2024 - Present
Designed, built, and managed ETL processes and data warehousing solutions to consolidate healthcare data from diverse sources. Automated AWS infrastructure with Terraform, reducing management time by 90% and improving system uptime. Led a one-time migration of multistate-level data from SQL Server to Snowflake using Python and SnowSQL, and used AWS Lambda for data validation and transformation. Developed Spark Streaming applications consuming real-time Kafka data, tuned Hadoop job processing, and managed NoSQL data in DynamoDB. Integrated Tableau with multiple data sources for reporting, built CI/CD pipelines with Jenkins, and used Kubernetes for container orchestration alongside AWS services such as Glue, EMR, and S3. Implemented monitoring dashboards with Kibana, Grafana, and Elasticsearch for near real-time log analysis, participated throughout the SDLC, and delivered data quality and master data management solutions.
GCP Data Engineer at Accenture (Nokia), Mumbai, India
July 31, 2023 - August 26, 2025
Moved large datasets between Teradata and HDFS using Sqoop incremental imports, and monitored Spark clusters with Log Analytics and Ambari. Improved query performance by transitioning log storage to Azure SQL Data Warehouse, and implemented validation scripts to ensure the quality of data feeding Google Data Studio. Developed data ingestion and aggregation pipelines with Spark Scala APIs and PySpark on Databricks, using Azure Data Factory together with GCP services such as Compute Engine, Cloud Storage, BigQuery, Dataproc, and Dataflow. Implemented dynamic cluster provisioning on Dataproc with autoscaling policies to optimize costs, wrote Python DAGs in Airflow to orchestrate end-to-end data workflows, designed fault-tolerant streaming pipelines on Google Dataflow, and used the Cloud Shell SDK for configuration and troubleshooting in GCP.
Data Engineer at HID Global, Mumbai, India
November 30, 2022 - August 26, 2025
Developed ETL jobs with the PySpark DataFrame and Spark SQL APIs, and built security frameworks with AWS Lambda and DynamoDB to enforce fine-grained access control. Improved existing Spark and Hadoop algorithms to optimize performance, processed real-time data with Kafka and Spark Streaming, and managed batch jobs on Databricks. Created fully automated CI/CD pipelines with Git, Jenkins, and custom Python/Bash tooling, automated complex workflows with Apache Airflow, and strengthened Kubernetes security through namespaces and RBAC policies. Migrated legacy metrics to Snowflake on Google Cloud, designed DAGs integrating AWS services and external APIs, and optimized batch and streaming workloads with Google Dataflow and Pub/Sub. Standardized and documented Elasticsearch usage, speeding up team onboarding and significantly reducing query-related issues.

Education

Master's in Computer Science at University of Missouri, Kansas, USA
January 11, 2030 - August 26, 2025


Industry Experience

Healthcare, Telecommunications, Software & Internet, Professional Services, Financial Services