
Srinivasulu Munagala (Srini)

I am an accomplished software engineer with over 21 years of experience working across government, telecoms, travel, financial, and retail domains. I specialize in designing scalable data pipelines and AI applications using cutting-edge technologies such as Palantir Foundry, Databricks, Apache Spark, AWS, and various AI/ML frameworks. My expertise includes building multi-agent AI applications and deploying efficient machine learning pipelines with MLOps best practices. I have a proven track record of leading platform evaluations, migrating legacy systems to cloud-native platforms, and improving operational efficiency through automation. I enjoy collaborating with cross-functional teams to deliver impactful data solutions that drive better decision-making and enhance customer experience.

Available to hire



Language

Javanese: Advanced
English: Fluent

Work Experience

AI/Data Engineer at Ministry of Justice
November 1, 2024 - Present
Developed an AI chat assistant for sensitive data queries and implemented a ReAct + RAG chatbot that improved resolution rates by 20%. Designed and built data streaming pipelines for Justice Finance data using AWS Kinesis, S3, and EMR, and migrated Cloudera jobs and Oracle ETL workloads to AWS EMR/Glue. Created a generic onboarding framework for partners on the Tax platform. Implemented finance data pipelines in Palantir Foundry, integrating cost-centre and budget datasets into the Ontology for unified analytics. Built Databricks models with MLflow for finance forecasting and reporting, using scalable transformations and forward-fill logic, and collaborated with finance teams to deliver interactive dashboards and Foundry Workbooks that improved decision-making on cost allocation and forecasting.
Big Data Engineer at HMRC
November 1, 2024 - September 5, 2025
Built streaming pipelines for PODS/CTC Traders data ingestion and migrated Cloudera jobs into AWS EMR/Glue orchestrated with Airflow. Developed generic frameworks for onboarding new partners and supported production pipelines ensuring service level adherence.
Data Engineer at Bank of America
August 1, 2021 - September 5, 2025
Ingested and transformed client and third-party data into AWS environments. Designed Spark/Scala transformation frameworks to support Data Lake ingestion. Developed AWS data pipelines to manage Data Lake and Data Mart workloads and migrated on-premises data warehouse systems to the AWS Cloud.
Big Data Engineer at Barclays
July 31, 2019 - September 5, 2025
Developed loyalty analytics applications for credit card and hotels.com services. Migrated Teradata workloads to AWS leveraging EMR, Hive, S3, and Airflow. Built generic ingestion frameworks for JDBC to AWS S3/Hive and created Spark Streaming jobs processing clickstream data. Standardized Airflow Helm templates in Kubernetes and automated AWS infrastructure with Terraform.
Data Engineer at Bank of America
March 31, 2024 - September 5, 2025
Returned on a short-term contract to support data engineering initiatives, including real-time reference data pipelines and transformation workloads.
Data Engineer at Bank of America
January 1, 2017 - September 5, 2025
Developed real-time reference data pipelines with Kafka Streams, HBase, and Impala. Created Dockerized microservices deployed on Kubernetes.
Data Engineer at Barclays
January 1, 2016 - September 5, 2025
Built clickstream and product analytics pipelines on Google Cloud Platform using BigQuery and DataProc.
Big Data Engineer at NYSE
January 1, 2015 - September 5, 2025
Developed Hadoop and Spark pipelines, HBase and Sqoop ingestion, and implemented partitioning and compression strategies.
Software Engineer at Thomson Reuters / HP / Nokia
January 1, 2010 - September 5, 2025
Built C++/Java enterprise applications, SQL-based analytics, embedded systems, and Cisco security applications.

Education

Bachelor of Engineering (Computer Science) at J.N.T. University, India

Qualifications

SC + NPPV3 Clearance

Industry Experience

Government, Financial Services, Telecommunications, Retail, Travel & Hospitality