Available to hire

Hi, I’m Ram, a data engineer with 6+ years of experience building scalable data pipelines, data warehouses, and analytics solutions across banking, consulting, and SaaS. I enjoy partnering with business stakeholders to deliver secure, reliable, and high-performing data platforms that power real-time analytics and BI.

I am proficient in AWS, Azure, Snowflake, Redshift, Spark, PySpark, Airflow, and Databricks, with hands-on experience in data governance. I thrive in Agile/DataOps environments and collaborate with data scientists and analysts to deliver data products that drive business outcomes.

Experience Level

Expert

Language

English
Fluent

Work Experience

Senior Data Engineer at Bank of Montreal (BMO)
March 1, 2024 - October 31, 2025
Built a next-generation AWS-based Enterprise Data Lake integrating structured, semi-structured, and streaming data sources for regulatory compliance, customer insights, and fraud detection across BMO's retail and commercial banking divisions. Designed and developed scalable data pipelines using AWS Glue, PySpark, and Lambda to ingest data from core banking systems, and built a data lake on Amazon S3 integrated with Redshift for advanced analytics and BI reporting. Implemented real-time streaming with Kafka and Kinesis for fraud detection and transaction monitoring, and tuned Redshift query performance with distribution keys and sort keys. Worked with compliance teams to meet data governance and regulatory requirements (CCAR, Basel III), applied CI/CD with Jenkins and Terraform for provisioning and deployment, and collaborated with data scientists to provide clean datasets for predictive models.
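For context, below is a minimal sketch of the kind of PySpark ingestion step this role describes: reading a core-banking extract, standardizing it, and writing it to a partitioned S3 data lake for downstream Redshift loads. The bucket paths, column names, and app name are hypothetical placeholders, not the actual production code.

```python
# Hedged sketch of a PySpark batch ingestion step; all names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("core-banking-ingest").getOrCreate()

# Read a raw transactions extract from the landing zone.
txns = spark.read.parquet("s3://landing-zone/core-banking/transactions/")

# Standardize types, deduplicate on the business key, and stamp the load date.
clean = (
    txns.withColumn("txn_amount", F.col("txn_amount").cast("decimal(18,2)"))
        .dropDuplicates(["txn_id"])
        .withColumn("load_date", F.current_date())
)

# Write to the curated S3 data lake, partitioned by load date so downstream
# Redshift COPY / Spectrum queries can prune partitions.
(clean.write.mode("append")
      .partitionBy("load_date")
      .parquet("s3://curated-lake/banking/transactions/"))
```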
Data Engineer at Tiger Analytics
February 1, 2024 - February 1, 2024
Migrated and enhanced a Customer 360 platform for a leading North American retailer, consolidating customer interactions, sales, and marketing data into a unified platform for personalization and segmentation. Developed ETL pipelines in Azure Data Factory to move data from on-prem SQL Server and Oracle into Azure Data Lake Storage (ADLS). Built transformations in Azure Databricks (PySpark) for cleansing, standardization, and enrichment. Designed star schema models in Azure Synapse to support BI dashboards, and created reusable ingestion frameworks that reduced processing time by 30%. Implemented data quality checks and Delta Lake monitoring, and ensured GDPR/data privacy compliance during the cloud migration.
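As an illustration, here is a hedged sketch of the kind of Databricks (PySpark) cleansing-and-enrichment step described above, writing to a curated Delta Lake layer. The mount paths, table names, and columns are invented for the example.

```python
# Hedged sketch of a Customer 360 standardization step; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer360-standardize").getOrCreate()

raw = spark.read.format("delta").load("/mnt/adls/raw/crm_customers")

# Cleanse and standardize key fields, then deduplicate on the customer key.
customers = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .withColumn("country", F.upper(F.col("country")))
       .dropDuplicates(["customer_id"])
)

# Enrich with a lifetime-value aggregate from the sales feed.
sales = spark.read.format("delta").load("/mnt/adls/raw/sales_orders")
ltv = sales.groupBy("customer_id").agg(F.sum("order_total").alias("lifetime_value"))

(customers.join(ltv, "customer_id", "left")
    .write.format("delta")
    .mode("overwrite")
    .save("/mnt/adls/curated/customer_360"))
```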
Associate Data Engineer at Clio
December 1, 2021 - December 1, 2021
Developed a data analytics platform for Clio’s legal practice management SaaS, enabling insights into case management, billing, and client interactions. Designed and implemented ETL pipelines to load application and usage data into a centralized data warehouse, automating extraction and transformation from transactional databases with Python and SQL. Deployed Airflow-based scheduling and monitoring, and supported the migration to a Snowflake-based warehouse. Partnered with product managers to deliver usage analytics dashboards in Power BI, optimized SQL queries and indexes, and contributed to data governance practices to ensure KPI accuracy and consistency.
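To make the scheduling piece concrete, below is a minimal Airflow DAG sketch in the shape described here: a daily extract from the transactional database followed by a load into the warehouse. The DAG id, task callables, and schedule are hypothetical, not Clio's production code.

```python
# Hedged sketch of an Airflow ETL DAG; ids, callables, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_usage_data(**context):
    # Pull application/usage rows from the transactional database (Python + SQL).
    ...


def load_to_warehouse(**context):
    # Load the transformed extract into the Snowflake warehouse.
    ...


with DAG(
    dag_id="usage_analytics_etl",
    start_date=datetime(2021, 12, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_usage_data)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load
```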

Industry Experience

Financial Services, Professional Services, Software & Internet, Retail, Other