Available to hire
I’m a Croatia-based Data Engineer with extensive experience designing and delivering end-to-end data solutions. I specialize in building robust ETL/ELT pipelines, data modeling, and scalable data platforms for both startups and established enterprises.
Currently, I work as a data engineer at an outsourcing company, focusing on architecting data pipelines, optimizing performance, and enabling data-driven decision making across complex environments.
Language
Croatian
Advanced
English
Advanced
Work Experience
Data Engineer at EPAM
March 1, 2023 - Present
Developed ETL/ELT pipelines using Databricks (PySpark, SQL) to clean, transform, and enrich data before loading into downstream systems. Implemented Python-based transformation scripts in Databricks notebooks for data validation, cleansing, and aggregation. Orchestrated end-to-end data workflows using Azure Data Factory to automate ingestion, transformation, and scheduling. Optimized Databricks Spark jobs by tuning cluster configurations, caching strategies, and parallel processing for improved performance. Implemented Delta Lake architecture on Databricks to ensure ACID compliance, schema evolution, and efficient time-travel queries.
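The upsert-style loading that Delta Lake's MERGE enables can be illustrated with a minimal pure-Python sketch (the production pipelines use PySpark on Databricks; the record shapes and key name here are hypothetical):

```python
def upsert(target, updates, key="id"):
    """Merge updates into target by key, mimicking a Delta Lake MERGE:
    rows with matching keys are overwritten, new keys are inserted."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update if the key exists, insert otherwise
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
updates = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
print(upsert(target, updates))
```

In the real Delta Lake version this logic runs as a single atomic `MERGE INTO`, which is what provides the ACID guarantees mentioned above.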
Data Engineer at Bank of Ireland (via EPAM)
April 1, 2025 - Present
Developed ETL/ELT pipelines for banking data using Databricks (PySpark, SQL) and integrated with downstream data stores. Built and optimized data workflows in Azure Data Factory, enabling reliable ingestion, transformation, and scheduling for financial data.
Data Engineer at Thomson Reuters
August 1, 2024 - February 28, 2025
Ingested data from Azure Blob Storage in multiple formats (Parquet, CSV). Built ETL/ELT pipelines with Databricks (PySpark, SQL) to clean, standardize, and preprocess data. Used dbt to model, transform, and document data in Snowflake, and automated dbt workflows for incremental updates. Designed Azure Data Factory pipelines for ingestion orchestration. Loaded processed data via Snowflake COPY commands and native connectors; optimized Databricks Spark jobs and implemented RBAC/data masking in Snowflake to enhance security. Improved ETL efficiency and reduced processing times with on-demand Snowflake scaling.
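An incremental dbt model of the kind used for the automated Snowflake transformations might look like the following sketch (the model and column names are hypothetical, not taken from the actual project):

```sql
-- models/marts/fct_orders.sql (hypothetical model name)
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    order_ts,
    amount
from {{ ref('stg_orders') }}

{% if is_incremental() %}
  -- on incremental runs, only process rows newer than the target's high-water mark
  where order_ts > (select max(order_ts) from {{ this }})
{% endif %}
```

With `materialized='incremental'`, dbt merges only the new rows into the existing Snowflake table on each run instead of rebuilding it, which is what keeps the automated update workflows cheap.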
Data Engineer at Bayer Pharmaceuticals
January 1, 2024 - July 31, 2024
Ingested raw data from Woody_IO and Azure Blob Storage (JSON, Parquet, CSV) with efficient partitioning. Built ETL/ELT pipelines using Databricks (PySpark, SQL) to clean, transform, and enrich data before loading into Snowflake data marts. Implemented Azure Data Factory pipelines for ingestion orchestration. Used Snowflake COPY and native connectors to load data; optimized Spark jobs with tuning and parallel processing. Implemented data validation and cleansing routines with PySpark to ensure data integrity.
Data Engineer at Whirlpool Corporation
August 1, 2023 - December 31, 2023
Ingested raw data from Azure Blob Storage (CSV, JSON, Parquet) and implemented partitioning and compression to optimize cost and performance. Built ETL/ELT pipelines using Databricks (PySpark, SQL) to clean, transform, and enrich data before loading into Snowflake data marts. Created Python-based transformation scripts within Databricks notebooks for data validation, cleansing, and aggregation. Orchestrated data workflows using Azure Data Factory for ingestion, transformations, and scheduling. Optimized Spark jobs via cluster tuning, caching strategies, and parallel processing; implemented RBAC and data masking in Snowflake to enforce security policies.
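The validation and cleansing routines run in the Databricks notebooks can be sketched in plain Python (the rules and field names are hypothetical; the production versions operate on Spark DataFrames rather than dicts):

```python
def clean_records(records):
    """Drop rows that fail basic integrity checks and normalize fields:
    a row needs a non-null SKU and a positive, parseable quantity."""
    cleaned = []
    for row in records:
        sku = row.get("sku")
        try:
            qty = int(row.get("qty", 0))
        except (TypeError, ValueError):
            continue  # unparseable quantity: reject the row
        if sku is None or qty <= 0:
            continue  # missing key or non-positive quantity: reject the row
        cleaned.append({
            "sku": sku.strip().upper(),  # normalize identifier casing/whitespace
            "qty": qty,
        })
    return cleaned

raw = [{"sku": " ab-1 ", "qty": "3"}, {"sku": None, "qty": 5}, {"sku": "cd-2", "qty": 0}]
print(clean_records(raw))
```

The same reject-or-normalize shape translates directly to PySpark as a `filter` plus `withColumn` chain over a DataFrame.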
Data Engineer at Travel + Leisure Co.
March 1, 2023 - July 31, 2023
Built Snowflake-native ETL pipelines leveraging SnowPipe for real-time data ingestion and dbt for transformations. Optimized Snowflake queries with clustering, partitioning, and result caching. Applied time travel, zero-copy cloning, and fail-safe features to improve data recovery and auditing. Implemented data validation and quality checks before loading into Snowflake data marts; enforced security with RBAC and data masking.
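The Snowflake recovery and auditing features mentioned above look roughly like this in SQL (the table names are hypothetical):

```sql
-- Time travel: query a table as it existed one hour ago
select * from orders at(offset => -3600);

-- Zero-copy clone: snapshot for auditing or testing without duplicating storage
create table orders_audit clone orders;

-- Recover an accidentally dropped table within the time travel retention window
undrop table orders;
```

Fail-safe sits behind time travel as a Snowflake-managed, last-resort recovery period once the retention window has passed.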
Data Engineer at EPAM (Belarus)
July 1, 2022 - May 31, 2023
Ingested raw data from various sources (APIs, databases, logs) into AWS S3, ensuring efficient storage using partitioning, compression (Parquet), and lifecycle policies. Developed ETL/ELT pipelines using Apache Airflow to automate data movement between AWS S3, Redshift, and external sources, making the ingested data available for analytics and reporting.
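The date-based partitioning used for the S3 layout can be sketched as a small key-building helper (the bucket prefix and file names are hypothetical):

```python
from datetime import date

def s3_key(prefix, event_date, filename):
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so that
    engines such as Redshift Spectrum or Athena can prune partitions
    instead of scanning the whole prefix."""
    return (f"{prefix}/year={event_date.year}"
            f"/month={event_date.month:02d}"
            f"/day={event_date.day:02d}/{filename}")

print(s3_key("raw/logs", date(2023, 1, 5), "events.parquet"))
# → raw/logs/year=2023/month=01/day=05/events.parquet
```

Combined with Parquet compression and lifecycle policies that transition old partitions to cheaper storage classes, this layout keeps both scan and storage costs down.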
Education
Belarusian State University of Informatics and Radioelectronics
January 11, 2030 - January 1, 2022
Qualifications
Industry Experience
Software & Internet, Professional Services, Financial Services, Healthcare, Life Sciences, Manufacturing