Er. Nabin Hyanmikha

Available to hire

I’m a data engineer with over 3.5 years of hands-on experience building scalable data pipelines over multi-source e-commerce data. I design and automate ETL processes using Python (factory pattern), Amazon S3, and data warehouse technologies such as Amazon Redshift, alongside data lakehouse architectures. I focus on delivering data in analysis-ready formats and on deployment automation with shell scripting and cron scheduling; currently, I’m exploring Apache Iceberg to improve the performance and scalability of our big-data pipelines.

Beyond technical work, I lead a team of data engineers at GrowByData Services, guiding project execution, mentoring team members, and fostering a collaborative, growth-oriented culture. I believe leadership is about aligning people, processes, and technology to drive impact, and I remain deeply committed to continuous learning, both in new technologies and in non-technical areas.
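The factory-pattern approach to multi-source data acquisition mentioned above could look like the minimal sketch below. All class and source names (`SourceExtractor`, `AmazonExtractor`, `ExtractorFactory`, etc.) are illustrative assumptions, not taken from any real codebase; the idea is simply that each e-commerce source gets its own extractor class and a factory selects the right one by name.

```python
from abc import ABC, abstractmethod


class SourceExtractor(ABC):
    """Base interface every per-source extractor implements."""

    @abstractmethod
    def extract(self) -> list[dict]:
        ...


class AmazonExtractor(SourceExtractor):
    def extract(self) -> list[dict]:
        # Placeholder: a real extractor would call the marketplace API here.
        return [{"source": "amazon", "sku": "A-1", "price": 19.99}]


class EbayExtractor(SourceExtractor):
    def extract(self) -> list[dict]:
        return [{"source": "ebay", "sku": "E-7", "price": 14.50}]


class ExtractorFactory:
    """Factory: maps a source name to the extractor class that handles it."""

    _registry = {"amazon": AmazonExtractor, "ebay": EbayExtractor}

    @classmethod
    def create(cls, source: str) -> SourceExtractor:
        try:
            return cls._registry[source]()
        except KeyError:
            raise ValueError(f"Unknown source: {source}") from None


def run_pipeline(sources: list[str]) -> list[dict]:
    """Extract rows from each configured source via the factory."""
    rows: list[dict] = []
    for name in sources:
        rows.extend(ExtractorFactory.create(name).extract())
    return rows
```

The benefit of the pattern is that adding a new source means adding one class and one registry entry; the pipeline loop itself never changes.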

Language

Nepali: Advanced
English: Advanced
Hindi: Intermediate

Work Experience

Data Engineer at Maitri Holding Services
July 1, 2025 - Present
Data Engineer responsible for building scalable data pipelines, contributing to the open-source TUVA Project, and creating a SOLID, factory-pattern-driven automation framework for terminology updates.
Sr. Data Engineer at GrowByData Services
January 1, 2025 - July 1, 2025
Drafted, implemented, and automated health checks for third-party tools (e.g., Dremio); initiated a proof of concept for Apache Iceberg to improve data pipeline efficiency; updated the pipeline architecture; developed an interactive data-status reporting workflow; modularized the handling of system glitches; and migrated the AWS-based Dremio deployment to Kubernetes.
Data Engineer at GrowByData Services
December 1, 2023 - December 1, 2024
Researched integrating Apache Airflow with the factory-pattern-based Data Acquisition Framework; automated ETL with Python; introduced the data lakehouse concept; used Dremio as the query engine backing the data lakehouse; optimized SQL; improved Bash and shell scripts; and managed resource segregation across projects.
Associate Data Engineer at GrowByData Services
December 1, 2021 - December 1, 2023
Acquired data from multiple e-commerce sources using Python with the factory design pattern; automated data ingestion into Amazon Redshift (data warehouse) using Scala; automated the data acquisition process via shell scripting and crontab scheduling; produced insight reports; supported scaling of the storage infrastructure; and gained hands-on experience with AWS EC2.
Internship in Data Engineering at Leapfrog Technology, Inc.
September 1, 2021 - November 1, 2021
Processed and loaded data into a data warehouse database using Python; transformed data per requirements; visualized data using Power BI.

Education

Master’s Degree at College of Information Technology, Pokhara University
January 1, 2023 - January 8, 2026
Bachelor’s Degree at Khwopa Engineering College, Purbanchal University
January 1, 2016 - January 1, 2020
Higher Secondary at Khwopa Higher Secondary School
January 1, 2014 - January 1, 2016
Secondary Level Certificate at Bagiswori Higher Secondary School
January 1, 2013 - January 1, 2014

Industry Experience

Software & Internet, Professional Services, Healthcare, Computers & Electronics