Available to hire
I am Sneha Pallerla, a data engineer with about six years of experience building scalable data pipelines across finance, manufacturing, and telecom. I design, develop, and maintain end-to-end data infrastructure that powers analytics and business intelligence.
I work with cloud and data platforms such as AWS and Palantir Foundry, and with big data technologies including Spark, Hadoop, Hive, and HBase. I collaborate in agile environments, lead cloud migrations, automate repetitive workflows, and deliver dashboards that enable data-driven decision making.
Languages
English - Fluent
Hindi - Fluent
Telugu - Fluent
Marathi (Marāṭhī) - Fluent
Work Experience
Senior Data Engineer at Airbus
June 1, 2023 - June 1, 2025
Designed and deployed 10+ scalable ETL pipelines processing 500K+ aircraft telemetry records using Palantir Foundry (Slate, Workshop, Contour, Code Workbook, Ontology, Data Pipeline), PySpark, and Python, enabling timely fleet analytics.
Maintained and optimized 20+ KPIs for global aircraft performance to support data-driven decision making.
Built interactive dashboards to monitor TCO/TFO metrics, improving issue detection by 35%.
Managed data products used by 200+ engineers and customer service representatives, reducing repair turnaround time by 25%.
Participated in SAFe Agile ceremonies, ensuring delivery of high-priority features every 3 weeks.
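A fleet KPI of the kind described above can be illustrated with a minimal aggregation in plain Python (the production pipelines ran on PySpark in Foundry; the record fields and the fuel-burn metric here are hypothetical):

```python
from collections import defaultdict

def mean_metric_by_fleet(records, metric: str):
    """Average a telemetry metric per aircraft fleet (hypothetical KPI)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for rec in records:
        fleet = rec["fleet"]
        sums[fleet] += rec[metric]
        counts[fleet] += 1
    # Divide the running sum by the record count for each fleet.
    return {fleet: sums[fleet] / counts[fleet] for fleet in sums}

# Hypothetical telemetry sample.
telemetry = [
    {"fleet": "A320", "fuel_burn_kg": 2400.0},
    {"fleet": "A320", "fuel_burn_kg": 2600.0},
    {"fleet": "A350", "fuel_burn_kg": 5800.0},
]
print(mean_metric_by_fleet(telemetry, "fuel_burn_kg"))
# {'A320': 2500.0, 'A350': 5800.0}
```

In PySpark the same shape is a `groupBy` plus `avg`; the plain-Python version keeps the sketch self-contained.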
Senior Data Engineer at HashedIn by Deloitte (Client: Vanguard)
September 1, 2021 - June 1, 2023
Led migration of 100+ legacy DB2 jobs to AWS Glue and PySpark, achieving 30% faster processing and 20% cost reduction.
Automated 7 manual reporting workflows with Python and PySpark, saving 40+ hours/month for Business Analysts.
Translated legacy COBOL logic to PySpark pipelines with full functional parity in the cloud.
Designed ETL pipelines for 160M+ semi-structured records, reducing SLA breaches by 50%.
Created reusable CloudFormation templates for Glue ETL jobs, decreasing setup time by 60%.
Implemented hashing validation scripts to reduce data mismatches by 90% and orchestrated end-to-end workflows using AWS Glue Workflows, S3, and Aurora PostgreSQL.
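The hashing validation mentioned above can be sketched in plain Python (the production scripts ran in PySpark over the migrated datasets; the record layout, key, and column names here are hypothetical):

```python
import hashlib

def row_fingerprint(row: dict, columns: list) -> str:
    """Hash a row by concatenating its column values in a fixed order."""
    joined = "|".join(str(row.get(col, "")) for col in columns)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def find_mismatches(source_rows, target_rows, key: str, columns: list):
    """Compare two datasets keyed by `key`; return keys whose row hashes differ."""
    src = {r[key]: row_fingerprint(r, columns) for r in source_rows}
    tgt = {r[key]: row_fingerprint(r, columns) for r in target_rows}
    # Keys present only in the target are not flagged in this minimal sketch.
    return sorted(k for k in src if src.get(k) != tgt.get(k))

# Hypothetical sample: one record drifted during migration.
legacy = [{"id": 1, "amount": "100.00"}, {"id": 2, "amount": "250.50"}]
migrated = [{"id": 1, "amount": "100.00"}, {"id": 2, "amount": "250.05"}]
print(find_mismatches(legacy, migrated, key="id", columns=["amount"]))  # [2]
```

In PySpark the same idea is typically expressed with `sha2(concat_ws(...))` and an anti-join; hashing per row keeps the comparison cheap even for very large tables.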
Application Development Analyst – Big Data at Accenture (Client: AT&T)
February 1, 2019 - September 1, 2021
Engineered Big Data solutions using Hadoop, Hive, HDFS, HBase, and Oracle SQL to support revenue assurance and analytics for telecom services.
Improved KPI/KRA monitoring via the TADA platform, identifying over $500K in potential revenue leakages.
Developed database schemas, views, and materialized views to improve downstream query performance by ~40%.
Resolved 20+ critical production issues monthly with 99.9% pipeline reliability.
Automated repetitive ingestion/validation workflows with Shell scripting, reducing manual workload by ~30%.
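A revenue-assurance check of the kind used to surface leakages can be sketched as a simple reconciliation in Python (the production checks ran over Hive/HBase data; the account fields and tolerance here are hypothetical):

```python
def find_leakages(rated, billed, tolerance=0.01):
    """Flag accounts where billed revenue falls short of rated (expected) revenue."""
    billed_by_account = {r["account"]: r["amount"] for r in billed}
    leakages = []
    for r in rated:
        # Positive gap means revenue that was rated but never billed.
        gap = r["amount"] - billed_by_account.get(r["account"], 0.0)
        if gap > tolerance:
            leakages.append((r["account"], round(gap, 2)))
    return leakages

# Hypothetical sample: account A2 was under-billed.
rated = [{"account": "A1", "amount": 120.0}, {"account": "A2", "amount": 80.0}]
billed = [{"account": "A1", "amount": 120.0}, {"account": "A2", "amount": 65.5}]
print(find_leakages(rated, billed))  # [('A2', 14.5)]
```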
Education
Post Graduate Diploma in Advanced Computing at CDAC, Bengaluru, India
August 1, 2018 - January 1, 2019
Bachelor of Technology in Electronics and Communication Engineering at Visvesvaraya National Institute of Technology (VNIT), Nagpur, India
July 1, 2014 - May 1, 2018
Qualifications
AWS Certified Developer – Associate
January 12, 2026 - January 11, 2030
Certified SAFe 6 Practitioner
January 12, 2026 - January 11, 2030
Industry Experience
Telecommunications, Financial Services, Manufacturing, Software & Internet, Professional Services