Reporgo
A Back-End Developer is needed in Ahmedabad, India.
I am looking for an AWS Data Engineer with some experience in platform engineering for my personal project, Reporgo. The tasks involve architecting and optimizing cloud-native data pipelines on AWS, building scalable ETL workflows using Python and Docker, and designing data processing solutions. You’ll also collaborate on secure AI/LLM pipelines and establish CI/CD automation for data workflows. It’s important to have proven experience in AWS data engineering, proficiency in Python, and a solid understanding of DataOps and MLOps tools.
This is a part-time contract position based in Ahmedabad, India. Strong communication and collaboration skills are essential, as you’ll be working closely with AI/ML teams and advising stakeholders on data governance and platform scaling. If you have a background in computer science or data engineering, I would like to hear from you.
Can you provide examples of your preferred style?
AWS Data Engineer with some experience in platform engineering.
Tasks:
Architect and optimize cloud-native data pipelines on AWS (using services like Lambda, Step Functions, API Gateway, Kinesis, S3, and EventBridge).
Build scalable, modular ETL workflows using Python, Docker, and GitHub Actions with private runners.
Design real-time and batch data processing solutions with DynamoDB Streams, RDS PostgreSQL, and OpenSearch.
Collaborate on secure and compliant AI/LLM pipelines using AWS Bedrock and KMS for encryption and governance.
Develop and maintain containerized services using ECR and orchestrate workflows with Docker-based Lambda functions (a minimal handler sketch follows this list).
Establish CI/CD automation for data and model pipelines, integrating GitHub Actions, IaC (Terraform/CDK), and Docker.
Enable data discoverability, reproducibility, and FAIR compliance through metadata management, lineage tracking, and data cataloging.
Work closely with AI/ML teams to support feature store management, training pipelines, and inference integration.
Advise stakeholders on data governance, platform scaling, and LLM-ready architecture patterns.
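To make the event-driven pattern above concrete, here is a minimal sketch of one pipeline step: an S3-triggered, container-packaged Lambda handler that loads an uploaded JSON file and upserts its records into DynamoDB. The bucket, table name, and payload shape are hypothetical placeholders for illustration, not details taken from this listing.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("reporgo-records")  # hypothetical table name

def handler(event, context):
    """Triggered by S3 ObjectCreated events: load each uploaded JSON file
    and upsert its records into DynamoDB."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded; decode before fetching the object.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        items = json.loads(body)  # assumes the file is a JSON array of items
        # batch_writer buffers and retries writes, so large files stay cheap.
        with table.batch_writer() as batch:
            for item in items:
                batch.put_item(Item=item)
    return {"status": "ok", "files": len(event["Records"])}
```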
Qualifications & Skills
Proven experience in AWS-centric data engineering, especially across API Gateway, DynamoDB, RDS, OpenSearch, and ECR.
Proficiency in Python, containerization with Docker, and event-driven architecture (e.g., Lambda, SNS/SQS, Step Functions).
Hands-on expertise in ETL, DataOps, and MLOps with modern tools like dbt, Apache Spark, or AWS Glue.
Familiarity with data governance principles, encryption (KMS), IAM policies, and privacy-by-design concepts in the cloud.
Experience integrating with LLMs via Bedrock or similar, and optimizing pipelines for retrieval-augmented generation (RAG) or fine-tuning.
Strong understanding of GitOps, Infrastructure as Code, and CI/CD for data applications (see the CDK sketch after this list).
Bonus: Familiarity with FHIR, healthcare interoperability standards, or building LLM-integrated clinical data platforms.
Excellent communication and collaboration skills—capable of bridging technical and clinical or business domains.
Degree in Computer Science, Data Engineering, or related field—or equivalent hands-on experience.
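As an illustration of the IaC side of the role, the sketch below uses AWS CDK v2 in Python to deploy a Docker-based Lambda built from a local image and wire it to S3 object-created events. The construct names and the ./etl path are assumptions for illustration only.

```python
from aws_cdk import Stack, aws_lambda as _lambda, aws_s3 as s3, aws_s3_notifications as s3n
from constructs import Construct

class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "LandingBucket")

        # Container-image Lambda: CDK builds the Dockerfile in ./etl
        # (hypothetical path) and publishes the image to an ECR asset
        # repository on deploy.
        fn = _lambda.DockerImageFunction(
            self, "EtlFunction",
            code=_lambda.DockerImageCode.from_image_asset("./etl"),
        )

        # Invoke the function whenever a new object lands in the bucket.
        bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED,
            s3n.LambdaDestination(fn),
        )
        bucket.grant_read(fn)
```

Running cdk deploy on a stack like this builds the container image, pushes it to ECR, and creates the bucket, function, and notification in one step, which is the kind of repeatable, GitOps-friendly workflow the qualifications above describe.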
Who are you, and what do you do?
Pathik
In what capacity are you hiring?
For a personal project
Where are you in the hiring process?
I’m ready to make a paid hire
What type of work is this?
Part-time position (Contract)
What experience level is needed?
Beginner: $15-25, Junior: $25-35
Client contact preference:
📱 WhatsApp / SMS
No longer accepting applications