Dominic Portain

Available to hire

I’m a cross-disciplinary AI engineer with a background in cognitive science, signal processing, and applied statistics. My professional work has spanned industrial computer vision (wood defect detection, conveyor-belt inspection on embedded cameras), energy market analysis and financial risk modeling, and neuroscience research at the Max Planck Institute. What these contexts share is messy real-world data, constrained hardware, and stakeholders who need plain-language explanations of what the AI is actually doing — and that intersection is where I do my best work.

My strongest asset is moving quickly from an ill-defined problem to a working prototype. At one employer, I identified their hardest unsolved vision problem at a job fair, built an object detection prototype in six months, and was hired to scale it into production. I’ve done similar things with portfolio risk models, LLM-based document pipelines, and custom hardware projects.

I work across the full pipeline: data acquisition and labeling strategy, model training and evaluation, quantization-aware optimization for edge deployment, firmware integration in Python, Lua, and C++, and user-facing documentation. More recently I’ve been building LLM-powered systems — RAG pipelines, embedding-based filtering, transcript summarization with context nesting — using both cloud APIs and local inference.

What I bring beyond the technical stack is a strong intuition for where models break and why. My statistics training means I examine residuals, question distributional assumptions, and catch failure modes before they reach production. I also genuinely enjoy the human side: designing interfaces, writing clear documentation, and translating model behavior into decisions that non-technical people can act on. My electrical engineering background and years of hardware tinkering mean I’m comfortable reading schematics, talking to firmware teams, and understanding the physical constraints a model will eventually run inside.

I’m best suited for projects that need rapid prototyping, cross-domain problem-solving, or a fresh perspective on a stuck technical problem.

Language

English: Fluent
German: Fluent
Swedish: Beginner
French: Beginner

Work Experience

Embedded AI Developer at SICK IVP
November 1, 2023 - June 30, 2025
Worked on the Inspector series: ruggedized industrial cameras performing on-device AI object detection and defect detection in conveyor-belt environments. Contributed to the MobileNet-based feature detector pipeline (PyTorch, ONNX, DVC, MLflow, Docker), including quantization-aware training, dataset pruning, and training-augmentation experiments. Designed adversarial datasets to stress-test the detection system, uncovering unhandled edge cases and implementing targeted fixes such as rotation augmentation. Co-developed a new auto-thresholding algorithm, prototyped an AI-powered OCR plugin, and helped build an end-to-end object detection benchmark. Contributed to firmware development in Lua and C++, wrote tests, and participated in structured release processes via GitLab CI in an Agile environment.
Senior AI Engineer at Microtec AB
April 1, 2020 - October 30, 2023
Joined after delivering a six-month prototype that solved a previously intractable problem: detecting light wood knots in oak using deep learning–based object detection, where conventional image processing had failed. Built the system from scratch (data acquisition, labeling, model training with MMDetection and PyTorch, and demoing the results), then transitioned it into the company's internal vision framework. Expanded the AI to cover multiple defect types and added classification capabilities. Scaled the AI team from one to five, including hiring, onboarding, workspace design, and infrastructure build-out. Maintained continuous customer contact to align development priorities with real-world requirements. Managed performance optimization across both model architecture and deployment hardware.
Assistant Engineer at Swiftronix
January 1, 2020 - March 31, 2020
Assembled custom digital audio hardware in a small-batch production environment. Operated and optimized a pick-and-place machine, hand-soldered cables and PCB patches, and contributed to workshop infrastructure improvements including thermal camera modification and oven upgrades. Wrote a Python test suite covering 200+ firmware functions to automate QA for customer-facing products. Organized the component inventory system, reducing production downtime from stock-outs.
Senior Energy Analyst at Trianel GmbH
February 1, 2016 - October 31, 2018
Automated the generation and distribution of customer-facing energy market reports, replacing manual workflows with a Python pipeline including custom PDF generation and email distribution. Refactored an 8,000-line legacy Matlab codebase for daily analysis and prediction, transforming an opaque, maintenance-heavy system into structured, documented code. Trained recurrent neural networks for heating and coal demand forecasting. Investigated persistent breaches in the portfolio risk model, identified a flawed distributional assumption, and implemented and calibrated a nonlinear Heston model as a replacement, which satisfied the department's accuracy targets. Wrote 12 pages of handover documentation for my successor.
PhD-level Researcher at Max Planck Institute for Cognitive and Brain Sciences
December 30, 2011 - June 28, 2015
Designed and executed a three-year research project on cortical language processing, combining EEG, MEG, and MRI data. Built individual conductivity head models from structural MRI, co-registered multimodal brain imaging data, and ran transfer entropy analyses on the merged signals. Wrote all analysis code (Python, Matlab), managed compute-intensive workflows across multiple workstations, and produced a 120-page scientific report. The role demanded experimental design, signal processing, and large-scale data wrangling from end to end.

Education

Master of Science (Cognitive Science) at University of Twente, Netherlands
August 1, 2008 - March 31, 2011
Studied at the intersection of neuroscience, applied statistics, signal processing, and programming, with a minor in User Interaction Design. Bachelor's thesis investigated lateral effects on the P300 component in pain perception. Master's thesis developed a noise-tolerant method for detecting evoked signals in continuous EEG data.

Industry Experience

Energy & Utilities, Life Sciences, Computers & Electronics, Manufacturing, Software & Internet
