I’m Giannis Roussos, a full-stack software engineer based in Athens, Greece. I design and ship scalable SaaS platforms and AI-integrated systems using TypeScript, React, Next.js, and Node.js. I enjoy building practical, maintainable architectures and collaborating across product and design teams to deliver solid user experiences.
I’ve led end-to-end production builds, rebuilt system architectures for reliability and performance, and delivered real-time dashboards and AI-powered integrations while prioritizing security, accessibility, and clean code. I thrive in fast-paced environments and love turning complex requirements into elegant, working solutions.
Multi-model AI clinical decision support platform built independently in both Next.js and Ruby on Rails to benchmark architectures.

What it does
Clinicians paste medical records and receive distilled summaries, suggested differential diagnoses, treatment options, and live evidence from PubMed — all grounded strictly in the document context, with no external knowledge injected (a design constraint that sharply reduces hallucination risk).

Architecture highlights
- Five-model Gemini/Gemma waterfall cascade (Gemini 2.5 Flash → Flash Lite → three Gemma variants)
- Reactive 429/503 switching with capability detection between the Gemini and Gemma model families (Gemma does not support the systemInstruction field — handled with a runtime supportsSystemInstruction() check)
- 45-second AbortSignal timeout per call
- RAG via Supabase pgvector with a pure-TypeScript cosine-similarity fallback path for RPC failures
- HL7 FHIR R4 data explorer and raw .hl7 file ingestion alongside PDF and text
- Live PubMed evidence grounding for clinical claims
- Redis rate limiting, Zod schema validation, structured audit logging

The Rails benchmark
Rebuilt the entire system in Ruby on Rails over a single weekend (framework learned from scratch) to compare architectures. Result: 60% faster total execution (5.6s vs 14.7s) and an 85% reduction in JavaScript payload.

Stack: TypeScript, Next.js (App Router), Ruby on Rails 7, Supabase (Postgres + pgvector), Vercel AI SDK, Redis, Zod, Hotwire (Turbo/Stimulus), Vercel, Fly.io

Live demos:
- Next.js: https://www.twine.net/signin
- Rails: https://www.twine.net/signin

Source: https://www.twine.net/signin
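The cascade behavior in the highlights above can be sketched as follows. This is a minimal illustration, not the platform's actual code: the model IDs beyond the two named Gemini variants, the CallOptions shape, and the callWithCascade/ModelCaller names are assumptions. Only the 429/503 fall-through, the supportsSystemInstruction() capability check, and the 45-second AbortSignal timeout come from the description.

```typescript
type ModelId = string;

// Hypothetical cascade order; only the first two IDs are named in the write-up.
const CASCADE: ModelId[] = [
  "gemini-2.5-flash",
  "gemini-2.5-flash-lite",
  "gemma-3-27b-it",
  "gemma-3-12b-it",
  "gemma-3-4b-it",
];

// Gemma models do not accept a systemInstruction field.
function supportsSystemInstruction(model: ModelId): boolean {
  return !model.startsWith("gemma");
}

interface CallOptions {
  prompt: string;
  systemInstruction?: string;
  signal: AbortSignal;
}

type ModelCaller = (model: ModelId, opts: CallOptions) => Promise<string>;

async function callWithCascade(
  call: ModelCaller,
  prompt: string,
  systemInstruction: string,
  timeoutMs = 45_000, // 45-second budget per call
): Promise<string> {
  let lastError: unknown;
  for (const model of CASCADE) {
    const canUseSystem = supportsSystemInstruction(model);
    const opts: CallOptions = {
      // Fold the system prompt into the user prompt when unsupported.
      prompt: canUseSystem ? prompt : `${systemInstruction}\n\n${prompt}`,
      signal: AbortSignal.timeout(timeoutMs),
    };
    if (canUseSystem) opts.systemInstruction = systemInstruction;
    try {
      return await call(model, opts);
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status === 429 || status === 503) {
        lastError = err; // rate-limited or overloaded: fall through to next model
        continue;
      }
      throw err; // non-retriable errors propagate immediately
    }
  }
  throw lastError;
}
```

Keeping the caller injectable (ModelCaller) lets the same cascade wrap whichever SDK actually issues the request.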
Real-time patient vitals dashboard pulling HL7 FHIR R4 data through a Node.js/Express proxy on AWS App Runner.

Why it exists
Built from the perspective of an ICU nurse who saw firsthand how fragmented and slow FHIR data delivery is in production hospital environments. The goal: prove that clean server-side data transformation can deliver a responsive clinical dashboard without overloading the browser.

Technical highlights
- Custom SVG waveform rendering for live patient vitals
- Server-side data transformation reducing client-side processing overhead by 30%
- Node.js/Express proxy layer running on AWS App Runner
- Environment-driven configuration for multiple FHIR sandbox sources
- React/Next.js frontend with responsive design across desktop and tablet form factors

Stack: TypeScript, Next.js, Node.js, Express, AWS App Runner, HL7 FHIR R4, SVG

Live demo: https://www.twine.net/signin
Source: https://www.twine.net/signin
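The server-side transformation idea can be sketched as a pure function that flattens a FHIR R4 Observation bundle into the compact points an SVG waveform consumes, so the browser never touches raw bundles. The FHIR field names (entry, code.coding, effectiveDateTime, valueQuantity) follow the R4 resource definitions; the VitalPoint shape and the toVitalSeries name are assumptions, not the project's actual code.

```typescript
interface FhirObservation {
  resourceType: "Observation";
  code: { coding: { code: string; display?: string }[] };
  effectiveDateTime?: string;
  valueQuantity?: { value: number; unit: string };
}

interface FhirBundle {
  resourceType: "Bundle";
  entry?: { resource: FhirObservation }[];
}

// Hypothetical render-ready shape for the waveform component.
interface VitalPoint {
  t: number; // epoch millis
  value: number;
  unit: string;
}

// Filter to one LOINC-coded vital (e.g. 8867-4 = heart rate), drop incomplete
// observations, and sort chronologically — all on the server, so the client
// only receives points it can plot directly.
function toVitalSeries(bundle: FhirBundle, loincCode: string): VitalPoint[] {
  return (bundle.entry ?? [])
    .map((e) => e.resource)
    .filter(
      (r) =>
        r.resourceType === "Observation" &&
        r.valueQuantity !== undefined &&
        r.effectiveDateTime !== undefined &&
        r.code.coding.some((c) => c.code === loincCode),
    )
    .map((r) => ({
      t: Date.parse(r.effectiveDateTime!),
      value: r.valueQuantity!.value,
      unit: r.valueQuantity!.unit,
    }))
    .sort((a, b) => a.t - b.t);
}
```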
AI selfie-capture module built during a paid software engineering internship at Skinstric. Iterated from naive pixel-based detection to a production-grade implementation that goes well beyond the original Figma spec.

The technical core
A real-time visual feedback ring of 36 independent arc segments around an oval, each lighting up as the user’s face aligns with it. Built from first principles — no library handles this specific combination of geometry, smoothing, and rendering.

Implementation details
- BlazeFace bounding-box geometry with mirror correction for the front-facing camera (scaleX(-1) on the video element flips raw coordinates — corrected before segment math runs)
- Per-segment confidence calculation: take the angle each segment occupies, find the corresponding points on the target oval and on the face bounding-box ellipse, measure their normalized distance, and convert it to a confidence value
- Centering-score multiplier so segments only fully illuminate when the face is both well sized and well centered
- Asymmetric temporal smoothing (0.4 rising, 0.25 falling) plus spatial smoothing across neighboring segments for a fluid 30fps animation
- Rendered as a conic gradient built from 36 per-segment RGBA values, masked with a radial gradient so only the 4px border ring shows color
- Firebase Cloud Storage integration for capture upload and an admin dashboard

Stack: TypeScript, Next.js, React, MediaPipe BlazeFace, Firebase Cloud Storage, Browser Media API, SVG, Conic Gradients

Live demo: https://www.twine.net/signin
Source: https://www.twine.net/signin
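The asymmetric smoothing step above can be sketched like this. The 0.4 rising / 0.25 falling factors come from the description; the 3-tap spatial kernel weights and the function names are assumptions for illustration.

```typescript
const RISE = 0.4; // confidence climbs fast when the face moves into alignment
const FALL = 0.25; // and decays more slowly, so the ring never flickers

// Exponential smoothing with a direction-dependent factor.
function smoothSegment(prev: number, target: number): number {
  const alpha = target > prev ? RISE : FALL;
  return prev + (target - prev) * alpha;
}

// Run once per animation frame across all 36 ring segments, then apply a
// light spatial blur so neighboring segments influence each other. The ring
// is circular, so the kernel wraps at both ends.
function smoothRing(prev: number[], target: number[]): number[] {
  const temporal = prev.map((p, i) => smoothSegment(p, target[i]));
  const n = temporal.length;
  return temporal.map((v, i) => {
    const left = temporal[(i - 1 + n) % n];
    const right = temporal[(i + 1) % n];
    return 0.25 * left + 0.5 * v + 0.25 * right; // assumed 3-tap weights
  });
}
```

Asymmetric factors are a common trick for UI feedback: fast attack keeps the ring responsive, slow release hides per-frame detection jitter.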