
ShashinMori

March 23, 2026

A full-stack, cross-platform photo gallery that routes secure uploads through legacy Google Pixel devices to retain unlimited original-quality storage, built with a Flutter frontend and a Fastify backend.

Built with
Flutter, Dart, Riverpod, GoRouter, Fastify, TypeScript, Firebase Auth, Redis, BullMQ, Jimp, tus Protocol

ShashinMori (写真森)

ShashinMori — Photo Forest in Japanese — is a production-grade, cross-platform photo gallery platform engineered to reclaim unlimited original-quality Google Photos storage through legacy Pixel devices. Google's 2021 sunset of the free unlimited storage tier left a population of Pixel owners with capable hardware and no viable path to retaining their original-resolution uploads. ShashinMori closes that gap: a self-hosted, family-focused gallery backed by a resumable upload pipeline, an async image processing queue, and a Redis caching layer that sustains sub-100ms page loads at scale.

Architecture

The system divides cleanly into two independently deployable layers: a Flutter frontend compiled for Android and web, and a Fastify backend that owns upload coordination, job scheduling, and authentication verification.

Frontend — Flutter + Riverpod + GoRouter

The frontend is a single Dart codebase that compiles to a native Android APK and a web bundle without per-platform branches in application logic. Riverpod was chosen over BLoC and Redux for its compile-time dependency injection model: providers are resolved at the call site rather than threaded through a widget tree, which eliminates the boilerplate that typically accumulates as state graphs grow. GoRouter provides type-safe, declarative navigation with deep-link support — critical for correctly restoring state when the Android system kills and relaunches the app mid-upload.

The gallery view renders a staggered grid that adapts from single-column on small-screen Android to four or more columns on widescreen web, driven by LayoutBuilder breakpoints rather than separate widget trees. Network images are cached with shimmer placeholder states, keeping perceived latency low even on connections that deliver thumbnails slowly.
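
The breakpoint logic amounts to mapping viewport width to a column count. A minimal sketch of that mapping, in TypeScript for readability (the thresholds here are illustrative stand-ins, not the project's actual LayoutBuilder values):

```typescript
// Map a viewport width (logical pixels) to a gallery column count.
// Thresholds are illustrative stand-ins for the LayoutBuilder breakpoints.
function columnsForWidth(width: number): number {
  if (width < 600) return 1;   // small-screen Android: single column
  if (width < 900) return 2;
  if (width < 1200) return 3;
  return 4;                    // widescreen web: four or more columns
}
```

Because the same function drives every target, adding a breakpoint changes one branch rather than a per-platform widget tree.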

Backend — Fastify + TypeScript + Node.js

Fastify was selected over Express for its plugin-first architecture and measurably lower per-request overhead. The server owns four primary concerns: upload coordination via the tus protocol, Firebase JWT verification on every authenticated route, BullMQ job dispatch for async image processing, and Redis-backed result caching.

The upload handler accepts chunked tus requests and immediately enqueues a BullMQ job on successful assembly — the API response returns a jobId and a status polling URL within single-digit milliseconds. The actual image processing (resize to 1920×1440 via Jimp's Bezier resampler, JPEG quality normalization, metadata extraction) runs in a separate worker process and completes asynchronously, typically within 2–5 seconds depending on source file size.
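
The handoff can be sketched with a plain in-memory queue standing in for BullMQ (the names `onUploadAssembled` and the URL shape are illustrative, not the project's actual API):

```typescript
// Sketch of the upload-complete handoff: enqueue a processing job and
// respond immediately, instead of blocking on image processing.
// A plain array stands in for the BullMQ queue here.
type Job = { jobId: string; filePath: string };
const queue: Job[] = [];
let nextId = 0;

function onUploadAssembled(filePath: string): { jobId: string; statusUrl: string } {
  const jobId = `job-${++nextId}`;
  queue.push({ jobId, filePath });  // BullMQ equivalent: queue.add("process", {...})
  // The HTTP response carries only the job handle; processing happens later.
  return { jobId, statusUrl: `/jobs/${jobId}/status` };
}
```

The handler does no image work at all, which is what keeps the response in single-digit milliseconds regardless of source file size.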

Firebase JWT verification runs as a Fastify hook before any authenticated route handler. Verified session tokens are cached in Redis with a TTL aligned to the token's own expiry, reducing auth overhead from approximately 200ms per request to under 10ms on cache hits.
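
The cache logic can be sketched with a `Map` standing in for Redis (the real implementation would use `SET` with an expiry; the names here are illustrative):

```typescript
// Sketch of the verified-token cache: keyed by token, expiring no later
// than the token's own exp claim. A Map stands in for Redis.
type CachedSession = { uid: string; expiresAt: number }; // epoch millis
const sessionCache = new Map<string, CachedSession>();

function cacheSession(token: string, uid: string, tokenExpMillis: number): void {
  // TTL aligned to the token's own expiry: never serve a stale session.
  sessionCache.set(token, { uid, expiresAt: tokenExpMillis });
}

function getCachedSession(token: string, now: number): string | null {
  const hit = sessionCache.get(token);
  if (!hit || hit.expiresAt <= now) {
    sessionCache.delete(token); // expired entries are evicted lazily
    return null;                // miss: fall through to full Firebase verification
  }
  return hit.uid;               // hit: skip the ~200ms verification round-trip
}
```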

Key Features

  • Resumable uploads via tus protocol: Chunk-based transmission with server-side offset tracking. If a mobile connection drops mid-transfer, the client resumes from the last acknowledged chunk rather than retransmitting the entire file. Upload success rate reaches 99.7% across unstable connections.
  • Async image processing pipeline: BullMQ dispatches Jimp workers to handle resize and compression independently of the HTTP request lifecycle. Workers are stateless and scale horizontally — additional worker processes can be added without API changes.
  • Redis caching layer: Query results are memoized with Redis, reducing database hits by 43%. Session state is stored in Redis rather than in-process, which means multiple worker instances share authentication context without redundant verification round-trips.
  • Firebase OAuth authentication: Google Sign-In via Firebase is integrated on both Android (native SDK) and web (redirect flow). JWT verification on the backend is authoritative — the client cannot forge or extend a session.
  • Staggered grid gallery: Responsive layout adapts to viewport width without separate mobile and desktop widget trees. Image dimensions are preserved as aspect ratios from metadata, preventing grid reflow as images load.
  • Rate limiting: Fastify's rate-limit plugin caps per-IP request rates before they reach application handlers, protecting upload and query endpoints from abuse without adding latency to well-behaved clients.
  • Swagger auto-documentation: All endpoints are documented via Fastify's OpenAPI plugin. Pino structured logging propagates request context (user ID, job ID) through the log chain for straightforward production debugging.
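
The rate-limiting behavior above can be approximated by a per-IP fixed-window counter; a minimal sketch (the window length and cap are illustrative, not the project's configured values):

```typescript
// Sketch of per-IP fixed-window rate limiting, approximating what a
// rate-limit hook does before requests reach application handlers.
const WINDOW_MS = 60_000;   // illustrative window length
const MAX_REQUESTS = 100;   // illustrative per-window cap
const counters = new Map<string, { windowStart: number; count: number }>();

function allowRequest(ip: string, now: number): boolean {
  const c = counters.get(ip);
  if (!c || now - c.windowStart >= WINDOW_MS) {
    counters.set(ip, { windowStart: now, count: 1 }); // start a new window
    return true;
  }
  c.count += 1;
  return c.count <= MAX_REQUESTS; // reject once the cap is exceeded
}
```

Well-behaved clients never hit the cap, so the check adds only a map lookup to their request path.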

Implementation Details

The tus Protocol Decision

Standard multipart form uploads fail silently on mobile: a dropped connection produces a partial file with no recovery path. The tus resumable upload protocol was chosen specifically because it encodes offset tracking into the HTTP layer itself. The server records the byte offset of the last acknowledged chunk; clients query this offset on reconnect and resume from that position. This made upload reliability a protocol guarantee rather than an application-layer retry policy.
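
The offset bookkeeping described above can be sketched as follows, with an in-memory map of upload id to received bytes standing in for on-disk partial files (function names are illustrative; the tus verbs they correspond to are noted in comments):

```typescript
// Sketch of tus-style server-side offset tracking.
const uploads = new Map<string, { length: number; received: number }>();

function createUpload(id: string, length: number): void {
  uploads.set(id, { length, received: 0 });           // tus POST: create upload
}

function currentOffset(id: string): number {
  return uploads.get(id)!.received;                   // tus HEAD: Upload-Offset
}

function appendChunk(id: string, offset: number, bytes: number): number {
  const u = uploads.get(id)!;
  if (offset !== u.received) {
    throw new Error("409 Conflict: offset mismatch"); // client must re-query HEAD
  }
  u.received += bytes;                                // tus PATCH: append at offset
  return u.received;
}
```

On reconnect the client asks for `currentOffset` and resumes from there, so a dropped connection costs at most one chunk of retransmission.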

BullMQ Over Synchronous Processing

Processing a full-resolution image synchronously in the upload handler would block the event loop for 500ms–2s depending on file size — untenable for a concurrent API. BullMQ backed by Redis decouples upload acknowledgment from processing completion. The API responds in single-digit milliseconds; the frontend polls the job status endpoint (which reads from Redis, not the job queue) to track processing state. This architecture also made it straightforward to add job retries and failure callbacks without touching the upload handler.
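
The worker side of that split can be sketched with a map standing in for the Redis result store (names are illustrative; the real workers run Jimp where the comment indicates):

```typescript
// Sketch of the worker loop: drain jobs and write status into a result
// store that the polling endpoint reads (a Map stands in for Redis).
type Status = "queued" | "done";
const jobQueue: { jobId: string }[] = [{ jobId: "job-1" }];
const statusStore = new Map<string, Status>([["job-1", "queued"]]);

function drainOnce(): void {
  const job = jobQueue.shift();
  if (!job) return;
  // ... resize / compress with Jimp would happen here ...
  statusStore.set(job.jobId, "done"); // the polling endpoint reads this store
}

function jobStatus(jobId: string): Status | undefined {
  return statusStore.get(jobId);      // GET /jobs/:id/status
}
```

Because the status endpoint reads the store rather than the queue, polling never contends with workers for queue access.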

Riverpod's Compile-Time Provider Graph

Riverpod's ref.watch and ref.read patterns enforce that provider dependencies are declared explicitly at compile time. This caught several circular dependency issues during development that would have surfaced as runtime errors in BLoC or GetX. The autoDispose modifier on gallery providers ensures that network resources are released when a screen is popped, which matters on Android where multiple back-stack entries can coexist in memory.

Cross-Platform Parity Without Divergence

Flutter's web target and Android target share application logic but differ in platform channel calls (Firebase SDK initialization, file picker behavior, deep link handling). These differences are isolated behind thin platform service abstractions. The gallery rendering, upload state machine, and authentication flow are identical across both targets — a single test suite covers both.
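
The shape of those abstractions is ordinary interface-based injection; a generic sketch in TypeScript rather than Dart, with entirely illustrative names and return values:

```typescript
// Sketch of a thin platform-service abstraction: shared logic depends on
// an interface, and each target supplies its own implementation.
interface PlatformService {
  pickFile(): string; // file picker behavior differs per platform
}

class AndroidPlatform implements PlatformService {
  pickFile(): string { return "content://picked-photo"; } // native picker URI
}

class WebPlatform implements PlatformService {
  pickFile(): string { return "blob:picked-photo"; }      // browser blob URL
}

// Shared upload logic is written once against the interface.
function startUpload(platform: PlatformService): string {
  return `uploading ${platform.pickFile()}`;
}
```

A single test suite can exercise `startUpload` against both implementations, which is what keeps the targets from diverging.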

Tech Stack Rationale

| Component | Technology | Rationale |
| --- | --- | --- |
| Frontend | Flutter + Dart | Single codebase for Android APK and web PWA; strong typing end-to-end |
| State Management | Riverpod | Compile-time dependency resolution, minimal boilerplate, autoDispose for memory hygiene |
| Navigation | GoRouter | Declarative deep-link routing with type-safe route parameters |
| HTTP Server | Fastify + TypeScript | Lower per-request overhead than Express; plugin ecosystem covers auth, rate limiting, and OpenAPI |
| Upload Protocol | tus | Resumable chunked uploads as a protocol guarantee, not an application retry policy |
| Job Queue | BullMQ | Redis-backed async workers; stateless and horizontally scalable |
| Caching | Redis | 43% query reduction; shared session state across worker processes |
| Image Processing | Jimp | Pure-Node image operations; adequate throughput for current upload volume |
| Authentication | Firebase Auth | Managed Google OAuth; JWT verification backend-authoritative |