╦  ╦╦╔╗╔╔═╗╦ ╦╔═╗╔═╗╔═╗╔╦╗  ╔╦╗
╚╗╔╝║║║║╠═╣╚╦╝╠═╣║ ╦╠═╣║║║   ║║
 ╚╝ ╩╝╚╝╩ ╩ ╩ ╩ ╩╚═╝╩ ╩╩ ╩  ═╩╝
ISSUE #001 → BACKEND ENGINEER


BACKEND

Building scalable systems and APIs.

I'm a Backend Engineer from Chennai with 2+ years of production experience. I specialise in Django, REST APIs, PostgreSQL, Redis, and AWS — shipping backend systems that handle real load, real users, and real problems.

LOCATION Chennai, IN
EXPERIENCE 2+ Years
CURRENT ROLE Trames Pvt Ltd
STATUS EMPLOYED
vinayagam@backend:~$
Vinayagam D — Python Backend Engineer
Chennai | Backend Eng | Python
200K+ Users Served // 200% API Performance Gain // 60% Manual Work Automated // 8+ Production Apps Shipped
01

ABOUT
ME

$ cat about/vinayagam.txt

I'm a Backend Engineer based in Chennai, India with 2+ years of production experience designing and shipping scalable backend systems. I specialise in Django and Django REST Framework — building structured REST APIs, designing PostgreSQL schemas, implementing Redis caching strategies, and deploying production workloads on AWS. I build for correctness first, then optimise for scale.


Currently at Trames Private Limited — a Singapore-based logistics technology company — building Node.js and TypeScript backend services for global freight operations, real-time shipment tracking, and supply chain data infrastructure.


Previously at Thaagam Foundation, I architected 8+ production Django backends serving 200K+ users — including webhook-driven messaging systems, AI-powered document validation pipelines, multi-gateway payment backends, and real-time features via Django Channels and WebSockets. I also built Commai, a multi-tenant SaaS omnichannel communication platform integrating WhatsApp, Instagram, Facebook Messenger, and Email APIs into a unified AI-assisted dashboard.

02

WORK
HISTORY

FEB 2026 — PRESENT

Backend Engineer

TRAMES PRIVATE LIMITED · Singapore (Remote)
TypeScript · Node.js · PostgreSQL · AWS

→ Contributing to a logistics technology platform that uses AI, blockchain, and cloud infrastructure for global supply chain and freight management.

→ Building and maintaining backend services in TypeScript and Node.js supporting global freight operations and real-time shipment tracking.

OCT 2023 — OCT 2025

Python Django Developer

THAAGAM FOUNDATION · Chennai, India
Python · Django · DRF · PostgreSQL · Redis · AWS EC2 · S3 · SES · SNS · WhatsApp API · Gemini AI · Django Channels

→ Built 8+ scalable production Django backends serving 200K+ users with 99%+ uptime across CRM, HR management, and civic service platforms.

→ Developed a WhatsApp automation platform using the WhatsApp Business API and AWS SNS/SES — reduced manual communication workload by 60%.

→ Integrated Google Gemini AI for document and ID validation — built an AI-powered fraud detection layer for donation and registration workflows.

→ Optimised database queries and implemented Redis caching strategies — improved API response times and report generation performance by 200%.

→ Deployed all applications on AWS (EC2, S3, Route53) and implemented WebSocket real-time features using Django Channels.

→ Built multi-platform payment integration across Razorpay and PayU gateways for donation and event management systems.

200K+ Users
200% Perf. Gain
60% Work Automated
8+ Apps Shipped
FEB 2023 — JUL 2023

PHP Development Intern

CADD CENTRE · Chennai, India
PHP · MySQL · MVC Architecture

→ Trained in PHP web development, MVC architecture, and MySQL database design.

→ Developed multiple training projects and earned professional certification.

03

SELECTED
PROJECTS

PROJECT_02 / 2024
AI · CIVIC TECH · DJANGO · 200K+ USERS

AI-Powered Civic
Services Platform

PROBLEM

A large NGO serving 200K+ citizens had no digital infrastructure for service requests or identity verification. Staff manually reviewed ID documents, processed complaints on paper, and routed them across departments — creating multi-day processing backlogs and high error rates in identity validation workflows.

SOLUTION

Built a production Django backend with DRF exposing structured REST endpoints for citizen registration, service request submission, and grievance tracking. Integrated Google Gemini AI to validate uploaded identity documents — extracting fields, verifying authenticity, and flagging anomalies asynchronously via Celery before persisting records. Redis caching serves read-heavy endpoints; document binaries are stored in AWS S3 with presigned URL access.

ARCHITECTURE
Client → Nginx → Django API → Redis Cache → PostgreSQL → Gemini AI → AWS S3
TECH STACK
Python · Django · DRF · PostgreSQL · Redis · Celery · Google Gemini AI · AWS EC2 · AWS S3 · Nginx
IMPACT
200K+
Active Users
99%+
Uptime SLA
~0
Manual ID Reviews
PROJECT_03 / 2023
FINTECH · PAYMENTS · COMPLIANCE

Thaagam Donation
Management System

PROBLEM

Donations arrived through multiple channels — online, UPI, and bank transfer — with no centralised tracking, no automated receipting, and no reconciliation workflow. Manual 80G tax receipt generation took days, and there was no audit trail for compliance reporting.

SOLUTION

Built a payment collection backend integrating Razorpay and PayU via signed webhook handlers. Each payment confirmation is processed asynchronously — transaction state is updated in PostgreSQL with idempotency checks to prevent double-processing, and AWS SES dispatches 80G-compliant tax receipts automatically. A Django admin interface provides finance teams with real-time reconciliation views and exportable transaction logs.
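The idempotency check described above can be sketched framework-agnostically. In this minimal sketch a plain dict stands in for the PostgreSQL transactions table and its uniqueness check on the gateway's payment ID; all names (`process_payment_event`, `pay_123`) are illustrative, not the production schema:

```python
# In-memory stand-in for the transactions table; production enforces
# this with a PostgreSQL unique constraint on the gateway payment ID.
transactions: dict[str, dict] = {}

def process_payment_event(event: dict) -> bool:
    """Record a gateway confirmation exactly once.

    Returns True if the event was processed, False if it was a
    duplicate delivery (gateways retry on network failure).
    """
    payment_id = event["payment_id"]  # the idempotency key
    if payment_id in transactions:
        return False  # already processed: acknowledge and do nothing
    transactions[payment_id] = {
        "amount": event["amount"],
        "status": "captured",
    }
    # Production would also enqueue the 80G receipt email via SES here.
    return True

first = process_payment_event({"payment_id": "pay_123", "amount": 500})
retry = process_payment_event({"payment_id": "pay_123", "amount": 500})
```

The duplicate delivery is acknowledged but produces no second write, which is what keeps gateway retries from double-booking a donation.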

ARCHITECTURE
Donor → Django API → Razorpay / PayU → Webhook Handler → PostgreSQL → AWS SES → Receipt
TECH STACK
Django · DRF · PostgreSQL · Razorpay · PayU · Redis · AWS SES
IMPACT

Reconciliation time reduced from days to under five minutes. Zero manual receipt generation — the complete post-payment workflow runs without human intervention. Full 80G-compliant transaction audit trail maintained in PostgreSQL.

04

SYSTEM
DESIGN

$ cat architecture/scalable-backend.txt

The architecture pattern below reflects how I design and deploy production backend systems — used across 8+ applications serving 200K+ users. Each layer has a single, well-defined responsibility. No layer bleeds into another's concern.

REQUEST FLOW — SCALABLE PRODUCTION BACKEND
Client
Browser · Mobile · API consumer
Nginx
API Gateway / Reverse Proxy
Django API
DRF · Business Logic · Auth
Redis
Cache · Queue · Session
PostgreSQL
Primary Datastore · ACID
External APIs
WhatsApp · AI · S3 · SES
ASYNC LAYER
Redis Queue (Broker) → Celery Workers → Email Dispatch · AI Processing · Webhook Handlers · File Processing
HOW THE SYSTEM WORKS
01
REQUEST FLOW

Every inbound HTTP or WebSocket request hits Nginx first. Nginx handles TLS termination, rate limiting, and connection management at the OS level — Python never sees raw connections. Nginx proxies qualified requests to Gunicorn/Daphne, which runs Django workers. Django processes authentication, validates input, executes business logic, and returns a structured response — typically in under 150ms for cached paths.

02
REDIS CACHING

Read-heavy endpoints — dashboard aggregates, user profiles, dropdown data, permission sets — are expensive to recompute on every request. Django checks Redis before hitting PostgreSQL. On a cache hit, the response is returned in sub-millisecond latency with zero database load. Cache keys are invalidated via Django signal handlers on model writes, keeping data consistent without polling. This is what delivered a 200% API performance improvement in production.
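The cache-aside read path above can be sketched in plain Python. A dict stands in for Redis and a counter tracks simulated PostgreSQL hits; the function and key names are illustrative, not the production code:

```python
cache: dict[str, dict] = {}   # stand-in for Redis
db_queries = 0                # counts simulated PostgreSQL round-trips

def fetch_profile_from_db(user_id: str) -> dict:
    """Stand-in for the real PostgreSQL query."""
    global db_queries
    db_queries += 1
    return {"id": user_id, "name": "Example User"}

def get_profile(user_id: str) -> dict:
    key = f"profile:{user_id}"
    if key in cache:                            # hit: zero database load
        return cache[key]
    profile = fetch_profile_from_db(user_id)    # miss: query once
    cache[key] = profile                        # warm the cache for later reads
    return profile

get_profile("42")   # miss: queries the database and warms the cache
get_profile("42")   # hit: served from memory
```

Two reads, one database query: every subsequent read of that key is served from memory until a write invalidates it.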

03
ASYNC WORKERS

Any operation involving network I/O — sending emails, calling AI APIs, processing uploaded documents, dispatching WhatsApp messages — is offloaded to Celery workers via Redis as the message broker. The HTTP request cycle returns immediately with a task ID. Workers retry failed tasks with exponential backoff. This is what keeps API response times consistent regardless of downstream service latency — and what made zero-blocking webhook processing possible in Commai.
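The exponential-backoff schedule mentioned above is just doubling a base delay per attempt, capped at a maximum. Celery implements the equivalent via its task retry options; this pure-function sketch (illustrative base and cap values) shows the math:

```python
def backoff_delay(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Seconds to wait before retry number `attempt` (0-indexed).

    Doubles each attempt, capped so a long outage never produces
    an unbounded wait between retries.
    """
    return min(base * (2 ** attempt), cap)

delays = [backoff_delay(n) for n in range(6)]
# 2, 4, 8, 16, 32, 64 seconds; later attempts flatten out at the cap
```

Production setups usually add random jitter on top so a burst of failed tasks does not retry in lockstep against the same downstream service.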

04
SCALABILITY

Each layer scales independently. Gunicorn workers scale horizontally behind Nginx without changing application code. Celery worker concurrency is adjusted per queue priority — high-priority queues (webhooks, payments) get dedicated workers. PostgreSQL read replicas serve analytics queries without competing with write traffic. Redis cluster mode handles cache and queue load separately. This layered scalability is what allows 200K+ users to be served from the same architectural pattern.

05

SYSTEM
DESIGN NOTES

$ cat notes/system-design.md

Short technical notes on backend architecture patterns and system design decisions — drawn from real production work across webhook systems, caching layers, and messaging infrastructure.

NOTE_01 EVENT-DRIVEN SYSTEMS

Designing Webhook Processing Systems

Webhooks deliver events from external services — payment gateways, messaging APIs, identity providers — as HTTP POST requests to your endpoint. The core engineering constraint is that the external provider expects a 200 OK response within a tight timeout window (typically 5–10 seconds). If your endpoint is slow or fails, the provider retries — often at increasing intervals. This means your webhook handler cannot synchronously execute business logic, write to the database, call external APIs, or do anything that blocks.

The correct pattern: the webhook endpoint does exactly two things — validates the payload signature and enqueues the raw event into a Redis queue. It then returns 200 OK immediately. A Celery worker picks up the event asynchronously, processes it, updates the database, and triggers any downstream effects. Failed jobs are retried with exponential backoff without the provider ever seeing the failure.
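The two-step handler can be sketched with the standard library alone: `hmac`/`hashlib` do the signature check (providers such as Razorpay and Meta sign payloads with HMAC-SHA256 over the raw body), and a deque stands in for the Redis queue. The endpoint framing, secret, and payload here are illustrative:

```python
import hmac
import hashlib
from collections import deque

WEBHOOK_SECRET = b"test-secret"   # shared secret issued by the provider
event_queue: deque = deque()      # stand-in for the Redis queue

def handle_webhook(raw_body: bytes, signature: str) -> int:
    """Validate, enqueue, acknowledge. Nothing else happens in-request."""
    expected = hmac.new(WEBHOOK_SECRET, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return 400                # reject forged or corrupted payloads
    event_queue.append(raw_body)  # a worker processes it asynchronously
    return 200                    # provider sees success within milliseconds

body = b'{"event": "payment.captured"}'
good_sig = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
status = handle_webhook(body, good_sig)
```

Note `hmac.compare_digest` rather than `==`: constant-time comparison prevents timing attacks on the signature check.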

PROCESSING FLOW
External Service
WhatsApp / Razorpay / Stripe
Webhook Endpoint
Validate sig → Enqueue → 200 OK
Redis Queue
Raw event payload
Celery Worker
Process · Retry on fail
PostgreSQL
Persist with idempotency key
WHY ASYNC Webhook endpoints must respond in under 5s. Any processing that could fail or block — database writes, API calls, file processing — must run outside the request cycle.
WHY IDEMPOTENCY Providers retry on network failure. Processing the same event twice must produce the same result. Use the event ID as an idempotency key — check before writing, skip if already processed.
USED IN Commai (WhatsApp / Instagram / FB webhooks), Donation System (Razorpay / PayU payment confirmations)
NOTE_02 PERFORMANCE ENGINEERING

Redis Caching Strategy

Not all data changes at the same rate. A user's profile, a list of country codes, a permission matrix — these change rarely but are read on every request. Fetching them from PostgreSQL on every hit is wasteful: it adds 5–50ms of latency per query, consumes database connections, and degrades under concurrent load. Redis sits between your API and PostgreSQL as an in-memory cache, serving these reads in under 1ms with no database involvement.

The hard problem isn't caching — it's invalidation. The wrong strategy (TTL-only expiry) means stale data or cache stampedes. The correct approach ties cache invalidation to model lifecycle events: Django's post_save and post_delete signals delete or update the cache key the moment the data changes. This gives you sub-millisecond reads with guaranteed consistency — you get the performance of in-memory storage without serving stale records.

CACHE READ PATH
Client Request
GET /api/profile/
Django API
Check Redis first
Redis Cache
HIT → return <1ms
→ MISS →
PostgreSQL
Query + warm cache
WHAT TO CACHE High-read, low-write data: user profiles, permission sets, dropdown data, aggregated dashboard stats, third-party API responses with a known validity window.
INVALIDATION Use Django post_save signals to delete the cache key on write. Never rely on TTL alone for data correctness — TTL is a safety net, not a strategy.
RESULT 200% API response time improvement in production across report generation endpoints and high-frequency dashboard reads at Thaagam Foundation.
NOTE_03 DISTRIBUTED SYSTEMS

Scaling Messaging Platforms

A messaging platform that processes inbound events from WhatsApp, Instagram, Messenger, and Email simultaneously faces a fundamental throughput problem: message arrival is bursty and unpredictable, while database writes, AI processing, and outbound API calls have variable latency. If you process each event synchronously in-request, a spike in inbound volume or a slow downstream API stalls the entire system.

The solution is event-driven architecture with queue-backed workers. Each inbound channel publishes events to a dedicated Redis queue rather than processing inline. Workers consume from these queues at their own pace — independently scalable based on queue depth. Priority queues ensure that payment confirmations and user-facing messages are processed before analytics events. The system never applies backpressure to the event source; it accepts everything and processes everything, eventually, correctly.

MULTI-CHANNEL EVENT FLOW
WhatsApp API
Instagram API
Facebook API
Email API
Webhook Gateway
Normalise + Enqueue
Redis Queues
Per-channel priority lanes
Worker Pool
Celery · Auto-scale on depth
PostgreSQL
Thread-safe conversation store
AI + Response
Auto-reply generation
QUEUE ISOLATION Each channel gets its own queue. A backlog on Email processing never delays WhatsApp message delivery. Workers are assigned to queues based on priority, not round-robin across all tasks.
WORKER SCALING Worker concurrency is configured per queue. During peak traffic, additional Celery processes spin up against the same Redis broker without changing application code or restarting the API server.
USED IN Commai platform — four independent webhook receivers feeding a unified normalisation pipeline with per-channel Redis queues and AI response workers.
06

TECH
STACK

$ cat skills/all.txt
## BACKEND — LANGUAGES & FRAMEWORKS
Python Expert
Django Expert
Django REST Framework Expert
Flask / FastAPI Advanced
Django Channels Advanced
## DATABASES & CACHING
PostgreSQL Expert
MySQL Advanced
Redis (Caching & Queues) Advanced
## CLOUD & DEVOPS
AWS EC2 / S3 Advanced
AWS Route53 / SES / SNS Advanced
Docker Advanced
Nginx Advanced
## ARCHITECTURE & PATTERNS
REST API Design Expert
WebSockets (Async) Advanced
Redis Caching Strategies Advanced
Async Task Processing Advanced
## AI & INTEGRATIONS
Google Gemini AI Advanced
Prompt Engineering Advanced
RAG / LangChain Intermediate
WhatsApp Business API Advanced
2+
YEARS OF PRODUCTION EXPERIENCE
8+
PRODUCTION APPS SHIPPED
200K+
USERS ACROSS DEPLOYED SYSTEMS
PRIMARY STACK
Python · Django · DRF · PostgreSQL · Redis · AWS · Docker · Nginx · Celery
07

WHAT I'M
FOCUSED ON

$ cat focus/current.txt
→ BUILDING AT TRAMES

Contributing to a global freight management platform — learning how large-scale logistics systems are architected and operated in production.

→ DEEPENING SYSTEM DESIGN

Focused on distributed systems, high-availability architectures, and building backends that scale under real-world pressure.

→ EXPLORING AI INTEGRATION

Integrating AI into backend workflows — from intelligent APIs to RAG pipelines — to build smarter, more adaptive systems.

→ GROWING AS AN ENGINEER

Reading, building side projects, and sharpening my craft every day. Still early in my journey — excited about what's ahead.

08

GET IN
TOUCH

connect.sh
$ ./contact --info
X / TWITTER x.com/_vinyjr
LOCATION Chennai, Tamil Nadu, India
$ _