VINAYAGAM D
BACKEND
Building scalable systems and APIs.
I'm a Backend Engineer from Chennai with 2+ years of production experience. I specialise in Django, REST APIs, PostgreSQL, Redis, and AWS — shipping backend systems that handle real load, real users, and real problems.
ABOUT
ME
I'm a Backend Engineer based in Chennai, India with 2+ years of production experience designing and shipping scalable backend systems. I specialise in Django and Django REST Framework — building structured REST APIs, designing PostgreSQL schemas, implementing Redis caching strategies, and deploying production workloads on AWS. I build for correctness first, then optimise for scale.
Currently at Trames Private Limited — a Singapore-based logistics technology company — building Node.js and TypeScript backend services for global freight operations, real-time shipment tracking, and supply chain data infrastructure.
Previously at Thaagam Foundation, I architected 8+ production Django backends serving 200K+ users — including webhook-driven messaging systems, AI-powered document validation pipelines, multi-gateway payment backends, and real-time features via Django Channels and WebSockets. I also built Commai, a multi-tenant SaaS omnichannel communication platform integrating WhatsApp, Instagram, Facebook Messenger, and Email APIs into a unified AI-assisted dashboard.
WORK
HISTORY
Backend Engineer
→ Contributing to a logistics technology platform that uses AI, blockchain, and cloud infrastructure for global supply chain and freight management.
→ Building and maintaining backend services in TypeScript and Node.js supporting global freight operations and real-time shipment tracking.
Python Django Developer
→ Built 8+ scalable production Django backends serving 200K+ users with 99%+ uptime across CRM, HR management, and civic service platforms.
→ Developed a WhatsApp automation platform using the WhatsApp Business API and AWS SNS/SES — reduced manual communication workload by 60%.
→ Integrated Google Gemini AI for document and ID validation — built an AI-powered fraud detection layer for donation and registration workflows.
→ Optimised database queries and implemented Redis caching strategies — improved API response times and report generation performance by 200%.
→ Deployed all applications on AWS (EC2, S3, Route53) and implemented WebSocket real-time features using Django Channels.
→ Built multi-platform payment integration across Razorpay and PayU gateways for donation and event management systems.
PHP Development Intern
→ Trained in PHP web development, MVC architecture, and MySQL database design.
→ Developed multiple training projects and earned professional certification.
SELECTED
PROJECTS
COMMAI —
OMNICHANNEL AI
A production SaaS platform that unifies customer communication across WhatsApp, Instagram, Facebook Messenger, and Email into a single AI-powered business dashboard — built on a multi-tenant Django backend with webhook-driven real-time message processing.
Businesses manage customer conversations across four separate platforms — WhatsApp, Instagram, Facebook Messenger, and Email — each requiring a distinct tool, credential set, and workflow. Support teams lose context when switching between interfaces, responses are delayed, messages fall through the cracks, and there is no unified audit trail of customer interactions across channels.
Designed and built a multi-tenant SaaS backend in Django where each channel (WhatsApp Business API, Instagram Graph API, Facebook Messenger API, Email) has a dedicated webhook endpoint. Inbound events are normalised into a unified message schema, enqueued in Redis for async processing, persisted to PostgreSQL with full conversation threading per tenant, and surfaced via a single dashboard API. An AI layer analyses inbound message content and generates automated reply suggestions to reduce first-response time and manual support workload.
Each tenant's data is isolated at the schema level — webhook endpoints are channel-specific but converge into a shared message normalisation pipeline before entering the queue. This keeps all business logic channel-agnostic: adding a new messaging integration requires only a new webhook handler; the core processing pipeline, database schema, and AI layer remain untouched. Celery workers handle async dispatch so webhook endpoints return 200 OK immediately — the platform never blocks on outbound message delivery.
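The channel-agnostic normalisation step can be sketched without the framework. The payload fields, channel set, and UnifiedMessage schema below are illustrative stand-ins, not Commai's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedMessage:
    """Channel-agnostic message record, persisted per tenant."""
    tenant_id: str
    channel: str          # e.g. "whatsapp" | "email"
    sender: str
    body: str
    received_at: datetime

def normalise_whatsapp(tenant_id: str, event: dict) -> UnifiedMessage:
    # Real WhatsApp webhook payloads nest the message several levels
    # deep; this flattened shape is for illustration only.
    return UnifiedMessage(
        tenant_id=tenant_id,
        channel="whatsapp",
        sender=event["from"],
        body=event["text"],
        received_at=datetime.now(timezone.utc),
    )

def normalise_email(tenant_id: str, event: dict) -> UnifiedMessage:
    return UnifiedMessage(
        tenant_id=tenant_id,
        channel="email",
        sender=event["sender"],
        body=event["subject"] + "\n" + event["body"],
        received_at=datetime.now(timezone.utc),
    )

# Adding a new channel means adding one normaliser here; everything
# downstream (queue, schema, AI layer) only ever sees UnifiedMessage.
NORMALISERS = {"whatsapp": normalise_whatsapp, "email": normalise_email}

def handle_webhook(tenant_id: str, channel: str, event: dict) -> UnifiedMessage:
    return NORMALISERS[channel](tenant_id, event)
```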
Eliminated the operational overhead of managing 4 separate communication platforms. Webhook-driven architecture ensures zero message loss under load. AI-assisted responses reduce average first-response time and allow support teams to handle higher conversation volume without scaling headcount.
AI-Powered Civic
Services Platform
A large NGO serving 200K+ citizens had no digital infrastructure for service requests or identity verification. Staff manually reviewed ID documents, processed complaints on paper, and routed them across departments — creating multi-day processing backlogs and high error rates in identity validation workflows.
Built a production Django backend with DRF exposing structured REST endpoints for citizen registration, service request submission, and grievance tracking. Integrated Google Gemini AI to validate uploaded identity documents — extracting fields, verifying authenticity, and flagging anomalies asynchronously via Celery before persisting records. Redis caching serves read-heavy endpoints; document binaries are stored in AWS S3 with presigned URL access.
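The post-extraction validation step can be sketched without the framework. The field names, anomaly rules, and confidence threshold here are assumptions for illustration; in production the fields dict would come from the Gemini extraction call inside a Celery task:

```python
def flag_anomalies(fields: dict) -> list[str]:
    """Rule layer applied on top of AI-extracted ID fields.
    Field names and thresholds are illustrative, not the real ones."""
    flags = []
    if not fields.get("id_number"):
        flags.append("missing_id_number")
    if not fields.get("name", "").strip():
        flags.append("missing_name")
    if fields.get("confidence", 1.0) < 0.8:
        flags.append("low_extraction_confidence")
    return flags

def validate_document(fields: dict) -> dict:
    """What the async task would persist after extraction:
    the fields plus any anomaly flags and a resulting status."""
    flags = flag_anomalies(fields)
    return {
        "fields": fields,
        "flags": flags,
        "status": "flagged" if flags else "verified",
    }
```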
Thaagam Donation
Management System
Donations arrived through multiple channels — online, UPI, and bank transfer — with no centralised tracking, no automated receipting, and no reconciliation workflow. Manual 80G tax receipt generation took days, and there was no audit trail for compliance reporting.
Built a payment collection backend integrating Razorpay and PayU via signed webhook handlers. Each payment confirmation is processed asynchronously — transaction state is updated in PostgreSQL with idempotency checks to prevent double-processing, and AWS SES dispatches 80G-compliant tax receipts automatically. A Django admin interface provides finance teams with real-time reconciliation views and exportable transaction logs.
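A minimal sketch of the idempotency check, with an in-memory dict standing in for the unique-constrained PostgreSQL column; the key construction and event names are illustrative:

```python
import hashlib

class PaymentProcessor:
    """In-memory stand-in for the database idempotency check.
    In production the key is a unique-constrained column and the
    check-and-insert happens in a single transaction."""

    def __init__(self):
        self._seen: dict[str, dict] = {}

    @staticmethod
    def idempotency_key(event: dict) -> str:
        # Gateways send a stable payment id; combining it with the
        # event type keeps a capture and a refund for the same
        # payment distinct.
        raw = f"{event['payment_id']}:{event['event']}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def process(self, event: dict) -> bool:
        """Returns True if processed, False if this was a duplicate
        delivery (gateways retry webhooks on timeout)."""
        key = self.idempotency_key(event)
        if key in self._seen:
            return False
        self._seen[key] = event   # update transaction state once
        # ...receipt dispatch via SES would be triggered here...
        return True
```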
Reconciliation time reduced from days to under five minutes. Zero manual receipt generation — the complete post-payment workflow runs without human intervention. Full 80G-compliant transaction audit trail maintained in PostgreSQL.
SYSTEM
DESIGN
The architecture pattern below reflects how I design and deploy production backend systems — used across 8+ applications serving 200K+ users. Each layer has a single, well-defined responsibility. No layer bleeds into another's concern.
Every inbound HTTP or WebSocket request hits Nginx first. Nginx handles TLS termination, rate limiting, and connection management at the OS level — Python never sees raw connections. Nginx proxies qualified requests to Gunicorn/Daphne, which runs Django workers. Django processes authentication, validates input, executes business logic, and returns a structured response — typically in under 150ms for cached paths.
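An illustrative nginx server block for this layer, assuming Gunicorn on port 8000 and Daphne on 8001. The domain, ports, and rate limits are placeholders, not the production config:

```nginx
# Rate-limit zone: 20 req/s per client IP (placeholder values).
limit_req_zone $binary_remote_addr zone=api:10m rate=20r/s;

server {
    listen 443 ssl;
    server_name api.example.com;            # placeholder domain

    location /ws/ {
        proxy_pass http://127.0.0.1:8001;   # Daphne (ASGI) for WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        limit_req zone=api burst=40 nodelay;
        proxy_pass http://127.0.0.1:8000;   # Gunicorn (WSGI) workers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```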
Read-heavy endpoints — dashboard aggregates, user profiles, dropdown data, permission sets — are expensive to recompute on every request. Django checks Redis before hitting PostgreSQL. On a cache hit, the response is returned with sub-millisecond latency and zero database load. Cache keys are invalidated via Django signal handlers on model writes, keeping data consistent without polling. This is what delivered a 200% API performance improvement in production.
Any operation involving network I/O — sending emails, calling AI APIs, processing uploaded documents, dispatching WhatsApp messages — is offloaded to Celery workers via Redis as the message broker. The HTTP request cycle returns immediately with a task ID. Workers retry failed tasks with exponential backoff. This is what keeps API response times consistent regardless of downstream service latency — and what made zero-blocking webhook processing possible in Commai.
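The retry schedule can be sketched as exponential backoff with full jitter. Celery's retry_backoff option behaves along these lines; the base, cap, and jitter choices here are illustrative:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Delay before retry number `attempt` (1-based).

    The raw delay doubles each attempt (base * 2^(attempt-1)), is
    capped so retries never wait indefinitely, and is then jittered
    uniformly so many failing workers don't retry in lockstep.
    """
    exp = min(cap, base * (2 ** (attempt - 1)))
    return random.uniform(0, exp)
```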
Each layer scales independently. Gunicorn workers scale horizontally behind Nginx without changing application code. Celery worker concurrency is adjusted per queue priority — high-priority queues (webhooks, payments) get dedicated workers. PostgreSQL read replicas serve analytics queries without competing with write traffic. Redis cluster mode handles cache and queue load separately. This layered scalability is what allows 200K+ users to be served from the same architectural pattern.
SYSTEM
DESIGN NOTES
Short technical notes on backend architecture patterns and system design decisions — drawn from real production work across webhook systems, caching layers, and messaging infrastructure.
Designing Webhook Processing Systems
Webhooks deliver events from external services — payment gateways, messaging APIs, identity providers — as HTTP POST requests to your endpoint. The core engineering constraint is that the external provider expects a 200 OK response within a tight timeout window (typically 5–10 seconds). If your endpoint is slow or fails, the provider retries — often at increasing intervals. This means your webhook handler cannot synchronously execute business logic, write to the database, call external APIs, or do anything that blocks.
The correct pattern: the webhook endpoint does exactly two things — validates the payload signature and enqueues the raw event into a Redis queue. It then returns 200 OK immediately. A Celery worker picks up the event asynchronously, processes it, updates the database, and triggers any downstream effects. Failed jobs are retried with exponential backoff without the provider ever seeing the failure.
WhatsApp / Razorpay / Stripe → webhook endpoint: validate sig, enqueue raw event payload, return 200 OK → Redis queue → Celery worker: process, retry on fail → PostgreSQL: persist with idempotency key
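The two-step handler can be sketched in plain Python, with a deque standing in for the Redis queue and an HMAC-SHA256 scheme standing in for the provider's signature format (each gateway documents its own):

```python
import hashlib
import hmac
import json
from collections import deque

QUEUE = deque()          # stands in for the Redis list/stream
SECRET = b"whsec_demo"   # placeholder webhook secret

def verify_signature(body: bytes, signature: str) -> bool:
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # Constant-time compare: never use `==` on signatures.
    return hmac.compare_digest(expected, signature)

def webhook_endpoint(body: bytes, signature: str) -> int:
    """Does exactly two things: verify, then enqueue.
    Returns the HTTP status code to send back immediately."""
    if not verify_signature(body, signature):
        return 401
    QUEUE.append(json.loads(body))   # raw event; a worker handles it later
    return 200
```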
Redis Caching Strategy
Not all data changes at the same rate. A user's profile, a list of country codes, a permission matrix — these change rarely but are read on every request. Fetching them from PostgreSQL on every hit is wasteful: it adds 5–50ms of latency per query, consumes database connections, and degrades under concurrent load. Redis sits between your API and PostgreSQL as an in-memory cache, serving these reads in under 1ms with no database involvement.
The hard problem isn't caching — it's invalidation. The wrong strategy (TTL-only expiry) means stale data or cache stampedes. The correct approach ties cache invalidation to model lifecycle events: Django's post_save and post_delete signals delete or update the cache key the moment the data changes. This gives you sub-millisecond reads with guaranteed consistency — you get the performance of in-memory storage without serving stale records.
GET /api/profile/ → check Redis first → HIT: return in <1ms · MISS: query PostgreSQL + warm cache
Use post_save signals to delete the cache key on write. Never rely on TTL alone for data correctness — TTL is a safety net, not a strategy.
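A dependency-free sketch of this cache-aside pattern, with dicts standing in for Redis and PostgreSQL; in Django the invalidation line would live in a post_save signal handler:

```python
CACHE: dict[str, dict] = {}           # stands in for Redis
DB = {"u1": {"name": "Asha"}}         # stands in for PostgreSQL

def get_profile(user_id: str) -> dict:
    key = f"profile:{user_id}"
    if key in CACHE:                  # HIT: database never touched
        return CACHE[key]
    profile = DB[user_id]             # MISS: query, then warm the cache
    CACHE[key] = profile
    return profile

def save_profile(user_id: str, data: dict) -> None:
    """Write path. In Django, the pop() below would run inside a
    post_save signal handler so every write path invalidates
    automatically, with TTL kept only as a safety net."""
    DB[user_id] = data
    CACHE.pop(f"profile:{user_id}", None)   # invalidate on write
```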
Scaling Messaging Platforms
A messaging platform that processes inbound events from WhatsApp, Instagram, Messenger, and Email simultaneously faces a fundamental throughput problem: message arrival is bursty and unpredictable, while database writes, AI processing, and outbound API calls have variable latency. If you process each event synchronously in-request, a spike in inbound volume or a slow downstream API stalls the entire system.
The solution is event-driven architecture with queue-backed workers. Each inbound channel publishes events to a dedicated Redis queue rather than processing inline. Workers consume from these queues at their own pace — independently scalable based on queue depth. Priority queues ensure that payment confirmations and user-facing messages are processed before analytics events. The system never applies backpressure to the event source; it accepts everything and processes everything, eventually, correctly.
Inbound channels → normalise + enqueue → per-channel priority lanes → Celery workers, auto-scaled on queue depth → thread-safe conversation store → auto-reply generation
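A minimal model of priority-lane consumption. The lane names are illustrative; Celery accomplishes the same effect by binding dedicated workers to named queues:

```python
from collections import deque

# One lane per priority class (illustrative names).
LANES = {"payments": deque(), "messages": deque(), "analytics": deque()}
PRIORITY = ["payments", "messages", "analytics"]

def enqueue(lane: str, event: dict) -> None:
    LANES[lane].append(event)

def next_event():
    """Drain higher-priority lanes first, so a flood of analytics
    events never delays a payment confirmation. Returns None when
    every lane is empty."""
    for lane in PRIORITY:
        if LANES[lane]:
            return LANES[lane].popleft()
    return None
```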
TECH
STACK
WHAT I'M
FOCUSED ON
Contributing to a global freight management platform — learning how large-scale logistics systems are architected and operated in production.
Focused on distributed systems, high-availability architectures, and building backends that scale under real-world pressure.
Integrating AI into backend workflows — from intelligent APIs to RAG pipelines — to build smarter, more adaptive systems.
Reading, building side projects, and sharpening my craft every day. Still early in my journey — excited about what's ahead.
GITHUB
ANALYTICS
Multi-tenant SaaS backend unifying WhatsApp Business API, Instagram Graph API, Facebook Messenger API, and Email into a single webhook-driven Django platform with AI-powered automated response generation.
AI-powered civic services backend serving 200K+ citizens — Django + Gemini AI + PostgreSQL. Asynchronous identity document validation via Celery; Redis caching on read-heavy citizen profile endpoints.
Webhook-driven payment backend integrating Razorpay and PayU with idempotency guarantees, signature verification, async receipt dispatch via AWS SES, and a full 80G-compliant transaction audit trail in PostgreSQL.