About
I’m a Software Engineer at Zenact AI, where I build AI agents, intelligent workflows, and backend systems.
I have hands-on experience in backend engineering, automation, and scalable system design, working with startups across AI, defence, and healthcare.
I enjoy understanding how complex systems work end-to-end, and I’m naturally curious about technology, products, and large-scale problem-solving.
See you sometime,
Cheers :)
Work Experience

Zenact AI
Software Engineer
• Implemented a backend service that allows users to upload mobile application binaries for testing, including S3 storage integration and metadata persistence (user, organization, timestamps, and asset details) in DynamoDB.
• Built a retrieval service that returns all uploaded application assets and associated metadata for a given organization, enabling smooth selection and reuse in automated testing workflows.
• Developed a status retrieval service that exposes real-time installation progress to the frontend by querying DynamoDB and returning structured JSON responses.
• Built lifecycle management APIs to start and stop AI-driven testing agents for a given execution run, ensuring controlled orchestration of test sessions.
• Developed a background worker service to track mobile app installation status on AWS Device Farm, aggregating event updates and persisting status records in DynamoDB.
• Optimized the Docker CI build pipeline by restructuring build stages and caching dependencies, reducing build times by 60% and build context size by 97.5%, significantly improving CI/CD throughput and developer productivity.
• Built an LLM-agnostic agent layer enabling dynamic switching between OpenAI, Groq, and Gemini through environment variables, with zero downtime and uninterrupted operations.
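The provider-switching idea in the last bullet can be sketched roughly like this. This is an illustrative, simplified version, not Zenact's actual code: the provider functions are stand-ins for real SDK wrappers, and the `LLM_PROVIDER` variable name is an assumption.

```python
import os

# Hypothetical provider clients; in production these would wrap the
# OpenAI, Groq, and Gemini SDKs behind one shared interface.
def _openai_complete(prompt: str) -> str:
    return f"[openai] {prompt}"

def _groq_complete(prompt: str) -> str:
    return f"[groq] {prompt}"

def _gemini_complete(prompt: str) -> str:
    return f"[gemini] {prompt}"

# Registry keyed by provider name; supporting a new provider is one entry here.
_PROVIDERS = {
    "openai": _openai_complete,
    "groq": _groq_complete,
    "gemini": _gemini_complete,
}

def complete(prompt: str) -> str:
    """Dispatch to whichever provider the LLM_PROVIDER env var selects."""
    provider = os.environ.get("LLM_PROVIDER", "openai")
    try:
        return _PROVIDERS[provider](prompt)
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {provider!r}")
```

Because the provider is resolved per call from the environment, switching backends needs no code change or redeploy, which is what makes the zero-downtime claim work.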

Mercor
Software Engineer (Work Trial)
• Built an end-to-end Typeahead Search System that improved recruiter workflow by surfacing personalized suggestions based on search history, frequency, and recency.
• Designed a new MongoDB collection and schema to store user-specific search history, including query text, truncated display text, hard filters, timestamps, and usage statistics.
• Implemented intelligent deduplication logic to detect near-duplicate queries, merge history, update use-count, refresh timestamps, and maintain a rolling limit of the most recent 50 queries.
• Developed a Redis-powered caching layer (1-hour TTL) for fast prefix matching, achieving sub-100ms response times with graceful fallback to MongoDB on cache misses.
• Built a robust Typeahead API with Firebase JWT authentication, cache-first retrieval, ranking logic, and filter-aware suggestion handling.
• Integrated query saving into the search lifecycle: automatically persisted queries, filters, timestamps, and usage stats after each search execution.
• Containerized and monitored the system using Docker, AWS, and DataDog, and produced detailed sequence diagrams and architectural documentation.

Tennr (YC W23)
Implementation Engineer (Intern)
• Built and deployed end-to-end automation workflows on Tennr’s healthcare automation platform, using RaeLM (Tennr’s medical-document LLM) to process referrals, eligibility documents, and insurance information with 97% extraction accuracy.
• Designed workflow logic for document classification, data extraction, patient qualification, and insurance verification, ensuring accurate routing and reliable population of client EHR and billing systems.
• Engineered workflow optimizations that reduced approval and processing delays by 20% by improving triage rules, error-handling branches, and follow-up logic for missing or incomplete documents.
• Monitored live automation pipelines across multiple healthcare clients and resolved workflow-level issues, configuration bugs, and edge-case failures, maintaining 100% uptime for all assigned accounts.
• Collaborated with operations and engineering teams to validate outputs, refine extraction rules, and improve system reliability, ensuring consistent performance for clinics, DMEs, and specialty practices.
• Delivered client-specific implementations for Neb Medicals, Aveanna, Gammie, Lymphacare, and other healthcare providers, tailoring automation logic to their documentation formats, reimbursement workflows, and operational needs.
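The triage and follow-up branching described above can be sketched in miniature. The field names here are hypothetical, not Tennr's actual schema; the point is only the shape of the logic: complete referrals route forward, incomplete ones branch to follow-up with an explicit list of what is missing.

```python
# Hypothetical required fields for a referral; real schemas vary per client.
REQUIRED_FIELDS = {"patient_name", "insurance_id", "referring_provider"}

def triage(referral: dict) -> dict:
    """Route a referral: complete documents proceed to insurance
    verification; incomplete ones go to a follow-up queue that
    records exactly which fields are missing."""
    missing = sorted(REQUIRED_FIELDS - referral.keys())
    if missing:
        return {"route": "follow_up", "missing": missing}
    return {"route": "insurance_verification", "missing": []}
```

Making the missing-field list explicit is what enables targeted follow-up requests instead of generic rejections, which is where the processing-delay reduction comes from.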

Neuralix AI
Software Engineer (Intern)
• Worked on backend components for a real-time intelligence and situational-awareness platform that unified data from multi-modal sources such as satellite feeds, structured files, documents, images, and API-based inputs.
• Contributed to the design of ingestion pipelines for processing diverse data formats (JSON, XML, HTML, CSV, PDF, text, images, videos), ensuring smooth normalization and storage across PostgreSQL, MongoDB, and a vector database.
• Implemented backend endpoints in FastAPI to support chat-based querying, document retrieval, and geospatial insight services used across operational teams.
• Assisted with integration of backend services into a Docker-based deployment setup and helped maintain reliable communication between ingestion, storage, and search subsystems.
• Collaborated with senior engineers to refine backend logic, troubleshoot pipeline issues, and strengthen the platform’s ability to handle continuous multi-source data updates.
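The ingestion normalization described above boils down to dispatching on input format and emitting uniform records. A simplified sketch covering three of the listed formats (the real pipelines handled more, including PDFs, images, and video):

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def normalize(raw: str, fmt: str) -> list:
    """Normalize one payload into a list of flat dict records,
    ready for storage in a relational, document, or vector store."""
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    if fmt == "xml":
        root = ET.fromstring(raw)
        # One record per child element; tags become field names.
        return [{field.tag: field.text for field in item} for item in root]
    raise ValueError(f"Unsupported format: {fmt}")
```

Converging every source onto the same record shape early is what lets the downstream storage and search subsystems stay format-agnostic.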
Skills
Contributions
Giving back to the community through code and documentation.
📌 Open edX Credentials: Deprecated Feature Flag Cleanup
Contributed to the Open edX Credentials repository by removing a deprecated waffle flag (USE_CERTIFICATE_AVAILABLE_DATE).
Cleaned up unused definitions, stale references, and outdated comments from the repo.
Ensured full consistency by verifying that no remaining references to the flag existed across the project.
Helped reduce technical debt and simplify the configuration layer.
Improved code readability and maintainability for future contributors and maintainers.
📌 Implemented Beta Likelihood Support for Bounded Data Modeling
Contributed Beta likelihood support to Prophetverse, a Bayesian extension of Meta’s Prophet time-series forecasting library.
Added first-class support for modeling bounded data (0–1) such as CTR, conversion rates, and retention ratios.
Implemented BetaTargetLikelihood with stable link functions, smooth clipping, and NumPyro-based likelihood calculations.
Added the new ProphetBeta model and integrated it across the likelihood registry.
Wrote comprehensive unit tests and updated documentation with mathematical details.
Improved Prophetverse's modeling flexibility and removed the need for workarounds like logit transforms.
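The core of the contribution, a Beta log-likelihood with boundary clipping, can be illustrated in plain Python. This is not the Prophetverse implementation (which builds on NumPyro); it is a hand-rolled version of the same density, with a hard clip standing in for the smoother clipping used there.

```python
from math import lgamma, log

def beta_log_pdf(y: float, alpha: float, beta: float,
                 eps: float = 1e-6) -> float:
    """Log-density of a Beta(alpha, beta) observation, with the target
    clipped away from the {0, 1} boundary for numerical stability."""
    y = min(max(y, eps), 1.0 - eps)  # keep y inside the open interval (0, 1)
    # log B(alpha, beta) via log-gamma, avoiding overflow for large shapes
    log_norm = lgamma(alpha) + lgamma(beta) - lgamma(alpha + beta)
    return (alpha - 1.0) * log(y) + (beta - 1.0) * log(1.0 - y) - log_norm
```

Because the Beta density already lives on (0, 1), rates like CTR or retention can be modeled directly, which is exactly what makes logit-transform workarounds unnecessary.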
Get in Touch
Open to opportunities, projects, and interesting conversations. Connect via the socials below, or feel free to schedule an intro call.

