LogicTree, Inc.

April 16, 2020

Scaling a Growing API: My Experience at Logictree.com

This article details my experience scaling the backend API behind Logictree.com's data247.com application. The API was initially deployed on a single server with a failover backup, but its success soon demanded a more robust infrastructure to handle the growing influx of clients.

Scaling Challenges and Solutions:

  • Server Scaling: We transitioned from a single server to a multi-server architecture with dedicated servers for the main API, database instances, and microservices.
  • High Availability and Load Balancing: HAProxy ensured high availability and distributed traffic across the API servers, while MaxScale performed the same role for the MariaDB servers.
  • Task Queues for Efficiency: Celery, a distributed task queue, let us run time-consuming jobs asynchronously, so clients no longer waited on work that could happen in the background (a minimal sketch follows this list).
  • Caching for Performance: We implemented a multi-layered caching strategy using Redis (in-memory) and MongoDB (NoSQL) to optimize both read and write operations (see the read-through sketch below).
  • Asynchronous API for Scalability: To combat high CPU usage, we moved the API to an asynchronous model, cutting CPU consumption from over 90% to under 40% (an example handler is sketched below).
  • Handling Bulk Requests: For our B2B clients with bulk needs, a combination of Python's asyncio library, MariaDB stored procedures, database sharding, and query optimizations let us process 3,000 requests within 9 seconds (see the bulk-lookup sketch below).
  • Proactive Monitoring: Sentry and Percona tools gave us proactive performance monitoring and error tracking, letting us address issues before they impacted clients (a Sentry setup sketch appears below).
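
To make the task-queue idea concrete, here is a minimal Celery sketch. The broker URL, task name, and the work inside the task are illustrative assumptions, not the actual data247.com code:

    # tasks.py -- minimal Celery sketch (broker URL and task body are assumptions)
    from celery import Celery

    app = Celery("api_tasks", broker="redis://localhost:6379/0")

    @app.task
    def generate_report(client_id):
        # Long-running work (report building, bulk lookups, etc.) runs here,
        # off the request/response path, so the API can reply immediately.
        return f"report-{client_id}.csv"

    # In a request handler, the API only enqueues the job:
    # generate_report.delay(client_id=42)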
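
The caching layers can be pictured as a simple read-through chain: check Redis first, fall back to MongoDB, and warm Redis on the way out. The key scheme, TTL, and collection names below are assumptions for illustration:

    # cache.py -- read-through cache sketch; keys, TTL, and collection names are assumptions
    import json
    import redis
    from pymongo import MongoClient

    r = redis.Redis(host="localhost", port=6379, db=1)
    records = MongoClient("mongodb://localhost:27017")["api_cache"]["records"]

    def get_record(key):
        # Layer 1: in-memory Redis, the cheapest lookup.
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)

        # Layer 2: MongoDB, still far cheaper than recomputing from MariaDB.
        doc = records.find_one({"_id": key}, {"_id": 0})
        if doc is not None:
            r.setex(key, 300, json.dumps(doc))  # warm Redis for five minutes
        return doc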
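
For the asynchronous model, the gain comes from not blocking a worker while a database query is in flight. The sketch below uses aiohttp and aiomysql purely for illustration; the actual framework and schema may differ:

    # async_api.py -- asynchronous endpoint sketch (aiohttp/aiomysql and the schema are assumptions)
    from aiohttp import web
    import aiomysql

    async def lookup(request):
        phone = request.query.get("phone", "")
        pool = request.app["db_pool"]
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                # The event loop serves other requests while this query is in
                # flight, which is what keeps CPU usage low under load.
                await cur.execute("SELECT carrier FROM numbers WHERE phone = %s", (phone,))
                row = await cur.fetchone()
        return web.json_response({"phone": phone, "carrier": row[0] if row else None})

    async def init_app():
        app = web.Application()
        app["db_pool"] = await aiomysql.create_pool(host="localhost", user="api",
                                                    password="change-me", db="lookups")
        app.add_routes([web.get("/lookup", lookup)])
        return app

    if __name__ == "__main__":
        web.run_app(init_app())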
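
Bulk processing follows the same idea, fanning a batch out over the event loop with a cap on concurrency. The stored-procedure name, connection details, and the shard routing (omitted here) are assumptions:

    # bulk.py -- bulk-lookup sketch; procedure name and connection details are assumptions
    import asyncio
    import aiomysql

    CONCURRENCY = 200  # cap on in-flight queries per worker

    async def lookup_one(pool, sem, phone):
        async with sem:
            async with pool.acquire() as conn:
                async with conn.cursor() as cur:
                    # A stored procedure keeps the per-row work inside MariaDB.
                    await cur.callproc("sp_lookup_number", (phone,))
                    return await cur.fetchone()

    async def bulk_lookup(phones):
        pool = await aiomysql.create_pool(host="localhost", user="api",
                                          password="change-me", db="lookups")
        sem = asyncio.Semaphore(CONCURRENCY)
        try:
            return await asyncio.gather(*(lookup_one(pool, sem, p) for p in phones))
        finally:
            pool.close()
            await pool.wait_closed()

    # asyncio.run(bulk_lookup(batch_of_3000_numbers))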
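
Finally, wiring Sentry into a Python service takes only a few lines; the DSN and sample rate below are placeholders rather than our real project settings (Percona's tools sit on the database side and need no application code):

    # monitoring.py -- Sentry setup sketch; DSN and sample rate are placeholders
    import sentry_sdk

    sentry_sdk.init(
        dsn="https://<key>@sentry.example.com/<project>",
        environment="production",
        traces_sample_rate=0.1,  # sample a slice of transactions for performance data
    )

    # Unhandled exceptions are reported automatically; handled ones can be
    # forwarded explicitly with sentry_sdk.capture_exception(err).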

Lessons Learned and Looking Forward:

Logictree.com provided me with invaluable experience in scaling a complex API. This journey is ongoing, and I'm excited to share my future software adventures with you on LinkedIn. Follow me to stay connected!