Best Practices for Webhooks
This page includes best practices and architecture considerations to keep in mind when implementing a system to receive Livespace Webhooks.
Building for scalability
It's important to keep scalability in mind as you build your integration. During peak times, Livespace can generate a high volume of Webhooks in a short period.
Using asynchronous processing
We strongly recommend using asynchronous processing for your Endpoint.
Instead of processing the Webhook directly within the HTTP request handler, you should:
- Verify the signature.
- Quickly store the Webhook in a queue.
- Immediately return a 200 OK response to Livespace.
- Process the Webhook from the queue using a separate worker process.
This approach ensures that your Endpoint remains responsive and stays within the 3-second timeout limit, even during high-volume bursts.
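A minimal sketch of this flow is shown below, assuming a Flask Endpoint and an HMAC-SHA256 signature scheme. The X-Signature header name, the shared secret, and the in-process queue are placeholders: check the Livespace documentation for the actual signing details, and use a durable queue (such as Redis, RabbitMQ, or SQS) in production.

```python
import hashlib
import hmac
import json
import queue
import threading

from flask import Flask, request

app = Flask(__name__)

# In-process queue used only to keep the sketch self-contained;
# use a durable, shared queue in production.
webhook_queue: "queue.Queue[dict]" = queue.Queue()

SECRET = b"your-webhook-secret"  # placeholder for your Endpoint's secret


def signature_is_valid(raw_body: bytes, received_signature: str) -> bool:
    # Assumes an HMAC-SHA256 scheme; the real algorithm and header name
    # may differ, so verify them against the Livespace documentation.
    expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature or "")


@app.route("/webhooks", methods=["POST"])
def receive_webhook():
    raw_body = request.get_data()

    # 1. Verify the signature (header name is an assumption).
    if not signature_is_valid(raw_body, request.headers.get("X-Signature", "")):
        return "invalid signature", 401

    # 2. Quickly store the Webhook in a queue...
    webhook_queue.put(json.loads(raw_body))

    # 3. ...and immediately return 200 OK, well within the 3-second limit.
    return "", 200


def worker():
    # 4. Process Webhooks from the queue in a separate worker.
    while True:
        payload = webhook_queue.get()
        try:
            handle_webhook(payload)
        finally:
            webhook_queue.task_done()


def handle_webhook(payload: dict) -> None:
    # Your business logic goes here.
    print("processing webhook", payload.get("id"))


threading.Thread(target=worker, daemon=True).start()

if __name__ == "__main__":
    app.run(port=8000)
```

Because the request handler does nothing beyond verification and enqueueing, its response time stays flat no matter how long the downstream processing takes.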
Distributing Webhook processing
For high-volume applications, consider placing your Webhook processing servers behind a load balancer. This allows you to scale horizontally by adding more servers during periods of heavy traffic, ensuring high availability.
Monitoring and maintenance
Monitoring Endpoints for downtime
You should monitor your Endpoints to ensure they remain available. If your Endpoint goes down, Livespace will attempt to redeliver Webhooks according to the retry policy.
However, after 3 failed attempts, the Webhook will no longer be sent. We recommend configuring your application to automatically restart the Endpoint if it fails, and fetching recent changes from the Livespace API to backfill any data missed during the downtime, as sketched below.
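One way to backfill is to track the timestamp of the last Webhook you successfully processed and, once the Endpoint is back up, query the Livespace API for anything that changed since then. The base URL, endpoint path, and parameters below are illustrative only, not the actual Livespace API; consult the API documentation for the resources your integration uses.

```python
import datetime

import requests

API_BASE = "https://yourcompany.livespace.io/api"  # hypothetical base URL


def backfill_since(last_processed_at: datetime.datetime,
                   session: requests.Session) -> None:
    # Fetch records changed since the last Webhook you processed,
    # so anything missed while the Endpoint was down is picked up.
    response = session.get(
        f"{API_BASE}/contacts/changed",  # hypothetical endpoint
        params={"since": last_processed_at.isoformat()},
        timeout=30,
    )
    response.raise_for_status()
    for record in response.json().get("data", []):
        process_record(record)  # reuse the same logic your Webhook worker runs


def process_record(record: dict) -> None:
    print("backfilling", record.get("id"))
```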
Handling duplicate Webhooks
Because Livespace aims for "at least once" delivery, network issues or retries might result in your application receiving the same Webhook more than once.
Your implementation should be idempotent. You can use the id field in the Webhook payload (which is a unique UUID for each notification) to track which Webhooks you have already processed.
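One way to enforce idempotency is to record each processed id in a store with a uniqueness constraint and skip anything already seen. The sketch below uses a local SQLite table for simplicity; a shared store such as Redis or your primary database is a better fit when several workers process Webhooks.

```python
import sqlite3

# Durable store of processed Webhook ids. The PRIMARY KEY constraint
# guarantees each id can only be inserted once.
conn = sqlite3.connect("processed_webhooks.db")
conn.execute("CREATE TABLE IF NOT EXISTS processed (id TEXT PRIMARY KEY)")
conn.commit()


def process_once(payload: dict) -> None:
    webhook_id = payload["id"]  # the unique UUID sent with each notification
    try:
        conn.execute("INSERT INTO processed (id) VALUES (?)", (webhook_id,))
        conn.commit()
    except sqlite3.IntegrityError:
        return  # already processed; safely ignore the duplicate

    handle_webhook(payload)


def handle_webhook(payload: dict) -> None:
    # Your actual business logic goes here.
    print("handling", payload["id"])
```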