hosting-kubernetes - Operational
hosting-denodeploy - Operational
admin - Operational
VTEX - Operational
Third Party: Amazon Web Services (AWS) → AWS ec2-sa-east-1 - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → API - Degraded performance
Third Party: Cloudflare → Cloudflare Sites and Services → CDN/Cache - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → DNS Root Servers - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → SSL Certificate Provisioning - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → Workers - Operational
Third Party: Fly.io → AWS s3-sa-east-1 - Operational
Third Party: Fly.io → Application Hosting → GRU - São Paulo, Brazil - Operational
Third Party: GitHub → Actions - Operational
Third Party: GitHub → API Requests - Operational
Third Party: GitHub → Git Operations - Operational
Third Party: GitHub → Pull Requests - Operational
Third Party: GitHub → Webhooks - Operational
Third Party: Shopify → Shopify API & Mobile - Operational
Third Party: Shopify → Shopify Checkout - Operational
Third Party: Shopify → Shopify Point of Sale - Operational
Third Party: Supabase → Supabase Cloud → AWS ec2-sa-east-1 - Operational
Third Party: Supabase → Auth - Operational
Third Party: Supabase → Storage - Operational
Third Party: Discord → Voice → Brazil - Operational
This incident has been resolved. We apologize for the inconvenience and are working to prevent this type of problem in the future.
We are continuing to work on a fix for this incident. Our nginx is still suffering from the excessive number of routes/endpoints.
Our nginx instance that serves requests for environments is overloaded by the number of routes/endpoints it carries. We are scaling the service to improve our SLA (a sketch of the routing pattern follows below).
Due to this failure, admin operations on environments are failing.
This doesn't impact production sites.
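A minimal sketch of why per-environment routes bloat a proxy config, assuming environments are routed by hostname; the hostnames, addresses, and the Deno.serve handler below are illustrative assumptions, not our actual setup. Instead of one proxy rule per environment, a single catch-all handler can resolve the upstream from a lookup table, so the route count stays constant as environments grow.

```ts
// Illustrative only: hostnames and upstream addresses are made up.
// One catch-all handler replaces one proxy route per environment.
const upstreams = new Map<string, string>([
  ["env-a.example.dev", "http://10.0.0.11:8000"],
  ["env-b.example.dev", "http://10.0.0.12:8000"],
]);

Deno.serve({ port: 8080 }, (req) => {
  const url = new URL(req.url);
  const target = upstreams.get(url.hostname);
  if (!target) return new Response("unknown environment", { status: 404 });
  // Forward the request unchanged to the environment's upstream.
  return fetch(new Request(new URL(url.pathname + url.search, target), req));
});
```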
Some sites reported slowness from 9:20 to 9:39: timed-out requests increased from 600 to 1.6k, i.e. from 0.8% to 2.3% of all requests (a quick consistency check of these figures follows below).
We rolled back the configuration change.
A change to our network configuration dropped packets from some origins.
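The request volumes below are inferred from the reported ratios, not taken from the incident report: both ratios imply roughly 70–75k requests in the window, so total traffic stayed about flat while timeouts roughly tripled.

```ts
// Inferred volumes: timeouts / timeout rate = total requests in the window.
const before = 600 / 0.008;   // ≈ 75,000 requests when timeouts were 0.8%
const during = 1_600 / 0.023; // ≈ 69,600 requests when timeouts were 2.3%
console.log(Math.round(before), Math.round(during)); // 75000 69565
```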
Deno Deploy isolates were killed due to memory starvation (an illustrative watchdog for this failure mode follows below).
A new memory profile was applied, resolving this issue.
Deno Deploy added capacity to mitigate the reported issues.
We are still investigating the root cause.
Our request success rate is back to normal.
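A minimal sketch of a watchdog that surfaces memory pressure before an isolate is killed, assuming a plain Deno runtime: Deno.memoryUsage() is a real Deno API, but the 512 MiB budget, the 10-second interval, and the console reporting are assumptions for illustration, not how Deno Deploy enforces its limits.

```ts
// Illustrative watchdog; budget and interval are assumptions.
const BUDGET_BYTES = 512 * 1024 * 1024; // assumed per-isolate memory budget

setInterval(() => {
  const { rss, heapUsed } = Deno.memoryUsage();
  // Warn once resident memory crosses 90% of the assumed budget.
  if (rss > 0.9 * BUDGET_BYTES) {
    console.warn(`memory pressure: rss=${rss} heapUsed=${heapUsed}`);
  }
}, 10_000);
```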
Incident history: Feb 2025 to Apr 2025