hosting-kubernets - Operational
hosting-denodeploy - Operational
admin - Operational
VTEX - Operational
Third Party: Amazon Web Services (AWS) → AWS ec2-sa-east-1 - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → API - Degraded performance
Third Party: Cloudflare → Cloudflare Sites and Services → CDN/Cache - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → DNS Root Servers - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → SSL Certificate Provisioning - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → Workers - Operational
Third Party: Fly.io → AWS s3-sa-east-1 - Operational
Third Party: Fly.io → Application Hosting → GRU - São Paulo, Brazil - Operational
Third Party: Github → Actions - Operational
Third Party: Github → API Requests - Operational
Third Party: Github → Git Operations - Operational
Third Party: Github → Pull Requests - Operational
Third Party: Github → Webhooks - Operational
Third Party: Shopify → Shopify API & Mobile - Operational
Third Party: Shopify → Shopify Checkout - Operational
Third Party: Shopify → Shopify Point of Sale - Operational
Third Party: Supabase → Supabase Cloud → AWS ec2-sa-east-1 - Operational
Third Party: Discord → Voice → Brazil - Operational
Third Party: Supabase → Auth - Operational
Third Party: Supabase → Storage - Operational
This incident has been resolved.
We are investigating reports of other sites experiencing failures.
Deno Deploy has identified an issue and is actively working on a solution.
We have escalated the issue to Deno Deploy. It appears to be related to Deno KV and is primarily affecting sites utilizing the experimental loader cache feature.
We are currently investigating this incident.
Publish has returned to normal operations. No data was lost during the incident. You are now clear to perform a publish to update the production environment.
Deno Deploy has implemented a fix and is currently monitoring the results.
Deno Deploy has been notified and is actively working toward a solution.
A Deno Deploy rollout increased the number of timeouts. They have rolled it back, and we are monitoring the number of issues on our sites.
Deno Deploy has implemented corrective measures as of 9:25 AM (GMT+3), and we are observing a notable reduction in the number of errors.
Deno Deploy rolled out a new infrastructure version at 7:15 AM (GMT+3), which could be linked to the recent increase in errors. We have promptly initiated communication with their infrastructure team and are investigating potential mitigations on our end.
No errors have been reported since the last update.
The incidence of errors has decreased, returning to standard levels. We are actively monitoring the situation.
Deno Deploy implemented a fix, and we are monitoring the results.
We have observed a substantial reduction in the error rate, from 30-40% down to 8-10%.
At 8 PM, a surge in traffic prompted a swift response from Deno Deploy, which was able to scale up its infrastructure.
By 9 PM, a redeployment redirected all traffic to new isolates; however, Deno encountered scaling challenges. We are actively working with Deno support to address this issue, and they have raised the overall cluster limit to handle the increased load.