Since yesterday around 13:00 CET, my calls to the Helix API have been timing out randomly. I would say maybe 1 in 10 at first, then, over a short period, quickly up to 1 in 2.
In the capture below, a log traces the errors. Each “Symfony\Component\HttpClient\Exception\TimeoutException” entry corresponds to a timeout; the task runs every minute.
Bad luck: you're hitting the API while a backend server in the pool is restarting (or being removed from the pool late).
Basically, retry the request. The second attempt should go through. Since it's not constant, as you have noticed, even an instant retry should get the required data.
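For what it's worth, here is a minimal sketch of how that instant retry could be wired up with Symfony's RetryableHttpClient, assuming the calls go through Symfony HttpClient directly (the endpoint, client ID and token below are placeholders, not your actual setup):

```php
<?php

use Symfony\Component\HttpClient\HttpClient;
use Symfony\Component\HttpClient\RetryableHttpClient;
use Symfony\Component\HttpClient\Retry\GenericRetryStrategy;

// RetryableHttpClient re-sends the request on transport errors (timeouts)
// and on the usual 5xx responses, with a short backoff between attempts.
$client = new RetryableHttpClient(
    HttpClient::create(['timeout' => 10]),
    new GenericRetryStrategy(
        GenericRetryStrategy::DEFAULT_RETRY_STATUS_CODES, // covers network errors and 5xx
        500, // first retry after 500 ms
        2.0  // exponential backoff multiplier
    ),
    2 // at most two retries per request
);

// Hypothetical Helix call; swap in your own credentials handling.
$response = $client->request('GET', 'https://api.twitch.tv/helix/streams', [
    'headers' => [
        'Client-Id'     => 'your-client-id',
        'Authorization' => 'Bearer your-app-access-token',
    ],
]);

$data = $response->toArray();
```

That way a single flaky backend hit gets retried within the same run instead of waiting for the next minute.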
OK, no worries. For now the retry usually “works” the next minute, but it's still a hell of a hassle.
The fact that it started acting up at a round hour like that makes me think something changed somewhere… Can you confirm that they are aware there may be a problem? Is it being looked at on their side?
And if it's Fastly, it's somewhat out of Twitch's hands anyway, and Fastly is likely already looking at it.
The times of my faults don't line up with yours, and I have far fewer faults than you (fewer than 10 total, spread across 2 servers in 2 different geographic regions, which also spreads the faults into different time blocks), even though I imagine I hit the API a similar number of times as you do.
So it's just Fastly things, I guess.
And since it's not a constant issue, I imagine it's something Twitch can't fix and Fastly being weird (of course that doesn't account for any BGP shenanigans or other things that could mess with routes, or load balancers' load balancing, etc.).
Sod's law says it was the Paris data center maintenance that was screwing things up… or the follow-up rebalancing on Fastly's network.
I've also been facing the same issue since that date, using two different applications on different networks. On both I sometimes get timeouts, usually around 20 per day.