Hi, I have an app that polls the Get Streams endpoint (https://api.twitch.tv/helix/streams). I woke up to many errors saying my OAuth token was suddenly invalid. This can't be because the token expired, since I have a daily job that refreshes my tokens, and the errors started three hours after the refresh job ran. The errors aren't helpful either; all they say is {"error":"Unauthorized","status":401,"message":"Invalid OAuth token"}. I'm adding extra debug layers, but some insight would be helpful.
If you are generating app access tokens (server-to-server tokens), this may suggest you have a rogue job that is generating a new token every time instead of reusing one, and you hit the maximum token count.
Twitch only keeps 50 valid app access tokens at a time: when you generate token 51, it force-kills token 1.
So it sounds like you have a rogue job that is not reusing a token, and the well-behaved job is getting a dead token as a result.
You could use the validate endpoint, but it would just give you the same answer: the token is dead.
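For contrast, a minimal sketch of the reuse pattern (Python with the requests library; the credential placeholders and module-level cache are mine, just to illustrate). The fix is that the token is minted once and cached, never minted per call:

```python
import requests

CLIENT_ID = "your_client_id"          # placeholder
CLIENT_SECRET = "your_client_secret"  # placeholder

_cached_token = None  # module-level cache: mint once, reuse everywhere

def get_app_token():
    """Return the cached app access token, minting one only if none exists."""
    global _cached_token
    if _cached_token is None:
        resp = requests.post("https://id.twitch.tv/oauth2/token", params={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
        })
        resp.raise_for_status()
        _cached_token = resp.json()["access_token"]
    return _cached_token
```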
Gotcha, I'll implement the reactive refresh approach rather than my proactive approach.
Also, it's worth having a dig to make sure all your jobs that use an app access token are using the same token, and that you don't have one being silly and always minting a new one.
And check that all your jobs are grabbing the token from the right place (and that you don't have one that didn't pick up the new token).
And yeah, it's potentially better to use a token until it dies than to force-renew daily; an app access token lasts around 60 days. So you can save some overhead with a validate call, and skip the renewal if the token still has 2+ days left on it.
And yes, if you make a call and the token's dead, instantly make and store a new one.
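A minimal sketch of that expiry check, again assuming Python/requests (the 2-day threshold and helper name are illustrative; note that the validate endpoint expects the "OAuth" authorization prefix, not "Bearer"):

```python
import requests

def token_needs_renewal(token, min_seconds=2 * 24 * 60 * 60):
    """True if the token is dead, or has less than ~2 days left on it."""
    # Note: /oauth2/validate wants the "OAuth" prefix, not "Bearer".
    resp = requests.get(
        "https://id.twitch.tv/oauth2/validate",
        headers={"Authorization": f"OAuth {token}"},
    )
    if resp.status_code == 401:
        return True  # token is already dead
    resp.raise_for_status()
    return resp.json().get("expires_in", 0) < min_seconds
```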
I doubt it's an API issue; more likely an implementation issue somewhere, even more so in a complex system with many jobs/scripts.
I had the same issue: 2/3 of my long-running server tasks would complain when the token died, because I had a cron job that never used the stored token and always made a new one. That was fun to track down.
So, really, you want both systems here:
- A cron job to handle token management, which stores the token in redis/shared memory.
- And if another job (the streams checker) gets a dead token, it makes a new one and puts that token back into the same redis/shared memory.
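A rough sketch of the reactive half of that setup, assuming Python with the redis and requests libraries; the redis key name and credential placeholders are hypothetical stand-ins:

```python
import redis
import requests

CLIENT_ID = "your_client_id"           # placeholder
CLIENT_SECRET = "your_client_secret"   # placeholder
TOKEN_KEY = "twitch:app_access_token"  # illustrative redis key

r = redis.Redis()  # the shared store both jobs point at

def mint_new_token():
    """Client-credentials grant: mint a fresh app access token."""
    resp = requests.post("https://id.twitch.tv/oauth2/token", params={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "client_credentials",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def _get_streams(token, user_logins):
    return requests.get(
        "https://api.twitch.tv/helix/streams",
        headers={"Client-Id": CLIENT_ID, "Authorization": f"Bearer {token}"},
        params={"user_login": user_logins},
    )

def fetch_streams(user_logins):
    """Read the shared token; on a 401, mint, store, and retry once."""
    token = (r.get(TOKEN_KEY) or b"").decode()
    resp = _get_streams(token, user_logins)
    if resp.status_code == 401:
        # Token died (e.g. force-killed by token 51): replace it for every job.
        token = mint_new_token()
        r.set(TOKEN_KEY, token)
        resp = _get_streams(token, user_logins)
    resp.raise_for_status()
    return resp.json()["data"]
```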
FINALLY! On the matter of streams: if it's a "fixed list of streamers", check out EventSub, and Twitch can tell you when a stream goes live/offline.
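For example, a sketch of creating a stream.online subscription with webhook transport (Python/requests; the callback URL and secret are placeholders you'd swap for your own endpoint):

```python
import requests

CLIENT_ID = "your_client_id"  # placeholder

def subscribe_stream_online(broadcaster_id, app_token):
    """Ask Twitch to notify our callback when this broadcaster goes live."""
    resp = requests.post(
        "https://api.twitch.tv/helix/eventsub/subscriptions",
        headers={
            "Client-Id": CLIENT_ID,
            "Authorization": f"Bearer {app_token}",
            "Content-Type": "application/json",
        },
        json={
            "type": "stream.online",
            "version": "1",
            "condition": {"broadcaster_user_id": broadcaster_id},
            "transport": {
                "method": "webhook",
                "callback": "https://example.com/eventsub",  # placeholder endpoint
                "secret": "a-shared-secret-10plus-chars",    # used to verify payload signatures
            },
        },
    )
    resp.raise_for_status()
    return resp.json()
```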
Thanks for the detailed reply!
I double-checked and the job should be grabbing the correct token; it's more likely that it was the 50-token problem. So I have two separate services running now!
Also, my initial approach was to use EventSub or webhooks, but since it's a dynamic list of streamers, that isn't ideal.