I don’t think the rate limit affects the current versions of the API, only the new one that was announced yesterday, I think.
It would make sense for it to be per Client-ID, but there are other things to note when doing this.
The Client-ID is public, so anyone can take your Client-ID and perform (unauthenticated) requests to the API in the name of your app. This means other people could consume your rate limit and block your requests, if they wanted, simply by exceeding that same limit.
I’m sure Twitch knows that and will figure out something to prevent it from happening.
Yeah, I asked Dallas about that, and he said they look for other people abusing your ID and give you warnings, etc. I guess I’m going to trust what they said: they looked at current usage and 90% of developers will be fine. Hopefully they’ll reach out if they see us on their radar, and I’ve been told we did show up a couple of times.
I don’t know if it’s ID or IP but this information looks useful.
To ensure excellent performance and great uptime, we’re enforcing rate limiting on the new API at 2 requests per second. These rate limits are calculated in a 60 second window, and your remaining rate limit will be returned to you in an HTTP header.
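The announcement only says the remaining limit “will be returned to you in an HTTP header,” without naming the header. As a minimal sketch, assuming header names like `Ratelimit-Remaining` and `Ratelimit-Reset` (a Unix timestamp), a client could decide how long to back off like this:

```python
# Minimal sketch: deciding whether to pause based on rate-limit headers.
# The header names (Ratelimit-Remaining / Ratelimit-Reset) are assumptions;
# the announcement only says the remaining limit comes back in a header.
import time

def seconds_to_wait(headers, now=None):
    """Return how long to sleep before the next request, given response headers."""
    now = time.time() if now is None else now
    remaining = int(headers.get("Ratelimit-Remaining", 1))
    reset = float(headers.get("Ratelimit-Reset", now))
    if remaining > 0:
        return 0.0                    # budget left in this window: fire away
    return max(0.0, reset - now)      # exhausted: wait until the window resets

# e.g. 0 requests left, window resets at t=1000, it is now t=990 -> wait 10 s
print(seconds_to_wait({"Ratelimit-Remaining": "0", "Ratelimit-Reset": "1000"}, now=990.0))
```

Checking the header before each call avoids ever tripping the limit, instead of reacting to 429 responses after the fact.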
I do hope the IP or User Auth is taken into account as well, otherwise it would effectively shut down all purely clientside apps. I always try to limit the amount of API requests as much as I can, but I can’t control who uses my app or even know how many people use it.
Or for applications that are shared with many users and have one Client-ID. Do you mean per IP/OAuth per Client-ID? Because the Client-ID is public data, all I have to do is snatch one from some-popular-service and DDoS them. I may have misread what you said too; not enough coffee today!
Do you mean for larger services like Muxy’s Follower alert, or for custom alerts? We have a custom Twitch follower alert on our stream that I wrote, and it only makes 1 request every minute. 120 requests per minute is WAY more than is needed for a simple follower alert. Even at my most request-intensive processes within our channel bot, we rarely hit the previous soft-limit of 60 requests per minute. It also helps that I have a throttle set up so it never sends out more than 1 request per second.
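The throttle mentioned above (never more than 1 request per second) can be sketched in a few lines; this is a generic minimum-interval gate, not the poster’s actual implementation:

```python
# Sketch of a simple throttle: enforce a minimum gap between outgoing requests.
import time

class Throttle:
    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval   # seconds between requests
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to keep at least min_interval between calls."""
        gap = self.min_interval - (time.monotonic() - self._last)
        if gap > 0:
            time.sleep(gap)
        self._last = time.monotonic()

# Demo with a 50 ms interval so it runs quickly; a real app would use 1.0.
throttle = Throttle(min_interval=0.05)
start = time.monotonic()
for _ in range(3):
    throttle.wait()        # an API request would be sent here
elapsed = time.monotonic() - start   # >= 0.1 s: two enforced gaps after the first call
```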
As far as I can tell, the previous rate limit of 1 per second (60 per minute) hasn’t caused any major problems for StreamLabs, Muxy, NightDev or other services. More than likely, they have multiple connections to the Twitch API spanning across multiple IDs and IPs, giving them more breathing room. The 120 per minute limit + having information returned about how much is left would be more helpful to them than a hindrance.
That being said, WebHooks will be a bigger boon for those services in the long run.
I know some of the Twitch peeps had said that they are going to be closely monitoring the new API to try to determine developers’ needs… But tbh, I can’t see rate limits being smashed that badly once the hooks are out… From the Trello board, it seems as though the hooks will cover a fair majority of the most-used endpoints.
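To make the webhook point concrete: instead of polling an endpoint within your rate budget, you register a callback once and Twitch pushes the events. A sketch of what a subscription request body might look like, assuming the WebSub-style `hub.*` fields and topic URLs that were described for the new webhooks (endpoint and field names are assumptions, not verified against final docs):

```python
# Hypothetical sketch of a webhook subscription payload, WebSub-style.
# The hub.* field names and the topic URL format are assumptions based on
# how the new Twitch webhooks were announced.
import json

def follow_subscription(callback_url, to_id, lease_seconds=864000):
    """Build the payload to subscribe to 'user gained a follower' events."""
    return {
        "hub.callback": callback_url,        # our server's HTTPS endpoint
        "hub.mode": "subscribe",
        "hub.topic": f"https://api.twitch.tv/helix/users/follows?first=1&to_id={to_id}",
        "hub.lease_seconds": lease_seconds,  # re-subscribe before this expires
    }

payload = follow_subscription("https://example.com/twitch/callback", "1337")
print(json.dumps(payload, indent=2))
```

One subscription request replaces a polling loop entirely, which is why the hooks should take most of the pressure off the rate limit.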
I also run a multi-channel service, and it would make more sense to me if the rate limits operated per access token.
I’m going to start recording metrics about my Twitch API calls to see whether I should move to the new API. There are some optimisations that I can do now that there are the webhooks and the bulk calls, but until I have some of my own metrics, I don’t think that I can responsibly move over.
Hi @george, here is my feedback with regards to my use case/product.
Right off the bat, I have 2 processes that scan the list of games and live streams, page by page sequentially. I am OK with the 120 req/min limit for those. Additionally, my system may occasionally scan the list of live streams for a particular game, but I can probably fit that too within the 120 req/min for my Client-ID.
However, my system is a single page application (SPA), and users who log in through Twitch also get to see the list of streams they follow, whether the streams are live or offline. Since it’s an SPA, my intent is to refresh the list every once in a while in the background to keep it up to date and notify the user when a stream goes from offline to live. It is imperative in my case that the rate limit here is applied per specific user / OAuth. Imagine I have 20,000 users online at the same time, each following around 500 livestreams; that means I need about 5-6 HTTP requests per user to refresh their whole list (since the page size is 100). I would have to funnel about 100,000 HTTP requests through a 120 req/min rate limit, on top of the catalog queries mentioned in the previous paragraph. That would take over 13 hours, which is astronomically more than what I need. I’d like to be able to refresh each user’s list every few minutes or so.
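The arithmetic above checks out (all figures are from the post; 5 pages per user is the ceiling of 500 follows / 100 per page):

```python
# Back-of-the-envelope check of the numbers in the post above.
users = 20_000
follows_per_user = 500
page_size = 100
rate_limit = 120                # requests per minute, per Client-ID

requests_per_user = -(-follows_per_user // page_size)   # ceiling division -> 5
total_requests = users * requests_per_user              # 100,000
minutes = total_requests / rate_limit
print(f"{total_requests} requests -> {minutes / 60:.1f} hours per full refresh")
```

Roughly 13.9 hours for a single pass, versus a target of one refresh every few minutes, so a per-Client-ID limit is off by two to three orders of magnitude for this use case.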
Can we get confirmation that certain API calls, like lists of followed channels for a given user, will only be rate-limited by user ID + client ID combination, as opposed to client ID only?
The other day, out of boredom, a friend and I were testing the new APIs and watching the X-Rate-Limit data coming back. Using an OAuth token gives you the 120. When we used the same Client-ID without OAuth, from different IP addresses, he and I shared the same pool of 30 requests: we both counted down together from 30, not independently.
You could do what we did: test the new API calls yourself, watch the X-Rate-Limit data, and see what you get.
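The behavior we observed is consistent with a toy model like the following: unauthenticated requests draw from one shared bucket per Client-ID (30 per window), while each OAuth token gets its own bucket (120). This is a simulation of the observed counts, not Twitch’s actual implementation:

```python
# Toy model of the observed behavior: app requests without OAuth share one
# bucket per Client-ID; each OAuth token gets its own, larger bucket.
# The 30/120 figures are the values we saw in the X-Rate-Limit headers.
from collections import defaultdict

LIMITS = {"client": 30, "oauth": 120}   # requests per 60 s window (observed)
buckets = defaultdict(lambda: None)

def request(client_id, oauth=None):
    """Spend one request from the right bucket; return remaining, or None if blocked."""
    key = ("oauth", oauth) if oauth else ("client", client_id)
    if buckets[key] is None:
        buckets[key] = LIMITS[key[0]]   # first use: fill the bucket
    if buckets[key] == 0:
        return None                     # limit hit: this request would be rejected
    buckets[key] -= 1
    return buckets[key]

# Two different IPs, same Client-ID, no OAuth: they count down together.
a = request("shared-client-id")                     # -> 29
b = request("shared-client-id")                     # -> 28 (same bucket)
c = request("shared-client-id", oauth="my-token")   # -> 119 (separate bucket)
```

This is exactly why two people on different IPs counted down from 30 together: the bucket key is the Client-ID, not the IP.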