Getting rate limited now? URGENT

Hey, I posted a while ago asking about rate limiting and client IDs. I was assured rate limits would be applied to each individual user’s OAuth token, but apparently that’s not the case, because everyone is unable to use my application. Does Twitch still do whitelisting?

If you do not provide an OAuth token, the rate limit is 30 requests per IP per 60 seconds.

If you do provide an OAuth token, the rate limit is 120 requests per client ID per 60 seconds.

This has not changed since Helix launched.
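Helix also reports its remaining budget in `Ratelimit-*` response headers (`Ratelimit-Reset` is a Unix timestamp). A minimal sketch of pacing requests off those headers; the helper itself is hypothetical, only the header names come from the docs:

```python
import time

def backoff_seconds(headers, now=None):
    """Seconds to wait before the next Helix call, based on the
    Ratelimit-Remaining / Ratelimit-Reset response headers."""
    now = time.time() if now is None else now
    remaining = int(headers.get("Ratelimit-Remaining", "1"))
    reset = float(headers.get("Ratelimit-Reset", now))
    if remaining > 0:
        return 0.0                    # budget left, no need to wait
    return max(0.0, reset - now)      # wait until the bucket refills
```

Sleeping for `backoff_seconds(resp.headers)` after each call keeps a client under the cap without hard-coding the 30/120 numbers.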



Can you describe your issue? I have only been using Helix a little, but rate limiting is working as documented for me.

30 requests per 60 seconds is not enough to gather the data I need, especially for larger channels. I need a list of users in the channel, and I’m constantly checking whether they’re following or subscribed. So I was using the client ID limit, and yesterday, when I had people testing it, we hit rate limiting across everyone.

I could be mistaken, though; I made a few changes, so I guess I’ll wait to see if it happens again.

If you are doing follower notifications, you should be using webhooks instead of polling. You can subscribe to 1,000 topics (likely channels, in your case) per client ID, which is enough for most users.

Edit: It looks like you can’t specify multiple to/from IDs like you can on other endpoints.
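For reference, subscribing to a follows topic through the webhooks hub looks roughly like this. The helper name is made up; the `hub.*` parameters and topic URL are from the webhooks docs at the time, so treat them as assumptions:

```python
def follow_webhook_payload(callback_url, broadcaster_id, lease_seconds=86400):
    """Body to POST to https://api.twitch.tv/helix/webhooks/hub to be
    notified of new follows for one channel (one of your ~1,000 topics)."""
    topic = ("https://api.twitch.tv/helix/users/follows"
             f"?first=1&to_id={broadcaster_id}")
    return {
        "hub.callback": callback_url,        # your HTTPS endpoint
        "hub.mode": "subscribe",
        "hub.topic": topic,
        "hub.lease_seconds": lease_seconds,  # re-subscribe before this expires
    }
```

Note the topic takes a single `to_id`, which is the limitation mentioned in the edit above.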

I’m not only doing follow notifications; I need to check whether a viewer is following or not, and the same for subscribing. It seems weird to me that who’s in the chat channel and their follow and subscription statuses have to come from different endpoints. Call me crazy, but if Twitch really wanted to reduce the number of web requests, they would return all that information in one call, or at least allow checking multiple users in one call.

Also, if a streamer has 300 viewers and 70,000 followers, it does not make sense to check 100 users at a time.

You can still use webhooks to track new follow events. This may help reduce the frequency at which you need to poll.

@george you can’t specify “the 100 users in the channel”. The API only lets you page through all followers, 100 at a time. There is no option to give a list of 100 user IDs and get a response.

If someone has 70k followers and you fetch 100 users at a time, that’s 700 requests. If that’s the only request occurring (it isn’t), then at 2 requests/sec that’s 350 seconds, or ~6 minutes, to get all the necessary information. And if this is “per client ID” like their docs say (which I think is complete BS; their docs are wrong, as I’ve tested), then it’s literally impossible to make a non-alert-based program.
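That 700-request walk is plain cursor pagination. A sketch with the network call stubbed out as `fetch_page`; the `data` plus `pagination.cursor` response shape is how Helix pages results:

```python
def fetch_all_followers(fetch_page):
    """Drain a Helix-style cursor-paginated endpoint.

    `fetch_page(cursor)` is assumed to perform one
    GET /helix/users/follows?first=100&after=<cursor> request and return
    the decoded JSON: {"data": [...], "pagination": {"cursor": ...}}.
    """
    followers, cursor = [], None
    while True:
        page = fetch_page(cursor)
        followers.extend(page["data"])
        cursor = page.get("pagination", {}).get("cursor")
        if not cursor:          # last page carries no cursor
            return followers
```

At 100 followers per page and 2 requests/second, 70,000 followers is 700 iterations of that loop, i.e. the ~6 minutes quoted above.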

This is the use case:
A dashboard that shows all users in the chat and their contribution status. Whether they have chatted recently, are a follower, or are a subscriber.

The subscriber resource only allows checking one user per request. So 300 viewers is 300 calls, and 70k followers is 700 calls. That’s around nine minutes for one user, just to start up the application.
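Writing that back-of-the-envelope math out, using the page size and per-user sub checks described above (2 requests/sec being 120 per 60 seconds):

```python
def startup_calls(viewers, followers, follower_page_size=100):
    """One sub check per viewer plus one follower page per 100 followers."""
    sub_calls = viewers
    follow_calls = -(-followers // follower_page_size)  # ceiling division
    return sub_calls + follow_calls

def startup_minutes(viewers, followers, requests_per_sec=2):
    """Minutes of solid API calls just to populate the dashboard once."""
    return startup_calls(viewers, followers) / requests_per_sec / 60
```

300 viewers plus 70k followers comes to 1,000 calls, roughly eight and a half minutes at the capped rate.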

Also, the follower event doesn’t work, because there is no unfollow event. You still have to cycle through the entire list, every single time. Admittedly, 300 viewers and 70k followers is a poor example, because you’d rather do 300 individual calls than 700 calls at 100 users each. But as soon as the channel hits 700 viewers, the use case comes back.

The point Travis is making is: how can you reasonably build a dashboard like the use case above describes and not get rate limited? If I have 200 streamers using my application, each with 500 viewers and tens of thousands of followers, then how exactly can I get any of this data reliably?

I’ve been playing around with Twitch’s rate limiting to answer this question, and the only answer is “keep adding client IDs”. But that doesn’t work either. The docs say rate limiting is by client ID (dumb; it should be by OAuth token, like everyone else in the known universe), but it’s not. It’s by IP. There’s literally no way around it.

How often are you doing these full follower/subscriber list lookups? Also, are you not doing any form of caching/storing the results? Because if you are hitting the API every single time you need to check whether a user is subscribed or following, then the fault isn’t with the rate limits but with your app design.

@Dist Sure, we could say it’s app design. But how would you build the following conceptually and still have a responsive application for the end user?

A dashboard of a streamer’s current viewers, showing whether each viewer is a follower or a subscriber.

There is no event system for subscriptions, so we have to do one check for every user who enters the channel, every stream. Let’s say we cache this for the length of the stream. That’s fine, because subs don’t just disappear (as far as I know?).

There is an event system for follows, but not unfollows, so we can’t rely on it at all. When a streamer goes live, you have to get the status of every viewer in the channel. If the channel has more than 500 viewers, this becomes too much work, so now we have to fetch all the followers for the channel, 100 at a time. Can we cache this? Not really, because there is no way to detect unfollows other than “not in the list of 70,000”.
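Detecting unfollows really does reduce to comparing two full snapshots; a sketch of that diff:

```python
def diff_follower_snapshots(previous, current):
    """Compare two full follower-list snapshots (iterables of user IDs).
    An unfollow is only visible as 'present before, absent now'."""
    previous, current = set(previous), set(current)
    return {
        "followed": current - previous,
        "unfollowed": previous - current,
    }
```

So every unfollow detection costs a full 700-request scan before this one-line set difference can run.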

YouTube has this functionality because it has an “events API” that lists every “event” that occurred for the channel, ordered by time. So you can fetch “all new events” and stay up to date.

Do you have any suggestions for getting over these hurdles? Right now I’m of the opinion that, based on how the Twitch API has been built, it only supports silly “when someone does X, do Y” things, not stream management or persistent-data applications.

From everything I can tell, to get something like this working you have to make 2 requests a second, constantly, forever, to keep up with the channel’s changing status, and it will still lag by 10+ minutes per user.

Cache the subs list; then you can compare the user to that cache. If they subbed between the time you cached the data and the time you performed the check, then guess what: that subscription is an event in chat and PubSub (and in the future perhaps a webhook too), all of which you can use to add to your cache as it happens. You can also get a general idea of when to expect a sub to expire, but a cache can just be refreshed. Also, if you’re connected to chat, you can use userstate data to help keep the cache up to date between refreshes, though obviously this won’t have full coverage.

Again, cache the list of followers. When a new user follows, you can get that event from a webhook. If they unfollow and refollow to try to trigger another alert, you can refer back to the cache, see that they had already been a follower, and skip re-triggering any alerts. Sure, you won’t have precise timings on when someone unfollows, but do you have significant evidence that this is an issue for your app? Out of the 500 viewers you mentioned, how many have you seen unfollow during a stream, where it would significantly hurt your app’s functionality to wait until the next cache refresh?
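That pattern, a cache fed by webhook events and corrected by periodic full refreshes, could look something like this (all names are illustrative, not any Twitch API):

```python
class FollowerCache:
    """Follower set kept warm by webhook events, corrected by full scans."""

    def __init__(self, initial=()):
        self.current = set(initial)        # best guess at who follows now
        self.ever_followed = set(initial)  # for suppressing repeat alerts

    def on_follow_event(self, user_id):
        """Handle a follow webhook; return True if an alert should fire."""
        first_time = user_id not in self.ever_followed
        self.current.add(user_id)
        self.ever_followed.add(user_id)
        return first_time

    def refresh(self, full_list):
        """Swap in a fresh full scan; return the unfollows it revealed."""
        unfollowed = self.current - set(full_list)
        self.current = set(full_list)
        return unfollowed
```

The unfollow/refollow case falls out naturally: `ever_followed` remembers the user, so the second follow event returns `False` and no duplicate alert fires.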

Even for more important events where up-to-date information is needed, such as a follower-only giveaway, you can use a cache (updated with new followers via the webhook) to limit entrants, and then, when drawing a winner, perform a one-off lookup on that specific user to check their following status at the time of the draw.

@Dist I’ve written three responses and deleted them every time. Caching doesn’t solve this problem; it makes my user experience worse. Unfollows are important to the usability of my app, and viewers’ interaction with the app is high volume. Users who are not followers or subs need to have their functionality removed. It’s important to the streamers.

And caching doesn’t help with the fact that it takes users over an hour to gather all the data necessary to run the app for the first time.

I tested this when Helix was first published, and it was behaving correctly according to the docs. I just tested three different scenarios again with my library.


  • Test 1: providing 2 Bearer tokens, each made with a different Client ID.
  • Test 2: providing 2 Bearer tokens, each made with the same Client ID.
  • Test 3: providing 2 different Client IDs.


  • Test 1: Each Bearer token was able to make 120 requests, independent of the other.
  • Test 2: Only 120 requests total could be made between the two Bearer tokens.
  • Test 3: Only 30 requests total could be made between the two Client IDs.

In summary: when Bearer tokens are provided, rate limiting is based on Client ID. When only Client IDs are provided, rate limiting is based on IP.
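Put another way, the limiter key those three tests imply looks like this (a sketch of the observed behavior, not official documentation):

```python
def rate_limit_bucket(client_id, ip, bearer_token=None):
    """Return (bucket key, requests allowed per 60 s) as the tests suggest:
    per Client ID at 120 when a Bearer token is sent, otherwise per IP at 30."""
    if bearer_token:
        return ("client-id", client_id), 120
    return ("ip", ip), 30
```

That explains all three results: tests 1 and 2 keyed on Client ID (independent vs. shared budgets), and test 3, with no tokens, collapsed both Client IDs into one 30-request IP bucket.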


@Six By IP is better than by client ID. Why would it be that way? I’ll send you screenshots of my tests to show you that it’s always by IP address. ETA probably 30 minutes for me to reproduce it all.

See my original post; I added a third test.

How is by IP better? If it were by IP, you would be stuck regardless of how many tokens or Client IDs you have. At least basing it on Client IDs offers some room to expand.

@Six Assume you aren’t trying to cheat the system. Let’s say you’re making a mobile phone app. You make one client ID, and each user auths their own account for API usage. If the limit is by IP address, one user cannot hurt other users. If it’s by client ID, then one user can destroy your app for everyone using it. Limits by client ID do not scale. Limits by IP at least scale somewhat, albeit limited to the functionality you can supply.

Being caught using 10–50 client IDs can get you permanently banned on some API services once they realize you’re using multiple client IDs.

Seems like you’re approaching this one-sided. Think about people who host web servers offering Twitch functionality. Everyone who uses the service goes through the same host server, i.e., the same IP, restricting them all to one pipeline.

If scaling is that much of an issue, just have each of your users register the app with their own Client ID. It’s not hard to do.

You say it’s not hard to do because you’re on a developer forum. It was a struggle simply to get people to enter their channel name into a form field correctly. We’re talking daily support tickets. So now it’s automatic: they click a button and, poof.

And I am speaking from my own problem set. I currently don’t have a single server, though I understand that problem set as well. But that’s Twitch’s rate-limiting problem. It SHOULD be by OAuth token, as your original reply to my comment said. Or do what YouTube does and give an exceptionally high rate limit per client ID (2 million?).

From my personal tests, which I’m going to provide, any mixture of multiple client IDs and OAuth tokens gets me blocked as soon as I hit 120 requests.

Well, until a Helix dev comes by to explain why they decided on this design choice, we’re just going to have to accept it for what it is.

As for “proof” of my first test, here’s a screenshot of two asynchronous calls successfully making 120 requests each: