Looking for an optimal setup for a stream directory

So I’m creating a stream directory for a site. Its flow is something like this:

  • Curator submits a Twitch stream name.
  • The stream name is used on the backend to retrieve user details and create a record in the local database.
  • The user details are used to create a stream up/down notification subscription.
  • When the stream directory root page is hit, I fetch the stored Twitch user_ids from the database and use them to retrieve the live streams.
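The submission flow above can be sketched roughly like this. Note that `fetch_user`, `save_record` and `subscribe` are placeholders for your Twitch client, database layer and EventSub subscription call; none of these names come from a real API.

```python
def register_stream(stream_name, fetch_user, save_record, subscribe):
    """Resolve a submitted stream name to Twitch user details, persist
    them locally, and subscribe to stream up/down notifications."""
    user = fetch_user(stream_name)   # e.g. a Get Users lookup by login
    save_record(user)                # create the local database record
    subscribe(user["id"])            # set up the up/down subscription
    return user["id"]
```

The benefit of injecting the three dependencies is that the flow is trivial to unit-test with fakes before wiring in the real Twitch client.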

The last point is a bit of a hang-up because of the limited number of API calls a single client ID can make. If, say, 30 people all hit the stream root page at the same time, that’s roughly 30 API calls. Some solutions:

  • I can cache the result in the backend using Redis and clear the cache if I get a stream up/down notification.

My only problem with this is that I’m doing server-side pagination, which slightly complicates things. I guess I could send all the records and paginate on the client side.
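For what it’s worth, caching and server-side pagination can coexist: cache the full live-streams list once, serve paginated slices out of the cache, and drop the cached list when a stream up/down notification arrives. A minimal sketch, using a dict with a TTL as a stand-in for Redis (swap in redis-py `get`/`set`/`delete` for the real thing; `fetch_live_streams` is a placeholder for your one batched Get Streams call):

```python
import time

class LiveStreamCache:
    """Cache the full live-stream list; paginate from the cache."""

    def __init__(self, fetch_live_streams, ttl=60):
        self._fetch = fetch_live_streams   # one batched API call
        self._ttl = ttl
        self._store = {}                   # key -> (expires_at, value)

    def _get_all(self):
        entry = self._store.get("live")
        if entry and entry[0] > time.time():
            return entry[1]                # cache hit: no API call
        streams = self._fetch()            # cache miss: 1 API call total
        self._store["live"] = (time.time() + self._ttl, streams)
        return streams

    def page(self, page, per_page=10):
        """Server-side pagination over the cached list."""
        start = (page - 1) * per_page
        return self._get_all()[start:start + per_page]

    def invalidate(self):
        """Call this from the stream up/down webhook handler."""
        self._store.pop("live", None)
```

With this shape, 30 simultaneous visitors paging through the directory cost at most one API call per TTL window (or per webhook invalidation), regardless of page number.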

Any suggestions would be appreciated.

If you create an App Access Token, you get 120 API calls.

Or you can make the client/browser do the API calls, since the limits are per Client-ID/IP pair, so each browser using your site gets its own 30 requests. But I wouldn’t do that myself; I’d cache in Redis and use that data.

This is the best option, and it’s what I do. I grab the incoming stream webhook and throw it in Redis, then refer to Redis instead of the API. It’s also quicker and more performant.
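The write-through approach described above can be sketched like so. The dict stands in for Redis, and the event fields are simplified placeholders rather than the exact EventSub payload shape:

```python
def handle_stream_event(cache, event):
    """Apply an incoming stream up/down notification to the cache."""
    broadcaster_id = event["broadcaster_user_id"]
    if event["type"] == "stream.online":
        cache[broadcaster_id] = event      # e.g. HSET in real Redis
    else:
        cache.pop(broadcaster_id, None)    # e.g. HDEL in real Redis

def live_streams(cache):
    """What the directory root page reads: zero Twitch API calls."""
    return list(cache.values())
```

Since the webhooks keep the cache current, there is nothing to invalidate; page loads are pure Redis reads no matter how many visitors hit the root page at once.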

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.