Error requesting data via ""

This is the script I’m using; it’s written in Python 3.6.

import requests

link = '' + temp
accept    = 'application/vnd.twitchtv.v5+json'
client_id = 'xxxxxxxxxxxxxxxxxxxxxxxx'
r = requests.get(link, headers={'Client-ID': client_id, 'Accept': accept})

temp is a comma-separated list of channel IDs
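For reference, a comma-separated ID list like `temp` can be built with `str.join` (the channel IDs below are made-up examples, not real ones):

```python
# Build the comma-separated channel-ID string passed to the endpoint.
channel_ids = ["21167655", "132817946", "406599498"]  # example IDs
temp = ",".join(channel_ids)
print(temp)  # 21167655,132817946,406599498
```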

After about 15 hours, the script crashed with this error.

File "C:\Users\Alessandro\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\", line 449, in send
    timeout=timeout
File "C:\Users\Alessandro\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
File "C:\Users\Alessandro\AppData\Local\Programs\Python\Python37\lib\site-packages\urllib3\util\", line 398, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /kraken/streams/channel=21167655,132817946,406599498,222337332,151678267,124318726,418867795,408608813,408608813 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x0000026C81BE4BE0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

What am I doing wrong? Thanks for your time.

This indicates that, for whatever reason, you performed a DNS lookup for and it failed.

It’s roughly equivalent to a 5xx error code. Just try again.
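A minimal retry-with-backoff sketch of that advice (the URL and headers here are placeholders, since the real Kraken URL is elided in the post; `requests` is assumed):

```python
import time
import requests

def get_with_retry(url, headers, attempts=5, backoff=2.0):
    """Retry transient failures (DNS errors, timeouts, non-200s) with exponential backoff."""
    for attempt in range(attempts):
        try:
            r = requests.get(url, headers=headers, timeout=10)
            if r.status_code == 200:
                return r
        except requests.exceptions.RequestException:
            pass  # covers ConnectionError (incl. getaddrinfo failures) and Timeout
        time.sleep(backoff * (2 ** attempt))
    return None  # caller decides what to do: record "no data" and move on
```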

I thought about your answer, but the problem seems to be that I’m requesting too much data.
I mean, I was convinced that the API was created for that, but maybe I’m requesting too much data? Or what else?

How can I bypass this problem? You said “just try again”, but this is a script that runs continuously and keeps requesting data through the API… I cannot “try again”.

As Barry has said, a getaddrinfo failed error would indicate an issue on your side with resolving the address, not an issue on Twitch’s side.

Also, when you say you’re continuously asking for data, how frequently are you hitting the endpoint? All Twitch API endpoints use caching so if you’re requesting the same resources more frequently than once a minute you’ll be getting back cached data some of the time and just wasting requests, and if you’re only waiting a few seconds between requests you can even get erroneous data by hitting different cache servers and getting data out of order.
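That once-a-minute cadence can be enforced with a simple loop that sleeps for the remainder of each minute (`fetch_streams` is a placeholder for your own request code, not an API function):

```python
import time

POLL_INTERVAL = 60  # seconds; Twitch's server-side caching makes faster polling pointless

def remaining_sleep(started, now, interval=POLL_INTERVAL):
    """Seconds left until the next poll is due, never negative."""
    return max(0.0, interval - (now - started))

def poll_forever(fetch_streams):
    while True:
        started = time.monotonic()
        fetch_streams()  # your requests.get(...) call goes here
        time.sleep(remaining_sleep(started, time.monotonic()))
```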


Your application needs to be fault-tolerant of request failures/API faults/non-200 codes and be able to retry the same request.

This includes, but is not limited to:

  • Twitch Outages
  • Issues with your firewall preventing outbound requests
  • getaddrinfo failing to obtain DNS information
  • Cloudflare breaking again
  • DNS failing
  • The Twitch API returning a non-200 code
  • Your disk getting full and thus failing the request
  • Your datacenter losing network connectivity.

For each of these, the only thing you can do is retry the request.

Either mark down a “no data” for this fetch, or retry.
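That “no data” strategy can be sketched like this (the `fetch_cycle` name and `history` list are illustrative, not from the original script):

```python
def fetch_cycle(fetch, history):
    """Run one fetch; append its result, or None ("no data") when it failed."""
    try:
        data = fetch()
    except Exception:  # any request/API fault for this cycle
        data = None    # record "no data"; the next cycle retries anyway
    history.append(data)
    return data
```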


I’ve googled my problem for a few minutes and found out that I should use fake User-Agent rotation for this kind of problem. Is this worth it? Is this legal? Maybe a cooldown of 1 minute will solve the problem?

My script has a cooldown of 10 seconds, and every cycle I make around 5-10 endpoint hits.

This won’t solve your getaddrinfo issue.

Nor will it bypass any caching Twitch has in place, as it’ll go “well, you are clearly trying to bypass the cache” and serve you a cached response.

The cache is server side, not client side.

This is too fast; you shouldn’t be polling any quicker than once per minute.

There’s no point polling the streams endpoint quicker than once per minute, as you are just wasting CPU cycles; you’ll just get the same response.

Furthermore, you’ll end up getting messy debounces when the stream starts/ends if you hit different servers in the pool behind the load balancer by polling quicker than once a minute.
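One common way to smooth those debounces (my own suggestion, not anything Twitch documents) is to only act on a live/offline change after seeing the same status for several consecutive polls:

```python
def make_debouncer(required=3):
    """Only report a status change after `required` identical observations in a row."""
    state = {"stable": None, "candidate": None, "count": 0}

    def observe(status):
        if status == state["stable"]:
            # Same as the accepted status: discard any pending change.
            state["candidate"], state["count"] = None, 0
        elif status == state["candidate"]:
            state["count"] += 1
            if state["count"] >= required:
                state["stable"] = status  # change confirmed
                state["candidate"], state["count"] = None, 0
        else:
            # New candidate status: start counting from one.
            state["candidate"], state["count"] = status, 1
        return state["stable"]

    return observe
```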


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.