
Rate limit exceeded which should have already been handled #256

Open · xEcEz opened this issue Oct 24, 2018 · 3 comments

xEcEz (Contributor) commented Oct 24, 2018

I've run into this issue a couple of times now. For context, I'm performing fetching tasks in a multiprocess environment. It's quite rare, but still quite bad for my use case when it happens.
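
Roughly, each worker process does something like this (a simplified sketch, not my actual job; the names, region, and pool size are illustrative):

    import cassiopeia as cass
    from multiprocessing import Pool

    def fetch_match_ids(name):
        # Each worker process builds its own Cass objects, so rate-limiter
        # state lives per process rather than being shared across workers.
        summoner = cass.Summoner(name=name, region="EUW")
        return [match.id for match in summoner.match_history]

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            results = pool.map(fetch_match_ids, ["..."])  # real names elided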

...
  File "/usr/lib/python3.6/site-packages/merakicommons/ghost.py", line 90, in __get__
    obj.__load__(load_group)
  File "/scripts/cassiopeia/core/common.py", line 281, in __load__
    data = configuration.settings.pipeline.get(type=self._load_types[load_group], query=query)
  File "/usr/lib/python3.6/site-packages/datapipelines/pipelines.py", line 459, in get
    return handler.get(query, context)
  File "/usr/lib/python3.6/site-packages/datapipelines/pipelines.py", line 185, in get
    result = self._source.get(self._source_type, deepcopy(query), context)
  File "/usr/lib/python3.6/site-packages/datapipelines/sources.py", line 120, in get
    return source.get(type, deepcopy(query), context)
  File "/usr/lib/python3.6/site-packages/datapipelines/sources.py", line 69, in wrapper
    return call(self, query, context=context)
  File "/usr/lib/python3.6/site-packages/datapipelines/queries.py", line 323, in wrapped
    return method(self, query, context)
  File "/scripts/cassiopeia/datastores/riotapi/match.py", line 247, in get_match_timeline
    data = self._get(url, {}, app_limiter=app_limiter, method_limiter=method_limiter)
  File "/scripts/cassiopeia/datastores/riotapi/common.py", line 243, in _get
    raise new_error from error
RuntimeError: Encountered an HTTP error code 429 with message "Rate limit exceeded" which should have already been handled. Report this to the Cassiopeia team.

xEcEz changed the title from 'Rate limit exceeded" which should have already been handled' to 'Rate limit exceeded which should have already been handled' on Oct 24, 2018

jjmaldonis (Member) commented

I haven't responded because this is a tough one. We need to be able to reproduce the issue in order to fix it, and ideally to get into the debugger just before the exception is thrown. Since it seems to be a rare problem it's hard to reproduce, and since it involves multiprocessing, which we sort of support but don't want to guarantee, I'm hesitant to dump a week's worth of free time into reproducing, debugging, and fixing this issue without significantly more information.
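
If anyone who hits this can reproduce it locally, a post-mortem hook is a quick way to land in the debugger right where the exception is raised (a sketch; run_fetch_job is a stand-in for whatever code drives the fetches):

    import pdb

    try:
        run_fetch_job()  # stand-in for the code that triggers the fetches
    except RuntimeError:
        # Inspect the frames of the escaped 429 right where it was raised.
        pdb.post_mortem()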

xEcEz (Contributor, author) commented Nov 2, 2018

Thanks for your answer. I totally understand your point, as I'm aware that only a few people use Cass in a multiprocessing environment. I've also refrained from posting other rare issues I've been encountering, typically with locks, as I believe they are also due to multiprocessing...
It's tough for me to debug too, as I run jobs on the cloud and have no idea where to start looking for what triggers the errors.

That said, an error while a job is running is a real problem for me, as it means one of the processes gets killed and the whole job slows down. I could try to catch the different kinds of errors (a sketch of what I have in mind is below), but more than anything I'd be interested in finding their causes.
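
Something along these lines (with_retries is a hypothetical helper of mine, not part of Cass):

    import time

    def with_retries(fetch, attempts=3, backoff=5.0):
        # Retry when a 429 escapes Cass's rate limiter as a RuntimeError.
        for attempt in range(attempts):
            try:
                return fetch()
            except RuntimeError as error:
                if "429" not in str(error) or attempt == attempts - 1:
                    raise
                time.sleep(backoff * (attempt + 1))  # crude linear backoff

    # e.g.: matches = with_retries(lambda: list(summoner.match_history))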

xEcEz (Contributor, author) commented Nov 7, 2018

Not sure this will help, but it happened again:

RuntimeError: Encountered an HTTP error code 429 with message "Rate limit exceeded" which should have already been handled. Report this to the Cassiopeia team.
at _get (/scripts/cassiopeia/datastores/riotapi/common.py:243)
at get_match_list (/scripts/cassiopeia/datastores/riotapi/match.py:154)
at wrapped (/usr/lib/python3.6/site-packages/datapipelines/queries.py:323)
at wrapper (/usr/lib/python3.6/site-packages/datapipelines/sources.py:69)
at get (/usr/lib/python3.6/site-packages/datapipelines/sources.py:120)
at get (/usr/lib/python3.6/site-packages/datapipelines/pipelines.py:185)
at get (/usr/lib/python3.6/site-packages/datapipelines/pipelines.py:459)
at generate_matchlists (/scripts/cassiopeia/datastores/ghost.py:364)
at __next__ (/usr/lib/python3.6/site-packages/merakicommons/container.py:365)
at __iter__ (/usr/lib/python3.6/site-packages/merakicommons/container.py:354)
at _generate (/usr/lib/python3.6/site-packages/merakicommons/container.py:377)
at __len__ (/usr/lib/python3.6/site-packages/merakicommons/container.py:360)

The original len() call is made on a Summoner.match_history item. Sadly, it's still very difficult for me to pinpoint the exact context that gives rise to this, as it happens on a remote job...
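
Concretely, the trace above corresponds to a call shaped like this (summoner details elided; the region is illustrative):

    from cassiopeia import Summoner

    summoner = Summoner(name="...", region="EUW")  # actual summoner elided
    # len() forces the lazy match_history generator to page through the
    # Riot API, which is where the 429 surfaces.
    count = len(summoner.match_history)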
