This runs fine and gives the expected results.
My problem is that our repo seems to be really large, and while running that process Danger makes about 1,900 calls to the GitHub API, which runs us into the rate limit really fast.
Now I wanted to ask: can I somehow avoid that?
I am not really sure where to start to avoid the GitHub API calls.
Thanks in advance!
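As a side note, you can see how fast a run burns through the quota with GitHub's `GET /rate_limit` REST endpoint (a real endpoint; the payload shape below follows its documented response, and the summarizing helper is just a sketch):

```javascript
// Summarize the JSON body returned by GitHub's GET /rate_limit endpoint.
// The HTTP fetch itself is omitted here; `resources.core` is the quota
// that REST calls like the ones Danger makes count against (5,000/hour
// for an authenticated token).
function summarizeRateLimit(body) {
  const core = body.resources.core;
  const resetAt = new Date(core.reset * 1000).toISOString();
  return `${core.remaining}/${core.limit} core requests left, resets at ${resetAt}`;
}

// Example payload shape, per the GitHub REST docs:
const example = {
  resources: {
    core: { limit: 5000, used: 1900, remaining: 3100, reset: 1700000000 },
  },
};
```

With roughly 1,900 calls per run, two or three runs inside the same hour would exhaust an authenticated token's 5,000-request core quota, which matches the symptom described.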
This sounds like the code in the dangerfile is doing some overly intense work. Honestly, I'd start there: is it accidentally reading every file in the repo somehow?
Try the work wrapped up in #991, which verifies that you're running on the right commit and uses the fs instead of the GitHub API for read access to files. That only works if you're running Danger on CI, but that covers the 90% use case.
Hey,
I am running Danger standalone in a Kubernetes pod container.
The command I use to run Danger looks like this: