Merge branch 'sh-document-strace-unicorn' into 'master'

Document how to troubleshoot internal API calls

See merge request gitlab-org/gitlab-ce!15337
Achilleas Pipinellis 2017-11-20 08:56:49 +00:00
commit cbee84ca29
1 changed file with 28 additions and 0 deletions

@@ -163,6 +163,34 @@ separate Rails process to debug the issue:
1. In a new window, run `top`. It should show this ruby process using 100% CPU. Write down the PID.
1. Follow step 2 from the previous section on using gdb.
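
For reference, a quick sketch of that `gdb` attach pattern (`<PID>` is the
process ID you wrote down; the commands to type at the `(gdb)` prompt are
shown as comments):

```shell
# Attach to the spinning process (root privileges are usually required)
sudo gdb -p <PID>

# Then, at the (gdb) prompt:
#   thread apply all bt   # dump backtraces for every thread
#   detach                # release the process
#   quit
```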

### GitLab: API is not accessible

This often occurs when gitlab-shell attempts to request authorization via the
internal API (e.g., `http://localhost:8080/api/v4/internal/allowed`), and
something in that check fails. There are many reasons why this may happen
(quick checks for each are sketched after the list):

1. Timeout connecting to a database (e.g., PostgreSQL or Redis)
1. Error in Git hooks or push rules
1. Error accessing the repository (e.g., stale NFS handles)
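
A minimal set of sanity checks for these causes, assuming an Omnibus
installation (`gitlab-rake` and `gitlab-rails` are Omnibus wrappers, and the
Redis class name below matches GitLab 9.3 and later):

```shell
# Overall health, including Git hooks and the gitlab-shell setup
sudo gitlab-rake gitlab:check

# PostgreSQL connectivity from the Rails environment (prints "true" when healthy)
sudo gitlab-rails runner 'puts ActiveRecord::Base.connection.active?'

# Redis connectivity (prints "PONG" when healthy)
sudo gitlab-rails runner 'puts Gitlab::Redis::SharedState.with { |redis| redis.ping }'
```
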
To diagnose this problem, try to reproduce it, then check with `top` whether
a Unicorn worker is spinning at high CPU. Try the `gdb` techniques above. In
addition, using `strace` may help isolate issues:

```shell
strace -tt -T -f -s 1024 -p <PID of unicorn worker> -o /tmp/unicorn.txt
```
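
Here, `-tt` prefixes each line with a microsecond-resolution timestamp, `-T`
appends the time spent in each system call, `-f` follows forked child
processes, and `-s 1024` prints up to 1024 characters of each string argument
instead of the default 32.
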
If you cannot isolate which Unicorn worker is the issue, try to run `strace`
on all the Unicorn workers to see where the `/internal/allowed` endpoint gets
stuck:

```shell
# The [u]nicorn pattern stops grep from matching its own process entry
ps auwx | grep '[u]nicorn' | awk '{ print " -p " $2 }' | xargs strace -tt -T -f -s 1024 -o /tmp/unicorn.txt
```

The output in `/tmp/unicorn.txt` may help diagnose the root cause.
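
For example, you can search the trace for the authorization request, and use
the syscall durations that `-T` records (in angle brackets at the end of each
line) to spot the blocking call:

```shell
# Locate the authorization call in the trace
grep 'internal/allowed' /tmp/unicorn.txt

# Flag system calls that took 10 seconds or longer
grep -E '<[0-9]{2,}\.' /tmp/unicorn.txt
```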

# More information

* [Debugging Stuck Ruby Processes](https://blog.newrelic.com/2013/04/29/debugging-stuck-ruby-processes-what-to-do-before-you-kill-9/)