nginx uwsgi 502 bad gateway error

I am trying to get a deployment running on EC2 (currently on Ubuntu 9.10) and am running into a ton of errors while load testing. I am totally unsure whether there is a bug somewhere or if our configuration is just not great.

When we make requests we get a bunch of broken pipe errors in our uWSGI log, which cause bad gateway errors from nginx.

Could someone shed some light on what we might have done wrong? I'm attaching the nginx.conf, uwsgi.xml, an nginx.log snippet and a uwsgi.log snippet.

I'm under the tightest deadline ever; if anyone could come to my aid I would be forever in your debt.


Here is a sample log error:
{address space usage: 117727232 bytes/112MB} {rss usage: 37171200 bytes/35MB} [pid: 16159|app: 0|req: 44/172] 10.160.114.37 () {32 vars in 513 bytes} [Wed May 19 14:43:16 2010] GET /gavin-newsome/story/bet-you-didnt-see-one-coming/ => generated 0 bytes in 4846 msecs (HTTP/1.1 200) 7 headers in -1 bytes (0 async switches on async core 0)

I'm on IM too:

yahoo: iamquarry
aim: loudenwain

please help? :)

Dan


  #2  
20-05-2010 08:10 AM
Administrator
 

On Wed, 19/05/2010 at 14:55 -0700, Dan McComas wrote:
> I am trying to get a deployment on ec2 running currently on ubuntu
> 9.10 and am running into a ton of errors while load testing. I am
> totally unsure of whether there is a bug somewhere or if our
> configuration is not great.
>


Hi Dan, you are stressing your system with too high a load for your
setup.

In your setup nginx is trying to connect to the uWSGI server, but (as it
is still busy with previous requests) it will not answer nginx.

Then nginx will time out.

If you stop your test, you will see nginx and uWSGI start working
again.

If you really want to increase nginx's tolerance you have to increase
the upstream connect timeout using the option

uwsgi_connect_timeout

Set it (for example) to 30 seconds.
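
In nginx.conf that would look roughly like this (a minimal sketch only; the location block, the uwsgi_params include and the socket path are assumptions, not taken from Dan's actual config):

    location / {
        include uwsgi_params;
        # placeholder socket; point this at whatever your uwsgi.xml binds to
        uwsgi_pass unix:/tmp/uwsgi.sock;
        # give uWSGI more time to accept the connection while it is
        # still busy with earlier requests
        uwsgi_connect_timeout 30s;
    }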

You can also increase the backlog queue of the uWSGI socket (but this is
not a good choice) using the -l flag.

You can also try adding more worker processes in uWSGI (but remember to
use -M).
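
Put together on the command line, that advice looks roughly like this (a sketch only; the socket path, worker count and backlog value are placeholders, not Dan's settings):

    # -s   bind the uWSGI socket
    # -M   enable the master process
    # -p   number of worker processes
    # -l   listen backlog of the socket (the last-resort knob above)
    uwsgi -s /tmp/uwsgi.sock -M -p 4 -l 100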


--
Roberto De Ioris
http://unbit.it


  #3  
20-05-2010 04:25 PM
Administrator
 

Could you recommend a number of processes per core?






  #4  
20-05-2010 05:30 PM
Administrator
 

On Thu, 20/05/2010 at 08:25 -0700, Dan McComas wrote:
> Could you recommend a number of processes per core?


It is application dependent, but as Django is often in a waiting state
(waiting for query results), I think that a couple of processes per core
should be enough.
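
As a rough sketch of that rule of thumb (the core-count detection and the socket path are assumptions, not part of the reply):

    # a couple of workers per core, e.g. 2 cores -> 4 processes
    CORES=$(grep -c ^processor /proc/cpuinfo)
    uwsgi -s /tmp/uwsgi.sock -M -p $((CORES * 2))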

Original source: https://www.cnblogs.com/lexus/p/1826551.html