Hello,
this is driving me crazy. I have a download website written in PHP that uses cURL to fetch files from remote servers, but the httpd connections are never terminated: they stay in a sleeping state, which is killing my server's resources. Here is the top output:

top - 20:55:36 up 4 days, 13:41, 1 user, load average: 1.99, 5.73, 10.47
Tasks: 2207 total, 5 running, 2202 sleeping, 0 stopped, 0 zombie
Cpu(s): 24.1%us, 1.5%sy, 0.0%ni, 73.3%id, 0.0%wa, 0.0%hi, 1.1%si, 0.0%st
Mem: 4045976k total, 4000712k used, 45264k free, 1448k buffers
Swap: 8385920k total, 2353584k used, 6032336k free, 30336k cached

I traced one PID with strace -p 22254 -s 80 -o /tmp/debug.lighttpd.txt and got the output below. I'm not sure how to read it, but it looks like it is just polling over and over:

clock_gettime(CLOCK_MONOTONIC, {10396, 471413333}) = 0
poll([{fd=24, events=POLLOUT}], 1, 725) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {10397, 196905333}) = 0
clock_gettime(CLOCK_MONOTONIC, {10397, 196955333}) = 0
poll([{fd=24, events=POLLOUT}], 1, 1000) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {10398, 197890333}) = 0
clock_gettime(CLOCK_MONOTONIC, {10398, 197937333}) = 0
poll([{fd=24, events=POLLOUT}], 1, 1000) = 0 (Timeout)

Are there any HTTP headers I'm missing, or is there any way to terminate those connections? Any ideas?

I will add that as well, but I don't think it will change much.

OK, try also setting CURLOPT_FORBID_REUSE: http://php.net/manual/en/function.curl-setopt.php
For example, if you have a group of files to download from the same source, add this option only on the last iteration of the loop, so cURL can reuse the previous connection for the earlier files and your script runs faster.
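
Roughly like this, just as a sketch; the URLs, save paths and timeout values are placeholders you should adapt to your own script:

<?php
// Download a group of files from the same host, reusing one curl handle,
// and tell curl to drop the connection for good on the last transfer.
$files = array(
    'http://example.com/file1.zip' => '/tmp/file1.zip',
    'http://example.com/file2.zip' => '/tmp/file2.zip',
    'http://example.com/file3.zip' => '/tmp/file3.zip',
);

$ch   = curl_init();
$left = count($files);

foreach ($files as $url => $dest) {
    $fp = fopen($dest, 'wb');

    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // give up if the remote host never answers
    curl_setopt($ch, CURLOPT_TIMEOUT, 600);       // hard cap on the whole transfer

    // only on the last file: refuse to keep the connection alive afterwards
    if (--$left === 0) {
        curl_setopt($ch, CURLOPT_FORBID_REUSE, true);
    }

    curl_exec($ch);
    fclose($fp);
}

curl_close($ch);

The two timeout options are my own addition on top of the reuse trick; they are not required for it, they just make sure a dead transfer cannot hang forever.
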
Bye!

I used that as well, but the sleeping httpd connections keep increasing. I'm not 100% sure my download script is even the cause. How can I find out, from the SSH command line, which file is initiating these connections?

After all this time I think it is an attack: there is one IP with over 1,900 open connections. How can I ban it from the command line over SSH?

If these connections are related to the httpd service, check your web server's access.log and error.log; if you are using Apache, they are under /var/log/apache2/. From there you can see what kind of requests are coming in and whether anything looks strange.
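
If it is easier for you to stay in PHP, a small script like the one below can count the requests per client IP straight from the access log. The path is only the usual Debian/Ubuntu default and the parsing assumes the standard common/combined log format, so adjust both to your setup; run it from the command line, normally as root, since the log is usually not world-readable.

<?php
// Tally requests per client IP from the Apache access log.
// Reads line by line so it also works on big log files.
$log    = '/var/log/apache2/access.log';
$counts = array();

foreach (new SplFileObject($log) as $line) {
    // in the common/combined formats the client IP is the first field
    $ip = strtok((string) $line, ' ');
    if ($ip !== false && $ip !== '') {
        $counts[$ip] = isset($counts[$ip]) ? $counts[$ip] + 1 : 1;
    }
}

arsort($counts); // busiest clients first

foreach (array_slice($counts, 0, 10, true) as $ip => $hits) {
    echo $ip . "\t" . $hits . "\n";
}

The IP at the top with a huge count is usually your candidate.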

Use nslookup to check the source IP: it could be a spider. If that is the problem, you can use a robots.txt file to stop the requests.
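
If you would rather do the check from PHP instead of nslookup, gethostbyaddr() gives you the same reverse-DNS answer; the IP below is only a placeholder:

<?php
// Reverse-DNS check on a suspicious client address.
// 203.0.113.45 is a documentation/placeholder IP, put the real one here.
$ip   = '203.0.113.45';
$host = gethostbyaddr($ip); // returns the host name, or the IP itself if there is no PTR record

echo $ip . ' resolves to: ' . $host . "\n";

// A legitimate crawler usually resolves to the search engine's own domain
// (for example *.googlebot.com); in that case a robots.txt rule is normally
// enough to make it back off.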

Otherwise, to block an IP you have to add a rule to the server firewall. You can use iptables directly or the ufw front end, which does the same thing but is easier. Check the documentation, and be careful not to lock your own IP out of SSH.
