Hi all,
I'm writing a Qt SIGNAL/SLOT-capable class so that I can work with raw sockets in a Qt application. To achieve this, I'm using a QSocketNotifier to monitor the socket's incoming-data events in a signal/slot fashion.
Now the problem: entering the event loop causes the process's CPU utilization to rise to 100%, blocking every event, and the QSocketNotifier signal activated(int socket) is never emitted. If I remove the QSocketNotifier declaration, the loop behaves normally.
Here is a reduced, compilable example that reproduces the unwanted behavior.
(The program creates an ICMP raw socket; to generate ICMP traffic, just ping yourself at 127.0.0.1.)
NOTE: a raw socket requires root privileges, so run the test program as root.
/*
* File: main.cpp
* Author: root
*
* Created on 29 August 2011, 17:28
*/
#include <QCoreApplication>
#include <QSocketNotifier>
#include <sys/socket.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <errno.h>
int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    int sock = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (sock == -1) {
        qDebug("Socket creation failure");
        return -10;
    }

    //ALSO TRIED IN NON-BLOCKING MODE; SAME RESULTS
    //fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    //THESE LINES MAKE THE PROGRAM BLOCK FOR THE FIRST AVAILABLE ICMP PACKET.
    //COMMENT THEM OUT IF YOU DO NOT WISH TO WAIT FOR THE FIRST PACKET OUTSIDE THE EVENT LOOP.
    unsigned char buff[1024];
    int length = recv(sock, buff, sizeof(buff), 0);
    qDebug("Received: %d", length);

    //THE GUILTY LINE: COMMENT THIS OUT TO SEE A NORMAL, CALM, EMPTY EVENT LOOP
    QSocketNotifier notifier(sock, QSocketNotifier::Read);

    return app.exec();
}
I've monitored the process with strace; here is the point where the main loop starts:
pipe2([5, 6], O_NONBLOCK|O_CLOEXEC) = 0
rt_sigaction(SIGCHLD, {0xb6acbf20, [], SA_NOCLDSTOP}, {SIG_DFL, [], 0}, 8) = 0
socket(PF_INET, SOCK_RAW, IPPROTO_ICMP) = 7
clock_gettime(CLOCK_MONOTONIC, {16395, 460509112}) = 0
poll([{fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 2, 0) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {16395, 460708575}) = 0
poll([{fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 2, -1) = 1 ([{fd=7, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {16395, 752324322}) = 0
poll([{fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 2, -1) = 1 ([{fd=7, revents=POLLIN}])
...
From there on, the trace floods with repetitions of the last two lines.
I've tried changing the family, type and protocol during socket creation to:
int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
and this is the result:
pipe2([5, 6], O_NONBLOCK|O_CLOEXEC) = 0
rt_sigaction(SIGCHLD, {0xb6aaff20, [], SA_NOCLDSTOP}, {SIG_DFL, [], 0}, 8) = 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 7
clock_gettime(CLOCK_MONOTONIC, {16759, 34270708}) = 0
poll([{fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 2, 0) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {16759, 34399931}) = 0
poll([{fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 2, -1
In this case, the program blocks waiting for incoming events, producing no further output and consuming no CPU.
As you can see, there is no significant change in the call sequence.
I hope someone can spot the trick!
Thanks,
Regards,
Gianluca