While stress testing the fix8 engine (open source version) in our environment, we noticed something odd: spikes in processing times. We calculate latency inside the application by comparing a message's sending time with the time we receive the message in the application.
What is strange is that the spikes occur at seemingly random intervals, and that they peak at around 80 ms in this load test.
Also, when we decreased the load on the application slightly (by around 1000 messages/s) the spikes disappeared.
We are using the 'threaded' processing model. With that model we observed the TCP buffer size on the machine, and it showed that the application was not draining the buffer fast enough - we saw increased buffer occupancy at the times of the spikes.
Also, it seems the rate we were using, about 11000 messages/s, was close to the fix8 limit: when we increased the rate to 12000 messages/s the latency became much more consistent, staying close to 80 ms. We are using quite large messages in this load test, and they are almost identical to each other. Both server and client were located on the same host during the test. The messages are spread evenly throughout each second; they do not arrive in bursts.
Please see the attached screenshot for the latency results (the green line is latency, the orange line is the message rate).
Any hints on this?
This is our compilation setup:
export CXXFLAGS="-O3 -flto"
./configure --prefix=/.../1.3.4/ --with-mpmc=tbb --enable-doxygen=no --enable-fillmetadata=no --with-tbb=.../4.4up5/ --with-poco=.../1.7.4 --enable-doxygen=no --with-thread=stdthread
Any hints as to root cause?
Is it just that the open source version is not fast enough for this case?