polymon was developed on FreeBSD. Most likely, Linux ncurses is not compatible with BSD ncurses, or the code is not using ncurses in a portable way.
We will look into porting polymon, but this will not be a high-priority task. We wish there were something more portable than ncurses, but we do not know of anything suitable for our purposes. Note that you can run the tests on Linux and polymon on FreeBSD -- they just use TCP and UDP to talk to each other.
Yes, the server's phase schedule is not synchronized with the client side's. We may have enough time to add code to "fix" this, but I am not sure. This synchronization feature has been missing since the very first release of Polygraph...
While this is very annoying for log analysis, it should have absolutely no effect on the test itself: the server side does not change its performance depending on the phase. You just need to specify --idle_tout on the server side (e.g., --idle_tout 5min; the exact value is up to you) so that the server waits for the clients to quit first.
If you notice any server side performance differences caused by the phase schedule, please let us know!
It was not an oversight.
Currently, there is no way to vary cachability at run time. Implementing such a feature is difficult because, unlike hit ratio or request rate, overall cachability depends on the individual cachability of objects, and the latter should not change during a test. That is, if an object is [un]cachable at the beginning of the test, then it should probably remain [un]cachable for the entire duration of the test. Preserving individual object cachability while changing the overall cachability ratio is difficult without storing the cachability status of individual objects.
Another, lesser challenge is that cachability is determined on the server side and, as you know, the phase schedule on the server side is not synchronized with the clients.
Given the factors above and the ultimate desire to make the fill as "natural" as possible, we left cachability constant.
Note that uncachable objects during the fill phase are "compensated" for with shorter ramp phases and with the fill phase integrated into the test. A constant fill rate and a proxy cache size calculated using the "configured cache capacity" may also help to reduce the total duration of the test sequence, depending on the product under test.
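For reference, cachability is configured as a fixed ratio in the workload's Content definitions. A minimal, hypothetical PGL sketch (the content name and values here are made up; the field names follow the PolyMix-style workloads):

    // cachability is a constant per-content ratio; it cannot vary by phase
    Content cntSimple = {
        size = exp(11KB);   // mean object size
        cachable = 80%;     // fraction of objects that are cachable
    };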
Yes, you can. Such a configuration does not meet the Cache-Off requirements, but you may want to do it if you are having ARP-related problems, until you can get them fixed. You'll have to edit polymix-3.pg to make it work.
//open_conn_lmt = 4; // open connections limit

req_rate = benchPolyMix3.peak_req_rate;

addr rbt_ips = [ '172.16.1.101' ];
addr srv_ips = [ '172.16.1.102' ];
This is a bug/oversight in Polygraph version 2.5.1. The PolyMix-3 working set size may be too large by a factor of three. This causes a lower than expected hit ratio.
The next release will have a PGL function to compute working set length given fill and peak rates. For now, please use a constant that gives the same working set size as 4 hours of peak request rate. If you make a mistake in calculations, it is not a problem. The "soft registration" report is not meant for reporting exact product performance. We just want to be sure that you can complete a PolyMix-3 test and we know what bugs to fix.
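As a hypothetical illustration of that calculation, with a peak request rate of 400 req/sec, 4 hours of peak request rate corresponds to:

    4 hours * 3600 sec/hour * 400 req/sec = 5,760,000 requests' worth of objects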
I know that there has been a limit of 1000 robots per machine, but we were wondering if, since our FreeBSD machines are P3 500s rather than P2 400s, it would be legitimate to run 1200 or 1250 robots per machine?
For custom workloads, any number of robots that your environment can support is just fine. For standard workloads, like PolyMix-2, we recommend that you obey the limits if you want to publish the results. It saves you from unknowingly overloading your PCs and from extra questions about the test validity.
Also, please note that increasing the number of robots per host increases host memory requirements, and additional OS tuning may be needed to accommodate the extra traffic.
Also, for the third cache-off, will the robot/server machines be better than P2 400s, and will the number of robots allowed per machine change? Thanks.
We have discussed that internally, but have not reached a conclusion yet. Clearly, any reasonable limit can be justified with some hand-waving. Perhaps we need more input from the vendors on this.
Deciding on a good upper limit is tricky for several reasons:
The various ``factors'' in phase configurations are ``sticky.'' That is, the next phase inherits factors from the previous phase unless the factors are explicitly specified again.
Take a look at the phase schedule table in polyclt console output to see actual factors.
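A hypothetical PGL sketch of this inheritance (the phase names and values are made up):

    Phase phRamp = { name = "ramp"; goal.duration = 5min;
        load_factor_beg = 0.1; load_factor_end = 1.0; };
    // phMeas does not set load_factor_*, so it inherits the last values
    // from phRamp and runs at a constant load factor of 1.0
    Phase phMeas = { name = "meas"; goal.duration = 1hour; };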
There are two better ways to do what you want:
You can use this formula:
    2 * ProxyCacheSize * 1024 * 1024
    --------------------------------
          0.6 * FillRate * 11
On the top, we have the cache size, in kilobytes, multiplied by two. On the bottom, we have the fill rate, multiplied by 0.6 and by the mean object size of 11 kilobytes. 0.6 is the approximate fraction of responses that are cachable misses during the fill phase.
NOTE: The fill phase duration also depends on the working set size. The working set must be ``primed'' before the fill phase ends. The working set size depends on the duration of the top2 phase. If the ProxyCacheSize is too small, the fill phase will take longer than given by the above formula.
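As a worked example (the 10 GB cache size and 400/sec fill rate are hypothetical numbers):

    2 * 10 * 1024 * 1024 KB     20,971,520 KB
    ----------------------- = --------------- ~= 7,944 sec, or about 2.2 hours
      0.6 * 400/sec * 11 KB      2,640 KB/sec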
The downtime test is separate from other tests, and uses its own workload file. The most recent one is named downtime-2.pg, which you can find in the Polygraph source distribution.
We use just one client (robot) and one server. The robot generates three requests per second.
After starting the test, watch the polygraph console output closely. There are four phases: warm, load, dark, and meas. When polygraph reaches the dark phase, we cut off all power to the proxy and the switches.
The dark phase lasts for five seconds. When polygraph reaches the meas phase, we restore the power.
The test can be stopped as soon as the caching proxy begins serving both cache misses AND cache hits. At this point, we can analyze polygraph's log files to determine the time between restoring the power and receiving the first cache miss and the first cache hit.