While doing some performance tuning, I needed to know how many flops (floating point operations) per cycle the system could handle.
I used SiSoftware’s Sandra benchmarking app. It told me that my Intel Pentium D Dual Core 1.8 GHz proc was producing 10.86 GFLOPS, but not the flops per clock cycle.
From this we know: a) the total GFLOPS (10.86), b) the number of cores (2), and c) the number of clock cycles per second (1.8 GHz).
The standard formula to determine total GFLOPS is:
Flops per cycle x # of cores x clock speed.
This involves four values:
a = flops per clock cycle
b = clock speed (GHz)
c = cores
n = GFLOPS
For a dual-core 3 GHz system with 4 flops per cycle, we can deduce 24 GFLOPS (a x c x b = n, or 4 x 2 x 3 = 24). But I only have the total GFLOPS, clock speed, and number of cores, so I need to rearrange the formula to solve for a:
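As a quick sanity check, the forward formula can be sketched in a few lines of Python (the function name total_gflops is my own, not from any benchmarking tool):

```python
def total_gflops(flops_per_cycle, cores, clock_ghz):
    # n = a x c x b: flops per cycle, times cores, times clock speed in GHz
    return flops_per_cycle * cores * clock_ghz

# Dual-core 3 GHz system doing 4 flops per cycle
print(total_gflops(4, 2, 3.0))  # -> 24.0
```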
a = n / b / c
Or in my case:
10.86 GFLOPS / 1.8 GHz / 2 cores = 3.02 flops per cycle (per core). So the E2610 chip at 1.8 GHz produces about 3 flops per cycle per core, or 6 flops per cycle total. Ta da.
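The rearranged formula works out the same way in code. A minimal sketch, using my measured numbers (the helper name flops_per_cycle is hypothetical):

```python
def flops_per_cycle(gflops, clock_ghz, cores):
    # a = n / b / c
    return gflops / clock_ghz / cores

# Measured 10.86 GFLOPS on a dual-core 1.8 GHz chip
result = flops_per_cycle(10.86, 1.8, 2)
print(result)  # roughly 3 flops per cycle per core
```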
Note: It’s worth mentioning that in this case, 10.86 GFLOPS and 1.8 GHz seem like closely related numbers, which might suggest you could quickly figure out how many GFLOPS a system can handle from its clock speed alone (i.e. 1.8 GHz equals 10.86 GFLOPS). This is not the case. In the first example of a dual-core 3 GHz proc producing 24 GFLOPS, you can’t deduce one from the other, because flops per cycle varies by chip. It was just a coincidence in my case, so don’t do that.