Running 10 Timed Loops on RT, each using the 1 MHz clock with a dt of 1000 (1000 microseconds, or 1 ms), all accessing an FGV (an array of doubles) by index, causes a massive increase in CPU usage. If I launch only 1 Timed Loop, the change in CPU usage is very small. Can you please explain why there is a non-linear relationship between CPU usage and the number of Timed Loops I launch, and what I can do to eliminate it?
NOTE:
The same thing happens on the PC, so it's not just an RT issue.
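To make the access pattern concrete: since an FGV is a non-reentrant subVI, every caller serializes on the same lock. In rough C terms (the mutex model and all names here are my own illustration of the pattern, not what LabVIEW actually generates), the ten loops are doing something like this:

```c
/* Illustrative sketch only: ten 1 ms periodic threads all reading
 * one element of a shared array through a single mutex-guarded
 * accessor, analogous to ten Timed Loops calling one FGV. */
#include <pthread.h>
#include <time.h>

#define NUM_LOOPS 10        /* number of "Timed Loops" */
#define ARRAY_LEN 100
#define PERIOD_NS 1000000L  /* dt = 1000 us = 1 ms */

static double shared[ARRAY_LEN];                      /* the FGV's data */
static pthread_mutex_t fgv_lock = PTHREAD_MUTEX_INITIALIZER;

/* Indexed read through the "FGV": every caller takes the same lock. */
static double fgv_index_read(int i)
{
    pthread_mutex_lock(&fgv_lock);
    double v = shared[i];
    pthread_mutex_unlock(&fgv_lock);
    return v;
}

static void *timed_loop(void *arg)
{
    int id = *(int *)arg;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        volatile double v = fgv_index_read(id % ARRAY_LEN);
        (void)v;

        /* Sleep until the next 1 ms boundary, like a Timed Loop. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_LOOPS];
    int ids[NUM_LOOPS];
    for (int i = 0; i < NUM_LOOPS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, timed_loop, &ids[i]);
    }
    for (int i = 0; i < NUM_LOOPS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```

With one thread the lock is always free; with ten, all of them wake on the same 1 ms boundary and pile up on the same lock, which is my guess as to where the non-linear cost comes from.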