In summary, the 'Risk' percentage is calculated roughly as the number of impacted tests in the processing queue divided by the number of unimpacted tests in the processing queue, with certain weightings applied to each.
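To make the ratio concrete, here is a minimal sketch in Python. The function name, the way weights are supplied, and the equal-weight example are my own illustrative assumptions; the actual weighting scheme used by the engine is not described here.

```python
# Minimal sketch of the 'Risk' ratio described above. The weighting
# scheme and the function name are illustrative assumptions only.

def risk_percentage(impacted_weights, unimpacted_weights):
    """Weighted impacted tests divided by weighted unimpacted tests
    in the processing queue, expressed as a percentage."""
    impacted = sum(impacted_weights)
    unimpacted = sum(unimpacted_weights)
    if impacted == 0:
        return 0.0                # nothing impacted, nothing at risk
    if unimpacted == 0:
        return 100.0              # everything left in the queue is impacted
    return min(100.0, 100.0 * impacted / unimpacted)

# Example: 3 impacted tests against 12 unimpacted tests, all weighted equally.
print(risk_percentage([1, 1, 1], [1] * 12))   # -> 25.0
```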
The rationale is that when you make a change to the codebase, the risk percentage gives you some idea of how risky the change was, and therefore how likely it is that a test will subsequently fail. You'll notice there is also a shaded area of the graph describing the risk; this shows how the risk is expected to change as the tests are progressively executed. If you make a change that impacts a number of fast-running tests, the risk will drop quickly (shown by the shaded area falling sharply).
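As a rough illustration of how that shaded projection can behave, the sketch below simulates the queue draining and recomputes the same illustrative impacted/unimpacted ratio after each completed test. The ordering assumption (shortest tests run first) and all names here are hypothetical, not the tool's actual scheduling logic.

```python
# Hypothetical projection of the shaded risk curve: simulate the queue
# draining (fastest tests assumed to run first) and recompute the
# illustrative impacted/unimpacted ratio after each completed test.

def projected_risk(impacted, unimpacted):
    """impacted / unimpacted: lists of (weight, duration_seconds) per queued test.
    Returns (elapsed_seconds, risk_percent) sample points as the queue drains."""
    queue = [(d, w, True) for w, d in impacted] + \
            [(d, w, False) for w, d in unimpacted]
    queue.sort()                                   # assumption: shortest tests first

    def risk(rest):
        imp = sum(w for _, w, hit in rest if hit)
        unimp = sum(w for _, w, hit in rest if not hit)
        if imp == 0:
            return 0.0
        return 100.0 if unimp == 0 else min(100.0, 100.0 * imp / unimp)

    elapsed, points = 0.0, [(0.0, risk(queue))]
    for i, (duration, _, _) in enumerate(queue):
        elapsed += duration
        points.append((elapsed, risk(queue[i + 1:])))
    return points

# Three fast impacted tests and three slower unimpacted tests: the risk
# starts high and falls sharply once the fast impacted tests have run.
for t, r in projected_risk([(1, 0.2), (1, 0.3), (1, 0.5)],
                           [(1, 2.0), (1, 3.0), (1, 5.0)]):
    print(f"t={t:4.1f}s  risk={r:5.1f}%")
```

In this toy example the risk falls from 100% to 0% within the first second of the run, which is the sharp drop in the shaded area described above.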
There are many changes planned for this feature in later releases. In retrospect, it should have been designed very differently. The risk metric is less valuable than originally believed, and there are future opportunities to make this view more configurable so it can show information that may be more valuable depending on your needs. Some time ago I actually wanted to remove it, but the feature has proved popular with a number of people who really wanted it kept in. Because this view continuously simulates an entire test execution run, it is also quite expensive in terms of processing power on larger solutions. You may find the engine is less responsive if you keep this view open with many tests in the processing queue.