It is counter-intuitive that you'd want to look at 20 displays, each (say) 20" @ 1600x1200 resolution (i.e. an 8000x4800-pixel display area), through glasses with 2x960x1080 pixels. OK, I agree, the view is going to be pretty dire and you will get neck ache, but it isn't as bad as viewing 8000x4800 at 960x1080 would suggest, because you can swing your head around and lean in to view any screen in more detail.
A 'resolution multiplier' of 37 (i.e. pixel count of the display area divided by the per-eye pixel count of the Rift) as suggested here is wayyy too much to be optimal, but when (not if) higher-resolution VR glasses become available the ratio for this demo will come crashing down.
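To make the arithmetic concrete, here is a tiny sketch (function name and layout are my own, not from the demo) that derives the multiplier of 37 from the numbers above:

```python
# Hypothetical helper: the 'resolution multiplier' described above is the
# total pixel count of the virtual display wall divided by the per-eye
# pixel count of the headset (Rift: 960x1080 per eye).
def resolution_multiplier(wall_w, wall_h, eye_w, eye_h):
    return (wall_w * wall_h) / (eye_w * eye_h)

# 20 monitors of 1600x1200, arranged 5 wide by 4 high = 8000x4800 pixels.
print(round(resolution_multiplier(8000, 4800, 960, 1080)))  # → 37
```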
To display the screens effectively, rather than having the screens static in the VR space (i.e. rendered on the inside of a sphere centered on the user's head), we should use a transform that enlarges the screen at the center of the user's view while compressing those at the edges. This is a bit like the distortion of a reflection you see in a spoon, or as in this drawing by Escher:
Or you can visualize the same concept by viewing an Earth globe:
A hyperboloid is a reasonable approximation to the surface you could use. The console images would only wrap around the front section (so every console is in view) and the shape would appear to rotate as you turned your head, a bit like a rotating teacup viewed from the teapot (although we're using a single eggcup...)
So I name this technique the rotating eggcup transform...
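A minimal sketch of the idea, under assumptions of my own: each console sits at some azimuth on a ring around the user, and instead of a fixed-radius sphere we place it on a hyperbolic-bowl surface whose nearest point tracks the gaze direction. The console straight ahead sits closest (so it subtends more of the view and appears enlarged) and the ones at the sides sit further away (compressed). All names, parameters, and the exact surface here are illustrative, not the actual implementation.

```python
import math

def eggcup_radius(console_azimuth, gaze_azimuth, r_near=1.0, r_far=3.0, k=2.0):
    """Distance from the head to the surface in this console's direction.

    theta is the angular offset from the gaze direction; cosh gives a
    hyperbolic bowl that bottoms out at r_near straight ahead and flares
    out toward r_far at the sides -- the cross-section of an eggcup.
    """
    # Wrap the offset angle into [-pi, pi] so the bowl is symmetric.
    theta = math.atan2(math.sin(console_azimuth - gaze_azimuth),
                       math.cos(console_azimuth - gaze_azimuth))
    r = r_near * math.cosh(k * theta)
    return min(r, r_far)  # clamp so rear consoles stay at a finite distance

# As the gaze azimuth changes, the bowl's nearest point follows it: the
# surface appears to rotate with the head, like the teacup seen from the
# teapot. With the gaze at 0 degrees:
for az_deg in (0, 45, 90, 180):
    print(az_deg, round(eggcup_radius(math.radians(az_deg), 0.0), 2))
```

A renderer would use this radius when positioning each console quad each frame; because only the distance changes (not the azimuth), every console stays in view, just at varying magnification.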