Darkness behind closed eyelids still produces moving streets, familiar faces and sharply detailed places. The visual cortex fires almost as strongly as it does during waking perception, even though the retina contributes minimal input. In rapid eye movement sleep, internally generated signals rise through the thalamus and reach the same cortical maps that usually process light from the outside world.
Those scenes are assembled by the predictive coding machinery that the brain uses during waking life to guess what comes next. Instead of correcting its models against fresh photons, that machinery now loops its own output back as input, letting neural networks in the occipital and temporal lobes replay and recombine stored representations. The hippocampus and amygdala inject memory traces and emotional salience, so each hallucinated street corner or voice is anchored in prior experience, not random noise.
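The looping idea is easy to see in a toy computation. The sketch below is a minimal illustration, not a model of real neural circuitry; every name, dimension and update rule in it is an assumption. It runs the same predict, compare, correct cycle twice: once with an external signal driving the prediction error, standing in for waking input, and once with the model's own last prediction fed back as if it were input, standing in for the dreaming loop.

```python
# Illustrative sketch only: a one-layer predictive coder whose error signal
# is driven by external input while "awake" and by its own prediction while
# "dreaming". All weights, sizes and dynamics are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

n = 8                                    # size of the toy "cortical" state
W = rng.normal(scale=0.3, size=(n, n))   # hypothetical generative weights
state = rng.normal(size=n)               # internal estimate of the scene
lr = 0.1                                 # update rate on prediction error

def step(state, sensory):
    """One predictive-coding update: predict, compare, correct."""
    prediction = np.tanh(W @ state)      # top-down guess at the input
    error = sensory - prediction         # mismatch with what arrived
    state = state + lr * error           # nudge the estimate toward input
    return state, prediction

# Waking: prediction error is computed against an external signal.
for t in range(50):
    external = np.sin(0.2 * t + np.arange(n))  # stand-in for retinal drive
    state, prediction = step(state, external)

# Dreaming: the retina is silent, so the model's own output is looped back
# as input; the machinery keeps running, but the content is internal.
for t in range(50):
    state, prediction = step(state, prediction)

print("final self-generated 'scene':", np.round(prediction, 2))
```

The point of the toy is structural: nothing in the update rule changes between the two loops, only the source of the signal being compared against, which is why internally generated content can engage the same machinery as perception.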
Because the same population codes, synaptic weights and connectivity patterns are engaged, the brain’s internal simulation registers with the same perceptual status as externally driven input. Multisensory association areas still integrate vision, sound and bodily signals, while consciousness tracks the virtual world rather than the sleeping body. The result is a fully rendered scene that feels authoritative, even though it is powered by prediction rather than incoming data.