A compositional theory of perceptual representations would explain how the accuracy conditions of a given type of perceptual state depend on the contents of constituent perceptual representations and the way those constituents are structurally related. Such a theory would offer a basic framework for understanding the nature, grounds, and epistemic significance of perception. But an adequate semantics of perceptual representations must accommodate the holistic nature of perception. In particular, perception is replete with context effects, in which the way one perceptually represents one aspect of a scene (including the position, size, orientation, shape, color, motion, or even unity of an object) normally depends on how one represents many other aspects of the scene. Whether existing accounts of the semantics of perception can analyze context effects is at best unclear. Context effects have even been thought to call into question the very feasibility of a systematic semantics of perception. After outlining a compositional semantics for a rudimentary set of percepts, I draw on empirical models from perceptual psychology to show how such a theory must be modified to analyze context effects. Context effects arise from substantive constraints on how perceptual representations can combine and from the different semantic roles that perceptual representations can have. I suggest that context effects are closely tied to the objectivity of perception. They arise from a perceptual grammar that functions to facilitate the composition of reliably accurate representations in an uncertain but structured world.