This paper focuses on the gap between discrete, symbolic representations of music (such as common music notation) and continuous, numerical representations (such as audio and control signals). The often-ignored realm in between consists of the levels of modulation and articulation, at which both kinds of representation are needed and information has to be passed between them. The generalized time functions introduced in Desain & Honing (1992) will be shown to be a sound basis for describing the kinds of musical knowledge associated with these levels. Improvements and extensions to the initial formalism are presented, and some problems are stated that may serve as prototypes for specific kinds of interdependency between representations.
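To make the gap concrete, the following is a minimal, hypothetical sketch (not the formalism of Desain & Honing, 1992) in which a note's discrete symbolic parameters are rendered into a continuous control signal by a time function; the names `vibrato`, `depth_cents`, and `rate_hz` are illustrative assumptions.

```python
import math

def vibrato(depth_cents, rate_hz):
    """Build a time function: (time within the note, note duration) -> pitch deviation in cents."""
    def f(t, duration):
        # Fade the modulation in over the first quarter of the note, so the
        # continuous signal depends on the note's symbolic duration.
        fade = min(1.0, 4.0 * t / duration)
        return fade * depth_cents * math.sin(2.0 * math.pi * rate_hz * t)
    return f

# A note in the discrete, symbolic representation ...
note = {"pitch": 60, "onset": 0.0, "duration": 1.0}

# ... rendered into a continuous, numerical control signal by sampling
# the time function at 100 points per second.
mod = vibrato(depth_cents=50.0, rate_hz=6.0)
signal = [mod(i / 100.0, note["duration"]) for i in range(int(100 * note["duration"]))]
```

The sketch only illustrates the two levels of representation; the kinds of interdependency discussed in the paper (e.g., how such modulation interacts with articulation and with the symbolic score) are what the generalized time functions are meant to capture.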