MIDI and Computer Music: Extracts from the Grove Music encyclopedia
MIDI [Musical Instrument Digital Interface] By David Burnand
A hardware and software standard established in 1983 for the communication of musical data between devices such as synthesizers, drum machines and computers. It has virtually replaced earlier methods of playing one synthesizer from the keyboard of another, or of synchronizing the performance of one drum machine or sequencer to that of another. The information exchanged may include notes, program changes, volumes and other elements. The basic MIDI protocol provides up to 16 independent channels of information. However, some interfaces can provide multiples of 16, enabling many independent channels of information to flow between devices. MIDI has various applications. It can be used to connect several synthesizers in order to thicken sound; to emulate multi-track recording, with the difference that tracks contain editable data rather than recorded sound; to control program changes and effects automatically or remotely; to edit synthesizer voices or samples, using MIDI connections to computers; to create effects such as delays using different instruments; to synchronize devices such as sequencers, drum machines and video, using MIDI time code; to automate mixing processes, such as fades and mutes; and to transmit notes and other musical events generated from computer algorithms.
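The 16-channel limit mentioned above comes straight from the MIDI byte format: a channel message begins with a status byte whose lower four bits hold the channel number. A minimal sketch in Python (the function names are illustrative, not part of any MIDI library):

    def note_on(channel: int, note: int, velocity: int) -> bytes:
        # Note On: status byte 0x90 plus a 4-bit channel field (hence
        # the 16 channels), then a note number and velocity, each 0-127.
        assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
        return bytes([0x90 | channel, note, velocity])

    def note_off(channel: int, note: int) -> bytes:
        # Note Off: status byte 0x80 plus the channel field.
        return bytes([0x80 | channel, note, 0])

    print(note_on(0, 60, 64).hex())  # middle C on channel 1 -> '903c40'

Three bytes suffice to describe a note-event; this economy is what made it practical to pass performance data between instruments in real time over a serial cable.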
Controller (Fr. contrôleur, dispositif de contrôle; Ger. Steuereinrichtung; It. dispositivo di controllo) By Hugh Davies
In electronic instruments, the device that transmits the player’s actions, via electrical connections, to relevant parts of the instrument’s sound generating and shaping circuitry. Usually the controller is a keyboard (often permitting some level of touch-sensitivity), but some are designed to utilize the techniques of string, wind and percussion players. Other kinds of controllers include ribbon controllers, joysticks, slide or rotary faders, thumbwheels, or computer control devices such as alphanumeric keyboards, mice, light-pens, and touch-sensitive screens. Some instruments are played without direct physical contact [...]; in other instruments or sound installations the electrical circuitry for any of a variety of parameters is affected by the detection of movement, for example by a video camera or the interruption of a light beam. In many cases the controller is independent of the console: connection may be made via a cable or by radio transmission. In early electronic instruments the controller operated only within a single instrument (the equivalent of the remote control aspect of every acoustic keyboard instrument); towards the mid-1960s Voltage control was introduced in the earliest modular synthesizers to vary specific functions of different modules. Around the end of the 1970s some synthesizer manufacturers introduced their own protocols, permitting similar control linkages between different instruments of their own manufacture, and in 1983, with the introduction of MIDI, this was expanded to cover (in principle if not always in practice) all electronic instruments and independent controllers with compatible connection ports. Because acoustic keyboard instruments invariably involve a similar degree of operation by remote control, their keyboards may also be designated as controllers.
Pitch
The particular quality of a sound (e.g. an individual musical note) that fixes its position in the scale. Certain sounds used in music that occupy no particular scale position, such as those produced by cymbals or the side drum, can be said to be of indefinite pitch. Pitch is determined by what the ear judges to be the most fundamental wave-frequency of the sound (even when, as for example with difference tones, this is an aural illusion, not actually present in the physical sound wave). Experimental studies, in which listeners have been tested for their perception and memory of pitch differences among sounds with wave-frequencies known to the experimenter, have shown that marked differences of timbre, loudness and musical context affect pitch, albeit in relatively small degree. But long-term memory, called Absolute pitch, enables some people to identify the pitch of sounds quite apart from their contextual relation to other sounds. [...] Pitch is expressed by combining a frequency value (such as 440 Hz) with a note name. a' = 440 Hz is a pitch, as is g' = 440. If g' is 440, in equal temperament, then a' will be 494 Hz; if a' = 440, g' will be 392 Hz. Frequencies and pitches by themselves are simply natural phenomena; it is only when they are connected to pitch standards that they take on a musical dimension. A pitch standard is a convention of uniform pitch that is understood, prescribed and generally used by musicians at a given time or place. The statement 'Cammerton was at a' = 415', for example, combines the name of a pitch standard (Cammerton or 'chamber pitch') with a note-name (a') and a frequency (415 Hz). Over the last 400 years in Europe, the point that has been considered optimal for pitch standards has varied by about six semitones, depending on time and place. This article discusses the pitch standards that have been used in various places and periods in Europe. The concept of pitch standards and attempts to measure pitch systems in non-Western music are also discussed. [...]
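The figures quoted above follow from the equal-temperament relation f = f_ref x 2^(n/12), where n is the distance in semitones from the reference pitch. A short check in Python (the function name is ours, for illustration only):

    def et_freq(ref_hz: float, semitones: float) -> float:
        # Equal temperament: each semitone multiplies frequency by 2**(1/12).
        return ref_hz * 2 ** (semitones / 12)

    print(round(et_freq(440, -2)))  # g' two semitones below a' = 440 -> 392 Hz
    print(round(et_freq(440, 2)))   # a' two semitones above g' = 440 -> 494 Hz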
Computer music
1. Early efforts
From modest beginnings as a highly specialized area of creative research, for the most part isolated on the margins of post-World War II developments in electronic music, the technology of computer music has advanced to the point where hardly a single aspect of this medium remains untouched by its influence. Analogue devices are progressively being replaced by digital equivalents throughout the entire communications industry. In the case of the music synthesizer and its derivatives, such design changes transformed the industry in less than a decade, the process of conversion being all but complete by the early 1990s. In addition, the increasingly powerful processing capabilities of computers have stimulated the exploration of new horizons in musical composition, from the initial formulation of creative ideas to the production of finished works. The use of the computer as a tool for composition goes back almost to the dawn of commercial computing. In 1955 Lejaren Hiller and Leonard Isaacson investigated the use of mathematical routines to generate music information at the University of Illinois at Champaign-Urbana. Probability routines, inspired by Hiller’s earlier work as a chemical engineer, provided the basis for a series of composing programs that generated music data in the form of an alphanumeric code, subsequently transcribed by hand into a conventional music score. Less than a year later, in Europe, Xenakis started work on his own series of composing programs based on theories of probability known as ‘stochastics’, which similarly generated music data as alphanumeric code. The desire to combine the processes of score generation and acoustic realization led him in due course to develop a fully integrated system that eliminated the intermediate transcription stage, the music data passing directly to a synthesizer for electronic reproduction. The techniques of digital sound synthesis, whereby the processes of audio generation itself are directly consigned to the computer, also date back to the 1950s, most notably to the pioneering work of Max Mathews at the Bell Telephone Laboratories in Murray Hill, New Jersey. In 1957 he began work on a series of experimental programs which, with the support of other researchers, have been developed into an extended generic family of programs known collectively as the musicn series (e.g. music4bf, music5, music11). With the increasing power and accessibility of computers in recent years, such software-based methods of music synthesis have gained significantly in popularity. Modern musicn derivatives such as csound, developed by Barry Vercoe at MIT, are available in versions adapted to a variety of computers from sophisticated workstations to personal computers.
2. Principles of digital audio
In order to appreciate how such synthesis tools can be used for creative purposes, it is necessary to understand some basic principles of digital audio. All methods of digital recording, processing and synthesis are ultimately concerned with the representation of acoustical functions or pressure waves as a regular succession of discrete numerical approximations known as samples. The reproduction of a digital sound file requires the services of a digital-to-analogue converter which sequentially translates these sample values into an equivalent series of voltage steps. These are then amplified and passed to a conventional loudspeaker for acoustic conversion. Such procedures are regularly encountered in a domestic environment whenever one listens to a conventional compact disc or the sound output from a CD-ROM. In the digital recording of acoustic material, signals are captured by a conventional microphone to produce an equivalent voltage function. This in turn is passed to an analogue-to-digital converter which continuously samples its instantaneous value to produce a regular series of numerical approximations. Two factors constrain the fidelity that can be achieved by a digital audio system. The first, the rate at which the individual samples are recorded or generated, determines the absolute range of audio frequencies that can be reproduced. As a simple rule of thumb, the upper frequency limit, known as the Nyquist frequency, is numerically equivalent to half the sampling rate; thus a system recording or reproducing an acoustic function at 20,000 samples per second can achieve a maximum bandwidth of only 10 kHz. In practice, the usable bandwidth is limited to about 90% of the theoretical maximum to allow the smooth application of special filters that ensure that any spurious high-frequency components that may be generated at or above the Nyquist frequency are eliminated. In the early days of computer sound synthesis, technical constraints often severely limited the use of higher-order sampling rates, with the result that the available bandwidths were often inadequate for good-quality music reproduction. Modern multimedia computers are capable of handling sound information at professional audio sampling rates, typically 44,100 or 48,000 samples per second, thus allowing the entire frequency range of the human ear (about 17–20 kHz, depending on age) to be accurately reproduced. Older systems, however, are often restricted to much lower sampling rates, which are generally adequate only for speech applications. The other factor determining fidelity is the numerical accuracy or quantization of the samples themselves. A number of technical expedients have been developed to improve the basic performance of conventional analogue-to-digital and digital-to-analogue converters. However, these devices are constrained by the numerical accuracy of each individual sample, which in turn is determined by the number of binary bits available to code each value as an integer. This requirement to use finite approximations raises the possibility of numerical coding errors which in turn degrade the quality of the resulting sound. 16-bit converters, which allow quantization errors to be restricted to a tiny fraction of 1% (about 15 parts in a million), represent the minimum acceptable standard for good-quality music audio. Converters with a reduced resolution of just eight bits per sample are becoming increasingly rare.
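Both constraints can be made concrete with a small sketch, assuming a 1 kHz sine tone, a 44,100-sample-per-second rate and 16-bit quantization (names and values purely illustrative):

    import math

    RATE = 44_100               # samples per second
    NYQUIST = RATE / 2          # theoretical bandwidth limit: 22,050 Hz
    MAX_ERROR = 1 / 2 ** 16     # worst-case 16-bit rounding error,
                                # about 15 parts in a million of full scale

    def sample_sine(freq_hz: float, seconds: float) -> list[int]:
        # Digitize a sine wave: read its value RATE times per second and
        # round each reading to the nearest signed 16-bit integer.
        n = int(RATE * seconds)
        return [round(32767 * math.sin(2 * math.pi * freq_hz * i / RATE))
                for i in range(n)]

    samples = sample_sine(1000.0, 0.001)     # 1 ms of a 1 kHz tone
    print(len(samples), NYQUIST, MAX_ERROR)  # 44 22050.0 ~1.5e-05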
If a digital synthesis system is to work in real time while generating acoustic functions at 44,100 or 48,000 samples per second (or twice this rate in the case of a stereo system where samples for each sound channel have to be generated separately), all the background calculations necessary to determine each sample value will have to be completed within the tiny fraction of a second that separates one sample from its successor. Although many modern computers can meet such demanding operational criteria even for quite complex synthesis tasks, until the late 1980s such resources were rare, even at an institutional level. As a result many well-established software synthesis programs, including the musicn series and its derivatives, were designed in the first instance to support a non-real-time mode of operation. Here a delay is deliberately built into the synthesis process such that the computer is allowed to calculate all the samples for a complete musical passage over whatever period of time actually proves necessary. The samples are stored in correct sequence on a computer disc, and once this sound file has been computed in its entirety the samples are recovered and sent to the digital-to-analogue converter for conversion and reproduction. In the early days of computer music the delays between the start of the calculation process and final audition of the results were often considerable, forcing composers to take a highly empirical approach to the composition process. As computing power increased, these delays dropped from a matter of hours to minutes or even seconds, thus leading finally to the possibility of live synthesis, where the program is able to calculate the samples fast enough for direct output.
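The arithmetic behind the real-time requirement is easily stated: at 44,100 samples per second, every sample must be finished in well under a twenty-thousandth of a second, as this illustration shows:

    RATE = 44_100
    budget_us = 1_000_000 / RATE   # microseconds available per sample
    print(f"{budget_us:.1f} us")   # ~22.7 us, and half that for stereo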
3. Sound synthesis and processing
Fundamental to most software synthesis systems is the provision of a basic library of functions that may be used as the building-blocks for a particular sequence of synthesis operations. Many of these functions simulate the hardware components of a traditional analogue studio, such as oscillators, filters, modulators and reverberators, although an increasing number of more specialist functions have been developed over the years to model particular instrumental characteristics, such as the excitation of the human voice-box or the vibration of a string. In the case of musicn programs, each integral grouping of these components is identified as an ‘instrument’, broadly analogous to the individual instruments of a traditional orchestra. These ‘instruments’ collectively form an ‘orchestra’, ready to receive performance data from an associated ‘score’. Since these instruments are simulations that are no more than ordered statements of computer code, the opportunities for varying their design and application are extensive. The only real constraints are general ones imposed by the computing environment itself, for example the maximum number of instrumental components that can be accommodated in the memory at any one time, and the overall processing performance of the system. It is possible, for example, to synthesize finely crafted textures by directly specifying the evolution of each spectral component in terms of its frequency, amplitude and duration. Such a strategy involves considerable quantities of score data and the simultaneous use of a number of instruments, one for each component. Alternatively, highly complex instruments can be constructed with the capacity to generate complete musical gestures in response to a simple set of initial score commands. Although software synthesis methods are not nearly as well known to the music community at large as the custom-designed hardware systems that predominate in the commercial sector, their significance should not be underestimated, given the steadily increasing power and availability of the personal computer. With the rapid development of information systems such as the Internet, an increasing number of powerful synthesis programs can be located and downloaded for local use by means of a simple modem and telephone link. Since many of these facilities are being made available at little or no charge, their impact on future activities, professional and amateur, is likely to be considerable. The origins of the all-digital synthesizer, like those of the personal computer, date back to the 1970s and the invention of the microprocessor. The fabrication of a complete computer on a silicon chip led to the development of new types of processors designed for all manner of applications, including digital synthesis and signal processing. This prospect was specially attractive to commercial manufacturers, since the superior performance of custom-designed hardware opened up possibilities of live synthesis from digital circuits which in many instances required less physical space and were ultimately cheaper and more reliable than their analogue counterparts. Developments in this context were further stimulated by the introduction of the Musical Instrument Digital Interface (MIDI) in 1983 as a universal standard for transferring performance information in a digitally coded form between different items of equipment such as music keyboards, synthesizers and audio processors.
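The orchestra/score model can be suggested by a deliberately simplified sketch; what follows is not musicn or csound code, merely an illustration of the idea in Python. An ‘instrument’ is assembled from two building-block functions (an oscillator and a decay envelope) and driven by a list of score events, the whole sound file being computed before playback in the non-real-time manner described in §2:

    import math

    RATE = 44_100

    def oscillator(freq, amp, n):
        # building-block function, analogous to a unit generator
        for i in range(n):
            yield amp * math.sin(2 * math.pi * freq * i / RATE)

    def instrument(freq, amp, dur):
        # an 'instrument': an oscillator shaped by a linear decay envelope
        n = int(dur * RATE)
        for i, s in enumerate(oscillator(freq, amp, n)):
            yield s * (1 - i / n)

    score = [(0.0, 0.5, 440.0, 0.8),   # (start, duration, frequency, amplitude)
             (0.5, 0.5, 494.0, 0.8)]

    out = [0.0] * RATE                 # one second of output samples
    for start, dur, freq, amp in score:
        base = int(start * RATE)
        for i, s in enumerate(instrument(freq, amp, dur)):
            out[base + i] += s         # mix each note into the sound file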
It quickly became apparent that major composition and performance possibilities could be opened up by extending MIDI control facilities to personal computers. What has distinguished the commercial MIDI synthesizers from all-software synthesis methods such as those described above is the set of functional characteristics associated with each design. One of the earliest all-digital synthesizers, the Yamaha DX7, which appeared in the same year as MIDI, relies exclusively on the techniques of frequency modulation for its entire repertory of sounds. These techniques are based on research originally carried out by John Chowning at Stanford University in the 1970s using a musicn software synthesis program. The use of a custom-designed processor facilitated the registration of patents that forced other manufacturers to develop rival hardware architectures, each associated with a unique set of synthesis characteristics. Methods employed have ranged from additive synthesis, where composite sounds are assembled from individual frequency components, to phase distortion techniques that seek to modify the spectra of synthesized material during the initial process of generation. The latter shares some features with FM techniques, where one wave-form is used to modulate the functional characteristics of another. The synthesis of sounds from first principles is subject to a number of constraints. Although particularly evident in cases where hardware features limit the choice and use of synthesis methods, such difficulties are also encountered in software-based environments, even those that permit skilled users to write their own synthesis routines from first principles rather than relying on library functions provided with the program. The root of the problem lies in the character of many natural sounds which can prove exceedingly hard to replicate by formulaic means, such as the transient components associated with the attack of an acoustic trumpet or oboe. In the commercial sector, the ability to imitate instrumental sounds is especially important, and impediments to the production of a realistic repertory of voices have inspired a number of manufacturers to pursue an alternative method of synthesis known as sampling. This is essentially a three-stage process of sound capture, optional intermediate processing and re-synthesis, starting with the selection of suitable source sounds that are first digitized and then loaded into a memory bank as segments of numeric audio data. A variety of processing techniques may then be employed to control the processes of regeneration, ranging from the insertion of a simple loop-back facility to allow sounds to be artificially prolonged, to sophisticated facilities that allow multiple access to the data for the purposes of transposition upwards or downwards and the generation of polyphonic textures. Although commercial samplers, like synthesizers, incorporate custom-designed hardware to meet the specifications of individual manufacturers, their general architecture comes very close to that encountered in a conventional computer. Whereas the methods employed in the design of the digital synthesizer clearly developed from earlier work in software synthesis, the progression in the case of sampling techniques has undoubtedly been in the reverse direction. 
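In its simplest two-oscillator form, the FM technique amounts to one sine function modulating the phase of another; a single modulation index then controls the richness of the whole spectrum, which is what made the method so economical in hardware. A sketch with purely illustrative parameter values:

    import math

    RATE = 44_100

    def fm_tone(carrier_hz, modulator_hz, index, dur):
        # y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)): the modulator deviates
        # the carrier's phase; raising the index I adds further sidebands.
        for i in range(int(dur * RATE)):
            t = i / RATE
            yield math.sin(2 * math.pi * carrier_hz * t
                           + index * math.sin(2 * math.pi * modulator_hz * t))

    tone = list(fm_tone(440.0, 220.0, 5.0, 1.0))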
As a result, many software synthesis programs, including the musicn family, now provide sophisticated facilities for the processing of externally generated sound material, and such modes of operation are gaining in popularity. The blurring of a clear distinction between systems that rely on proprietary hardware and those that do not becomes even more evident when consideration is given to the wider spectrum of digital tools that have become available for manipulating and processing sound material of any origin, natural or synthetic. These range from simple editing facilities, which are little more than the digital equivalent of a razor-blade and splicing block, to more complex tools, which enhance the content of sound material by added reverberation, echo or chorus effects, or directly modify its spectral content by means of subtractive techniques such as filtering. The resources available for such applications range from self-contained processing units, which can be manually operated by means of controls on their front panels, to sophisticated computer-based facilities, which make extensive use of interactive computer graphics.
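A one-pole low-pass filter is perhaps the simplest example of such a subtractive tool; the sketch below (with an arbitrarily chosen coefficient) attenuates high-frequency content simply by smoothing the sample stream:

    def one_pole_lowpass(samples, a=0.1):
        # y[n] = y[n-1] + a * (x[n] - y[n-1]); the smaller the
        # coefficient a, the more high-frequency energy is removed.
        y, out = 0.0, []
        for x in samples:
            y += a * (x - y)
            out.append(y)
        return out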
4. Systems applications
As a result of the adoption of the MIDI communications protocol as a means of networking synthesis and processing devices at a control level, many of the techniques described above can be physically integrated as part of a single system. This consolidation has been taken a stage further with the development of matching communication standards for the high-speed transfer of the audio signals themselves in a digital format between different items of equipment. The personal computer is proving increasingly important in this context as a powerful command and control resource at the hub of synthesis networks, in many instances handling both MIDI and audio information simultaneously. The personal computer has proved particularly attractive as a programmable means of controlling the flow of MIDI data between devices, and a variety of software products are now commercially available. One of the simpler modes of operation involves generating MIDI data for a sequence of musical events by means of a keyboard, the computer being programmed to register the pitch, duration and amplitude (a measure of the key velocity) for each note in a data file, and the time at which each event occurs. Reversing this process allows the performance to be reproduced under computer control, using either the original synthesizer voice or an entirely different one. More elaborate sequencing procedures involve the layering of several performance components for a number of synthesizers using MIDI tracks in parallel, and/or direct editing of the MIDI data using graphic editing tools. Significantly, MIDI data is not concerned with detailed specification of the actual sound, merely with those characteristics that describe the articulation of its component elements in terms of note-events. A useful parallel may be drawn with the basic note elements of a musical score, for procedurally it is only a small step to the design of software that can generate traditional score information directly from MIDI data. The functional characteristics of programs specifically designed for the production of high-quality music scores are discussed in §VI below, but it should be noted that most sequencing packages provide at least basic facilities for reproducing MIDI data in common music notation, and in some the visual layout of the score is quite sophisticated. Sequencing software represents only one aspect of the range of computer-based tools that are now available for use with MIDI equipment. These extend from composing tools, which directly generate MIDI performance data for instantaneous performance, to special editing facilities, which temporarily reconfigure the MIDI communication link in an exclusive mode in order to address and directly modify the internal voice-generating algorithms that determine the functional characteristics of a particular synthesizer. Such has been the impact of this universal protocol that most synthesis systems, whether commercial or institutional, as well as software synthesis programs such as csound, make some provision for MIDI control. The progressive merging of hardware and software technologies means that it will soon not be possible to make any useful distinctions between hardware products such as synthesizers and audio processors and the all-purpose computer workstation with the capacity to service every conceivable music application.
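The note-event records such a sequencer keeps might be sketched as follows; the field names and the send() stand-in are ours, not those of any particular product. Each recorded note is replayed as a Note On/Note Off pair of MIDI messages, sorted by time:

    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        time: float      # when the key was struck, in seconds
        pitch: int       # MIDI note number, 0-127
        velocity: int    # key velocity, a measure of amplitude
        duration: float  # how long the key was held

    track = [NoteEvent(0.0, 60, 90, 0.5),
             NoteEvent(0.5, 62, 80, 0.5)]

    def play(track, send):
        # send() stands in for whatever writes bytes to the MIDI port.
        events = []
        for e in track:
            events.append((e.time, bytes([0x90, e.pitch, e.velocity])))
            events.append((e.time + e.duration, bytes([0x80, e.pitch, 0])))
        for t, msg in sorted(events):
            send(t, msg)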
The increasing accessibility of powerful resources for music-making has created opportunities for everyone to explore this medium of expression, though how much music of lasting significance it will produce remains to be seen.
Electro-acoustic music [extract] By Simon Emmerson, Denis Smalley
Music in which electronic technology, now primarily computer-based, is used to access, generate, explore and configure sound materials, and in which loudspeakers are the prime medium of transmission. There are two main genres. Acousmatic music is intended for loudspeaker listening and exists only in recorded form (tape, compact disc, computer storage). In live electronic music the technology is used to generate, transform or trigger sounds (or a combination of these) in the act of performance; this may include generating sound with voices and traditional instruments, electro-acoustic instruments, or other devices and controls linked to computer-based systems. Both genres depend on loudspeaker transmission, and an electro-acoustic work can combine acousmatic and live elements.
Electronic instruments
Instruments that incorporate electronic circuitry as an integral part of the sound-generating system. This article also discusses instruments that are properly classed as ‘electric’ or ‘electroacoustic’. There are three reasons for this. First, historically and technically the development of electronic instruments resulted from experiments, often only partly successful, in the application of electrical technology to the production or amplification of acoustic sound; in many areas electronic instruments have superseded their electric predecessors, and they have also opened up their own, entirely new possibilities for composition and performance. Second, all electric instruments require electronic amplification, so that there is some justification for considering them alongside instruments that are fully electronic. Third, common usage dictates ‘electronic instruments’ rather than ‘electric (or electroacoustic) instruments’ as the generic term for all instruments in which vibrations are amplified and heard as sound through a loudspeaker, whether the sound-generating system is electroacoustic or electronic. The total quantity of electronic instruments built in the 70 years since the first models were manufactured already numbers many millions, and the day is not far off when they will outnumber all other instruments made throughout human history (especially if all the digital watches, pocket calculators, home computers, mobile telephones and electronic games machines that can play melodies or produce other sounds are taken into account). Well over 500 patents for electronic instruments (in some instances several for a single instrument) were granted in Britain, France, Germany and the USA up to 1950 alone; statistics since that date would show a considerable acceleration. Electronic instruments are now used in all forms of contemporary Western music by performers and composers of all tastes and styles. Following the spread of electronic organs in the late 1940s and the 1950s to many parts of the world where electricity supplies were newly installed and often barely adequate, the electric guitar became similarly widespread in the 1960s and 70s. By the beginning of the 1980s the synthesizer was starting to be used in areas such as India and West Africa and to be heard in concerts given by rock musicians visiting China.
Sampler [sound sampler] (Fr. échantillonneur; It. campionatore) By Hugh Davies
An electronic musical instrument which has no sound of its own, but whose sounds are entirely derived from recordings. The term is borrowed from the technique of analysis that forms part of a digital recording process, in which sound waveforms are sampled in minute slices (typically between 40,000 and 50,000 times per second). The earliest such digital samplers were constructed during the 1970s. The term has recently been additionally applied to earlier analogue instruments based on any form of recording mechanism, of which the best-known is the magnetic-tape-based Mellotron; other less well-known analogue sampling instruments date from the 1930s. A digital sampler normally contains the following features for editing sections of stored samples: transposition (sometimes by means of a built-in or external keyboard), looping, reversal, insertion and removal. Since the mid-1980s self-contained ‘black box’ samplers without keyboards have been manufactured, often optionally linked to a microcomputer for ease of editing samples, while during the 1990s, with increased computer memory and storage capacity, this also became possible entirely within microcomputers. From around 1980 a number of digital synthesizers began featuring sampling in addition to or instead of synthesized sounds, sometimes offering users the possibility to create or edit their own sound samples; this trend has become more common in a wide range of synthesizers and other electronic keyboard instruments, to the extent that it is no longer straightforward to distinguish between a sampler and a synthesizer, especially when an external Controller is linked to the sampler via MIDI.
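Transposition in such an instrument amounts to reading the stored sample data at a different rate. A minimal sketch using linear interpolation (the function name is ours): a ratio of 2.0 sounds an octave higher and lasts half as long, 0.5 an octave lower and twice as long:

    def transpose(samples, ratio):
        # Step through the stored samples at `ratio` times the original
        # rate, estimating in-between values by linear interpolation.
        out, pos = [], 0.0
        while pos < len(samples) - 1:
            i = int(pos)
            frac = pos - i
            out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
            pos += ratio
        return out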
Bibliography
On MIDI:
P. Manning: Electronic and Computer Music (Oxford, 1985, 2/1993)
C. Anderton: MIDI for Musicians (New York, 1986)
R. Penfold: Computers and Music (Tonbridge, 1989, 2/1992)
F. Rumsey: MIDI Systems and Control (London, 1990, 2/1994)
J. Rothstein: MIDI: a Comprehensive Introduction (Oxford and Madison, WI, 1992, 2/1995)
G. Haus, ed.: Music Processing (Oxford and Madison, WI, 1993)
P. Lehrman and T. Tully: MIDI for the Professional (New York, 1993)
On computers and music:
M.V. Mathews and others: The Technology of Computer Music (Cambridge, MA, 1969)
J. Reichardt, ed.: Cybernetic Serendipity: the Computer and the Arts (New York, 1969)
H.B. Lincoln, ed.: The Computer and Music (Ithaca, NY, 1970)
J. Watkinson: The Art of Digital Audio (Stoneham, MA, 1989)
H. Schaffrath, ed.: Computer in der Musik: über den Einsatz in Wissenschaft, Komposition und Pädagogik (Stuttgart, 1991)
D.S. Davis: Computer Applications in Music: a Bibliography (Madison, WI, 1988); suppl. 1, vol. i (Madison, WI, 1992)
P. Desain and H. Honing: Music, Mind and Machine (Amsterdam, 1992)
A. Marsden and A. Pople, eds.: Computer Representations and Models in Music (London, 1992)
J.H. Paynter and others, eds.: Companion to Contemporary Musical Thought (London, 1992)
H. Kupper: Computer und Musik: mathematische Grundlagen und technische Möglichkeiten (Mannheim, 1994)
C. Roads and others: The Computer Music Tutorial (Cambridge, MA, 1996)
E. Selfridge-Field, ed.: Beyond MIDI: the Handbook of Musical Codes (Cambridge, MA, 1997)
R.L. Wick: Electronic and Computer Music: an Annotated Bibliography (Westport, CT, 1997)
On Sampler:
H. Davies: ‘A History of Sampling’, Experimental Musical Instruments, v/2 (1989), 17–20; enlarged in Organised Sound, i/1 (1996), 3–11
K.C. Pohlmann: ‘Fundamentals of Digital Audio’, The Compact Disc: a Handbook of Theory and Use (Madison, WI, 1989)
C. Roads: The Computer Music Tutorial (Cambridge, MA, 1996), 22–44, 117–33
M. Russ: Sound Synthesis and Sampling (Oxford, 1996)
On Controllers:
H. Davies: ‘Elektronische instrumenten: Classificatie en mechanismen’, Elektrische Muziek: drie jaar acquisitie van elektrische muziekinstrumenten (The Hague, 1988; Fr. trans., 1990) [exhibition catalogue]; rev. as ‘Electronic Instruments: Classification and Mechanisms’, I Sing the Body Electric, ed. H.-J. Braun (Hofheim, forthcoming), 43–58
J. Pressing: Synthesizer Performance and Real-Time Techniques (Oxford, 1992), 375–89
C. Roads: The Computer Music Tutorial (Cambridge, MA, 1996), 613–58
On Pitch:
E. Regener: Pitch Notation and Equal Temperament: a Formal Study (Berkeley and Los Angeles, 1973)