This is a convention for storing musical instrument directivities or recordings obtained at multiple receivers for multiple played notes. The data is stored in the frequency domain, and the convention is based on the (not yet existing) GeneralTFE convention.
- Directivity data: The instrument is encoded by the source, whereas different notes are encoded by the emitter. The data is saved as complex (or real) values in Obj.Data.Real and Obj.Data.Imag at the discrete frequencies specified in Obj.N.
- Example 1 - full spectrum: The directivity of a cymbal could be stored as a discrete single-sided spectrum at equidistant frequencies between 0 Hz and the Nyquist frequency. In this case the played notes would refer to different playing styles of the cymbal (different strengths or hitting locations), and Obj.N would be of size N, because the frequencies are identical for each emitter (played note).
- Example 2 - full spectrum: A recording of a violin (single tones, scales, or complete pieces) could be stored as a discrete single-sided spectrum at equidistant frequencies between 0 Hz and the Nyquist frequency. In this case Obj.N would be of size N, because the frequencies are identical for each emitter (played note). The data can be regarded as a raw format from which directivities can be computed.
- Example 3 - harmonic spectrum: The directivity of a violin could be stored by means of the energy of the fundamental frequency and the N-1 overtones of each played note. In this case Obj.N would be of size NE, because the frequencies differ for each emitter (played note).
- Example 4 - fractional octave spectrum: The directivity of a violin could alternatively be stored as energies in N fractional octave bands. In this case Obj.N would be of size N, because the frequencies are identical for each emitter (played note). This format could be used most easily by current room acoustical simulation algorithms, but it is not recommended because this simplified representation loses part of the original information.
- Example 5 - moving instrument: The influence of the musician on the directivity could be investigated by repeated recordings of the same note/scale/piece for different positions of the musician and/or the instrument. The position can be coded in the metadata. In all other cases, the musician and the instrument should ideally not be moved during the recording session(s).
- Metadata: The complete description of the provided data unfortunately requires more metadata than other SOFA conventions.
- EmitterPosition: Gives the emitting position of each note. In most cases it won't be possible to specify the position of the emitter, and a null matrix should be given.
- EmitterMidiNote: For tonal instruments, this specifies the MIDI note number according to 'The complete MIDI 1.0 detailed specification' (version 96.1, third edition, https://www.midi.org/specifications-old/item/the-midi-1-0-specification), where a note number of 69 refers to A4 (fundamental frequency specified by Obj.TuningFrequency). For atonal instruments, a null vector should be given.
- EmitterDescription: Gives an additional verbal description of each note, which is highly recommended for documenting the data. This might be the musical dynamic (pp., ff., etc.), the string on which the note was played, the playing style (pizzicato, legato, etc.), or the location at which a cymbal was hit.
- GLOBAL:MusicianPosition: Specifies the position of the musician inside the microphone array and relative to the instrument. It is recommended to use the direction of azimuth 0 deg. and elevation 0 deg. as a reference point/virtual audience towards which the musician is oriented.
- SourceView:Definition: Because there is no agreement on how the source view is defined across different instruments, it has to be specified (e.g. 'Viewing direction of the bell' in the case of a trumpet).
- M: Number of measurements. Must be M=1.
- E: Number of emitters, which, for example, correspond to played notes or cymbal hits (see examples above).
- R: Number of receivers, which in this case refers to the number of microphones used for recording the data.
- N: Number of stored frequencies (see examples above).
- Musical Acoustics: The stored data can generally serve research in musical acoustics that is related to the acoustic behaviour of natural sources.
- Room acoustical simulation: The stored data can be used in room acoustical simulation and auralization, and the SOFA convention is intended to establish a common data format for the directivity of natural sources.
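The relations used above (MIDI note number, tuning frequency, and the per-note harmonic frequencies of Example 3) can be sketched in a few lines of Python. The function names are illustrative only and not part of the convention:

```python
def midi_to_frequency(midi_note, tuning_frequency=440.0):
    """Fundamental frequency of a MIDI note; note 69 is A4 at the
    tuning frequency (cf. EmitterMidiNote and TuningFrequency)."""
    return tuning_frequency * 2.0 ** ((midi_note - 69) / 12.0)

def harmonic_frequencies(midi_note, n, tuning_frequency=440.0):
    """Frequencies of the fundamental and the first n-1 overtones of
    one played note (one emitter), i.e. one column of the NE-sized
    frequency matrix Obj.N from Example 3."""
    f0 = midi_to_frequency(midi_note, tuning_frequency)
    return [k * f0 for k in range(1, n + 1)]

# Example 3: the frequencies differ per emitter, so Obj.N is of size NE.
notes = [69, 71, 73]  # e.g. A4, B4, C#5 played on a violin
frequencies_NE = [harmonic_frequencies(note, 10) for note in notes]
```

Each entry of `frequencies_NE` holds the N frequencies of one emitter; for the full-spectrum Examples 1 and 2 a single shared frequency vector of size N would be stored instead.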
This version uses SOFA 1.0, which reflects the AES69-2015 standard.
|GLOBAL:SOFAConventions||MusicalInstrumentDirectivity||rm||attribute||This convention stores directivities of tonal instruments, atonal instruments, and singers/speakers, which requires some additional metadata entries. This convention is based on the (not yet existing) GeneralTFE.|
|GLOBAL:DataType||TF||rm||attribute||We store frequency-dependent data here|
|GLOBAL:License||No license provided, ask the author for permission||m||attribute|
|GLOBAL:RoomType||free field||m||attribute||The room information can be arbitrary|
|GLOBAL:InstrumentType||m||attribute||e.g. 'Violin' or 'human singer'|
|GLOBAL:InstrumentManufacturer||m||attribute||e.g. 'Stradivari, Lady Blunt, 1721'|
|GLOBAL:Musician||m||attribute||e.g. 'Christiane Schmidt', or 'artificial excitation by R2D2'|
|GLOBAL:MusicianPosition||m||attribute||e.g. 'sitting behind the instrument, facing the virtual audience at azimuth 0 deg. and elevation 0 deg.'|
|ListenerPosition||[0 0 0]||m||IC||double|
|ReceiverPosition||[0 0 0]||m||rCI||double|
|SourcePosition||[0 0 1]||m||IC||double||In order to store different directions/positions around the listener, SourcePosition is assumed to vary|
|SourcePosition:Units||degree, degree, metre||m||attribute|
|SourcePosition:Definition||m||attribute||Definition of the SourcePosition, e.g., 'Position of the bell' for a trumpet|
|SourceView||[0 0 1]||m||IC||double||Gives the orientation of the instrument or singer/speaker|
|SourceView:Units||degree, degree, metre||m||attribute|
|SourceView:Definition||m||attribute||Definition of the SourceView, e.g., 'Viewing direction of the bell' for a trumpet|
|SourceUp||[0 0 1]||m||IC||double||Gives the orientation of the instrument or singer/speaker|
|SourceUp:Units||degree, degree, metre||m||attribute|
|SourceUp:Definition||m||attribute||Definition of the SourceUp, e.g., 'Viewing direction of the valves' for a trumpet|
|EmitterPosition||[0 0 0]||m||eCI||double|
|EmitterPosition:Units||degree, degree, metre||m||attribute|
|EmitterMidiNote||cartesian||E||double||Defines the played note, e.g. 69=A4, 70=A#4, etc. (According to 'The complete MIDI 1.0 detailed specification' (version 96.1, third edition), https://www.midi.org/specifications-old/item/the-midi-1-0-specification). Not mandatory, but recommended for tonal instruments.|
|EmitterDescription||cartesian||S, ES||attribute||Verbal description of the playing style, e.g., 'played on A string (pianissimo, pizzicato)'. Not mandatory, but highly recommended.|
|Data.Real||0||m||MREN||double||The real part of the complex spectrum|
|Data.Imag||0||m||MREN||double||The imaginary part of the complex spectrum|
|TuningFrequency||I, E||double||Defines the fundamental frequency of A4 (MIDI note number 69). Not mandatory, but highly recommended for tonal instruments and singers|
|TuningFrequency:Units||hertz||attribute||Unit of values given in TuningFrequency|
|N||0||m||N, NE||double||Frequency values|
|N:Units||hertz||m||attribute||Unit of the values given in N|
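As a usage sketch, the complex spectrum can be reassembled from Data.Real and Data.Imag once they have been read from a file. The toy arrays below are assumptions that merely mirror the dimension letters M, R, E, N of this convention; no particular SOFA API is implied:

```python
import numpy as np

# Assumed toy data with the dimensions of this convention:
# M = 1 measurement, R receivers, E emitters (notes), N frequencies.
M, R, E, N = 1, 4, 2, 8
data_real = np.zeros((M, R, E, N))   # would come from Data.Real
data_imag = np.zeros((M, R, E, N))   # would come from Data.Imag
data_real[0, 0, 0, :] = 1.0          # dummy flat spectrum at receiver 0

# Combine into one complex array and take the magnitude spectrum.
spectrum = data_real + 1j * data_imag   # shape (M, R, E, N)
magnitude = np.abs(spectrum[0])         # drop M, since M must be 1
```

The magnitude (or energy) per receiver, emitter, and frequency is the quantity from which directivity patterns would typically be derived.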