Talk:SimpleHeadphoneIR
Type of data
- Unmerged: When measuring headphones, we have two headphone channels (= two transmitters, T1 and T2) and two mics placed in the ears (= two receivers, R1 and R2). First measurement: T1 --> R1, R2; second measurement: T2 --> R1, R2. The IRs T1 --> R1 and T2 --> R2 are the interesting ones and are usually processed further. But we also obtain the uninteresting IRs T1 --> R2 and T2 --> R1. They represent the cross-talk, which we actually do not use, but we store it! And we could use SOFA for storing it; this could be covered by the conventions GeneralFIR.
- Merged: For any further processing, we are usually interested in just two IRs: #1: T1 --> R1 and #2: T2 --> R2. Thus, the number of receivers (here: 2) defines the number of IRs and transmitters, with a strict one-to-one correspondence between transmitters and receivers. I think that this strict one-to-one correspondence could be the main property of HeadphoneIR (see the sketch after this list).
- Raw: I think that's the same as "merged", isn't it?
- Equalization: I found this term in our previous discussions; please comment on it.
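To make the unmerged/merged distinction concrete, here is a minimal NumPy sketch. Mapping the unmerged data onto the GeneralFIR dimensions [M R N] is my reading of the above, not a settled convention, and the array contents are placeholders:

    import numpy as np

    N = 256  # IR length in samples (arbitrary for this sketch)
    # Unmerged: all four paths, indexed [transmitter, receiver, sample].
    # In GeneralFIR terms this could be Data.IR with dimensions [M=2, R=2, N]:
    # each measurement M fires one transmitter while both receivers record.
    unmerged = np.zeros((2, 2, N))
    # unmerged[0, 1] is T1 --> R2 and unmerged[1, 0] is T2 --> R1 (the cross-talk).

    # Merged: keep only the one-to-one paths T1 --> R1 and T2 --> R2,
    # i.e., the diagonal of the transmitter/receiver matrix.
    merged = np.stack([unmerged[0, 0], unmerged[1, 1]])  # shape (2, N): one IR per receiver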
Single subject vs. multiple subjects
At the moment, all SOFA files are for a single subject, i.e., one subject --> one file. For HpIRs, it makes sense to have a file containing data from several subjects, i.e., many subjects --> one file. What do you think? How would you like to deal with that issue?
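One possible way to accommodate multiple subjects, sketched below, would be to stack all subjects' measurements along the M dimension and restore the assignment with a per-measurement lookup table. Note that subject_id is purely illustrative; no such SOFA variable is defined:

    import numpy as np

    n_subjects, n_reps, N = 5, 3, 256
    # Stack every subject's repetitions along the measurement dimension M,
    # so Data.IR keeps its [M, R, N] shape with M = n_subjects * n_reps.
    ir = np.zeros((n_subjects * n_reps, 2, N))  # placeholder data
    # Hypothetical per-measurement table mapping each row of Data.IR to a subject.
    subject_id = np.repeat(np.arange(n_subjects), n_reps)  # shape (M,)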
Measurement repetition
Usually, when a measurement is repeated, something in the measurement setup changes. For example, for HRTF measurements, we change the direction of the source and repeat the measurement. In the SOFA file, this is noted as a different entry in SourcePosition.
Now, for HpIRs, we have multiple measurements that are just repetitions: the subject puts the HP on, we measure, the subject takes the HP off, puts it on again, we re-measure, and so on. What changes is the time and the counter of the performed measurement. Neither the subject nor the Source, Emitter, Listener, or Receiver attributes change.
My question: do you also have this issue? How do you deal with it? How would you like to consider it in the SOFA conventions?
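A minimal sketch of how such repetitions could be stored, assuming we simply use the M dimension for them; the variable MeasurementTime and its unit are my illustration, not defined anywhere in SOFA:

    import numpy as np
    from netCDF4 import Dataset

    n_reps, N = 10, 256
    with Dataset("hpir_repetitions.sofa", "w", format="NETCDF4") as f:
        for name, size in [("M", n_reps), ("R", 2), ("N", N), ("C", 3), ("I", 1)]:
            f.createDimension(name, size)
        ir = f.createVariable("Data.IR", "f8", ("M", "R", "N"))
        ir[:] = np.zeros((n_reps, 2, N))  # placeholder IR data
        # The geometry does not change between repetitions, so the rows are identical.
        pos = f.createVariable("SourcePosition", "f8", ("M", "C"))
        pos[:] = np.tile([0.0, 0.0, 0.0], (n_reps, 1))
        # Hypothetical per-repetition timestamp capturing "what changes is the time".
        t = f.createVariable("MeasurementTime", "f8", ("M",))
        t[:] = np.arange(n_reps) * 60.0  # e.g., one re-seating per minute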
Other aspects discussed so far
- SourcePosition:
- Suggestion 1: a fictive sound-source position (example given: 0.5 m in front of the listener). The definition of this virtual position is still unclear; in particular, the choice of the distance needs to be defined.
- Suggestion 2: the actual position of the headphones, usually congruent with ListenerPosition
- EmitterPosition:
- Suggestion 1: the actual position of the headphone drivers, which, according to the SOFA rules, must be given relative to the SourcePosition (see the sketch below).
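A sketch of how Suggestion 2 for SourcePosition could combine with Suggestion 1 for EmitterPosition, in Cartesian coordinates (metres, SOFA's x-forward/y-left/z-up orientation); the 0.09 m half-head width is only illustrative:

    import numpy as np

    listener_position = np.array([[0.0, 0.0, 0.0]])  # [I, C]
    source_position = listener_position.copy()       # headphones congruent with the listener
    # Drivers (Emitters) offset relative to the Source, roughly at the ear positions.
    emitter_position = np.array([[0.0,  0.09, 0.0],  # left driver
                                 [0.0, -0.09, 0.0]]) # right driver; [E, C]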
The naming of the headphone-related attributes needs to be clarified. The attributes proposed so far are:
- Producer
- Model
- FormFactor
- EarcupDesign
- Technology
- FrequencyResponse
- Sensitivity
- t.b.n. for Stimulus Type (???)
These attributes should be defined, and their ambiguity should be reduced. For example, what does "FrequencyResponse" tell us when we store the IRs numerically in the same file anyway? And when we provide the same information twice, which one overrides the other?
Further, the structure of the attribute names needs to be defined as well. Candidates (with XXX being one of the attributes listed above):
- Headphones.XXX does not work because SOFA does not allow nested structures.
- GLOBAL_HeadphonesXXX is allowed, but then the difference to SourceXXX must be clearly defined.
- In SOFA, the headphones are represented by the object Source. Thus, we could use GLOBAL_SourceXXX (see the sketch below).
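A sketch of the GLOBAL_SourceXXX candidate, assuming the usual SOFA mapping in which a GLOBAL_XXX attribute of the API corresponds to a plain global attribute XXX in the netCDF file; none of these attribute names is ratified, and the values are invented placeholders that merely illustrate the flat naming:

    from netCDF4 import Dataset

    with Dataset("hpir_attrs.sofa", "w", format="NETCDF4") as f:
        f.setncattr("SourceProducer", "ExampleCo")      # the "Producer" candidate
        f.setncattr("SourceModel", "HP-1000")
        f.setncattr("SourceFormFactor", "circumaural")
        f.setncattr("SourceEarcupDesign", "closed")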