Talk:SimpleHeadphoneIR
- MG2: This attribute is useless if we consider the IR. But if I want to store raw data, i.e. recordings without processing, we need to specify the stimulus type, e.g. a sine-sweep response. How have you handled this issue in the SimpleFreeFieldHRIR convention? I'm probably missing something here...
- PM2: In SimpleFreeFieldHRIR, we store impulse responses.
- Bbboren (talk) 18:28, 6 August 2014 (CEST): Also, if you're using sine sweeps and want to store the harmonic distortion products along with the impulse response, it would be useful to know the type of signal. But maybe that's not a common enough occurrence to worry about for these purposes.
- PM3: I agree with Bbboren. For storing data other than IRs, I suggest creating another data type which would be able to store frequency-sweep responses. Then we could create conventions that use that data type for headphones (or maybe even for any kind of electro-acoustic device). For SimpleHeadphoneIR, I suggest sticking to IR. Agreed?
- MG3: Agreed. Furthermore, just to have a clear view of the naming philosophy: you are going to call such conventions without the "IR" suffix, correct?
- Qualitative data about microphone/receiver position, e.g. blocked ear canal, open ear canal, at the eardrum
- MG2: Optional (when tracking data are not available).
- Because such information is not clearly defined yet, in SimpleFreeFieldHRIR we use the attribute Comments for storing it. I suggest using it for the headphones as well.
- FB: I think this should be stored in GLOBAL:ReceiverDescription rather than in GLOBAL:Comments, and could provide any information found to be useful (e.g. a simple specification such as the microphone type, or a publication, etc.).
- PM: Agreed with FB. ReceiverDescription seems to be the perfect place. MG: do you agree?
- MG3: Agreed with both of you.
Latest revision as of 23:18, 25 August 2014
General
We actually want to store headphone IRs. Here is a summary of the discussion so far on how measurements with headphones can be represented in SOFA:
- Unmerged: When measuring headphones, we have two headphones (= two transmitters, T1 and T2) and we have two mics placed in the ears (= two receivers, R1 and R2). For the first measurement: T1 --> R1, R2; next measurement: T2 --> R1, R2. So the IRs T1 --> R1 and T2 --> R2 are the interesting ones and are usually processed further. But we also have the uninteresting IRs T1 --> R2 and T2 --> R1. They represent the cross-talk data, which we do not actually use, but we store them! And we could use SOFA for storing them. This could be covered by the GeneralFIR conventions.
- Merged: Now, for any further processing purposes, we are usually interested just in two IRs: #1: T1 --> R1 and #2: T2 --> R2. Thus, the number of receivers (here: 2) defines the number of IRs and transmitters, with a strict one-to-one correspondence between transmitters and receivers. I think that this could be the main property of HeadphoneIR: a strict one-to-one correspondence between transmitters and receivers.
- Raw and compensated IRs (merged): they share the same representation, except for the information used to obtain the compensated signals from the raw ones.
- Raw: data from the recordings.
- Compensated: inverse filtering of the stimulus plus free-field/diffuse-field compensation in order to estimate the impulse response. This is roughly analogous to deriving DTFs from HRTFs.
- Equalization: we can think of storing equalization filters for different techniques. For a specific headphone model, we could have these equalization filters:
- one filter per transmitter-receiver pair (e.g. single-measurement equalization)
- one filter for all repetitions of a specific subject (e.g. mean equalization)
- one filter for all repetitions of all available subjects and, in further development, for groups of similar headphones (e.g. a machine-learning approach)
- Inverted: In most cases, the merged HeadphoneIR data is used to design a headphone filter by inverting the HeadphoneIRs. This filter is then used for equalizing (cancelling out) the transfer function of the headphone during auralization. It might be good to store the inverted HeadphoneIRs because (a) the inversion itself is not trivial and others could use the filters without having to think about the processing, and (b) documenting an inversion method in conjunction with Matlab/C/Java code supports reproducible research. There are two possibilities for dealing with the inverted IRs: (a) storing merged and inverted data in the same SOFA object, and (b) storing them in separate objects. (a) might be difficult to handle because there usually is only one headphone filter although there might be many HeadphoneIRs. Both merged and inverted IRs could be saved in Data.IR, and metadata could be used to tell which IRs are inverted. This, however, might be confusing. (b) can be achieved by matching subject IDs and descriptions across the objects holding merged and inverted data, which can easily be done.
- PM: It seems to me that the "inverted HpIR" is not an HpIR but a filter used for filtering sounds. So I suggest not using SingleHeadphoneIR for storing filters. Instead, I suggest 1) using GeneralFIR for the moment and 2) later defining new conventions which clearly state what is special about those filters.
- MG2: I agree with you. The better solution would be (b), a new convention. Moreover, I feel that this convention is tightly connected with what we called equalization filters for a one-to-one correspondence between transmitters and receivers. Am I correct? Thus, for the future I suggest something like EqSingleHeadphoneFIR.
- PM2: I understand MG2. My suggestion: let's define the headphone conventions first and then discuss the difference between the equalization IRs and the HpIRs.
- MG3: Absolutely agreed.
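The inversion discussed above can be illustrated with a minimal frequency-domain sketch. It uses a simple Tikhonov-style regularization with made-up parameters; it is not the inversion method of any contributor, only an example of why the result (and the documentation of the method) is worth storing:

```python
import numpy as np

def invert_ir(h, n_fft=1024, beta=0.005):
    """Regularized inverse of an impulse response (illustrative sketch only).
    beta bounds the gain of the inverse where |H| is small."""
    H = np.fft.rfft(h, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)
    return np.fft.irfft(H_inv, n_fft)

# Usage with a toy minimum-phase "headphone" IR:
h = np.zeros(64)
h[0], h[1] = 1.0, 0.5
g = invert_ir(h)
# Convolving the IR with its inverse should approximate a unit impulse,
# up to the regularization error.
e = np.convolve(h, g)
```

The choice of beta trades equalization accuracy against boosting frequencies the headphone barely reproduces, which is exactly the kind of design decision worth documenting alongside the stored filters.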
At the moment, the simple case is to have a one-to-one correspondence. We could work on that first, and then think about multiple subjects in a file for measured, inverted, and equalized filters. We would need a clear and consistent definition of HeadphoneIR first. It seems that the one-to-one correspondence is the major property which clearly defines our HeadphoneIR conventions. So, this is how I see HeadphoneIR:
- How is a headphone represented in HeadphoneIR? The headphones as a product are represented by the Source; the individual headphone drivers are represented by the Emitters.
- What is special about HeadphoneIR compared to, say, GeneralFIR? It is the one-to-one correspondence between the Emitters and the Receivers.
For other issues, see the following sections.
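The merged/unmerged distinction and the one-to-one correspondence above can be sketched with a toy array (the axis ordering and sizes are illustrative assumptions, not the SOFA API):

```python
import numpy as np

# Full ("unmerged") set for one put-on: 2 transmitters x 2 receivers x N samples.
N = 256
rng = np.random.default_rng(0)
unmerged = rng.standard_normal((2, 2, N))  # [transmitter, receiver, sample]

# The "merged" HeadphoneIR keeps only the one-to-one pairs T1->R1 and T2->R2;
# the off-diagonal entries (T1->R2, T2->R1) are the cross-talk data.
merged = np.stack([unmerged[0, 0], unmerged[1, 1]])    # [receiver, sample]
crosstalk = np.stack([unmerged[0, 1], unmerged[1, 0]])
```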
Subject/Headphone correspondences
At the moment, all SOFA files are for a single subject, i.e., one subject --> one file. For HpIRs, it makes sense to have a file containing data from several subjects, i.e., many subjects --> one file. What do you think? How would you like to deal with that issue?
FBrinkmann: Note that the above-mentioned case is the most common, but there might also be the case that we want to save HeadphoneIRs for one subject but different types of headphones (for example, when trying to find headphone filters that best match a set of different headphone types).
MG: Fabian’s observation sounds very desirable. Following his suggestion, the focus moves from headphones to subjects/groups of headphones, which is fine for me but conceptually merits a different convention. Let’s call it something like GroupHeadphoneIR.
PM: Agreed: then, we should call these conventions SingleHeadphoneIR.
MG2: further steps… (a rough sketch)
MultipleHeadphoneIR: one headphones --> many subjects
GroupHeadphoneIR: one subject --> many headphones
PM2: one subject --> many headphones is equivalent to one subject --> repeated measurements of any headphones. Thus, I suggest considering this case as measurement repetition. I further propose to change the name to SimpleHeadphoneIR, because Single is ambiguous (it is not clear what single refers to, subject or measurement, and it implies a single pair of headphones, which is definitely not the case). But we have the convention SimpleFreeFieldHRIR, which also always considers a single subject and repeated measurements. Thus, SimpleHeadphoneIR directly corresponds to SimpleFreeFieldHRIR. I already created a short description on the main page. I hope that you agree...
FB: agree
MG3: Now, with string-array support, things become easier. Agreed. Moreover, we can represent "one headphones --> many subjects" with different ListenerShortName and ListenerDescription entries (not GLOBAL:). What do you think about it?
Measurement repetition
Usually, when a measurement is repeated, something in the measurement setup changes. For example, for HRTF measurements, we change the direction of the source and repeat the measurement. In the SOFA file, this is noted as a different entry in SourcePosition.
Now, for HpIR, we have multiple measurements which are just repetitions, i.e., the subject puts the headphones on, we measure, the subject takes the headphones off and puts them on again, we re-measure, and so on. What changes is the time and the counter of the performed measurement. Neither the subject changes, nor the Source, Emitter, Listener, or Receiver attributes.
My question:
- do you also have this issue? How do you deal with that? How would you like to consider that in SOFA conventions?
- FBrinkmann: At the moment we store our IRs as separate wav/mat files with consecutive numbering - which is not an option for SOFA.
- MG: I’ve suggested so far to store the repetitions in Obj.Data.IR = NaN(M,R,N): M repetitions, R channels, and N samples.
- PM: Agreed.
- At the moment, I suggest having a variable called MeasurementTimeCreated in which, for each measurement, the date/time of the corresponding measurement would be stored.
- FBrinkmann: I would suggest saving an identical MeasurementTimeCreated for HeadphoneIRs measured on the same day and subject. This makes the data look somewhat simpler, and I don't see a large advantage in saving exact times for each IR.
- MG: Practically speaking, I agree with Fabian, but we lose generality. One subject could perform several recording sessions on different days and in different years. Subjects' ear shapes change over time, providing different acoustic contributions to the HpIRs.
- PM: MeasurementDate (sorry, it's called Date, not Time, my mistake) saves the date and time as the number of seconds since 1970-01-01 00:00:00. So the actual time will be provided anyway. If you agree, we define MeasurementDate to be optional.
- MG2: Agreed. One can manage timestamps as he/she prefers.
- Further, a variable called MeasurementDescription, in which, for each measurement, a string containing a description of the corresponding measurement would be stored. Note that we'd need string arrays in such a case, a feature currently not implemented in the Matlab API.
- FBrinkmann: I think MeasurementDescription is already covered by the GLOBAL variables suggested in the HeadphoneIR convention version 0.1 - or am I missing something? In most cases it might be sufficient to specify these metadata entries once, because the setup usually does not change. In this case we won't need string arrays. However, having string arrays available (without knowing the amount of work this would take) would make things much more flexible. It would, for example, make it possible to save IRs for one subject but different types of headphones.
- MG: The use of several MeasurementDescription entries leads to a huge amount of redundant data, because the setup usually does not change, as Fabian said. The case of a single subject and different headphones is a different consideration, which prompted my comment in the section on single subject vs. multiple subjects.
- PM: Guys, what I don't understand then is the following: we'll have many measurements in the file, but we do not provide any information (except the MeasurementDate) about what is different between the measurements? How do I distinguish between the measurements then?
- MG2: I’ve got the point! What actually changes between measurements (with the same setup) is the emitter positions. Following this observation, we might move the discussion on Tracking here, to the repetition level. Do you agree with me?
- PM2: Agreed. So, which attributes would you like to have as global attributes, i.e., one entry per file; and which as measurement-specific attributes, i.e., for each measurement a separate entry?
- FB2: I understand your point, but I don't see how we can determine the exact EmitterPosition as it is defined now (with respect to the ListenerPosition = interaural center), and I did not find a solution to this in the Tracking section. I see two practical workarounds: either add some random numbers to the (assumed) emitter position [0 -0.09 0; 0 0.09 0] (not elegant?), or add a further specification in GLOBAL:comment: single subject, single/multiple headphone(s), with/without repositioning.
- PM3: It's not a problem. If you don't know the exact position of the Emitters, just leave it at the default. When repeating the measurement, only the time will change. In the comments, you can still describe if and how the repositioning was done. Agreed?
- FB3: Agree to PM3
- MG3: Agreed
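The agreement above (only the timestamp differs between repetitions with an unchanged setup) can be sketched as follows. The dates and the array layout are invented for illustration; MeasurementDate is assumed to be stored as seconds since 1970-01-01 00:00:00, as discussed:

```python
import calendar

import numpy as np

# One timestamp per measurement M: three put-on repetitions, 5 minutes apart
# (made-up dates, UTC time tuples converted to POSIX seconds).
measurement_date = np.array([
    calendar.timegm((2014, 8, 6, 14, 0, 0)),
    calendar.timegm((2014, 8, 6, 14, 5, 0)),
    calendar.timegm((2014, 8, 6, 14, 10, 0)),
], dtype=float)
```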
Measurement-specific attributes:
- MeasurementDate (measurement has been just repeated)
- EmitterPosition (headphones have been repositioned)
- SourceManufacturer (headphones changed)
- SourceModel (headphones changed)
- SourceURI (headphones changed)
Tracking: tracking the headphone position, once put on, is a challenging issue, but sometimes it is possible to give a label to each repositioning, e.g. simple labels: comfortable, not comfortable. t.b.d.
- FBrinkmann: It might be hard to establish comparability of the suggested labels (comfortable, not comfortable...) across subjects and research institutes (what does comfortable mean, where does not comfortable start?). Moreover, I'm not sure about the relevance of these labels: What does it tell us about the IR if the position was comfortable (I think most headphones available are comfortable to wear)? In my opinion the goodness of the headphone position is best reflected by the repeatability: Good positioning means little variance across repeated measurements of the same subject. I thus tend to dismiss this attribute.
- MG: In principle, I agree. Simple labels have been proposed due to the challenging issue, i.e. tracking headphones position. But of course, the quantitative data could be stored in EmitterPosition.
- MG: The latter comment seems to be misleading after reading the one related to Tracking headphones position. Could you clarify your opinion on that?
- PM: Please define Tracking.
- MG2: I propose a new repetition-level attribute related to Tracking, which should be quantitative and optional, of course: DeltaEmitterPosition: the spatial deviation from the global EmitterPosition.
- PM2: Another level of spatial relation is not defined in SOFA (EmitterPosition is already in the local coordinates of SourcePosition). But that's not a problem: the mechanism you would like to see is already covered by allowing EmitterPosition to vary for each M. Agreed?
- MG3: Agree with PM2
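PM2's mechanism (letting EmitterPosition vary for each measurement M) can be sketched like this. The default driver positions come from the [0 -0.09 0; 0 0.09 0] values mentioned above, while the size of the repositioning offsets is invented for illustration:

```python
import numpy as np

M = 3                                    # number of put-on repetitions
default = np.array([[0.0, -0.09, 0.0],   # left driver
                    [0.0,  0.09, 0.0]])  # right driver
rng = np.random.default_rng(1)
# A couple of millimetres of repositioning between put-ons (illustrative);
# one [E x 3] entry is stored per measurement instead of one global position.
emitter_position = default + rng.normal(scale=0.002, size=(M, 2, 3))
```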
Handling of SOFA-obligatory data
- SourcePosition:
- Suggestion 1: a fictive sound source position (for example, 0.5 m in front of the listener). The definition of the virtual position is still unclear, and the choice of distance would need to be defined.
- Suggestion 2: the actual position of the headphones, usually congruent with ListenerPosition
- FBrinkmann: I prefer 2
- PM: Agreed.
- EmitterPosition:
- Suggestion 1: the actual position of the headphone drivers; according to SOFA rules it must be relative to the SourcePosition.
- FBrinkmann: I agree. I think this is already included in the conventions 0.1 draft -> EmitterPosition = [0 -0.09 0; 0 0.09 0]
- PM: Agreed on your agreement :-).
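The agreement in this sub-thread (actual positions; emitters relative to the Source) can be written out as a small sketch. The plain dict is illustrative, not the SOFA API, and the ±0.09 m offsets are the assumed driver/mic positions used earlier in this discussion:

```python
# Sketch of the agreed handling of the obligatory spatial data:
spatial = {
    "ListenerPosition": [[0.0, 0.0, 0.0]],
    "SourcePosition":   [[0.0, 0.0, 0.0]],  # headphones sit at the listener
    # Drivers, relative to SourcePosition (assumed default offsets):
    "EmitterPosition":  [[0.0, -0.09, 0.0], [0.0, 0.09, 0.0]],
    # In-ear mics, relative to ListenerPosition:
    "ReceiverPosition": [[0.0, -0.09, 0.0], [0.0, 0.09, 0.0]],
}
```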
MG: Let's consider the case of the QU_KEMAR_anechoic_SennheiserHD25_0.5m.mat HRTFs from TU Berlin; SourcePosition can also be seen as a virtual position, e.g. previously measured without headphones 0.5 m away from the listener, i.e. SourcePosition = [0 0 0.5]. Following this observation, plus ListenerPosition = SourcePosition = [0 0 0] (always the case in the current convention), it is hard to describe EmitterPosition relative to SourcePosition. I propose to anchor both ReceiverPosition and EmitterPosition to ListenerPosition and use SourcePosition for the description of fictive sound sources.
Summarizing my observation:
- ListenerPosition = [0 0 0];
- SourcePosition = [0 0 0]; (no dimensions) / SourcePosition = [x y z]; (virtual sources)
- ReceiverPosition = [0 -0.09 0; 0 0.09 0];
- EmitterPosition = [0 -0.09 0; 0 0.09 0];
What do you think about it?
PM: I don't understand the statement "SourcePosition can also be seen as a virtual position, e.g. previously measured without headphones 0.5 m away from the listener". When measuring without headphones, we measure HRTFs, not HpIRs, right? So, at the moment, I agree with Fabian and vote for providing the actual positions of the headphones and the listener. But maybe I just did not understand the concept of "virtual position" - please try to explain. In the meantime, I go for the actual positions...
MG2: What I would like to capture with the virtual position is the following scenario: one can employ HRTFs as stimuli, e.g. QU_KEMAR_anechoic_SennheiserHD25_0.5m.mat. I usually perform the convolution between HpIR and HRIR separately, but my main concern is the loss of generality of the SingleHeadphoneIR convention. Another issue currently coming to my mind, just to be clarified: what happens if the actual SourcePosition comes from a loudspeaker and the listener is wearing headphones? We should consider that a specific setup for SimpleFreeFieldHRIR, shouldn’t we?
PM2: An HRTF measurement while wearing headphones is similar to an HRTF measurement while wearing a hat :-). Thus, these data will have the same format as those from a usual HRTF measurement, and I suggest using SimpleFreeFieldHRIR to store them. Also, ListenerDescription should contain something like "Listener was sitting in the center of the loudspeaker arc, wearing the headphones XXXX and a sombrero.". Agreed?
FB2: Agreed to PM2.
MG3: Perfect, olè! ;) Just to finish this section, uff… the joint usage of HpIR+HRIR (such as QU_KEMAR_anechoic_SennheiserHD25_0.5m.mat) should not be encouraged, do you agree?
Headphone related attributes
The naming of the headphone-related attributes needs to be clarified.
In general, all attributes describing the headphones have the prefix Source. The attributes on which we have agreed so far are:
- SourceManufacturer: manufacturer name, optional.
- SourceModel: model name from manufacturers, optional.
The attributes proposed and currently under discussion are (t.b.d.: to be defined):
- FormFactor: Circumaural, Supra-aural, Earphones, etc. (MG: see my comment at the end of this list).
- FBrinkmann: What do you mean by this?
- PM: I think that this is redundant information as already given by the SourceModel. What do you think?
- EarcupDesign: Closed, Open, etc. (MG: see my comment at the end of this list).
- Technology: Transducer technology, e.g. Dynamic (MG: see my comment at the end of this list)
- FBrinkmann: What do you mean by this?
- Sensitivity: t.b.d
- FBrinkmann: I find the attributes suggested in the draft 0.1 to be very reasonable already, and would suggest to use them and maybe add Sensitivity and Tracking of headphone position
- PM: Please define Sensitivity.
- PM2: I see that in our AES proceedings, the sensitivity was defined as "electroacoustic transducer sensitivity (transfer factor) in mV/Pa". Can you tell me, if this is a value which is supposed to be measured or is this just the value from the specs given by the manufacturer?
MG: The new attributes (w.r.t. version 0.1) are meant to facilitate the relationship between product-design attributes and the acoustic responses of headphones. I feel that headphone characteristics (e.g. form factor, ear-cup design, etc.) have to be considered like the anthropometry for HRIRs. We should follow some standards in order to define these labels. Do you have any strong feelings on this issue?
PM: If you think that you need those attributes: OK, no problem. Please define them, paying particular attention to avoiding redundancy (information already provided by the SourceModel).
MG2: All these attributes are somehow redundant once SourceModel is considered together with an available product sheet. But anyway, we have to answer the following question: why might it be important to perform a meta-analysis on model characteristics?
PM3: With manufacturer and model, a meta-analysis can be performed by using the headphone specs provided by the manufacturer. I'm afraid that including too much information on the headphone specs in the SOFA files (which actually should contain IRs only) will lead to inconsistency. Further, attributes like EarCup or FormFactor must be optional, because we cannot ensure that everybody will have these data or know what the appropriate values would be. By storing the manufacturer and the model in the SOFA file, the link to the corresponding specs will be unique anyway. So providing more information than manufacturer/model does not help. I thus suggest storing specs and IRs in separate files. If you think that it is not clear which specs describe the measured headphones, we could provide a link to the particular specs file. For example, for the specs of the Sennheiser HD 520 II, we could store SourceURI = 'http://mypdfmanuals.com/user-manual,SENNHEISER,HD%2B520+II,3708267.pdf'. This way, we would avoid inconsistencies because the metadata would always correspond to those given by the manufacturer. And, in the case of having specs which differ from the manufacturer specs, one could provide a link to a custom-made specs file. Note that you can still provide more attributes; however, they don't have to be defined in the conventions.
- FB: Agreed to PM3.
- Bbboren (talk) 18:24, 6 August 2014 (CEST): Agreed re:PM3.
- MG3: Smart choice, agreed. I’ve noticed that in the proposed SimpleHeadphoneIR there is no SourceURI at the measurement level, only GLOBAL:SourceURI.
- the stimulus type (e.g. sine sweep): t.b.d
- MG2: This attribute is useless if we consider the IR. But if I want to store raw data, i.e. recordings without processing, we need to specify the stimulus type, e.g. a sine-sweep response. How have you handled this issue in the SimpleFreeFieldHRIR convention? I’m probably missing something here...
- PM2: In SimpleFreeFieldHRIR, we store impulse responses.
- Bbboren (talk) 18:28, 6 August 2014 (CEST) Also, if you're using sine sweeps and want to store the harmonic-distortion products along with the impulse response, it would be useful to know the type of signal. But maybe that's not a common enough occurrence to worry about for these purposes.
- PM3: I agree with Bbboren. For storing data other than IRs, I suggest creating another data type which would be able to store frequency-sweep responses. Then, we could create conventions which use that data type for headphones (or maybe even for any kind of electro-acoustic device). For SimpleHeadphoneIR, I suggest sticking to IRs. Agreed?
- MG3: Agreed. Furthermore, just to have a clear view of the naming philosophy: you are going to call such conventions without the “IR” suffix, correct?
- qualitative data about microphone/receiver position, e.g. blocked ear canal, open ear canal, at the eardrum
- MG2: Optional (when tracking data are not available).
- Because such information is not yet clearly defined, in SimpleFreeFieldHRIR we use the attribute Comments for storing it. I suggest using that attribute for the headphones as well.
- FB: I think this should be stored in GLOBAL:ReceiverDescription rather than in GLOBAL:Comments, and could provide any information found to be useful (e.g. a simple specification such as the microphone type, or a publication, etc.)
- PM: Agreed to FB. ReceiverDescription seems to be the perfect place. MG: do you agree?
- MG3: Agreed with both of you.
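To make the agreed naming concrete, here is a hedged sketch (a plain Python dict standing in for the netCDF global attributes of a SimpleHeadphoneIR file; the Sennheiser values and URL reuse PM3's example above, and the microphone text is an illustrative value):

```python
# Sketch of the headphone-related global attributes agreed above.
# Only manufacturer, model and a link to the specs are stored; detailed
# specs (form factor, ear-cup design, ...) stay in the manufacturer's
# spec sheet that SourceURI points to.
headphone_attributes = {
    "SourceManufacturer": "Sennheiser",   # optional
    "SourceModel": "HD 520 II",           # optional
    "SourceURI": "http://mypdfmanuals.com/user-manual,SENNHEISER,HD%2B520+II,3708267.pdf",
    # Qualitative microphone/receiver position goes into ReceiverDescription
    # as free text, per FB's suggestion (illustrative value):
    "ReceiverDescription": "Blocked ear canal; miniature electret microphones.",
}

# A meta-analysis only needs manufacturer + model to locate the full specs:
spec_key = (headphone_attributes["SourceManufacturer"],
            headphone_attributes["SourceModel"])
print(spec_key)
```

The design choice discussed above is visible here: the SOFA file stays small and consistent, and anything beyond manufacturer/model is delegated to the linked spec sheet.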
The attributes which, as a result of our discussion, won't be considered so far are:
- FrequencyResponse: declared (from the manufacturer) frequency range, e.g. 12-20000 Hz
- FBrinkmann: IMHO not needed
- MG: I agree that this is neither mandatory nor useful.
- PM: Agreed, attribute won't be considered.
Further, the structure of the attribute names should be defined as well. Candidates (with XXX being one of the above-stated attributes):
- Headphones.XXX does not work because SOFA does not allow nested structures.
- GLOBAL_HeadphonesXXX is allowed, but the difference from SourceXXX should then be clearly defined.
- In SOFA, the headphones are represented by the object Source. Thus, we could use GLOBAL_SourceXXX.
- FBrinkmann: I am in favour of this.
- MG: I proposed GLOBAL_HeadphonesXXX in order to characterize this convention. Now, I realize that GLOBAL_SourceXXX is the most adequate for headphones. The SOFA API already has the field GLOBAL:SOFAConventions, so users are able to know that a “.sofa” file belongs to the HeadphoneIR convention and can automatically distinguish between types of Source and their characteristic attributes.
- PM: Agreed. We will use SourceXXX as much as possible.
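MG's point about GLOBAL:SOFAConventions can be sketched as follows (the helper function and the flat-dict view of the attributes are hypothetical; only the attribute names come from the discussion above):

```python
def source_describes_headphones(global_attrs):
    """Hypothetical helper: in a SimpleHeadphoneIR file the Source object
    represents the headphones, so SourceManufacturer/SourceModel etc. can
    be read as headphone metadata; under other conventions, Source means
    something else (e.g. a loudspeaker) and must not be read that way."""
    return global_attrs.get("SOFAConventions") == "SimpleHeadphoneIR"

hp_file = {"SOFAConventions": "SimpleHeadphoneIR", "SourceModel": "HD 25"}
hrir_file = {"SOFAConventions": "SimpleFreeFieldHRIR"}

print(source_describes_headphones(hp_file))    # True
print(source_describes_headphones(hrir_file))  # False
```

This is why reusing SourceXXX (rather than inventing HeadphonesXXX) is safe: the convention name alone disambiguates what the Source attributes describe.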