I wonder: how does SoundID interact with an OEM's audio processing on Android?
OK, here's the use case that matches mine...
I have a Samsung smartphone that bundles a "Dolby Atmos" feature.
That feature has a "Movie" mode, which makes traditional stereo panning more suitable for headphone listening by applying headphone virtualization.
Now the main question is: which audio processing is applied first?
If Dolby Atmos is applied last, then SoundID would not correct the frequency-response changes introduced by Dolby Atmos.
My guess is that SoundID processes the audio first, then passes it to Dolby Atmos.
1 comment
Probably depends on what content you're listening to. A properly created and mastered Dolby Atmos track uses metadata in the track file to render the voices within a virtual space. So I would assume that gets read with the other track data first?
Then SoundID would EQ over that... otherwise you would notice deficiencies in the virtual render.
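One way to reason about the ordering worry: if you model each stage as a simple per-band gain in dB (a big simplification of what SoundID and the Atmos virtualizer actually do, and all the numbers below are made up for illustration), the gains just add, so the order of SoundID vs. Atmos doesn't change the final response. What matters is that SoundID's correction profile was measured without Atmos active, so Atmos's coloration is left uncorrected either way:

```python
import numpy as np

# Toy model: per-band gains in dB over three frequency bands.
# All values are hypothetical, chosen only to illustrate the arithmetic.
bands = ["100 Hz", "1 kHz", "10 kHz"]
headphone = np.array([+3.0, 0.0, -2.0])  # headphone's deviation from flat
soundid = -headphone                     # SoundID correction: inverse of the headphone measurement
atmos = np.array([+1.5, 0.0, +2.0])      # coloration added by the Atmos "Movie" virtualizer

# dB gains add, so for this simplified model the processing order commutes:
soundid_first = soundid + atmos + headphone  # SoundID -> Atmos -> headphone
atmos_first = atmos + soundid + headphone    # Atmos -> SoundID -> headphone
assert np.allclose(soundid_first, atmos_first)

# Either way, the residual deviation equals exactly the Atmos coloration:
# SoundID cannot correct a stage it never measured.
print(dict(zip(bands, soundid_first.tolist())))
```

This only holds for linear, channel-identical gains; a real virtualizer mixes channels with HRTF-like filters, so the ordering could still matter audibly in ways this sketch doesn't capture.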