I don’t think AI-hardware crossovers will be a thing any time soon, because the existing systems are cheap, simple and effective. On a big rave sound system you may find huge subs, special speakers for the kicks and then some more speakers for mids and highs … many people just want it loud, but clarity is a good thing … on a decent sound system there is no need to single out vocals from the mix and send them to different speakers, nothing would be gained …
On the other hand, in an experimental setting this may be interesting to explore - but I find it unlikely that we will see dedicated AI hardware for this. On laptops, visual and sound-exploring artists already create all kinds of mind-blowing things … and stem separation is one of the most discussed and anticipated features at the moment. It’s an interesting time …
It already exists in game engines like Unreal, Unity, Godot or CryEngine, and that is a good example of how games handle it: every sound is a separate object placed in 3D space. Applying 3D to a finished stereo track is impossible; producers need a DAW designed for Dolby Atmos mixing and mastering.
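For what it’s worth, the object-based model those engines use is easy to see in code. Below is a minimal sketch using the Web Audio API’s PannerNode (the file names like vocals.wav are just hypothetical stems, not from any real project): each isolated source gets its own position relative to the listener, which is exactly why a pre-mixed stereo file can’t simply be “made 3D” - you need the separate objects or stems to place.

```typescript
// Minimal sketch of object-based 3D audio, in the spirit of how game engines
// (and Atmos) treat sounds: one positionable object per isolated source.
const ctx = new AudioContext();

async function playAt(url: string, x: number, y: number, z: number) {
  // Fetch and decode one isolated source (e.g. a vocal stem - hypothetical file).
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // The PannerNode places this single source at a point in 3D space
  // relative to the listener; a flattened stereo mix has no such objects.
  const panner = ctx.createPanner();
  panner.panningModel = "HRTF";
  panner.positionX.value = x;
  panner.positionY.value = y;
  panner.positionZ.value = z;

  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Hypothetical stems positioned around the listener:
playAt("vocals.wav", 0, 0, -1);  // directly in front
playAt("kick.wav", 0, -0.5, 0);  // low and centered
playAt("synth.wav", 2, 0, -1);   // off to the right
```

This only works because each stem arrives as its own file; point the same code at a finished stereo master and all you can do is move the whole mix around as one blob.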
Google: “Grateful Dead Wall Of Sound”. Their technician sorta invented the line array, and sent every instrument, plus the vocals, to its own separate stack of speakers.
But stereo just works. At least if you have a decent sound system with low harmonic distortion. A top-of-the-line stereo system is still cheaper than going separate on a crappy system.