My trip to California last week had already brought me to both Dolby and DTS to experience new home theater product demos. Knowing that I’d be in L.A., however briefly, my friend (and occasional Roundtable contributor) Chris from Big Picture Big Sound clued me in to the existence of a third audio-themed conference that I might want to make time for. After some last-minute schedule adjusting, I was able to squeeze in a presentation on SRS Labs’ new Multi-Directional Audio (or “MDA”) format.
What’s SRS Labs, you may ask? Admittedly, the company doesn’t have the name recognition of a Dolby or DTS. However, it’s a pretty big player in the area of virtual surround sound and volume-adjusting processors. Its technology is built into many HDTVs, A/V receivers, cell phones and more. (One of the company’s slogans is: “You’ve heard us. But you may not have heard of us.”) Some of our readers may recall that I reviewed the SRS Volume Leveling Adaptor last year. Although I wasn’t much impressed with that product’s glitchy performance, I could chalk that up to one bad experience and not necessarily hold it against the entire company. I was more than willing to attend this presentation with an open mind.
Multi-Directional Audio is an “object-based” sound format, much like the new Dolby Atmos theatrical format that I wrote about earlier this week. The basic concept behind this implementation of object-based sound is exactly the same as Atmos, so I’ll direct you back to that article rather than repeat myself here.
That being the case, what’s the difference between MDA and Atmos? Chief Technology Officer Alan Kraemer says he doesn’t necessarily see the two as competing products, beyond the fact that Dolby’s technology is proprietary. SRS wants MDA to be an open standard in which all object-based sound is created, much as current multi-channel soundtracks are mastered in PCM. Once the sound mixers for a movie, TV show, videogame, etc. complete an MDA soundtrack, that MDA master could then be transcoded to any number of distribution formats, including Atmos. MDA, Kraemer says, is “not a codec. It’s a language.” He describes it as codec-agnostic and open for compatibility. By standardizing on one common mastering format, the industry could avoid a needless format war while still fostering competition between companies.
Also, and perhaps of more interest to readers of this web site, SRS wants to make an aggressive push to bring MDA and object-based sound to the home market, whereas Dolby’s Atmos is strictly a theatrical format for the time being. The demonstration I received, which was held in a conference room at the Hilton Universal City hotel, was presented on an 11.1 speaker system that appeared to be consumer-grade gear, centered around an HDTV display. Don’t get me wrong, they were nice speakers (nicer than the ones I own), but the point is that this could feasibly become a home theater product once the details of integrating the processing into an A/V receiver are worked out. (The demo was played from a computer workstation.) Kraemer claims that SRS has been working with several CE manufacturers and other home audio players to make that happen, though he couldn’t reveal specifics or give any timeframes. (This recent SRS press release may offer a hint.)
The centerpiece of the demo was a one-minute short film that featured a helicopter, police sirens and other sound effects that swirled around the room. In fact, here’s the exact video (downgraded to basic stereo for YouTube, of course):
This was played first in 11.1, and then mapped down to 5.1 and 2.0 to demonstrate the format’s scalability and SRS’ virtual surround processing, which attempted to simulate the inactive sound channels.
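That scalability is the whole point of an object-based format: the soundtrack stores each sound’s position as metadata, and the playback system pans it across whatever speakers actually exist. As a rough illustration of the idea (the layouts, names and simple angular-proximity weighting below are my own simplification, not SRS’s actual MDA rendering math):

```python
# Illustrative sketch of object-based rendering: one sound "object"
# carries positional metadata, and the renderer -- not the mixer --
# decides how to spread it across the available speakers.
# The panning law here is a toy stand-in, NOT the real MDA algorithm.

# Speaker layouts as (name, azimuth in degrees) pairs; 0 = front center.
LAYOUTS = {
    "2.0": [("L", -30), ("R", 30)],
    "5.1": [("L", -30), ("R", 30), ("C", 0), ("Ls", -110), ("Rs", 110)],
}

def render_gains(object_azimuth, layout):
    """Distribute one object across a layout, weighting each speaker
    by how close it sits to the object's position."""
    weights = []
    for name, az in LAYOUTS[layout]:
        # Shortest angular distance between object and speaker.
        diff = abs((object_azimuth - az + 180) % 360 - 180)
        weights.append((name, 1.0 / (1.0 + diff)))
    total = sum(w for _, w in weights)
    # Normalize so the object's overall level is preserved.
    return {name: round(w / total, 3) for name, w in weights}

# The same metadata (a siren passing on the left, azimuth -90)
# renders to whichever layout the playback system reports:
print(render_gains(-90, "5.1"))   # mostly the left surround speaker
print(render_gains(-90, "2.0"))   # folds down to favor front left
```

The takeaway is that nothing about the soundtrack itself changes between the 11.1, 5.1 and 2.0 passes of the demo; only the rendering stage does.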
I’ll be honest: I wasn’t wowed by the demo. Part of that may have been the venue. Again, this was a hotel conference room, and over the two previous days I had sampled Dolby Atmos and some impressive DTS products in professional listening rooms. It wasn’t exactly a fair comparison. On the other hand, the MDA format did what object-based sound is supposed to do. Specific sounds in the clip deftly navigated through all channels, and a single soundtrack was adaptable to multiple output configurations. As a proof-of-concept demo, MDA works. Once SRS figures out how to integrate this with a delivery codec (or even multiple competing codecs) and get it into an A/V receiver, this could be a viable consumer product. And the things that Alan had to say about creating an open, codec-agnostic standard made a lot of sense to me. I would much prefer to see that happen than for Dolby, DTS and various other companies to all go head-to-head with proprietary object-based formats.
One question remains unanswered, however: From what source will viewers listen to MDA soundtracks in the home? Alan sounded doubtful that the Blu-ray spec would be revised to incorporate MDA or any other new sound formats. (He seems to be one of those people pessimistic about the future of physical media in general.) So what does that leave us with? Streaming media? Streaming would of course bring with it a whole lot of issues regarding compression and internet bandwidth that still need to be worked out. If that’s where this is headed, I’m not sure how long it will take to bring MDA to the home.
Before I left, Alan had one last, very intriguing demo that showcased some of the fascinating potential for MDA. While playing a recorded clip from a football game, he used a tablet computer with prototype software he called “MDA Director” to manipulate the soundtrack in real time. For example, he was able to isolate the color commentary voiceover and drag it from speaker to speaker around the room using the touch-screen interface, placing it wherever he wanted. If you’d like the announcers to sound like they’re sitting right beside you, that’s where they’ll be. Maybe you prefer them to come from the surround channels, or even overhead? No problem. He did the same with the sounds of cheerleaders and the stadium crowd, and then easily switched languages on-the-fly.
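What makes a tool like this cheap to build on an object-based format is that repositioning a sound is a metadata edit, not a remix. A hypothetical sketch (these structures and names are purely illustrative, not SRS’s actual “MDA Director” API):

```python
# Toy illustration: each object pairs an untouched audio payload with
# positional metadata; "dragging" it just rewrites the metadata, and
# the renderer re-pans the audio on the next frame.

commentary = {
    "stream": "commentary_eng.wav",  # audio payload, never modified
    "azimuth": 0,                    # front center by default
    "elevation": 0,
}

def move_object(obj, azimuth, elevation=0):
    """Simulate a drag on the touch-screen interface."""
    obj["azimuth"] = azimuth
    obj["elevation"] = elevation
    return obj

move_object(commentary, azimuth=90)                # announcers beside you
move_object(commentary, azimuth=0, elevation=90)   # now overhead

# Switching languages on the fly is just as lightweight: swap the
# payload, keep the position.
commentary["stream"] = "commentary_alt.wav"
```

This is also why the crowd, cheerleaders and announcers could each be grabbed independently: they were authored as separate objects rather than baked into channels.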
Obviously, when watching a movie, you probably don’t wish to mess around with the artistic sound design that the filmmakers want you to hear. However, this could be useful for audio commentary tracks or other supplemental features. Or, say you’re watching a concert film, and would like the sound to simulate the experience of sitting in the front row, and then move around to different seats in the venue. You can do that. The possibilities for this are very cool.