
The Death of Pixels?

Researchers at the University of Bath have made a breakthrough that could potentially make both pixels and image resolution defunct. Read on to see why they believe that vectors will lead to new and improved standards for visual fidelity.

Our current high-definition world is built around the pixel. This is true for resolution standards such as SD (720 × 480), HD (1920 × 1080) and now Ultra HD (3840 × 2160). As multimedia production, including photography and filmmaking, has gone digital, our visual fidelity has become locked to a single resolution. That is, once an image has been encoded at a lower resolution, it’s stuck there unless a higher-resolution source can be found.

This is true whether the source is 2K digital video, a 480i DVD or an internet-friendly JPEG. In each case, any attempt to upscale, whether on the fly by a video processor or with cutting-edge software, introduces errors so glaring that a child can spot them.

This simple but devastating weakness in upsampling pixels by computer is the reason why computer text is typically stored using vector-based math rather than per-pixel bitmap information. While a bitmap is just a grid of pixels whose density determines resolution and whose RGB values carry color, vectors express images through a series of mathematical equations that capture the contours of an image.
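The difference is easy to demonstrate with a toy sketch (illustrative only, not the researchers' codec): a shape stored as an equation can be re-rendered crisply at any grid size, while a bitmap upscaled by pixel duplication gains no new detail.

```python
def render_circle(size, cx=0.5, cy=0.5, r=0.35):
    """Sample the continuous equation (x-cx)^2 + (y-cy)^2 <= r^2 on a
    size x size grid. The description itself has no fixed resolution."""
    return [[1 if ((x + 0.5) / size - cx) ** 2 + ((y + 0.5) / size - cy) ** 2 <= r * r
             else 0
             for x in range(size)]
            for y in range(size)]

def upscale_bitmap(bitmap, factor):
    """Nearest-neighbour upscaling: each pixel is simply duplicated,
    so edges turn into blocks and no detail is recovered."""
    out = []
    for row in bitmap:
        wide = [px for px in row for _ in range(factor)]
        out.extend(wide[:] for _ in range(factor))
    return out

low = render_circle(8)            # "captured" at low resolution
blocky = upscale_bitmap(low, 4)   # 32x32 grid, but only 8x8 worth of detail
crisp = render_circle(32)         # same shape, re-rendered from the math
```

Both `blocky` and `crisp` are 32 × 32 grids of the same circle, but only the re-rendered version resolves the edge accurately; the upscaled bitmap is permanently stuck with its capture resolution.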


This distinction between per-pixel mapping and geometry-derived vector graphics is a leading cause of the visual limitations found in modern videogames. Having to store every loaded texture at the best practical resolution (balancing visual fidelity against data size) occupies RAM and VRAM to a crippling degree, made even more painful by the time spent removing detail in order to optimize memory usage.

Storing images as vectors ensures that fidelity is never lost. That is, a 1080p photograph encoded into a vector-based format can be reproduced across a gamut of resolutions: the math scales to whatever the display requires, yet the full 1080p detail is always available.

Until now, using vectors to capture photorealistic images has been difficult beyond geometry-friendly shapes; the color detail between shapes is lost. A breakthrough in research by Philip Willis and John Patterson of the University of Bath in England has apparently overcome this limitation.

Their new codec is called Vectorized Streaming Video (VSV). It currently bridges two areas of pixel-derived resolution: the capture/input device and the display/output device. That bridge hardly seems like it will eliminate pixels and resolution, since they’re still the alpha and omega of the image. Like many breakthroughs, however, this codec could lead to other derivative and associated innovations.

Replace the pixel-based image sensor on the camera and pixel-based screen on the display with vector-based equivalents, and suddenly both pixels and resolution are no longer part of the equation. Likewise, the amount of computing needed to evaluate the vectors’ equations would be reduced. This kind of playback could scale movies to tremendous sizes without introducing the noise or artifacts associated with upscaled images.

Naturally, a vector-based display pipeline couldn’t add details that aren’t present in the original source, but it would be far superior in terms of correctly scaling the existing detail.

For now, the researchers are seeking commercial partners. The codec seems to have the potential to reduce the bandwidth that bitmap-based video codecs and images currently occupy. Whether or not vectors will revolutionize how we display and handle images remains to be seen, especially given the five-year timetable that has been bandied about. Nevertheless, the possibilities posed by this breakthrough suggest that the time may come when we move beyond pixels and resolution to new, as-yet-undefined standards.

[Information for this post was obtained from ExtremeTech and the University of Bath. The post's banner image comes from this page.]

15 comments

  1. This could have some very interesting ramifications for streaming video, if it reduces bandwidth and improves scalability from very high-quality sources. It will be difficult to market, however. People like to hear resolution numbers in their video specs. High numbers make them feel better about what they’re watching. “Ooh, that’s 1080p at 35.5 Mb/s bit rate! Wow, it must look incredible!!” The vector people will need to come up with something comparable.

    • I think that is an immediate concern as well. I was having a conversation about this yesterday, and without a new metric it is difficult to convey the concept of eliminating resolution. Resolution is oddly specialized in a way that, say, horsepower is not.
      I can see something along the lines of significant figures denoting the degree of precision and complexity of the associated math. Whatever it is should be marketable, so long as “bigger numbers” can be translated as “better.”

      • Apple will buy them, reduce the quality to something below what we get from a public domain DVD, and brand it “The Most Awesome Thing Ever You Need to Buy RIGHT NOW!!!”

        • JM

          Jonathan Ive will invent his own vector using only bézier curves and call it The Unobtrusive Codec.

          • Give Apple about five years AFTER it’s been introduced to market (CD ripping/burning, MP3 players, smart phones, touch screen tablets)… Then we’ll see the new iView(TM) with RetinaCinema(TM) technology. Never before seen by human eyes!!! (disclaimer in small print – except as seen 99% the same, produced several years earlier by the company that actually invented it, without our advertising budget and BS-PR(TM))

            Apple the inventors and revolutionary designers (We made it in white with one button) bring it to you for the low-low price of twice what everyone else sells it for!! You must get it from us because we invented it! Why must you believe this? Because Apple marketing tells you we did it first and that you must have it! Or you’re a miserable loner with no life because you aren’t an apple drone!

            Then a year or two down the line you’ll be having to listen to all those people who aren’t gadget freaks like us, telling you how Apple have re-invented the wheel, no matter what you tell them… ;-)

          • William Henley

            Worse yet, Apple will then seek to patent the technology, even though it has already been patented and that there is prior art, then sue everyone else who is doing it.

  2. JM

    The HEVC H.265 codec to launch in 2014 does 4K at 60fps at 10Mbps.

    Sony’s new XAVC codec and Red Ray’s codec are in the same ballpark.

    If VSV can do 8K at 120fps at 5Mbps in 2017 will we have a blu-ray killer?

  3. You should check out http://www.euclideon.com/ to see what will replace polygon rendering in the near future! fascinating stuff, and even if these guys don’t finish the development, it’s clearly going that way. Unlimited on-screen detail, no worries about texture maps, etc… :-D a 3D modeller could actually build a clay model, paint it, scan it, and it’ll look the same in-game, without having to do the whole ‘multiple polygon models’ thing.

    I’m sure with this vectorized video manipulation, they’ll find a way of numerically defining its advantages. ;-)

  4. William Henley

    Truthfully, I am not at all convinced by this. Vector graphics are fine if your source is CGI-based – i.e., 3D graphics are already vector-based – which is why you can take an old PS1, PS2, Wii or GameCube game, put it in an emulator, and play at HD resolutions (except for Final Fantasy 7, where the pre-rendered backgrounds were stored as bitmapped graphics on the game disc).

    The issue with video, and with the demonstration, is that you are starting with pixelated video, then converting it to vectors. This may work for scaling down, but doesn’t really work for scaling up. Your source is still locked at a specific resolution because of your CCD sensors, and you can’t add information that wasn’t there.

    Converting to vectors MIGHT give you fewer jaggies when upconverting, but any good upconverting software already does stuff like this. When converting to vectors, you are doing edge detection, estimation, etc., which is exactly what you do when you upconvert.

    Worse yet, because this edge detection and estimation isn’t perfect, the resulting video when you scale down is actually not as good as simply downsampling pixel-based footage. I have been looking at this technology over the years, and have not been impressed with what I have seen. It is why no one is using it – several groups have worked with this idea before, and all seem to get similar results.

    Unless someone can come up with a completely new technology for capturing video (something other than CCDs), I don’t see vector based video ever taking off – as Brian said in the article, your beginning and end (alpha and omega) are still based in pixels, and all you have done is add an additional conversion step in the middle.

    What is really bad, though, is the “demonstration” in the video. It honestly looks like someone just took video into Adobe Premiere and ran it through a “Sketch” or “Glowing Edges” filter, or whatever it’s called now. I’ve been producing the exact same effect for 15 years in music videos I’ve produced in Premiere. The effect looks so familiar to me that I question whether the video is actually a working version of their software, or whether they just threw it together to try to get a grant for their project.

    Once again, though, Vector based imaging works great with CG stuff.

  5. EM

    Typically, computer text is stored using neither vector-based math nor per-pixel bitmap information. Usually each character is represented by a single byte or a small number of bytes representing the character’s place in a character-code table; for example, a capital A usually has codepoint 65, and a lowercase a usually has codepoint 97. This is a very compact storage system, and it depends on matching those codepoints against a computer font in order to render the text appropriately. Now, the fonts nowadays are most commonly vector-based (though bitmaps have also been used); perhaps that’s what you had in mind.
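    For instance, the mapping is easy to check in Python (the codepoints here match the ASCII/Unicode values given above):

```python
# Text is stored as codepoints, not as pictures of glyphs;
# a vector (or bitmap) font is only consulted at render time.
codepoints = [ord(ch) for ch in "Aa"]   # capital A -> 65, lowercase a -> 97
raw = "Aa".encode("utf-8")              # the two bytes actually stored: b'Aa'
```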

    • William Henley

      Eh, depends on the font format as well. I know this from working a few years in advertising – many fonts (especially TrueType fonts) start pixelating if they get too big.

      I am trying to remember the different types of fonts, but I think it is PostScript fonts that are vector-based.

      I think OpenType fonts can be either.

      Anyways, chasing a rabbit. The point I was going for is that quite a few fonts are actually bitmapped.

      Now, if you are working in layers in Photoshop, then you could say the graphics are bitmapped and the text is vector-based. Maybe that is what the article is referring to.