Researchers at the University of Bath have made a breakthrough that could render both pixels and image resolution obsolete. Read on to see why they believe that vectors will lead to new and improved standards for visual fidelity.
Our current high-definition world is built around the pixel. This is true of our resolution standards, such as SD (720 × 480), HD (1920 × 1080) and now Ultra HD (3840 × 2160). As multimedia productions including photography and filmmaking have gone digital, our visual fidelity has become locked to a single resolution. That is, once an image has been encoded at a lower resolution, it’s stuck there unless a higher-resolution source can be found.
This is true for 2K-sourced digital video, a 480i DVD or an internet-friendly JPEG. In each case, any attempt to upscale, whether on the fly by a video processor or with cutting-edge software, introduces errors so glaring that a child could spot them.
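A minimal sketch of why naive upscaling looks blocky: nearest-neighbour interpolation, the simplest upscaling method, can only repeat existing pixels and never recover lost detail. The tiny 2 × 2 "image" below is hypothetical sample data, not anything from the researchers' work.

```python
def upscale_nearest(image, factor):
    """Upscale a 2-D grid of pixel values by an integer factor,
    duplicating each source pixel into a factor x factor block."""
    out = []
    for row in image:
        # stretch the row horizontally by repeating each pixel
        scaled_row = [px for px in row for _ in range(factor)]
        # then stretch vertically by repeating the whole row
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

original = [[0, 255],
            [255, 0]]          # a tiny 2x2 checkerboard

big = upscale_nearest(original, 2)
# Each source pixel becomes a hard-edged 2x2 block; no new detail appears.
```

Smarter filters (bilinear, bicubic) blur those edges instead of duplicating them, but they are still guessing at detail that the bitmap never contained.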
This simple but devastating problem with upsampling pixels is the reason computer text is typically stored using vector-based math rather than per-pixel bitmap information. While a bitmap is just a grid of pixels, whose count sets the resolution and whose RGB values carry the color information, vectors express images through a series of mathematical equations that capture the contours of an image.
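The distinction can be sketched in a few lines. A bitmap must store one value per pixel at a fixed grid size; a vector description stores only the shape's parameters and can be rasterised onto any grid on demand. The circle below is a hypothetical example, not the codec's actual representation.

```python
import math

# A resolution-free vector description: a circle in unit coordinates.
circle = {"cx": 0.5, "cy": 0.5, "r": 0.4}

def rasterise(shape, size):
    """Sample the vector description onto a size x size pixel grid."""
    grid = []
    for y in range(size):
        row = []
        for x in range(size):
            # centre of this pixel in unit coordinates
            u, v = (x + 0.5) / size, (y + 0.5) / size
            inside = math.hypot(u - shape["cx"], v - shape["cy"]) <= shape["r"]
            row.append(1 if inside else 0)
        grid.append(row)
    return grid

small = rasterise(circle, 8)     # an 8x8 rendering
large = rasterise(circle, 64)    # a 64x64 rendering from the same description
```

The same three numbers yield a clean rendering at either size; a bitmap would need a separate 8 × 8 and 64 × 64 copy, and only the stored sizes would be clean.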
This distinction between per-pixel bitmaps and geometry-derived vector graphics is a leading cause of the visual limitations found in modern videogames. Having to store every loaded texture at the best practical resolution (balancing visual fidelity against data size) occupies RAM and VRAM to a crippling degree, made even more devastating by the time spent removing detail in order to keep memory usage in check.
Storing images as vectors ensures that fidelity is never lost. That is, a 1080p photograph encoded into a vector-based format can be reproduced at a whole range of resolutions, as the math scales down with the display resolution, yet it is always ready to display at full 1080p.
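Why is nothing lost? Scaling a vector shape is just arithmetic on its control points, so the operation is reversible in a way that pixel resampling never is. A toy sketch, using a hypothetical triangle as the geometry:

```python
def scale_path(points, factor):
    """Scale a list of (x, y) control points by a factor.
    This is plain arithmetic on the description, not resampling."""
    return [(x * factor, y * factor) for (x, y) in points]

triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

quarter = scale_path(triangle, 0.25)   # shrink for a small display...
back    = scale_path(quarter, 4.0)     # ...and scale back up

# back == triangle: round-tripping through a smaller "size" is exact,
# which is impossible once the shape has been baked into a pixel grid.
```

Downscale a bitmap by 4x and three quarters of its samples are discarded for good; downscale the vector description and every contour survives intact.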
Until now, using vectors to capture photorealistic images has been difficult beyond geometry-friendly shapes: the color detail between the shapes is lost. A research breakthrough by Philip Willis and John Patterson of the University of Bath in England has apparently overcome this limitation.
Their new codec is called Vectorized Streaming Video (VSV). It currently bridges two areas of pixel-derived resolution: the capture/input device and the display/output device. That bridge hardly seems like it will eliminate pixels and resolution, since they’re still the alpha and omega of the image. Like many breakthroughs, however, this codec could lead to other derivative and associated innovations.
Replace the pixel-based image sensor on the camera and the pixel-based screen on the display with vector-based equivalents, and suddenly both pixels and resolution are no longer part of the equation. The computation needed to evaluate the vectors’ equations would also be reduced. This kind of playback could scale movies to tremendous heights without introducing the noise or artifacts associated with upscaled images.
Naturally, a vector-based display pipeline couldn’t add details that aren’t present in the original source, but it would be far superior in terms of correctly scaling the existing detail.
For now, the researchers are seeking commercial partners. The codec seems to have the potential to reduce the bandwidth that bitmap-based video codecs and images currently consume. Whether or not vectors will revolutionize how we display and handle images remains to be seen, especially given the five-year timetable that has been bandied about. Nevertheless, the possibilities posed by this breakthrough suggest that the time may come when we move beyond pixels and resolution to new, as-yet-undefined standards.