What You Need to Know About the DJI Mavic 2 Pro's HQ Mode

In this post we're going to talk about the DJI Mavic 2 Pro, and specifically its new HQ mode. What I want to do is explain exactly what it is and what DJI are doing with the data and the camera in this mode.

The new Mavic 2 Pro from DJI has a 20-megapixel sensor, 5544 pixels wide by 3694 pixels high. It's a 3:2 sensor, and it is pretty much the same size and design as you would find in any one-inch-sensor camera, such as the Phantom 4 Pro or the Sony RX100. Compare that to the 1/2.3-inch sensor used on other aircraft, such as the original Mavic Pro and the DJI Mavic 2 Zoom (you can see it at the bottom of the image), and that smaller sensor looks tiny next to the one-inch, 20-megapixel sensor in the DJI Mavic 2 Pro.

As you can see, the one-inch sensor is substantially larger than the 1/2.3-inch one.

Before we move on, the one thing to understand is that whilst this 20-megapixel sensor is all lovely and fantastic for stills, it does present some issues for video. The reason is that 4K resolution is actually only about 8 megapixels: a 4K frame is 3840 × 2160 pixels, and if we lay that against the sensor, it covers only a patch in the middle of the full sensor area.
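To put rough numbers on that, here's a quick back-of-the-envelope calculation in Python, using the resolutions quoted above:

```python
# Pixel counts for the full sensor versus a 4K UHD frame.
sensor_w, sensor_h = 5544, 3694   # Mavic 2 Pro stills resolution (3:2)
uhd_w, uhd_h = 3840, 2160         # 4K Ultra HD frame (16:9)

sensor_mp = sensor_w * sensor_h / 1e6
uhd_mp = uhd_w * uhd_h / 1e6

print(f"Sensor: {sensor_mp:.1f} MP")   # ~20.5 MP
print(f"4K UHD: {uhd_mp:.1f} MP")      # ~8.3 MP
print(f"A 4K frame covers only ~{uhd_mp / sensor_mp:.0%} of the sensor's pixels")
```

In other words, when you shoot 4K video, well over half the pixels the sensor captures have to go somewhere.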

So the first thing we need to understand is what the camera actually does. The process of reducing the image is called downscaling: you take the full image from the sensor and pass it into the SoC on board the aircraft (or in the camera), which performs the downscale and then outputs the video at whatever resolution you have chosen. In this case we can see it output as 4K Ultra HD, 3840 × 2160.

There is a huge amount more I could say about what data they actually take from the sensor (because the sensor is 3:2 and the output is 16:9, they don't quite take the full sensor; they crop it top and bottom but take the full width). However, that isn't really relevant here. The thing to understand is that the processor takes the full image from the sensor, downscales it to make it smaller, and outputs it at the resolution you have set.
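As a sketch of that crop-then-downscale step (these numbers are derived from the resolutions quoted above; DJI doesn't publish the exact readout):

```python
# Full-width 16:9 crop from the 3:2 sensor, then downscale to 4K UHD.
sensor_w, sensor_h = 5544, 3694

# Keep the full width; a 16:9 frame at that width is shorter,
# so the remaining rows are trimmed off the top and bottom.
crop_h = sensor_w * 9 // 16        # 3118 rows survive the crop
trimmed = sensor_h - crop_h        # 576 rows discarded

scale = 3840 / sensor_w            # downscale factor the SoC then applies
print(f"16:9 readout: {sensor_w} x {crop_h} ({trimmed} rows trimmed)")
print(f"Downscale to 4K UHD: {scale:.2f}x")   # ~0.69x
```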

There are a number of methods of doing this downscaling: pixel binning, line skipping, and sampling. The output is heavily dependent on the method of downscaling chosen and on the capability of the SoC. For instance, pixel binning, which is done between the sensor and the SoC, is the easiest to do but produces the worst results, while sampling is the hardest to do but produces the best results.

If we take something like the Panasonic GH4, it does an absolutely fantastic job of taking the full-sized image and dropping it down to 4K resolution, because it is doing sampling. However, the processor on both of these drones isn't powerful enough to do that, so they use the other methods. All of these methods have drawbacks, but downscaling with sampling produces very few, whereas the other ones do some damage to the image; they are called destructive methods of downscaling. The downside is that with pixel binning and line skipping you can get a loss of detail, aliasing, and various digital artifacts in the image. So whilst taking a full image and pushing it down to a smaller size is the simplest approach, the results you get depend heavily on how you do it.
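To make the difference concrete, here is a minimal NumPy sketch of 2×2 pixel binning versus line skipping. This is an illustration of the general techniques on a toy image, not DJI's actual firmware:

```python
import numpy as np

def pixel_bin_2x2(img):
    """Average each 2x2 block of pixels into one output pixel."""
    h = img.shape[0] // 2 * 2
    w = img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

def line_skip_2x(img):
    """Keep every other row and column, discarding the rest."""
    return img[::2, ::2]

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8)).astype(float)
print(pixel_bin_2x2(frame).shape)   # (4, 4)
print(line_skip_2x(frame).shape)    # (4, 4)
```

Both halve the resolution without a proper resampling filter, which is exactly where the detail loss and aliasing described above come from; true sampling would filter and resample the whole image instead.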

So to get around this, DJI introduced the new HQ mode to give you a somewhat better image out of the DJI Mavic 2 Pro. They did this as follows: rather than taking the whole image and downscaling it, the camera now reads only a 4K-sized region from the sensor. It takes just the 3840 × 2160 pixels from the middle of the sensor, and this section is then passed to the SoC. However, it is not downscaled; it is simply processed, put into its 4K container, and output at the same resolution it was captured at. We call this one-to-one pixel output: you take the same size in and put the same size out.
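Assuming the crop is centred, which is what the description above implies, here's where that 4K window would sit on the sensor:

```python
# Position of a centred 3840 x 2160 HQ readout window on the sensor.
sensor_w, sensor_h = 5544, 3694
crop_w, crop_h = 3840, 2160

x0 = (sensor_w - crop_w) // 2   # 852 px in from the left edge
y0 = (sensor_h - crop_h) // 2   # 767 px down from the top edge

# One-to-one readout: every sensor pixel in the window maps
# straight to one output pixel, with no scaling in between.
print(f"HQ window: {crop_w} x {crop_h} at offset ({x0}, {y0})")
```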

The advantage of this is that there is far less onboard processing, with no downscaling at all. The result is much better video quality, because you're not doing the downscaling on the SoC, so there is no scaling-induced aliasing. You should get more detail compared to the destructive methods of downscaling like line skipping and especially pixel binning.

Overall, high-quality sampling of the full image sensor would produce better image quality when done correctly, as on the Panasonic GH5 for instance. However, it is extremely processor-intensive, so on this hardware the centre crop with one-to-one pixel output gives the best result the drone can manage.

There are, however, some downsides to doing this, and the main one is field of view. The DJI Mavic 2 Pro has a standard field of view of 77 degrees, and it keeps that field of view in all standard video modes. FOV is simply how wide an image the drone sees through the lens. In the image below you can see the sensor at the bottom, and through the lens it is able to see an image of that size on the beach, for instance.

The downside of taking a cropped image from the sensor is that it reduces the field of view. If we look at the bottom here, you can see we take a 4K section from the middle of the sensor, and the available field of view for the camera is reduced because the area we're reading from the sensor is smaller in this mode. The drone captures only a 55-degree field of view, so you're actually getting a reduced view of the scene.

Obviously this image is exaggerated; it doesn't lose quite as much as I'm showing here. But it gives you the idea: if you take a cropped section of the sensor, you reduce the amount of the scene it is able to capture. So in full-FOV mode you get, as I said, 77 degrees of field of view at all resolutions. It doesn't matter if you're shooting 720p, 1080p or 4K, because you're taking the full sensor data and downscaling it to the resolution you've chosen. HQ mode only gives you 55 degrees, and the reason is that you're taking only that exact 4K pixel area from the middle of the sensor; because you're taking a smaller slice of the sensor, you get a reduced field of view.
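You can sanity-check those two numbers with a simple thin-lens estimate. The horizontal field of view is 2·atan(w / 2f), where w is the width of the sensor area actually read out, so the tangent of the half-angle scales with the crop. This is a first-order approximation that ignores lens distortion:

```python
import math

def cropped_fov(full_fov_deg, full_width_px, crop_width_px):
    """Estimate the FOV of a centre crop from the full-readout FOV.

    Uses FOV = 2 * atan(w / 2f): the tangent of the half-angle
    scales linearly with the sensor width being read out.
    """
    half = math.radians(full_fov_deg / 2)
    tan_half = math.tan(half) * crop_width_px / full_width_px
    return 2 * math.degrees(math.atan(tan_half))

# Full-FOV mode reads all 5544 px of width at 77 degrees;
# HQ mode reads only the central 3840 px.
print(f"{cropped_fov(77, 5544, 3840):.0f} degrees")   # ~58 degrees
```

The estimate lands around 58 degrees, in the same ballpark as DJI's quoted 55; the gap is plausibly down to distortion correction and the exact active sensor area.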

Overall, HQ mode will offer much improved image quality because you're not doing all of that destructive processing. However, it comes at the expense of a reduced field of view, as well as a little more noise in the shadows, because that noise isn't being processed out in the SoC.

Overall it should give you far fewer processing artifacts, such as over-sharpening and the errors you get from the codec and especially from the downscaling process. One thing to be aware of when using the camera is that HQ mode is available in both H.264 and H.265.

One last thing I wanted to mention: the 4K crop used in HQ mode is still larger than the entire sensor area available on the DJI Mavic Pro and the DJI Mavic 2 Zoom. So whilst you're only taking a smaller centre section of the sensor, that area is still bigger than the whole of those 1/2.3-inch sensors.
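Using the usual nominal sensor dimensions (my figures, not DJI's published specs: about 13.2 × 8.8 mm for a 1-inch type sensor and 6.17 × 4.55 mm for a 1/2.3-inch type), you can check that claim:

```python
# Physical area of the HQ crop versus a whole 1/2.3-inch sensor.
one_inch_w, one_inch_h = 13.2, 8.8   # nominal 1-inch type sensor (mm)
small_w, small_h = 6.17, 4.55        # nominal 1/2.3-inch type sensor (mm)

# Fraction of the 1-inch sensor covered by the 3840 x 2160 HQ window.
crop_w_mm = one_inch_w * 3840 / 5544
crop_h_mm = one_inch_h * 2160 / 3694

print(f"HQ crop area:      {crop_w_mm * crop_h_mm:.0f} mm^2")   # ~47 mm^2
print(f"1/2.3-inch sensor: {small_w * small_h:.0f} mm^2")       # ~28 mm^2
```

Even the cropped window is collecting light over roughly two-thirds more silicon than the smaller drones' entire sensors.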

Finally, I wanted to talk a little about the 10-bit Dlog-M color profile. This was introduced by Hasselblad alongside the Mavic 2 Pro, and it is something those guys have been working on. The first thing to know is that it only works in H.265 (HEVC). The reason is that the H.264 codec doesn't offer the profile needed; they use the H.265 Main 10 profile, which allows them to fit that extra color data in. So whilst you can use it in both HQ and non-HQ mode, you do have to be in HEVC/H.265 to use it. Keep that in mind.

I hope this post about HQ mode on the DJI Mavic 2 Pro was useful for you!