Reason for fuzzy/blurry image rendering
gjsa
100 post(s)
#19-Apr-23 03:05

I have imported a thresholded integer-value GeoTIFF with a native pixel resolution of 1' (0.3 m) into Manifold 9.0.179.0.

It displays as shown below.

1. Why are the pixels not shown by default with clear edges?

2. How can I get the edges to be unblurred?

I can recall seeing this issue discussed in an old thread, but I can't locate it despite searching.

danb

2,064 post(s)
#20-Apr-23 23:40

Hi gjsa,

See Adam's comments in this post:

I believe that this is to do with the interpolation used when rendering

Manifold System 9.0.176.3 + 8.0.32.1 (georeference.org)

I am going through a process of LiDAR QA at the moment and frequently miss some subtle manifestations of incorrect swath alignment and intra-swath distance in the DEM. My colleague using QGIS, however, always picks these out, and we have determined that, at least in part, it is because he is looking at the image using nearest neighbour resampling, whereas in M9 I get slightly blurred, bilinear-interpolated imagery.

As such, it was on my to-do list to see where this is at and to submit a feature request to include bicubic and NN (nearest neighbour) as options.


Landsystems Ltd ... Know your land | www.landsystems.co.nz

gjsa
100 post(s)
#21-Apr-23 00:29

Hi - that makes sense. I guess I'm still at a loss to explain why the image is being resampled at all for display purposes.

If I load this image into QGIS or ArcGIS or any other raster-capable GIS package and zoom in to a scale where individual pixels can be distinguished, then I can clearly see those individual pixels.

In Manifold 9, that has never been possible. I'm given the non-choice of viewing interpolated pixels, which, as you explain above, makes visual interpretation painful for no apparent technical reason.

This is how it should display:

Dimitri


7,413 post(s)
#21-Apr-23 07:30

then I can clearly see those individual pixels.

Clearly see individual pixels? Nope. What you're looking at is usually thousands of screen pixels that, by an accepted fiction, represent a single data pixel.

If you zoom in to the point where a black square that supposedly represents one data pixel is one inch (2.54 cm) across on your screen, you're looking at over nine thousand screen pixels: 9,216 to be exact (96 x 96) if your monitor uses a typical 96 dots per inch pixel grid density.

Unless you're looking at the data at native resolution, where each pixel of data is exactly one pixel on screen, you're looking at some sort of fiction. If you zoom out so each pixel on screen corresponds to more than one pixel in the data, you're looking at a fiction, likely some interpolation that involves clipping and averaging. If you zoom in so each pixel on screen corresponds to less than a single data pixel, you're also looking at a fiction; the only question is what kind of fiction/interpolation it is.
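To make the zoomed-in case concrete, here is a minimal sketch in Python (numpy + scipy; purely illustrative, not Manifold's rendering code). It upscales a tiny thresholded raster by a factor of 96, once as sharp nearest-neighbour blocks and once with bilinear smoothing; both are fictions, just different ones:

import numpy as np
from scipy.ndimage import zoom

data = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]], dtype=float)    # tiny 3 x 3 thresholded raster

blocks = zoom(data, 96, order=0)    # nearest neighbour: razor-sharp squares
smooth = zoom(data, 96, order=1)    # bilinear: edges blend across screen pixels

print(blocks.shape)                 # (288, 288): 96 x 96 = 9,216 screen pixels per data pixel
print(np.unique(blocks))            # [0. 1.]  only the original values survive
print(smooth.min(), smooth.max())   # values now vary continuously between 0 and 1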

What kind of fiction is "best" depends on the purpose of the visualization and the zoom level being used. Release 8 used the same fiction when zooming in that Arc and Q use: it painted blocks of on-screen pixels to represent individual pixel values, using, for example, slightly over 9,000 screen pixels to paint a single data pixel when zoomed in so it fills one inch of the screen.

If you think about it, that, of course, is a lie, because that's not what raster data is in the real world. There isn't a razor-sharp geographic boundary every 30 meters on planet Earth that gives different colors when you step over it in the world of Landsat imagery. If you want a visualization that looks more like the real world does when using 30 meter pixels, it's better to use an interpolation. The black and white sample image doesn't show that, but a real-world, full-color remote sensing image would.

That's why 9 goes to the extra effort of doing an interpolation: it gives a more realistic visualization of the raster data that most people in GIS work with, remote-sensed images like drone, aerial, and satellite photos. That also captures the real-world effect in other types of raster data without presenting a fake impression of razor-sharp accuracy.

If you want to zoom far in and edit individual pixels as if you were doing vector editing, sure, a different way of representing pixels, using no interpolation, could be better there. That's also technically easier than interpolating, so it is the way lazy programmers do it, whether or not that's the ultimate objective of the presentation.

But seriously, how many people edit remote sensing images pixel by pixel? Do you think it's as many as 1/10th of 1%? I've done some of that when trying to remove artifacts like unwanted reflectance highlights in satellite photos, and it is extremely difficult to do with any tool, at least if you want the result to look natural. If you're trying for a deep fake (which is what edited images are, just with benign intent) it's even harder because you don't get a natural interpolation of colors relative to adjacent pixels as is typical for most remote sensed images. But still, if you want to edit individual pixels, as I have on rare occasion, it would be nice to have the classic block style available. I've even requested that, as a low priority item.

So why does 9 go to the effort of presenting interpolated pixels? Because at most zoom levels that provides a more natural appearance than fake, razor-sharp edges, and a more natural appearance is what over 99% of users seem to want. It's also more natural to use interpolation at the zoom levels where it clearly gives a better appearance, and then to keep interpolating as you zoom far, far into the image, rather than suddenly stopping the interpolation at some zoom level and switching to sharp-bordered blocks, which would be confusing for most users. Therefore, interpolated views are the default.

As for providing Release 8 style presentation when zoomed far in, that's always been in the plans as a possible option, but given that out of the many thousands of requests in the years 9 has been out, not a single person besides me has ever asked for it, nobody has ever bothered to implement it. That's a good thing, I suppose, because people would rightly be annoyed if frequent requests from many people had a lower priority than a single request from one person, especially one submitted with a note that it was low priority.

As for what type of interpolation is used when interpolation is used, that's a related but different topic. There, also, the default choice varies depending on what the purpose of the interpolation is and the workflow that is being used. For example:

I am going through a process of LiDAR QA at the moment and frequently miss some subtle manifestations of incorrect swath alignment and intra swath distance in the DEM.

LiDAR data is vector data, not raster data. If you are looking at a DEM, the above isn't doing QA directly on the LiDAR data; it is examining a raster created from the LiDAR data. That brings in the interpolation process used to create the raster, and then making inferences about the original vector LiDAR data based on that raster. Using different interpolation methods to create the DEM (9 offers many) provides a wide variety of options for creating a raster that may be better at showing what you're looking for.
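As a rough illustration of that point only (Python/scipy, with made-up stand-in data, not your vendor's actual process): the DEM you inspect already depends on the interpolation used to grid the points, before any display-time resampling enters the picture.

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(5000, 2))                     # stand-in ground returns (x, y)
z = np.sin(xy[:, 0] / 10.0) + 0.05 * rng.standard_normal(5000)   # stand-in elevations

gx, gy = np.mgrid[0:100:1.0, 0:100:1.0]                          # 1 m target grid

dems = {m: griddata(xy, z, (gx, gy), method=m)
        for m in ("nearest", "linear", "cubic")}
# Each entry in 'dems' is a different raster built from the same points; subtle
# artefacts such as swath seams or spikes can look quite different in each.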

But if you want to visually detect anomalies in the original, vector LiDAR data, like incorrect swath alignment, etc., I'd suggest looking directly at the vector data with well-chosen vector symbology.

Another possibility (just a wild guess; I haven't actually tried it), if you must look at the interpolated raster somebody else created rather than the original vector data, is to use various filter transforms, like the edge detection ones, to see if they can highlight such anomalies.
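As a rough idea of what such a filter does, here is a minimal Python/scipy stand-in (not Manifold's built-in filter transforms; 'dem' is assumed to be a 2-D numpy array of the raster): an edge-detection pass exaggerates abrupt steps, so features like swath seams show up as bright lines.

import numpy as np
from scipy.ndimage import sobel

def edge_magnitude(dem):
    """Gradient magnitude via Sobel filters; abrupt steps show up as bright lines."""
    dzdx = sobel(dem, axis=1)   # horizontal derivative
    dzdy = sobel(dem, axis=0)   # vertical derivative
    return np.hypot(dzdx, dzdy)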

danb

2,064 post(s)
#21-Apr-23 22:37

Sometimes, Dimitri, it feels like you really suck all the oxygen out of a perfectly reasonable point. I am not going to debate that LiDAR data is vector data; you are correct. But I use the term collectively and casually to mean las, intensity, DEM, DSM, etc., as these are the source and derivatives that I am familiar with and, rightly or wrongly, collectively refer to as the LiDAR dataset.

In one sense, you are also correct that I should be looking at the las component rather than the DEM, but again, you know nothing of the process with which we have been charged or the specification against which the derivatives are being tested. In fact we look at both the las and the DEM, but many of the issues we are looking for are most clearly visible in the DEM/DSM products. It has been noted during this process that the interpolation method currently used by the Manifold display pipeline is unhelpful to this particular task, end of story.

There is no implied criticism of Manifold 9, which I will always use and enthuse about, as I understand and can make use of what makes it so vastly superior to anything else out there. My only interest is in improving the product. AdamW has already agreed, and as such I have always assumed that it will appear at some point.

As I pointed out in my initial response to gjsa, I have recently uncovered a workflow need and so said that I would send in a feature request. I do send in feature requests and bug reports from time to time, but they can take a lot of time to prepare (depending on the request), so most of the time I don't until the need becomes pressing. The voting machine has always felt like a black box to me, and I have always found it slightly irksome that you have the benefit of being able to see the list and say that no one has voted, while the rest of us just have to take your word for it (it is very disarming). In the case of rendering interpolation, I concede that this may well be the case. It was certainly low on my list, and Adam had already implied that it would appear in due course.

Perhaps here I should be trying to promote the ability to edit linked las files and their metadata in place. This would be enormously valuable for the sort of workflows that you mention, and again a bit of a coup, as I believe Global Mapper/lastools would currently be the go-to tools for this sort of thing, well, GM and other more expensive and specialist las tools.

Anyway, I feel sure that this will elicit a long, barraged response, but I believe that both gjsa's initial post and my response were perfectly reasonable. I will submit a feature request, and personally it is always my hope that Adam sees and responds to these posts, as he tends to provide a clear and constructive response demonstrating a desire to understand more complex needs and how we might like them to work. I appreciate that Adam does this off his own back, but again, what a coup for Manifold to have the lead developer, with his amazing depth of knowledge, available to the user base.


Landsystems Ltd ... Know your land | www.landsystems.co.nz

Dimitri


7,413 post(s)
#22-Apr-23 11:51

Looks like a case of crossed wires.

There's nothing unreasonable about either gjsa's initial post or your response. It's just that the two posts are talking about two different things.

gjsa's post asked a question: "Why are the pixels not shown by default with clear edges?" A short but uneducational answer would be "because that provides a better look in over 99% of use cases." A response like the one I gave, explaining the issues involved and why the default choice is what it is, is a much longer response.

With respect, your post discussed something other than gjsa's question. It reported your organization's workflow for finding anomalies in vector LiDAR data by visually inspecting a DEM derived from the LiDAR data, with the hypothesis that Q worked better for that because your colleague was "looking at the image using nearest neighbour resampling, whereas in M9 I get slightly blurred, bilinear-interpolated imagery."

But that's a different issue from gjsa's question. If nearest neighbor and bicubic interpolation were added to raster windows in addition to bilinear interpolation, then when zoomed far in the pixels would still be shown interpolated, raster-style, with fuzzy edges, and not as vector blocks with sharp edges. They'd just use the interpolation method of your choice to generate those fuzzy edges.

In my post, I tried to make clear when I had stopped answering gjsa's question and was shifting gears to something else, by using the phrase "As for what type of interpolation is used when interpolation is used, that's a related but different topic." That was a mistake, combining two very different discussions in the same post, and for that I apologize. I should have just answered gjsa's question in one post, and then discussed the issue you raised in a separate post.

---

Now, to switch gears explicitly away from gjsa's question, to discuss your comments about adding different interpolation methods for viewing rasters in addition to bilinear interpolation.

In my prior post I didn't discuss your colleague's hypothesis that adding NN and bicubic to bilinear interpolation would help you do what you want. I just suggested three other possibilities that already existed:

1) Use a different interpolation method to create the DEM. There are many to choose from.

2) Look directly at the vector data with well-chosen vector symbology.

3) Use filter transforms possibly to highlight such anomalies.

In no way am I arguing with you about sending in whatever suggestions you want. I have no religious attachment to any interpolation method so if you prefer NN or bicubic to bilinear, that's absolutely no issue for me. I'm just suggesting alternatives which already exist.

Just saying, there are tools and workflows already in the toolkit that are worth a try to see if they can help you do what you want with greater ease.

danb

2,064 post(s)
#22-Apr-23 22:25

Thanks Dimitri,

You probably caught me at a bad time, but I stand by most if not all of the points I made. In terms of:

1) Use a different interpolation method to create the DEM. There are many to choose from.

2) Look directly at the vector data with well-chosen vector symbology.

3) Use filter transforms possibly to highlight such anomalies.

1. The DEM and the details of its creation method are part of the vendor-provided products which we are charged with quality assuring rather than generating. I have tinkered with all of the interpolators, and I agree some would produce a better result than what we have in certain situations. Having said this, the provided DEM is fit for most of its proposed uses and is sufficient to pick up some of the scan and processing errors not readily evident from the point cloud data. It is these fundamental data issues that I seek to identify and have the vendor correct. The DEM has proved to be very useful to this process visually once you tune in to what you are looking for, and my assumption here is that NN would aid this by producing a display clarity similar to that of M8, Q and Arc. My apologies if this assumption is incorrect.

2. This is currently impractical due to the size of the supply blocks, despite having a purpose-built system with an ever-growing array of PCIe SSDs. Each supply block so far is averaging around 35 billion points. While it is incredible that I can even work with this volume of data on a PC in M9, it has to be said that it is somewhat slow and cumbersome. I no longer attempt las libraries for this size of data, so I would be back to single las files or more manageable subsets of them. By contrast, I can fly around the DEM using the panning and zoom tools, so this has provided a far better visual route for spotting certain classes of issue.

3. My M9 QA project has probably around 100 separate queries built to analyse particular aspects of the data as it comes in and report the results back both visually and in tabular form. I am always trying to think of new ways to capture the signal of particular issues from both the source las and its derivatives. In fact, a recent post regarding options for filling hydrological sinks means that I will be implementing and testing a hopefully more robust method for detecting both pits within the DEM and aerosol spikes, using a modified workflow built around the 'prepare watershed' toolset. Wonderful. This is one of the many things I love about Manifold: you can do it, and even my clunky homemade implementation will likely be far more performant than that from Redlands' finest.
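(Not the 'prepare watershed' workflow itself, but as a crude illustration of the pit/spike idea, here is a minimal Python/scipy sketch, where 'dem' is assumed to be a 2-D numpy array of the DEM: flag cells that sit well below or above everything in their 3 x 3 neighbourhood.)

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def pits_and_spikes(dem, tol=1.0):
    """Boolean masks of cells more than 'tol' below (pits) or above (spikes) all 8 neighbours."""
    footprint = np.ones((3, 3), dtype=bool)
    footprint[1, 1] = False                  # compare against the neighbours only
    nmin = minimum_filter(dem, footprint=footprint)
    nmax = maximum_filter(dem, footprint=footprint)
    pits = dem < nmin - tol                  # deeper than every neighbour by more than tol
    spikes = dem > nmax + tol                # higher than every neighbour by more than tol
    return pits, spikes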

Attachments:
Clipboard-1.png


Landsystems Ltd ... Know your land | www.landsystems.co.nz

Dimitri


7,413 post(s)
#23-Apr-23 06:34

that NN would aid this by producing a display clarity similar to that of M8, Q and Arc.

One way that it might be possible to get a look at the difference between bilinear and NN is discussed in the Sub-pixel Reprojection topic.

That topic also has illustrations comparing how different pixel interpolations cause different visual appearance in the reprojected images. With the example image used in that topic, the "nearest" interpolations tend to produce a more pixelated look in some places than other interpolations. The look of the results also depends on the projection used.

Besides providing useful visual samples to see how different interpolations work on aerial imagery (as well as demonstrating why 9 automatically smooths and interpolates pixels on zoomed in views, to provide a better appearance), the topic raises the possibility of getting a different look at the DEM the vendor provides by re-projecting it using forward or inverse nearest neighbor.

It could be that doing such a reprojection, especially into a projection that will exaggerate pixelation effects, could create a visual presentation that makes it easier to visually pick out anomalies. That's a pretty indirect approach, but reprojection of even big images is reasonably fast in 9 so it might be worth a try.
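To make the nearest vs bilinear reprojection comparison concrete, here is a minimal illustrative sketch in Python using rasterio (just a stand-in for doing the same comparison in 9; the file name "vendor_dem.tif" is hypothetical): reproject the DEM twice, once with nearest-neighbour and once with bilinear resampling, and compare the two results side by side.

import numpy as np
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

dst_crs = "EPSG:3857"    # any projection that exaggerates pixelation will do

with rasterio.open("vendor_dem.tif") as src:    # hypothetical file name
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)

    results = {}
    for method in (Resampling.nearest, Resampling.bilinear):
        dst = np.empty((height, width), dtype=src.dtypes[0])
        reproject(
            source=rasterio.band(src, 1),
            destination=dst,
            src_transform=src.transform,
            src_crs=src.crs,
            dst_transform=transform,
            dst_crs=dst_crs,
            resampling=method,
        )
        results[method.name] = dst    # compare 'nearest' vs 'bilinear' side by side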

Two other ideas come to mind based on the sample image you attached to the post.

1) It might be that different style intervals and palettes could reveal anomalies. You can copy the image and paste it, which would generate a second image from the same table of tiles (and thus not increase storage required), and then style that image differently. Stacking such images with partial transparency could provide an overall effect that shows anomalies which a single such image might not.

2) Different hill shading might also reveal anomalies. Again, you could create multiple images with different azimuth, altitude, and z scale for different hill shading effects, and even stack them with partial transparency to get a combined effect.
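To make the hill shading parameters concrete, here is a minimal Python/numpy sketch of the standard slope/aspect hillshade formula (illustrative only, not Manifold's implementation; 'dem' is assumed to be a 2-D numpy array):

import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0, z_factor=1.0):
    """Return a 0..1 hillshade for a 2-D DEM array."""
    az = np.radians(360.0 - azimuth_deg + 90.0)          # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem * z_factor, cellsize)   # sign of dzdy depends on row order
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# e.g. compute hillshade(dem, azimuth_deg=315) and hillshade(dem, azimuth_deg=45),
# then stack them at roughly 50% opacity to catch features aligned with either light angle.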

gjsa
100 post(s)
#16-May-23 07:35

Thanks for the detailed post. It's appreciated and has broadened my perspective. It is also good to clarify when 'pixel' refers to the pixels on a monitor versus the pixel dimensions on the ground sensed by the satellite's sensor. I'm always referring to the latter.

I think this is perhaps a case of where one-size-fits-all isn't going to work. Interpolation does make sense for displaying very high-resolution imagery, where the sensed data is made up of pixels that are much smaller than the objects being imaged, particularly in the case of RGB or multiband imagery.

But the imagery in my OP represents 2.4' (0.8 m) pixels that have been processed/classified into a single band with discrete values. In this case it's a binary classification, but it could just as well have been a 2-, 3- or 10-class random forest classification.

Here, the remotely sensed pixels are similar in size to, or larger than, the objects inside them (trees, shrubs, grasses, etc.), and interpolation makes no sense. The edges of the data pixels and the values of those pixels are clearly defined and need to be represented that way visually.

My eyes and brain are just telling me that it's painful to view the data at the scale at which it needs to be reviewed (appropriate for the pixel size of the satellite sensor) when all those pixels are blurred.

tjhb
10,094 post(s)
#21-Apr-23 04:13

+1 Dan. Exactly agree.

gjsa
100 post(s)
#18-May-23 01:59

Solved with the new feature in Manifold Edge release 9.0.180.2 that allows a choice between Nearest Neighbour and Bilinear resampling in the raster style settings.

danb

2,064 post(s)
#18-May-23 19:54

Yes, as is often the case, this is an amazing turnaround time from engineering, and I love the way it is per layer. Thanks very much for this, and I guess now I have no excuse for missing those tell-tale signs in the LiDAR DEM data.


Landsystems Ltd ... Know your land | www.landsystems.co.nz
