I looked through the PDF. There are built-in means to perform some of the computations but not others; some steps can be done in the UI, others need SQL. Point by point:

(*) Offset SRTM data to remove gross bias -- that's just subtraction: Transform, Arithmetic.

(*) Calculate a difference grid (SRTM resolution, approx 30 m) at the lidar boundary -- subtract the more detailed image from the less detailed one: use the Join dialog to bring the two images together into a two-channel image, then subtract one channel from the other.

(*) Extend the difference grid beyond the overlap area (into the area where SRTM is the only source) by calculating the mean difference in a 3-cell square neighbourhood (using scipy.ndimage.uniform_filter), then replacing nodata values beyond the overlap with those mean values; this extends the difference values by one cell beyond the overlap -- this cannot be done exactly as written using built-in means. You can extend into missing pixels using TileFillMissingNearest with the desired radius (1), and then run a 3x3 mean over the result using TileFilter, but that mean will already be affected by the copies of the nearest values. Then again, since you are interpolating anyway, and you are only extending twice (in this step and the next), twice nearest might already be as good as two successive 3x3 means, so you might just do TileFillMissingNearest and be done with it. (A scripted version of the mean-based extension is sketched below.)

(*) This process is repeated once more to extend the values two cells (60 m) beyond the overlap -- see above; if you are fine with twice nearest instead of twice mean over 3x3, just do TileFillMissingNearest once with a radius of 2 in the previous step.

(*) Apply Gaussian smoothing (scipy.ndimage.gaussian_filter) with standard deviations of 1.4 and 2 cells, extending the difference values about 400 m beyond the overlap -- sure: Transform, Filter, filter = blur gaussian, specify the radius, etc, or use SQL to do the same.

(*) Weighting values are calculated that smoothly change from 1.0 at the original boundary to 0.0 at the edge of the extended difference -- this is the step I don't completely understand. How do you determine the weight for a pixel? Do you compute the distance from the pixel to the original pixels in the lidar image (the more detailed one)? Isn't that expensive? Or do you retain data from the two extensions into missing pixels in the previous steps? That sure looks better, and yes, it could be done in Manifold, too: you'd fill the image with the pixels extended once with, I guess, 0.66, fill the image with the pixels extended twice with 0.33, then merge the two images together with the first one on top (all of this in SQL), and those are the weights. (See the second sketch below.)

(*) The difference surface is subtracted from SRTM to create an adjusted coarser-resolution DEM that exactly matches the lidar elevations at the edge of the lidar extent -- yep, subtract one more time: Transform, Arithmetic.

In sum, the process can be repeated, but not exactly in the way it is specified. Or, rather, it can be repeated exactly as specified using a combination of SQL and scripts, but that would, of course, take more effort than SQL alone. Without scripts, the main difference is in the step that extends into missing pixels: the built-in means extend by copying the nearest value. That produces visible artifacts if you extend far enough, but since you are only extending by two pixels (unless I misread something), it is unlikely to be an issue.
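If scripting is acceptable, the mean-based extension is easy to reproduce outside the built-in means. Below is a minimal Python sketch, assuming the scipy workflow the PDF refers to; the function name, and representing nodata with a separate boolean validity mask, are my own choices, not anything from the PDF. Each pass fills the nodata cells that touch valid cells with the mean of their valid 3x3 neighbours, so iterations=2 reproduces the two extension passes.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extend_by_mean(diff, valid, iterations=1):
    """Grow a (float) difference grid into nodata cells, one cell per pass,
    filling each new cell with the mean of its valid 3x3 neighbours."""
    diff = diff.copy()
    valid = valid.copy()
    for _ in range(iterations):
        # Masked 3x3 mean: sum of valid neighbours / count of valid neighbours
        data_sum = uniform_filter(np.where(valid, diff, 0.0),
                                  size=3, mode="constant") * 9.0
        count = uniform_filter(valid.astype(float),
                               size=3, mode="constant") * 9.0
        mean = np.divide(data_sum, count,
                         out=np.zeros_like(diff), where=count > 0)
        # Fill only the nodata cells that have at least one valid neighbour
        grow = ~valid & (count > 0)
        diff[grow] = mean[grow]
        valid = valid | grow
    return diff, valid
```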
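And here is a sketch of the remaining steps under the weight interpretation above (1.0 inside the overlap, 0.66 and 0.33 over the two extension rings, 0.0 beyond). The array names and the synthetic inputs are placeholders of mine so the snippet runs on its own; the sigmas are the ones quoted from the PDF, and folding the weights into the final subtraction is my reading of how they are meant to be applied.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

# Placeholder inputs (names and values are mine, only to make the sketch run):
#   srtm     -- the coarse DEM on the working grid
#   overlap  -- boolean mask of the original lidar/SRTM overlap
#   diff_ext -- the difference grid after the two extension passes above
rng = np.random.default_rng(0)
srtm = rng.normal(500.0, 20.0, (200, 200))
overlap = np.zeros((200, 200), dtype=bool)
overlap[:, :100] = True
diff_ext = np.where(binary_dilation(overlap, iterations=2),
                    rng.normal(3.0, 1.0, (200, 200)), 0.0)

# The two rings of cells added by the two extension passes
# (one and two cells beyond the overlap, roughly 30 m and 60 m)
ring1 = binary_dilation(overlap) & ~overlap
ring2 = binary_dilation(overlap, iterations=2) & ~overlap & ~ring1

# Gaussian smoothing with the standard deviations quoted in the PDF
smoothed = gaussian_filter(diff_ext, sigma=1.4)
smoothed = gaussian_filter(smoothed, sigma=2.0)

# Weights: 1.0 inside the overlap, stepping down over the extension rings,
# 0.0 beyond -- the interpretation proposed in the post above
weight = np.zeros_like(diff_ext)
weight[overlap] = 1.0
weight[ring1] = 0.66
weight[ring2] = 0.33

# Final step: subtract the weighted difference from SRTM so the coarse DEM
# meets the lidar elevations at the lidar boundary
adjusted = srtm - weight * smoothed
```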