
10,118 post(s)
#06-Mar-18 07:30

SHA256: d0cb2f330b0a87e2c5914ffbd49f9aa4bf6034634889055ef9665c20de90cf74

SHA256: f76afbb1be83ac9046cfdb6024de202b462008d8c9f6bcca2b01f3213b09c7f4


10,118 post(s)
#06-Mar-18 07:32


There is support for contouring.

New query functions:

TileContourLine - takes a 1-channel image and a height, and returns the contour line for the specified height. If the specified height is lower than the lowest or higher than the highest image pixel value, the function returns a NULL value.

TileContourLines - takes a 1-channel image and a set of heights specified as a table, and returns a table of contour lines with fields for: Geom, Value (height). The set of heights does not have to be sorted and may contain duplicates (ignored). The returned table contains a record for each unique height. Some of the geoms in the returned table might be NULL values.

TileContourArea - takes a 1-channel image, minimum height, maximum height and returns the contour area for the specified height range. If none of the image pixels is in the specified height range, the function returns a NULL value.

TileContourAreas - takes a 1-channel image and a set of heights specified as a table, and returns a table of contour areas with fields for: Geom, ValueMin, ValueMax. The set of heights does not have to be sorted and may contain duplicates (ignored). For N unique heights, the returned table contains N+1 records: area below the smallest height, N-1 areas between subsequent heights and area above the biggest height. Some of the geoms in the returned table might be NULL values. The returned areas cover the entire image.

TileContour...Par - parallel variants for each of the above functions.
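To make the N+1 layout of TileContourAreas concrete, here is a small sketch (plain Python, not a Manifold API; the helper name is made up) of how a set of heights partitions the value range into contour-area bands:

```python
def contour_bands(heights):
    """Return the (min, max) ranges a TileContourAreas-style result covers.

    For N unique heights there are N+1 bands: one open-ended band below
    the smallest height, N-1 bands between consecutive heights, and one
    open-ended band above the biggest height. Duplicates are ignored and
    the input does not have to be sorted.
    """
    hs = sorted(set(heights))
    bounds = [float("-inf")] + hs + [float("inf")]
    return list(zip(bounds[:-1], bounds[1:]))

# Four unique heights (one duplicate) -> five bands covering everything.
print(contour_bands([500, 1000, 1500, 1000, 3500]))
# [(-inf, 500), (500, 1000), (1000, 1500), (1500, 3500), (3500, inf)]
```

The bands are contiguous and open-ended at both extremes, which is why the returned areas cover the entire image.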

New Transform templates:

Contour Lines - allows specifying minimum height, maximum height, height step, and produces contour lines.

Contour Areas - allows specifying minimum height, maximum height, height step, and produces contour areas.

(Note: we will allow setting the height range automatically using a Full Range button. For now, you can quickly determine the height range by switching to the Style pane, setting the style to RGBA channels, selecting the desired channel, then right-clicking it and selecting Full Range. You don't have to apply the style afterward.)


There is support for tracing (converting raster to vector).

New query functions:

TileTraceArea - takes an image, a value to trace and a quantization factor, and returns the area covering pixels with the specified value. The quantization factor is used to extend the area to pixels with values close to the traced value. If the quantization factor is negative or zero, any differences in pixel values are considered significant and the returned area covers pixels with values exactly coinciding with the traced value. If the quantization factor is positive, pixel values are quantized using the following formula: pixelNew = floor(pixel / quant) * quant. Pixels that quantize to the same value are considered to be the same, and the returned area covers them all. (Example: with a quantization factor of 100, pixel values between 0 and 99.9... are considered to be the same, pixel values between 100 and 199.9... are considered to be the same, etc.) If the image contains more than one channel, the trace value has to be an xN value of the appropriate size. The quantization factor remains a single numeric value that is applied to all channels simultaneously.
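As a quick illustration of that quantization rule (plain Python, not a Manifold API):

```python
import math

def quantize(pixel, quant):
    """pixelNew = floor(pixel / quant) * quant; quant <= 0 disables quantization."""
    if quant <= 0:
        return pixel  # only exactly coinciding values count as the same
    return math.floor(pixel / quant) * quant

# With a quantization factor of 100, values in [0, 100) map to 0,
# values in [100, 200) map to 100, and so on.
assert quantize(99.9, 100) == 0
assert quantize(100.0, 100) == 100
assert quantize(155.0, 100) == 100
assert quantize(155.0, 0) == 155.0  # zero factor: exact matching only
```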

TileTraceAreas / TileTraceAreasX2 / TileTraceAreasX3 / TileTraceAreasX4 - take an image and a quantization factor, and return a table of areas covering pixels with different values, with fields for: Geom, Value. TileTraceAreas works with 1-channel images and returns traced values as FLOAT64. TileTraceAreasX2 works with 2-channel images and returns traced values as FLOAT64X2, etc.

(Note: the functions track the number of different pixel values they encounter and stop if that number gets too big - currently, the limit is 20,000. Having too many different pixel values usually means that the image has to be preprocessed or that the function has to be called with a higher value of the quantization factor to make the results usable.)

TileTrace...Par - parallel variants for each of the above functions.
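The distinct-value limit can be sketched like this (plain Python with hypothetical names; the real functions work on image tiles, not Python lists):

```python
from collections import defaultdict

TRACE_LIMIT = 20_000  # current limit cited in the post

def group_pixels(pixels, quant):
    """Group pixel values by quantized value, stopping if the number of
    distinct values exceeds the limit (illustrative sketch only)."""
    groups = defaultdict(list)
    for p in pixels:
        # Positive quant: floor-quantize; otherwise exact values only.
        key = (p // quant) * quant if quant > 0 else p
        groups[key].append(p)
        if len(groups) > TRACE_LIMIT:
            raise ValueError("too many distinct pixel values; "
                             "preprocess or raise the quantization factor")
    return dict(groups)

print(group_pixels([10.0, 95.5, 120.0, 310.0], 100))
# {0.0: [10.0, 95.5], 100.0: [120.0], 300.0: [310.0]}
```

Raising the quantization factor merges more pixel values into each group, which is exactly how you bring an image back under the limit.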

New Transform templates:

Trace Areas - allows specifying quantization factor and produces areas covering pixels with different values.

Things we are going to improve or add:

Currently, both contouring and tracing ignore the 'Restrict to Selection' option. We will extend the transforms to use that option.

Both contouring and tracing create huge geoms with lots of branches. A common post-processing step is decomposing these huge geoms into smaller objects. While this is perfectly doable using transforms that we already have, there are optimizations that we can apply specifically to data produced by contouring and tracing, so we might add an option to the contouring and tracing functions and transforms to decompose the data directly.

End of list.


10,118 post(s)
#06-Mar-18 07:34

Compared to Manifold 8, contouring / tracing in Manifold 9:

(a) can work with much bigger images (where Manifold 8 would first slow down significantly and then just run out of memory and stop, Manifold 9 will work),

(b) is noticeably faster with just one thread and a lot faster with multiple threads,

(c) can be called directly from queries and scripts,

(d) has live preview.

Performance numbers:

Test 1.

Build 8 contour lines on 2,000 x 3,000 pixels.

Manifold 8: 3.107 sec.

Manifold 9: 0.890 sec with 1 thread, 0.242 sec with 8 threads.

Test 2.

Build 12 contour areas on 3,600 x 3,600 pixels.

Manifold 8: 14.365 sec.

Manifold 9: 5.210 sec with 1 thread, 1.068 sec with 8 threads.

Test 3.

Build 10 contour lines on 21,500 x 21,600 pixels.

Manifold 8: 1646.015 sec.

Manifold 9: 90.641 sec with 1 thread, 28.431 sec with 8 threads.

Test 4.

Build 11 contour areas on 21,500 x 21,600 pixels.

Manifold 8: canceled after 12+ hours, would have likely finished in 18 hours.

Manifold 9: 117.942 sec with 1 thread, 44.999 sec with 8 threads.

Test 5.

Trace areas on 10,300 x 7,300 pixels.

Manifold 8: 969.668 sec to trace a single (biggest) area.

Manifold 9: 18.604 sec to trace all 250+ areas with 1 thread, 7.471 sec with 8 threads.


3,269 post(s)
#06-Mar-18 12:02

Thanks, Adam. Will you be posting some sample SQL?

134 post(s)
#06-Mar-18 12:12

I'll second that request. Data for the tests and/or a map file would be great for learning purposes.


7,064 post(s)
#06-Mar-18 14:05

We'll try to get a video out quickly, but working with just about any DEM will be fine and easy. Play around with the Transform templates to create contour areas or contour lines.

One tip if you are downloading elevation data sets from USGS servers in the US, using things like the various ASTER DEMs or GMTED data sets: USGS now publishes these with raster tiles in YX order (... here we go again...), which can mess up re-projection and contour generation (resulting in XY axis flips in both cases).

It is easy to fix. Right after importing the data, make a manual adjustment to the properties as follows:

Suppose you've just imported the terrain elevation DEM for Mount Hood in Oregon. That imports as a drawing and a table called...

ASTGTM2_N45W122_dem (the drawing)

ASTGTM2_N45W122_dem Tiles (the table)

Right-click the ASTGTM2_N45W122_dem Tiles table and choose Properties. Right-click the value for the FieldCoordSystem.Tile property and choose Edit. You'll see it is something like:

EPSG:4326,mfd:{ "LocalOffsetX": -122.00013888888888, "LocalOffsetY": 44.999861111111116, "LocalScaleX": 0.0002777777777777778, "LocalScaleY": 0.0002777777777777778}

Edit that to change it to

EPSG:4326,mfd:{ "LocalOffsetX": -122.00013888888888, "LocalOffsetY": 44.999861111111116, "LocalScaleX": 0.0002777777777777778, "LocalScaleY": 0.0002777777777777778, "Axes": "XY" }

All you are doing is adding

, "Axes": "XY"

to the end of the value. One of the nice things about having all properties accessible is that you can do stuff like that. We will add an autodetect-and-fix to the build so you won't have to do that manually.
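If you have many such tables, the same edit can be scripted. A minimal sketch (plain Python string/JSON handling only, not a Manifold API; it assumes the property value always has the `EPSG:...,mfd:{...}` shape shown above):

```python
import json

def add_axes_xy(value):
    """Append "Axes": "XY" to the mfd:{...} tail of a FieldCoordSystem.Tile value."""
    prefix, sep, tail = value.partition(",mfd:")
    if not sep:
        return value  # not in the expected shape; leave untouched
    params = json.loads(tail)
    params["Axes"] = "XY"  # declare the tiles are in XY (not YX) order
    return prefix + sep + json.dumps(params)

value = ('EPSG:4326,mfd:{ "LocalOffsetX": -122.00013888888888,'
         ' "LocalOffsetY": 44.999861111111116,'
         ' "LocalScaleX": 0.0002777777777777778,'
         ' "LocalScaleY": 0.0002777777777777778}')
fixed = add_axes_xy(value)
assert '"Axes": "XY"' in fixed
```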


10,118 post(s)
#06-Mar-18 14:19

That imports as a drawing and a table [...]

As an image and a table. (Not a big deal, it is clear from context that we are importing an image.)


10,118 post(s)
#06-Mar-18 14:16

Attached is a simple example for contours.

The queries are pretty straightforward.

Creating a single contour:


VALUES (TileContourArea([german_alps], 1000, 2000));

Creating a set of contours:


EXECUTE CALL TileContourAreasPar([german_alps],

  (VALUES (500), (1000), (1500), (3500)), -- heights

  ThreadConfig(SystemCpuCount()));


You can also inspect the queries created by the transform templates (Edit Query).

All tests were done using transforms, no custom queries involved. That is, we were measuring the time it takes to build contours by clicking a button in the UI in Manifold 8 and in Manifold 9. We had to alter the build of Manifold 8 slightly to report the time it takes to build contours / perform the tracing, but that was the extent of the changes.



7,064 post(s)
#06-Mar-18 16:11

Two new videos published showing contours in action... enjoy!

Mike Pelletier

2,036 post(s)
#06-Mar-18 17:57

This is a great build. Looking forward to giving it a go. Is the result in Mfd 9 the same as in 8 for contours and traces? Same algorithm?


3,269 post(s)
#06-Mar-18 19:53

I can't speak for 8, but I took a 3.5GB raster of Wicomico County (56,844 x 38,728 pixels) and created 10m contours. The results are here


I did it both in ArcGIS and Manifold.

ArcGIS took 10m 23s, and the CPU usage looked like this (you can see one core moderately firing):

Manifold completed in 83s, and the CPU usage looked like this (you can see all 8 cores saturated):

I don't have ArcGIS Pro, so I can't test that. If anyone has ArcPro, I could send you the file to test out.



3,269 post(s)
#06-Mar-18 20:53

just another update. I rebooted my computer, and ArcGIS ran the contours in 98s. It's always a good idea to reboot :-)

I have a student using ArcGIS Pro. He ran it on a laptop with an SSD, and it ran in 54s. So, not apples-to-apples, but at least it gives an idea.


9,986 post(s)
#06-Mar-18 21:08

Did you retest Manifold 9 after reboot as well??


3,269 post(s)
#06-Mar-18 21:17

yes. Same. Good question.

I have been messing with files all day, so I'm going to start fresh. I want to make sure that I didn't mix up the different DEMs. So here goes, I am using the 1m DEM for the county (56,844 x 38,728 pixels). I am going to run 10m contours.

Sure enough:

Manifold took 83s

ArcGIS took 10m

I must have used the 2004 DEM the last time. I am sending my student the right DEM now to try in ArcPro.

Sorry for any confusion. I wish we could go back and make edits beyond 15 minutes.

I will report the ArcPro values shortly.


9,986 post(s)
#06-Mar-18 21:33

This is really cool work Art. Much appreciated.


3,269 post(s)
#06-Mar-18 21:38

thanks, but I'm trying to do it while teaching lab, so I keep getting mixed up :-)

I've got to slow down and do this thing right - ha, ha.

Mike Pelletier

2,036 post(s)
#06-Mar-18 22:32

Thanks for doing this Art. If possible, it would be good to inspect a few small areas for the quality (realism, smoothness) of the contours with 9 and ESRI.


3,269 post(s)
#06-Mar-18 22:53

I like that idea. I will probably do that tomorrow. The other option of course is to make my students do it.


3,269 post(s)
#06-Mar-18 23:52

ArcPro: 3m 52s


7,064 post(s)
#07-Mar-18 06:02

For 10m steps across the full range of heights, or just for a limited range of heights? Slower than 9 but not bad for PRO either way, I suppose a result of 64-bitness. How did 9 do on that SSD-equipped system?

Which reminds me...

I was at AMD a couple of weeks ago and saw some interesting hardware. There are motherboards now that will host *two* 32 core Threadrippers so you can put 64 real CPU cores to work on a task in parallel with 9.

Given the very large SSDs now available, lots of RAM and 64 cores, that would make for a heck of a personal system. It's not all that expensive, either, when you consider the cost of ESRI software. A 32 core Threadripper is only $2000, so you can assemble such a system for likely under $6000.

226 post(s)
#07-Mar-18 16:55

I have been a little confused in the last year about parallelism in Manifold. From what I had been reading over the previous decade or so, I thought the big improvements in performance were going to come from being able to use the massive parallelism of graphics cards. Now it seems that CPU parallelism is being stressed a bit more. Is that an accurate impression?

I may have missed some threads (pun very much intended) on this topic in the forum, but could you or Adam provide some clarification on how much of the capabilities in Manifold 9 are able to take advantage of NVIDIA graphics processors and how much they are improved? Does the rendering in 9 make significant use of CUDA?

I know parallelism on a graphics card and with CPUs are not mutually exclusive but where in your opinion in general is money better spent for getting the most out of Manifold 9; on the graphics card or on the CPU(s)?


1,945 post(s)
#07-Mar-18 19:21

Great question and one which I have also pondered.

Admittedly by very casual periodic investigation, I haven't been able to detect any significant GPU activity on my system when running processes in 9 (or previous beta builds) despite the system recognizing the card.

Perhaps I am looking in the wrong place or when running the wrong processes. It has by no means been an exhaustive exploration.

Landsystems Ltd ... Know your land |


7,064 post(s)
#08-Mar-18 16:13

Great questions...

Now it seems that that CPU parallelism is being stressed a bit more. Is that an accurate impression?

No, it is more that after many years when there was no CPU parallelism at all, bringing CPU parallelism into the mix as an even partner might seem to make it more emphasized.

9 is both CPU parallel and GPU parallel, with the ability to automatically do CPU parallelism, or GPU parallelism, or any proportional mix of both, from one millisecond to the next depending on what works fastest at that moment. Both are there all the time, and the Radian optimizer (all of 9 is built on Radian technology) chooses the best mix on the fly for that particular job, that load, and the resources available. That's what 9 does.

Using GPU parallelism for something that can be done much faster with CPU parallelism is a blunder, and failing to use the GPU for something that can be done much faster on the GPU is also a blunder. You also need CPU parallelism even when dispatching to GPU parallelism, because you need the infrastructure performance of CPU parallelism to make the best use of many GPU cores.

The software must be able to do both and the software must be able to choose on the fly at any given instant for any given task as it stands at that moment which will be better, or a blend of both.

CPU parallelism is getting more visibility these days because it is becoming clearer in a hands-on way to more people just what happens when you really can do either CPU parallelism or GPU parallelism. For the first time, people can see for themselves that what I've been telling them for years is true.

You can tell people over and over for years, as I have done, that GPU parallelism is wonderful but that it is faster only in cases where there is some significant math to do. It doesn't make sense to try to parallelize 1 + 1 and dispatch that to 5000 GPU cores. That's slower than adding 1 + 1 to get 2 immediately in CPU. As people now work on tasks every day that have nothing to do with math they're realizing "by dang, it's true that if all I'm doing is moving bytes around on disk the GPU doesn't do anything. It's the CPU parallelism that's making it faster."

Before Manifold, nobody had a commercial desktop application that was fully CPU parallel, so nobody had a point of reference as to just what you could accomplish if you really parallelized the heck out of the entire application and really used, say, eight Core i7 cores to the max. It turns out that when you use them to the fully parallel maximum capability they have, those eight Core i7 cores (16 hypercores) are truly fire-breathing beasts.

A single, modern Core i7, Ryzen or Threadripper core is phenomenally more powerful in a general purpose way than a single NVIDIA GPU core of whatever is the latest GPU generation, and that CPU core is immediately available in a well-crafted system within the native code already running with no need to prep a dispatch to the GPU.

Even without CPU parallelism, a single, well-written thread on a CPU is already pretty darned powerful, so powerful that it doesn't pay to dispatch to GPU unless you are already well into math territory. If you now have even more CPU power by virtue of fully CPU parallel firepower, you raise the bar even higher in terms of task computational complexity before it is worth dispatching to GPU.

could you or Adam provide some clarification on how much of the capabilities in Manifold 9 are able to take advantage of the Nvidias graphics processors and how much they are improved?

As always, I'm flattered by any requests to pontificate but I think the published web pages and other info do a better job than I could do here.

Visit the Release 9 page and read the topics on the left-hand navigation bar. They do a good job of explaining how 9 uses GPU and CPU parallelism. The FAQ covers some of that as well, as does the GPGPU topic in the user manual. All those should be read carefully by anyone interested in parallelism.

In a nutshell, basically everything in 9 is both CPU parallel and GPU parallel with the optimizer choosing when it makes sense to use which, or a mix of both. You generally get a gain from GPU when doing math-heavy things, but surprisingly many day-in and day-out activities in GIS are not particularly math-heavy so they are faster to do using CPU parallelism in most cases.

Does the rendering in 9 make significant use of CUDA?

Rendering and CUDA are two different things, and using CUDA for rendering instead of the built-in GPU rendering facilities would make it slower.

GPUs are designed to do rendering in dedicated hardware, which (of course) Manifold uses. You can see that when you choose Tools - Options and use the Advanced, hardware acceleration allowed option for the Engine.

Keep in mind that CUDA is for GPGPU parallel applications, which usually are not worth doing unless they involve significant math. To oversimplify a bit, rendering is not really about big math, it's just a huge amount of bit stuff that must be done really fast. It turns out the sorts of parallel hardware you create to do that can usually (with a few tweaks) be also good for parallel math, but the two in general are different things.

where in your opinion in general is money better spent for getting the most out of Manifold 9; on the graphics card or on the CPU(s)?

That depends on the work you do. If you are doing ultra-heavy math and lots of it, you should weight your investment more toward the GPU. If you are doing a mix of general-purpose GIS you should still have a GPU, of course, but a more or less cheapo one, weighting your investment more toward CPU cores, RAM and a bigger SSD.

Buy an eight to ten core Ryzen or equivalent multicore CPU and load it up with plenty of RAM. Get a nice, big SSD as well. GPUs are dirt cheap so you should always get at least *some* GPGPU capable card to plug in, even a cheapo one. Not much point in getting a super-expensive card since it's not likely you'll ever be doing the super math heavy work where you'd notice the difference between a $200 card and a $1200 card.

To some extent, GPUs have become the economic victims of their own success. They have become so fast at the narrow range of things they do well that it doesn't make much sense to spend beyond mid-range.

In contrast, CPUs are just starting to get rolling in the manycore thing so if you want to take a more balanced approach to performance in the hardware you buy it is probably wiser right now to spend more on the CPU, memory and SSD parts of the system than overspending on GPU.

Therefore, if you want to spend big money, instead of spending $5000 for four $1200 GPU cards I'd buy one 32-core Threadripper, lots of RAM and a terabyte+ SSD and spend, say, $300 or $400 on a GPU card. That's a more general purpose system. On the other hand, if you are one of those people doing really intense math (you already know it if you are one of those guys) then it pays to spend a bit more on the GPU card.

To include a comment from another post...

Admittedly by very casual periodic investigation, I haven't been able to detect any significant GPU activity on my system when running processes in 9 (or previous beta builds) despite the system recognizing the card.

You're probably not doing anything that involves significant math. Most stuff in GIS requires very little math.

Contours, for example, are very simple math. They're mainly about getting lots of data off disk fast and doing a huge number of simple comparisons, a task that can be done faster using CPU parallelism than by doing the same thing in GPU parallelism plus the overhead of dispatching to the GPU.

Running fully parallel on a modern, multicore CPU is so danged amazingly powerful that you have to be doing some significantly heavy math before it is faster to take advantage of GPU parallelism.

There's also the reality that when Manifold does decide to blip something out to GPU it does that so incredibly fast that very few tools can see that operation happening before it is done. After all, if it wasn't super duper fast it wouldn't be worth doing, right?


I want to emphasize that none of this stuff about CPU parallelism being so fast is a bad thing. It's part of the mix and it is good to see that Intel and AMD are fielding manycore CPUs, with AMD leading the charge at getting the price right. A 32-core Threadripper for $2000 is a pretty aggressive price.

Likewise, it's great that terabyte class SSDs are becoming affordable. You can load up a machine with plenty of CPU cores, lots of memory, big SSD and still have plenty in the bank for a killer GPU that Manifold will put to work in every case where that makes sense. There are some applications, watersheds, for example, where having a GPU really does help the bigger math that's involved, so it's great you can get a killer GPU at a really low price.

226 post(s)
#10-Mar-18 19:43

Thanks for the 'patriachation'. Very informative as always.


3,269 post(s)
#07-Mar-18 19:50

I decided to take on a new test - fresh data, fresh computer, etc. The earlier tests were doing the work across the network, and we suspect that the teaching computer in the lab has a bad network connection!

Anyway, I ran some tests with GDAL, Manifold Viewer, and ArcGIS. It turns out that ArcGIS was faster, but I wonder if Manifold 9 would be faster than Viewer, simply because I could not save the .map file in Viewer, so perhaps some resources were used up importing the data.

You can see the results here.


3,269 post(s)
#07-Mar-18 21:26

if anyone wants to try out the data, just email me, and I can get you a link to my ftp site. I just don't want to publicize the link on the Internet since I have some download limitations.


10,118 post(s)
#09-Mar-18 09:15

Thanks for the test!

We will take a look at the data for sure.


7,064 post(s)
#07-Mar-18 05:38

and created 10m contours.

Art, looking at your illustration it seems you only created 10m contours on a very small subset of the data. Could it be you used the default settings for minimum height, maximum height and step in the transform template? That would produce results like those shown in the image, where there is a thin band of contours between the 0 and 100 range.

I could be wrong, but given what I suspect is the topography of Wicomico County, the creation of contour lines at each 10m step for heights from the lowest elevation in the county to the highest elevation of the county should have resulted in a much denser mass of lines.

It is nice to see 9 outperforming Arc in this. The roughly 7 times faster for smaller data (the overall data is big but if you are doing only a few contours the task is much smaller) is about what you expect given that Arc's algorithms are very good but single threaded, while tossing 8 cores at it should go faster, albeit not quite 8 times faster.

But... the more interesting task would be to compute contour lines at 10m steps for the entire range of heights. That would be a fuller test of 9's ability to handle bigger data and throughput. If I were a betting man I'd wager that would crash ArcGIS Pro, or, at least, take far longer than 10m. 9 will take much longer too, but proportionately I would expect (could be wrong... be interesting to see...) 9 would gain even more over Arc as the job scales up.

Great work! Thanks for the reports!


3,269 post(s)
#07-Mar-18 13:32

actually, if you remember from the Manifold User meeting, Wicomico County is really flat :-) Drearily so. I'm going to run things again on a different county.

194 post(s)
#07-Mar-18 17:19

Flat Florida test

5 ft contour lines on a 2.8 GB DEM took 363.645 sec (Core i7 6700, 4 cores / 8 threads).

Would be interested to know ArcGIS Pro timing

mxb file on dropbox for a week: (1.5GB)

DEM source (it seems you have to copy and paste the link:

194 post(s)
#07-Mar-18 17:44

dropbox not happy; will put here

DEM source

194 post(s)
#08-Mar-18 04:06

GDAL took about 20 min vs 6 min for M9. Contour look and placement are nearly identical.

QGIS 2.18\bin\gdal_contour" -a elev sftopo_2236.tiff gdalcontours2236.shp -i 5.0


10,118 post(s)
#07-Mar-18 07:49

Is the result in Mfd 9 the same as in 8 for contours and traces? Same algorithm?

The result in 9 is the same as in 8 up to various quirks which 9 handles better (like flat areas at exactly the contour height, where 8 sometimes produces weird connections inside while 9 carefully follows the edges).

The algorithms are different; they arrive at the same answers, with 9 doing it faster.


6,400 post(s)
#07-Mar-18 18:33

Here is the German UI file for v

Following each new build, you're done in a few minutes.

If a transform template creates new fields or components, it uses default names like [Value] for a field, or [... Table] / [... Image] as a suffix for a component name.

There are no substitutes in the localization files for these, and I'm not sure that there should be. It would require additional preprocessing of transform queries to determine which localization of the naming scheme is used in an input table, and maybe falling back to the default scheme on failure.

Imagine you run the 64-bit Manifold localized but the 32-bit Manifold not localized. It's easy to import data in the 32-bit instance and then try to use it in the 64-bit instance later.

I feel you can do too much when localization meets internals, and with Mfd9 Transforms we get insight into internals. A drawback is that localization of transform commands is reflected in the name of components added by the transform, but not in the component-type suffix. This is true for the name of a table automatically created when you create a drawing, image or labels component. Feels a bit mingled, but I'm really not sure where to draw the line for localization.

I'd draw the line between new fields and new component names. Only the latter should be completely localized, IMO.

As long as we do not have localized display and input of data types throughout the UI, I think the mixing of field names is a second priority.


Do you really want to ruin economy only to save the planet?


10,118 post(s)
#09-Mar-18 08:11

We agree drawing the line is difficult.

Here is our current thinking:

Field / value type names: should not be localized. I.e., in the Schema dialog, an NVARCHAR field should be displayed as 'nvarchar' regardless of the current language. Because if the dialog tries to be friendly and starts displaying localized names for types, the user will have to face a parallel name system on every contact with SQL.

Field names for system fields in MAP files and on other data sources / in tables returned by built-in query functions: should not be localized. Because otherwise queries start depending on the current language, a query written on a system with German localization may suddenly refuse to work on a system with English localization and vice versa.

Component type names: this is where the battle is. The argument for localizing them is that these names are very often used in discussions and aren't too technical, and maybe discussions in non-English languages are better if those names are localized. The argument against localizing them is that CREATE MAP / ALTER MAP will stay unlocalized because localizing SQL will, again, make queries depend on the current language, and we absolutely don't want that. The difference between field type names and component type names is that component type names are less technical and used way more often in casual talk.

Components created by transforms by default include the name of the transform (localized) and frequently also the name of the component type (unlocalized). This does look strange and kind of contrary to what we do in other places in the UI where we do localize the component type. We will perhaps change the logic in the transforms to localize the component type parts of the names as well. The thing that we are uneasy about is that unlocalized names of component types are still going to be with us everywhere - in the MFD_ROOT table and in the CREATE ... / ALTER ... / DROP ... statements. (In retrospect, maybe we should have just CREATE TABLE ... and CREATE COMPONENT 'map' ..., and similar or simpler ALTER and DROP. That way component type names would be a little less encroaching in SQL.)


10,118 post(s)
#10-Mar-18 06:33

A follow-up:

We discussed localizing component type names in components created by transforms, and decided not to do it right now because on closer look we have another case where using localized component type names is bad: names of components exposed by dataports. Currently, if you link a JPEG named 'Lands.jpeg', you get an image named 'Lands' and a table named 'Lands Tiles'. Other dataports behave similarly. The word 'Tiles' in 'Lands Tiles' is best kept unlocalized, because if it is localized, it is going to change if the current language changes, and that will break other components referencing that table (images, queries, etc).


6,400 post(s)
#10-Mar-18 13:45

I support this and add another argument. Mfd9 is a new world, with the growing help system being a big help. At that point every user depends on his basic knowledge of English, and anyone coming from Mfd8 or from Arc (with localizations coming out years after a new version) is used to the special terms in GIS. For that reason I have still added the original term for a lot of transforms. I needed this for myself. What is the difference between Merge and Union? What is the difference between the various Topology Overlays? Why invent new German terms when they do not appear in the help and the English terms are well known?

For those foreign users who are not familiar with GIS, it may be more helpful to find a short translation of the introduction to the help system, with links to the original articles in the help. English component names are no mystery for non-native GIS users. As long as we use the technical GIS language, translation feels artificial at times.



7,064 post(s)
#10-Mar-18 14:29

Klaus, I agree, and also because sometimes the English words themselves are just tokens, in that they are artificially chosen or have come down as a matter of tradition, and might not exactly match the work being done. So, you may as well use them as try to do a translation which doesn't really have a basis in meaning. It's like trying to translate "xerox"... no point in that.

Also, the terms might need improvement. English is great to have as an interoperability format, but it's not the native language for many of us (not mine, for example). Sometimes it can be a struggle to pick the right word for something new.

Example: we weren't sure what to name the box currently called "Similarity" in Trace Areas. Lots of talk, lots of cruising through the thesaurus and then we figured that would have to do until somebody could think up a better term. If anybody has a better idea for this or anything else, please contribute! :-)

This is a case where the parallel intelligence of the community can help give us all a better lexicon...


6,400 post(s)
#10-Mar-18 17:25

Following this discussion I think I will replace the translation of "tile" that I never loved although it shares all associations with the translation "Kachel". "Tile" refers to a system table type and field.

But there are localization strings for reports in the status bar, and there is plenty of space in this control.

So for the next edge build I will try to build a bridge between the localized UI and the system, and add some translations alongside the technical term where it might be helpful. It's a test, and comments are welcome.



6,400 post(s)
#12-Mar-18 06:46

Here "Kacheln" is replaced with "Tiles" again. ( file version

Actually, in the Command section of the UI file with descriptions for the status bar, tiles are not mentioned at all. So I drop the idea of including hints about the meaning of non-localized terms in the UI file.



Manifold User Community Use Agreement Copyright (C) 2007-2021 Manifold Software Limited. All rights reserved.