Thanks for the info. As the GPGPU topic states and as the images you've posted prove, your GPU is indeed set up with CUDA and Manifold is indeed recognizing it (the Help - About and call function screenshots). So don't waste any time thinking the GPU isn't ready to do parallel work. You write, "The data files that I am using are all stored locally on my machine,"
but you don't mention exactly how you're using the data you get from the CSV file, for example, whether you're importing it or linking it. Leaving the data stored in the CSV instead of importing it into Manifold is a guarantee of wretched performance. (See the Performance Tips topic... full of useful advice.) It's not clear exactly (I don't take anything for granted) what you're doing when you write, "I know and understand the process of adding schema and a new Geom field with lat/lon coordinate system, as well as Composing the lat/lon point data."
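To illustrate why a linked CSV is so much slower than imported data, here is a hypothetical sketch in plain Python (not Manifold code, and the numbers are made up): a linked text source has to be re-parsed from text on every pass, while imported data is converted to native types once and scanned cheaply afterward.

```python
import csv, io, time

# Hypothetical illustration (not Manifold code): a linked CSV pays the
# text-parsing cost on every pass; imported data pays it once.

# Build a small in-memory "CSV file" with 200,000 lat/lon-style rows.
rows = "\n".join(f"{i % 90}.5,{i % 180}.25,{i}" for i in range(200_000))

t0 = time.perf_counter()
parsed = [(float(a), float(b), float(c))
          for a, b, c in csv.reader(io.StringIO(rows))]
parse_time = time.perf_counter() - t0   # cost paid per pass over a linked source

t0 = time.perf_counter()
total = sum(r[2] for r in parsed)       # cost of a pass over already-imported data
scan_time = time.perf_counter() - t0

print(f"parse pass: {parse_time:.3f}s, native pass: {scan_time:.3f}s")
```

On typical hardware the parse pass is one to two orders of magnitude slower than the native scan, and an interpolation makes many passes over the data, which is why the manual tells you to import.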
I assume you've created a drawing, in which case you shouldn't be talking about CSV, because that's not what you're using any more. You're using a Manifold drawing. It could be you're still working with a table that you imported from the CSV. I don't think that's an issue but, like I said, it doesn't pay to take anything for granted when debugging something done by someone just getting used to the system. You also don't mention exactly how you're interpolating, all the settings that you're using, or the exact results of the interpolation. It looks like you're making some incorrect assumptions, for example, "since this is a computationally heavy task..."
No. Interpolation is pretty trivial arithmetic. Some forms involve slightly more computation, but it's not demanding math. Usually where it blows up in terms of taking long is when people tell the system to do something truly enormous, by specifying unnecessary, unrealistically large options and so forth. For example, if you tell the system to create an interpolated raster that ends up being 200 GB in size, when given the resolution of your starting data a mere 200 MB would already be overkill, it will faithfully follow your orders. My guess is you're at risk of doing some of that inadvertently, because you've already stated you're ignoring the advice in the interpolation topic for the Transform pane: "To keep units straight, it is best to do interpolations using drawings that have been projected into linear coordinate systems, and not radial coordinate systems like Latitude / Longitude."
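The 200 GB scenario is just arithmetic. Here's a back-of-envelope sketch (generic math, not Manifold-specific, with made-up extents) showing how quickly output size blows up, because halving the pixel size quadruples the pixel count:

```python
# Back-of-envelope raster sizing (illustrative arithmetic, not Manifold code).

def raster_bytes(width_units, height_units, pixel_size, bytes_per_pixel=4):
    """Approximate uncompressed size of an interpolated raster."""
    cols = width_units / pixel_size
    rows = height_units / pixel_size
    return cols * rows * bytes_per_pixel

# A 100 km x 100 km area at 10 m pixels: modest.
print(raster_bytes(100_000, 100_000, 10) / 1e6, "MB")    # ~400 MB
# The same area at 0.1 m pixels: 10,000x more pixels.
print(raster_bytes(100_000, 100_000, 0.1) / 1e9, "GB")   # ~4,000 GB
```

A hundredfold finer pixel size means a ten-thousandfold bigger raster, so an over-ambitious resolution setting alone can turn a quick job into an enormous one.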
But you've said you want a resolution of 0.01°, so you're still using a radial coordinate system. That can still be done, of course, but there are more opportunities for error doing it that way (hence the advice in the user manual to use a linear coordinate system... takes but a second to arrange). The above are just guesses. To help you debug your process we have to know every detail of exactly what you're doing (the exact nature of the data in your project) and how you're doing it (all settings for how you're interpolating, whether you're using the Transform pane, whether you're using some query you've written or a script, etc.). Regarding: "I noticed in an earlier reading of the GPGPU topic page that the example of the Task Manager display includes monitors/graphs for 'CUDA' and 'Compute', neither of which I seem to have in mine, even though my GPU clearly has CUDA capability. This is what seems to be the issue, in my opinion, that my GPU is somehow not set for computations, but graphics processing only, and I cannot for the life of me figure out how to change it."
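As an aside on why radial units invite mistakes, here's a quick illustration (plain spherical-Earth arithmetic, not Manifold code): a "pixel" of 0.01° is not a fixed ground distance, because a degree of longitude shrinks with latitude.

```python
import math

# Illustrative arithmetic: ground length of 0.01 degrees, assuming a
# spherical Earth with mean radius 6,371 km (an assumed value).
R = 6_371_000  # meters

def deg_lat_meters(deg):
    """Ground length of `deg` degrees of latitude (roughly constant)."""
    return math.radians(deg) * R

def deg_lon_meters(deg, lat_deg):
    """Ground length of `deg` degrees of longitude at latitude `lat_deg`."""
    return math.radians(deg) * R * math.cos(math.radians(lat_deg))

print(f"0.01 deg latitude  ~ {deg_lat_meters(0.01):.0f} m everywhere")
print(f"0.01 deg longitude ~ {deg_lon_meters(0.01, 0):.0f} m at the equator")
print(f"0.01 deg longitude ~ {deg_lon_meters(0.01, 60):.0f} m at 60 N")
```

So a 0.01° pixel is roughly 1.1 km tall but only about half a kilometer wide at 60° latitude, which is exactly the kind of unit confusion the manual's advice about linear coordinate systems avoids.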
Thinking that the GPU isn't configured for computations because the Task Manager performance display on your computer doesn't show a Compute display is leaping to the wrong conclusion. (It also indicates the GPGPU topic might have just been skimmed and not actually read.) The Manifold user manual doesn't normally duplicate Windows documentation to explain how to operate standard Windows dialogs, but in this case the GPGPU topic explicitly covers what you mentioned, in the section headed Windows Task Manager and GPGPU. That has useful illustrations and comments such as: "Windows Task Manager's Performance tab may not report actual GPGPU utilization using default settings." [... illustrations, text, etc. ...] "If we switch to the GPU report, depending on how we have the GPU displays configured we may not see any GPU usage. GPUs have many uses, and the Task Manager displays might be configured to show specialized functions such as 3D, or Video Decode, and not GPGPU parallel computation. Generally, we have to choose Compute_0, Compute_1, or Cuda in the pull down lists to get a display of GPGPU function." [... illustration, text ...] "In the illustration above choosing Cuda in the upper display shows 100% utilization of the GPU, all 1280 CUDA cores in the GTX 1060 GPU working at full saturation to perform the computation. [...] Note, however, that despite the GPU being used 100% with all cores running 100%, the readout in the lower left corner of the display still shows 0% GPU. If that is all we look at, we might wrongly conclude the GPU is not being utilized."
Task Manager has a lot of capabilities, so it pays to get familiar with it. If anything in the above is unclear, search the web for information on Task Manager and you'll find other discussions of it that you may find more useful, like: "By default, the Task Manager has enough room to display only four different graphs. However, if you want to change one of them and monitor some other feature, you can click or tap on the small arrow button on the top-left corner of a graph and select what to watch. You can choose from things like Video Processing, Legacy Overlay, Security, Cuda, VR, and so on, depending on the actual features offered by that video card."
That's from this site. That site is for Windows 8, but it's the same for 10 and 11.

The resolution of Task Manager is pretty poor when it comes to GPU. If the application is mostly disk bound with only quick use of GPU every now and then, Task Manager can miss that. It could also be that Manifold's optimizer realized it's quicker to execute on CPU cores instead of dispatching to GPU for a trivial computation. There are other possible issues discussed in the GPGPU topic that should not be overlooked. For example, there's the discussion in the Windows and GPGPU section. If you unknowingly command an unnecessarily large operation and you're using the same GPU for display and computation, you could end up triggering the Windows system's bad habit of flushing GPU computation to make sure your GPU is fully available in case the mouse cursor moves a pixel and should be updated (just kidding about that, as even Windows 10 isn't that stupid...). That's usually not an issue, but without knowing all the details we can't say.

Regarding performance overall... There are plenty of other hardware and software details that could matter, like those listed in the Performance Tips topic. Most cases of interpolation are disk bound, not compute bound, so it pays to look at what might affect performance given the usual interplay between things like random access and slower memory. For example, people try working with large data, dozens of gigabytes in size, when they only have 8 GB of RAM in the machine. That's going to be slow. Or they do something else that guarantees Windows will be doing lots of paging, but they have a really slow page store, have the max size of their Manifold process set too low, etc. To get into all that, people have to know more about your setup and more about your data, like how big it is, how big the .map project is, etc.

Overall, my guess is what's going on is one or more of these factors:

1. Possible command to do unnecessarily detailed rasters.
For example, if your data points are a kilometer apart, it doesn't make sense to do an interpolation where the pixels are ten centimeters in size. That doesn't add precision.

2. Transform - interpolation settings that add busy work but don't really do anything given your data.

3. Unrealistic expectations, thinking the task is compute intensive when in reality it's almost certainly disk bound.

4. Slow data store architecture, such as linking a CSV instead of importing into the .map file, insufficient memory for fast operation, slow page files, running the project or having tmp space on a USB stick or other dead-slow data store, etc.

5. Something else that causes slowness, like running a virtual Windows system on an old Linux laptop, an overall slow processor or system, etc.

The GPU isn't a miracle worker. It can help a lot in certain specific cases, but if the overall system architecture includes one or more tourniquets it can't force blood to flow through a stone. A 1060 was a fine GPU for parallel computations. But a 1060 is nine years old, a very old card that predates the modern shift to SSD on the motherboard and other essential performance-enhancing system features. If you're running a 1060, it could be that the rest of your system isn't particularly fast when it comes to disk-bound tasks.

So, tell us a lot more details about your system, exactly what your project looks like, and exactly what you're doing by way of interpolation, and we can drill further into what's going on. It could be that making a simple change in 1. or 2. above could make your work go dramatically faster. Worth a look.
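To put a number on point 1, here's a rough rule of thumb as a sketch (my suggestion, not from the Manifold manual; function name and the half-spacing fraction are my own choices): pick a pixel size near the typical spacing of your sample points, since pixels much smaller than that spacing add file size, not precision.

```python
# Hedged rule of thumb (not from the Manifold manual): estimate mean
# point spacing from point density, then size pixels as a fraction of it.

def suggested_pixel_size(area_m2, point_count, fraction=0.5):
    """Rough pixel size: `fraction` of the mean point spacing."""
    spacing = (area_m2 / point_count) ** 0.5  # mean spacing for evenly spread points
    return fraction * spacing

# 10,000 points over a 100 km x 100 km area are roughly 1 km apart,
# so pixels around 500 m capture all the real detail; 10 cm pixels
# would be tens of millions of times more data for no added precision.
print(round(suggested_pixel_size(100_000 * 100_000, 10_000)), "m")
```

Any similar sanity check against your actual point spacing, however you compute it, will quickly show whether your requested resolution is realistic.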