Now it seems that CPU parallelism is being stressed a bit more. Is that an accurate impression?
No, it is more that after many years when there was no CPU parallelism at all, bringing CPU parallelism into the mix as an equal partner can make it seem more emphasized.
9 is both CPU parallel and GPU parallel, with the ability to automatically do CPU parallelism, or GPU parallelism, or any proportional mix of both, from one millisecond to the next, depending on what works fastest at that moment. Both are there all the time, and the Radian optimizer (all of 9 is built on Radian technology) chooses on the fly the best mix for that particular job, that load, and the resources available. That's what 9 does.
Using GPU parallelism for something that can be done much faster in CPU parallelism is a blunder, and failing to use GPU for something that can be done much faster on GPU is also a blunder. You also have to have CPU parallelism even when dispatching to GPU parallelism, because you need the infrastructure performance of CPU parallelism to make best use of many GPU cores.
The software must be able to do both, and it must be able to choose on the fly, for any given task as it stands at that moment, which will be better, or a blend of both.
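To make that concrete, here is a minimal sketch, in C++, of what cost-based dispatch looks like. This is not Manifold's actual optimizer; every name and constant below is hypothetical, for illustration only. The idea is simply to estimate how long a job would take each way, including the fixed overhead of dispatching to the GPU, and take the cheaper route.

#include <cstddef>

enum class Target { Cpu, Gpu };

// Hypothetical throughput and overhead figures, for illustration only.
const double kGpuDispatchOverheadUs = 50.0;     // launch + transfer setup
const double kCpuFlopsPerUs         = 5000.0;   // per CPU thread
const double kGpuFlopsPerUs         = 200000.0; // whole GPU device

// Estimate elapsed time on each side and pick the cheaper target.
Target pick_target(std::size_t n_elements, double flops_per_element,
                   int cpu_threads, bool gpu_available) {
    double work   = n_elements * flops_per_element;
    double cpu_us = work / (kCpuFlopsPerUs * cpu_threads);
    double gpu_us = kGpuDispatchOverheadUs + work / kGpuFlopsPerUs;
    return (gpu_available && gpu_us < cpu_us) ? Target::Gpu : Target::Cpu;
}

With numbers like these, a light job never amortizes the fixed dispatch overhead, so it stays on the CPU, while a heavy job crosses over and goes to the GPU. A real optimizer re-evaluates continuously as load and resources change, which is what 9 does from one millisecond to the next.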
CPU parallelism is getting more visibility these days because it is becoming clearer, in a hands-on way, to more people just what happens when you really can do either CPU parallelism or GPU parallelism. For the first time, people can see for themselves, hands on, that what I've been telling them for years is true.
You can tell people over and over for years, as I have done, that GPU parallelism is wonderful but that it is faster only in cases where there is significant math to do. It doesn't make sense to try to parallelize 1 + 1 and dispatch that to 5000 GPU cores. That's slower than adding 1 + 1 to get 2 immediately on the CPU. As people now work every day on tasks that have nothing to do with math, they're realizing, "by dang, it's true that if all I'm doing is moving bytes around on disk the GPU doesn't do anything. It's the CPU parallelism that's making it faster."
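If you want to see that for yourself, here is a hypothetical CUDA micro-comparison (my own sketch, not anything from Manifold): the same trivial add done directly on the CPU and dispatched to the GPU. The kernel launch, memory transfer, and synchronization cost microseconds to milliseconds of overhead against one instruction's worth of useful work.

#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread computes 1 + 1. The work is trivial; the dispatch is not.
__global__ void add_one(int* out) { *out = 1 + 1; }

int main() {
    // CPU: the add is effectively free, a single instruction.
    int cpu_result = 1 + 1;

    // GPU: allocate, launch, copy back. Each step carries fixed overhead
    // that dwarfs the single addition being performed.
    int* d_out = nullptr;
    int gpu_result = 0;
    cudaMalloc((void**)&d_out, sizeof(int));
    add_one<<<1, 1>>>(d_out);
    cudaMemcpy(&gpu_result, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("cpu=%d gpu=%d\n", cpu_result, gpu_result);
    return 0;
}

Time the two paths and the point makes itself: the GPU answer is identical but arrives orders of magnitude later.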
Before Manifold, nobody had a commercial desktop application that was fully CPU parallel, so nobody had a point of reference as to just what you could accomplish if you really parallelized the heck out of the entire application and really used, say, eight Core i7 cores to the max. It turns out that when you use them to the full extent of their parallel capability, those eight Core i7 cores (16 hypercores) are truly fire-breathing beasts.
A single, modern Core i7, Ryzen, or Threadripper core is phenomenally more powerful in a general-purpose way than a single NVIDIA GPU core of the latest generation, whatever that is, and in a well-crafted system that CPU core is immediately available within the native code already running, with no need to prep a dispatch to the GPU.
Even without CPU parallelism, a single, well-written thread on a CPU is already pretty darned powerful, so powerful that it doesn't pay to dispatch to GPU unless you are already well into math territory. If you now have even more CPU power by virtue of fully CPU parallel firepower, you raise the bar even higher in terms of task computational complexity before it is worth dispatching to GPU.
Could you or Adam provide some clarification on how many of the capabilities in Manifold 9 are able to take advantage of NVIDIA graphics processors, and how much they are improved?
As always, I'm flattered by any requests to pontificate but I think the published web pages and other info do a better job than I could do here.
Visit the Release 9 page at http://manifold.net/info/manifold9.shtml and read the topics on the left-hand navigation bar. They do a good job of explaining how 9 uses GPU and CPU parallelism. The FAQ covers some of that as well, as does the GPGPU topic in the user manual. All of those should be read carefully by anyone interested in parallelism.
In a nutshell, basically everything in 9 is both CPU parallel and GPU parallel, with the optimizer choosing when it makes sense to use which, or a mix of both. You generally get a gain from GPU when doing math-heavy things, but surprisingly many day-in, day-out activities in GIS are not particularly math-heavy, so in most cases they are faster to do using CPU parallelism.
Does the rendering in 9 make significant use of CUDA?
Rendering and CUDA are two different things, and using CUDA for rendering instead of the built-in GPU rendering facilities would make it slower.
GPUs are designed to do rendering in dedicated hardware, which (of course) Manifold uses. You can see that when you choose Tools - Options and use the "Advanced, hardware acceleration allowed" option for the Engine setting.
Keep in mind that CUDA is for GPGPU parallel applications, which usually are not worth doing unless they involve significant math. To oversimplify a bit, rendering is not really about big math; it's just a huge amount of bit manipulation that must be done really fast. It turns out the sort of parallel hardware you create to do that can usually (with a few tweaks) also be good for parallel math, but in general the two are different things.
Where, in your opinion, is money generally better spent to get the most out of Manifold 9: on the graphics card or on the CPU(s)?
That depends on the work you do. If you are doing ultra-heavy math, and lots of it, you should weight your investment more toward the GPU. If you are doing a mix of general-purpose GIS you should still have a GPU, of course, but a more or less cheapo one, weighting your investment instead toward more CPU cores, more RAM, and a bigger SSD.
Buy an eight- to ten-core Ryzen or equivalent multicore CPU and load it up with plenty of RAM. Get a nice, big SSD as well. GPUs are dirt cheap, so you should always get at least *some* GPGPU-capable card to plug in, even a cheapo one. There's not much point in getting a super-expensive card, since it's not likely you'll ever be doing the super math-heavy work where you'd notice the difference between a $200 card and a $1200 card.
To some extent, GPUs have become the economic victims of their own success. They have become so fast at the narrow range of things they do well that it doesn't make much sense to spend beyond mid-range.
In contrast, CPUs are just starting to get rolling in the manycore thing, so if you want to take a more balanced approach to performance in the hardware you buy, it is probably wiser right now to spend more on the CPU, memory, and SSD parts of the system than to overspend on the GPU.
Therefore, if you want to spend big money, instead of spending $5000 on four $1200 GPU cards I'd buy one 32-core Threadripper, lots of RAM, and a terabyte-plus SSD, and spend, say, $300 or $400 on a GPU card. That's a more general-purpose system. On the other hand, if you are one of those people doing really intense math (you already know it if you are one of those guys), then it pays to spend a bit more on the GPU card.
To include a comment from another post...
Admittedly by very casual periodic investigation, I haven't been able to detect any significant GPU activity on my system when running processes in 9 (or previous beta builds) despite the system recognizing the card.
You're probably not doing anything that involves significant math. Most stuff in GIS requires very little math.
Contours, for example, are very simple math. They're mainly about getting lots of data off disk fast and doing a huge number of simple comparisons, a task that can be done faster using CPU parallelism than by doing the same thing in GPU parallelism plus the overhead of dispatching to the GPU.
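As a rough illustration of why (a minimal sketch of my own, not Manifold's contour engine), here is the flavor of the per-cell work: stream raster cells through CPU threads and count where values cross a contour level. The math per cell is one comparison, far too light to be worth the overhead of a GPU dispatch.

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Count how many times consecutive cells straddle the contour level,
// splitting the raster across CPU threads (n_threads >= 1 assumed).
std::size_t count_crossings(const std::vector<float>& cells, float level,
                            unsigned n_threads) {
    std::atomic<std::size_t> total{0};
    std::vector<std::thread> workers;
    std::size_t chunk = cells.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        std::size_t begin = std::max<std::size_t>(1, t * chunk);
        std::size_t end = (t + 1 == n_threads) ? cells.size() : (t + 1) * chunk;
        workers.emplace_back([&, begin, end] {
            std::size_t local = 0;
            for (std::size_t i = begin; i < end; ++i)
                if ((cells[i - 1] < level) != (cells[i] < level))
                    ++local;  // the contour passes between these two cells
            total += local;
        });
    }
    for (auto& w : workers) w.join();
    return total;
}

Every cell streams straight out of memory into a single comparison. There is no arithmetic for 5000 GPU cores to chew on, so the fully parallel CPU version wins.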
Running fully parallel on a modern, multicore CPU is so danged amazingly powerful that you have to be doing some significantly heavy math before it is faster to take advantage of GPU parallelism.
There's also the reality that when Manifold does decide to blip something out to GPU it does that so incredibly fast that very few tools can see that operation happening before it is done. After all, if it wasn't super duper fast it wouldn't be worth doing, right?
I want to emphasize that none of this stuff about CPU parallelism being so fast is a bad thing. It's part of the mix and it is good to see that Intel and AMD are fielding manycore CPUs, with AMD leading the charge at getting the price right. A 32-core Threadripper for $2000 is a pretty aggressive price.
Likewise, it's great that terabyte-class SSDs are becoming affordable. You can load up a machine with plenty of CPU cores, lots of memory, and a big SSD, and still have plenty in the bank for a killer GPU that Manifold will put to work in every case where that makes sense. There are some applications, watersheds, for example, where having a GPU really does help with the bigger math involved, so it's great you can get a killer GPU at a really low price.