I don't know about that... with only 21504 CUDA cores from the two A40 cards, are you sure that will be enough? :-)
OK, more seriously, it's obviously a very intense and powerful configuration. Whether it's the best for what you want depends on the details of what you're doing. For example, two A40 cards are way overkill for GPU computation using CUDA cores. They're also overkill for most rendering tasks, unless you're rendering frames for a CGI-heavy Hollywood movie. Of course, if your users are working with virtual workstations and using the server for rendering with 3D software that supports that configuration, then maybe 25 of them will be able to keep the A40s occupied. It all depends. On the other hand, for AI/machine learning work, the tensor cores will really shine.
My initial impression is that two CPUs with 16 cores each seems light on CPU cores (32 total) for a machine that's intended to be time-shared by 25 power users. The new Xeon Gold series is very good technology and fast on single threads, so nobody's going to get fired for choosing it, but with many users my guess (just a guess) is that you'd do better with more cores, even if each individual core isn't as fast as an individual Xeon Gold core. Having 32 or 64 slightly slower cores per CPU would probably give you more overall throughput with 25 power users than having 16 slightly faster cores per CPU.
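As a back-of-envelope sketch of that trade-off (all the numbers here are made up for illustration, not benchmarks of any real part), aggregate throughput under a heavily parallel multi-user load is roughly cores times per-core speed:

```python
# Hypothetical throughput comparison: fewer fast cores vs. more slower cores.
# Assumes a perfectly parallel load from many users -- a best case that
# favors core count; real workloads fall somewhere short of this.

def total_throughput(sockets, cores_per_cpu, per_core_speed):
    """Aggregate throughput in arbitrary 'work units' per second."""
    return sockets * cores_per_cpu * per_core_speed

# 2 sockets x 16 fast cores (speed normalized to 1.0)
fast = total_throughput(sockets=2, cores_per_cpu=16, per_core_speed=1.0)

# 2 sockets x 32 cores, each (say) 20% slower
slow = total_throughput(sockets=2, cores_per_cpu=32, per_core_speed=0.8)

print(f"2 x 16 fast cores:   {fast:.1f} units/s")
print(f"2 x 32 slower cores: {slow:.1f} units/s")
```

Even with each core 20% slower, the 32-core parts come out well ahead on aggregate throughput in this idealized picture; the catch is that any single-threaded task an individual user runs will feel slower.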
That's where you get into the hall-of-mirrors labyrinth of configuring more complex machines based on the cost of various deals you can put together. If your IT department is buying a ready-built machine from Dell, they're going to be constrained by what Dell offers, as compared to a mix-and-match setup put together from the open market.
My own personal experience has been that you get the best bang for the buck on manycore processors by leveraging higher-quality consumer gear or the lower end of "server" gear. Buy not the very latest CPU generation but one that's six months to a year back, load up on cores, RAM, and fast M.2 SSDs, and go for one CPU with more cores instead of two CPUs. That's especially true these days, when extra-aggressive pricing on a new CPU generation seems to be a thing of the past.
For example, in the past I've had great luck buying AMD Threadrippers one or two generations back, with a gonzo number of cores at an absurdly low price, rather than the latest Intel chip with fewer cores at a higher price. But there are a lot of sensible reasons why IT departments often resist that approach, given the very high cost of maintaining ad hoc system configurations, especially if you start out with technology that's already a year or more old.
In the case of your proposed system, it looks like the architecture is balanced to do the really high-speed, intensive computing (massive AI work, etc.) on the GPUs, with the CPUs playing a support role. I'd therefore go with more cores in the CPUs so there are always spare cores to handle the many small tasks you get with 25 users. How that works out optimally in terms of price, and in the comparison between Intel and AMD, will take a lot of tinkering with spreadsheets and available deals.
I'll close by emphasizing that the above is just a gut feeling, not a real analysis. The performance you get, and whether users are happy with big, complicated, expensive configurations, will depend very much on the specifics of the software being run, from OS to applications, and on how the various software packages are used.