No, for any given computer configuration there's no way around that, but at least it's a one-time cost. If you have a fast computer with plenty of RAM, fast cores, and a fast SSD, it will of course go faster than a slower machine with less RAM and a slow disk, but it's still a one-time cost.
The .sid format is particularly slow because it is designed to be fast as a lossy viewing format, not as a format for precision analysis. For analytics, the first thing that has to happen is extracting whatever data is in the .sid into a form on which analytics can be done.
Ultimately, no matter how fast the analytic engine might be, it's not going to go any faster than the storage technology used for data access allows. That's yet another example of Gene Amdahl's famous observation that a parallel process can't go any faster than its slowest, serial part.
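To get a rough numeric feel for that limit, here's a toy Python sketch of Amdahl's law. Nothing here is Manifold-specific, and the 30% serial fraction is a made-up figure purely for illustration:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of a job,
# no matter how many parallel workers you throw at the rest of it.
def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Theoretical speedup when serial_fraction of the work cannot be
    parallelized and the rest is split evenly across `workers` workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# Suppose (made-up figure) 30% of a job is serial storage access:
for n in (1, 4, 16, 64):
    print(n, round(amdahl_speedup(0.3, n), 2))

# However many workers you add, the speedup never exceeds 1 / 0.3, about
# 3.3x, which is why slow storage caps the whole pipeline.
```

The takeaway: once the serial part is storage access, buying more cores stops helping long before buying faster storage does.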
To cobble together an extreme analogy to help illustrate the point, it's like trying to create a new audio mix using cassette tape or a reel-to-reel tape recorder as the working medium for your source audio: very slow, because to get to each little bit you have to run the tape backwards and forwards. But if you can load all the source audio tracks into memory in a modern digital workstation, you can cut and paste bits and pieces of tracks instantly, however you like.
To carry that analogy into the job you're doing now, you have to "play the tape" of the .sid file all the way through just once, so that its contents can be captured into a fast, modern format. Thereafter you can work with it much more quickly.
I should also add, in the "throw hardware at it" department, that I'm not a big fan of throwing faster hardware at a task in a brute-force way, but I have to admit I'm sometimes surprised by how fast that can be. The workstations I normally use are pretty old, mainly because I'm too lazy to migrate all my files and such to newer machines. Manifold is normally so fast on those that I don't feel a need to improve hardware. But occasionally, when working with non-parallel software, I'll RDP into a much faster machine to do a job, and I often get a "whoa!" feeling at seeing how a state-of-the-art, really fast machine with lots of RAM and a fast SSD can do something significantly faster than the older machines I normally use.
I'm also getting better at not being so totally slovenly in how I use Manifold. I use 9 as a personal organizer and information manager, and I also keep "master projects" on hand, each containing a table of projects that I frequently open for a particular theme. Just one click launches the project in a Manifold session for me, without having to remember where the project was saved or which portable edition of Manifold I'm working with. For example, there's one for Travel, where I have a list of Manifold projects related to travel to various places. I like archaeology, so when I travel to Istanbul I get ready for the trip by launching my project for Turkey, planning walking routes in my free time using maps showing archaeology, where the hotel is, the stop to catch a ride to the airport, and so on. There are tables/drawings with favorite hotels, restaurants, contacts, etc. Anyway, as a result of actively using Manifold right now, I have 14 sessions open on my taskbar.
But that's not so smart if the memory cache size set in Tools - Options for each session is a big number, like 16 GB, because those sessions can all start competing with each other to grab and use RAM, and if you launch a big job in one you could end up with lower performance as a result of memory thrashing between that session, the other sessions, and other Windows programs trying to grab big chunks of RAM. It's also not smart if the cache is set really low but I want to do a bigger job.
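As a back-of-envelope illustration of why that's risky, here's the arithmetic in a few lines of Python. The session count and cache setting come from my example above; the 64 GB of installed RAM is a hypothetical figure I've added for the sketch:

```python
# Rough arithmetic only: potential cache demand from many open sessions.
sessions = 14        # open Manifold sessions, as on my taskbar above
cache_gb = 16        # hypothetical per-session cache set in Tools - Options
installed_gb = 64    # hypothetical installed RAM for this example

potential_demand_gb = sessions * cache_gb
print(potential_demand_gb)                 # 224 GB of potential cache demand
print(potential_demand_gb > installed_gb)  # True: overcommitted, so big jobs
                                           # can thrash against each other
```

The point is just that per-session settings multiply: a cache number that's sensible for one session can add up to several times the machine's RAM when you keep a dozen or more sessions open.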
I've suggested a solution for that: a command line option or launch option for Manifold that allows setting a cache size per session. You could leave the default at a small number, so that simple projects with a few hundred notes used as personal information managers don't try to grab 16 GB, but for big tasks you could tell the session to go ahead and grab 32 GB or 48 GB or whatever.
Of course, if you don't have a dozen big sessions running at the same time the above is a "don't care," but I like the idea of having many things going, so when I need something, like a password manager, it's right there at my fingertips.