That's probably worth a try, but it may not help: there's a special point cloud index that is used when linking to las/laz but that hasn't yet been implemented for point clouds imported into the project. There's also the significant time it takes to import from las/laz.
Come to think of it, that's part of the issue with extracting data via a query from the las/laz file: it's no longer just a viewing situation. The data has to be decompressed from las/laz and made available within the project, so I'd expect the system basically has to import it anyway, at least virtually into temp storage, so the real data is available to the query engine. So you may as well import the data and add relevant indexes, as Art recommends.
las/laz is a great format for interchange, archival storage, and specific uses dedicated to LiDAR, such as viewing and various transformations purpose-built to work with it (as in LAStools), but it's not a high performance format for general purpose GIS tasks like unrestricted SQL. So if you're going to do a lot of general purpose GIS work on the point data set, it's probably better to import from las/laz into a faster data store.
There are two options for that: native Manifold storage, or a general purpose DBMS that's very fast at handling points, like PostgreSQL.
Manifold's internal DBMS is tuned for general purpose GIS use, where individual objects aren't just points but are often (even usually) lines or areas consisting of many coordinates. That's why 9 is often faster than general purpose DBMS packages like PostgreSQL for general purpose GIS use (as seen in the "Rendering Shootout" videos on the videos page at https://manifold.net/info/videos_gallery.shtml).
But DBMS packages like PostgreSQL were created explicitly to handle very large numbers of records and transactions where each record is tiny. That's exactly what points are, so they're fast at that. Well-written DBMS packages that have had 40+ years of tuning have also become so fast that even with GIS data that has "fat" records and transactions they'll often run faster than Manifold's native data store once you get into the 100 to 200 million object range.
In this case, I'd guess that a data set of 137 million records, each consisting of just a point and a handful of small attributes, would have faster data access when stored in PostgreSQL, especially if you rig up Postgres for parallel processing (not trivial, but doable). That's a special case that plays to Postgres's strengths, where it will shine.
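For what it's worth, a starting point on the parallel processing side is adjusting a few postgresql.conf settings so the planner will actually pick parallel plans on big scans. The values below are illustrative guesses for a workstation, not recommendations; tune them for your own core count and workload:

```
# postgresql.conf -- illustrative values only, tune for your hardware
max_worker_processes = 8              # total background workers the server may start
max_parallel_workers = 8              # of those, how many may serve parallel queries
max_parallel_workers_per_gather = 4   # workers a single query can use
parallel_setup_cost = 100             # lower than the default 1000; favors parallel plans
parallel_tuple_cost = 0.05            # lower than the default 0.1; favors parallel plans
```

After changing these, EXPLAIN on a big scan should show Gather / Parallel Seq Scan nodes if the planner is cooperating.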
If you're doing a lot of this sort of work, you could install a PostgreSQL/PostGIS database, load it up with the las/laz data, and then do the queries in the server, linking to the results with 9. Of course, loading a Postgres database with 137 million records from the las/laz file will be painfully slow, but it will be a one-time pain (kind of like it's a one-time pain to import into the Manifold project).
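As a rough sketch of what the Postgres side might look like: the table name, columns, and SRID below are hypothetical, and the actual bulk load could come from text exported by LAStools' las2txt or a similar tool. The key parts are a PointZ geometry column and a GIST spatial index so bounding-box queries stay fast at 137 million rows:

```sql
-- Hypothetical schema for the point cloud; adjust SRID and attributes to your data.
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE lidar_points (
    id        bigserial PRIMARY KEY,
    geom      geometry(PointZ, 32610),   -- assumed UTM zone; use your CRS
    intensity integer,
    class     smallint
);

-- Bulk loading is fastest with COPY from a text export of the las/laz data,
-- e.g. COPY lidar_points (geom, intensity, class) FROM '/path/points.csv' ...

-- Spatial index: this is what makes area-of-interest queries quick.
CREATE INDEX lidar_points_geom_idx ON lidar_points USING GIST (geom);

-- Example query 9 could link to: points falling inside a bounding box.
SELECT id, intensity
  FROM lidar_points
 WHERE geom && ST_MakeEnvelope(500000, 4180000, 501000, 4181000, 32610);
```

Building the GIST index after the bulk load, rather than before, usually makes the one-time load noticeably less painful.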
That obviously doesn't make sense for one-off projects, where it would be better to just bite the bullet, link the data into a 9 project, and run the query. Just like "a watched pot never boils," don't sit there and watch the thing crank away. Run the query when you're headed out the door to go home for the evening, so when you come in the next day it will be done and you'll be thinking "oh, that was quick..." :-)
Hope that helps!