Home - Cutting Edge / All posts - Manifold 9.0.180.2
adamw


10,447 post(s)
#17-May-23 14:46

9.0.180.2

manifold-9.0.180.2.zip

SHA256: 5ec5c826ff97f95d327903167740312171d3ba9bee503bfac43b7bee167dd3a9

manifold-viewer-9.0.180.2.zip

SHA256: f20c95dddbf3a7faf6898847ee34186b52a6a5ce7208283809ac76e205c48a88

sql4arc-9.0.180.2.zip

SHA256: 8ab3336fa6c5061f10127e4727942fea296ecbb5272219170558e30286c29596

adamw


10,447 post(s)
#17-May-23 14:47

Changes

The Style pane allows specifying the resample method for an image, in the Options tab. Available choices: bilinear (default) / nearest neighbor.

Rendering a map with image layers that are completely offscreen performs faster.

Rendering an HTTP map image puts the watermark at the bottom center, to keep the corners free for the legend / north arrow / scale bar.

Web data sources tunneled by the TCP / HTTP server may use persistent cache if their parent is a database such as PostgreSQL or, say, SQLite. (Web data sources such as image servers might be created either directly in MAP files or in databases like PostgreSQL. The 'Save cached data between sessions' option can be used to let web data sources save all downloaded data in their parent data source so that future requests for the same data do not have to re-request it. When used in the context of the TCP or HTTP server, web data sources created in the MAP file will not be able to use the cache because the MAP file is opened in read-only mode. However, web data sources accessed through an intermediate data source that is a database will now be able to use the cache. This significantly improves the performance of web data sources at the cost of some storage space.)
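The cache decision described above can be sketched as follows. This is an illustrative Python sketch, not the Manifold API: the class names and dictionary-backed caches are invented for illustration, standing in for the cache table that a real web data source keeps in its parent data source.

```python
# Illustrative sketch (not the Manifold API) of where a web data source
# keeps downloaded tiles, per the behavior described above.

class TileCache:
    def __init__(self, parent_writable, persist_between_sessions):
        self.parent_writable = parent_writable  # e.g. a PostgreSQL/SQLite parent vs a read-only MAP file
        self.persist = persist_between_sessions # the 'Save cached data between sessions' option
        self.session_cache = {}                 # in-memory only, gone when the server stops
        self.persistent_cache = {}              # stands in for a cache table in the parent data source

    def get_tile(self, key, fetch):
        # Serve from the persistent cache first, then the session cache.
        if key in self.persistent_cache:
            return self.persistent_cache[key]
        if key in self.session_cache:
            return self.session_cache[key]
        tile = fetch(key)  # download from the web server
        # Persist only when the option is on AND the parent can be written to;
        # a MAP file opened read-only by the TCP / HTTP server fails this check.
        if self.persist and self.parent_writable:
            self.persistent_cache[key] = tile
        else:
            self.session_cache[key] = tile
        return tile
```

With a writable database parent, a tile is fetched once and every later request is served from the persistent cache; with a read-only MAP file parent, tiles land in the session cache and are lost when the server stops.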

The amount of memory to use for cache can be decreased below 4 GB, down to 1 GB. The default is still 4 GB.

MANIFOLDSRV ignores the amount of memory to use for cache specified as a user option and instead allows specifying it via the new -memory:xxx command-line parameter. The value of the command-line parameter is in GB. The default is 4 GB. (The reason the server ignores the value of the user option is that server instances frequently run under specialized user accounts; accessing user options for these accounts might be hard or even impossible.)

The startup database operated by the TCP / HTTP server is made available for scripts; they can access it using Application.GetDatabaseRoot().

The TCP server allows controlling server scripts via the new -scripts:xxx command-line parameter. Available choices:

  • -scripts:on - allows running scripts (currently applies to calling script functions in queries, computed fields and constraints, in the future will also apply to running script components),
  • -scripts:off - disallows running scripts (the default).
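The two command-line parameters above follow the same -name:value shape. A hypothetical sketch of parsing them, with the defaults and the 1 GB floor stated in the notes above (the real MANIFOLDSRV parsing is internal to the product; this is only an illustration):

```python
# Hypothetical sketch of parsing server options in the style described
# above: -memory:xxx (in GB) and -scripts:on|off.

DEFAULT_MEMORY_GB = 4  # default cache size per the notes above
MIN_MEMORY_GB = 1      # cache can now be decreased down to 1 GB

def parse_server_args(args):
    memory_gb = DEFAULT_MEMORY_GB
    scripts_enabled = False  # -scripts:off is the default
    for arg in args:
        if arg.startswith("-memory:"):
            # Clamp to the documented minimum of 1 GB.
            memory_gb = max(MIN_MEMORY_GB, int(arg.split(":", 1)[1]))
        elif arg == "-scripts:on":
            scripts_enabled = True
        elif arg == "-scripts:off":
            scripts_enabled = False
    return memory_gb, scripts_enabled
```

For example, `-memory:8 -scripts:on` would give an 8 GB cache with scripts allowed, while no parameters at all would give the 4 GB default with scripts disallowed.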

The query engine always runs queries from data sources for the TCP server remotely. (See the post below for details.)

The buttons in the HTTP map UI use PNG icons and show tooltips.

The buttons in the HTTP map UI are grouped into a toolbar placed at the center of the top edge of the map. The toolbar can be collapsed and expanded.

The default number of connections for the HTTP server is increased from 20 to 100, to account for web browsers creating multiple connections to the server for the same client. The maximum number of connections for the HTTP server for a non-Server edition of Manifold is also increased from 20 to 100.

The page served by the HTTP server declares compliance with HTML 5, has the viewport meta tag, etc.

The HTTP server caches rendered map images for performance.

End of list.

artlembo


3,424 post(s)
#17-May-23 18:43

this runs a little faster, but not too much. OSM is definitely the rate limiting factor. I suppose that the more the site is used, the more caching will take place, right?

As a workaround, I did the following:

1. In QGIS I downloaded the OSM tiles using the Generate XYZ Tiles command.

2. Once the mbtiles were created, I converted them to a TIFF with gdal_translate. A couple of small caveats:

a. I kept getting errors when trying to import the .mbtiles into Manifold. I think I've set my gdal path correctly, but who knows. I finally gave up. It gave an error of:

2023-05-17 14:41:40 *** Failed to load library: gdal303.dll

Cannot Open File

b. Manifold does not seem to read the coordinate information in the TIF file, even when exporting a GeoTIFF. So instead, I modified the gdal_translate command to include -co TFW=YES. That creates a world file that Manifold can read.

3. Once the TIFF is brought into Manifold, things are really fast, even in the browser. It's about a 1 GB image for Philadelphia, so I'll see what the Tompkins County data is like later tonight.
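For reference, the world file that gdal_translate writes is just six plain-text numbers: pixel width, two rotation terms, negative pixel height, and the X,Y of the center of the upper-left pixel. A sketch of generating one is below; the helper name and the coordinates used in the usage note are made up for illustration.

```python
# Sketch: build the six-line world file (.tfw) text for a north-up image.
# A GDAL-style geotransform origin is the top-left CORNER of the top-left
# pixel; the world file wants the CENTER of that pixel, hence the half-pixel
# shifts below.

def world_file_text(pixel_w, pixel_h, ulx, uly):
    lines = [
        pixel_w,            # pixel size in X
        0.0,                # rotation term (row)
        0.0,                # rotation term (column)
        -pixel_h,           # pixel size in Y (negative: rows run downward)
        ulx + pixel_w / 2,  # X of the center of the upper-left pixel
        uly - pixel_h / 2,  # Y of the center of the upper-left pixel
    ]
    return "\n".join(f"{v:.10f}" for v in lines) + "\n"
```

For a 10 m image whose top-left corner sits at (500000, 4200000) in a projected system, the fifth and sixth lines come out as 500005 and 4199995 (the center of the first pixel).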

adamw


10,447 post(s)
#18-May-23 09:50

Caching helps the initial render and then it helps repeat renders. The effect is biggest in multi-user scenarios.

If you have layers that need to fetch data from other servers, like OSM, they would usually be the limiting factor on rendering, yes. Fetching data from OSM will quite likely take longer than rendering all layers combined.

Yes, with layers like OSM, the more you use the site, the faster it becomes. Downloaded tiles are cached and not re-downloaded again. By default, caching is per-session; all downloaded tiles are gone after you stop the server. You can make the cache persist by putting an image server into an intermediate data source, eg, a SQLite database, and turning on the 'Save cached data between sessions' option on the image server.

The 'Failed to load library: gdal303.dll' error might indicate that you have an older / newer version of GDAL than we support. We support a range of versions, but you might be out of that range. Or it might happen because GDAL DLLs that are in the PATH are, say, 32-bit (we are 64-bit only).

We'd be interested to see the GeoTIFF file in which we failed to parse the coordinate system. This should not happen.

Dimitri


7,471 post(s)
#18-May-23 10:58

OSM is definitely the rate limiting factor. I suppose that the more the site is used, the more caching will take place, right?

Not in a persistent way if you've created the OSM image server data source in the part of the project that's saved in the .map.

First, a few words on how caches work when you use them with image servers:

Suppose you're working in desktop mode on a project to which you have read/write access. In your project you've added an OSM image server data source, and you checked the 'Save cached data between sessions' option in the data source dialog.

The File - Create - New Data Source topic discusses in detail what that means and what capabilities it gives you. What it means is that tiles you pull in from the image server will be saved in a folder within the .map project file. Those saved tiles are the cache, and, like the other contents of the .map project, they'll be saved in the project when you save it. When you open the project again, those tiles will still be in that cache, ready to be used. That has two big benefits and two possibly significant downsides.

One big benefit is that your image server will seem to be running much faster, because any panning or zooming you do that requires tiles that have already been downloaded will happen instantly, since those tiles are already in the .map project and can be displayed much faster than even a fast Internet connection to OSM, Bing, or Google can get them. If you're working for a town and all your work is in that town, a side effect of the normal browsing you do in your projects will be to automatically load up the cache with most of the tiles for the image server you use as a background, so only rarely will 9 ever have to fetch some previously unused tiles from Bing or whatever.

A second big benefit is that if you open the project in a setting where you don't have Internet, say, on a laptop far out in the field, you'll still be able to use that image server layer just as if you were connected, so long as the panning and zooming you do falls within the tiles that were previously downloaded in your panning and zooming sessions. I use that effect a lot when I'm planning on doing some work in the field where I don't think I'll have an Internet connection (by phone or whatever) for my notebook. Before I go out into the field I'll create a Bing or Google satellite or streets image server with the cache option turned on, and then I'll pan and zoom in my map the regions I expect to be visiting, usually saving some locations so I know I'll have the key views I'll be needing. I save the project, and then when I'm out in the field I can go to those locations and I'll have the views I want from the image server even if there is no Internet connection.

But there are two main downsides. The first is that if you're doing a lot of panning and zooming with cache turned on, the cache can get really big, hundreds of MB, and thus a project that maybe had only a few MB of vector layers can end up being a few hundred MB in size because of the large size of the cache. It would be nice to have a "max cache" parameter setting (I'll suggest that) to prevent projects from growing too huge if somebody doesn't notice how big they are getting.

The second downside is that the system can't write cached tiles into the .map project if the .map project is read only. You can't create a new component in a read-only .map file, so you can't create a new table or add records to an existing table in a read-only project. If you opened the .map project read/write, created the image server with cache turned on and then used it, the project will have the cache file created. If you later open that .map project read only, tiles that have been downloaded during the current working session will be cached in virtual storage, but since the .map project file is not writeable when the session ends they can't be saved for future use.

It's that last point which explains why caches for image servers don't persist past the immediate session with TCP or HTTP servers: even if you created the project with the cache option turned on for, say, an OSM data source in the project, as adamw mentioned, web data sources created in the MAP file will not be able to save the cache for future use because the MAP file is opened in read-only mode by the HTTP server. Server services (or command prompt sessions) are often stopped and started again to do things like make slight changes to the map being served, so the virtual cache will only last until the server service or session is stopped.

How can you get around that? By creating the OSM data source in a read-write data source, like PostgreSQL. Create a .map project and link in a PostgreSQL server to which you have read/write access. Within that data source, create the OSM image server. The cache table will be created within the PostgreSQL server as well. You can copy the image from the OSM image server and paste it into the main part of the .map, and then you can use the image as a layer in whatever map you'll be serving with Server.

Save the file and close it. You can now use that .map file in Server. Even though the .map file's components (like the OSM image layer in the map) are all read-only, the cache is back in the read/write PostgreSQL database and can be updated.

Adamw gives the example of using an SQLite database. If you don't want to install and operate a PostgreSQL server you can just use an SQLite database file, linking that in and creating the OSM data source within that SQLite data source. That will work too. But my experience with SQLite is that it is slower than PostgreSQL, so I'd go with PostgreSQL.

Regarding this:

this runs a little faster, but not too much.

That may or may not be improved by the other two important performance upgrades in build 180.2, depending on what you are serving.

The first upgrade:

Rendering a map with image layers that are completely offscreen performs faster.

Big web maps often have very many layers, perhaps 20 or 30 or more, and some of those can be huge. The "Euro Dem" terrain relief layer that Manifold uses for testing, which has high-resolution terrain relief for much of Europe, is around 190 GB. But that only covers Europe, so if you want to create a web-served world map that shows terrain relief you might use a much lower resolution background layer to cover the whole world, with the Europe layer covering Europe. You could also have big, high-resolution terrain layers covering the US and other regions of interest.

If people are looking at a spot in Asia in the map window, there's no need for the system to waste any time thinking about rendering an image layer that only covers Europe. Build 180.2 adds that optimization. Manifold's pretty fast so with a single user that's not something that's usually noticeable. For example, the "rendering 110 GB of images" video has some very large images in Boston, Paris, Hong Kong and so on but it still renders everything instantly without this capability, showing, for example, Paris very fast even though there are huge images off screen.

But when the HTTP server is handling possibly thousands of users, it makes sense to avoid any wasted effort. Even with one user if there are 30 or so layers that are huge and are covering different parts of the world, it can make a noticeable difference in speed to cut out any consideration of layers that are not in view. If you have such a map, 180.2 will be visibly faster.

There is also the second optimization:

The HTTP server caches rendered map images for performance.

You can see that in action by browsing the web map in your browser, and then clicking the Zoom in button to zoom, say, ten steps into the map. Now, click the Zoom out button ten steps. You'll see the zoom outs will happen instantly, because they're showing cached views. Click the Zoom in button and now those zoom ins will also happen instantly, because they're also now showing views that are in cache. Same also for the zoom to fit button, which now displays from cache.

This doesn't help if you don't repeat the same views, for example, by panning in random ways or by using the mouse wheel to zoom to different zooms than previously used, but it can still be a big help. As more navigation controls like locations and "back" and "forward" arrows are added, the cache will be a bigger help.
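The zoom in / zoom out behavior described above is what you'd expect from a render cache keyed by view. A minimal Python sketch of the idea (illustrative only, not the server's actual code or keying scheme):

```python
# Illustrative sketch: cache rendered map images keyed by (center, zoom),
# so repeating a view serves the stored image instead of re-rendering.

class RenderCache:
    def __init__(self, render):
        self.render = render  # function (center, zoom) -> image
        self.cache = {}
        self.renders = 0      # counts actual (non-cached) renders

    def view(self, center, zoom):
        key = (center, zoom)
        if key not in self.cache:
            self.renders += 1
            self.cache[key] = self.render(center, zoom)
        return self.cache[key]
```

Zooming in ten steps renders ten views; zooming back out over the same steps renders nothing new, which is why the zoom outs appear instantly.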

Hope that helps!

adamw


10,447 post(s)
#17-May-23 14:48

Regarding running queries on different data sources and the change made for the TCP server.

As you know, Manifold allows accessing data from data sources other than the main MAP file within a query. Eg:

--SQL9

SELECT * FROM [server]::[data];

...will return records from the table named 'data' on the data source named 'server'.

If 'data' is a query, however, the logic is more complicated. All data sources may run queries using the Manifold query engine, with Manifold syntax. Some data sources, eg, databases like PostgreSQL or SQL Server, may also run queries using their own query engine, with their own syntax. If the data source for 'server' has its own query engine, we check whether 'data' should be run on the server's native query engine or on the Manifold query engine. (We do this by inspecting the text of the query and looking for the '$manifold$' directive.) If 'data' is a native query, we ask the server to run it. If 'data' is a Manifold query, we run it on the client, because the server does not know how to run a Manifold query; only the Manifold client does.

This also applies to:

--SQL9

EXECUTE [[ SELECT * FROM [data]; ]] ON [server];

It looks like we ask the *server* to run 'SELECT * FROM [data];', and we kind of do, but if we were to be precise, we are asking the *data source*. The data source then follows the same logic as above: it checks whether 'data' is a native query or a Manifold query, and then either asks the server to run it, or runs it on the client -- again, because the server does not know how to run a Manifold query, only the client does.

All of the above is generally desired. But recently we ran into an exception. If the data source is our TCP server (Manifold Server), we would like both of the above statements to run the query on the server despite the query being a Manifold one. Because unlike other types of data sources, Manifold Server *can* run Manifold queries.

Manifold Server data sources were already running queries on the server in many circumstances. Eg, if the served MAP file had a query component, connecting to the server, then opening and running that query would run it on the server. Connecting to the server, then opening a command window for the server and running statements in that command window would also run those statements on the server. But opening a command window for the main MAP file and running one of the above two statements, or statements similar to them, would run the query on the client. The change fixes that - attempting to run queries on a Manifold Server data source will now always run them on the server.
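The routing rule in this note can be summarized in a few lines. This is an assumed, simplified sketch in Python, not the query engine itself: the function name and the boolean flag are invented, and the flag stands in for the Manifold Server exception described above.

```python
# Sketch of the routing described in this note: a query on a data source
# with its own query engine runs on that server's native engine unless its
# text carries the '$manifold$' directive, in which case only a client
# that speaks Manifold SQL can run it -- except that Manifold Server
# itself can run Manifold queries, so it keeps them server-side.

def where_to_run(query_text, server_can_run_manifold_sql=False):
    is_manifold_query = "$manifold$" in query_text
    if not is_manifold_query:
        return "server"  # native syntax: hand it to the database
    # Manifold syntax: run on the client, unless the data source is
    # Manifold Server, which can run Manifold queries itself.
    return "server" if server_can_run_manifold_sql else "client"
```

A plain native query goes to the server; a query carrying '$manifold$' goes to the client for ordinary databases, but stays on the server when the data source is Manifold Server.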

End of note.

Manifold User Community Use Agreement Copyright (C) 2007-2021 Manifold Software Limited. All rights reserved.