The Reproject Component dialog for images includes an option to specify reprojection method. Available methods are:
- 'standard (fast)' - the traditional reprojection method used in Manifold 8 and used in 9 for display; this is the default,
- 'direct sub-pixel (more accurate)' - an improved method that takes longer, but produces a better-quality image.
The 'direct sub-pixel' reprojection method allows specifying the number of pixel subdivisions. The default number of subdivisions is 1, which is enough for most cases. Increasing the number of subdivisions improves image quality (with diminishing returns) at the cost of processing time.
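The idea behind sub-pixel reprojection can be sketched as supersampling. This is an illustration of the general technique, not Manifold's actual code; in particular, how the number of subdivisions maps to sample counts is an assumption here. Each target pixel is sampled at n x n sub-pixel positions, each sample is inverse-projected into the source image, and the sampled values are averaged:

```python
# Illustrative sketch of sub-pixel reprojection as supersampling
# (not Manifold's implementation).

def reproject(source, inverse_project, width, height, subdivisions=1):
    """source(x, y) -> pixel value in source coordinates;
    inverse_project(u, v) -> source coordinates for target coordinates (u, v)."""
    n = max(1, subdivisions)  # assumption: n subdivisions -> n x n samples
    target = [[0.0] * width for _ in range(height)]
    for ty in range(height):
        for tx in range(width):
            total = 0.0
            for sy in range(n):
                for sx in range(n):
                    # sample position inside the target pixel
                    u = tx + (sx + 0.5) / n
                    v = ty + (sy + 0.5) / n
                    px, py = inverse_project(u, v)
                    total += source(px, py)
            target[ty][tx] = total / (n * n)
    return target
```

With n = 1 each target pixel is sampled once at its center; larger n averages more samples per pixel, which is where the diminishing-returns quality gain and the extra processing time both come from.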
New query functions:
- CoordConvertTileSet - takes a coordinate converter, an image in the source coordinate system, a rectangle in the target coordinate system and a number of pixel subdivisions, and returns a table with image tiles projected to the target coordinate system.
- CoordConvertTileSetPar - a parallel variant of CoordConvertTileSet.
(We might rename these two functions in the near future and reuse those names to build similar functions for the standard reprojection method.)
The GPGPU interface is no longer tied to a particular version of CUDA and can use CUDA devices starting with compute capability 2.0 (Fermi).
The GPGPU modules are loaded from the GPGPU.DAT file included in the installs of 64-bit versions of Manifold and Manifold Viewer. The file contains multiple versions of each module; the GPGPU interface automatically loads modules of the latest version that can be used on the CUDA device present on the system. If the system has multiple CUDA devices, the GPGPU interface loads modules of the latest version that can be used on all devices, to enforce consistency and avoid recompilations. Currently, GPGPU.DAT includes modules for compute capability 2.0 (Fermi), 3.0 (Kepler) and 5.0 (Maxwell). Performance differences between versions are mostly small, but positive, with later versions performing very slightly better than earlier ones.
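The selection rule above can be sketched in a few lines (illustrative code, not the actual loader): with multiple CUDA devices, pick the newest module build whose required compute capability does not exceed the lowest capability present in the system.

```python
# Sketch of the module-selection rule described above.
# Capabilities are (major, minor) tuples; MODULE_BUILDS mirrors the
# builds listed for GPGPU.DAT.

MODULE_BUILDS = [(2, 0), (3, 0), (5, 0)]  # Fermi, Kepler, Maxwell

def pick_module_build(device_capabilities):
    """Return the newest build usable on every device present."""
    lowest = min(device_capabilities)
    usable = [b for b in MODULE_BUILDS if b <= lowest]
    return max(usable)
```

For example, a system with a Maxwell (5.2) and a Kepler (3.5) device would get the 3.0 build, so that the same modules run on both devices.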
There is a new Join dialog to transfer data between tables, invoked via the Edit - Join command for a table window.
The current component is shown on the left and cannot be changed. The component to join data from is shown on the right. It can be a table or a query in the data source of the current component.
The line below the components specifies the join criteria. The only join method available for tables is 'equal', with a field chosen from each table.
The dialog tries to guess the field to use as a key for both tables. For the target table, the dialog tries to use a field with a BTREE / BTREENULL index (but not MFD_ID), with a type preference (numbers > text > everything else) and a name preference ('..._id' = '... id' > '...id' > everything else). For the joined table, the dialog uses a similar logic, but first tries to use a field with the same name as in the target table.
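The preference order above can be sketched as a small scoring function. This is a hypothetical illustration of the described heuristic; the field names, type names and scoring are not the dialog's actual code.

```python
# Illustrative sketch of the key-guessing preferences described above.

def name_rank(name):
    n = name.lower()
    if n.endswith('_id') or n.endswith(' id'):
        return 0                  # best: '..._id' / '... id'
    if n.endswith('id'):
        return 1                  # next: '...id'
    return 2                      # everything else

def type_rank(field_type):
    if field_type in ('int32', 'float64'):   # illustrative numeric types
        return 0                  # numbers preferred
    if field_type == 'nvarchar':
        return 1                  # then text
    return 2                      # then everything else

def guess_key(indexed_fields):
    """indexed_fields: [(name, type)] pairs with a BTREE / BTREENULL
    index, MFD_ID already excluded."""
    return min(indexed_fields,
               key=lambda f: (type_rank(f[1]), name_rank(f[0])))
```

For the joined table the same ranking would apply, with an extra first pass that looks for a field whose name matches the key chosen for the target table.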
The bulk of the dialog is a list of fields in the target table. Fields that participate in unique indexes (BTREE / BTREENULL; this also includes the implied system index on MFD_ID in MAP files), fields used as join criteria and computed fields cannot be written to. Other fields can be specified to be populated from data in the joined table, showing the transfer method in the middle column and the source field in the last column. To set a field to be populated from the joined table, edit (double-click or focus and press Enter or F2) the empty space for either the transfer method or the source field. Editing the transfer method lists all transfer methods which can be used with the target field and with any of the joined fields. Editing the source field lists either all fields which can be used with the selected transfer method, or, if no transfer method is selected yet, all fields which can be used with any transfer method. Editing either the transfer method or the source field allows selecting '(none)', excluding the field from the join.
If the join criteria is such that for each record in the target table, there is at most one record in the joined table, the list of transfer methods is limited to 'copy' (used when the type of the target field exactly coincides with the type of the joined field) and 'convert' (used when the type of the target field is different from the type of the joined field, but there is a conversion available).
If the join criteria is such that for each record in the target table, there might be multiple records in the joined table, the list of transfer methods includes all transfer methods available for aggregated fields in the Transform pane (numeric aggregates, spatial aggregates, string aggregates, etc).
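The two join shapes above can be sketched in plain Python (an illustration of the semantics, not the dialog's implementation): with at most one matching record per key the transfer is a straight copy, while with multiple matching records an aggregate such as a sum collapses them to one value per target record.

```python
# Sketch of the one-to-one vs one-to-many transfer semantics.

def join_copy(target, joined, key):
    """One-to-one: at most one joined record per key -> 'copy' transfer."""
    lookup = {r[key]: r for r in joined}
    for rec in target:
        match = lookup.get(rec[key])
        if match is not None:
            rec.update({k: v for k, v in match.items() if k != key})

def join_sum(target, joined, key, field):
    """One-to-many: multiple joined records per key -> aggregate (sum)."""
    totals = {}
    for r in joined:
        totals[r[key]] = totals.get(r[key], 0) + r[field]
    for rec in target:
        rec[field + '_sum'] = totals.get(rec[key], 0)
```

The Transform pane aggregates (numeric, spatial, string, etc.) play the role of `join_sum` here, each collapsing the group of matching records in its own way.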
If the target table allows adding new fields, the dialog enables the Add button in the toolbar above the list. Clicking the Add button shows a drop-down menu of fields from the joined table. Selecting a field from the menu adds a new field to the list of fields in the dialog, sets the transfer method for the added field to 'copy' and the source field to the selected field. The name of the new field can be edited. The type of the new field is not shown but is set to coincide with that of the source field. New fields are highlighted using the bluish 'computed' color and have a distinctive icon on the record header. Unwanted new fields can be selected and deleted using the Delete button in the toolbar.
The list of fields can be filtered by name. The filter matches the name of both the target field and the name of the joined field, if it is specified.
To perform a join, click Join Component at the bottom of the dialog. Joining copies data over to the target table. Any changes to the data in the joined table made *after* the join will not automatically appear in the target table. However, you can save the join operation using the 'Save update query' option - this will create a query which will repeat the join whenever convenient. You can have any number of saved joins. You can also customize what these joins do by editing their code.
The Edit - Join command is disabled for queries, readonly tables (tables that lack a unique index and so cannot be updated, or tables on readonly data sources) and system tables (MFD_META / MFD_ROOT / MFD_SRID and such).
(Note 1: You can join data from a table or query in a different data source, too. Create a query in the data source of the target table and set query text to 'TABLE [datasource]::[table]', citing the component on the foreign data source. This query works as an alias for the table on the foreign data source. In particular, it will be visible in the Join dialog for the target table.)
(Note 2: We are going to create a quick cheatsheet of how to use the dialog, invokable by F1. For now, we only have a placeholder page, but it is going to grow. We might create cheatsheets for other dialogs in the future as well.)
The Stack Horizontal / Stack Vertical commands for a layout window support stacking frames of varying width / height (previously, the frames were stacked evenly using the width / height of the active frame).
The new Align Center Horizontal / Align Center Vertical commands for a layout window align layout frames to a common center.
(Fix) Switching the cursor mode in a layout window no longer fails to update the cursor mode icon on the toolbar.
The various favorites dialogs require selecting new favorites in order to add them to the list of current favorites. Adding new favorites to the list of current favorites does not remove them from the dialog.
ECW support is updated to the latest version of ECW SDK, incorporating fixes to reading ECW and JPEG2K files.
WEBP support is updated, incorporating fixes to reading and writing WEBP files.
(Fix) Reading PBF no longer sometimes skips blocks of lines and areas (mostly happens on large files).
Reading PBF splits PBF data tags into multiple tables based on the tag name: tags starting with 'addr' / tags starting with 'building' / tags starting with 'source' / all other tags. (This change allows importing whole-world PBFs without running into the current limit of 2 billion records per table. After the limit is removed, we will read tags into the single table again.)
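The splitting rule above amounts to routing each tag to one of four tables by name prefix. A minimal sketch (illustrative code, not the importer itself):

```python
# Sketch of the tag-routing rule described above: tags go to one of four
# tables based on the tag name prefix.

def tag_table(tag_name):
    for prefix in ('addr', 'building', 'source'):
        if tag_name.startswith(prefix):
            return prefix
    return 'other'
```

For example, an OSM tag like 'addr:street' would land in the 'addr' table, while 'highway' would land in the table for all other tags.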
Reading PBF is optimized to perform significantly faster, particularly on huge files. (We want to allow importing whole-world files. We removed the roadblock with the tags above, removed two or three similar roadblocks elsewhere, and are now working on performance. Reading a whole-world PBF used to take multiple days - only to run into a limit on the number of records per table and fail. Now the process no longer runs into the limit, but it still takes about a full day to complete, because the amounts of intermediate data processed are huge - over a terabyte, with complex organization. We are going to improve the time it takes to handle all this. We are learning a lot from dealing with vector data sets of this size as well.)
(Fix) Attempting to export an XLS file no longer sometimes fails when creating the initial XLS file.
Reading KML recognizes more variants of geometry data (all types in the semi-official Google samples).
Reading KML recognizes more variants of URL links (NetworkLink, etc).
End of list.