Manifold server observations
artlembo
3,260 post(s)
#27-May-22 02:26

My apologies for some of the formatting. Out of the country and doing this on my phone. If one of the admins wants to reformat the code so things read better, feel free to do so with my approval. Sorry I couldn’t do it myself at the moment:

Just a couple of observations about Server. I've created two .map files, one running Server and one acting as a client (manifoldserver1.map, manifoldclient1.map). You can get the sql4arc.gdb data from my website, so you can recreate all of this. I also loaded the data into a Postgres server.

manifoldclient1.map connects to manifoldserver1.map. There is a query in manifoldserver1.map that selects all the parcels in the Fall Creek watershed; it is defined as:

EXECUTE [[
    SELECT p.*, watershed
    FROM tcparcelsutm AS p,
         watershedsutm AS w
    WHERE ST_Intersects(p.geom, w.geom)
      AND w.watershed = 'Fall Creek'
]] ON postgres

Running Server Queries

So, in manifoldclient1.map, we can run that as:

SELECT *
FROM [ManifoldServer]::[Query]

just as illustrated in the Help files.

We can also tunnel directly to the Postgres server:

SELECT p.*
FROM [ManifoldServer]::[postgres]::[public.tcparcelsutm] AS p,
     [ManifoldServer]::[postgres]::[public.watershedsutm] AS w
WHERE GeomTouches(p.geom, w.geom, 0)
  AND w.watershed = 'Fall Creek'

Notice that we are using GeomTouches, so this is Manifold SQL running against the database.

No double tunneling???

We can't seem to tunnel into the Server app with:

EXECUTE [[
    SELECT p.*
    FROM tcparcelsutm AS p,
         watershedsutm AS w
    WHERE ST_Intersects(p.geom, w.geom)
      AND w.watershed = 'Fall Creek'
]] ON [ManifoldServer]::[postgres]

It throws an error: "Cannot parse query".

Not sure if this is expected and part of the design, or if I'm referencing the Server app incorrectly.

Tunnel when a Server database is dragged in

But we can drag the postgres folder from manifoldserver1.map into our local .map file, rename it postgres_inMfd, and issue the following query to tunnel into Postgres:

SELECT *
FROM (
    EXECUTE [[
        SELECT p.*
        FROM tcparcelsutm AS p,
             watershedsutm AS w
        WHERE ST_Intersects(p.geom, w.geom)
          AND w.watershed = 'Fall Creek'
    ]] ON [postgres_inMfd]
) AS a

Notice again that we are issuing ST_Intersects, so this is Postgres in action. But we have to go through the extra step of dragging the postgres data source into our local manifoldclient1.map file. Not a big deal, but the double-tunnel would be great here.

Easiest Distributed Computing on the Planet

Now here is another slick idea: I'm querying Postgres directly, querying the geodatabase directly, and using the results back in the local .map file:

SELECT sum(asmt) AS sumAsmt, propclass
FROM (
    EXECUTE [[
        SELECT p.*, watershed
        FROM tcparcelsutm AS p,
             watershedsutm AS w
        WHERE ST_Intersects(w.geom, p.geom)
          AND w.watershed = 'Fall Creek'
    ]] ON [postgres_inMfd]
) AS a,
[ManifoldServer]::[sql4arc.gdb]::[Floodzonesutm.Shape] AS b
WHERE GeomTouches(a.geom, b.Shape, 0)
  AND zone = 'AE'
GROUP BY propclass

This illustrates the idea that if you have a really large dataset on a backend server, you can let that server do the heavy lifting (i.e., [postgres_inMfd] trims it down to size), and you can issue another query over another connection (i.e., sql4arc.gdb), all accessed via Server. The results of those two subqueries (each returning a smaller dataset) can then be fed to another query that computes sum(asmt). So this is a great example of distributed queries. Thinking bigger: you hit one database with Postgres, one with SQL Server, and one with Oracle. All three databases do the heavy lifting on the back end and return smaller subsets to the calling query, which puts the finishing touches on them.

Updating database - not just read only!!

And lastly, I see that using Server is not quite as read-only as one would think. This query actually updates the data in PostGIS (again, using the postgres_inMfd data source that was dragged into the project):

UPDATE [postgres_inMfd]::[public.watershedsutm]
SET watershed = 'Fall Creek updated'
WHERE watershed = 'Fall Creek'

and also this:

EXECUTE [[
    UPDATE watershedsutm
    SET watershed = 'Fall Creek New'
    WHERE watershed = 'Fall Creek'
]] ON [postgres_inMfd]

So, wow... connecting via Server does allow updates to occur.

Fixing the Manifold to Postgres Port

Oftentimes, a geodatabase has field names in capital letters, and shapefiles certainly do. When you drag those layers into Postgres, the capital letters are maintained, which makes writing queries in Postgres a pain. The QGIS Database Manager offers an option to convert field names to lowercase; I wonder if something similar should be done in Manifold. If so, Manifold really becomes the best way to load a Postgres, SQL Server, or Oracle database: just click and drag. But at the moment, when legacy GIS data has capital letters, that becomes cumbersome.

Going to try some more things over the next few days. Server has so much potential.

Dimitri

7,025 post(s)
#27-May-22 07:46

That makes writing queries in Postgres a pain.

If you don't wrap a field name in double-quotes, isn't it case insensitive in PostgreSQL? I could be wrong about this, but it seems to me that you can use

SELECT my_field FROM my_table ...

where you could use my_field or My_Field or my_Field to refer to a field which was created as My_field, right?

Q provides a "force lower case" feature because Q was originally created as a viewer for data in PostgreSQL, so it knows the well-known quirk in PostgreSQL where, contrary to the SQL standard, all unquoted field names are folded to lower case (the standard says they should be folded to upper case). Using the Q feature to force all field names to lower case is thus backwards for all other database systems that follow the SQL standard.

Personally, I like the intrinsic case-insensitivity of field and table names in SQL. It makes work convenient when many different formats are used, with various approaches to upper and lower case in how people named the various bits and pieces.

There is always a risk in creating scripts or queries that are case sensitive. For example, what worked with data taken from a shapefile in one context (like the original shapefile) might not work in a different instance. Wouldn't it be better to write the Postgres queries without double quotes (which force case sensitivity), instead of relying on field names which are different from the original data in some contexts but not in others?

But I guess if you really wanted to export using forced upper, lower, or title case, that could be an option. Something to suggest, I suppose.

artlembo
3,260 post(s)
#27-May-22 08:54

I’m pretty sure Postgres is case sensitive. Don’t have my computer now to recheck.

What I do is write a query that generates ALTER statements. Then I copy the results and run them in Postgres to convert all table and field names to lowercase.
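A sketch of such a generator query, assuming the tables live in the public schema (the schema name and filter are assumptions; adapt as needed):

SELECT format('ALTER TABLE %I.%I RENAME COLUMN %I TO %I;',
              table_schema, table_name, column_name, lower(column_name))
FROM information_schema.columns
WHERE table_schema = 'public'
  AND column_name <> lower(column_name);

Running this produces one ALTER TABLE ... RENAME COLUMN statement per mixed-case column; copy the result set back into a query window and execute it. A similar pass over information_schema.tables would handle the table names themselves.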

marcus
51 post(s)
#27-May-22 10:47

Without double quotes, table names and field names are assumed to be lower case. However, the corresponding actual table and field names must also be in lower case for the query to work.

For example,

SELECT My_field FROM my_table

will not work if the actual field name is My_field. It will only work if the field name is my_field.

Dimitri

7,025 post(s)
#27-May-22 12:25

However, the corresponding actual table and field names must also be in lower case for the query to work.

That seems to contradict what the PostgreSQL manual says:

Key words and unquoted identifiers are case insensitive. Therefore:

UPDATE MY_TABLE SET A = 5;

can equivalently be written as:

uPDaTE my_TabLE SeT a = 5;

If key words and unquoted identifiers were case insensitive only when the actual name is lower case, it seems they should not write "unquoted identifiers are case insensitive" without any qualifiers.

Trying it out, it seems like you're right, but if so the Postgres doc should be clarified.
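The folding behavior can be seen in a short experiment (a sketch; the table and column names here are made up for illustration):

CREATE TABLE "My_Table" ("A" int);   -- quoted names keep their case in the catalog
UPDATE MY_TABLE SET A = 5;           -- fails: MY_TABLE folds to my_table, which does not exist
UPDATE "My_Table" SET "A" = 5;       -- works: quoted identifiers match exactly

So unquoted identifiers are case insensitive among themselves, but they all fold to lower case and only ever match objects whose stored names are lower case.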

Dimitri

7,025 post(s)
#27-May-22 12:44

Ran out of time to edit my other reply to you, but trying this out with a sample database like the one used in the Connect to PostgreSQL topic, where most of the fields are mixed case, it looks like you're right. The solution, therefore, is to just use double quotes if you want to execute server-side.

For example, if you launch a command window on the PostgreSQL data source so that queries in that command window execute server-side within PostgreSQL by default:

This works...

SELECT mexico_table."Population" FROM public.mexico_table;

But this does not...

SELECT mexico_table.Population FROM public.mexico_table;

It's the same for table names and field names that include spaces in them. Use double quote characters to delimit the identifier and all is well.

artlembo
3,260 post(s)
#27-May-22 13:17

Yes, and when writing queries that return, say, six or seven columns along with geometry columns in uppercase, it is just a hassle to quote every column name. That's why I oftentimes will upload the data from QGIS, or run that query that alters all the column names.

Biggest surprise for me was the ability to update the Postgres database via Server connections.

Dimitri

7,025 post(s)
#27-May-22 13:34

Sounds like an option to automatically double-quote identifiers in the Query Builder would do the trick.

There's room in the query builder to add a toolbar that could have a choice of styles for delimiting identifiers, such as " double quotes, [ ] square brackets, etc. Pick the option you want, and then whenever you double-click a field or table name in the query builder to add it to the query it automatically appears bracketed by " double-quotes, instead of by [ ] square brackets.

After all, the issue is how you write queries in Postgres. If there's a syntax that Postgres recommends to deal with their quirk it seems to me the right solution is to make it easy to write queries that use the Postgres recommendation, not to change the actual data to engineer around the quirk.

Do the latter, and now you have two different versions of your data: one that is the original and is used everywhere, and another that is used just in Postgres to get around the "OK, well, we're not really case-insensitive for identifiers..." thing. And then what happens when you want to export back out of Postgres to interoperate with applications written around the original shapefiles?

That option for the query builder would be handy in other settings for writing server-side queries. For example, PostgreSQL wants you to delimit with " double quotes if the identifier has a space in it. So if you've followed the example topic and your table isn't named mexico_table but mexico table with a space, then you need to write stuff like:

SELECT "mexico table"."Population" FROM public."mexico table";

and not (like you get with just double-clicking from the query builder to add fields or tables):

[public.mexico table]

Such an option would help handle field names and table names with spaces or other un-SQL characters in them, besides fixing the pseudo-case-insensitivity thing in Postgres.

adamw

10,011 post(s)
#01-Jun-22 13:19

Regarding double-tunneling with EXECUTE [[ ... ]] ON [a]::[b], this currently gets interpreted as EXECUTE on [a] with the rest of the text going into the next statement (and failing). We will change parsing to support nesting in this case (and in similar cases if we have them).

Manifold User Community Use Agreement Copyright (C) 2007-2021 Manifold Software Limited. All rights reserved.