Discussion: delta relations in AFTER triggers
Kevin Grittner
2014-06-14 23:56:44 UTC
Attached is a WIP patch for implementing the capture of delta
relations for a DML statement, in the form of two tuplestores --
one for the old versions and one for the new versions.  In the
short term it is intended to make these relations available in
trigger functions, although the patch so far doesn't touch any PLs
-- it just takes things as far as providing the relations as
tuplestores in the TriggerData structure when appropriate, for the
PLs to pick up from there.  It seemed best to get agreement on the
overall approach before digging into all the PLs.  This is
implemented only for INSERT, UPDATE, and DELETE since it wasn't
totally clear what the use cases and proper behavior were for other
triggers.  Opinions on whether we should try to provide deltas for
other cases, and if so what the semantics are, are welcome.

Once triggers can access this delta data, it will also be used for
incremental maintenance of materialized views, although I don't
want to get too sidetracked on any details of that until we have
proven delta data available in triggers.  (One step at a time or
we'll never get there.)

I looked at the standard, and initially tried to implement the
standard syntax for this; however, it appeared that the reasons
given for not using standard syntax for the row variables also
apply to the transition relations (the term used by the standard).
There isn't an obvious way to tie that in to all the PLs we
support.  It could be done, but it seems like it would be intolerably
ugly, and more fragile than what we have done so far.

Some things which I *did* follow from the standard: these new
relations are only allowed within AFTER triggers, but are available
in both AFTER STATEMENT and AFTER ROW triggers.  That is, an AFTER
UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
variables as well as the delta relations (under whatever names we
pick).  That probably won't be used very often, but I can imagine
some cases where it might be useful.  I expect that these will
normally be used in FOR EACH STATEMENT triggers.
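
To illustrate (the REFERENCING names below are hypothetical -- this
patch does not yet wire the transition tables into any PL), such a
trigger might eventually look like:

-- Hypothetical sketch: an AFTER ... FOR EACH ROW trigger that can see
-- both the row variables and the transition tables.
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric NOT NULL);

CREATE FUNCTION check_update() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- OLD and NEW are the single row under consideration; old_rows
    -- and new_rows would hold every row affected by the statement.
    IF NEW.balance <> OLD.balance
       AND (SELECT count(*) FROM new_rows) > 1000 THEN
        RAISE NOTICE 'row % changed as part of a large update', NEW.id;
    END IF;
    RETURN NULL;
END
$$;

CREATE TRIGGER check_update_trig
    AFTER UPDATE ON accounts
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH ROW
    EXECUTE PROCEDURE check_update();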

There are a couple things I would really like to get settled in
this round of review, so things don't need to be refactored in
major ways later:

(1)  My first impulse was to capture this delta data in the form of
tuplestores of just TIDs, and to fetch the tuples themselves from
the heap on reference.  In earlier discussions others have argued
for using tuplestores of the actual rows themselves.  I have taken
that advice here, but still don't feel 100% convinced. What I am
convinced of is that I don't want to write a lot of code based on
that decision and only then have people weigh in on the side of how
I had planned to do it in the first place.  I hate it when that
happens.

(2)  Do we want to just pick names for these in the PLs rather than
using the standard syntax?  Implementing the standard syntax seemed
to require three new (unreserved) keywords, changes to the catalogs
to store the chosen relation names, and some way to tie the
specified relation names in to the various PLs.  The way I have
gone here just adds two new fields to the TriggerData structure and
leaves it to each PL how to deal with that.  Failure to do anything
in a PL just leaves it at the status quo with no real harm done --
it just won't have the new delta relations available.

Of course, any other comments on the approach taken or how it can
be improved are welcome.

At this point the only testing is that make check-world completes
without problems.  If we can agree on this part of it I will look
at the PLs, and create regression tests.  I would probably submit
each PL implementation as a separate patch.

I was surprised that the patch to this point was so small:
 5 files changed, 170 insertions(+), 19 deletions(-)
Hopefully that's not due to having missed something.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
David Fetter
2014-06-15 14:54:04 UTC
On Sat, Jun 14, 2014 at 04:56:44PM -0700, Kevin Grittner wrote:
> Attached is a WIP patch for implementing the capture of delta
> relations for a DML statement, in the form of two tuplestores --
> one for the old versions and one for the new versions.

Thanks!

Any chance we might be able to surface the old version for the case of
UPDATE ... RETURNING?

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-06-15 16:33:46 UTC
David Fetter <***@fetter.org> wrote:

> Any chance we might be able to surface the old version for the
> case of UPDATE ... RETURNING?

Not as part of this patch.

Of course, once delta relations are available, who knows what
people might do with them.  I have a hard time imagining exactly
how you would expose what you're talking about, but a column to
distinguish before and after images might work.  Incremental
maintenance of materialized views will require that in the form of
a count column with -1 for deleted and +1 for inserted, so there
might be some common ground when we get to that.
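
For instance (purely illustrative -- none of this machinery exists
yet), a delta of that shape could be built from the two transition
tables like so:

-- Illustrative only: a combined delta with a count column, -1 for
-- deleted rows and +1 for inserted rows, drawn from hypothetical
-- old_rows/new_rows transition tables.
SELECT o.*, -1 AS count FROM old_rows o
UNION ALL
SELECT n.*, +1 AS count FROM new_rows n;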

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Alvaro Herrera
2014-06-16 19:59:00 UTC
Kevin Grittner wrote:
> Attached is a WIP patch for implementing the capture of delta
> relations for a DML statement, in the form of two tuplestores --
> one for the old versions and one for the new versions.  In the
> short term it is intended to make these relations available in
> trigger functions, although the patch so far doesn't touch any PLs
> -- it just takes things as far as providing the relations as
> tuplestores in the TriggerData structure when appropriate, for the
> PLs to pick up from there.  It seemed best to get agreement on the
> overall approach before digging into all the PLs.  This is
> implemented only for INSERT, UPDATE, and DELETE since it wasn't
> totally clear what the use cases and proper behavior were for other
> triggers.  Opinions on whether we should try to provide deltas for
> other cases, and if so what the semantics are, are welcome.

TRUNCATE triggers shouldn't have delta relations, that's for sure, or
it'd completely negate the point of TRUNCATE triggers. I don't think we
have any other event, do we? People who want delta relations for
deleting all rows should be using DELETE, I think.

I am not sure about providing delta relations in the FOR EACH ROW case.
For what cases are they useful? In all cases I think NEW and OLD are
sufficient. I didn't read the standard, but if it's saying that in FOR
EACH ROW the new/deleted/changed row should be accessible by way of a
delta relation, then perhaps we should look into making NEW and OLD be
accessible via that relation rather than adding delta relations. It
might be more code, though.

Now, if you only have delta relations in FOR EACH STATEMENT, then it
seems to me you can optimize obtaining them only when such triggers
are defined; this lets you do away with the reloption entirely, doesn't
it?

I noticed that GetCurrentFDWTuplestore() changed its identity without
getting its comment updated. The new name seems awfully generic, and I
don't think it really describes what the function does. I think it
needs more renaming.

> (1)  My first impulse was to capture this delta data in the form of
> tuplestores of just TIDs, and to fetch the tuples themselves from
> the heap on reference.  In earlier discussions others have argued
> for using tuplestores of the actual rows themselves.

Can you please supply pointers to such discussion? I don't see any
point in not just storing TIDs, but perhaps I'm missing something.

> (2)  Do we want to just pick names for these in the PLs rather than
> using the standard syntax?  Implementing the standard syntax seemed
> to require three new (unreserved) keywords, changes to the catalogs
> to store the chosen relation names, and some way to tie the
> specified relation names in to the various PLs.

I think the only one for which we have a compulsion to follow someone
in this area would be PL/pgSQL, which probably needs to follow PL/SQL's
lead if there is one. Other than that I don't think we need to do
anything standard. We don't (yet) have PL/PSM which would need to have
the standard-mandated syntax.

> The way I have
> gone here just adds two new fields to the TriggerData structure and
> leaves it to each PL how to deal with that.  Failure to do anything
> in a PL just leaves it at the status quo with no real harm done --
> it just won't have the new delta relations available.

It seems to me that plpgsql support is mandatory, but other PLs can add
support as interested parties weigh in with patches. As with event
triggers, I don't feel the author of the main feature is responsible for
patching all PLs.


> + <varlistentry>
> + <term><literal>generate_deltas</literal> (<type>boolean</type>)</term>
> + <listitem>
> + <para>
> + Declare that a table generates delta relations when modified. This
> + allows <literal>AFTER</> triggers to reference the set of rows modified
> + by a statement. See
> + <xref linkend="sql-createtrigger"> for details.
> + </para>
> + </listitem>
> + </varlistentry>

Are you intentionally omitting the corresponding sql-createtrigger patch?

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Kevin Grittner
2014-06-18 21:30:06 UTC
Alvaro Herrera <***@2ndquadrant.com> wrote:
> Kevin Grittner wrote:

> TRUNCATE triggers shouldn't have delta relations, that's for
> sure, or it'd completely negate the point of TRUNCATE triggers.
> I don't think we have any other event, do we?  People who want
> delta relations for deleting all rows should be using DELETE, I
> think.

That sounds reasonable.  The only other issue was INSTEAD OF
triggers, but I don't think we need them there, either.

> I am not sure about providing delta relations in the FOR EACH ROW
> case.  For what cases are they useful?

In an accounting application for courts I have seen a case where
each receivable (asset) account had a contra (liability) account of
"uncollected receivables", because until and unless the Clerk of
Courts collected the judgment there was no obligation to pay the
money back out.  Any financial transaction had to be in a database
transaction, and not only did all the transaction detail need to
balance to zero, but any receivable detail row needed to be
immediately followed by a balancing contra account row (and vice
versa).  A FOR EACH ROW trigger, on seeing one of these rows, could
check for the required balancing row.  Now this would only be
solved by the standard feature if both rows were always inserted by
a single statement, which might not always have been the case; so
even with this example I am stretching a bit.  But it's close
enough to suggest that there might be legitimate uses.
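
A rough sketch of that check (table, column, and transition table
names all illustrative, assuming a new_rows transition table):

-- Rough sketch: each receivable detail row must be balanced by a
-- contra row among the rows inserted by the same statement.
CREATE FUNCTION check_contra() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.account_type = 'receivable' AND NOT EXISTS (
           SELECT 1 FROM new_rows c
           WHERE c.tx_id = NEW.tx_id
             AND c.account_type = 'contra'
             AND c.amount = -NEW.amount) THEN
        RAISE EXCEPTION 'receivable row in tx % has no balancing contra row',
            NEW.tx_id;
    END IF;
    RETURN NULL;
END
$$;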

And there is the fact that the standard seems to want this to be
supported.

> In all cases I think NEW and OLD are sufficient.  I didn't read
> the standard, but if it's saying that in FOR EACH ROW the
> new/deleted/changed row should be accessible by way of a delta
> relation, [...]

No, it says that you can specify *both* the row variables for the
row under consideration and the tables for the full set of rows
affected by the statement *for the same FOR EACH ROW trigger*.

> Now, if you only have delta relations in FOR EACH STATEMENT, then
> it seems to me you can optimize obtaining them only when such
> triggers are defined; this lets you do away with the reloption
> entirely, doesn't it?

That was intended to minimize the situations under which there was
a performance hit where the new delta relations were not needed in
an AFTER trigger.  If we only generate these for FOR EACH STATEMENT
triggers, that certainly reduces the need for that, but I'm not
sure it eliminates it -- especially if we're generating tuplestores
for the full rows rather than their TIDs.

It is already generating the tuplestores only if there is an AFTER
trigger for the type of statement being executed, but I agree that
it would be a very rare FOR EACH ROW trigger that actually used it.
Of course, that is one argument for the standard syntax -- we
could test whether any of the trigger definitions for that
operation on that table specified each transition table, and only
generate them if needed based on that.

> I noticed that GetCurrentFDWTuplestore() changed its identity
> without getting its comment updated.  The new name seems awfully
> generic, and I don't think it really describes what the function
> does.  I think it needs more renaming.

Good point.  Will fix that in the next version.

>> (1)  My first impulse was to capture this delta data in the form
>> of tuplestores of just TIDs, and to fetch the tuples themselves
>> from the heap on reference.  In earlier discussions others have
>> argued for using tuplestores of the actual rows themselves.
>
> Can you please supply pointers to such discussion?

That was in a matview-related discussion, so perhaps we should
ignore that for now and discuss what's best for triggers here.  If
matviews need something different, the rows could always be
materialized from the TIDs at a later point.

> I don't see any point in not just storing TIDs, but perhaps I'm
> missing something.

I think there was some question about performance, but I really
have a hard time seeing the performance being significantly worse
with a TID tuplestore for most reasonable uses; and I think the TID
tuplestore could be a lot faster if (for example) you have a table
with a large, TOASTed text or bytea column which is not referenced
(in selection criteria or the SET clause) and is not used in the
FOR EACH STATEMENT trigger.

I think there might also have been some question about visibility.
A TID tuplestore would need to use a different snapshot (like maybe
SnapshotAny) in the same query where it joined to other tables with
a normal MVCC snapshot.

>> (2)  Do we want to just pick names for these in the PLs rather
>> than using the standard syntax?  Implementing the standard
>> syntax seemed to require three new (unreserved) keywords,
>> changes to the catalogs to store the chosen relation names, and
>> some way to tie the specified relation names in to the various
>> PLs.
>
> I think the only one for which we have a compulsion to follow
> someone in this area would be PL/pgSQL,

... which currently hard-codes the row variable names rather than
using standard syntax to specify them.

> which probably needs to follow PL/SQL's lead if there is one.

I don't believe we can create PL/SQL trigger functions, can we?

> Other than that I don't think we need to do anything standard.
> We don't (yet) have PL/PSM which would need to have the
> standard-mandated syntax.

The stated reason for not specifying the row variable names in the
CREATE TRIGGER statement is the difficulty of tying that in to all
the supported PLs.  I couldn't see any reason for that to be easier
for the tables than the row variables.  Of course, if we intend to
support PL/PSM and we want to use standard CREATE TRIGGER syntax
for that (rather than having trigger functions in that language act
as though some particular name was specified) it seems better to me
that we implement it now.  But I don't really see why we would need
to do that to have a conforming PL/PSM implementation.

>> The way I have gone here just adds two new fields to the
>> TriggerData structure and leaves it to each PL how to deal with
>> that.  Failure to do anything in a PL just leaves it at the
>> status quo with no real harm done -- it just won't have the new
>> delta relations available.
>
> It seems to me that plpgsql support is mandatory, but other PLs
> can add support as interested parties weigh in with patches.  As
> with event triggers, I don't feel the author of the main feature
> is responsible for patching all PLs.

I like that point of view!

> Are you intentionally omitting the corresponding
> sql-createtrigger patch?

Yes, because that depends on decisions made about whether to use
standard syntax and (if not) a round of bikeshedding about what
fixed names to use in plpgsql.

Thanks for the feedback!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Robert Haas
2014-06-17 20:07:55 UTC
On Sat, Jun 14, 2014 at 7:56 PM, Kevin Grittner <***@ymail.com> wrote:
> I looked at the standard, and initially tried to implement the
> standard syntax for this; however, it appeared that the reasons
> given for not using standard syntax for the row variables also
> apply to the transition relations (the term used by the standard).
> There isn't an obvious way to tie that in to all the PLs we
> support. It could be done, but it seems like it would be intolerably
> ugly, and more fragile than what we have done so far.

I'm not too familiar with this area. Can you describe what the
standard syntax for the row variables is, as opposed to our syntax?
Also, what's the standard syntax for the transition relations?

> Some things which I *did* follow from the standard: these new
> relations are only allowed within AFTER triggers, but are available
> in both AFTER STATEMENT and AFTER ROW triggers. That is, an AFTER
> UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
> variables as well as the delta relations (under whatever names we
> pick). That probably won't be used very often, but I can imagine
> some cases where it might be useful. I expect that these will
> normally be used in FOR EACH STATEMENT triggers.

I'm concerned about the performance implications of capturing the
delta relations unconditionally. If a particular trigger actually
needs the delta relations, then the time spent capturing that
information is well spent; but if it doesn't, then it's a waste.
There are lots of people using FOR EACH STATEMENT triggers right now
who won't be happy if those start requiring O(n) additional temporary
storage, where n is the number of rows modified by the statement.
This is likely an even bigger issue for per-row triggers, of course,
where as you say, it probably won't be used often.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


David Fetter
2014-06-18 16:39:33 UTC
On Tue, Jun 17, 2014 at 04:07:55PM -0400, Robert Haas wrote:
> On Sat, Jun 14, 2014 at 7:56 PM, Kevin Grittner <***@ymail.com> wrote:
> > I looked at the standard, and initially tried to implement the
> > standard syntax for this; however, it appeared that the reasons
> > given for not using standard syntax for the row variables also
> > apply to the transition relations (the term used by the standard).
> > There isn't an obvious way to tie that in to all the PLs we
> > support. It could be done, but it seems like it would be intolerably
> > ugly, and more fragile than what we have done so far.
>
> I'm not too familiar with this area. Can you describe what the
> standard syntax for the row variables is, as opposed to our syntax?
> Also, what's the standard syntax for the transition relations?

The good:
- Generating the tuplestores. Yay!

The bad:
- Generating them exactly and only for AFTER triggers
- Requiring that the tuplestores both be generated or not at all.
There are real use cases described below where only one would be relevant.
- Generating the tuplestores unconditionally.

The ugly:
- Attaching tuplestore generation to tables rather than callers (triggers, DML, etc.)

The SQL standard says:

<trigger definition> ::=
CREATE TRIGGER <trigger name> <trigger action time> <trigger event>
ON <table name> [ REFERENCING <transition table or variable list> ]
<triggered action>

<trigger action time> ::=
BEFORE
| AFTER
| INSTEAD OF

<trigger event> ::=
INSERT
| DELETE
| UPDATE [ OF <trigger column list> ]

<trigger column list> ::=
<column name list>

<triggered action> ::=
[ FOR EACH { ROW | STATEMENT } ]
[ <triggered when clause> ]
<triggered SQL statement>

<triggered when clause> ::=
WHEN <left paren> <search condition> <right paren>

<triggered SQL statement> ::=
<SQL procedure statement>
| BEGIN ATOMIC { <SQL procedure statement> <semicolon> }... END

<transition table or variable list> ::=
<transition table or variable>...

<transition table or variable> ::=
OLD [ ROW ] [ AS ] <old transition variable name>
| NEW [ ROW ] [ AS ] <new transition variable name>
| OLD TABLE [ AS ] <old transition table name>
| NEW TABLE [ AS ] <new transition table name>

<old transition table name> ::=
<transition table name>
<new transition table name> ::=
<transition table name>

<transition table name> ::=
<identifier>

<old transition variable name> ::=
<correlation name>

<new transition variable name> ::=
<correlation name>

Sorry that was a little verbose, but what it does do is give us what we need at
trigger definition time. I'd say it's pilot error if a trigger
definition says "make these tuplestores" and the trigger body then
does nothing with them, which goes to Robert's point below re:
unconditional overhead.
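
Spelled out PostgreSQL-style, a definition under that grammar might
look like this (names illustrative):

-- Illustrative instantiation of the grammar above.
CREATE TRIGGER summarize
    AFTER UPDATE ON mytable
    REFERENCING OLD TABLE AS old_mytable
                NEW TABLE AS new_mytable
    FOR EACH STATEMENT
    EXECUTE PROCEDURE summarize_update();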

> > Some things which I *did* follow from the standard: these new
> > relations are only allowed within AFTER triggers, but are available
> > in both AFTER STATEMENT and AFTER ROW triggers. That is, an AFTER
> > UPDATE ... FOR EACH ROW trigger could use both the OLD and NEW row
> > variables as well as the delta relations (under whatever names we
> > pick). That probably won't be used very often, but I can imagine
> > some cases where it might be useful. I expect that these will
> > normally be used in FOR EACH STATEMENT triggers.
>
> I'm concerned about the performance implications of capturing the
> delta relations unconditionally.

Along that same line, we don't always need to capture both the before
tuplestores and the after ones. Two examples of this come to mind:

- BEFORE STATEMENT triggers accessing rows, where there is no after part to use, and
- DML (RETURNING BEFORE, e.g.; sketched below) which only touches one
of them. This applies both to extant use cases of RETURNING and to
planned ones.

I'm sure if I can think of two, there are more.
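
For the RETURNING BEFORE item, what I have in mind is something like
this (entirely hypothetical syntax; nothing like it exists yet):

-- Hypothetical syntax, not implemented: return the pre-update image
-- alongside the post-update one.
UPDATE accounts
   SET balance = balance - 100
 WHERE id = 1
 RETURNING BEFORE.balance AS old_balance, balance AS new_balance;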

In summary, I'd like to propose that the tuplestores be generated
separately in general and attached to callers. We can optimize this by
not generating redundant tuplestores.

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-06-18 22:30:34 UTC
David Fetter <***@fetter.org> wrote:
> Robert Haas wrote:
>> Kevin Grittner <***@ymail.com> wrote:

> The good:
>     - Generating the tuplestores.  Yay!

Thanks for that.  ;-)

> The bad:
>     - Generating them exactly and only for AFTER triggers

The standard only allows them for AFTER triggers, and I'm not sure
what the semantics would be for any others.

>     - Requiring that the tuplestores both be generated or not at
>       all.  There are real use cases described below where only
>       one would be relevant.

Yeah.

>     - Generating the tuplestores unconditionally.

Well, there are conditions.  Only when the reloption allows and
only if there is an AFTER trigger for the type of operation in
progress.

> The ugly:
>     - Attaching tuplestore generation to tables rather than
>       callers (triggers, DML, etc.)

I'm not sure what you're getting at here.  This patch is
specifically only concerned with generating delta relations for DML
AFTER triggers, although my hope is that it will be a basis for
delta relations used for other purposes.  This seems to me like the
right place to initially capture the data for incremental
maintenance of materialized views, and might be of value for other
purposes, too.

> [formal definition of standard CREATE TRIGGER statement]

> Sorry that was a little verbose, but what it does do is give us
> what we need at trigger definition time.  I'd say it's pilot
> error if a trigger definition says "make these tuplestores" and
> the trigger body then does nothing with them, which goes to
> Robert's point below re: unconditional overhead.

Yeah, the more I think about it (and discuss it) the more I'm
inclined to suffer the additional complexity of the standard syntax
for specifying transition relations in order to avoid unnecessary
overhead creating them when not needed.  I'm also leaning toward
just storing TIDs in the tuplestores, even though it requires mixed
snapshots in executing queries in the triggers.  It just seems like
there will otherwise be too much overhead in copying around big,
unreferenced columns for some situations.

> Along that same line, we don't always need to capture both the
> before tuplestores and the after ones.  Two examples of this come
> to mind:
>
> - BEFORE STATEMENT triggers accessing rows, where there is no
> after part to use,

Are you talking about an UPDATE for which the AFTER trigger(s) only
reference the before transition table, and don't look at AFTER?  If
so, using the standard syntax would cover that just fine.  If not,
can you elaborate?

> and
> - DML (RETURNING BEFORE, e.g.) which only touches one of them.
> This applies both to extant use cases of RETURNING and to planned
> ones.

I think that can be sorted out by a patch which implements that, if
these deltas even turn out to be the appropriate way to get that
data (which is not clear to me at this time).  Assuming standard
syntax, the first thing would be for the statement to somehow
communicate to the trigger layer the need to capture a tuplestore
it might otherwise not generate, and there would need to be a way
for the statement to access the needed tuplestore(s).  The
statement would also need to project the right set of columns.
None of that seems to me to be relevant to this patch.  If this
patch turns out to provide infrastructure that helps, great.  If
you have a specific suggestion about how to make the tuplestores
more accessible to other layers, I'm listening.

> In summary, I'd like to propose that the tuplestores be generated
> separately in general and attached to callers. We can optimize
> this by not generating redundant tuplestores.

Well, if we use the standard syntax for CREATE TRIGGER and store
the transition table names (if any) in pg_trigger, the code can
generate one relation if any AFTER triggers which are going to fire
need it.  I don't see any point in generating exactly the same
tuplestore contents for each trigger.  And I suspect that we can wire
in any other uses later when we have something to connect them to.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


David Fetter
2014-06-19 02:49:00 UTC
On Wed, Jun 18, 2014 at 03:30:34PM -0700, Kevin Grittner wrote:
> David Fetter <***@fetter.org> wrote:
> > Robert Haas wrote:
> >> Kevin Grittner <***@ymail.com> wrote:
>
> > The good:
> >     - Generating the tuplestores.  Yay!
>
> Thanks for that.  ;-)

Sorry, I just can't resist references to Spaghetti Westerns.
https://en.wikipedia.org/wiki/The_Good,_the_Bad_and_the_Ugly

> > The bad:
> >     - Generating them exactly and only for AFTER triggers
>
> The standard only allows them for AFTER triggers, and I'm not sure
> what the semantics would be for any others.

Ah, so here's where we differ. You're looking at deltas, a very nice
capability to have. I'm looking at the before and after tuplestores
as components of which deltas, among many other things, could be
composed.

> >     - Requiring that the tuplestores both be generated or not at
> >       all.  There are real use cases described below where only
> >       one would be relevant.
>
> Yeah.
>
> >     - Generating the tuplestores unconditionally.
>
> Well, there are conditions.  Only when the reloption allows and
> only if there is an AFTER trigger for the type of operation in
> progress.

For deltas, this is just the thing.

I'm vaguely picturing the following as infrastructure:

- Instead of modifying Rel, we modify Query to contain two more bools,
defaulting to false: hasBeforeTuplestore and hasAfterTuplestore.
- Each use case we implement would set 0 or more of these to true.
For the delta use case, appropriate trigger definitions would set
both.

This is vague because I haven't really gotten hacking on it, just
exploring what I hope are the relevant parts of the code.

> > The ugly:
> >     - Attaching tuplestore generation to tables rather than
> >       callers (triggers, DML, etc.)
>
> I'm not sure what you're getting at here.  This patch is
> specifically only concerned with generating delta relations for DML
> AFTER triggers, although my hope is that it will be a basis for
> delta relations used for other purposes.  This seems to me like the
> right place to initially capture the data for incremental
> maintenance of materialized views, and might be of value for other
> purposes, too.

Hrm. I don't really see this stuff as table properties. The
materialized view case is an obvious example where the matview, not
the relations underneath, wants this information. The relations
underneath may have their own concerns, but it's the matview whose
existence should ensure that the tuplestores are being generated.

Once the last depending-on-one-of-the-tuplestores thing is gone, and
this could simply be the end of a RETURNING query, the tuplestores go
away.

> > [formal definition of standard CREATE TRIGGER statement]
>
> > Sorry that was a little verbose, but what it does do is give us
> > what we need at trigger definition time.  I'd say it's pilot
> > error if a trigger definition says "make these tuplestores" and
> > the trigger body then does nothing with them, which goes to
> > Robert's point below re: unconditional overhead.
>
> Yeah, the more I think about it (and discuss it) the more I'm
> inclined to suffer the additional complexity of the standard syntax
> for specifying transition relations in order to avoid unnecessary
> overhead creating them when not needed.  I'm also leaning toward
> just storing TIDs in the tuplestores, even though it requires mixed
> snapshots in executing queries in the triggers.

So in this case one tuplestore with two TIDs, either of which might be
NULL?

> It just seems like there will otherwise be too much overhead in copying
> around big, unreferenced columns for some situations.

Yeah, it'd be nice to have the minimal part be as slim as possible.

> > Along that same line, we don't always need to capture both the
> > before tuplestores and the after ones.  Two examples of this come
> > to mind:
> >
> > - BEFORE STATEMENT triggers accessing rows, where there is no
> > after part to use,
>
> Are you talking about an UPDATE for which the AFTER trigger(s) only
> reference the before transition table, and don't look at AFTER?  If
> so, using the standard syntax would cover that just fine.  If not,
> can you elaborate?

Sorry I was unclear. I was looking at one of the many things that
having these tuplestores around could enable. As things stand now,
there is
no access of any kind to rows with any per-statement trigger, modulo
user-space hacks like this one:

http://people.planetpostgresql.org/dfetter/index.php?/archives/71-Querying-Rows-in-Statement-Triggers.html

Having the "before" tuplestore available to a BEFORE STATEMENT trigger
would make it possible to do things with the before transition table
that are fragile and hacky now.
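
Roughly, the hack works like this (a minimal sketch, not the linked
code itself):

-- Minimal sketch of the user-space workaround: a statement-level
-- trigger (re)creates a temp table, a row-level trigger copies each
-- old row into it, and any later trigger can then query it.
CREATE FUNCTION reset_capture() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    CREATE TEMP TABLE IF NOT EXISTS captured_rows (LIKE mytable);
    TRUNCATE captured_rows;
    RETURN NULL;
END
$$;

CREATE FUNCTION capture_row() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO captured_rows SELECT (OLD).*;
    RETURN OLD;
END
$$;

CREATE TRIGGER capture_init BEFORE UPDATE ON mytable
    FOR EACH STATEMENT EXECUTE PROCEDURE reset_capture();
CREATE TRIGGER capture_each BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE capture_row();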

> > and
> > - DML (RETURNING BEFORE, e.g.) which only touches one of them.
> > This applies both to extant use cases of RETURNING and to planned
> > ones.
>
> I think that can be sorted out by a patch which implements that, if
> these deltas even turn out to be the appropriate way to get that
> data (which is not clear to me at this time).

Again, I see the tuplestores as infrastructure for both deltas and many
other things, so long as they're attached to the right objects. In my
opinion, the right objects would include materialized views, triggers,
and certain very specific kinds of DML of which all the RETURNING ones
are one example. They would not include the underlying tables.

> Assuming standard
> syntax, the first thing would be for the statement to somehow
> communicate to the trigger layer the need to capture a tuplestore
> it might otherwise not generate, and there would need to be a way
> for the statement to access the needed tuplestore(s).

Right. Hence my proposal to make the existence of the tuplestores
part of Query, writeable by the types of triggers which specify that
they'll be needed.

> The statement would also need to project the right set of columns.
> None of that seems to me to be relevant to this patch.  If this
> patch turns out to provide infrastructure that helps, great.  If you
> have a specific suggestion about how to make the tuplestores more
> accessible to other layers, I'm listening.

See above :)

> > In summary, I'd like to propose that the tuplestores be generated
> > separately in general and attached to callers. We can optimize
> > this by not generating redundant tuplestores.
>
> Well, if we use the standard syntax for CREATE TRIGGER and store
> the transition table names (if any) in pg_trigger, the code can
> generate one relation if any AFTER triggers which are going to fire
> need it.  I don't see any point in generating exactly the same
> tuplestore contents for each trigger.  And suspect that we can wire
> in any other uses later when we have something to connect them to.

Yes. I just don't think that Rel is the place to connect them.

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-06-19 04:14:06 UTC
David Fetter <***@fetter.org> wrote:
> On Wed, Jun 18, 2014 at 03:30:34PM -0700, Kevin Grittner wrote:

>> the more I think about it (and discuss it) the more I'm
>> inclined to suffer the additional complexity of the standard
>> syntax for specifying transition relations in order to avoid
>> unnecessary overhead creating them when not needed.  I'm also
>> leaning toward just storing TIDs in the tuplestores, even though
>> it requires mixed snapshots in executing queries in the
>> triggers.
>
> So in this case one tuplestore with two TIDs, either of which
> might be NULL?

No, one or two tuplestores, depending on need, each with TIDs of
either the "before" set or the "after" set of all tuples affected
by the DML statement, however many that may be.  More or less like
this first draft patch, except with TIDs instead of copies of the
rows, and with better selectivity about when the tuplestores are
generated.

> Having the "before" tuplestore available to a BEFORE STATEMENT
> trigger would make it possible to do things with the before
> transition table that are fragile and hacky now.

How do you propose to have an accurate "before" tuplestore of
affected rows before the scan runs and before the BEFORE ... FOR
EACH ROW triggers fire?  That would be particularly interesting to
try to generate if the scan involves evaluating any VOLATILE
functions.

> Again, I see the tuplestores as infrastructure for both deltas and
> many other things, so long as they're attached to the right
> objects.  In my opinion, the right objects would include
> materialized views, triggers, and certain very specific kinds of
> DML of which all the RETURNING ones are one example.  They would
> not include the underlying tables.

Right now I've presented something for capturing the information
and allowing it to be accessed from triggers.  I don't think the
means of capturing it precludes passing it along to other
consumers.  I would like to get this part working before trying to
wire it up to anything other than triggers.  The best way to kill
an effort like this is to allow scope creep.  Now, if you think
that something fundamentally belongs at another level, that's
something to address -- but the point where we capture data to pass
to triggers seems like a natural and sound place to capture it for
other purposes.  And since almost all the code for this patch is in
trigger.c, this seems like I'm in the right place for a trigger
feature.

>> standard syntax, the first thing would be for the statement to
>> somehow communicate to the trigger layer the need to capture a
>> tuplestore it might otherwise not generate, and there would need
>> to be a way for the statement to access the needed
>> tuplestore(s).
>
> Right.  Hence my proposal to make the existence of the
> tuplestores part of Query, writeable by the types of triggers
> which specify that they'll be needed.

> I just don't think that Rel is the place to connect them.

I don't know what you mean by that.  I've already said that I now
think we should use the standard CREATE TRIGGER syntax to specify
the transition tables, and that if we do that we don't need the
reloption that's in the patch.  If you ignore the 19 lines of new
code to add that reloption, absolutely 100% of the code changes in
the patch so far are in trigger.c and trigger.h.  That reloption
was never anything I would consider as *connecting* the tuplestores
to the Rel anyway -- it was simply an attempt to minimize
unnecessary work.  No matter how I try, I'm not seeing what you
mean by references to "connecting to Rel".

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Kevin Grittner
2014-06-19 15:57:18 UTC
Kevin Grittner <***@ymail.com> wrote:

> I've already said that I now think we should use the standard
> CREATE TRIGGER syntax to specify the transition tables, and that
> if we do that we don't need the reloption that's in the patch.
> If you ignore the 19 lines of new code to add that reloption,
> absolutely 100% of the code changes in the patch so far are in
> trigger.c and trigger.h.

Although nobody has actually framed their feedback as a review, I
feel that I have enough to work with to throw the patch into
Waiting on Author status.  Since I started with the assumption that
I was going to be using standard syntax and got a ways into that
before convincing myself it was a bad idea, I should have a new
version of the patch working that way in a couple days.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Kevin Grittner
2014-06-21 18:06:26 UTC
Kevin Grittner <***@ymail.com> wrote:
> Kevin Grittner <***@ymail.com> wrote:
>
>> I've already said that I now think we should use the standard
>> CREATE TRIGGER syntax to specify the transition tables, and that
>> if we do that we don't need the reloption that's in the patch.
>> If you ignore the 19 lines of new code to add that reloption,
>> absolutely 100% of the code changes in the patch so far are in
>> trigger.c and trigger.h.
>
> Although nobody has actually framed their feedback as a review, I
> feel that I have enough to work with to throw the patch into
> Waiting on Author status.  Since I started with the assumption
> that I was going to be using standard syntax and got a ways into
> that before convincing myself it was a bad idea, I should have a
> new version of the patch working that way in a couple days.

Here is v2.

This implements the standard syntax for transition tables, but
leaves transition row values alone.  (They remain as OLD and NEW in
plpgsql, for example.)  I took a first pass at the documentation
changes; I hope they are fairly readable.  I didn't create new
regression tests yet, since those will be a lot more interesting
when there is a PL to use with this.  That does mean there's not a
lot to test yet.  You can create triggers with transition tables
specified, and they should show correctly in psql and behave
correctly in pg_dump.

I think the new columns in pg_trigger should be null capable (which
currently means moving them to the variable part of the record).
This seems like it is better done once plpgsql support is there and
we have regression tests, since techniques for that seem a little
fragile.

I didn't change the tuplestores to TID because it seemed to me that
it would preclude using transition relations with FDW triggers, and
it seemed bad not to support that.  Does anyone see a way around
that, or feel that it's OK to not support FDW triggers in this
regard?

Does this look good otherwise, as far as it goes?

Unless people want the tuplestores changed to hold TIDs of the
tuples rather than the tuples themselves, or there are other
problems with the generation of the tuplestores, I think this is
done as far as capturing the data and passing it to the triggers.
I don't think any of the existing execution node types quite fit
this, although there are some which are similar enough to crib most
of the code from.  Have I missed a node type that can be bent to
this purpose?

What I'm looking for in this CF is to confirm the approach for
capturing the data, and get any suggestions people want to offer on
the PL implementations to use it -- at which point I think it can
be Returned with Feedback to be finished after this CF.  Now is a
great time to tell me what I've done wrong in the work so far, or
make suggestions for the next phase.  :-)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
David Fetter
2014-06-27 20:00:30 UTC
On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
> Kevin Grittner <***@ymail.com> wrote:
> > Kevin Grittner <***@ymail.com> wrote:
> >
> >> I've already said that I now think we should use the standard
> >> CREATE TRIGGER syntax to specify the transition tables, and that
> >> if we do that we don't need the reloption that's in the patch.
> >> If you ignore the 19 lines of new code to add that reloption,
> >> absolutely 100% of the code changes in the patch so far are in
> >> trigger.c and trigger.h.
> >
> > Although nobody has actually framed their feedback as a review, I
> > feel that I have enough to work with to throw the patch into
> > Waiting on Author status.  Since I started with the assumption
> > that I was going to be using standard syntax and got a ways into
> > that before convincing myself it was a bad idea, I should have a
> > new version of the patch working that way in a couple days.
>
> Here is v2.

Thanks!

I've taken the liberty of making an extension that uses this.
Preliminary tests indicate a 10x performance improvement over the
user-space hack I did that's similar in functionality.

Please find attached the extension, etc., which I've published to
https://github.com/davidfetter/postgresql_projects/tree/test_delta_v2

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Kevin Grittner
2014-06-28 14:35:10 UTC
David Fetter <***@fetter.org> wrote:
> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:

>> Here is v2.

> I've taken the liberty of making an extension that uses this.
> Preliminary tests indicate a 10x performance improvement over the
> user-space hack I did that's similar in functionality.

Wow, this goes well beyond what I expected for a review!  Thanks!

As I said in an earlier post, I think that this is best committed
as a series of patches, one for the core portion and one for each
PL which implements the ability to use the transition (delta)
relations in AFTER triggers.  Your extension covers the C trigger
angle, and it seems to me to be worth committing to contrib as a
sample of how to use this feature in C.

It is very encouraging that you were able to use this without
touching what I did in core, and that it runs 10x faster than the
alternatives before the patch.

Because this review advances the patch so far, it may be feasible
to get it committed in this CF.  I'll see what is needed to get
there and maybe have a patch toward that end in a few days.  The
minimum that would require, IMV, is a plpgsql implementation,
moving the new pg_trigger columns to the variable portion of the
record so they can be null capable, more docs, and regression
tests.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


David Fetter
2014-06-29 22:14:16 UTC
On Sat, Jun 28, 2014 at 07:35:10AM -0700, Kevin Grittner wrote:
> David Fetter <***@fetter.org> wrote:
> > On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
>
> >> Here is v2.
>
> > I've taken the liberty of making an extension that uses this.
> > Preliminary tests indicate a 10x performance improvement over the
> > user-space hack I did that's similar in functionality.
>
> Wow, this goes well beyond what I expected for a review!  Thanks!

It was the minimum I could come up with to test whether the patch
worked.

> As I said in an earlier post, I think that this is best committed
> as a series of patches, one for the core portion and one for each
> PL which implements the ability to use the transition (delta)
> relations in AFTER triggers.

Right. I'm still holding out hope of having the transition relations
available in some more general way, but that seems more like a
refactoring job than anything fundamental.

> Your extension covers the C trigger angle, and it seems to me to be
> worth committing to contrib as a sample of how to use this feature
> in C.

It's missing a few pieces like surfacing transition table names. I'll
work on those. Also, it's not clear to me how to access the pre- and
post- relations at the same time, this being necessary for many use
cases. I guess I need to think more about how that would be done.
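
For instance, in SQL terms the kind of access I mean is (illustrative
names):

-- Illustrative: per-row before/after comparison by joining the two
-- transition relations on the primary key.
SELECT o.id, o.balance AS old_balance, n.balance AS new_balance
  FROM old_rows o
  JOIN new_rows n USING (id);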

> It is very encouraging that you were able to use this without
> touching what I did in core, and that it runs 10x faster than the
> alternatives before the patch.

The alternative included was pretty inefficient, so there's that.

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-06-30 16:22:21 UTC
David Fetter <***@fetter.org> wrote:

> It's missing a few pieces like surfacing transition table names.
> I'll work on those.  Also, it's not clear to me how to access the
> pre- and post- relations at the same time, this being necessary
> for many use cases.  I guess I need to think more about how that
> would be done.

If you're going to do any work on the C extension, please start
from the attached.  I renamed it to something which seemed more
meaningful (to me, at least), and cleaned up some cosmetic issues.
The substance is the same.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Robert Haas
2014-06-30 15:03:06 UTC
On Sat, Jun 28, 2014 at 10:35 AM, Kevin Grittner <***@ymail.com> wrote:
> David Fetter <***@fetter.org> wrote:
>> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
>
>>> Here is v2.
>
>> I've taken the liberty of making an extension that uses this.
>> Preliminary tests indicate a 10x performance improvement over the
>> user-space hack I did that's similar in functionality.
>
> Wow, this goes well beyond what I expected for a review! Thanks!
>
> As I said in an earlier post, I think that this is best committed
> as a series of patches, one for the core portion and one for each
> PL which implements the ability to use the transition (delta)
> relations in AFTER triggers. Your extension covers the C trigger
> angle, and it seems to me to be worth committing to contrib as a
> sample of how to use this feature in C.
>
> It is very encouraging that you were able to use this without
> touching what I did in core, and that it runs 10x faster than the
> alternatives before the patch.
>
> Because this review advances the patch so far, it may be feasible
> to get it committed in this CF. I'll see what is needed to get
> there and maybe have a patch toward that end in a few days. The
> minimum that would require, IMV, is a plpgsql implementation,
> moving the new pg_trigger columns to the variable portion of the
> record so they can be null capable, more docs, and regression
> tests.

Not to rain on your parade, but this patch hasn't really had a serious
code review yet. Performance testing is good, but it's not the same
thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


David Fetter
2014-06-30 15:22:07 UTC
On Mon, Jun 30, 2014 at 11:03:06AM -0400, Robert Haas wrote:
> On Sat, Jun 28, 2014 at 10:35 AM, Kevin Grittner <***@ymail.com> wrote:
> > David Fetter <***@fetter.org> wrote:
> >> On Sat, Jun 21, 2014 at 11:06:26AM -0700, Kevin Grittner wrote:
> >
> >>> Here is v2.
> >
> >> I've taken the liberty of making an extension that uses this.
> >> Preliminary tests indicate a 10x performance improvement over the
> >> user-space hack I did that's similar in functionality.
> >
> > Wow, this goes well beyond what I expected for a review! Thanks!
> >
> > As I said in an earlier post, I think that this is best committed
> > as a series of patches, one for the core portion and one for each
> > PL which implements the ability to use the transition (delta)
> > relations in AFTER triggers. Your extension covers the C trigger
> > angle, and it seems to me to be worth committing to contrib as a
> > sample of how to use this feature in C.
> >
> > It is very encouraging that you were able to use this without
> > touching what I did in core, and that it runs 10x faster than the
> > alternatives before the patch.
> >
> > Because this review advances the patch so far, it may be feasible
> > to get it committed in this CF. I'll see what is needed to get
> > there and maybe have a patch toward that end in a few days. The
> > minimum that would require, IMV, is a plpgsql implementation,
> > moving the new pg_trigger columns to the variable portion of the
> > record so they can be null capable, more docs, and regression
> > tests.
>
> Not to rain on your parade, but this patch hasn't really had a serious
> code review yet. Performance testing is good, but it's not the same
> thing.

Happy to help with that, too.

What I wanted to start with is whether there was even rudimentary
functionality, which I established by writing that extension. I
happened to notice, basically as a sanity check, that doing this via
tuplestores turned out, at least in one case, to be quicker than doing
it in user space with temp tables.

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-07-05 14:38:20 UTC
Kevin Grittner <***@ymail.com> wrote:

> Because this review advances the patch so far, it may be feasible
> to get it committed in this CF.  I'll see what is needed to get
> there and maybe have a patch toward that end in a few days.

It appears that I need to create a new execution node and a way for
SPI calls to use it.  That seems beyond the scope of what is fair
to include in this CF, even if I got something put together in the
next couple days.

FWIW, I think that once I have the other pieces, what I initially
posted is committable as the first patch of a series.  A second
patch would add the new execution node and code to allow SPI calls
to use it.  The patch that David submitted, as modified by myself
and with further refinements that David is working on would be the
third patch.  An implementation in plpgsql, would be the fourth.
Other PLs would be left for people more familiar with those
languages to implement.

What I was hoping for in this CF was a review to confirm the
approach before proceeding to build on this foundation.  David
found nothing to complain about, and went to the trouble of writing
code to confirm that it was actually generating complete results
which were usable.  Robert doesn't feel this constitutes "a serious
code review".  I'm not aware of any changes which are needed to the
pending patch once the follow-on patches are complete.  I'm moving
this to Needs Review status.  People will have another chance to
review this patch when the other code is available, but if we want
incremental maintenance of materialized views in 9.5, delaying
review of what I have submitted in this CF until the next CF will
put that goal in jeopardy.

The one thing I don't feel great about is that it's using
tuplestores of the actual tuples rather than just their TIDs; but
it seems to me that we need the full tuple to support triggers on
FDWs, so the TID approach would be an optimization for a subset of
the cases, and would probably be more appropriate, if we do it at
all, in a follow-on patch after this is working (although I think
it is an optimization we should get into 9.5 if we are going to do
it).  If you disagree with that assessment, now would be a good
time to explain your reasoning.

A minor point is psql tab-completion for the REFERENCING clause.
It seems to me that's not critical, but I might slip it in anyway
before commit.

I took a look at whether I could avoid making OLD and NEW
non-reserved keywords, but I didn't see how to do that without
making FOR at least partially reserved.  If someone sees a way to
do this without creating three new unreserved keywords
(REFERENCING, OLD, and NEW) I'm all ears.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Marti Raudsepp
2014-07-28 14:34:17 UTC
On Sat, Jul 5, 2014 at 5:38 PM, Kevin Grittner <***@ymail.com> wrote:
> it seems to me that we need the full tuple to support triggers on
> FDWs, so the TID approach would be an optimization for a subset of
> the cases, and would probably be more appropriate, if we do it at
> all, in a follow-on patch
> If you disagree with that assessment, now would be a good
> time to explain your reasoning.

Maybe I just have a limited imagination because I've never found a use
for FDWs personally. But recording changes from a trigger on an FDW
table doesn't seem that useful, since you can only capture changes
done by the local node. I expect that in many situations there are
multiple writers accessing the same underlying remote table. Certainly
it can't guarantee the consistency of materialized views.

> I took a look at whether I could avoid making OLD and NEW
> non-reserved keywords, but I didn't see how to do that without
> making FOR at least partially reserved. If someone sees a way to
> do this without creating three new unreserved keywords
> (REFERENCING, OLD, and NEW) I'm all ears.

Sorry, I know I am very late to make this point, so feel free to ignore this.

I'm not a fan of the SQL standard syntax for this feature. One nice
thing about PostgreSQL's triggers is that you can declare the trigger
function once and re-use it on many tables. It would make more sense
if the same function declaration could say what variable/relation
names it wants to use. They're more like function argument names, not
some metadata about a table-function relationship.

Putting these in the CREATE TRIGGER command means you have to repeat
them for each table you want to apply the trigger to. It introduces
the possibility of making more mistakes without any gain in
flexibility.

But then again, I understand that there's value in supporting standard syntax.

Regards,
Marti


Kevin Grittner
2014-07-28 15:24:27 UTC
Permalink
Marti Raudsepp <***@juffo.org> wrote:
> On Sat, Jul 5, 2014 at 5:38 PM, Kevin Grittner <***@ymail.com> wrote:
>> it seems to me that we need the full tuple to support triggers on
>> FDWs, so the TID approach would be an optimization for a subset of
>> the cases, and would probably be more appropriate, if we do it at
>> all, in a follow-on patch
>> If you disagree with that assessment, now would be a good
>> time to explain your reasoning.
>
> Maybe I just have a limited imagination because I've never found a use
> for FDWs personally. But recording changes from a trigger on an FDW
> table doesn't seem that useful,

It's a matter of whether AFTER triggers on an FDW can see the
modified data in table form.  We just recently added the ability
for FDW triggers to see the data in *row* form; it seemed odd to
immediately follow that with a new way to get at the data and
cripple FDW triggers for it.

> since you can only capture changes
> done by the local node. I expect that in many situations there are
> multiple writers accessing the same underlying remote table. Certainly
> it can't guarantee the consistency of materialized views.

While I expect the techniques used here to help with development of
incremental maintenance of materialized views, this seems like a
useful feature in its own right.  I think the question is what the
basis would be for supporting access to the changes in row format
but not table format for FDWs, if we're supporting both formats for
other tables.

>> I took a look at whether I could avoid making OLD and NEW
>> non-reserved keywords, but I didn't see how to do that without
>> making FOR at least partially reserved.  If someone sees a way to
>> do this without creating three new unreserved keywords
>> (REFERENCING, OLD, and NEW) I'm all ears.
>
> Sorry, I know I am very late to make this point, so feel free to ignore this.
>
> I'm not a fan of the SQL standard syntax for this feature. One nice
> thing about PostgreSQL's triggers is that you can declare the trigger
> function once and re-use it on many tables. It would make more sense
> if the same function declaration could say what variable/relation
> names it wants to use. They're more like function argument names, not
> some metadata about a table-function relationship.
>
> Putting these in the CREATE TRIGGER command means you have to repeat
> them for each table you want to apply the trigger to. It introduces
> the possibility of making more mistakes without any gain in
> flexibility.
>
> But then again, I understand that there's value in supporting standard
> syntax.

Do you have some other suggestion?  Keep in mind that it must allow
the code which will *generate* the transition tables to know
whether any of the attached triggers use a given transition table
for the specific operation, regardless of the language of the
trigger function.  Using the standard syntax has the advantage of
making it pretty easy to put the information exactly where it is
needed for easy access at run time.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Marti Raudsepp
2014-07-28 17:27:01 UTC
Permalink
On Mon, Jul 28, 2014 at 6:24 PM, Kevin Grittner <***@ymail.com> wrote:
> Do you have some other suggestion? Keep in mind that it must allow
> the code which will *generate* the transition tables to know
> whether any of the attached triggers use a given transition table
> for the specific operation, regardless of the language of the
> trigger function.

You will need to access the pg_proc record of the trigger function
anyway, so it's just a matter of coming up with syntax that makes
sense, right?

What I had in mind was that we could re-use function argument
declaration syntax. For instance, use the "argmode" specifier to
declare OLD and NEW. Shouldn't cause grammar conflicts because the
current OUT and INOUT aren't reserved keywords.

We could also re-use the refcursor type, which already has bindings in
some PLs, if that's not too much overhead. That would make the
behavior straightforward without introducing new constructs, plus you
can pass them around between functions. Though admittedly it's
annoying to integrate cursor results into queries.

Something like:

CREATE FUNCTION trig(OLD old_rows refcursor, NEW new_rows refcursor)
RETURNS trigger LANGUAGE plpgsql AS '...';

Or maybe if the grammar allows, we could spell out "NEW TABLE", "OLD
TABLE", but that's redundant since you can already deduce that from
the refcursor type.

It could also be extended for different types, like tid[], and maybe
"record" for the FOR EACH ROW variant (dunno if that can be made to
work).
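
Inside the trigger function, the PL would then consume the transition
rows like any other cursor; a rough plpgsql sketch of what I have in
mind, assuming new_rows arrives as an already-open cursor:

DECLARE
  r record;
BEGIN
  LOOP
    FETCH new_rows INTO r;
    EXIT WHEN NOT FOUND;
    -- do something with r, e.g. aggregate it or log it
  END LOOP;
  RETURN NULL;
END;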

Regards,
Marti


Pavel Stehule
2014-07-29 06:49:50 UTC
Permalink
2014-07-28 19:27 GMT+02:00 Marti Raudsepp <***@juffo.org>:

> On Mon, Jul 28, 2014 at 6:24 PM, Kevin Grittner <***@ymail.com> wrote:
> > Do you have some other suggestion? Keep in mind that it must allow
> > the code which will *generate* the transition tables to know
> > whether any of the attached triggers use a given transition table
> > for the specific operation, regardless of the language of the
> > trigger function.
>
> You will need to access the pg_proc record of the trigger function
> anyway, so it's just a matter of coming up with syntax that makes
> sense, right?
>
> What I had in mind was that we could re-use function argument
> declaration syntax. For instance, use the "argmode" specifier to
> declare OLD and NEW. Shouldn't cause grammar conflicts because the
> current OUT and INOUT aren't reserved keywords.
>
> We could also re-use the refcursor type, which already has bindings in
> some PLs, if that's not too much overhead. That would make the
> behavior straightforward without introducing new constructs, plus you
> can pass them around between functions. Though admittedly it's
> annoying to integrate cursor results into queries.
>
> Something like:
>
> CREATE FUNCTION trig(OLD old_rows refcursor, NEW new_rows refcursor)
> RETURNS trigger LANGUAGE plpgsql AS '...';
>

I dislike this proposal - it is strongly inconsistent with current trigger
design

Regards

Pavel


>
> Or maybe if the grammar allows, we could spell out "NEW TABLE", "OLD
> TABLE", but that's redundant since you can already deduce that from
> the refcursor type.
>
> It could also be extended for different types, like tid[], and maybe
> "record" for the FOR EACH ROW variant (dunno if that can be made to
> work).
>
> Regards,
> Marti
Marti Raudsepp
2014-07-29 07:41:33 UTC
Permalink
On Tue, Jul 29, 2014 at 9:49 AM, Pavel Stehule <***@gmail.com> wrote:
> I dislike this proposal - it is strongly inconsistent with current trigger
> design

The real point I was trying to convey (in my previous email) is that
these declarations should be part of the trigger *function* not the
function-to-table relationship. CREATE TRIGGER shouldn't be in the
business of declaring new local variables for the trigger function.
Whether we define new syntax for that or re-use the argument list is
secondary.

But the inconsistency is deliberate, I find the current trigger API
horrible. Magic variables... Text-only TG_ARGV for arguments...
RETURNS trigger... No way to invoke trigger functions directly for
testing.

By not imitating past mistakes, maybe we can eventually arrive at a
language that makes sense.

Regards,
Marti


Pavel Stehule
2014-07-29 07:59:37 UTC
Permalink
2014-07-29 9:41 GMT+02:00 Marti Raudsepp <***@juffo.org>:

> On Tue, Jul 29, 2014 at 9:49 AM, Pavel Stehule <***@gmail.com>
> wrote:
> > I dislike this proposal - it is strongly inconsistent with current
> trigger
> > design
>
> The real point I was trying to convey (in my previous email) is that
> these declarations should be part of the trigger *function* not the
> function-to-table relationship. CREATE TRIGGER shouldn't be in the
> business of declaring new local variables for the trigger function.
> Whether we define new syntax for that or re-use the argument list is
> secondary.
>
> But the inconsistency is deliberate, I find the current trigger API
> horrible. Magic variables... Text-only TG_ARGV for arguments...
> RETURNS trigger...
>


I don't like the RETURNS TRIGGER notation too much either - RETURNS
void or RETURNS record would be much more natural.

My dream is something like CREATE OR REPLACE TRIGGER FUNCTION trg()
RETURNS RECORD

but it is only syntactic sugar - and I don't see enough benefit to
justify implementing it.


> No way to invoke trigger functions directly for
> testing.
>

That is a horrible idea. I can agree it is a limitation - but not a
severe one - there is a simple workaround of moving the code from the
trigger into an auxiliary function. But the current design is simple
and robust, with few possible user errors.


>
> By not imitating past mistakes, maybe we can eventually arrive at a
> language that makes sense.
>

Sorry, I disagree. Whether this API is bad enough to justify a
redesign is subjective. As more objective arguments: there is no
performance issue and no security issue. I think the current design
makes sense, so I don't see a reason to change it, or why we should
have two APIs. As a last argument: if we change something, then we
should use ANSI SQL syntax everywhere it is possible (when we don't
gain any new special functionality).

Regards

Pavel


>
> Regards,
> Marti
>
Kevin Grittner
2014-07-31 21:27:00 UTC
Permalink
Marti Raudsepp <***@juffo.org> wrote:

> The real point I was trying to convey (in my previous email) is
> that these declarations should be part of the trigger *function*
> not the function-to-table relationship. CREATE TRIGGER shouldn't
> be in the business of declaring new local variables for the
> trigger function. Whether we define new syntax for that or re-use
> the argument list is secondary.

I think the reason the standard includes it in the trigger
definition is that it allows the trigger code to be specified in
the CREATE TRIGGER statement; and in fact, PostgreSQL is the first
database product I have used which requires a trigger function to
be created first.

<triggered action> ::=
  [ FOR EACH { ROW | STATEMENT } ]
      [ WHEN <left paren> <search condition> <right paren> ]
      <triggered SQL statement>

<triggered SQL statement> ::=
    <SQL procedure statement>
  | BEGIN ATOMIC { <SQL procedure statement> <semicolon> }... END

If we ever support this part of the standard, we need to be able to
specify these in the CREATE TRIGGER statement (or stay with the
concept of reserving "special names" for these relations).  So if
we use non-standard syntax now, we are likely to be looking at an
ugly hybrid of techniques in the future.

That said, if we want to have a way to specify this for a function
(for now), we could use the CREATE FUNCTION's WITH clause.  I
assume that a mismatch between the name specified there and the
name which is used in the function body would be a run-time error
when the trigger function is fired, not an error at the time that
CREATE FUNCTION is run, since we only ensure valid syntax at CREATE
FUNCTION time; we don't resolve any table names or column names at
that time.  For example, on a freshly-created database with no
tables created:

test=# create function xxx() returns trigger language plpgsql
test-# as $$ begin update adsf set qwerty = 'xxx'; end; $$;
CREATE FUNCTION

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Amit Khandekar
2014-08-07 12:11:21 UTC
Permalink
On 21 June 2014 23:36, Kevin Grittner <***@ymail.com> wrote:
> Kevin Grittner <***@ymail.com> wrote:
> I didn't change the tuplestores to TID because it seemed to me that
> it would preclude using transition relations with FDW triggers, and
> it seemed bad not to support that. Does anyone see a way around
> that, or feel that it's OK to not support FDW triggers in this
> regard?

I think it is ok to use tuplestores for now, but as mentioned by you
somewhere else in the thread, later on we should change this to using
tids as an optimization.

>
> Does this look good otherwise, as far as it goes?

I have not yet gone through the patch extensively, but before that,
just a few quick comments:

I see that the tuplestores for transition tables are stored per query
depth.  If the DML involves a table that has multiple child tables, it
seems as though a single tuplestore would be shared among all these
tables.  That means that if we define such triggers using the
transition table clause for all the child tables, then the trigger
function for a child table will see tuples from the other child tables
as well.  Is that true?  If it is, it does not make sense.
For the fdw tuplestore, this issue does not arise because the DML
won't involve multiple target tables, I suppose.

-----------

I tried to google some SQLs that use REFERENCING clause with triggers.
It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
can refer to a transition table, just like how it refers to NEW and
OLD row variables.

For e.g. :
CREATE TRIGGER notify_dept
AFTER UPDATE ON weather
REFERENCING NEW_TABLE AS N_TABLE
NEW AS N_ROW
FOR EACH ROW
WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
BEGIN
notify_department(N_ROW.temperature, N_ROW.city);
END

Above, it is used to get an aggregate value of all the changed rows. I think
we do not currently support aggregate expressions in the WHEN clause, but with
transition tables, it makes more sense to support it later if not now.


Kevin Grittner
2014-08-07 14:19:19 UTC
Permalink
Thanks for your review and comments, Amit!

Amit Khandekar <***@enterprisedb.com> wrote:
> On 21 June 2014 23:36, Kevin Grittner <***@ymail.com> wrote:
>> Kevin Grittner <***@ymail.com> wrote:
>> I didn't change the tuplestores to TID because it seemed to me that
>> it would preclude using transition relations with FDW triggers, and
>> it seemed bad not to support that.  Does anyone see a way around
>> that, or feel that it's OK to not support FDW triggers in this
>> regard?
>
> I think it is ok to use tuplestores for now, but as mentioned by you
> somewhere else in the thread, later on we should change this to using
> tids as an optimization.

Well, the optimization would probably be to use a tuplestore of
tids referencing modified tuples in the base table, rather than a
tuplestore of the data itself.  But I think we're in agreement.

>> Does this look good otherwise, as far as it goes?
>
> I have not yet gone through the patch extensively, but before that,
> just a few quick comments:
>
> I see that the tuplestores for transition tables are stored per query
> depth.  If the DML involves a table that has multiple child tables, it
> seems as though a single tuplestore would be shared among all these
> tables.  That means that if we define such triggers using the
> transition table clause for all the child tables, then the trigger
> function for a child table will see tuples from the other child tables
> as well.  Is that true?

I don't think so.  I will make a note of the concern to confirm by testing.

> If it is, it does not make sense.

I agree.

> I tried to google some SQLs that use REFERENCING clause with triggers.
> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
> can refer to a transition table, just like how it refers to NEW and
> OLD row variables.
>
> For e.g. :
> CREATE TRIGGER notify_dept
>   AFTER UPDATE ON weather
>   REFERENCING NEW_TABLE AS N_TABLE
>   NEW AS N_ROW
>   FOR EACH ROW
>   WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>   BEGIN
>     notify_department(N_ROW.temperature, N_ROW.city);
>   END
>
> Above, it is used to get an aggregate value of all the changed rows. I think
> we do not currently support aggregate expressions in the WHEN clause, but with
> transition tables, it makes more sense to support it later if not now.

Interesting point; I had not thought about that.  Will see if I can
include support for that in the patch for the next CF; failing
that; I will at least be careful to not paint myself into a corner
where it is unduly hard to do later.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Amit Khandekar
2014-08-12 06:54:16 UTC
Permalink
On 7 August 2014 19:49, Kevin Grittner <***@ymail.com> wrote:
> Amit Khandekar <***@enterprisedb.com> wrote:
>> On 21 June 2014 23:36, Kevin Grittner <***@ymail.com> wrote:
>>> Kevin Grittner <***@ymail.com> wrote:
>>> I didn't change the tuplestores to TID because it seemed to me that
>>> it would preclude using transition relations with FDW triggers, and
>>> it seemed bad not to support that. Does anyone see a way around
>>> that, or feel that it's OK to not support FDW triggers in this
>>> regard?
>>
>> I think it is ok to use tuplestores for now, but as mentioned by you
>> somewhere else in the thread, later on we should change this to using
>> tids as an optimization.
>
> Well, the optimization would probably be to use a tuplestore of
> tids referencing modified tuples in the base table, rather than a
> tuplestore of the data itself. But I think we're in agreement.
Right, that's what I meant.

>> I see that the tuplestores for transition tables are stored per query
>> depth.  If the DML involves a table that has multiple child tables, it
>> seems as though a single tuplestore would be shared among all these
>> tables.  That means that if we define such triggers using the
>> transition table clause for all the child tables, then the trigger
>> function for a child table will see tuples from the other child tables
>> as well.  Is that true?
>
> I don't think so. I will make a note of the concern to confirm by testing.
Thanks. I will wait for this.

>
>> I tried to google some SQLs that use REFERENCING clause with triggers.
>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>> can refer to a transition table, just like how it refers to NEW and
>> OLD row variables.
>>
>> For e.g. :
>> CREATE TRIGGER notify_dept
>> AFTER UPDATE ON weather
>> REFERENCING NEW_TABLE AS N_TABLE
>> NEW AS N_ROW
>> FOR EACH ROW
>> WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>> BEGIN
>> notify_department(N_ROW.temperature, N_ROW.city);
>> END
>>
>> Above, it is used to get an aggregate value of all the changed rows. I think
>> we do not currently support aggregate expressions in the WHEN clause, but with
>> transition tables, it makes more sense to support it later if not now.
>
> Interesting point; I had not thought about that. Will see if I can
> include support for that in the patch for the next CF; failing
> that; I will at least be careful to not paint myself into a corner
> where it is unduly hard to do later.
We currently do the WHEN checks while saving the AFTER trigger events,
and also add the tuples one by one while saving the trigger events. If
and when we support WHEN, we would need all of these tuples to be
saved *before* the first WHEN clause execution, and that seems to
demand more changes in the current trigger code.

More comments below :

---------------

Are we later going to extend this support for constraint triggers as
well ? I think these variables would make sense even for deferred
constraint triggers. I think we would need some more changes if we
want to support this, because there is no query depth while executing
deferred triggers and so the tuplestores might be inaccessible with
the current design.

---------------

The following (and other similar) statements:

    trigdesc->trig_insert_new_table |=
        (TRIGGER_FOR_INSERT(tgtype) &&
         TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;

can be simplified to:

    trigdesc->trig_insert_new_table |=
        (TRIGGER_FOR_INSERT(tgtype) &&
         TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));

-------------

AfterTriggerExecute()
{
    .....
    /*
     * Set up the tuplestore information.
     */
    if (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table)
        LocTriggerData.tg_olddelta =
            GetCurrentTriggerDeltaTuplestore(afterTriggers->old_tuplestores);
    .....
}
Above, trigdesc->trig_update_old_table is true if at least one of the
triggers of the table has transition tables. So this will cause the
delta table to be available on all of the triggers of the table even
if only one of them uses transition tables.  Maybe this would be
solved by using LocTriggerData.tg_trigger->tg_olddelta rather than
trigdesc->trig_update_old_table to decide whether
LocTriggerData.tg_olddelta should be assigned.

---------------

GetCurrentTriggerDeltaTuplestore() is now used for getting the fdw
tuplestore as well, so it should have a more general name.

---------------

#define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
    ((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
Since all other code sections assume trigger->tgoldtable to be
non-NULL, we can skip the NULL check above.

---------------

We should add a check to make sure the user does not supply the same
name for OLD TABLE and NEW TABLE.
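
For example, something like the following should presumably be
rejected at CREATE TRIGGER time (hypothetical names, just to
illustrate the mistake):

CREATE TRIGGER t_trig AFTER UPDATE ON t
  REFERENCING OLD TABLE AS trans NEW TABLE AS trans
  FOR EACH STATEMENT EXECUTE PROCEDURE f();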

---------------

The below code comment needs to be changed.
* Only tables are initially supported, and only for AFTER EACH STATEMENT
* triggers, but other permutations are accepted by the parser so we can give
* a meaningful message from C code.
The comment implies that with ROW triggers we do not support TABLE
transition variables. But the patch does make these variables visible
through ROW triggers.

--------------

Other than these, there are no more issues that I could find.


Kevin Grittner
2014-08-12 14:39:25 UTC
Permalink
Amit Khandekar <***@enterprisedb.com> wrote:
> On 7 August 2014 19:49, Kevin Grittner <***@ymail.com> wrote:
>> Amit Khandekar <***@enterprisedb.com> wrote:

>>> I tried to google some SQLs that use REFERENCING clause with triggers.
>>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>>> can refer to a transition table, just like how it refers to NEW and
>>> OLD row variables.
>>>
>>> For e.g. :
>>> CREATE TRIGGER notify_dept
>>>   AFTER UPDATE ON weather
>>>   REFERENCING NEW_TABLE AS N_TABLE
>>>   NEW AS N_ROW
>>>   FOR EACH ROW
>>>   WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>>>   BEGIN
>>>     notify_department(N_ROW.temperature, N_ROW.city);
>>>   END
>>>
>>> Above, it is used to get an aggregate value of all the changed rows. I think
>>> we do not currently support aggregate expressions in the WHEN clause, but with
>>> transition tables, it makes more sense to support it later if not now.
>>
>> Interesting point; I had not thought about that.  Will see if I can
>> include support for that in the patch for the next CF; failing
>> that; I will at least be careful to not paint myself into a corner
>> where it is unduly hard to do later.
> We currently do the WHEN checks while saving the AFTER trigger events,
> and also add the tuples one by one while saving the trigger events. If
> and when we support WHEN, we would need all of these tuples to be
> saved *before* the first WHEN clause execution, and that seems to
> demand more changes in the current trigger code.

In that case my inclination is to get things working with the less
invasive patch that doesn't try to support transition table usage
in WHEN clauses, and make support for that a later patch.

> ---------------
>
> Are we later going to extend this support for constraint triggers as
> well ? I think these variables would make sense even for deferred
> constraint triggers. I think we would need some more changes if we
> want to support this, because there is no query depth while executing
> deferred triggers and so the tuplestores might be inaccessible with
> the current design.

Hmm, I would also prefer to exclude that from an initial patch, but
this and the WHEN clause issue may influence a decision I've been
struggling with.  This is my first non-trivial foray into the
planner territory, and I've been struggling with how best to bridge
the gap between where the tuplestores are *captured* in the trigger
code and where they are referenced by name in a query and
incorporated into a plan for the executor.  (The execution level
itself was almost trivial; it's getting the tuplestore reference
through the parse analysis and planning phases that is painful for
me.)  At one point I created a "tuplestore registry" using a
process-local hashmap where each Tuplestorestate and its associated
name, TupleDesc, etc. would be registered, yielding a Tsrid (based
on Oid) to use through the parse analysis and planning steps, but
then I ripped it back out again in favor of just passing the
pointer to the structure which was stored in the registry; because
the registry seemed to me to introduce more risk of memory leaks,
references to freed memory, etc.  While it helped a little with
abstraction, it seemed to make things more fragile.  But I'm still
torn on this, and unsure whether such a registry is a good idea.

Any thoughts on that?

> ---------------
>
> The following (and similarly other) statements :
> trigdesc->trig_insert_new_table |=
> (TRIGGER_FOR_INSERT(tgtype) &&
> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;
>
> can be simplified to:
>
> trigdesc->trig_insert_new_table |=
> (TRIGGER_FOR_INSERT(tgtype) &&
> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));

Yeah, I did recognize that, but I always get squeamish about
logical operations with values which do not equal true or false.
TRIGGER_FOR_INSERT and similar macros don't necessarily return true
for something which is not false.  I should just get over that and
trust the compiler a bit more....  :-)

> ---------------
>
> AfterTriggerExecute()
> {
> .....
> /*
> * Set up the tuplestore information.
> */
> if (trigdesc->trig_delete_old_table || trigdesc->trig_update_old_table)
> LocTriggerData.tg_olddelta =
> GetCurrentTriggerDeltaTuplestore(afterTriggers->old_tuplestores);
> .....
> }
> Above, trigdesc->trig_update_old_table is true if at least one of the
> triggers of the table has transition tables. So this will cause the
> delta table to be available on all of the triggers of the table even
> if only one of them uses transition tables.  Maybe this would be
> solved by using LocTriggerData.tg_trigger->tg_olddelta rather than
> trigdesc->trig_update_old_table to decide whether
> LocTriggerData.tg_olddelta should be assigned.

Good point.  Will do.

> ---------------
>
> GetCurrentTriggerDeltaTuplestore() is now used for getting the fdw
> tuplestore as well, so it should have a more general name.

Well, "delta" *was* my attempt at a more general name.  I need to
do another pass over the naming choices to make sure I'm being
consistent, but I attempted to use these distinctions:

  transition - OLD or NEW, ROW or TABLE data for a trigger; this is
  the terminology used in the SQL standard

  oldtable/newtable - transition data for a relation representing
  what a statement affected; corresponding to the REFERENCING
  [OLD|NEW] TABLE clauses in the SQL standard

  tuplestore - for the planner and executor, since we may find
  other uses for feeding in tuplestores; I don't want to assume in
  the naming there that these are from triggers at all

  delta - a tuplestore representing changes to data; perhaps this
  is too close in concept to transition in the trigger code since
  there is no other source for delta data and the naming should
  just  use transition

Any specific suggestions?  Maybe eliminate use of "delta" and make
that GetCurrentTriggerTransitionTuplestore()?

> ---------------
>
> #define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
> ((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
> Since all other code sections assume trigger->tgoldtable to be
> non-NULL, we can skip the NULL check above.

I intentionally left both tests in initially because I wasn't sure
which representation would be used.  Will review whether it is time
to get off the fence on that.  ;-)

> ---------------
>
> We should add a check to make sure the user does not supply the same
> name for OLD TABLE and NEW TABLE.

I already caught that and implemented a check in my development code.

> ---------------
>
> The below code comment needs to be changed.
> * Only tables are initially supported, and only for AFTER EACH STATEMENT
> * triggers, but other permutations are accepted by the parser so we can give
> * a meaningful message from C code.
> The comment implies that with ROW triggers we do not support TABLE
> transition variables. But the patch does make these variables visible
> through ROW triggers.

Oops.  Will fix.


Thanks for the review!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Amit Khandekar
2014-08-14 09:23:03 UTC
Permalink
On 12 August 2014 20:09, Kevin Grittner <***@ymail.com> wrote:
> Amit Khandekar <***@enterprisedb.com> wrote:
>> On 7 August 2014 19:49, Kevin Grittner <***@ymail.com> wrote:
>>> Amit Khandekar <***@enterprisedb.com> wrote:
>
>>>> I tried to google some SQLs that use REFERENCING clause with triggers.
>>>> It looks like in some database systems, even the WHEN clause of CREATE TRIGGER
>>>> can refer to a transition table, just like how it refers to NEW and
>>>> OLD row variables.
>>>>
>>>> For e.g. :
>>>> CREATE TRIGGER notify_dept
>>>> AFTER UPDATE ON weather
>>>> REFERENCING NEW_TABLE AS N_TABLE
>>>> NEW AS N_ROW
>>>> FOR EACH ROW
>>>> WHEN ((SELECT AVG (temperature) FROM N_TABLE) > 10)
>>>> BEGIN
>>>> notify_department(N_ROW.temperature, N_ROW.city);
>>>> END
>>>>
>>>> Above, it is used to get an aggregate value of all the changed rows. I think
>>>> we do not currently support aggregate expressions in the WHEN clause, but with
>>>> transition tables, it makes more sense to support it later if not now.
>>>
>>> Interesting point; I had not thought about that. Will see if I can
>>> include support for that in the patch for the next CF; failing
>>> that; I will at least be careful to not paint myself into a corner
>>> where it is unduly hard to do later.
>> We currently do the WHEN checks while saving the AFTER trigger events,
>> and also add the tuples one by one while saving the trigger events. If
>> and when we support WHEN, we would need all of these tuples to be
>> saved *before* the first WHEN clause execution, and that seems to
>> demand more changes in the current trigger code.
>
> In that case my inclination is to get things working with the less
> invasive patch that doesn't try to support transition table usage
> in WHEN clauses, and make support for that a later patch.
Agreed.
>
>> ---------------
>>
>> Are we later going to extend this support for constraint triggers as
>> well ? I think these variables would make sense even for deferred
>> constraint triggers. I think we would need some more changes if we
>> want to support this, because there is no query depth while executing
>> deferred triggers and so the tuplestores might be inaccessible with
>> the current design.
>
> Hmm, I would also prefer to exclude that from an initial patch, but
> this and the WHEN clause issue may influence a decision I've been
> struggling with. This is my first non-trivial foray into the
> planner territory, and I've been struggling with how best to bridge
> the gap between where the tuplestores are *captured* in the trigger
> code and where they are referenced by name in a query and
> incorporated into a plan for the executor. (The execution level
> itself was almost trivial; it's getting the tuplestore reference
> through the parse analysis and planning phases that is painful for
> me.
I am not sure why you think we would need to refer to the tuplestore
in the parse analysis and planner phases. It seems that we would need
it only in the execution phase. Or maybe I didn't understand your
point.

> )  At one point I created a "tuplestore registry" using a
> process-local hashmap where each Tuplestorestate and its associated
> name, TupleDesc, etc. would be registered, yielding a Tsrid (based
> on Oid) to use through the parse analysis and planning steps, but
> then I ripped it back out again in favor of just passing the
> pointer to the structure which was stored in the registry; because
> the registry seemed to me to introduce more risk of memory leaks,
> references to freed memory, etc. While it helped a little with
> abstraction, it seemed to make things more fragile. But I'm still
> torn on this, and unsure whether such a registry is a good idea.
I feel it is ok to use direct tuplestore pointers as handles, the way
you have done in the patch. I may be biased in favor of that as
against the above method of accessing a tuplestore by its name through
a hash lookup; the reason for my bias might be one particular way I
see of supporting deferred constraint triggers. In the after trigger
event structure, we can store these delta tuplestore pointers as well.
This way, we don't need to worry about how to look up these
tuplestores, and also need not worry about the mechanism that moves
these events from the deferred event list to the immediate event list
in case of SET CONSTRAINTS. The only thing we would need to make sure
of is to clean up these tuplestores exactly where the event structures
get cleaned up.

This is all speculation on my part. But what I think is that we don't
have to heavily refactor your patch in order to extend support to
deferred constraint triggers. And for the WHEN clause, we will have to
make some changes in the existing trigger code anyway.


>> ---------------
>>
>> The following (and similarly other) statements :
>> trigdesc->trig_insert_new_table |=
>> (TRIGGER_FOR_INSERT(tgtype) &&
>> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable)) ? true : false;
>>
>> can be simplified to:
>>
>> trigdesc->trig_insert_new_table |=
>> (TRIGGER_FOR_INSERT(tgtype) &&
>> TRIGGER_USES_TRANSITION_TABLE(trigger->tgnewtable));
>
> Yeah, I did recognize that, but I always get squeamish about
> logical operations with values which do not equal true or false.
> TRIGGER_FOR_INSERT and similar macros don't necessarily return true
> for something which is not false. I should just get over that and
> trust the compiler a bit more.... :-)
I am ok either way :).  Maybe your change is more readable to others
as well, and it does seem readable to me now that I see it again.

> Well, "delta" *was* my attempt at a more general name. I need to
> do another pass over the naming choices to make sure I'm being
> consistent, but I attempted to use these distinctions:
>
> transition - OLD or NEW, ROW or TABLE data for a trigger; this is
> the terminology used in the SQL standard
Ah ok, so transition applies to both OLD/NEW and ROW/TABLE. In that
case, the one you suggested below
(GetCurrentTriggerTransitionTuplestore) sounds very good to me.
>
> oldtable/newtable - transition data for a relation representing
> what a statement affected; corresponding to the REFERENCING
> [OLD|NEW] TABLE clauses in the SQL standard
>
> tuplestore - for the planner and executor, since we may find
> other uses for feeding in tuplestores; I don't want to assume in
> the naming there that these are from triggers at all
>
> delta - a tuplestore representing changes to data; perhaps this
> is too close in concept to transition in the trigger code since
> there is no other source for delta data and the naming should
> just use transition
All these names sound good to me. Thanks.

>
> Any specific suggestions? Maybe eliminate use of "delta" and make
> that GetCurrentTriggerTransitionTuplestore()?
Yes this sounds good.

>
>> ---------------
>>
>> #define TRIGGER_USES_TRANSITION_TABLE(namepointer) \
>> ((namepointer) != (char *) NULL && (*(namepointer)) != '\0')
>> Since all other code sections assume trigger->tgoldtable to be
>> non-NULL, we can skip the NULL check above.
>
> I intentionally left both tests in initially because I wasn't sure
> which representation would be used. Will review whether it is time
> to get off the fence on that. ;-)
OK.


Amit Khandekar
2014-08-14 09:45:53 UTC
Permalink
>> The execution level
>> itself was almost trivial; it's getting the tuplestore reference
>> through the parse analysis and planning phases that is painful for
>> me.
> I am not sure why you think we would need to refer to the tuplestore
> in the parse analysis and planner phases. It seems that we would need
> it only in the execution phase. Or maybe I didn't understand your
> point.
Ah, I think I understand now. That might be because you are thinking
of having an infrastructure common to triggers and materialized views,
right?


Kevin Grittner
2014-08-14 22:34:57 UTC
Permalink
Amit Khandekar <***@enterprisedb.com> wrote:

>>> The execution level itself was almost trivial; it's getting the
>>> tuplestore reference through the parse analysis and planning
>>> phases that is painful for me.
>> I am not sure why you think we would need to refer to the
>> tuplestore in the parse analysis and planner phases. It seems
>> that we would need it only in the execution phase. Or maybe I
>> didn't understand your point.
> Ah, I think I understand now. That might be because you are
> thinking of having an infrastructure common to triggers and
> materialized views, right?

Well, it's more immediate than that.  The identifiers in the
trigger function are not resolved to particular objects until there
is a request to fire the trigger.  At that time the parse analysis
needs to find the name defined somewhere.  It's not defined in the
catalogs like a table or view, and it's not defined in the query
itself like a CTE or VALUES clause.  The names specified in trigger
creation must be recognized as needing to resolve to the new
TuplestoreScan, and it needs to match those to the tuplestores
themselves.  Row counts, costing, etc. need to be provided so the
optimizer can pick a good plan in what might be a complex query
with many options.  I'm finding the planner work here to be harder
than everything else put together.

On the bright side, once I'm done, I might know enough about the
planner to do things a lot faster next time.  :-)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Amit Khandekar
2014-08-18 06:23:38 UTC
Permalink
On 15 August 2014 04:04, Kevin Grittner <***@ymail.com> wrote:
> Amit Khandekar <***@enterprisedb.com> wrote:
>
>>>> The execution level itself was almost trivial; it's getting the
>>>> tuplestore reference through the parse analysis and planning
>>>> phases that is painful for me.
>>> I am not sure why you think we would need to refer to the
>>> tuplestore in the parse analysis and planner phases. It seems
>>> that we would need it only in the execution phase. Or maybe I
>>> didn't understand your point.
>> Ah, I think I understand now. That might be because you are
>> thinking of having an infrastructure common to triggers and
>> materialized views, right?
>
> Well, it's more immediate than that. The identifiers in the
> trigger function are not resolved to particular objects until there
> is a request to fire the trigger.
Ah ok, you are talking about changes specific to the PL language
handlers. Yes, I agree that in the plpgsql parser (and in any PL
handler), we need to parse such table references in the SQL construct,
and transform it into something else.
> At that time the parse analysis
> needs to find the name defined somewhere. It's not defined in the
> catalogs like a table or view, and it's not defined in the query
> itself like a CTE or VALUES clause. The names specified in trigger
> creation must be recognized as needing to resolve to the new
> TuplestoreScan, and it needs to match those to the tuplestores
> themselves.
One approach that comes to my mind is transforming such transition
table references into a RangeFunction reference in the plpgsql
parser/lexer. This RangeFunction would point to a set-returning
catalog function that would return rows from the delta tuplestore. So
the tuplestore itself would remain a blackbox. Once we have such a
function, any language handler can re-use the same interface.
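
For illustration, a query like "SELECT avg(temperature) FROM N_TABLE"
inside the trigger body could be rewritten internally into something
like this (the function name here is purely hypothetical):

SELECT avg(temperature)
  FROM pg_transition_table('new') AS N_TABLE(city text, temperature int);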

> Row counts, costing, etc. need to be provided so the
> optimizer can pick a good plan in what might be a complex query
> with many options.
I am also not sure about the costing, but I guess it may be possible
to supply some costs to the FunctionScan plan node.


Kevin Grittner
2014-08-22 14:47:39 UTC
Permalink
Amit Khandekar <***@enterprisedb.com> wrote:
> On 15 August 2014 04:04, Kevin Grittner <***@ymail.com> wrote:

>> The identifiers in the trigger function are not resolved to
>> particular objects until there is a request to fire the trigger.
> Ah ok, you are talking about changes specific to the PL language
> handlers. Yes, I agree that in the plpgsql parser (and in any PL
> handler), we need to parse such table references in the SQL construct,
> and transform it into something else.
>> At that time the parse analysis
>> needs to find the name defined somewhere.  It's not defined in the
>> catalogs like a table or view, and it's not defined in the query
>> itself like a CTE or VALUES clause.  The names specified in trigger
>> creation must be recognized as needing to resolve to the new
>> TuplestoreScan, and it needs to match those to the tuplestores
>> themselves.

For now I have added the capability to register named tuplestores
(with the associated TupleDesc) to SPI.  This seems like a pretty
useful place to be able to do this, although I tried to arrange for
it to be reasonably easy to add other registration mechanisms
later.

> One approach that comes to my mind is transforming such transition
> table references into a RangeFunction reference in the plpgsql
> parser/lexer. This RangeFunction would point to a set-returning
> catalog function that would return rows from the delta tuplestore. So
> the tuplestore itself would remain a blackbox. Once we have such a
> function, any language handler can re-use the same interface.

plpgsql seems entirely the wrong place for that.  What I have done
is for trigger.c to add the tuplestores to the TriggerData
structure, and not worry about it beyond that.  I have provided a
way to register named tuplestores to SPI, and for the subsequent
steps to recognize those names as relation names.  The changes to
plpgsql are minimal, since they only need to take the tuplestores
from TriggerData and register them with SPI before making the SPI
calls to plan and execute.

>> Row counts, costing, etc. need to be provided so the
>> optimizer can pick a good plan in what might be a complex query
>> with many options.
> I am also not sure about the costing, but I guess it may be possible
> to supply some costs to the FunctionScan plan node.

I went with a bogus row estimate for now.  I think we can arrange
for a tuplestore to keep a count of rows added, and a function to
retrieve that, and thereby get a better estimate, but that is not
yet done.

I think this is approaching a committable state, although I think
it should probably be broken down to four separate patches.  Two of
them are very small and easy to generate: the SPI changes and the
plpgsql changes are in files that are distinct from all the other
changes.  What remains is to separate the parsing of the CREATE
TRIGGER statement and the trigger code that generates the
tuplestores for the transition tables from the analysis, planning,
and execution phases which deal with a more generic concept of
named tuplestores.  Some of the changes for those two things both
touch files related to parsing -- for different things, but in the
same files.

To avoid polluting parsing code with SPI and tuplestore includes, I
created a very thin abstraction layer for parse analysis to use.  I
didn't do the same for the executor yet, but it may be a good idea
there, too.

It probably still needs more documentation and definitely needs
more regression tests.  I have used these transition tables in
plpgsql in simple tests, but there may be bugs still lurking.
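
For a flavor of those tests, this is roughly the kind of thing that
works for me now (a trivial example; the names are arbitrary):

CREATE TABLE words (word text);

CREATE FUNCTION count_new_words() RETURNS trigger
  LANGUAGE plpgsql
AS $$
BEGIN
  RAISE NOTICE 'inserted % word(s)', (SELECT count(*) FROM new_words);
  RETURN NULL;
END;
$$;

CREATE TRIGGER words_stmt_trig
  AFTER INSERT ON words
  REFERENCING NEW TABLE AS new_words
  FOR EACH STATEMENT
  EXECUTE PROCEDURE count_new_words();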

New patch attached.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Kevin Grittner
2014-08-26 23:26:13 UTC
Permalink
Kevin Grittner <***@ymail.com> wrote:

> I think this is approaching a committable state, although I think
> it should probably be broken down to four separate patches.

And here they are. This should net to the same set of changes as
the prior post, but the changes are logically separated. They are
labeled as v3 to match the last post.

trigger-transition-tables allows transition table names to be
specified in a REFERENCING clause of CREATE TRIGGER, per spec, and
creates tuplestores when needed in the TriggerData structure. It
doesn't worry about who does what with that data. This doesn't
depend on anything else.
15 files changed, 577 insertions(+), 43 deletions(-)

spi-tuplestore-registry allows tuplestores, with associated name
and TupleDesc, to be registered with the current SPI connection.
Queries planned or executed on that connection will recognize the
name as a tuplestore relation. It doesn't care who is registering
the tuplestores or what happens to them. It doesn't depend on
anything else.
5 files changed, 445 insertions(+)

executor-tuplestore-relations covers parse analysis,
planner/optimizer, and executor layers. It pulls information from
the registry in a couple places, but is not very intertwined with
SPI. That is the only registry it knows right now, but it should
be easy to add other registries if needed. It doesn't care where
the tuplestore came from, so we should be able to use this for
other things. I have it in mind to use it for incremental
maintenance of materialized views, but I expect that other uses
will be found. It has a logical dependency on the
spi-tuplestore-registry patch. While it doesn't have a logical
dependency on trigger-transition-tables, they both modified some of
the same files, and this won't apply cleanly unless
trigger-transition-tables is applied first. If you hand-correct
the failed hunks, it compiles and runs fine without
trigger-transition-tables.
30 files changed, 786 insertions(+), 9 deletions(-)

plpgsql-after-trigger-transition-tables takes the tuplestores from
TriggerData and registers them with SPI before trigger planning and
execution. It depends on the trigger-transition-tables and
spi-tuplestore-registry patches to build, and won't do anything
useful at run time without the executor-tuplestore-relations patch.
3 files changed, 37 insertions(+), 11 deletions(-)

Hopefully this will make review easier.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Heikki Linnakangas
2014-08-27 07:23:56 UTC
Permalink
On 08/27/2014 02:26 AM, Kevin Grittner wrote:
> spi-tuplestore-registry allows tuplestores, with associated name
> and TupleDesc, to be registered with the current SPI connection.
> Queries planned or executed on that connection will recognize the
> name as a tuplestore relation. It doesn't care who is registering
> the tuplestores or what happens to them. It doesn't depend on
> anything else.
> 5 files changed, 445 insertions(+)
>
> ...
>
> plpgsql-after-trigger-transition-tables takes the tuplestores from
> TriggerData and registers them with SPI before trigger planning and
> execution. It depends on the trigger-transition-tables and
> spi-tuplestore-registry patches to build, and won't do anything
> useful at run time without the executor-tuplestore-relations patch.
> 3 files changed, 37 insertions(+), 11 deletions(-)

This is a surprising way to expose the NEW/OLD relations to the
planner/executor. The problem is the same as with making PL/pgSQL
variables available to the planner/executor in queries within a PL/pgSQL
function, and the solution we have for that is the "parser hooks" you
pass to SPI_prepare_params. This tuplestore registry is a different
solution to the same problem - we could've implemented parameters with a
registry like this as well. Let's not mix two different designs.

I suggest adding a new hook to the ParseState struct (p_rangevar_hook?).
The planner calls it whenever it sees a reference to a table, and
the hook function returns back some sort of placeholder reference to the
tuplestore. With variables, the hook returns a Param node, and at
execution time, the executor calls the paramFetch hook to fetch the
value of the param. For relations/tuplestores, I guess we'll need to
invent something like a Param node, but for holding information about
the relation. Like your TsrData struct, but without the pointer to the
tuplestore. At execution time, in the SPI_execute call, you pass the
pointer to the tuplestore in the ParamListInfo struct, like you pass
parameter values.

Does this make sense? In essence, make the relations work like PL/pgSQL
variables do. If you squint a little, the new/old relation is a variable
from the function's point of view, and a parameter from the
planner/executor's point of view. It's just a variable/parameter that
holds a set of tuples, instead of a single Datum.

- Heikki



Jim Nasby
2014-08-27 16:51:40 UTC
Permalink
On 8/27/14, 2:23 AM, Heikki Linnakangas wrote:
> Does this make sense? In essence, make the relations work like
> PL/pgSQL variables do. If you squint a little, the new/old relation
> is a variable from the function's point of view, and a parameter from
> the planner/executor's point of view. It's just a variable/parameter
> that holds a set of tuples, instead of a single Datum.

Something to keep in mind is that users will definitely think about
NEW/OLD as tables. I suspect that it won't be long after release
before someone asks why they can't create an index on it. :)
--
Jim C. Nasby, Data Architect ***@nasby.net
512.569.9461 (cell) http://jim.nasby.net


David Fetter
2014-08-27 17:19:19 UTC
Permalink
On Wed, Aug 27, 2014 at 11:51:40AM -0500, Jim Nasby wrote:
> On 8/27/14, 2:23 AM, Heikki Linnakangas wrote:
> >Does this make sense? In essence, make the relations work like
> >PL/pgSQL variables do. If you squint a little, the new/old relation
> >is a variable from the function's point of view, and a parameter
> >from the planner/executor's point of view. It's just a
> >variable/parameter that holds a set of tuples, instead of a single
> >Datum.
>
> Something to keep in mind is that users will definitely think about
> NEW/OLD as tables. I suspect that it won't be long after release
> before someone asks why they can't create an index on it. :)

Continuing with this digression, that request seems more likely with
views and foreign tables, given that they persist across statements.
I'm given to understand that other systems have at least the former.

Cheers,
David.
--
David Fetter <***@fetter.org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: ***@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


Kevin Grittner
2014-08-27 21:09:39 UTC
Permalink
Jim Nasby <***@nasby.net> wrote:

> Something to keep in mind is that users will definitely think about NEW/OLD as
> tables. I suspect that it won't be long after release before someone asks
> why they can't create an index on it. :)

I'm comfortable saying "No" to that.  But it's a good point -- I'll
review error checking and documentation to make sure that it is
clear.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Kevin Grittner
2014-08-27 21:03:04 UTC
Permalink
Heikki Linnakangas <***@vmware.com> wrote:
> On 08/27/2014 02:26 AM, Kevin Grittner wrote:

>> spi-tuplestore-registry allows tuplestores, with associated name
>> and TupleDesc, to be registered with the current SPI connection.
>> Queries planned or executed on that connection will recognize the
>> name as a tuplestore relation.  It doesn't care who is registering
>> the tuplestores or what happens to them.  It doesn't depend on
>> anything else.
>> 5 files changed, 445 insertions(+)
>>
>> ...
>>
>> plpgsql-after-trigger-transition-tables takes the tuplestores from
>> TriggerData and registers them with SPI before trigger planning and
>> execution.  It depends on the trigger-transition-tables and
>> spi-tuplestore-registry patches to build, and won't do anything
>> useful at run time without the executor-tuplestore-relations patch.
>> 3 files changed, 37 insertions(+), 11 deletions(-)
>
> This is a surprising way to expose the NEW/OLD relations to the
> planner/executor. The problem is the same as with making PL/pgSQL
> variables available to the planner/executor in queries within a PL/pgSQL
> function, and the solution we have for that is the "parser hooks" you
> pass to SPI_prepare_params. This tuplestore registry is a different
> solution to the same problem - we could've implemented parameters with a
> registry like this as well. Let's not mix two different designs.
>
> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
> ?). The planner calls it whenever it sees a reference to a table, and
> the hook function returns back some sort of placeholder reference to the
> tuplestore. With variables, the hook returns a Param node, and at
> execution time, the executor calls the paramFetch hook to fetch the
> value of the param. For relations/tuplestores, I guess we'll need to
> invent something like a Param node, but for holding information about
> the relation. Like your TsrData struct, but without the pointer to the
> tuplestore. At execution time, in the SPI_execute call, you pass the
> pointer to the tuplestore in the ParamListInfo struct, like you pass
> parameter values.
>
> Does this make sense?

I see your point, but SPI first has to be made aware of the
tuplestores and their corresponding names and TupleDesc structures.
Does it make sense to keep the SPI_register_tuplestore() and
SPI_unregister_tuplestore() functions for the client side of the
API, and pass things along to the parse analysis through execution
phases using the techniques you describe?  That would eliminate the
need for the SPI_get_caller_tuplestore() function and the
parse_tuplestore.[ch] files, and change how the data is fetched in
parse analysis and execution phases, but that seems fairly minimal
-- there are exactly three places that would need to call the new
hooks where the patch is now getting the information from the
registry.
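
For illustration, here is a minimal sketch of that client side used
outside of any trigger. Every name in it (SPI_register_tuplestore, the
Tsr/TsrData type and its fields) comes from the patch under discussion,
not from any committed API, and the function itself is invented for the
example:

#include "executor/spi.h"
#include "utils/tuplestore.h"

static void
query_named_tuplestore(Tuplestorestate *tstate, TupleDesc tupdesc)
{
    Tsr         tsr;

    SPI_connect();

    tsr = palloc(sizeof(TsrData));
    tsr->name = "deltas";        /* unqualified name queries will use */
    tsr->tstate = tstate;        /* the tuplestore itself */
    tsr->tupdesc = tupdesc;      /* row shape, needed by parse analysis */
    /* (the trigger code also fills in tsr->relid from the relation) */
    SPI_register_tuplestore(tsr);

    /* "deltas" now resolves much like a lightweight relation */
    SPI_execute("SELECT count(*) FROM deltas", true, 0);

    /* the registration lapses when the connection is closed */
    SPI_finish();
}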

> In essence, make the relations work like PL/pgSQL
> variables do. If you squint a little, the new/old relation is a variable
> from the function's point of view, and a parameter from the
> planner/executor's point of view. It's just a variable/parameter that
> holds a set of tuples, instead of a single Datum.

I don't have to squint that hard -- I've always been comfortable
with the definition of a table as a relation variable, and it's not
too big a stretch to expand that to a tuplestore.  ;-)  In fact, I
will be surprised if someone doesn't latch onto this to create a
new "declared temporary table" that only exists within the scope of
a compound statement (i.e., a BEGIN/END block).  You would DECLARE
them just like you would a scalar variable in a PL, and they would
have the same scope.

I'll take a look at doing this in the next couple days, and see
whether doing it that way is as easy as it seems on the face of it.

Thanks!

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Stephen Frost
2014-08-27 21:11:06 UTC
Permalink
* Kevin Grittner (***@ymail.com) wrote:
> Heikki Linnakangas <***@vmware.com> wrote:
> > In essence, make the relations work like PL/pgSQL
> > variables do. If you squint a little, the new/old relation is a variable
> > from the function's point of view, and a parameter from the
> > planner/executor's point of view. It's just a variable/parameter that
> > holds a set of tuples, instead of a single Datum.
>
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore.  ;-)  In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block).  You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

(not following this very closely, but saw this...)

Yes, please? :)

Thanks!

Stephen
Marti Raudsepp
2014-08-28 08:48:55 UTC
Permalink
On Thu, Aug 28, 2014 at 12:03 AM, Kevin Grittner <***@ymail.com> wrote:
>> In essence, make the relations work like PL/pgSQL
>> variables do. If you squint a little, the new/old relation is a variable
>> from the function's point of view, and a parameter from the
>> planner/executor's point of view. It's just a variable/parameter that
>> holds a set of tuples, instead of a single Datum.

> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

We already have all this with refcursor variables. Admittedly cursors
have some weird semantics (mutable position) and currently they're
cumbersome to combine into queries, but would a new "relation
variable" be sufficiently better to warrant a new type?

Why not extend refcursors and make them easier to use instead?

Regards,
Marti


Craig Ringer
2014-09-25 01:58:32 UTC
Permalink
On 08/28/2014 05:03 AM, Kevin Grittner wrote:
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore. ;-) In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.
>
> I'll take a look at doing this in the next couple days, and see
> whether doing it that way is as easy as it seems on the face of it.

Oracle's TABLE variables in PL/SQL are quite similar; it might be worth
observing how they work for comparison.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Heikki Linnakangas
2014-08-27 21:22:39 UTC
Permalink
On 08/28/2014 12:03 AM, Kevin Grittner wrote:
> Heikki Linnakangas <***@vmware.com> wrote:
>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>> ?). The planner calls it whenever it sees a reference to a table, and
>> the hook function returns back some sort of placeholder reference to the
>> tuplestore. With variables, the hook returns a Param node, and at
>> execution time, the executor calls the paramFetch hook to fetch the
>> value of the param. For relations/tuplestores, I guess we'll need to
>> invent something like a Param node, but for holding information about
>> the relation. Like your TsrData struct, but without the pointer to the
>> tuplestore. At execution time, in the SPI_execute call, you pass the
>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>> parameter values.
>>
>> Does this make sense?
>
> I see your point, but SPI first has to be made aware of the
> tuplestores and their corresponding names and TupleDesc structures.
> Does it make sense to keep the SPI_register_tuplestore() and
> SPI_unregister_tuplestore() functions for the client side of the
> API, and pass things along to the parse analysis through execution
> phases using the techniques you describe?

Sorry, I didn't understand that. What do you mean by "first", and the
"client side of the API"? I don't see any need for the
SPI_register_tuplestore() and and SPI_unregister_tuplestore() functions
if you use the hooks.

>> In essence, make the relations work like PL/pgSQL
>> variables do. If you squint a little, the new/old relation is a variable
>> from the function's point of view, and a parameter from the
>> planner/executor's point of view. It's just a variable/parameter that
>> holds a set of tuples, instead of a single Datum.
>
> I don't have to squint that hard -- I've always been comfortable
> with the definition of a table as a relation variable, and it's not
> too big a stretch to expand that to a tuplestore. ;-) In fact, I
> will be surprised if someone doesn't latch onto this to create a
> new "declared temporary table" that only exists within the scope of
> a compound statement (i.e., a BEGIN/END block). You would DECLARE
> them just like you would a scalar variable in a PL, and they would
> have the same scope.

Yeah, that would be cool :-).

- Heikki



Heikki Linnakangas
2014-09-01 17:07:42 UTC
Permalink
On 08/30/2014 12:15 AM, Kevin Grittner wrote:
> Heikki Linnakangas <***@vmware.com> wrote:
>> On 08/28/2014 12:03 AM, Kevin Grittner wrote:
>>> Heikki Linnakangas <***@vmware.com> wrote:
>>>> I suggest adding a new hook to the ParseState struct, (p_rangevar_hook
>>>> ?). The planner calls it whenever it sees a reference to a table, and
>>>> the hook function returns back some sort of placeholder reference to the
>>>> tuplestore. With variables, the hook returns a Param node, and at
>>>> execution time, the executor calls the paramFetch hook to fetch the
>>>> value of the param. For relations/tuplestores, I guess we'll need to
>>>> invent something like a Param node, but for holding information about
>>>> the relation. Like your TsrData struct, but without the pointer to the
>>>> tuplestore. At execution time, in the SPI_execute call, you pass the
>>>> pointer to the tuplestore in the ParamListInfo struct, like you pass
>>>> parameter values.
>>>>
>>>> Does this make sense?
>>>
>>> I see your point, but SPI first has to be made aware of the
>>> tuplestores and their corresponding names and TupleDesc structures.
>>> Does it make sense to keep the SPI_register_tuplestore() and
>>> SPI_unregister_tuplestore() functions for the client side of the
>>> API, and pass things along to the parse analysis through execution
>>> phases using the techniques you describe?
>>
>> Sorry, I didn't understand that. What do you mean by "first", and the
>> "client side of the API"? I don't see any need for the
>> SPI_register_tuplestore() and and SPI_unregister_tuplestore() functions
>> if you use the hooks.
>
> If we were to go with the hooks as you propose, we would still need
> to take the information from TriggerData and put it somewhere else
> for the hook to reference.

Sure.

> The hooks are generalized for plpgsql,
> not just for triggers, and it doesn't seem appropriate for them to
> be fishing around in the TriggerData structure.

PLpgSQL_execstate seems like the appropriate place.

> And what if we add other sources for tuplestores?

What about it?

> The lookup during parse analysis
> each time an apparent relation name is encountered must be simple
> and fast.

We already use hooks for ColumnRefs, which are called even more often,
and we haven't had a problem making that fast enough.

> I want named tuplestores to be easy to use from *all* PLs (for
> trigger usage) as well as useful for other purposes people may want
> to develop.

I'm not sure other PLs would even want to resolve the old/new relations
like PL/pgSQL does. It might be more natural to access the new/old
tuplestores as perl or python hashes or arrays, for example. But if they
do, it's not that difficult to write the hooks.

> I had to change the hashkey for plpgsql's plan
> caching, but that needs to be done regardless of the API (to
> prevent problems in the obscure case that someone attaches the same
> trigger function to the same table for the same events more than
> once with different trigger names and different transition table
> names). If you ignore that, the *entire* change to use this in
> plpgsql is to add these lines to plpgsql_exec_trigger():
>
> /*
>  * Capture the NEW and OLD transition TABLE tuplestores (if specified for
>  * this trigger).
>  */
> if (trigdata->tg_newtable)
> {
>     Tsr         tsr = palloc(sizeof(TsrData));
>
>     tsr->name = trigdata->tg_trigger->tgnewtable;
>     tsr->tstate = trigdata->tg_newtable;
>     tsr->tupdesc = trigdata->tg_relation->rd_att;
>     tsr->relid = trigdata->tg_relation->rd_id;
>     SPI_register_tuplestore(tsr);
> }
> if (trigdata->tg_oldtable)
> {
>     Tsr         tsr = palloc(sizeof(TsrData));
>
>     tsr->name = trigdata->tg_trigger->tgoldtable;
>     tsr->tstate = trigdata->tg_oldtable;
>     tsr->tupdesc = trigdata->tg_relation->rd_att;
>     tsr->relid = trigdata->tg_relation->rd_id;
>     SPI_register_tuplestore(tsr);
> }
>
> With the new SPI functions, the code to implement this in each
> other PL should be about the same (possibly identical), and areas
> using SPI only need similar code to make tuplestores visible to the
> planner and usable in the executor if someone has another use for
> this. You just do the above once you have run SPI_connect() and
> before preparing or executing any query that references the named
> tuplestore.

With hooks, the code to implement them in other PLs would be about the
same too, if they want the same behavior.

> It remains available on that SPI connection until
> SPI_finish() is called or you explicitly unregister it (by name).

Yeah, I don't like that. The SPI interface is currently stateless. Well,
except for cursors and plans explicitly saved with SPI_keepplan. But the
way queries are parsed is stateless - you pass all the necessary
information as part of the SPI_execute call (or similar), using direct
arguments and the ParamListInfo struct.


If you don't want to use hooks, I nevertheless feel that the old/new
relations should be passed as part of the ParamListInfo struct, one way
or another. With hooks, you would set the parserSetup hook, which in
turn would set up the table-ref hook similar to the column-ref hooks,
but if you don't want to do that, you could also add new fields directly
to ParamListInfo for the relations, like the "ParamExternData
params[1]" array that's there currently.

> If we use the hooks, I think it will be several times as much code,
> more invasive, and probably more fragile. More importantly, these
> hooks are not used by the other PLs included with core, and are not
> used in any of the other core code, anywhere. They all use SPI, so
> they can do the above very easily, but adding infrastructure for
> them to use the hooks would be a lot more work, and I'm not seeing
> a corresponding benefit.

If they use SPI, they can use the hooks just as well.

> I think there is some room to change the API as used by the parser,
> planner, and executor so that no changes are needed there if we add
> some other registration mechanism besides SPI, but I think having a
> registry for tuplestores that sort of takes the place of the
> catalogs and related caches (but is far simpler, being process
> local) is a better model than what you propose.
>
> In summary, thinking of the definition of a named tuplestore as a
> variable is the wrong parallel; it is more like a lightweight
> relation, and the comparison should be to that, not to function
> parameters or local variables.

I'm not too convinced. Marti Raudsepp mentioned refcursor variables, and
I think he's on to something. Refcursors are a bit difficult to
understand, so I wouldn't use those directly, but it would make a lot of
sense if you could e.g. loop over the rows directly with a FOR loop,
e.g. "FOR recordvar IN new LOOP ... END LOOP" without having to do
"SELECT * FROM new". Introducing a new "relation variable" datatype for
this would be nice. I won't insist that you have to implement that right
now, but let's not foreclose the option to add it later.

BTW, do we expect the old/new relations to be visible to dynamic SQL,
ie. EXECUTE? I think most users would say yes, even though the old/new
variables are not - you have to pass them with EXECUTE USING. Admittedly
that supports thinking of them more as lightweight relations rather than
variables.

Another question is whether you would expect the NEW/OLD table to be
visible to functions that you call from the trigger? For example, if the
trigger does:

...
PERFORM myfunction()
...

Can myfunction do "SELECT * FROM new" ? If not, is there a work-around?
I guess that's not this patch's problem, because we don't have a way to
pass sets as arguments in general. Refcursor is the closest thing, but
it's cumbersome.

In any case, I think having the parser call back into SPI, in
parse_tuplestore.c, is a modularity violation. The tuplestores need to
somehow find their way into ParseState. If you go with the
SPI_register_tuplestore API, then you should still have the tuplestores
in the ParseState struct, and have SPI fill in that information. Now
that I look back, I think you also referred to that earlier in the
paragraph that I didn't understand. Let me try again:

> I see your point, but SPI first has to be made aware of the
> tuplestores and their corresponding names and TupleDesc structures.
> Does it make sense to keep the SPI_register_tuplestore() and
> SPI_unregister_tuplestore() functions for the client side of the
> API, and pass things along to the parse analysis through execution
> phases using the techniques you describe? That would eliminate the
> need for the SPI_get_caller_tuplestore() function and the
> parse_tuplestore.[ch] files, and change how the data is fetched in
> parse analysis and execution phases, but that seems fairly minimal
> -- there are exactly three places that would need to call the new
> hooks where the patch is now getting the information from the
> registry.

Yes, if you decide to keep the SPI_register_tuplestore() function, then
IMHO you should still use hooks to pass that information to the parser,
planner and executor.

- Heikki



Tom Lane
2014-09-01 18:09:44 UTC
Permalink
Heikki Linnakangas <***@vmware.com> writes:
> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>> If we were to go with the hooks as you propose, we would still need
>> to take the information from TriggerData and put it somewhere else
>> for the hook to reference.

> Sure.

FWIW, I agree with Heikki on this point. It makes a lot more sense for
the parser to provide hooks comparable to the existing hooks for resolving
column refs, and it's not apparent that loading such functionality into
SPI is sane at all.

OTOH, I agree with Kevin that the things we're talking about are
lightweight relations not variables.

regards, tom lane


Marti Raudsepp
2014-09-02 14:46:21 UTC
Permalink
On Mon, Sep 1, 2014 at 9:09 PM, Tom Lane <***@sss.pgh.pa.us> wrote:
> OTOH, I agree with Kevin that the things we're talking about are
> lightweight relations not variables.

My worry is that PL/pgSQL and Postgres's SQL dialect is turning into a
Frankenstein monster with many ways to do the same thing, each having
different semantics that require effort to reason about.

Variables and function arguments are non-controversial: every
experienced coder understands their semantics without thinking twice
-- even if they're not familiar with Postgres.

The concept of "lightweight relations" that pop into existence when a
certain kind of trigger definition is used somewhere in the function
stack, without a CREATE TABLE, without being discoverable in
information_schema etc., I find needs some more justification than
I've seen in this thread. So far I've only heard that it's more
convenient to implement in the current PostgreSQL code base.

I'm sure more questions would pop up in practice, but as Heikki
mentioned: Are such relations also visible to other functions called
by the trigger function?
* If yes, this introduces non-obvious dependencies between functions.
What happens when one trigger with delta relations invokes another
trigger, does the previous one get shadowed or overwritten? What are
the interactions with search_path? Can an unprivileged function
override relation names when calling a SECURITY DEFINER function?

* If not, this further inhibits developers from properly modularizing
their trigger code (this is already a problem due to the current magic
trigger variables).

Even if these questions have reasonable answers, it takes mental
effort to remember the details. Procedure code debugging, especially
triggers, is hard enough due to poor tooling; increasing the cognitive
load should not be done lightly.

You could argue that CREATE TEMP TABLE already has some of these
problems, but it's very rare that people actually need to use that. If
delta relations get built on this new mechanism, avoiding them won't
be an option any more.

Regards,
Marti


Kevin Grittner
2014-09-03 19:49:29 UTC
Permalink
Marti Raudsepp <***@juffo.org> wrote:
> On Mon, Sep 1, 2014 at 9:09 PM, Tom Lane <***@sss.pgh.pa.us> wrote:

> The concept of "lightweight relations" that pop into existence when a
> certain kind of trigger definition is used somewhere in the function
> stack, without a CREATE TABLE, without being discoverable in
> information_schema etc., I find needs some more justification than
> I've seen in this thread. So far I've only heard that it's more
> convenient to implement in the current PostgreSQL code base.

It is required by the SQL standard.

> I'm sure more questions would pop up in practice, but as Heikki
> mentioned: Are such relations also visible to other functions called
> by the trigger function?
> * If yes, this introduces non-obvious dependencies between functions.
> What happens when one trigger with delta relations invokes another
> trigger, does the previous one get shadowed or overwritten?

This is indeed a killer objection.  As things stand in the patch, a
function called from a trigger function might have the table of the
same name (if it's not a schema-qualified reference) shadowed,
or it might not -- depending on whether it was already planned.
That's obviously not acceptable.  Passing the metadata from the
TriggerData structure to the PLpgSQL_execstate structure to the
PLpgSQL_expr structure and on to the ParseState structure, and
passing it down to child ParseState structures as needed, along
with similar passing of the Tuplestorestate pointer (and associated
name) to the execution state structures should fix that.

> What are the interactions with search_path?

Pretty much the same as the interactions of RTEs with search_path.
If the apparent relation name is not schema-qualified, parse
analysis first tries to resolve the name as an RTE, and if that
fails it tries to resolve it as a named tuplestore, and if that
fails it goes to the catalogs using search_path.

> Can an unprivileged function override relation names when calling
> a SECURITY DEFINER function?

By changing things to the way Heikki and Tom suggest, any called
functions are not aware of or affected by a named tuplestore in the
caller's context.  (Changing *back*, actually -- I had this largely
done that way before; but it seemed like a rather fragile relay
race, passing the baton from one structure to another at odd
places.  I guess there's no helping that.  Or maybe once I post a
version changed back to that someone can show me something I missed
that makes it better.)

> You could argue that CREATE TEMP TABLE already has some of these
> problems, but it's very rare that people actually need to use that. If
> delta relations get built on this new mechanism, avoiding won't be an
> option any more.

Not true -- you don't have them unless you request them in CREATE
TRIGGER.  Nobody can be using this now, so a table owner must
*choose* to add the REFERENCING clause to the CREATE TRIGGER
statement for it to matter in the trigger function that is then
referenced.  Perhaps if we implement the ability to specify the
trigger code in the CREATE TRIGGER statement itself (rather than
requiring that a trigger function be created first) it will be
easier to look at and cope with.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Kevin Grittner
2014-09-03 20:22:18 UTC
Permalink
Kevin Grittner <***@ymail.com> wrote:
> Marti Raudsepp <***@juffo.org> wrote:

>> What are the interactions with search_path?

>
> Pretty much the same as the interactions of RTEs with search_path.
> If the apparent relation name is not schema-qualified, parse
> analysis first tries to resolve the name as an RTE, and if that
> fails it tries to resolve it as a named tuplestore, and if that
> fails it goes to the catalogs using search_path.

Argh. s/RTE/CTE/

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Marti Raudsepp
2014-09-03 20:55:00 UTC
Permalink
On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner <***@ymail.com> wrote:
> Marti Raudsepp <***@juffo.org> wrote:
>> The concept of "lightweight relations" that pop into existence when a
>> certain kind of trigger definition is used somewhere in the function
>> stack, without a CREATE TABLE, without being discoverable in
>> information_schema etc., I find needs some more justification than
>> I've seen in this thread. So far I've only heard that it's more
>> convenient to implement in the current PostgreSQL code base.
>
> It is required by the SQL standard.

I had a cursory read of the SQL 20nn draft and I don't get this
impression. The only place I could find discussing the behavior of
"transition tables" is in Foundation "4.39.1 General description of
triggers", which says:

"Special variables make the data in the transition table(s) available
to the triggered action. For a statement-level
trigger the variable is one whose value is a transition table."

There is no information about the scoping of such variables, so I
assume it refers to a regular locally scoped variable.

Did I miss something? Are you reading a different version of the spec?

Regards,
Marti


Kevin Grittner
2014-09-03 21:14:17 UTC
Permalink
Marti Raudsepp <***@juffo.org> wrote:
> On Wed, Sep 3, 2014 at 10:49 PM, Kevin Grittner <***@ymail.com> wrote:
>> Marti Raudsepp <***@juffo.org> wrote:
>>> The concept of "lightweight relations" that pop into existence when a
>>> certain kind of trigger definition is used somewhere in the function
>>> stack, without a CREATE TABLE, without being discoverable in
>>> information_schema etc., I find needs some more justification than
>>> I've seen in this thread. So far I've only heard that it's more
>>> convenient to implement in the current PostgreSQL code base.
>>
>> It is required by the SQL standard.
>
> I had a cursory read of the SQL 20nn draft and I don't get this
> impression. The only place I could find discussing the behavior of
> "transition tables" is in Foundation "4.39.1 General description of
> triggers", which says:
>
> "Special variables make the data in the transition table(s) available
> to the triggered action. For a statement-level
> trigger the variable is one whose value is a transition table."
>
> There is no information about the scoping of such variables, so I
> assume it refers to a regular locally scoped variable.
>
> Did I miss something?

Apparently. I did a search on the document and counted and got 101
occurrences of "transition table". I might be off by a few, but
that should be pretty close. Perhaps this, from 4.14, most directly
answers your point:

| A transient table is a named table that may come into existence
| implicitly during the evaluation of a <query expression> or the
| execution of a trigger. A transient table is identified by a
| <query name> if it arises during the evaluation of a <query
| expression>, or by a <transition table name> if it arises during
| the execution of a trigger. Such tables exist only for the
| duration of the executing SQL-statement containing the <query
| expression> or for the duration of the executing trigger.

> Are you reading a different version of the spec?

I'm looking at a draft of 200x from 2006-02-01.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Marti Raudsepp
2014-10-08 22:36:12 UTC
Permalink
On Thu, Sep 4, 2014 at 12:14 AM, Kevin Grittner <***@ymail.com> wrote:
>> Did I miss something?
>
> Apparently. I did a search on the document and counted and got 101
> occurrences of "transition table".
> | A transient table is a named table that may come into existence
> | implicitly during the evaluation of a <query expression> or the
> | execution of a trigger.

D'oh, I was reading only the sections about triggers. You are correct.

Anyway, I tried out the latest from your GitHub branch and it seems
most of my concerns no longer apply to the current version, as
transition tables are now local to the trigger function. Thanks.

----
Not sure if you're looking for feedback on this level yet, but I tried
breaking it and found that transition tuplestores don't work with
cursors.

Probably not a valid use case once we have some other way to pass
tuplestores between functions. I don't know if it could work anyway,
as cursors may outlive the trigger call. But in that case, a better
error message is in order.

create table test1(i int);
create or replace function test1trg() returns trigger language plpgsql as $$
declare
    curs cursor for select * from newtab;
    r record;
begin
    for r in curs loop
    end loop;
    return new;
end;$$;
create trigger test1trg after insert on test1
    referencing new table as newtab
    execute procedure test1trg();
insert into test1 values(1);
ERROR: executor could not find named tuplestore "newtab"
CONTEXT: PL/pgSQL function test1trg() line 6 at FOR over cursor

----
I still see a chance of introducing security problems with SECURITY
DEFINER trigger functions. An attacker can shadow table names
queried by such a function and inject arbitrary data into it. Similar
to search_path vulnerabilities, but there are ways to avoid that.

Perhaps there should be a restriction when using REFERENCING and the
function is SECURITY DEFINER: require that the trigger definer (if not
superuser) matches the function owner?

Regards,
Marti


Kevin Grittner
2014-09-15 14:25:37 UTC
Permalink
Tom Lane <***@sss.pgh.pa.us> wrote:
> Heikki Linnakangas <***@vmware.com> writes:
>
>> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>>> If we were to go with the hooks as you propose, we would still need
>>> to take the information from TriggerData and put it somewhere else
>>> for the hook to reference.
>
>> Sure.
>
> FWIW, I agree with Heikki on this point. It makes a lot more sense for
> the parser to provide hooks comparable to the existing hooks for resolving
> column refs, and it's not apparent that loading such functionality into
> SPI is sane at all.
>
> OTOH, I agree with Kevin that the things we're talking about are
> lightweight relations not variables.

Try as I might, I was unable to find any sensible way to use hooks.
If the issue was only the parsing, the route was fairly obvious,
but the execution side needs to access the correct tuplestore(s)
for each run, too -- so the sort of information provided by
relcache needed to be passed in to based on the context within the
process (i.e., if you have nested triggers firing, each needs to
use a different tuplestore with the same name in the same
function, even though it's using the same plan). On both sides it
seemed easier to pass things through the same sort of techniques as
"normal" parameters; I couldn't find a way to use hooks that didn't
just make things uglier.

I see the down side of making this a feature which can only be used
from SPI, so I've updated the patch to allow it from other
contexts. On the other hand, I see many uses for it where SPI *is*
used, and the SPI interface needs to change to support that. The
functions I had added are one way to do that on the
parsing/planning side without breaking any existing code. The same
information (i.e., the metadata) needs to be passed to the executor
along with references to the actual tuplestores, and that needs to
go through a different path -- at least for the plpgsql usage.

I broke out the changes from the previous patch in multiple commits
in my repository on github:

commit 2520db8fbb41c68a82c2c750c8543154c6d85eb9
Author: Kevin Grittner <***@postgresql.org>
Date: Mon Sep 15 01:17:14 2014 -0500

Use executor state rather than SPI to get named tuplestores.

spi.c and trigger.c will need a little more clean-up to match this,
and it's likely that not all places that need to pass the baton are
doing so, but at least this passes regression tests.

commit de9067258125226d1625f160c3eee9aff90ca598
Author: Kevin Grittner <***@postgresql.org>
Date: Sun Sep 14 22:01:04 2014 -0500

Pass the Tsrcache through ParserState to parse analysis.

commit e94779c1e22ec587446a7aa2593ba5f102b6a28b
Author: Kevin Grittner <***@postgresql.org>
Date: Sun Sep 14 19:23:45 2014 -0500

Modify SPI to use Tsrcache instead of List.

commit 7af841881d9113eb4c8dca8e82dc1867883bf75d
Author: Kevin Grittner <***@postgresql.org>
Date: Sun Sep 14 18:45:54 2014 -0500

Create context-specific Tsrcache.

Not used yet.

commit 35790a4b6c236d09e0be261be9b0017d34eaf9c9
Author: Kevin Grittner <***@postgresql.org>
Date: Sun Sep 14 16:49:37 2014 -0500

Create Tsrmd structure so tuplestore.h isn't needed in parser.

commit 93d57c580da095b71d9214f69fede71d2f8ed840
Author: Kevin Grittner <***@postgresql.org>
Date: Sun Sep 14 15:59:45 2014 -0500

Improve some confusing naming for TuplestoreRelation node.

Anyone who has already reviewed the earlier patch may want to look
at these individually:

https://github.com/kgrittn/postgres/compare/transition

Attached is a new patch including all of that. Hopefully I haven't
misunderstood what Heikki and Tom wanted; if I have, just let me
know where I went wrong.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Heikki Linnakangas
2014-09-23 16:46:49 UTC
Permalink
On 09/15/2014 05:25 PM, Kevin Grittner wrote:
> Tom Lane <***@sss.pgh.pa.us> wrote:
>> Heikki Linnakangas <***@vmware.com> writes:
>>
>>> On 08/30/2014 12:15 AM, Kevin Grittner wrote:
>>>> If we were to go with the hooks as you propose, we would still need
>>>> to take the information from TriggerData and put it somewhere else
>>>> for the hook to reference.
>>
>>> Sure.
>>
>> FWIW, I agree with Heikki on this point. It makes a lot more sense for
>> the parser to provide hooks comparable to the existing hooks for resolving
>> column refs, and it's not apparent that loading such functionality into
>> SPI is sane at all.
>>
>> OTOH, I agree with Kevin that the things we're talking about are
>> lightweight relations not variables.
>
> Try as I might, I was unable to find any sensible way to use hooks.
> If the issue was only the parsing, the route was fairly obvious,
> but the execution side needs to access the correct tuplestore(s)
> for each run, too -- so the sort of information provided by
> relcache needed to be passed in based on the context within the
> process (i.e., if you have nested triggers firing, each needs to
> use a different tuplestore with the same name in the same
> function, even though it's using the same plan). On both sides it
> seemed easier to pass things through the same sort of techniques as
> "normal" parameters; I couldn't find a way to use hooks that didn't
> just make things uglier.

Hmph. You didn't actually use the same sort of techniques we use for
normal parameters. You invented a new TsrCache registry, which marries
the metadata for planning with the tuplestore itself. That's quite
different from the way we deal with parameters (TsrCache is a misnomer,
BTW; it's not a cache, but the primary source of information for the
planner). And instead of passing parameters to the SPI calls
individually, you invented SPI_register_tuplestore which affects all
subsequent SPI calls.

To recap, this is how normal parameters work:

In the parse stage, you pass a ParserSetupHook function pointer to the
parser. The parser calls the callback, which sets up more hooks in the
ParseState struct: a column-ref hook and/or a param ref hook. The parser
then proceeds to parse the query, and whenever it sees a reference to a
column that it doesn't recognize, or a $n style parameter marker, it
calls the column-ref or param-ref hook.

The column- or param-ref hook can return a Param node, indicating that
the parameter's value will be supplied later, at execution time. The
Param node contains a numeric ID for the parameter, and the type OID and
other information needed to complete the parsing.

At execution time, you pass a ParamListInfo struct to the executor. It
contains values for all the parameters. Alternatively, the values can be
supplied lazily, by providing a param-fetch hook in the ParamListInfo
struct. Whenever the executor needs the value of a parameter, and the
ParamListInfo struct doesn't contain it, it calls the paramFetch hook
which should fill it in ParamListInfo.
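
To make that recap concrete, here is a condensed sketch of the wiring;
the hooks, node types, and fields are the existing APIs, while the my_*
names and the hard-coded parameter type are simplifications for the
example:

#include "postgres.h"
#include "catalog/pg_type.h"
#include "nodes/params.h"
#include "parser/parse_node.h"

static Node *
my_paramref_hook(ParseState *pstate, ParamRef *pref)
{
    Param      *param = makeNode(Param);

    param->paramkind = PARAM_EXTERN;
    param->paramid = pref->number;   /* the n in $n */
    param->paramtype = INT4OID;      /* a real hook would look this up */
    param->paramtypmod = -1;
    param->paramcollid = InvalidOid;
    param->location = pref->location;
    return (Node *) param;
}

static void
my_parser_setup(ParseState *pstate, void *arg)
{
    pstate->p_ref_hook_state = arg;  /* private state for the hooks */
    pstate->p_paramref_hook = my_paramref_hook;
}

At execution time the value for $n then arrives separately, through
params[n - 1] (or the paramFetch hook) of the ParamListInfo passed to
the executor.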

Now, how do we make the tuplestores work similarly? Here's what I think
we should do:

Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
Whenever the parser sees a RangeVar that it doesn't recognize (or
actually, I think it should call it *before* resolving regular tables,
but let's ignore that for now), it calls the p_tableref_hook. It can
return a new RelationParam node (similar to regular Param), which
contains a numeric ID for the table/tuplestore, as well as its tuple
descriptor.

For the execution phase, add a new array of Tuplestorestates to
ParamListInfo. Similar to the existing array of ParamExternData entries.
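
In rough outline, the proposed additions could look like the sketch
below; p_tableref_hook, ParseTableRefHook, and RelationParam are all
hypothetical names following the description above:

typedef Node *(*ParseTableRefHook) (ParseState *pstate, RangeVar *rv);

typedef struct RelationParam
{
    NodeTag     type;
    int         paramid;      /* which tuplestore, at execution time */
    TupleDesc   tupdesc;      /* row shape for the rest of parse analysis */
} RelationParam;

/* ParseState would grow:
 *     ParseTableRefHook   p_tableref_hook;
 * and ParamListInfoData would grow something like:
 *     int                 numTstores;
 *     Tuplestorestate   **tstores;     -- indexed by paramid
 */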

The next question is how to pass the new hooks and tuplestores through
the SPI interface. For prepared plans, the current SPI_prepare_params +
SPI_execute_plan_with_paramlist functions work fine. However, there
doesn't seem to be any way to do one-shot queries with a ParserSetupHook
and ParamListInfo. That seems like an oversight, and it would be nice
to address anyway.

PS. the copy/read/write functions for TuplestoreRelation in the patch
are broken; TuplestoreRelation is not a subclass of Plan. (But if you go
with the approach I'm advocating for, TuplestoreRelation in its current
form will be gone)

- Heikki


Robert Haas
2014-09-23 17:03:06 UTC
Permalink
On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
<***@vmware.com> wrote:
> On 09/15/2014 05:25 PM, Kevin Grittner wrote:
> Now, how do we make the tuplestores work similarly? Here's what I think we
> should do:
>
> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
> I think it should call it *before* resolving regular tables, but let's
> ignore that for now), it calls the p_tableref_hook. It can return a new
> RelationParam node (similar to regular Param), which contains a numeric ID
> for the table/tuplestore, as well as its tuple descriptor.
>
> For the execution phase, add a new array of Tuplestorestates to
> ParamListInfo. Similar to the existing array of ParamExternData entries.

I haven't been following this issue closely, but this sounds like a
really nice design.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Tom Lane
2014-09-23 17:51:58 UTC
Permalink
Robert Haas <***@gmail.com> writes:
> On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
> <***@vmware.com> wrote:
>> Now, how do we make the tuplestores work similarly? Here's what I think we
>> should do:
>>
>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>> I think it should call it *before* resolving regular tables, but let's
>> ignore that for now), it calls the p_tableref_hook. It can return a new
>> RelationParam node (similar to regular Param), which contains a numeric ID
>> for the table/tuplestore, as well as its tuple descriptor.
>>
>> For the execution phase, add a new array of Tuplestorestates to
>> ParamListInfo. Similar to the existing array of ParamExternData entries.

> I haven't been following this issue closely, but this sounds like a
> really nice design.

I'm on board with the parser hooks part of that. I don't especially agree
with the idea of a new sub-structure for ParamListInfo: if you do that you
will need a whole bunch of new boilerplate infrastructure to allocate,
copy, and generally manage that structure, for darn little gain. What I'd
suggest is just saying that some Params might have type INTERNAL with
Datum values that are pointers to tuplestores; then all you need to do is
remember which Param number has been assigned to the particular tuplestore
you want. There is already precedent for that in the recursive CTE code,
IIRC.
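
For concreteness, a sketch of the setup side of that idea follows.
ParamExternData, INTERNALOID, and PointerGetDatum are existing APIs;
the helper itself, and the convention that a parameter number has been
reserved for the tuplestore, are assumptions:

#include "postgres.h"
#include "catalog/pg_type.h"
#include "nodes/params.h"
#include "utils/tuplestore.h"

static void
set_tuplestore_param(ParamListInfo paramLI, int paramno,
                     Tuplestorestate *tstate)
{
    /* paramno is the 1-based parameter number reserved for this store */
    ParamExternData *prm = &paramLI->params[paramno - 1];

    prm->value = PointerGetDatum(tstate);  /* pointer smuggled as a Datum */
    prm->isnull = false;
    prm->pflags = 0;
    prm->ptype = INTERNALOID;              /* the "internal" pseudo-type */
}

The scan node can later recover the pointer with DatumGetPointer(),
roughly the way the recursive-CTE machinery hands its working table to
WorkTableScan.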

regards, tom lane


Heikki Linnakangas
2014-09-23 19:38:05 UTC
Permalink
On 09/23/2014 08:51 PM, Tom Lane wrote:
> Robert Haas <***@gmail.com> writes:
>> On Tue, Sep 23, 2014 at 12:46 PM, Heikki Linnakangas
>> <***@vmware.com> wrote:
>>> Now, how do we make the tuplestores work similarly? Here's what I think we
>>> should do:
>>>
>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>> I think it should call it *before* resolving regular tables, but let's
>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>> for the table/tuplestore, as well as its tuple descriptor.
>>>
>>> For the execution phase, add a new array of Tuplestorestates to
>>> ParamListInfo. Similar to the existing array of ParamExternData entries.
>
>> I haven't been following this issue closely, but this sounds like a
>> really nice design.
>
> I'm on board with the parser hooks part of that. I don't especially agree
> with the idea of a new sub-structure for ParamListInfo: if you do that you
> will need a whole bunch of new boilerplate infrastructure to allocate,
> copy, and generally manage that structure, for darn little gain. What I'd
> suggest is just saying that some Params might have type INTERNAL with
> Datum values that are pointers to tuplestores; then all you need to do is
> remember which Param number has been assigned to the particular tuplestore
> you want. There is already precedent for that in the recursive CTE code,
> IIRC.

Works for me.

- Heikki


Robert Haas
2014-10-22 20:10:26 UTC
Permalink
On Tue, Sep 23, 2014 at 1:51 PM, Tom Lane <***@sss.pgh.pa.us> wrote:
>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>> I think it should call it *before* resolving regular tables, but let's
>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>> for the table/tuplestore, as well as its tuple descriptor.
>>>
>>> For the execution phase, add a new array of Tuplestorestates to
>>> ParamListInfo. Similar to the existing array of ParamExternData entries.
>
>> I haven't been following this issue closely, but this sounds like a
>> really nice design.
>
> I'm on board with the parser hooks part of that. I don't especially agree
> with the idea of a new sub-structure for ParamListInfo: if you do that you
> will need a whole bunch of new boilerplate infrastructure to allocate,
> copy, and generally manage that structure, for darn little gain. What I'd
> suggest is just saying that some Params might have type INTERNAL with
> Datum values that are pointers to tuplestores; then all you need to do is
> remember which Param number has been assigned to the particular tuplestore
> you want. There is already precedent for that in the recursive CTE code,
> IIRC.

Studying this proposed design a bit further, I am a little fuzzy on
what is supposed to happen in this design during parse analysis. In
Kevin's current code, after checking whether a RangeVar might be a CTE
reference and before deciding that it must be a table reference,
scanNameSpaceForTsr() is used to check whether there's a tuplestore by
that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
which references the tuplestore by name. To me, the obvious thing to
do here seems to be to instead call p_tableref_hook and let it return
a suitable RangeTblRef (or NULL if it wishes to take no action). In
the case where the hook wants to redirect the use of that name to a
tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
that entry can store a parameter-index rather than, as in the current
design, a name. But then where does Heikki's notion of a
RelationParam get used?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Heikki Linnakangas
2014-10-22 21:04:35 UTC
Permalink
On 10/22/2014 11:10 PM, Robert Haas wrote:
> On Tue, Sep 23, 2014 at 1:51 PM, Tom Lane <***@sss.pgh.pa.us> wrote:
>>>> Add a new p_tableref_hook function pointer, similar to p_paramref_hook.
>>>> Whenever the parser sees a RangeVar that it doesn't recognize (or actually,
>>>> I think it should call it *before* resolving regular tables, but let's
>>>> ignore that for now), it calls the p_tableref_hook. It can return a new
>>>> RelationParam node (similar to regular Param), which contains a numeric ID
>>>> for the table/tuplestore, as well as its tuple descriptor.
>>>>
>>>> For the execution phase, add a new array of Tuplestorestates to
>>>> ParamListInfo. Similar to the existing array of ParamExternData entries.
>>
>>> I haven't been following this issue closely, but this sounds like a
>>> really nice design.
>>
>> I'm on board with the parser hooks part of that. I don't especially agree
>> with the idea of a new sub-structure for ParamListInfo: if you do that you
>> will need a whole bunch of new boilerplate infrastructure to allocate,
>> copy, and generally manage that structure, for darn little gain. What I'd
>> suggest is just saying that some Params might have type INTERNAL with
>> Datum values that are pointers to tuplestores; then all you need to do is
>> remember which Param number has been assigned to the particular tuplestore
>> you want. There is already precedent for that in the recursive CTE code,
>> IIRC.
>
> Studying this proposed design a bit further, I am a little fuzzy on
> what is supposed to happen in this design during parse analysis. In
> Kevin's current code, after checking whether a RangeVar might be a CTE
> reference and before deciding that it must be a table reference,
> scanNameSpaceForTsr() is used to check whether there's a tuplestore by
> that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
> which references the tuplestore by name. To me, the obvious thing to
> do here seems to be to instead call p_tableref_hook and let it return
> a suitable RangeTblRef (or NULL if it wishes to take no action). In
> the case where the hook wants to redirect the use of that name to a
> tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
> that entry can store a parameter-index rather than, as in the current
> design, a name. But then where does Heikki's notion of a
> RelationParam get used?

I was thinking that the hook would return a RelationParam. When parse
analysis sees the returned RelationParam, it adds an entry for that to
the range table, and creates the RangeTblRef for it. The way you
describe it works too, but manipulating the range table directly in the
hook seems a bit too low-level.

- Heikki


Tom Lane
2014-10-22 21:29:02 UTC
Permalink
Heikki Linnakangas <***@vmware.com> writes:
> On 10/22/2014 11:10 PM, Robert Haas wrote:
>> Studying this proposed design a bit further, I am a little fuzzy on
>> what is supposed to happen in this design during parse analysis. In
>> Kevin's current code, after checking whether a RangeVar might be a CTE
>> reference and before deciding that it must be a table reference,
>> scanNameSpaceForTsr() is used to check whether there's a tuplestore by
>> that name and, if so, then we insert a RTE with type RTE_TUPLESTORE
>> which references the tuplestore by name. To me, the obvious thing to
>> do here seems to be to instead call p_tableref_hook and let it return
>> a suitable RangeTblRef (or NULL if it wishes to take no action). In
>> the case where the hook wants to redirect the use of that name to a
>> tuplestore, it can add a range-table entry of type RTE_TUPLESTORE, and
>> that entry can store a parameter-index rather than, as in the current
>> design, a name. But then where does Heikki's notion of a
>> RelationParam get used?

> I was thinking that the hook would return a RelationParam. When parse
> analysis sees the returned RelationParam, it adds an entry for that to
> the range table, and creates the RangeTblRef for it. The way you
> describe it works too, but manipulating the range table directly in the
> hook seems a bit too low-level.

The problem with that idea is that then the API for the hook has to cover
every possible sort of RTE that hooks might wish to create; I see no
reason to restrict them to creating just one kind. I agree that the hook
should avoid *physically* manipulating the rangetable, but it seems
reasonable to expect that it can call one of the addRangeTableEntryXXX
functions provided by parse_relation.c, and then return a RangeTblEntry*
gotten that way. So hooks would have an API more or less equivalent
to, eg, transformRangeFunction().
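
A sketch of a hook written to that API might look as follows; the
p_tableref_hook itself, the addRangeTableEntryForTuplestore() function,
and the TransitionTableInfo state are all hypothetical names invented
for the example:

#include "postgres.h"
#include "parser/parse_node.h"

/* state the parser-setup hook would stash in p_ref_hook_state */
typedef struct TransitionTableInfo
{
    const char *new_table_name;    /* from REFERENCING NEW TABLE AS ... */
    int         new_table_paramno; /* parameter number assigned to it */
} TransitionTableInfo;

/* hypothetical, following the addRangeTableEntryXXX naming convention */
extern RangeTblEntry *addRangeTableEntryForTuplestore(ParseState *pstate,
                                                      RangeVar *rv,
                                                      int paramno);

static RangeTblEntry *
my_tableref_hook(ParseState *pstate, RangeVar *rv)
{
    TransitionTableInfo *info =
        (TransitionTableInfo *) pstate->p_ref_hook_state;

    if (rv->schemaname != NULL)
        return NULL;               /* transition tables are unqualified */

    if (info->new_table_name != NULL &&
        strcmp(rv->relname, info->new_table_name) == 0)
        return addRangeTableEntryForTuplestore(pstate, rv,
                                               info->new_table_paramno);

    return NULL;                   /* not ours; normal lookup proceeds */
}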

regards, tom lane


Robert Haas
2014-10-23 15:19:13 UTC
Permalink
On Wed, Oct 22, 2014 at 5:29 PM, Tom Lane <***@sss.pgh.pa.us> wrote:
>> I was thinking that the hook would return a RelationParam. When parse
>> analysis sees the returned RelationParam, it adds an entry for that to
>> the range table, and creates the RangeTblRef for it. The way you
>> describe it works too, but manipulating the range table directly in the
>> hook seems a bit too low-level.
>
> The problem with that idea is that then the API for the hook has to cover
> every possible sort of RTE that hooks might wish to create; I see no
> reason to restrict them to creating just one kind. I agree that the hook
> should avoid *physically* manipulating the rangetable, but it seems
> reasonable to expect that it can call one of the addRangeTableEntryXXX
> functions provided by parse_relation.c, and then return a RangeTblEntry*
> gotten that way. So hooks would have an API more or less equivalent
> to, eg, transformRangeFunction().

Right, that reasoning makes sense to me. Unlike regular parameters,
where the existence of the parameter is known at parse time but the
value isn't available until bind time, we would be creating a
RelationParam node and then, literally immediately, turning it into a
range-table entry. That seems like unnecessary complexity, and it's
also something we can invent later if a more compelling use case
emerges.

So what I'm imagining now is:

1. During parse analysis, p_tableref_hook gets control and calls
addRangeTableEntryForTuplestore(), creating an RTE of type
RTE_TUPLESTORE. The RTE stores an integer parameter-index.

2. Path generation doesn't need to do anything very exciting; it just
generates a Path node of type T_TuplestoreScan. The RTE is still
available, so the path itself doesn't need to know which tuplestore
we're referencing, because that information is present in the RTE.

3. At plan generation time, we look up the RTE for the path and
extract the parameter index, which is what gets stored in the
TuplestoreScan node.

4. At executor initialization time, we use the parameter index in the
TuplestoreScan to index into the EState's es_param_list_info and
retrieve the tuplestore.
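
Filling in step 4 a bit, a sketch of the executor-side fetch could look
like this; es_param_list_info and the ParamExternData layout are real,
while the TuplestoreScan's tsparam field and this helper are assumptions:

#include "postgres.h"
#include "catalog/pg_type.h"
#include "nodes/execnodes.h"
#include "nodes/params.h"
#include "utils/tuplestore.h"

static Tuplestorestate *
fetch_tuplestore(EState *estate, int tsparam)
{
    ParamListInfo paramLI = estate->es_param_list_info;
    ParamExternData *prm;

    if (paramLI == NULL || tsparam < 1 || tsparam > paramLI->numParams)
        elog(ERROR, "no value found for tuplestore parameter %d", tsparam);

    /* a full version would honor paramLI->paramFetch for lazy values */
    prm = &paramLI->params[tsparam - 1];
    if (prm->isnull || prm->ptype != INTERNALOID)
        elog(ERROR, "tuplestore parameter %d is not set", tsparam);

    return (Tuplestorestate *) DatumGetPointer(prm->value);
}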

This means that Kevin's notion of a Tsrcache goes away completely, so
a lot of the function-signature changes in his last version of the
patch can be reverted. The EState doesn't need an es_tsrcache either.
The mapping from name (OLD/NEW) to parameter index happens inside the
p_tableref_hook, and after that we use integer indices throughout.
All that seems good.

One thing that's not too clear to me is how we're imagining that the
TuplestoreScan will get its tupledesc. Right now the Tsrcache stores
essentially (tupledesc, tuplestore), but I understood the suggestions
above to imply that the ParamListInfo should point only to the
tuplestore, not to the tupledesc. I *think* the information we need
to reconstruct the TupleDesc is mostly present in the RTE; Kevin
reused the ctecoltypes, ctecoltypmods, and ctecolcollations fields to
store that information, which (a) probably requires some thought about
renaming those fields but (b) seems like it ought to be enough to
construct a viable TupleDesc. It seems that for CTEs, we somehow
engineer things so that the RecursiveUnion's target-list is such that
we can apply ExecAssignResultTypeFromTL() to it and get the tupledesc
that matches its Tuplestorestate, but I'm kinda unclear about what
makes that work and whether we can use a similar trick here.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Kevin Grittner
2014-09-23 21:22:58 UTC
Permalink
Thanks for reviewing this. I will spend some time looking at your
recommendations in detail and seeing what it would take to
implement them, and whether I agree that is better; but I wanted to
point out a couple things regarding the SPI interface where I'm not
sure you realize what's currently being done. I may want to argue
some of the rest if I don't agree after more detailed review; this
is just what jumps out on a first pass.

Heikki Linnakangas <***@vmware.com> wrote:

> instead of passing parameters to the SPI calls individually, you
> invented SPI_register_tuplestore which affects all subsequent SPI
> calls.

All subsequent SPI calls on that particular SPI connection until it
is closed, except for any tuplestores that are later unregistered.
Nested SPI connections do not automatically inherit the named
tuplestores; whatever code opens an SPI connection would need to
register them for the new context, if desired. This seemed to me
to provide minimal disruption to the existing SPI callers who might
want to use this.
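
To illustrate the intended usage pattern (the exact argument list of
SPI_register_tuplestore here is from memory and may not match the
patch precisely):

    SPI_connect();
    /* make the trigger's delta relation visible by name */
    SPI_register_tuplestore("newtab", tupdesc, new_tuplestore);
    /* any query on this connection can now reference it */
    SPI_exec("SELECT count(*) FROM newtab", 0);
    SPI_finish();    /* registrations end with the connection */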

> The next question is how to pass the new hooks and tuplestores
> through the SPI interface. For prepared plans, the current
> SPI_prepare_params + SPI_execute_plan_with_paramlist functions
> work fine.

They work fine, I guess, in the *one* place they are used.
SPI_prepare_params() is called exactly *once* from plpgsql's
pl_exec.c, and SPI_execute_plan_with_paramlist() is called twice
from the same file. There are no other calls to either from
anywhere else in the code base.

> However, there doesn't seem to be any way to do one-shot queries
> with a ParserSetupHook and ParamListInfo. That seems like an
> oversight, and would be nice to address that anyway.

There are dozens of SPI_prepare* and SPI_exec* calls scattered all
over the backend, pl, and contrib code which appear to get along
fine without that. Partly it may be because it involves something
of a modularity violation; the comment for the function used for
this call (where it *is* used) says:

/*
* plpgsql_parser_setup set up parser hooks for dynamic parameters
*
* Note: this routine, and the hook functions it prepares for, are logically
* part of plpgsql parsing. But they actually run during function execution,
* when we are ready to evaluate a SQL query or expression that has not
* previously been parsed and planned.
*/

Can you clarify what benefit you see to modifying the SPI API the
way you suggest, and what impact it might have on existing calling
code?

> PS. the copy/read/write functions for TuplestoreRelation in the
> patch are broken; TuplestoreRelation is not a subclass of Plan.

I'm not sure what you mean by "broken" -- could you elaborate? It
is, in a lot of ways, parallel to the CommonTableExpr defined a
little above it in the parsenodes.h file -- a relation which can
only be referenced by unqualified name.  It comes after CTEs in the
name-resolution search order, and before anything else.  Having such a node in the
tree after parse analysis allows a number of special cases to be
handled pretty much the same as they are for CTEs, which seems like
a good thing to me.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Heikki Linnakangas
2014-09-24 13:41:14 UTC
Permalink
On 09/24/2014 12:22 AM, Kevin Grittner wrote:
> Heikki Linnakangas <***@vmware.com> wrote:
>
>> instead of passing parameters to the SPI calls individually, you
>> invented SPI_register_tuplestore which affects all subsequent SPI
>> calls.
>
> All subsequent SPI calls on that particular SPI connection until it
> is closed, except for any tuplestores that are later unregistered.
> Nested SPI connections do not automatically inherit the named
> tuplestores; whatever code opens an SPI connection would need to
> register them for the new context, if desired. This seemed to me
> to provide minimal disruption to the existing SPI callers who might
> want to use this.

Yeah, I got that. And note that I'm not saying that's necessarily a bad
design per se - it's just that it's different from the way parameters
work, and I don't like it for that reason.

You could imagine doing the same for parameters; have a
SPI_register_param() function that you could use to register parameter
types, and the parameters could then be referenced in any SPI calls that
follow (within the same connection). But as the code stands, SPI is
stateless with respect to parameters, and tuplestores or relation
parameters should follow that lead.
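
Something along these lines, purely hypothetically, to make the
contrast concrete (no such function exists):

    void SPI_register_param(const char *name, Oid paramtype,
                            Datum value, bool isnull);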

>> The next question is how to pass the new hooks and tuplestores
>> through the SPI interface. For prepared plans, the current
>> SPI_prepare_params + SPI_execute_plan_with_paramlist functions
>> work fine. However, there doesn't seem to be any way to do one-shot
>> queries with a ParserSetupHook and ParamListInfo. That seems like an
>> oversight, and would be nice to address that anyway.
>
> There are dozens of SPI_prepare* and SPI_exec* calls scattered all
> over the backend, pl, and contrib code which appear to get along
> fine without that.

Yeah. None of the current callers have apparently needed that
functionality. But it's not hard to imagine cases where it would be
needed. For example, imagine a variant of EXECUTE '...' where all the
PL/pgSQL variables can be used in the query, like they can in static
queries:

declare
  myvar int4;
  tablename text;
begin
  ...
  EXECUTE 'insert into ' || tablename || ' values (myvar)';
end;

Currently you have to use $1 in place of the variable name, and pass the
variable's current value with USING. If you wanted to make the above
work, you would need a variant of SPI_execute that can run a one-shot
query, with a parser-hook.

Whether you want to use a parser-hook is orthogonal to whether you
want to run a one-shot query or prepare it and keep the plan.
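
In other words, what seems to be missing is a one-shot entry point
along these lines (purely illustrative; nothing with this name or
signature exists today):

    int
    SPI_execute_with_parser_hook(const char *src,
                                 ParserSetupHook parserSetup,
                                 void *parserSetupArg,
                                 ParamListInfo params,
                                 bool read_only,
                                 long tcount);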

> Partly it may be because it involves something
> of a modularity violation; the comment for the function used for
> this call (where it *is* used) says:
>
> /*
> * plpgsql_parser_setup set up parser hooks for dynamic parameters
> *
> * Note: this routine, and the hook functions it prepares for, are logically
> * part of plpgsql parsing. But they actually run during function execution,
> * when we are ready to evaluate a SQL query or expression that has not
> * previously been parsed and planned.
> */

No, that's something completely different. The comment points out that
even though plpgsql_parser_setup is in pl_comp.c, which contains code
related to compiling a PL/pgSQL function, it's actually called at
execution time, not compilation time.

> Can you clarify what benefit you see to modifying the SPI API the
> way you suggest, and what impact it might have on existing calling
> code?

Well, we'll have to keep the existing functions anyway, to avoid
breaking 3rd party code that uses them, so there would be no impact on
existing code. The benefit would be that you could use the parser hooks
and the ParamListInfo struct even when doing a one-shot query.

Or perhaps you could just use SPI_prepare_params +
SPI_execute_plan_with_paramlist even for one-shot queries. There is some
overhead when a SPIPlan has to be allocated, but maybe it's not big
enough to worry about. That would be worth measuring before adding new
functions to the SPI.

>> PS. the copy/read/write functions for TuplestoreRelation in the
>> patch are broken; TuplestoreRelation is not a subclass of Plan.
>
> I'm not sure what you mean by "broken" -- could you elaborate?

Sure:

> + /*
> + * _copyTuplestoreRelation
> + */
> + static TuplestoreRelation *
> + _copyTuplestoreRelation(const TuplestoreRelation *from)
> + {
> + TuplestoreRelation *newnode = makeNode(TuplestoreRelation);
> +
> + /*
> + * copy node superclass fields
> + */
> + CopyPlanFields((const Plan *) from, (Plan *) newnode);
> +
> + /*
> + * copy remainder of node
> + */
> + COPY_STRING_FIELD(refname);
> +
> + return newnode;
> + }

You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
That will crash, because TuplestoreRelation is nothing like a Plan:

> + /*
> + * TuplestoreRelation -
> + * synthetic node for tuplestore passed in to the query by name
> + *
> + * This is initially added to support trigger transition tables, but may find
> + * other uses, so we try to keep it generic.
> + */
> + typedef struct TuplestoreRelation
> + {
> + NodeTag type;
> + char *refname;
> + } TuplestoreRelation;

The corresponding code in outfuncs.c is similarly broken.

- Heikki



Craig Ringer
2014-09-25 02:05:10 UTC
Permalink
On 09/15/2014 10:25 PM, Kevin Grittner wrote:
> I broke out the changes from the previous patch in multiple commits
> in my repository on github:

*Thank you*

That gives me the incentive to pull it and test it.

A nice patch series published in a git repo is so much easier to work
with than a giant squashed patch as an attachment.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Kevin Grittner
2014-09-25 15:54:06 UTC
Permalink
Heikki Linnakangas <***@vmware.com> wrote:

> You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
> That will crash, because TuplestoreRelation is nothing like a Plan:

Oops. That's a copy/paste error I should have noticed. Fixed,
even though the node type might be going away. Since all of this
seems to be working very well from a user point of view, I'm going
to try to generate a lot more regression tests against the existing
code before taking another run at the API, to make sure that things
don't break in the refactoring.

I didn't hit the copy/out bugs in testing so far -- any suggestions
on a test that would exercise this code? (I'm probably missing
something obvious.)



Craig Ringer <***@2ndquadrant.com> wrote:

> On 09/15/2014 10:25 PM, Kevin Grittner wrote:
>
>> I broke out the changes from the previous patch in multiple commits
>> in my repository on github:
>
> *Thankyou*

> A nice patch series published in a git repo is so much easier to work
> with than a giant squashed patch as an attachment.

I have fixed the bug reported by Heikki; be sure to grab that.

I have been merging in changes to master as I go, so that bit rot
doesn't accumulate, but I don't squash or rebase; hopefully that
style works for you.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Craig Ringer
2014-09-26 00:39:48 UTC
Permalink
On 09/25/2014 11:54 PM, Kevin Grittner wrote:
> I have fixed the bug reported by Heikki; be sure to grab that.

Will do.

> I have been merging in changes to master as I go, so that bit rot
> doesn't accumulate, but I don't squash or rebase; hopefully that
> style works for you.

IMO it only really matters before the final push to master; before then
it's all just a matter of how you prefer to work.

I'm a big fan of rebasing my feature branches as I go:

git tag before-rebase
git pull --rebase
... do any merges during rebase ...
git tag -d before-rebase

For bug fixes I tend to commit them separately, then when I rebase I
squash them into the relevant patch. Git's "fixup! " commits are really
handy for this; if I have a commit:

Add widget support

Everyone wants more widgets.

and want to fix an issue in that commit I can just commit

fixup! Add widget support

It's spelled widget not wodget

and when I "git rebase --autosquash master" they get automatically
squashed into the relevant changeset. (I usually run with the config
rebase.autosquash enabled so this happens during my rebase pulls on top
of master).
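
(That config is just:

    git config rebase.autosquash true

for anyone who wants to try it.)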

I got in the habit while working on RLS, to keep me sane with that
patchset, and find it works well for me.

However, everyone has a different work style. Colour me happy if it's in
git at all.

--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Heikki Linnakangas
2014-09-26 12:21:28 UTC
Permalink
On 09/25/2014 06:54 PM, Kevin Grittner wrote:
> Heikki Linnakangas <***@vmware.com> wrote:
>
>> You cast the TuplestoreRelation to Plan, and pass it to CopyPlanFields.
>> That will crash, because TuplestoreRelation is nothing like a Plan:
>
> Oops. That's a copy/paste error I should have noticed. Fixed,
> even though the node type might be going away. Since all of this
> seems to be working very well from a user point of view, I'm going
> to try to generate a lot more regression tests against the existing
> code before taking another run at the API, to make sure that things
> don't break in the refactoring.
>
> I didn't hit the copy/out bugs in testing so far -- any suggestions
> on a test that would exercise this code? (I'm probably missing
> something obvious.)

There's some debugging code in tcop/postgres.c, search for
COPY_PARSE_PLAN_TREES. It won't catch everything, but probably would've
caught this one.

- Heikki



Kevin Grittner
2014-06-18 21:54:18 UTC
Permalink
Robert Haas <***@gmail.com> wrote:
> Kevin Grittner <***@ymail.com> wrote:

> Can you describe what the standard syntax for the row variables
> is, as opposed to our syntax?  Also, what's the standard syntax
> for the transition relations?

If you want either (or both), the standard has you using a
REFERENCING clause right before the FOR EACH part of the statement.
You can specify one or more sections in the format:

{ OLD | NEW } [ ROW | TABLE ] [ AS ] name

If you have more than one, they are separated by whitespace.  (No
commas or other visible delimiters.)  If you omit ROW/TABLE it
defaults to ROW.  You are only allowed to specify TABLE on an AFTER
trigger.  You are only allowed to specify ROW on a FOR EACH ROW
trigger.  (There is no prohibition against specifying TABLE on a
FOR EACH ROW trigger.)  You are only allowed to specify OLD for a
DELETE or UPDATE trigger.  (The ability for one trigger definition
to specify multiple operations is a PostgreSQL extension.)  You are
only allowed to specify NEW for an UPDATE or INSERT trigger.  You
may not repeat an entry.

Essentially, we have an implied clause on every FOR EACH ROW
trigger like:

  REFERENCING OLD ROW AS OLD NEW ROW AS NEW
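
So, concretely, a statement-level trigger written with the standard
syntax would look something like this (the table and function names
are invented for the example):

  CREATE TRIGGER accounts_audit
    AFTER UPDATE ON accounts
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE audit_account_changes();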

>> Some things which I *did* follow from the standard: these new
>> relations are only allowed within AFTER triggers, but are
>> available in both AFTER STATEMENT and AFTER ROW triggers.  That
>> is, an AFTER UPDATE ... FOR EACH ROW trigger could use both the
>> OLD and NEW row variables as well as the delta relations (under
>> whatever names we pick).  That probably won't be used very
>> often, but I can imagine some cases where it might be useful.  I
>> expect that these will normally be used in FOR EACH STATEMENT
>> triggers.
>
> I'm concerned about the performance implications of capturing the
> delta relations unconditionally.  If a particular trigger
> actually needs the delta relations, then the time spent capturing
> that information is well spent; but if it doesn't, then it's a
> waste.  There are lots of people using FOR EACH STATEMENT
> triggers right now who won't be happy if those start requiring
> O(n) additional temporary storage, where n is the number of rows
> modified by the statement.  This is likely an even bigger issue
> for per-row triggers, of course, where as you say, it probably
> won't be used often.

That is why I added a reloption which must be specifically enabled
for a table in order to generate these deltas.  That would be an
inconvenience for those wanting to use the new feature, but should
prevent a performance regression for any tables where it is not
specifically turned on.  That's not perfect, of course, because if
you turn it on for the sake of one AFTER UPDATE ... FOR EACH
STATEMENT trigger that wants the deltas, you suffer the overhead on
every AFTER trigger on that table.  Perhaps this is sufficient
reason to use the standard syntax for the new delta tables -- they
would then be generated only in the specific cases where they were
needed.  And I think I could lose the reloption.
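
For illustration, enabling the capture would look something like this
(the reloption name here is invented for the example and may not
match what the patch actually uses):

  ALTER TABLE accounts SET (capture_delta_relations = true);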

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

