Discussion:
Proposal: Store "timestamptz" of database creation on "pg_database"
Fabrízio de Royes Mello
2012-12-27 00:48:52 UTC
Hi all,

This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.

A couple of weeks ago I had trouble with a PostgreSQL instance: our
ERP showed some strange behavior, including data loss, but I searched for
ERRORs in the log files (OS and PG) and found nothing.

Looking at the files and directories in the cluster, I noticed something
strange: the date/time of the file "base/9999/PG_VERSION" (our ERP's
database) differed from when we created it. So I used the following SQL
to check the creation date/time of the databases in the cluster:

fabrizio=# SELECT datname,
fabrizio-#        (pg_stat_file('base/'||oid||'/PG_VERSION')).modification AS datcreated
fabrizio-# FROM pg_database;
  datname  |       datcreated
-----------+------------------------
 template1 | 2012-12-26 12:11:53-02
 template0 | 2012-12-26 12:11:54-02
 postgres  | 2012-12-26 12:11:54-02
 fabrizio  | 2012-12-26 12:12:02-02
(4 rows)

This isn't an elegant way to do it, but it worked fine. However, why
shouldn't we have a column to store this information?

Does somebody have another idea?

Regards,
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Josh Berkus
2012-12-27 01:08:13 UTC
Post by Fabrízio de Royes Mello
Hi all,
This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.
I agree that it would be useful. However, if we're going to get into
created dates, we should at least consider adding them to the other
catalogs, particularly pg_class.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Stephen Frost
2012-12-27 01:30:29 UTC
Post by Josh Berkus
Post by Fabrízio de Royes Mello
This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.
I agree that it would be useful. However, if we're going to get
into created dates, we should at least consider adding them to the
other catalogs, particularly pg_class.
I was thinking more-or-less the same thing. Along those lines, however,
perhaps we should put them into a separate catalog to avoid the
increased size of pg_class and friends..? Also, we'd probably have two of
those, one for global and one for per-database objects, à la pg_depend
and pg_shdepend, and then a view that brings it all together, resolves
the OIDs to names, etc.
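
For illustration, a runnable sketch of that local/shared pair plus
unifying view pattern, using the existing description catalogs as
stand-ins (the view name is made up, not an existing object):

create view all_object_descriptions as
select objoid, classoid::regclass as catalog, description
  from pg_description
union all
select objoid, classoid::regclass as catalog, description
  from pg_shdescription;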

Thanks,

Stephen
Tom Lane
2012-12-27 04:13:54 UTC
Post by Stephen Frost
Post by Josh Berkus
Post by Fabrízio de Royes Mello
This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.
I agree that it would be useful. However, if we're going to get
into created dates, we should at least consider adding them to the
other catalogs, particularly pg_class.
I was thinking more-or-less the same thing.
This has been debated, and rejected, before.

To mention just one problem, are we going to add nonstandard,
non-backwards-compatible syntax to every single kind of CREATE to allow
pg_dump to preserve the creation dates? Another interesting question is
whether we should likewise track the last ALTER time, or perhaps whether
a sufficiently major ALTER redefinition should update the creation time.

I'm inclined to think that anyone who really needs this should be
pointed at event triggers. That feature (if it gets in) will allow
people to track creation/DDL-change times with exactly the behavior
they want.

regards, tom lane
Stephen Frost
2012-12-27 04:32:44 UTC
Post by Tom Lane
To mention just one problem, are we going to add nonstandard,
non-backwards-compatible syntax to every single kind of CREATE to allow
pg_dump to preserve the creation dates?
Perhaps 'ALTER' would be a better place to put it, but concerns around
how to make pg_dump work with it hardly strike me as a serious argument
against this. I agree that we may be overloading ourselves with syntax
but that's a compromise we made long ago in order to have pg_dump be
able to act like a regular 'user'.
Post by Tom Lane
Another interesting question is
whether we should likewise track the last ALTER time, or perhaps whether
a sufficiently major ALTER redefinition should update the creation time.
Yes, tracking the last 'ALTER' time would be useful as well, as its own
field. 'ALTER' wouldn't change the 'CREATE' time, except perhaps if it
has an explicit 'make the CREATE time X' option.
Post by Tom Lane
I'm inclined to think that anyone who really needs this should be
pointed at event triggers. That feature (if it gets in) will allow
people to track creation/DDL-change times with exactly the behavior
they want.
I considered that and rejected it. Event triggers will be great to
allow people to customize and/or specialize exactly what is tracked and
how, but I dislike that they would be the only way to get this
information. I'm on the fence about whether, assuming event triggers go in,
we provide this kind of information through a 'default' set of event
triggers. I wouldn't want users to be able to modify those event
triggers and I'd expect the results to go into a system table that we
wouldn't want users messing with either.

This information could be extremely useful for forensics, debugging, ETL
processes (many of which create tables as part of their processes), etc.

Thanks,

Stephen
Josh Berkus
2012-12-27 05:33:01 UTC
Post by Stephen Frost
This information could be extremely useful for forensics, debugging, ETL
processes (many of which create tables as part of their processes), etc.
I'd say "moderately useful" at best. Quite a number of things could
make the creation dates misleading or not distinctive (think partition
replacement, restore from pg_dump, replicas, etc.). ALTER dates would
be more useful, but as Tom points out, would need the
user-configurability which can only be delivered by something like event
triggers.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Stephen Frost
2012-12-27 12:08:56 UTC
Post by Josh Berkus
Post by Stephen Frost
This information could be extremely useful for forensics, debugging, ETL
processes (many of which create tables as part of their processes), etc.
I'd say "moderately useful" at best. Quite a number of things could
make the creation dates misleading or not distinctive (think
partition replacement, restore from pg_dump, replicas, etc.).
ALTER dates would be more useful, but as Tom points out, would need
the user-configurability which can only be delivered by something
like event triggers.
To be honest, I really just don't find this to be *that* difficult and
an intuitive set of rules which are well documented feels like it'd
cover 99% of the cases. pg_dump would preserve the times (though it
could be optional), replicas should as well, etc. We haven't even
started talking about the 'hard' part, which would be a 'modification'
type of field..

Thanks,

Stephen
Dimitri Fontaine
2012-12-27 16:04:42 UTC
Hi,
Post by Tom Lane
Post by Fabrízio de Royes Mello
This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.
I'm inclined to think that anyone who really needs this should be
pointed at event triggers. That feature (if it gets in) will allow
people to track creation/DDL-change times with exactly the behavior
they want.
Agreed.
Post by Stephen Frost
To be honest, I really just don't find this to be *that* difficult and
an intuitive set of rules which are well documented feels like it'd
cover 99% of the cases. pg_dump would preserve the times (though it
could be optional), replicas should as well, etc. We haven't even
started talking about the 'hard' part, which would be a 'modification'
type of field..
Here's a complete test case that works with my current branch, with a
tricky test while at it, of course:

create table public.tracking
(
  relation     regclass primary key,
  relname      name not null,              -- in case it changes later
  relnamespace name not null,              -- same reason
  created      timestamptz default now(),
  altered      timestamptz,
  dropped      timestamptz
);

create or replace function public.track_table_activity() returns event_trigger
  language plpgsql
as $$
begin
  raise notice 'track table activity: % %', tg_tag, tg_objectid::regclass;

  if tg_operation = 'CREATE'
  then
    insert into public.tracking(relation, relname, relnamespace)
    select tg_objectid, tg_objectname, tg_schemaname;

  elsif tg_operation = 'ALTER'
  then
    update public.tracking set altered = now() where relation = tg_objectid;

  elsif tg_operation = 'DROP'
  then
    update public.tracking set dropped = now() where relation = tg_objectid;

  else
    raise notice 'unknown operation';
  end if;
end;
$$;

drop event trigger if exists track_table;

create event trigger track_table
on ddl_command_trace
when tag in ('create table', 'alter table', 'drop table')
and context in ('toplevel', 'generated', 'subcommand')
execute procedure public.track_table_activity();

drop schema if exists test cascade;

create schema test
  create table foo(id serial primary key, f1 text);

alter table test.foo add column f2 text;

select relation::regclass, * from public.tracking;

drop table test.foo;

select * from public.tracking;
-[ RECORD 1 ]+------------------------------
relation | tracking
relname | tracking
relnamespace | public
created | 2012-12-27 17:02:13.567979+01
altered |
dropped |
-[ RECORD 2 ]+------------------------------
relation | 25139
relname | foo
relnamespace | test
created | 2012-12-27 17:02:26.696039+01
altered | 2012-12-27 17:02:29.105241+01
dropped | 2012-12-27 17:02:37.834997+01


Maybe the best way to reconcile both your views would be to provide
the previous example in the event trigger docs?

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Fabrízio de Royes Mello
2012-12-27 23:02:36 UTC
Post by Dimitri Fontaine
Post by Tom Lane
Post by Fabrízio de Royes Mello
This proposal is about adding a column "datcreated" to "pg_database" to store
the "timestamp" of the database creation.
I'm inclined to think that anyone who really needs this should be
pointed at event triggers. That feature (if it gets in) will allow
people to track creation/DDL-change times with exactly the behavior
they want.
Agreed.
+1
Post by Dimitri Fontaine
Maybe the best way to reconcile both your views would be to provide
the previous example in the event trigger docs?
+1

If all of you agree, I can improve the event trigger docs...

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2012-12-28 02:14:58 UTC
Dimitri,
Post by Dimitri Fontaine
Here's a complete test case that works with my current branch, with a
tricky test while at it, of course:
Apparently I've managed to miss the tricky case..?

Sure, dropping tables, schemas, etc, would have an impact on the values.
Dropping a table and then recreating it would be akin to deleting a row
and then inserting a new one: the create value would be set to the time
the new table was created, and information about the dropped table
would be lost.

I'm not thinking of this as audit tracking where every action is logged.

I agree that what I was suggesting would be possible to implement with
event triggers, but I see that as a rather advanced feature that most
users aren't going to understand or implement. At the same time, those
more novice users are likely to be looking for this kind of information-
being told "oh, well, you *could* have been collecting it all along if
you knew about event triggers" isn't a particularly satisfying answer.

That's my 2c on it.

I agree that having the example in the docs would be nice- examples are
always good things to include.

Thanks,

Stephen
Dimitri Fontaine
2012-12-28 10:04:29 UTC
Post by Stephen Frost
Apparently I've managed to miss the tricky case..?
That shouldn't be tricky as a user, but it has been a tricky subject every
time we've talked about implementing Event Triggers in the past two
years, so I thought I would include it:

create schema test
  create table foo(id serial primary key, f1 text);

create event trigger track_table
on ddl_command_trace
when tag in ('create table', 'alter table', 'drop table')
and context in ('toplevel', 'generated', 'subcommand')
execute procedure public.track_table_activity();

The trick is that you then want to fire the event trigger for a command
in a 'subcommand' context, as seen in the logs provided by the "snitch"
example:

NOTICE: snitch event: ddl_command_end, context: SUBCOMMAND
NOTICE: tag: CREATE TABLE, operation: CREATE, type: TABLE
NOTICE: oid: 25139, schema: test, name: foo
Post by Stephen Frost
Sure, dropping tables, schemas, etc, would have an impact on the values.
We don't have, as of yet, support for a 'cascade' context. We will need
some heavy refactoring to get there, basically forcing the cascade drops
to happen via ProcessUtility(), but with a single DropStmt to handle
that, I guess it shouldn't be very hard to do.
Post by Stephen Frost
being told "oh, well, you *could* have been collecting it all along if
you knew about event triggers" isn't a particularly satisfying answer.
True that.

Now, having at least a way to do that without resorting to hacking the
backend or writing a C-coded extension sure feels like a nice enough
answer to me here.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Fabrízio de Royes Mello
2012-12-28 12:05:57 UTC
Hi all,

And about the proposal that originated this thread... I proposed only to add
a column to the shared catalog "pg_database" with the timestamp of its creation.

Event triggers don't cover the "CREATE DATABASE" statement.

Regards,
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Josh Berkus
2012-12-28 18:53:05 UTC
Post by Fabrízio de Royes Mello
Hi all,
And about the proposal that originated this thread... I proposed only to add
a column to the shared catalog "pg_database" with the timestamp of its creation.
Event triggers don't cover the "CREATE DATABASE" statement.
Works for me, in that case.

You'd need something a lot more mature than checking the file timestamp
on PG_VERSION, though.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Dimitri Fontaine
2012-12-29 14:12:07 UTC
Post by Fabrízio de Royes Mello
Event triggers don't cover the "CREATE DATABASE" statement.
The reason is that you create Event Triggers in a specific
database and they only get run when you happen to be connected to that
specific database.

So, for example, say you install your Event Trigger in the "postgres"
database but then do a CREATE DATABASE while connected to "mydb": the
Event Trigger is not installed there and will not fire.

It's the same analysis for tablespaces and roles, and for all global
objects in fact. I don't think there's much of a technical
implementation reason not to support Event Triggers on those; it's
all about POLA violation.

I personally think that, given good documentation coverage, having Event
Triggers on global objects could be useful enough, even if you would have
to install them in every database when you want them to fire no matter
what.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Stephen Frost
2012-12-29 14:34:41 UTC
Dimitri,
Post by Dimitri Fontaine
I personally think that, given good documentation coverage, having Event
Triggers on global objects could be useful enough, even if you would have
to install them in every database when you want them to fire no matter
what.
I disagree. If we're going to have what are essentially 'global' event
triggers, then they should go into a shared catalog- the user shouldn't
be required to install them everywhere to get coverage. In addition,
they should always fire in the same database (eg: postgres), so you
could reasonably have a single log of 'CREATE DATABASE' commands being
run. Of course, then we get into the technical issues which prevent
that, such as having one backend connected to database xyz but needing
to run commands in the postgres database.

So, for my 2c, I do think there's a technical challenge which would have
to be overcome to have global event triggers.

Thanks,

Stephen
Dimitri Fontaine
2012-12-29 14:45:00 UTC
Post by Stephen Frost
I disagree. If we're going to have what are essentially 'global' event
triggers, then they should go into a shared catalog- the user shouldn't
be required to install them everywhere to get coverage. In addition,
I understand your viewpoint, and if we think we will be able to get
that in the future, then I think we should be careful not to implement
something else in the meantime.
Post by Stephen Frost
they should always fire in the same database (eg: postgres), so you
could reasonably have a single log of 'CREATE DATABASE' commands being
run. Of course, then we get into the technical issues which prevent
that, such as having one backend connected to database xyz but needing
to run commands in the postgres database.
So, for my 2c, I do think there's a technical challenge which would have
to be overcome to have global event triggers.
It sounds to me like either autonomous transactions with the capability
to run the independent transaction in another database, or some creative
dblink use case. Another approach would be to get plproxy into core
as a Foreign Data Wrapper for FOREIGN FUNCTION that would target
PostgreSQL.

Given that, we could maybe have an internal setup that allows us to run
foreign functions in the postgres database from any other one, providing
what we need for Global Event Triggers.
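
For what it's worth, the "creative dblink" option is roughly this (a
sketch only, assuming the dblink extension is installed and a made-up
ddl_log table exists in the postgres database):

select dblink_exec(
  'dbname=postgres',
  format('insert into ddl_log(dbname, command, logged_at)'
         ' values (%L, %L, now())',
         current_database(), 'CREATE DATABASE mydb'));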

Oh, I don't see that happening in 9.3.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Stephen Frost
2012-12-29 14:59:49 UTC
Post by Dimitri Fontaine
It sounds to me like either autonomous transactions with the capability
to run the independent transaction in another database, or some creative
dblink use case. Another approach would be to get plproxy into core
as a Foreign Data Wrapper for FOREIGN FUNCTION that would target
PostgreSQL.
Given that, we could maybe have an internal setup that allows us to run
foreign functions in the postgres database from any other one, providing
what we need for Global Event Triggers.
Of those, I'd think autonomous transactions are by far the most likely,
and also useful for other situations. I don't see dblink or plproxy
being used for this. Having some internal setup which allows us to run
foreign functions in other databases seems more-or-less akin to
autonomous transactions also.
Post by Dimitri Fontaine
Oh, I don't see that happening in 9.3.
I agree, didn't mean to imply otherwise.

Thanks,

Stephen
Andres Freund
2012-12-29 15:26:18 UTC
Post by Stephen Frost
Post by Dimitri Fontaine
It sounds to me like either autonomous transactions with the capability
to run the independent transaction in another database, or some creative
dblink use case. Another approach would be to get plproxy into core
as a Foreign Data Wrapper for FOREIGN FUNCTION that would target
PostgreSQL.
Given that, we could maybe have an internal setup that allows us to run
foreign functions in the postgres database from any other one, providing
what we need for Global Event Triggers.
Of those, I'd think autonomous transactions are by far the most likely,
and also useful for other situations. I don't see dblink or plproxy
being used for this. Having some internal setup which allows us to run
foreign functions in other databases seems more-or-less akin to
autonomous transactions also.
I don't think autonomous transactions are the biggest worry
here. Transactions essentially already span multiple databases, so that's
not really a problem in this context. Making it possible to change
catalogs while still being active in another database seems *far*
harder. To the point where I would say it's not really feasible.

A shared table for event triggers sounds like it would be the far easier
solution (9.4+ that is).

Greetings,

Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Stephen Frost
2012-12-29 15:49:23 UTC
Post by Andres Freund
I don't think autonomous transactions are the biggest worry
here. Transactions essentially already span multiple databases, so that's
not really a problem in this context. Making it possible to change
catalogs while still being active in another database seems *far*
harder. To the point where I would say it's not really feasible.
There are two pieces: one is changing catalogs, and the other is being able
to have multiple top-level transactions running in a single backend. I
agree that transactions already span multiple databases, but I was
expecting the global event trigger to need to run in its own transaction
in the other database, similar to autonomous transactions (though those
could be running in the same database, thus avoiding the catalog-switch
issue). Spawning a new backend which connects to any database and gets
a new transaction would handle both, which is what I was thinking about.
Post by Andres Freund
A shared table for event triggers sounds like it would be the far easier
solution (9.4+ that is).
But what happens when it fires and tries to insert a record into a
table..? Does that table have to exist in every database, or does the
event fail? If it exists in every database, the admin/user/whomever has
to go hunting through all the databases to get a complete picture..
Neither is very good, imv.

Thanks,

Stephen
Robert Haas
2013-01-03 01:35:00 UTC
Post by Andres Freund
A shared table for event triggers sounds like it would be the far easier
solution (9.4+ that is).
The problem is that the event trigger table is just a pointer to a
function, and there's no procedure OID to store in that shared catalog
unless you also have a proposal for making pg_proc into a shared
catalog ... which would also require making pg_language into a shared
catalog, and maybe a few others.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Stephen Frost
2013-01-03 02:01:48 UTC
Post by Robert Haas
Post by Andres Freund
A shared table for event triggers sounds like it would be the far easier
solution (9.4+ that is).
The problem is that the event trigger table is just a pointer to a
function, and there's no procedure OID to store in that shared catalog
unless you also have a proposal for making pg_proc into a shared
catalog ... which would also require making pg_language into a shared
catalog, and maybe a few others.
This was why I was suggesting that there be a single database in which
the events would actually fire and that's where the function itself
would also be stored. The information to pass to the function would
have to be collected and represented logically from the calling
database, of course, and it wouldn't be possible to make changes in the
database where the modification happened without using something like
dblink, but I could still see there being a lot of good use cases for
such a thing.

All pie-in-the-sky currently though, of course, but that's along the
lines of what I was thinking.

Thanks,

Stephen
Hannu Krosing
2013-01-03 10:03:17 UTC
On 12/28/2012 03:14 AM, Stephen Frost wrote:
...
Post by Stephen Frost
I agree that what I was suggesting would be possible to implement with
event triggers, but I see that as a rather advanced feature that most
users aren't going to understand or implement. At the same time, those
more novice users are likely to be looking for this kind of
information- being told "oh, well, you *could* have been collecting it
all along if you knew about event triggers" isn't a particularly
satisfying answer. That's my 2c on it. I agree that having the example
in the docs would be nice- examples are always good things to include.
If what you want is something close to current unix file time semantics
(ctime, mtime, atime), then why not just create a function to look up
these attributes on the database directory and/or database files?
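
At the database level, that much already works today with pg_stat_file()
(superuser-only; a sketch of the idea, assuming databases live in the
default tablespace, as in Fabrízio's query upthread):

select datname,
       (pg_stat_file('base/' || oid)).access       as atime,
       (pg_stat_file('base/' || oid)).modification as mtime,
       (pg_stat_file('base/' || oid)).change       as ctime
  from pg_database;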

----------------
Hannu
Andres Freund
2013-01-03 10:18:08 UTC
Post by Hannu Krosing
...
Post by Stephen Frost
I agree that what I was suggesting would be possible to implement with
event triggers, but I see that as a rather advanced feature that most
users aren't going to understand or implement. At the same time, those
more novice users are likely to be looking for this kind of information-
being told "oh, well, you *could* have been collecting it all along if you
knew about event triggers" isn't a particularly satisfying answer. That's
my 2c on it. I agree that having the example in the docs would be nice-
examples are always good things to include.
If what you want is something close to current unix file time semantics
(ctime, mtime, atime), then why not just create a function to look up
these attributes on the database directory and/or database files?
Because too many things change those. Moving to a different tablespace,
a rewriting ALTER TABLE, etc.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hannu Krosing
2013-01-03 11:34:23 UTC
Post by Andres Freund
Post by Hannu Krosing
...
Post by Stephen Frost
I agree that what I was suggesting would be possible to implement with
event triggers, but I see that as a rather advanced feature that most
users aren't going to understand or implement. At the same time, those
more novice users are likely to be looking for this kind of information-
being told "oh, well, you *could* have been collecting it all along if you
knew about event triggers" isn't a particularly satisfying answer. That's
my 2c on it. I agree that having the example in the docs would be nice-
examples are always good things to include.
If what you want is something close to current unix file time semantics
(ctime, mtime, atime), then why not just create a function to look up
these attributes on the database directory and/or database files?
Because too many things change those. Moving to a different tablespace,
a rewriting ALTER TABLE, etc.
Can't we actually fix these to preserve the file creation date, like tar
does, and still keep unix file semantics?

So is it about agreeing on what we actually want this "create time" to
mean, opening a can of worms as Tom predicted?

For example, how would this work in a replication context?
Stephen Frost
2013-01-03 13:17:46 UTC
Post by Hannu Krosing
Can't we actually fix these to preserve the file creation date, like tar
does, and still keep unix file semantics?
I'm not sure that I really see the advantage to trying to use the
filesystem to keep this information for us..?
Post by Hannu Krosing
So is it about agreeing on what we actually want this "create time" to
mean, opening a can of worms as Tom predicted?
I agree that we need to hash out what, exactly, the values mean, but I
don't think that's a terribly difficult thing to do.
Post by Hannu Krosing
For example, how would this work in a replication context?
If it's stored in the database catalogs, this is clear- it's replicated
just like the catalog, and then you don't have to worry about trying to
ensure that the file creation timestamp in the filesystem is right...

Thanks,

Stephen
Hannu Krosing
2013-01-03 13:40:33 UTC
Post by Stephen Frost
Post by Hannu Krosing
Can't we actually fix these to preserve the file creation date, like tar
does, and still keep unix file semantics?
I'm not sure that I really see the advantage to trying to use the
filesystem to keep this information for us..?
If we treated "database" as a file in this case, then it would give
us a pre-defined meaning :)
Post by Stephen Frost
Post by Hannu Krosing
So is it about agreeing on what we actually want this "create time" to
mean, opening a can of worms as Tom predicted?
I agree that we need to hash out what, exactly, the values mean, but I
don't think that's a terribly difficult thing to do.
Post by Hannu Krosing
For example, how would this work in a replication context?
If it's stored in the database catalogs, this is clear- it's replicated
just like the catalog, and then you don't have to worry about trying to
ensure that the file creation timestamp in the filesystem is right...
But then some customer comes and wants it to mean "when was this replica
database created"?
Stephen Frost
2013-01-03 13:42:49 UTC
Post by Hannu Krosing
But then some customer comes and wants it to mean "when was this replica
database created"?
That's an entirely different question, imv, than what we're talking
about.

I'm not saying that it won't be asked, but as it's a different question,
we can look to answer it in a different way.

Thanks,

Stephen
Hannu Krosing
2013-01-03 13:46:20 UTC
Post by Stephen Frost
Post by Hannu Krosing
But then some customer comes and wants it to mean "when was this replica
database created"?
That's an entirely different question, imv, than what we're talking
about.
I'm not saying that it won't be asked, but as it's a different question,
we can look to answer it in a different way.
How is "what does database creation date mean?" a different question ?

It is same question as :

what is the creation date of db when I create a replica of my database
from backup?

does it depend on how I restore my replica ?

can I restore it from pg_dump and still have same creation date ?

etc. etc.

--------
Hannu
Robert Haas
2013-01-03 14:09:22 UTC
Post by Hannu Krosing
How is "what does database creation date mean?" a different question ?
what is the creation date of db when I create a replica of my database from
backup?
does it depend on how I restore my replica ?
can I restore it from pg_dump and still have same creation date ?
etc. etc.
I think you (and Tom) are doing an excellent job of making a simple
problem seem complicated. Suppose a man comes walking out of the
desert looking exhausted and collapses on our front doorstep,
muttering, in a semi-conscious state, the single word "water". Now
this is a somewhat incoherent utterance, so there are several
objections that might be raised:

- It is not clear what the man wants done with the water.
- The amount of water to be provided is unspecified.
- Does he want tap water, bottled water, or club soda?
- Furthermore, if we do give him water, he might go on to ask for a
few crackers and a phone call; we could end up spending the whole
morning on this.
- In a situation of extreme thirst, a solution involving a proper
electrolyte balance would likely be superior to plain water.

Of course, these objections miss the point. Even an imperfect
solution will be better than no solution at all. And it is very
likely that if we simply provide whatever hydrating agent lies closest
to hand, we'll get full marks.

Similarly, in the present situation, I believe that there is little
reason to suppose that the simplest possible implementation of this
feature won't resolve the overwhelming majority of the needs that
people have. We have many features about which users might raise the
same kinds of questions that you are raising about this one, and they
do, and those questions are perfectly valid. But they are not reasons
to remove those features, and the questions you raise are not reasons
to avoid having this one. They are simply things that must be
documented and explained, just as we need to do with every other
feature we ship. And if someone is not perfectly happy with the
design, it won't be the first time for that, either. It does not mean
that it's worse than not having anything.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hannu Krosing
2013-01-03 16:15:31 UTC
Post by Robert Haas
Post by Hannu Krosing
How is "what does database creation date mean?" a different question ?
what is the creation date of db when I create a replica of my database from
backup?
does it depend on how I restore my replica ?
can I restore it from pg_dump and still have same creation date ?
etc. etc.
...
Post by Robert Haas
Of course, these objections miss the point. Even an imperfect
solution will be better than no solution at all. And it is very
likely that if we simply provide whatever hydrating agent lies closest
to hand, we'll get full marks.
This is what I did with my sample pl/python function ;)
Post by Robert Haas
Similarly, in the present situation, I believe that there is little
reason to suppose that the simplest possible implementation of this
feature won't resolve the overwhelming majority of the needs that
people have. We have many features about which users might raise the
same kinds of questions that you are raising about this one, and they
do, and those questions are perfectly valid. But they are not reasons
to remove those features, and the questions you raise are not reasons
to avoid having this one. They are simply things that must be
documented and explained, just as we need to do with every other
feature we ship. And if someone is not perfectly happy with the
design, it won't be the first time for that, either. It does not mean
that it's worse than not having anything.
If we made sure that things like CLUSTER or moving to
another tablespace would keep the file ctime, then this would
answer 98% of requests.

Even without keeping them, this would be giving the chap "water"...

So my proposal is to just have a pg_database_createtime(dbname)
function and solve the simple part of the problem.
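
One possible shape for that (a sketch using the existing pg_stat_file(),
so really the file's change time rather than a true creation time, with
all the caveats discussed above; it also assumes the default tablespace):

create or replace function pg_database_createtime(dbname text)
returns timestamptz language sql as $$
  select (pg_stat_file('base/' || oid || '/PG_VERSION')).change
    from pg_database
   where datname = $1;
$$;

select pg_database_createtime(current_database());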

-----------------
Hannu
Robert Haas
2013-01-03 17:27:45 UTC
Post by Hannu Krosing
This is what I did with my sample pl/python function ;)
Yeah, except that the "c" in "ctime" does not stand for create, and
therefore the function isn't necessarily reliable. The problem is
even worse for tables, where a rewrite may remove the old file and
create a new one. I mean, I'm not stupid about this: when I need to
figure this kind of stuff out, I do in fact look at the file times -
mtime, ctime, atime, whatever there is. Sometimes that turns out to
be helpful, and sometimes it doesn't. An obvious example of the
latter is when you're looking at a bunch of files that have just been
untarred from a backup device.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Christopher Browne
2013-01-03 17:54:45 UTC
Post by Robert Haas
Post by Hannu Krosing
This is what I did with my sample pl/python function ;)
Yeah, except that the "c" in "ctime" does not stand for create, and
therefore the function isn't necessarily reliable. The problem is
even worse for tables, where a rewrite may remove the old file and
create a new one. I mean, I'm not stupid about this: when I need to
figure this kind of stuff out, I do in fact look at the file times -
mtime, ctime, atime, whatever there is. Sometimes that turns out to
be helpful, and sometimes it doesn't. An obvious example of the
latter is when you're looking at a bunch of files that have just been
untarred from a backup device.
Yep, and I think that the behaviour of tar pretty nicely characterizes
what's troublesome here. It is quite likely that a tar run will *capture*
the creation time of a file, but if you pull data from a tar archive, it is
by no means obvious that the filesystem can or will accept that date
and apply it to the extracted copy.

I'd contrast pg_dump with tar in that the former is intended as more of
a "logical" dump than the latter, so that, in keeping with Greg Stark's
comments, these timestamps Should Not be captured or carried forward
by pg_dump.

The interaction with streaming replication is pretty analogous to the
interaction one might expect to get out of filesystem snapshot
technologies like DRBD, zfs, btrfs, LVM. If we put a creation time
into pg_database or pg_class, then streaming replication will, as a
"physical" replication mechanism, carry the timestamp forward into
replicas, in pretty much exactly the same fashion that timestamps
would be carried onto clones/snapshots by the filesystem
snapshotting systems.

And in contrast, I'd expect Andres Freund's logical replication
infrastructure *NOT* to carry these dates over, but rather to establish
fresh new creation dates on a replica. (And from a forensic perspective,
that's a perfectly fine thing.)

I imagine that we should be careful to put these forensic timestamps
onto things with some care.

- Putting them on pg_database seems like a fine idea.
- Putting them on pg_attribute seems mighty dodgy; I don't expect I'd
often care, and this change increases the size of an extremely heavily
accessed system table
- I am equivocal about putting them on pg_class. That increases the
size of a pretty big, heavily accessed system table.
- Perhaps there are other relevant tables (pg_event_trigger,
pg_extension, FDW tables, pg_language, pg_proc, pg_tablespace); I
don't feel so strongly about them, but if you're puzzling over what
went wrong with an extension, event trigger, or FDW, time of creation
seems like it might be useful.
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"
Robert Haas
2013-01-03 20:26:09 UTC
Post by Christopher Browne
Yep, and I think that the behaviour of tar pretty nicely characterizes
what's troublesome here. It is quite likely that a tar run will *capture*
the creation time of a file, but if you pull data from a tar archive, it is
by no means obvious that the filesystem can or will accept that date
and apply it to the extracted copy.
I'd contrast pg_dump with tar in that the former is intended as more of
a "logical" dump than the latter, so that, in keeping with Greg Stark's
comments, these timestamps Should Not be captured or carried forward
by pg_dump.
The interaction with streaming replication is pretty analogous to the
interaction one might expect to get out of filesystem snapshot
technologies like DRBD, zfs, btrfs, LVM. If we put a creation time
into pg_database or pg_class, then streaming replication will, as a
"physical" replication mechanism, carry the timestamp forward into
replicas, in pretty much exactly the same fashion that timestamps
would be carried onto clones/snapshots by the filesystem
snapshotting systems.
And in contrast, I'd expect Andres Freund's logical replication
infrastructure *NOT* to carry these dates over, but rather to establish
fresh new creation dates on a replica. (And from a forensic perspective,
that's a perfectly fine thing.)
I agree all around.

And to take a step back and speak a bit more broadly about this, I
believe that, more and more, we can't rely on the operating system to
do things for us any more. Five or ten years ago, maybe people were
running Linux, and PostgreSQL was a part of that. Now, more and more,
people are running PostgreSQL, and Linux (or Windows, or some other
OS) is a way to make that happen. At least when I talk to customers,
places where the OS behavior bleeds into what the database server does
are not viewed as features. Telling people that we use the OS
collation facilities, or that we use the OS buffer cache, or that we
don't provide a scheduler because Linux has cron and Windows has
scheduled tasks, or that people should examine file timestamps to try
to work out when a relation was created results in bemusement, or
sometimes incredulity. Many people are understanding of the idea that
we don't have the manpower to implement everything ourselves, but very
few customers I've spoken with think that planning to rely on the OS
facilities is a sound design principle. It's true, as we've often
said here, that leveraging the OS facilities means that we get the
benefit of improving OS facilities "for free" - but it also means that
we never exceed what the OS facilities are able to provide. And
frankly, as in this case, the OS facilities are often poorly suited to
what users actually want. We obviously do not want to go bonkers and
take over everything from the OS, but I don't think we should be
afraid to rotate the knob a little bit in that direction. The fact
that people are pushing us to go there is a sign of our success. We
are the ecosystem.

I do have a concern about catalog bloat. I think it would be easy to
add so many knobs that we end up slowing the system down and bloating
the size of an otherwise-empty database, or one with lots of SQL
objects. Let's not do that. But let's not do nothing, either.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Peter Eisentraut
2013-01-04 18:07:37 UTC
Post by Robert Haas
It's true, as we've often
said here, that leveraging the OS facilities means that we get the
benefit of improving OS facilities "for free" - but it also means that
we never exceed what the OS facilities are able to provide.
And that should be the deciding factor, shouldn't it? Clearly, the OS
timestamps do not satisfy the requirements of tracking database object
creation times.
Fabrízio de Royes Mello
2013-01-05 15:48:40 UTC
Post by Peter Eisentraut
Post by Robert Haas
It's true, as we've often
said here, that leveraging the OS facilities means that we get the
benefit of improving OS facilities "for free" - but it also means that
we never exceed what the OS facilities are able to provide.
And that should be the deciding factor, shouldn't it? Clearly, the OS
timestamps do not satisfy the requirements of tracking database object
creation times.
+1

And IMHO we must decide what to do, or whether we'll do anything at all.

In this thread many ways of implementing (and of not implementing) this
were discussed, so I've compiled some important points discussed before
(sorry if I forgot something):

* the original proposal was just to add a column to the shared catalog
'pg_database' to track creation time (I already sent a patch [1]), but the
discussion moved toward implementing a way to track the creation time of
all database objects

* some people said that if we implement that, then we must have some way to
alter creation times by SQL (ALTER cmds) and also a way to dump and restore
this info with pg_dump/pg_restore. Some agreed and others disagreed.

* we talked about implementing it with Event Triggers, but they don't cover
shared objects (like databases, roles, tablespaces, ...), and someone talked
about extending Event Triggers to cover this kind of object, or maybe having
a way to create *shared tables* (this is what I understood). This approach
forces everyone to implement their own time-tracking system, unless someone
shares an extension to do that.

* we also discussed creating two new catalogs, one local and one shared
(like pg_description and pg_shdescription), to track creation times of all
database objects.

Please correct me if I forgot something. Anyway, we must decide what to do.

I don't know enough, but I have another idea: what do you guys think about
having tables like the "stats tables" to track creation times, with a GUC
to enable or disable this behavior?


Regards,


[1] http://archives.postgresql.org/pgsql-hackers/2013-01/msg00111.php
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2013-01-05 16:04:43 UTC
Post by Fabrízio de Royes Mello
* we also discussed creating two new catalogs, one local and one shared
(like pg_description and pg_shdescription), to track creation times of all
database objects.
Creating a separate catalog (or two) every time we want to track XYZ for
all objects is rather overkill... Thinking about this a bit more, and
noting that pg_description/shdescription more-or-less already exist as a
framework for tracking 'something' for 'all catalog entries'- why don't
we just add these columns to those tables..? This would also address
Peter's concern about making sure we do this 'wholesale' and in one
release rather than spread across multiple releases- just make sure it
covers the same set of things which 'comment' does.

Also, I don't think we really need a GUC for this.

Thanks,

Stephen
Fabrízio de Royes Mello
2013-01-05 17:59:30 UTC
Post by Stephen Frost
Post by Fabrízio de Royes Mello
* we also discussed creating two new catalogs, one local and one shared
(like pg_description and pg_shdescription), to track creation times of all
database objects.
Creating a separate catalog (or two) every time we want to track XYZ for
all objects is rather overkill... Thinking about this a bit more, and
noting that pg_description/shdescription more-or-less already exist as a
framework for tracking 'something' for 'all catalog entries'- why don't
we just add these columns to those tables..?
But those tables are filled only when we execute a COMMENT ON statement...
so your idea is to create a 'null' comment every time we create a single
object... is that it?

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2013-01-05 19:31:53 UTC
Post by Fabrízio de Royes Mello
But those tables are filled only when we execute a COMMENT ON statement...
so your idea is to create a 'null' comment every time we create a single
object... is that it?
Yes, and have the actual 'description' field (as it's variable) at the
end of the catalog.

Regarding the semantics of it- I was thinking about how directories and
unix files work. Basically, adding or removing a sub-object would
update the alter time on the object itself; changing an already existing
object or sub-object would update only the object/sub-object's alter
time. Creating an object or sub-object would set its create time and
alter time to the same value. I would distinguish 'create' from
'ctime', however, and have our 'create' time be only the actual
*creation* time of the object. ALTER table OWNER TO user; would update
"table"'s alter time.

Open to other thoughts on this, and perhaps we should create a wiki page
to start documenting the semantics. Once we get agreement there, it's
just a bit of code. :)

Thanks,

Stephen
Fabrízio de Royes Mello
2013-01-07 00:54:48 UTC
Post by Stephen Frost
Yes, and have the actual 'description' field (as it's variable) at the
end of the catalog.
Regarding the semantics of it- I was thinking about how directories and
unix files work. Basically, adding or removing a sub-object would
update the alter time on the object itself; changing an already existing
object or sub-object would update only the object/sub-object's alter
time. Creating an object or sub-object would set its create time and
alter time to the same value. I would distinguish 'create' from
'ctime', however, and have our 'create' time be only the actual
*creation* time of the object. ALTER table OWNER TO user; would update
"table"'s alter time.
Understood... a "COMMENT" is a database object, then if we add a creation
time column to pg_description/shdescription tables how we track his
creation time?
Post by Stephen Frost
Open to other thoughts on this, and perhaps we should create a wiki page
to start documenting the semantics. Once we get agreement there, it's
just a bit of code. :)
+1

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2013-01-07 13:42:32 UTC
Post by Fabrízio de Royes Mello
Understood... a "COMMENT" is a database object, then if we add a creation
time column to pg_description/shdescription tables how we track his
creation time?
When it's NULL it "doesn't exist"; in this case, when it transitions
from NULL, it becomes created. A transition from non-NULL to non-NULL
would be an alter, and a transition from non-NULL to NULL would be a
drop/remove.
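
Concretely (runnable today, though nothing records the transitions yet;
borrowing the tracking table from Dimitri's example upthread):

comment on table public.tracking is 'first';   -- NULL -> non-NULL: a "create"
comment on table public.tracking is 'second';  -- non-NULL -> non-NULL: an "alter"
comment on table public.tracking is null;      -- non-NULL -> NULL: a "drop"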

Thanks,

Stephen
Robert Haas
2013-01-08 15:08:29 UTC
Post by Stephen Frost
Post by Fabrízio de Royes Mello
* also we discuss about create two new catalogs, one local and another
shared (like pg_description and pg_shdescription) to track creation times
of all database objects.
Creating a separate catalog (or two) every time we want to track XYZ for
all objects is rather overkill... Thinking about this a bit more, and
noting that pg_description/shdescription more-or-less already exist as a
framework for tracking 'something' for 'all catalog entries'- why don't
we just add these columns to those tables..? This would also address
Peter's concern about making sure we do this 'wholesale' and in one
release rather than spread across multiple releases- just make sure it
covers the same set of things which 'comment' does.
I suspect that trying to shoehorn this into
pg_description/pg_shdescription will contort both features
unnecessarily, but I'm willing to be proven wrong.
Post by Stephen Frost
Also, I don't think we really need a GUC for this.
Indeed, a GUC would seem to me to defeat the entire point of the feature.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Peter Eisentraut
2013-01-08 22:09:23 UTC
Post by Stephen Frost
Creating a separate catalog (or two) every time we want to track XYZ for
all objects is rather overkill... Thinking about this a bit more, and
noting that pg_description/shdescription more-or-less already exist as a
framework for tracking 'something' for 'all catalog entries'- why don't
we just add these columns to those tables..? This would also address
Peter's concern about making sure we do this 'wholesale' and in one
release rather than spread across multiple releases- just make sure it
covers the same set of things which 'comment' does.
Yeah, actually, the other day I was thinking we should get rid of all
the system catalogs and use a big EAV-like schema instead. We're not
getting any relational-database value out of the current way, and it's
just a lot of duplicate code. If we had a full EAV system, we could
even do in-place upgrade.

Obviously, this isn't going to happen any time soon or ever, but I think
I agree with your concern above as a partial step.
Pavel Stehule
2013-01-08 22:12:21 UTC
Post by Peter Eisentraut
Post by Stephen Frost
Creating a separate catalog (or two) every time we want to track XYZ for
all objects is rather overkill... Thinking about this a bit more, and
noting that pg_description/shdescription more-or-less already exist as a
framework for tracking 'something' for 'all catalog entries'- why don't
we just add these columns to those tables..? This would also address
Peter's concern about making sure we do this 'wholesale' and in one
release rather than spread across multiple releases- just make sure it
covers the same set of things which 'comment' does.
Yeah, actually, the other day I was thinking we should get rid of all
the system catalogs and use a big EAV-like schema instead. We're not
getting any relational-database value out of the current way, and it's
just a lot of duplicate code. If we had a full EAV system, we could
even do in-place upgrade.
-1

now we have thousands of tables; I am not sure EAV can get good performance

Pavel
Stephen Frost
2013-01-08 22:17:41 UTC
Post by Pavel Stehule
Post by Peter Eisentraut
Yeah, actually, the other day I was thinking we should get rid of all
the system catalogs and use a big EAV-like schema instead. We're not
getting any relational-database value out of the current way, and it's
just a lot of duplicate code. If we had a full EAV system, we could
even do in-place upgrade.
-1
now we have thousands of tables; I am not sure EAV can get good performance
To be honest, my first reaction to this was an assumption that it was
pure sarcasm..

Seriously tho, the argument for not putting these things into the
various individual catalogs is that they'd create bloat and these items
don't need to be performant. I would think that the kind of timestamps
that we're talking about fall into the same data category as comments on
tables.

If there isn't a good reason for comments on objects to be off in a
generic "this is for any kind of object" table, then perhaps we should
move them into the appropriate catalog tables?

Thanks,

Stephen
Peter Eisentraut
2013-01-09 04:42:45 UTC
Permalink
Post by Stephen Frost
Seriously though, the argument for not putting these things into the
various individual catalogs is that they'd create bloat and these items
don't need to be performant. I would think that the kind of timestamps
that we're talking about fall into the same data category as comments on
tables.
If there isn't a good reason for comments on objects to be off in a
generic "this is for any kind of object" table, then perhaps we should
move them into the appropriate catalog tables?
I think basic refactoring logic would support taking common things out
of the individual catalogs and keeping them in a common structure,
especially when they are for amusement only and not needed in any
critical paths. All the ALTER command refactoring and so on that's been
going on is also moving in the direction that, for data definition
management, there should be mainly one kind of object with a few
variants here and there.
Robert Haas
2013-01-08 13:47:27 UTC
Permalink
Post by Peter Eisentraut
Post by Robert Haas
It's true, as we've often
said here, that leveraging the OS facilities means that we get the
benefit of improving OS facilities "for free" - but it also means that
we never exceed what the OS facilities are able to provide.
And that should be the deciding factor, shouldn't it? Clearly, the OS
timestamps do not satisfy the requirements of tracking database object
creation times.
Yes, I think so.

But I am not entirely sold on tracking the creation time of every SQL
object. It might be all right, but what about catalog bloat?

I am on board for databases, and for tables, at any rate.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Peter Eisentraut
2013-01-03 14:18:38 UTC
Permalink
Post by Hannu Krosing
Post by Andres Freund
Post by Hannu Krosing
If what you want is something close to current unix file time semantics
(ctime, mtime, atime) then why not just create a function to look up these
attributes on the database directory and/or database files?
Because too many things change those. Moving to a different tablespace,
a rewriting ALTER TABLE, etc.
Can't we actually fix these to preserve the file creation date, like tar
does, and still keep unix file semantics?
I don't think that would be a good idea, because various file system
tools might actually want to look at, say, the mtime to know what to
back up. Also, none of those file attributes are the *creation* time,
so we wouldn't actually solve the original problem.
Pavel Stehule
2013-01-03 10:20:09 UTC
Permalink
Post by Hannu Krosing
...
Post by Stephen Frost
I agree that what I was suggesting would be possible to implement with
event triggers, but I see that as a rather advanced feature that most users
aren't going to understand or implement. At the same time, those more novice
users are likely to be looking for this kind of information- being told "oh,
well, you *could* have been collecting it all along if you knew about event
triggers" isn't a particularly satisfying answer. That's my 2c on it. I
agree that having the example in the docs would be nice- examples are always
good things to include.
If what you want is something close to current unix file time semantics
(ctime, mtime, atime) then why not just create a function to look up these
attributes on the database directory and/or database files?
Implementing ctime, mtime, and atime would have a somewhat higher impact
than just a creation time - and those values should go into the
statistics collector instead of bloating pg_class.

You cannot rely on the filesystem data, because some requests are served
from cache rather than going to the filesystem.

I had to emulate the MySQL fields - that was a first implementation, but
totally useless - now we have a solution based on enhancing pg_stat, and
it works as expected.

Regards

Pavel
Post by Hannu Krosing
----------------
Hannu
Stephen Frost
2013-01-03 13:15:51 UTC
Permalink
Post by Hannu Krosing
If what you want is something close to current unix file time
semantics (ctime, mtime, atime) then why not just create a function
to look up these attributes on the database directory and/or database
files?
Because, as noted before, those aren't always going to be correct.
Database files can be rewritten and recreated by certain commands
(e.g., CLUSTER). Perhaps there's a fork that isn't, but that almost seems
like it's more painful to try and figure out than just hooking in with
the CREATE command.

Thanks,
Stephen
Fabrízio de Royes Mello
2012-12-27 22:58:36 UTC
Permalink
Post by Tom Lane
This has been debated, and rejected, before.
I know about this discussion...
Post by Tom Lane
To mention just one problem, are we going to add nonstandard,
non-backwards-compatible syntax to every single kind of CREATE to allow
pg_dump to preserve the creation dates? Another interesting question is
whether we should likewise track the last ALTER time, or perhaps whether
a sufficiently major ALTER redefinition should update the creation time.
I agree with you because now we have Event Triggers...
Post by Tom Lane
I'm inclined to think that anyone who really needs this should be
pointed at event triggers. That feature (if it gets in) will allow
people to track creation/DDL-change times with exactly the behavior
they want.
Exactly, but Event Triggers [1] don't cover the "CREATE DATABASE" statement,
and for this reason I propose a patch to add a single column, "datcreated",
to the shared catalog "pg_database".

[1] http://www.postgresql.org/docs/devel/static/event-trigger-matrix.html
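
(For the object types that matrix does cover, a minimal sketch along these
lines should already work - the table, function, and trigger names here
are only illustrative:

CREATE TABLE ddl_log (tag text, created timestamptz NOT NULL DEFAULT now());

CREATE FUNCTION log_ddl() RETURNS event_trigger LANGUAGE plpgsql AS $$
BEGIN
    -- TG_TAG identifies the command, e.g. 'CREATE TABLE'; event triggers
    -- don't expose the affected object's OID at this point, so this only
    -- records that some object of that kind was created, and when.
    INSERT INTO ddl_log (tag) VALUES (TG_TAG);
END;
$$;

CREATE EVENT TRIGGER track_creates ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE', 'CREATE VIEW', 'CREATE FUNCTION')
    EXECUTE PROCEDURE log_ddl();

None of that can fire for CREATE DATABASE, which is exactly the gap the
patch addresses.)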

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Robert Haas
2013-01-03 01:52:27 UTC
Permalink
Post by Tom Lane
This has been debated, and rejected, before.
To mention just one problem, are we going to add nonstandard,
non-backwards-compatible syntax to every single kind of CREATE to allow
pg_dump to preserve the creation dates? Another interesting question is
whether we should likewise track the last ALTER time, or perhaps whether
a sufficiently major ALTER redefinition should update the creation time.
Well, IMHO, there is no need for any syntax change at all. CREATE
TABLE and CREATE DATABASE should just record the creation time
somewhere, and that's all. If you dump-and-reload, the creation time
changes. Deal with it, or hack your catalogs if you really care that
much.

I find the suggestion of using event triggers for this to miss the
point almost completely. At least in my case, the time when you
really wish you had some timestamps is when you get dropped into a
customer environment and need to do forensics. The customer will not
have installed the convenient package of event triggers at database
bootstrap time. Their environment will likely be poorly configured
and completely undocumented; that's why you're doing forensics, isn't
it?

I know this has been discussed and rejected before, but I find that
rejection to be wrong-headed. I have repeatedly been asked, with
levels of exasperation ranging from mild to homicidal, why we don't
have this feature, and I have no good answer. If it were somehow
difficult to record this or likely to produce a lot of overhead, that
would be one thing. But it isn't. It's probably a hundred-line
patch, and AFAICS the overhead would be minuscule.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Tom Lane
2013-01-03 02:02:09 UTC
Permalink
[ on creation timestamps ]
I know this has been discussed and rejected before, but I find that
rejection to be wrong-headed. I have repeatedly been asked, with
levels of exasperation ranging from mild to homicidal, why we don't
have this feature, and I have no good answer. If it were somehow
difficult to record this or likely to produce a lot of overhead, that
would be one thing. But it isn't. It's probably a hundred-line
patch, and AFAICS the overhead would be minuscule.
If I believed that it would be a hundred-line patch, and would *stay*
a hundred-line patch, I'd be fine with it. But it won't. I will
bet a very fine dinner that the feature wouldn't get out the door
before there would be demands for pg_dump support. And arguments
about whether ALTER should or should not change the timestamp.
And I doubt you counted psql \d support in that hundred lines.
So this is just a can of worms that I'd rather not open.

regards, tom lane
Stephen Frost
2013-01-03 02:12:06 UTC
Permalink
Post by Tom Lane
If I believed that it would be a hundred-line patch, and would *stay*
a hundred-line patch, I'd be fine with it. But it won't. I will
bet a very fine dinner that the feature wouldn't get out the door
before there would be demands for pg_dump support.
Fine, how about a function that can be called by pg_dump (and anyone
else who has the rights and feels the need) to set that value? That
avoids all need for any new syntax and still gives us the pg_dump and
friends support that will apparently be asked for.
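
A purely hypothetical sketch - no such function exists today; the name and
signature are invented:

-- hypothetical: pg_set_creation_time(classid, objid, creation timestamp)
SELECT pg_set_creation_time('pg_database'::regclass,
           (SELECT oid FROM pg_database WHERE datname = 'mydb'),
           '2012-12-26 12:12:02-02');

pg_dump could then emit one such call per object it restores.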
Post by Tom Lane
And arguments
about whether ALTER should or should not change the timestamp.
There is no case where ALTER should change the *creation* time, imo.
Post by Tom Lane
And I doubt you counted psql \d support in that hundred lines.
So this is just a can of worms that I'd rather not open.
The last psql \d support change that I looked at (thanks Jon) had a
diffstat (excluding documentation and whitespace changes) of:

***@beorn:/home/sfrost/Downloads> cat qq | diffstat
describe.c | 5 +++++
1 file changed, 5 insertions(+)

Just saying. ;)

Thanks,
Stephen
Robert Haas
2013-01-03 04:04:43 UTC
Permalink
Post by Stephen Frost
Post by Tom Lane
If I believed that it would be a hundred-line patch, and would *stay*
a hundred-line patch, I'd be fine with it. But it won't. I will
bet a very fine dinner that the feature wouldn't get out the door
before there would be demands for pg_dump support.
Fine, how about a function that can be called by pg_dump (and anyone
else who has the rights and feels the need) to set that value? That
avoids all need for any new syntax and still gives us the pg_dump and
friends support that will apparently be asked for.
TBH, I don't think anyone has any business changing the creation
timestamp. Ever. For me, the fact that pg_dump wouldn't preserve
this information would be a feature, not a bug. I mostly meant to
point out that someone could bypass it if they cared enough, not to
recommend it. Honestly, I'd probably *rather* store this information
someplace where it couldn't be changed via SQL *at all*. But I don't
think we have such a place, so I'm happy enough to store it in the
catalogs, with the associated risks of catalog hackery that entails.
Post by Stephen Frost
Post by Tom Lane
And arguments
about whether ALTER should or should not change the timestamp.
There is no case where ALTER should change the *creation* time, imo.
Duh.
Post by Stephen Frost
Post by Tom Lane
And I doubt you counted psql \d support in that hundred lines.
So this is just a can of worms that I'd rather not open.
The last psql \d support change that I looked at (thanks Jon) had a
describe.c | 5 +++++
1 file changed, 5 insertions(+)
Just saying. ;)
Yeah, I don't think this is really a problem. I would expect the psql
support for this feature to be not a whole lot more complicated than
that. Sure, it might be more than 5 lines of raw code if it requires
an additional version of some query for which we're currently using
the same version for both PG93 and PG92, but it's hardly fair to cite
that as an argument for not doing this. Such changes are almost
entirely boilerplate.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Bernd Helmle
2013-01-03 10:28:15 UTC
Permalink
Post by Robert Haas
TBH, I don't think anyone has any business changing the creation
timestamp. Ever. For me, the fact that pg_dump wouldn't preserve
this information would be a feature, not a bug. I mostly meant to
point out that someone could bypass it if they cared enough, not to
recommend it. Honestly, I'd probably *rather* store this information
someplace where it couldn't be changed via SQL *at all*. But I don't
think we have such a place, so I'm happy enough to store it in the
catalogs, with the associated risks of catalog hackery that entails.
This is exactly what Informix does: it stores the creation or modification
dates of a table in its system catalog (systables.created, to be specific).
Any export/import of tables doesn't preserve the dates; if you restore a
database (or table), the creation date is adjusted. I'm not aware of any
SQL interface to influence this.
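
For reference, the Informix side of that looks roughly like this (and, if
I recall correctly, 'created' is a plain DATE column):

SELECT tabname, created FROM systables WHERE tabname = 'mytable';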
--
Thanks

Bernd
Hannu Krosing
2013-01-03 10:54:03 UTC
Permalink
Post by Robert Haas
Yeah, I don't think this is really a problem. I would expect the psql
support for this feature to be not a whole lot more complicated than
that. Sure, it might be more than 5 lines of raw code if it requires
an additional version of some query for which we're currently using
the same version for both PG93 and PG92, but it's hardly fair to cite
that as an argument for not doing this. Such changes are almost
entirely boilerplate.
Here is a pl/python function which gives you "the real" database
creation time.

CREATE OR REPLACE FUNCTION database_create_ts(
    INOUT dbname text, OUT ctime timestamp)
RETURNS SETOF RECORD
LANGUAGE plpythonu AS
$$
import os, time

# Find the data directory and the OID of every database matching the pattern.
res = plpy.execute("""
    select datname,
           current_setting('data_directory') as ddir,
           oid as dboid
      from pg_database
     where datname like %s""" % plpy.quote_literal(dbname))

for row in res:
    # Each database lives in base/<oid> under the data directory.
    dbpath = '%(ddir)s/base/%(dboid)s' % row
    st = os.stat(dbpath)
    # Note: st_ctime is the inode-change time, not a true creation time.
    yield (row['datname'],
           '%04d-%02d-%02d %02d:%02d:%02d+00' % time.gmtime(st.st_ctime)[:6])
$$;

SELECT * FROM database_create_ts('template%');

------------------
Hannu
Stephen Frost
2013-01-03 02:06:01 UTC
Permalink
Post by Robert Haas
Well, IMHO, there is no need for any syntax change at all. CREATE
TABLE and CREATE DATABASE should just record the creation time
somewhere, and that's all. If you dump-and-reload, the creation time
changes. Deal with it, or hack your catalogs if you really care that
much.
I'd be alright with this also, tbh. Not preserving such information
across pg_dump runs wouldn't really be all *that* much of a loss.

As for hacking at the catalogs, I find that a rather terrible
recommendation, ever. I'm currently trying to convince people at $work
that hacking at pg_database to modify datallowconns is really not a
good or ideal solution (and requires a lot more people to have
superuser rights than really should, which is practically no one, imo).
Annoyingly, we don't seem to have a way to ALTER DATABASE to set that
value, although I *think* 'connection limit = 0' might be good enough.
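
For example (both forms are stock ALTER DATABASE syntax):

ALTER DATABASE mydb CONNECTION LIMIT 0;   -- superusers can still connect
ALTER DATABASE mydb CONNECTION LIMIT -1;  -- back to unlimited

That at least avoids UPDATEs against pg_database itself, though unlike
datallowconns = false it doesn't keep superusers out.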
Post by Robert Haas
I find the suggestion of using event triggers for this to miss the
point almost completely. At least in my case, the time when you
really wish you had some timestamps is when you get dropped into a
customer environment and need to do forensics. The customer will not
have installed the convenient package of event triggers at database
bootstrap time. Their environment will likely be poorly configured
and completely undocumented; that's why you're doing forensics, isn't
it?
Exactly, that's what I was trying to get at upstream.
Post by Robert Haas
I know this has been discussed and rejected before, but I find that
rejection to be wrong-headed. I have repeatedly been asked, with
levels of exasperation ranging from mild to homicidal, why we don't
have this feature, and I have no good answer. If it were somehow
difficult to record this or likely to produce a lot of overhead, that
would be one thing. But it isn't. It's probably a hundred-line
patch, and AFAICS the overhead would be minuscule.
+1

Thanks,
Stephen
Pavel Stehule
2013-01-03 08:16:59 UTC
Permalink
Post by Stephen Frost
Post by Robert Haas
Well, IMHO, there is no need for any syntax change at all. CREATE
TABLE and CREATE DATABASE should just record the creation time
somewhere, and that's all. If you dump-and-reload, the creation time
changes. Deal with it, or hack your catalogs if you really care that
much.
I'd be alright with this also, tbh. Not preserving such information
across pg_dump runs wouldn't really be all *that* much of a loss.
As for hacking at the catalogs, I find that a rather terrible
recommendation, ever. I'm currently trying to convince people at $work
that hacking at pg_database to modify datallowconns is really not a
good or ideal solution (and requires a lot more people to have
superuser rights than really should, which is practically no one, imo).
Annoyingly, we don't seem to have a way to ALTER DATABASE to set that
value, although I *think* 'connection limit = 0' might be good enough.
Post by Robert Haas
I find the suggestion of using event triggers for this to miss the
point almost completely. At least in my case, the time when you
really wish you had some timestamps is when you get dropped into a
customer environment and need to do forensics. The customer will not
have installed the convenient package of event triggers at database
bootstrap time. Their environment will likely be poorly configured
and completely undocumented; that's why you're doing forensics, isn't
it?
Exactly, that's what I was trying to get at upstream.
Post by Robert Haas
I know this has been discussed and rejected before, but I find that
rejection to be wrong-headed. I have repeatedly been asked, with
levels of exasperation ranging from mild to homicidal, why we don't
have this feature, and I have no good answer. If it were somehow
difficult to record this or likely to produce a lot of overhead, that
would be one thing. But it isn't. It's probably a hundred-line
patch, and AFAICS the overhead would be minuscule.
+1
+1

yes, this task can be solved simply with EVENT TRIGGERS, but a native
implementation would bring some unification - and creation time is a
basic attribute that I would like to see everywhere. And I am not alone.

regards
Pavel Stehule
Greg Stark
2013-01-03 16:38:49 UTC
Permalink
Post by Stephen Frost
I'd be alright with this also, tbh. Not preserving such information
across pg_dump runs wouldn't really be all *that* much of a loss.
I think it would be mandatory for pg_dump not to restore this info,
actually. A fair amount of work has gone into pg_dump -s to ensure
that the output is identical for identical databases. OIDs were
removed and the sort order was changed to be deterministic, for
example. Any "alter table set creation time 'xxx'" will defeat that
entirely.

When I last managed a production Postgres database, I would use pg_dump
-s to regenerate a schema file that was checked into revision control.
And when I migrated changes live, I would rerun pg_dump -s and diff
that against the checked-in schema.
--
greg
Fabrízio de Royes Mello
2013-01-03 04:08:26 UTC
Permalink
Post by Robert Haas
I know this has been discussed and rejected before, but I find that
rejection to be wrong-headed. I have repeatedly been asked, with
levels of exasperation ranging from mild to homicidal, why we don't
have this feature, and I have no good answer. If it were somehow
difficult to record this or likely to produce a lot of overhead, that
would be one thing. But it isn't. It's probably a hundred-line
patch, and AFAICS the overhead would be minuscule.
Hi all,

The attached patch adds a new column to 'pg_database', called 'datcreated',
to store the timestamp of database creation.

If this feature is approved, I could extend it to add a column to
'pg_class' to store the creation timestamp too.

I think we can also discuss psql support for showing this new info about
databases...

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2013-01-03 13:13:11 UTC
Permalink
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called 'datcreated',
to store the timestamp of database creation.
Please use hard tabs (not spaces), and the column should come before the
variable-length fields (see the comment in pg_database.h).

Thanks,
Stephen
Fabrízio de Royes Mello
2013-01-03 13:33:47 UTC
Permalink
Post by Stephen Frost
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called 'datcreated',
to store the timestamp of database creation.
Please use hard tabs (not spaces), and the column should come before the
variable-length fields (see the comment in pg_database.h).
You're right... I fixed it in the attached patch.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Fabrízio de Royes Mello
2013-01-03 13:40:09 UTC
Permalink
Post by Fabrízio de Royes Mello
Post by Stephen Frost
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called 'datcreated',
to store the timestamp of database creation.
Please use hard tabs (not spaces), and the column should come before the
variable-length fields (see the comment in pg_database.h).
You're right... I fixed it in the attached patch.
Please... discard this patch... I made a mistake... I'll send a new one soon.

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Stephen Frost
2013-01-03 13:41:23 UTC
Permalink
Post by Fabrízio de Royes Mello
Post by Stephen Frost
Please use hard tabs (not spaces), and the column should come before the
variable-length fields (see the comment in pg_database.h).
You're right... I fixed it in the attached patch.
You also need to fix the Anum_* values to match what's in the struct
definition now..

I'd recommend that you look over the code more closely and ensure that
you're ordering everything correctly throughout and that it all makes
sense..

Thanks,
Stephen
Fabrízio de Royes Mello
2013-01-03 15:00:05 UTC
Permalink
Post by Stephen Frost
Post by Fabrízio de Royes Mello
Post by Stephen Frost
Please use hard tabs (not spaces), and the column should come before the
variable-length fields (see the comment in pg_database.h).
You're right... I fixed it in the attached patch.
You also need to fix the Anum_* values to match what's in the struct
definition now..
I'd recommend that you look over the code more closely and ensure that
you're ordering everything correctly throughout and that it all makes
sense..
Now I've fixed it.

Regards,
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Peter Eisentraut
2013-01-03 14:12:25 UTC
Permalink
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called
'datcreated', to store the timestamp of database creation.
If this feature is approved, I could extend it to add a column to
'pg_class' to store the creation timestamp too.
While I'm entirely in favor of this feature in general, I think this is
the wrong way to approach it. It will end up like the CREATE OR REPLACE
support: We add it for a few commands in one release, for a few more
commands in the next release, for almost all commands in the following
release, and now we're still not done.

If we're going to store object creation time, I think we should do it
for all objects, stored in a separate catalog, like pg_depend or
pg_description, keyed off classid, objectid. And have a simple C
function to call to update the information stored there.
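
Seen from SQL, that might look about like this (the names are entirely
hypothetical, modeled on pg_description):

-- pg_created(classoid oid, objoid oid, created timestamptz)
SELECT c.relname, pc.created
  FROM pg_created pc
  JOIN pg_class c ON c.oid = pc.objoid
 WHERE pc.classoid = 'pg_class'::regclass;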

That would also make storing the modification time, which I'd ask for
next, easier.
Pavel Stehule
2013-01-03 14:18:02 UTC
Permalink
Post by Peter Eisentraut
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called
'datcreated', to store the timestamp of database creation.
If this feature is approved, I could extend it to add a column to
'pg_class' to store the creation timestamp too.
While I'm entirely in favor of this feature in general, I think this is
the wrong way to approach it. It will end up like the CREATE OR REPLACE
support: We add it for a few commands in one release, for a few more
commands in the next release, for almost all commands in the following
release, and now we're still not done.
If we're going to store object creation time, I think we should do it
for all objects, stored in a separate catalog, like pg_depend or
pg_description, keyed off classid, objectid. And have a simple C
function to call to update the information stored there.
That would also make storing the modification time, which I'd ask for
next, easier.
+1

Pavel
Robert Haas
2013-01-03 14:22:36 UTC
Permalink
Post by Peter Eisentraut
Post by Fabrízio de Royes Mello
The attached patch adds a new column to 'pg_database', called
'datcreated', to store the timestamp of database creation.
If this feature is approved, I could extend it to add a column to
'pg_class' to store the creation timestamp too.
While I'm entirely in favor of this feature in general, I think this is
the wrong way to approach it. It will end up like the CREATE OR REPLACE
support: We add it for a few commands in one release, for a few more
commands in the next release, for almost all commands in the following
release, and now we're still not done.
If we're going to store object creation time, I think we should do it
for all objects, stored in a separate catalog, like pg_depend or
pg_description, keyed off classid, objectid. And have a simple C
function to call to update the information stored there.
That would also make storing the modification time, which I'd ask for
next, easier.
+1
+1
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Alvaro Herrera
2013-01-03 14:30:12 UTC
Permalink
Post by Peter Eisentraut
If we're going to store object creation time, I think we should do it
for all objects, stored in a separate catalog, like pg_depend or
pg_description, keyed off classid, objectid. And have a simple C
function to call to update the information stored there.
+1

We would require two catalogs, though: one shared, one database-local.

Would we track the ctime of subsidiary objects such as constraints, etc.?
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Fabrízio de Royes Mello
2013-01-03 15:17:07 UTC
Permalink
Post by Alvaro Herrera
Post by Peter Eisentraut
If we're going to store object creation time, I think we should do it
for all objects, stored in a separate catalog, like pg_depend or
pg_description, keyed off classid, objectid. And have a simple C
function to call to update the information stored there.
+1
+1
Post by Alvaro Herrera
We would require two catalogs, though: one shared, one database-local.
Do you have a suggestion for the names of these two new catalogs?
Post by Alvaro Herrera
Would we track the ctime of subsidiary objects such as constraints, etc.?
If we're going this way, I think yes...

As Peter said, we can start by adding it for a few commands in one release
(maybe first for shared objects), then a few more commands in the next
release, and so on... until we cover all commands...

Regards,

--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
Twitter: http://twitter.com/fabriziomello
Alvaro Herrera
2013-01-03 16:17:33 UTC
Permalink
Post by Fabrízio de Royes Mello
As Peter said, we can start by adding it for a few commands in one release
(maybe first for shared objects), then a few more commands in the next
release, and so on... until we cover all commands...
No, he was describing a pessimistic scenario that he doesn't want us to
end up in.
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Kevin Grittner
2013-01-03 21:51:02 UTC
Permalink
Post by Robert Haas
these timestamps Should Not be captured or carried forward by
pg_dump.
If we put a creation time into pg_database or pg_class, then
streaming replication will, as a "physical" replication
mechanism, carry the timestamp forward into replicas
And in contrast, I'd expect Andres Freund's logical replication
infrastructure *NOT* to carry these dates over, but rather to
establish fresh new creation dates on a replica. (And from a
forensic perspective, that's a perfectly fine thing.)
I agree all around.
+1

My analogy would be to xmin in tuples. Anything that preserves that
should preserve table creation timestamp. If the tuples' xmin
values in the table receiving the data differ, the creation
timestamp should, too.
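
(The closest thing available today is the xmin of the catalog row itself,
for example:

SELECT relname, xmin FROM pg_class WHERE relname = 'my_table';

though that reflects the last transaction that rewrote the row - any
ALTER resets it - rather than the creation time.)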

In my experience, this would have been valuable forensic
information many times. Preserving xmin rather than aggressively
freezing has never been, and would never have been, useful to me.

-Kevin
Andrew Dunstan
2013-01-03 22:11:28 UTC
Permalink
Post by Kevin Grittner
Post by Robert Haas
these timestamps Should Not be captured or carried forward by
pg_dump.
If we put a creation time into pg_database or pg_class, then
streaming replication will, as a "physical" replication
mechanism, carry the timestamp forward into replicas
And in contrast, I'd expect Andres Freund's logical replication
infrastructure *NOT* to carry these dates over, but rather to
establish fresh new creation dates on a replica. (And from a
forensic perspective, that's a perfectly fine thing.)
I agree all around.
+1
My analogy would be to xmin in tuples. Anything that preserves that
should preserve table creation timestamp. If the tuples' xmin
values in the table receiving the data differ, the creation
timestamp should, too.
In my experience, this would have been valuable forensic
information many times. Preserving xmin rather than aggressively
freezing has never been, and would never have been, useful to me.
I don't especially have a horse in this race, but ISTM that if you want
the information, you want it to be able to persist across dump/restore,
at least optionally. If you can happily lose it when you're forced to
recover using a logical dump, then it's not that important to you.

cheers

andrew
Kevin Grittner
2013-01-03 22:30:49 UTC
Permalink
Post by Andrew Dunstan
I don't especially have a horse in this race, but ISTM that if you want
the information, you want it to be able to persist across dump/restore,
at least optionally. If you can happily lose it when you're forced to
recover using a logical dump, then it's not that important to you.
On that point I guess we will just disagree. In my experience, if
you are OK with a periodic pg_dump for your primary backup
technique, then the data is just not that important to you. And if
you drop and re-create a table from pg_dump output, that event is
worth noting -- I would rather see the timestamp of applying the
pg_dump output.

When it comes to forensics, why don't we feel that it is worth
preserving the next available xid and every tuple's xmin and xmax
through pg_dump? I don't think we should, but the arguments against
trying to do it seem similar to me. They are newly created tables
when you run the SQL generated by pg_dump, with fresh rows and
indexes. To pretend otherwise seems to me to reduce the value of
the feature.

On the other hand, having one central way to deal with it for all
object types seems to increase the value of the feature.

-Kevin
Joshua D. Drake
2013-01-03 22:45:11 UTC
Permalink
Post by Kevin Grittner
Post by Andrew Dunstan
I don't especially have a horse in this race, but ISTM that if you want
the information, you want it to be able to persist across dump/restore,
at least optionally. If you can happily lose it when you're forced to
recover using a logical dump, then it's not that important to you.
On that point I guess we will just disagree. In my experience, if
you are OK with a periodic pg_dump for your primary backup
technique, then the data is just not that important to you.
Or the data doesn't change that much, but in principle I agree with you.

JD
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
High Availability, Oracle Conversion, Postgres-XC
@cmdpromptinc - 509-416-6579