An interesting convention. I'm personally of the opinion that since I'm
explicitly defining foreign key constraints and relationships in the database anyway, and those are easily viewed by querying the table definition, putting much more in the name itself is both redundant and confusing (or at least extra work and annoyance) when used outside the database.
I don't name the primary key field id to denote a type; I call it id because it holds the identifier for the current record. Likewise, I don't call the foreign relation identifier person_id because it holds a "person" and is an id; I call it person_id because it holds the identifier for a person record, and that's a valid description of what the field holds. The constraint tells how the relation is defined.
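As a minimal sketch of that point (using SQLite; the person/address tables are just hypothetical examples of mine), the FK constraint itself records the relation, and it's readable straight from the table definition:

```python
import sqlite3

# Hypothetical schema: person_id names what the column holds; the
# REFERENCES clause, not the name, defines the relation.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (
        id INTEGER PRIMARY KEY
    );
    CREATE TABLE address (
        id INTEGER PRIMARY KEY,
        person_id INTEGER REFERENCES person(id)
    );
""")

# The table definition is easily viewed by querying the catalog:
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'address'"
).fetchone()[0]
print(ddl)  # the stored DDL includes "REFERENCES person(id)"
```

Every major RDBMS has an equivalent catalog (information_schema, pg_catalog, etc.) that exposes the same information.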
> the PK for a table called Thing is explicitly named pkThing rather than letting SQL Server pick a name [2]

> for instance an index on Thing covering col1 and col2 is ix_Thing_col1_col2
So, do you call the fields that index covers col1_ix1 and col2_ix1 or something? If not, why denote the primary key with pk but not the indexed columns? Both have explicit definitions in the schema that specify exactly how they are defined, one a primary key, the other an index over multiple columns.
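The introspection point can be sketched in SQLite (the thing table and ix_thing_col1_col2 index here are illustrative; catalog mechanisms differ per RDBMS, but all of them expose this): the schema itself tells you which column is the PK and what each index covers, so encoding that in names is redundant.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE thing (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT);
    CREATE INDEX ix_thing_col1_col2 ON thing (col1, col2);
""")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# a non-zero pk flag marks the primary key column(s).
pk_cols = [r[1] for r in con.execute("PRAGMA table_info(thing)") if r[5]]
print(pk_cols)  # ['id']

# PRAGMA index_info rows are (seqno, cid, name): the columns the index
# covers, in order.
ix_cols = [r[2] for r in con.execute("PRAGMA index_info(ix_thing_col1_col2)")]
print(ix_cols)  # ['col1', 'col2']
```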
To me this sounds a bit like how Hungarian Notation[1] had its original purpose lost as it shifted over time to encompass a larger, less well defined set of behavior which didn't always deliver actual benefits.[2] To me, if the system is enforcing the values, and you can introspect the system, that's good enough.
That said, I did spend a while reading an IDEF1X explanation and HOWTO last night, and there are some interesting points and arguments there, so I'm not entirely set in stone with these views.
1: https://en.wikipedia.org/wiki/Hungarian_notation#Systems_vs....
2: TL;DR "Apps Hungarian" prefixed variable names with info about what they were supposed to contain at a high level, such as rwFoo containing the row of Foo. Systems Hungarian came along later and used the prefix to encode the type, such that iFoo might mean an integer Foo. The benefit of this in a language such as C or C++, which requires explicitly typing the variable anyway, is debatable.
3: http://www.softwaregems.com.au/Documents/Documentary%20Examp...