Using PostgreSQL with Entity Framework

The most widespread O/R (object-relational) mapper for the .NET platform is the Entity Framework. It is most often used in combination with Microsoft SQL Server as the database. But the architecture of the Entity Framework allows it to be used with other databases as well. A popular and reliable open-source SQL database is PostgreSQL. This article shows how to use a PostgreSQL database with the Entity Framework.

Installing the Data Provider

First you need an Entity Framework data provider for PostgreSQL. It is called Npgsql. You can install it via NuGet. If you use Entity Framework 6 the package is called EntityFramework6.Npgsql:

> Install-Package EntityFramework6.Npgsql

If you use Entity Framework Core for the new .NET Core platform, you have to install a different package:

> Install-Package Npgsql.EntityFrameworkCore.PostgreSQL

Configuring the Data Provider

The next step is to configure the data provider and the database connection string in the App.config file of your project, for example:

<configuration>
  <!-- ... -->

  <entityFramework>
    <providers>
      <provider invariantName="Npgsql"
         type="Npgsql.NpgsqlServices, EntityFramework6.Npgsql" />
    </providers>
  </entityFramework>

  <system.data>
    <DbProviderFactories>
      <add name="Npgsql Data Provider"
           invariant="Npgsql"
           description="Data Provider for PostgreSQL"
           type="Npgsql.NpgsqlFactory, Npgsql"
           support="FF" />
    </DbProviderFactories>
  </system.data>

  <connectionStrings>
    <add name="AppDatabaseConnectionString"
         connectionString="Server=localhost;Database=postgres"
         providerName="Npgsql" />
  </connectionStrings>

</configuration>

Possible parameters in the connection string are Server, Port, Database, User Id and Password. Here’s an example connection string using all parameters:

Server=192.168.0.42;Port=5432;Database=mydatabase;User Id=postgres;Password=topsecret

The database context class

To use the configured database you create a database context class in the application code:

class AppDatabase : DbContext
{
  private readonly string schema;

  public AppDatabase(string schema)
    : base("AppDatabaseConnectionString")
  {
    this.schema = schema;
  }

  public DbSet<User> Users { get; set; }

  protected override void OnModelCreating(DbModelBuilder builder)
  {
    builder.HasDefaultSchema(this.schema);
    base.OnModelCreating(builder);
  }
}

The parameter to the base constructor call is the name of the connection string configured in App.config. In this example the OnModelCreating method is overridden to set the name of the schema to use. Here the schema name is injected via the constructor. For PostgreSQL the default schema is called “public”:

using (var db = new AppDatabase("public"))
{
  var admin = db.Users.First(user => user.UserName == "admin");
  // ...
}

The Entity Framework mapping of entity names and properties is case sensitive. To make the mapping work you have to preserve the case when creating the tables by putting the table and column names in double quotes:

create table public."Users" ("Id" bigserial primary key, "UserName" text not null);
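For comparison, the same statement without double quotes would not match the default mapping, because PostgreSQL folds unquoted identifiers to lower case:

-- without quotes this creates the table "users" with columns "id" and "username"
create table public.Users (Id bigserial primary key, UserName text not null);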

With these basics you’re now set up to use PostgreSQL in combination with the Entity Framework.

 


Monitoring long running operations in Oracle databases

We regularly work with database tables with hundreds of millions of entries. Some operations on these tables can take a while. Not necessarily queries, but preparatory operations that make queries fast, for example the creation of materialized views or indexes.

The problem with most SQL tools is: once you run your SQL statement you have no indication of how long it will take to complete the operation. No progress bar and no display of the remaining time. Will it take minutes or hours?

Oracle databases have a nice feature I learned about recently that can answer these questions. Operations that take longer than 6 seconds to complete are considered “long operations” and get an entry in a special view called V$SESSION_LONGOPS.

This view not only contains the currently running long operations but also the history of completed long operations. You can query the status of the current long operations like this:

SELECT * FROM V$SESSION_LONGOPS 
  WHERE time_remaining > 0;

This view contains columns like

  • TARGET (table or view on which the operation is carried out)
  • SOFAR (units of work done so far)
  • TOTALWORK (total units of work)
  • ELAPSED_SECONDS (number of elapsed seconds from the start of the operation)

Based on these values the view offers another column, which contains the estimated remaining time in seconds: TIME_REMAINING.
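Based on the SOFAR and TOTALWORK columns you can also compute the progress as a percentage; a sketch of such a query:

SELECT target, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS percent_done,
       elapsed_seconds, time_remaining
  FROM V$SESSION_LONGOPS
 WHERE time_remaining > 0;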

This remaining time is really just an estimate, because it assumes that long running operations progress linearly, which is not necessarily true. Also, some SQL statements can spawn multiple consecutive operations, e.g. first a “Table Scan” operation and then a “Sort Output” operation, which only becomes visible after the first operation has finished. Nevertheless, I found this feature quite helpful to get a rough idea of how long I will have to wait, and to inform decisions such as whether I really want to run an operation to completion or cancel it.

Modern developer Issue 4: My SQL toolbox

SQL is such a basic and useful language, but the underlying thinking is non-intuitive when you come from imperative languages like Java, Ruby and similar.
SQL is centered around sets and operations on them. The straightforward solution might not be the best one.

Limit

Let’s say we need the maximum value in a certain set. Easy:

select max(value) from table

But what if we need the row with the maximum value? Just adding the other columns won’t work, since aggregations only work together with other aggregations and group bys. A self join may seem straightforward, but it is better to avoid joins altogether:

select * from (select * from table order by value desc) where rownum<=1
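The rownum keyword is Oracle syntax; on databases that support the SQL standard fetch first clause, the same idea can be written like this:

select * from (select * from table order by value desc) fetch first 1 row only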

Group by and having

Even duplicate values can be found without joining:

select value from table group by value having count(*) > 1

Grouping is a powerful operation in SQL land:

select max(value), TO_CHAR(time, 'YYYY-MM') from table group by TO_CHAR(time, 'YYYY-MM')

This finds the maximum value for each month.

Mapping with outer joins

SQL is also good for calculations. Say we have one table with values and another with a mapping, like a precalculated log table. Joining both gets you the log of each of your values:

select t.value, log.y from table t left outer join log_table log on t.value=log.x

Simple calculations

We can even use linear interpolation between two values. Say we have the function values stored only for integer arguments, but we need values between them, and these can be interpolated linearly.

select t.value, (ceil(t.value)-t.value)*f.y + (t.value-floor(t.value))*g.y from table t left outer join function_table f on floor(t.value)=f.x left outer join function_table g on ceil(t.value)=g.x

When you need to calculate results for large sets of values and insert them into another table, it can be better to calculate and insert in a single SQL step, avoiding the conversion and wrapping overhead of a general purpose programming language.
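As a sketch (the target table interpolated_values is made up), the interpolation above could be calculated and inserted in one statement:

insert into interpolated_values (value, y)
select t.value, (ceil(t.value)-t.value)*f.y + (t.value-floor(t.value))*g.y
from table t
left outer join function_table f on floor(t.value)=f.x
left outer join function_table g on ceil(t.value)=g.x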

Conditions

Another often overlooked feature is the conditional case expression:

select case when MOD(t.value, 2) = 0 then 'divisible by 2' else 'not divisible by 2' end from table t

This handful of operations is my basic toolbox when working with SQL; almost all queries I need can be formulated with them.

Dates and timestamps

One last reminder: when you work with time, always specify the desired time zone in your query.

Monitoring data integrity with health checks

An important aspect of systems that are backed by database storage is maintaining data integrity. Most relational databases offer the possibility to define constraints in order to maintain data integrity, usually referential integrity and entity integrity. Typical constraints are foreign key constraints, not-null constraints, unique constraints and primary key constraints.

SQL also provides the CHECK constraint, which allows you to specify a condition on each row in a table:

ALTER TABLE table_name ADD CONSTRAINT
   constraint_name CHECK ( predicate )

For example:

CHECK (AGE >= 18)
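Put together, a complete statement for a hypothetical persons table might look like this:

ALTER TABLE persons ADD CONSTRAINT
   persons_age_check CHECK (AGE >= 18)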

However, these check constraints are limited. They can’t be defined on views, they can’t refer to columns in other tables and they can’t include subqueries.

Health checks

In order to monitor data integrity on a higher level that is closer to the business rules of the domain, we have deployed a technique that we call health checks in some of our applications.

These health checks are database queries, which check that certain constraints are met in accordance with the business rules. The queries are usually designed to return an empty result set on success and to return the faulty data records otherwise.
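For example, if a business rule of a hypothetical shop system says that every shipped order must have a shipping date, a corresponding health check query could look like this (table and column names are made up):

-- returns the faulty records; an empty result means the check passed
SELECT * FROM orders
 WHERE status = 'SHIPPED'
   AND shipped_at IS NULL;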

The health checks are run periodically. For example, we use a Jenkins job to trigger the health checks of one of our web applications every couple of hours. In this case we don’t query the database directly; the application does, and it returns the success or failure states of the health checks in the response to an HTTP GET request.

This way we can detect problems in the stored data in a timely manner and take countermeasures. Of course, if the application is bug free these health checks should never fail, and in fact they rarely do. We mostly use the health checks as an addition to regression tests after a bug fix, to ensure and monitor that the unwanted state in the data will never happen again in the future.

IS NULL or IS NOT NULL, that is the question

Today I’ll demonstrate a curiosity of SQL regarding the NOT IN operator in combination with a subquery and NULL values.

Let’s assume we have two database tables, users and profiles:

 users              profiles
+--------------+  +-------------+
| id  username |  | id  user_id |
| 0   'joe'    |  | 0   2       |
| 1   'kate'   |  | 1   0       |
| 2   'john'   |  | 2   NULL    |
| 3   'maria'  |  +-------------+
+--------------+
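If you want to reproduce the example yourself, the two tables could be set up like this (a minimal sketch; the column types are assumptions):

CREATE TABLE users (id INTEGER PRIMARY KEY, username VARCHAR(100) NOT NULL);
CREATE TABLE profiles (id INTEGER PRIMARY KEY, user_id INTEGER);

INSERT INTO users VALUES (0, 'joe');
INSERT INTO users VALUES (1, 'kate');
INSERT INTO users VALUES (2, 'john');
INSERT INTO users VALUES (3, 'maria');

INSERT INTO profiles VALUES (0, 2);
INSERT INTO profiles VALUES (1, 0);
INSERT INTO profiles VALUES (2, NULL);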

We want to find all users which have no associated profile. The intuitive solution would be a negated membership test (“NOT IN”) on the result set of a subquery:

SELECT * FROM users WHERE id NOT IN (SELECT user_id FROM profiles);

The anticipated result is:

+---------------+
| id  username  |
| 1   'kate'    |
| 3   'maria'   |
+---------------+

However, the actual result is an empty set:

+--------------+
| id  username |
+--------------+

This is irritating, especially since the non-negated form produces a sensible result:

SELECT * FROM users WHERE id IN (SELECT user_id FROM profiles);

+--------------+
| id  username |
| 0   'joe'    |
| 2   'john'   |
+--------------+

So why does the NOT IN operator produce this strange result?

To understand what happens we replace the result of the subquery with a set literal:

SELECT * FROM users WHERE id NOT IN (2, 0, NULL);

This statement is internally translated to:

SELECT * FROM users WHERE id<>2 AND id<>0 AND id<>NULL;

And here comes the twist: a field<>NULL clause evaluates to UNKNOWN in SQL, which is treated like FALSE in the WHERE clause. The desired clause would be id IS NOT NULL, but that is not what SQL uses here. Since the clauses are joined via AND, the whole condition never becomes TRUE, and as a consequence the result set is empty.

The result for the non-negated membership test (“IN”) can be explained as well. The IN clause is internally translated to:

SELECT * FROM users WHERE id=2 OR id=0 OR id=NULL;

A field=NULL clause evaluates to UNKNOWN as well. But in this case it is of no consequence, because the clauses are joined via OR: TRUE OR UNKNOWN still evaluates to TRUE, so rows matching one of the other values are returned.

Now that we know what’s going on, how can we fix it? There are two possibilities:

One is to use an outer join:

SELECT u.id FROM users u LEFT OUTER JOIN profiles p ON u.id=p.user_id WHERE p.id IS NULL;

The other option is to filter out all NULL values in the subquery:

SELECT id FROM users WHERE id NOT IN (SELECT user_id FROM profiles WHERE user_id IS NOT NULL);

Conclusion

Both field=NULL and field<>NULL evaluate to UNKNOWN in SQL. Unfortunately, SQL uses these clauses for IN and NOT IN set operations. The solution is to work around it.

Thinking in immutability

The way I learned programming is dictated by objects and state. In my thinking, data is packed into objects which are later modified to reflect changes over time. State and modification are a central modelling technique. For me, programming, and OOP in particular, revolved around this common theme. Mutating objects pervaded my thinking even beyond the code, into the database and even the architecture of the whole system.
Despite its advantages and ongoing efforts in the industry, I couldn’t help thinking: immutability is nice. I can use it in some cases and keep it quietly stored in a corner.
But it didn’t remain silent.
So I asked myself: How do you construct programs that build upon immutability? How do you (mostly) avoid mutable objects? How do you think in immutability?
The first step was to unlearn. No updates. No modifications. Read, create, copy. That’s about it. No more CRUD, only CR. No more SQL updates, only inserts.

Events and logs

To illustrate, I use a simple example: creating, translating, moving and deleting a point. In the traditional OO way it looks like this:

Point p = new Point(40, 30);
p.translateXBy(5);
p.moveTo(10, 20);
p.delete();

Or, using SQL, it might be something like this (omitting primary keys and where clauses here):

insert into points (x, y) values (40, 30)
update points p set p.x = p.x + 5
update points p set p.x = 10, p.y = 20
delete from points

In our memory (or database if we use one) every line updates our point:

Point p = new Point(40, 30); // p = {x: 40, y: 30}
p.translateXBy(5); // p = {x: 45, y: 30}
p.moveTo(10, 20); // p = {x: 10, y: 20}
p.delete(); // p = ?

But what if we do not store the results of the operations but the operations themselves? The events.
Imagine your state changes as a series of events. Just imagine.

new PointCreated(40, 30); // pointEvents = [{created[x: 40, y:30]}]
new PointTranslatedXBy(5); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}]
new PointMovedTo(10, 20); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}, {moved:[x: 10, y:20]}]
new PointDeleted(); // pointEvents = [{created[x: 40, y:30]}, {translated[x: 5]}, {moved:[x: 10, y:20]}, {deleted}]

Even in the database we would just use inserts, no more updates and no more deletes. The events are stored in a log (incidentally, the database internally does the same thing). A log is a fully ordered, append-only queue. Once we use and store events we get some extras besides immutability: an audit trail, an undo stack, recovery, …
We could externalize the event stream into a message queue and monitor it, replay it to reproduce bugs, or distribute it. The possibilities are endless.
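As a sketch of the “inserts only” idea (the table layout is made up, using PostgreSQL syntax), the point events could be stored like this:

create table point_events (
  seq      bigserial primary key, -- append-only ordering of the log
  point_id bigint not null,
  event    text not null,         -- 'created', 'translated_x', 'moved', 'deleted'
  x        numeric,
  y        numeric
);

insert into point_events (point_id, event, x, y) values (1, 'created', 40, 30);
insert into point_events (point_id, event, x)    values (1, 'translated_x', 5);
insert into point_events (point_id, event, x, y) values (1, 'moved', 10, 20);
insert into point_events (point_id, event)       values (1, 'deleted');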

But. That’s all nice and fine. I have one more question: what’s the current state? A user should see the current state, and so should other parts of the system (not to mention that I, coming from a mutable state way of thinking, would also feel better seeing it).

So what’s the current state?

All events applied in order.

OK. But isn’t it expensive to do this all the time?

Yes!

Here another concept from databases helps us: materialized views. We can easily translate in our mind between the new immutable, event driven way and the old in-place update way. It is just the same data in different representations (if we are only interested in the current state). If we store the current state as a materialized view (or cache) alongside the event log, we can have both.
Every part of the program which needs the current state gets an immutable copy of it. If that part needs to know when something changes, it can observe the events and act accordingly. This way mutability is pushed to the borders, to the parts where the current state is shown (like the UI layer).

What’s your time, database?

Time is a difficult subject. Especially time zones and daylight saving time. Sounds easy? Well, take a look.
Adding layers in software development complicates the issue, and every layer has its own view of time. Let’s start with an example: we write a simple application which stores time based data in a SQL database, e.g. Oracle. The table has a column named ‘at’. Since we don’t want to mess around with timezones, we use a column type without timezone information; in Oracle this would be ‘Date’ if we do not need fractional seconds and ‘Timestamp’ if we need them. In Java with plain JDBC we can extract it with a call to ‘getTimestamp’:

Date timestamp = resultSet.getTimestamp("at");

The problem: now we have a timestamp in our local timezone. Where was it converted? Oracle itself has two timezone settings: one for the database and one for the session. We can query them with:

select DBTIMEZONE from dual;

and

select SESSIONTIMEZONE from dual;

Oracle first uses the time zone set in the session, then the database one. The results from those queries are interesting though: some return a named timezone like ‘Europe/Berlin’, others return an offset like ‘+01:00’. Here a first subtle detail is important: a named timezone uses the offset and the daylight saving time of the respective timezone, while the offset setting only uses the offset and no daylight saving. So ‘+01:00’ would just add 1 hour to UTC regardless of the date.
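If you want to experiment with the session setting, it can be changed with an ALTER SESSION statement, either with a named timezone or with a fixed offset:

ALTER SESSION SET TIME_ZONE = 'Europe/Berlin';
ALTER SESSION SET TIME_ZONE = '+01:00';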
In our example changing these two settings does not change our time conversion. The timezone settings are for another column type: timestamp with (local) timezone.
Going up one layer the JDBC API reveals an interesting tidbit:

Timestamp getTimestamp(int columnIndex)
throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language.

Sounds about right, but wait, there’s another method:

Timestamp getTimestamp(int columnIndex, Calendar cal)
throws SQLException

Retrieves the value of the designated column in the current row of this ResultSet object as a java.sql.Timestamp object in the Java programming language. This method uses the given calendar to construct an appropriate millisecond value for the timestamp if the underlying database does not store timezone information.

Just as in Oracle we can use a named timezone or an offset:

Date timestamp = resultSet.getTimestamp("at", Calendar.getInstance(TimeZone.getTimeZone("GMT+1:00")));

This way we have control over how the time is extracted from the database. The next time you work with time based information, take a close look. And if you work with Java, use Joda Time.