The important thing to remember is to use COPY ... FROM STDIN, not INSERT, for bulk loading into PostgreSQL. Most PostgreSQL drivers support COPY from the client, though it's not always available through generic database APIs.
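Roughly what that looks like with psycopg 3 (the table and column names here are just made up for the example):

    import psycopg

    rows = [(1, "a"), (2, "b"), (3, "c")]

    # COPY ... FROM STDIN streams all the rows over one statement
    # instead of issuing one INSERT per row.
    with psycopg.connect("dbname=test") as conn:
        with conn.cursor() as cur:
            with cur.copy("COPY events (id, payload) FROM STDIN") as copy:
                for row in rows:
                    copy.write_row(row)
        # the connection context manager commits on exit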
COPY commands typically write hundreds of thousands of rows per second on a large server/cluster. It can be useful to write over multiple connections, but there's rarely a benefit past ~16.
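If you do want to fan the load out over a few connections, a rough sketch of the idea (the chunking strategy, DSN, and table name are all assumptions):

    from concurrent.futures import ProcessPoolExecutor
    import psycopg

    DSN = "dbname=test"
    N_WORKERS = 8  # diminishing returns well before ~16 on most setups

    def load_chunk(path):
        # one connection per worker, each streaming its own file chunk
        with psycopg.connect(DSN) as conn, conn.cursor() as cur:
            with open(path, "rb") as f, cur.copy("COPY events FROM STDIN (FORMAT csv)") as copy:
                while data := f.read(1 << 20):
                    copy.write(data)

    def load_all(chunk_paths):
        with ProcessPoolExecutor(max_workers=N_WORKERS) as pool:
            list(pool.map(load_chunk, chunk_paths))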
The standard API for database access in Python is DB-API 2, which doesn't include support for COPY, so each driver implements it differently.
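For example, psycopg2 exposes it as copy_expert/copy_from on the cursor, rather than the cursor.copy() used in the psycopg 3 snippet above (file and table names are again assumptions):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    with conn, conn.cursor() as cur, open("events.csv") as f:
        # copy_expert takes the full COPY statement plus a file-like object
        cur.copy_expert("COPY events (id, payload) FROM STDIN (FORMAT csv)", f)
    conn.close()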
There's another aspect to this: the format of the data being ingested, which can be 'text', 'CSV', or 'binary'. If you're generating the file yourself you have a choice, though whether 'binary' is actually faster than 'CSV' I just don't know.
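The format is picked in the COPY statement itself, so the client-side plumbing stays the same either way; roughly:

    copy_text   = "COPY events FROM STDIN"                       # default 'text' format, tab-delimited
    copy_csv    = "COPY events FROM STDIN (FORMAT csv, HEADER)"  # CSV, optionally skipping a header row
    copy_binary = "COPY events FROM STDIN (FORMAT binary)"       # PostgreSQL's binary COPY format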
Worth noting that you can pass in data in a proprietary binary format. [1]
I've done it, not out of performance concerns, but because transcoding between proprietary binary formats with types is a lot saner than the alternative.
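For the curious, the framing of that binary format is simple enough to generate by hand. A minimal sketch for a hypothetical events(id int4, payload text) table: a fixed header, a 16-bit field count per row, a 32-bit length (or -1 for NULL) before each value, and a -1 trailer.

    import struct
    import io

    # signature, then 32-bit flags and 32-bit header-extension length, both zero
    PGCOPY_HEADER = b"PGCOPY\n\xff\r\n\x00" + struct.pack("!ii", 0, 0)
    PGCOPY_TRAILER = struct.pack("!h", -1)

    def encode_row(row_id, payload):
        out = struct.pack("!h", 2)            # number of fields in this tuple
        out += struct.pack("!ii", 4, row_id)  # int4: 4-byte length, then big-endian value
        if payload is None:
            out += struct.pack("!i", -1)      # NULL is a length of -1 with no data
        else:
            data = payload.encode("utf-8")
            out += struct.pack("!i", len(data)) + data
        return out

    buf = io.BytesIO()
    buf.write(PGCOPY_HEADER)
    for row in [(1, "a"), (2, None)]:
        buf.write(encode_row(*row))
    buf.write(PGCOPY_TRAILER)
    # buf.getvalue() can now be streamed to: COPY events FROM STDIN (FORMAT binary)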
One caveat to keep in mind when using the binary format is that arrays of custom types are not portable across databases because the serialized array contains the OID of the custom type, which may be different on the other end.
The other thing to keep in mind is that text or CSV can be much more compact for data sets with many small integers or NULLs. On the other hand, the binary format is much more compact for timestamps and floating point numbers. In general, binary format has lower parsing overhead.