Optimizing PostgreSQL for .NET Apps
When building high-performance applications with .NET and PostgreSQL, database tuning can make the difference between a smooth, scalable product and a bottleneck-prone system. In this article, we’ll explore performance best practices, PostgreSQL configuration tweaks, and .NET-side optimizations for faster queries and better resource utilization.
Why PostgreSQL for .NET?
PostgreSQL is one of the most reliable, powerful, and extensible relational databases. Combined with the Npgsql driver, it offers native .NET support, strong ACID compliance, and advanced features like JSONB, full-text search, and partitioning.
Common Performance Bottlenecks
- Unoptimized indexes leading to slow reads.
- Connection overhead from excessive opening/closing.
- Large unbatched writes causing lock contention.
- Suboptimal query plans due to outdated statistics.
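The last point is often the easiest to fix. A minimal sketch of refreshing planner statistics and inspecting the resulting plan from .NET (the `users` table, its `created_at` column, and the connection string are illustrative placeholders):

```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class RefreshStatistics
{
    public static async Task Main()
    {
        // Placeholder connection string -- adjust for your environment.
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // ANALYZE refreshes the planner statistics for a table, helping
        // PostgreSQL pick better plans after large data changes.
        await using var analyze = new NpgsqlCommand("ANALYZE users;", conn);
        await analyze.ExecuteNonQueryAsync();

        // EXPLAIN prints the plan the optimizer now chooses.
        await using var explain = new NpgsqlCommand(
            "EXPLAIN SELECT * FROM users WHERE created_at > now() - interval '1 day'", conn);
        await using var reader = await explain.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            Console.WriteLine(reader.GetString(0));
    }
}
```

In production, autovacuum usually handles `ANALYZE` for you; running it manually is mainly useful right after bulk loads.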
PostgreSQL Configuration for Performance
Below are PostgreSQL parameters that often need tuning for production workloads.
| Parameter | Recommended Tuning for .NET Apps | Impact |
|---|---|---|
| max_connections | Size according to connection pool limits | Prevents overload |
| shared_buffers | 25–40% of system RAM | Faster caching |
| work_mem | 4–64MB depending on query complexity | Sort/hash performance |
| maintenance_work_mem | Higher (256MB+) for maintenance jobs | Faster index builds |
| effective_cache_size | 50–75% of total RAM | Query planner hints |
Example: postgresql.conf
```ini
max_connections = 200
shared_buffers = 4GB
work_mem = 16MB
maintenance_work_mem = 512MB
effective_cache_size = 12GB
```
Data Modeling Best Practices for .NET
- Use appropriate data types (`uuid` for GUIDs, `timestamp with time zone` for dates).
- Avoid `SELECT *` in production code.
- Leverage JSONB for flexible schema parts.
- Apply partitioning for large time-series datasets.
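Putting these guidelines together, here is a sketch of a hypothetical `events` table that combines `uuid`, `timestamp with time zone`, `jsonb`, and monthly range partitioning (all table and partition names are illustrative):

```csharp
using Npgsql;
using System.Threading.Tasks;

class CreateSchemaExample
{
    public static async Task Main()
    {
        // Placeholder connection string -- adjust for your environment.
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // uuid for the key, timestamptz for dates, jsonb for flexible attributes,
        // and range partitioning by month for time-series data. Note that the
        // primary key must include the partition column.
        var ddl = @"
            CREATE TABLE IF NOT EXISTS events (
                id          uuid NOT NULL,
                occurred_at timestamp with time zone NOT NULL,
                payload     jsonb,
                PRIMARY KEY (id, occurred_at)
            ) PARTITION BY RANGE (occurred_at);

            CREATE TABLE IF NOT EXISTS events_2024_01
                PARTITION OF events
                FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');";

        await using var cmd = new NpgsqlCommand(ddl, conn);
        await cmd.ExecuteNonQueryAsync();
    }
}
```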
Connection Pooling in .NET
.NET apps using Npgsql benefit greatly from pooling. Example configuration:
```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb;" +
                         "Pooling=true;Minimum Pool Size=5;Maximum Pool Size=50;Timeout=15";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        await using var cmd = new NpgsqlCommand("SELECT COUNT(*) FROM users", conn);
        var count = await cmd.ExecuteScalarAsync();
        Console.WriteLine($"Users count: {count}");
    }
}
```
Key Notes:
- `Pooling=true` enables connection reuse.
- Adjust `Minimum Pool Size` and `Maximum Pool Size` based on workload.
- Monitor with `pg_stat_activity` to ensure pool limits are optimal.
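A sketch of that `pg_stat_activity` check from .NET, counting server-side connections per state so you can compare against `Maximum Pool Size` (connection string is a placeholder):

```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class PoolMonitor
{
    public static async Task Main()
    {
        // Placeholder connection string -- adjust for your environment.
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // Connections per state for the current database; state is NULL
        // for background processes, hence the null check below.
        const string sql = @"
            SELECT state, count(*)
            FROM pg_stat_activity
            WHERE datname = current_database()
            GROUP BY state;";

        await using var cmd = new NpgsqlCommand(sql, conn);
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            var state = reader.IsDBNull(0) ? "unknown" : reader.GetString(0);
            Console.WriteLine($"{state}: {reader.GetInt64(1)}");
        }
    }
}
```

A high count of `idle` connections near your pool maximum suggests the pool is oversized; frequent waits for a free connection suggest the opposite.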
Monitoring and Metrics
- Use `pg_stat_statements` to track slow queries.
- Monitor the cache hit ratio:

```sql
SELECT sum(blks_hit) / sum(blks_hit + blks_read) AS cache_hit_ratio
FROM pg_stat_database;
```

- Integrate with Grafana + Prometheus for live metrics.
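Note that `pg_stat_statements` must first be enabled via `shared_preload_libraries` and `CREATE EXTENSION pg_stat_statements`. Once it is, a sketch of pulling the slowest statements from .NET (column names match PostgreSQL 13+, where the timing columns are `total_exec_time`/`mean_exec_time`; connection string is a placeholder):

```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class SlowQueryReport
{
    public static async Task Main()
    {
        // Placeholder connection string -- adjust for your environment.
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // Top 5 statements by mean execution time.
        const string sql = @"
            SELECT query, calls, mean_exec_time
            FROM pg_stat_statements
            ORDER BY mean_exec_time DESC
            LIMIT 5;";

        await using var cmd = new NpgsqlCommand(sql, conn);
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
            Console.WriteLine(
                $"{reader.GetDouble(2):F1} ms avg, {reader.GetInt64(1)} calls: {reader.GetString(0)}");
    }
}
```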
Full Example: Optimized Bulk Insert in .NET
```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class BulkInsertExample
{
    public static async Task Main()
    {
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb;Pooling=true";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // COPY ... FROM STDIN (FORMAT BINARY) is the fastest way to load bulk data.
        await using var writer = await conn.BeginBinaryImportAsync(
            "COPY products (id, name, price) FROM STDIN (FORMAT BINARY)");

        for (int i = 0; i < 1000; i++)
        {
            await writer.StartRowAsync();
            await writer.WriteAsync(Guid.NewGuid(), NpgsqlTypes.NpgsqlDbType.Uuid);
            await writer.WriteAsync($"Product {i}", NpgsqlTypes.NpgsqlDbType.Text);
            await writer.WriteAsync(i * 10.5m, NpgsqlTypes.NpgsqlDbType.Numeric);
        }

        await writer.CompleteAsync();
        Console.WriteLine("Bulk insert completed.");
    }
}
```
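When `COPY` doesn't fit (for example, upserts or moderate batch sizes), Npgsql's `NpgsqlBatch` (available since Npgsql 6.0) sends several statements in a single network round trip. A sketch against the same hypothetical `products` table:

```csharp
using Npgsql;
using System;
using System.Threading.Tasks;

class BatchInsertExample
{
    public static async Task Main()
    {
        // Placeholder connection string -- adjust for your environment.
        var connString = "Host=localhost;Username=appuser;Password=secret;Database=mydb;Pooling=true";

        await using var conn = new NpgsqlConnection(connString);
        await conn.OpenAsync();

        // One batch = one round trip for all commands, unlike issuing
        // 100 separate INSERT statements.
        await using var batch = new NpgsqlBatch(conn);
        for (int i = 0; i < 100; i++)
        {
            var cmd = new NpgsqlBatchCommand(
                "INSERT INTO products (id, name, price) VALUES ($1, $2, $3)");
            // Positional ($1, $2, ...) parameters use unnamed NpgsqlParameters.
            cmd.Parameters.Add(new NpgsqlParameter { Value = Guid.NewGuid() });
            cmd.Parameters.Add(new NpgsqlParameter { Value = $"Product {i}" });
            cmd.Parameters.Add(new NpgsqlParameter { Value = i * 10.5m });
            batch.BatchCommands.Add(cmd);
        }

        await batch.ExecuteNonQueryAsync();
        Console.WriteLine("Batch insert completed.");
    }
}
```

As a rough rule of thumb, prefer `COPY` for large one-shot loads and batching for ongoing writes that need per-row SQL semantics.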
Connect with me on LinkedIn to share your PostgreSQL optimization tips for .NET.