<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Security on UIPad Blog</title><link>https://blog.uipad.cn/en/tags/security/</link><description>Recent content in Security on UIPad Blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sat, 14 Mar 2026 21:00:00 +0800</lastBuildDate><atom:link href="https://blog.uipad.cn/en/tags/security/index.xml" rel="self" type="application/rss+xml"/><item><title>Cross-Cloud Database Resilience: PostgreSQL Streaming Replication with Dual-Cert SSL</title><link>https://blog.uipad.cn/en/post/2026-03/postgresql-cross-cloud-streaming-replication-with-ssl-en/</link><pubDate>Sat, 14 Mar 2026 21:00:00 +0800</pubDate><guid>https://blog.uipad.cn/en/post/2026-03/postgresql-cross-cloud-streaming-replication-with-ssl-en/</guid><description>&lt;p&gt;As an indie developer, &amp;ldquo;Single Point of Failure&amp;rdquo; is the stuff of nightmares. While Oracle Cloud offers a generous Free Tier, relying on a single provider for your production data is risky. To ensure data sovereignty and high availability, I recently implemented a cross-cloud PostgreSQL streaming replication setup.&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t just a cron job taking periodic backups; it&amp;rsquo;s a real-time &amp;ldquo;Shadow Database&amp;rdquo; architecture.&lt;/p&gt;
&lt;h2 id="1-the-need-for-real-time-replication"&gt;1. The Need for Real-Time Replication
&lt;/h2&gt;&lt;p&gt;Relying solely on daily snapshots had two major drawbacks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RPO (Recovery Point Objective)&lt;/strong&gt;: With the last backup taken at 2 AM, a crash at 11 PM meant losing nearly a full day of data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RTO (Recovery Time Objective)&lt;/strong&gt;: Restoring gigabytes of data from S3 into a fresh container during an outage takes far too long.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;My Goal&lt;/strong&gt;: Maintain a read-only replica on a separate VPS, ready to fail over within minutes.&lt;/p&gt;
&lt;h2 id="2-the-solution-mtls--streaming-replication"&gt;2. The Solution: mTLS + Streaming Replication
&lt;/h2&gt;&lt;p&gt;Since data travels over the public internet, &lt;strong&gt;security&lt;/strong&gt; was non-negotiable.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Encryption&lt;/strong&gt;: TLS is mandatory to prevent sniffing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Authentication&lt;/strong&gt;: Instead of passwords, I used &lt;strong&gt;Mutual TLS (mTLS)&lt;/strong&gt;. Only the server holding a client certificate signed by my private CA can connect to the primary.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Isolation&lt;/strong&gt;: Custom ports (e.g., &lt;code&gt;8765&lt;/code&gt;) combined with strict IP whitelisting via &lt;code&gt;iptables&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
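&lt;p&gt;For reference, a private CA and the two leaf certificates can be generated with &lt;code&gt;openssl&lt;/code&gt; along these lines (subjects, lifetimes and file names are illustrative, not my exact deployment; note that with &lt;code&gt;cert&lt;/code&gt; auth PostgreSQL matches the client certificate CN against the role name):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# 1. Self-signed private CA (illustrative subject and lifetime)
openssl req -new -x509 -days 3650 -nodes -subj /CN=my-private-ca \
  -keyout ca.key -out ca.crt

# 2. Server certificate for the primary
openssl req -new -nodes -subj /CN=primary.example.com \
  -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out server.crt

# 3. Client certificate for the standby; CN must equal the role name
openssl req -new -nodes -subj /CN=replication_user \
  -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out client.crt
&lt;/code&gt;&lt;/pre&gt;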
&lt;h2 id="3-hard-won-lessons-the-pitfalls"&gt;3. Hard-Won Lessons (The Pitfalls)
&lt;/h2&gt;&lt;h3 id="pitfall-1-docker-vs-system-firewall"&gt;Pitfall #1: Docker vs. System Firewall
&lt;/h3&gt;&lt;p&gt;Docker often bypasses standard &lt;code&gt;ufw&lt;/code&gt; rules by manipulating &lt;code&gt;iptables&lt;/code&gt; directly.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Explicitly insert rules into the &lt;code&gt;INPUT&lt;/code&gt; chain for the specific standby IP and persist them using &lt;code&gt;iptables-persistent&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
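&lt;p&gt;The rules themselves are only a few lines (IP placeholder as elsewhere in this post; the &lt;code&gt;DOCKER-USER&lt;/code&gt; variant is the chain Docker documents for filtering published ports, matched on the pre-DNAT port):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# Allow only the standby to reach the replication port, drop the rest
iptables -I INPUT -p tcp --dport 8765 -s &amp;lt;Standby_IP&amp;gt;/32 -j ACCEPT
iptables -A INPUT -p tcp --dport 8765 -j DROP

# If the port is published by Docker, filter in DOCKER-USER instead
iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8765 \
  ! -s &amp;lt;Standby_IP&amp;gt;/32 -j DROP

# Persist across reboots (from the iptables-persistent package)
netfilter-persistent save
&lt;/code&gt;&lt;/pre&gt;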
&lt;h3 id="pitfall-2-the-pg_basebackup-path-ghost"&gt;Pitfall #2: The pg_basebackup Path Ghost
&lt;/h3&gt;&lt;p&gt;When running &lt;code&gt;pg_basebackup&lt;/code&gt; from a temporary container, its &lt;code&gt;-R&lt;/code&gt; option bakes that container&amp;rsquo;s ephemeral paths (like &lt;code&gt;/temp_certs&lt;/code&gt;) into the generated &lt;code&gt;postgresql.auto.conf&lt;/code&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Manually edit the config post-clone to point to the permanent volume paths (e.g., &lt;code&gt;/var/lib/postgresql/data/certs/&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
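&lt;p&gt;In practice this boils down to a one-line rewrite after the clone (paths illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# postgresql.auto.conf was written inside the temp container, so its
# ssl* paths still point at /temp_certs; rewrite them to the real volume
sed -i s,/temp_certs,/var/lib/postgresql/data/certs,g \
  /opt/pgsql/data/postgresql.auto.conf

# Sanity check before starting the standby
grep sslcert /opt/pgsql/data/postgresql.auto.conf
&lt;/code&gt;&lt;/pre&gt;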
&lt;h3 id="pitfall-3-permissions-the-999-rule"&gt;Pitfall #3: Permissions (The 999 Rule)
&lt;/h3&gt;&lt;p&gt;PostgreSQL is notoriously picky about certificate permissions.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Error&lt;/strong&gt;: &lt;code&gt;could not open file &amp;quot;server.key&amp;quot;: Permission denied&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Run &lt;code&gt;chown -R 999:999&lt;/code&gt; on the host data directory. Even if it looks like &lt;code&gt;root&lt;/code&gt; on the host, the container process must own it as UID &lt;code&gt;999&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
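&lt;p&gt;Concretely, two commands on the host settle this (the official &lt;code&gt;postgres&lt;/code&gt; image runs as UID/GID 999; path illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# Hand the whole data directory to the in-container postgres user
chown -R 999:999 /opt/pgsql/data

# PostgreSQL also refuses a group- or world-readable private key
chmod 600 /opt/pgsql/data/certs/server.key
&lt;/code&gt;&lt;/pre&gt;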
&lt;h2 id="4-key-configurations"&gt;4. Key Configurations
&lt;/h2&gt;&lt;h3 id="primary-pg_hbaconf"&gt;Primary &lt;code&gt;pg_hba.conf&lt;/code&gt;
&lt;/h3&gt;&lt;pre tabindex="0"&gt;&lt;code class="language-conf" data-lang="conf"&gt;# Only allow replication_user via certificate-based auth
hostssl replication replication_user &amp;lt;Standby_IP&amp;gt;/32 cert
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="standby-postgresqlautoconf"&gt;Standby &lt;code&gt;postgresql.auto.conf&lt;/code&gt;
&lt;/h3&gt;&lt;pre tabindex="0"&gt;&lt;code class="language-conf" data-lang="conf"&gt;primary_conninfo = &amp;#39;user=replication_user host=&amp;lt;Primary_IP&amp;gt; port=8765 sslmode=verify-ca sslcert=/path/to/client.crt sslkey=/path/to/client.key sslrootcert=/path/to/ca.crt&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id="5-maintenance-why-i-switched-to-bind-mounts"&gt;5. Maintenance: Why I Switched to Bind Mounts
&lt;/h2&gt;&lt;p&gt;I moved away from Docker Named Volumes to &lt;strong&gt;Bind Mounts&lt;/strong&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Reason&lt;/strong&gt;: When migrating the standby to a new server, a simple &lt;code&gt;tar&lt;/code&gt; of &lt;code&gt;/opt/pgsql/data&lt;/code&gt; is transparent and portable. No more hunting for opaque Docker volume hashes, and migrations take a fraction of the time.&lt;/li&gt;
&lt;/ul&gt;
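&lt;p&gt;A standby migration then reduces to stop, archive, unpack, start (host paths and service name illustrative):&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code class="language-bash" data-lang="bash"&gt;# On the old standby: stop cleanly, then archive the bind mount
docker compose stop db
tar czf pgdata.tar.gz -C /opt/pgsql data

# On the new server: unpack as root so UID 999 ownership survives
tar xzf pgdata.tar.gz -C /opt/pgsql
docker compose up -d db
&lt;/code&gt;&lt;/pre&gt;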
&lt;h2 id="conclusion"&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;High Availability isn&amp;rsquo;t about showing off; it&amp;rsquo;s about sleeping better at night. Under Uptime Kuma monitoring, this setup currently holds near-zero replication lag.&lt;/p&gt;</description></item></channel></rss>