<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Pratik N Borkar's blog]]></title><description><![CDATA[Pratik N Borkar's blog]]></description><link>https://blog.pratiknborkar.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1768299558700/217961be-d730-4bdf-8fff-517c5e4f2cc6.png</url><title>Pratik N Borkar&apos;s blog</title><link>https://blog.pratiknborkar.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 20:06:12 GMT</lastBuildDate><atom:link href="https://blog.pratiknborkar.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Implementing Dual-Stack (IPv4 & IPv6) in Oracle Kubernetes Engine (OKE): Pod Networking and Traffic Flow Explained]]></title><description><![CDATA[As IPv4 exhaustion becomes increasingly real, dual-stack Kubernetes is no longer just an experiment it’s a strategic decision. 
Recently, I implemented and validated a dual-stack cluster in Oracle Kubernetes Engine on Oracle Cloud Infrastructure, and ...]]></description><link>https://blog.pratiknborkar.com/oracle-kubernetes-engine-oke-dual-stack-ipv4-ipv6-pod-networking</link><guid isPermaLink="true">https://blog.pratiknborkar.com/oracle-kubernetes-engine-oke-dual-stack-ipv4-ipv6-pod-networking</guid><category><![CDATA[oke]]></category><category><![CDATA[Oracle]]></category><category><![CDATA[Oracle Cloud]]></category><category><![CDATA[Devops]]></category><category><![CDATA[SRE]]></category><category><![CDATA[SRE devops]]></category><category><![CDATA[ipv6]]></category><category><![CDATA[IPv4]]></category><category><![CDATA[networking]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Wed, 11 Feb 2026 23:25:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770852312784/48dc37b5-9adf-4d8d-8b3d-0e33a52f3020.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As IPv4 exhaustion becomes increasingly real, dual-stack Kubernetes is no longer just an experiment; it’s a strategic decision. Recently, I implemented and validated a dual-stack cluster in Oracle Kubernetes Engine on Oracle Cloud Infrastructure, and this article shares the real networking behavior behind it: not just configuration steps, but how traffic actually flows.</p>
<p>This is not theory. This is what happens in a live cluster.</p>
<h3 id="heading-what-dual-stack-really-means-in-oke">What Dual-Stack Really Means in OKE</h3>
<p>In a dual-stack Kubernetes cluster, every Pod receives:</p>
<ul>
<li><p>One IPv4 address</p>
</li>
<li><p>One IPv6 address</p>
</li>
</ul>
<p>Both addresses are active simultaneously.</p>
<p>This enables:</p>
<ul>
<li><p>Backward compatibility with IPv4 systems</p>
</li>
<li><p>Native IPv6 communication</p>
</li>
<li><p>Future-proof cloud networking</p>
</li>
</ul>
<p>In OCI, dual-stack works natively with VCN networking. There is no overlay tunnel involved. Pod IPs are directly routable inside the VCN.</p>
<p>That architectural decision makes a significant difference in how traffic behaves.</p>
<h3 id="heading-how-a-pod-receives-both-ipv4-and-ipv6">How a Pod Receives Both IPv4 and IPv6</h3>
<p>When a Pod is scheduled onto a node, the OCI VCN-native CNI allocates IP addresses from the configured Pod subnet.</p>
<p>For example, a single Pod may receive:</p>
<pre><code class="lang-plaintext">IPv4  → 100.0.5.102
IPv6  → fd00:100:0:4:0:6ed9:2497:6dd8
</code></pre>
<p>Both addresses are bound to the Pod’s network interface.</p>
<p>To confirm this in a running cluster:</p>
<pre><code class="lang-plaintext">PRATIK_BOR@cloudshell:~ (ap-sydney-1)$ kubectl get pods -A \
&gt; -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,IPv4:.status.podIPs[0].ip,IPv6:.status.podIPs[1].ip
NS            NAME                                   IPv4          IPv6
default       nginx-568f7cfd68-g4h2c                 100.0.5.102   fd00:100:0:4:0:6ed9:2497:6dd8
default       nginx-568f7cfd68-pvgct                 100.0.5.215   fd00:100:0:4:0:bc9b:fecf:997
default       nginx-568f7cfd68-vprr8                 100.0.5.106   fd00:100:0:4:0:c05c:cfe5:7593
kube-system   coredns-bfd957d6b-ddr79                100.0.5.110   fd00:100:0:4:0:a800:ed61:1c0d
kube-system   coredns-bfd957d6b-slxml                100.0.5.104   fd00:100:0:4:0:3717:f114:d94d
kube-system   coredns-bfd957d6b-w2tgj                100.0.5.122   fd00:100:0:4:0:d046:deaf:a1f0
kube-system   csi-oci-node-rfmkw                     100.0.3.90    fd00:100:0:3:0:d02f:752e:66c7
kube-system   csi-oci-node-s2thk                     100.0.3.117   fd00:100:0:3:0:eab9:9713:9d05
kube-system   csi-oci-node-tc5rc                     100.0.3.174   fd00:100:0:3:0:d4c0:26f3:8990
kube-system   kube-dns-autoscaler-7f9c9ddb55-c4vdd   100.0.5.125   fd00:100:0:4:0:6eee:830e:daf7
kube-system   kube-proxy-5cwj5                       100.0.3.117   fd00:100:0:3:0:eab9:9713:9d05
kube-system   kube-proxy-pb427                       100.0.3.90    fd00:100:0:3:0:d02f:752e:66c7
kube-system   kube-proxy-zldlk                       100.0.3.174   fd00:100:0:3:0:d4c0:26f3:8990
kube-system   proxymux-client-g9bhp                  100.0.3.174   fd00:100:0:3:0:d4c0:26f3:8990
kube-system   proxymux-client-tbqdm                  100.0.3.90    fd00:100:0:3:0:d02f:752e:66c7
kube-system   proxymux-client-w9cdf                  100.0.3.117   fd00:100:0:3:0:eab9:9713:9d05
kube-system   vcn-native-ip-cni-8qfhn                100.0.3.117   fd00:100:0:3:0:eab9:9713:9d05
kube-system   vcn-native-ip-cni-fqdx7                100.0.3.90    fd00:100:0:3:0:d02f:752e:66c7
kube-system   vcn-native-ip-cni-gszjb                100.0.3.174   fd00:100:0:3:0:d4c0:26f3:8990
PRATIK_BOR@cloudshell:~ (ap-sydney-1)$
</code></pre>
<p>The output clearly shows both IP versions assigned to the same Pod.</p>
<p>Inside the container:</p>
<pre><code class="lang-plaintext">kubectl exec -it nginx-568f7cfd68-g4h2c  -- hostname -i
fd00:100:0:4:0:6ed9:2497:6dd8 100.0.5.102
</code></pre>
<p>Here <code>hostname -i</code> prints both the IPv6 and IPv4 addresses; running <code>ip addr</code> inside the container would likewise show both <code>inet</code> (IPv4) and <code>inet6</code> entries.</p>
<p>This confirms that dual-stack is not just enabled at the control plane level; it is active at the workload level.</p>
<h3 id="heading-pod-to-pod-traffic-what-actually-happens">Pod-to-Pod Traffic: What Actually Happens</h3>
<h3 id="heading-same-node-communication">Same Node Communication</h3>
<p>When two Pods run on the same node:</p>
<ul>
<li><p>Traffic stays local.</p>
</li>
<li><p>Linux networking handles routing internally.</p>
</li>
<li><p>No VCN-level routing occurs.</p>
</li>
</ul>
<p>Dual-stack does not change this behavior. IPv4 and IPv6 packets are both handled locally by the node’s networking stack.</p>
<p>Latency is minimal because packets never leave the host.</p>
<h3 id="heading-cross-node-communication">Cross-Node Communication</h3>
<p>When Pods are on different nodes, the behavior becomes more interesting.</p>
<p>In OCI’s VCN-native networking:</p>
<ol>
<li><p>The source Pod sends traffic.</p>
</li>
<li><p>The node routes it directly into the VCN.</p>
</li>
<li><p>The destination node receives it.</p>
</li>
<li><p>The packet is delivered to the target Pod.</p>
</li>
</ol>
<p>There is no overlay tunnel.<br />There is no encapsulation.<br />There is no extra hop.</p>
<p>For IPv6 traffic specifically:</p>
<ul>
<li><p>OCI routes IPv6 natively.</p>
</li>
<li><p>No NAT is required.</p>
</li>
<li><p>IPv6 packets travel directly using the assigned IPv6 CIDR.</p>
</li>
</ul>
<p>This creates a very clean routing model compared to many overlay-based CNI implementations.</p>
<h3 id="heading-external-traffic-flow-world-pod">External Traffic Flow (World → Pod)</h3>
<p>When exposing a workload using a Kubernetes LoadBalancer service, OCI provisions a native load balancer.</p>
<p>In a dual-stack subnet, that Load Balancer can have:</p>
<ul>
<li><p>Public IPv4</p>
</li>
<li><p>Public IPv6</p>
</li>
</ul>
<p>Traffic flow looks like this:</p>
<p>Client → OCI Load Balancer → Node → Pod</p>
<p>If the client connects over IPv6:</p>
<p>IPv6 Client → LB IPv6 → Pod IPv6</p>
<p>This enables true end-to-end IPv6 connectivity without translation.</p>
<p>That’s a significant architectural advantage.</p>
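<p>The flows above map directly onto Service configuration. A minimal sketch, assuming an nginx Deployment labeled <code>app: nginx</code> (the Service name and selector are illustrative, and OKE-specific load balancer annotations for shape and bandwidth are omitted):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: nginx-dual-stack
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack   # request both families when the cluster supports them
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>With dual-stack active, <code>.spec.clusterIPs</code> on the created Service should contain one address per IP family.</p>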
<h3 id="heading-pod-to-external-world-egress">Pod to External World (Egress)</h3>
<p>Outbound traffic behaves differently for IPv4 and IPv6.</p>
<h3 id="heading-ipv4-egress">IPv4 Egress</h3>
<ul>
<li><p>Requires NAT Gateway</p>
</li>
<li><p>Private IPv4 → Translated → Public IPv4</p>
</li>
</ul>
<h3 id="heading-ipv6-egress">IPv6 Egress</h3>
<ul>
<li><p>No NAT required (if using globally routable IPv6)</p>
</li>
<li><p>Direct routing possible</p>
</li>
</ul>
<p>IPv6 removes an entire NAT layer from the architecture.</p>
<p>This simplifies troubleshooting and reduces stateful dependencies in network design.</p>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Dual-stack Kubernetes is no longer experimental.</p>
<p>In OCI, it is stable, practical, and production-ready — provided that:</p>
<ul>
<li><p>Subnets are properly designed</p>
</li>
<li><p>IPv6 CIDRs are correctly allocated</p>
</li>
<li><p>Traffic flows are understood</p>
</li>
<li><p>Observability tools are updated</p>
</li>
</ul>
<p>IPv6 adoption is accelerating globally. Running dual-stack today prepares your Kubernetes platform for what’s coming tomorrow.</p>
]]></content:encoded></item><item><title><![CDATA[How to Export Custom RHEL 8/9/10 Images from Red Hat to OCI]]></title><description><![CDATA[Many enterprises standardize their Linux operating systems using golden images. Red Hat Image Builder provides a simple way to create customized RHEL images, while Oracle Cloud Infrastructure (OCI) allows importing these images as Custom Compute Imag...]]></description><link>https://blog.pratiknborkar.com/export-custom-rhel-8-9-10-image-to-oracle-cloud-oci</link><guid isPermaLink="true">https://blog.pratiknborkar.com/export-custom-rhel-8-9-10-image-to-oracle-cloud-oci</guid><category><![CDATA[redhat]]></category><category><![CDATA[OCI]]></category><category><![CDATA[migration]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Export]]></category><category><![CDATA[data migration]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Cloud infrastructure]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Tue, 13 Jan 2026 09:11:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768295527413/8ae52349-04cf-4515-8b20-5bbebbf75689.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Many enterprises standardize their Linux operating systems using golden images. Red Hat Image Builder provides a simple way to create customized RHEL images, while Oracle Cloud Infrastructure (OCI) allows importing these images as Custom Compute Images.</p>
<p>In this DIY article, you will learn how to:</p>
<ul>
<li><p>Build a custom RHEL 8/9/10 image using Red Hat Image Builder</p>
</li>
<li><p>Export the image in QCOW2 format</p>
</li>
<li><p>Upload it to OCI Object Storage</p>
</li>
<li><p>Import and use it as a custom image in OCI Compute</p>
</li>
</ul>
<p>This approach ensures consistent builds, faster provisioning, and compliance across cloud environments.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before starting, ensure you have:</p>
<ul>
<li><p>A valid Red Hat account with active RHEL subscription or developer access</p>
</li>
<li><p>Access to Red Hat Hybrid Cloud Console</p>
</li>
<li><p>Permissions to use Red Hat Image Builder</p>
</li>
<li><p>An OCI tenancy with access to:</p>
<ul>
<li><p>Object Storage</p>
</li>
<li><p>Compute → Custom Images</p>
</li>
</ul>
</li>
<li><p>(Optional) OCI CLI configured for automation</p>
</li>
</ul>
<h2 id="heading-step-1-log-in-to-red-hat-hybrid-cloud-console">Step 1: Log in to Red Hat Hybrid Cloud Console</h2>
<p>Open the Red Hat console:</p>
<p><a href="https://console.redhat.com/">https://console.redhat.com/</a></p>
<p>Log in with your Red Hat credentials and verify that your account has developer access or an active subscription. Without this, image creation options will not be visible.</p>
<h2 id="heading-step-2-open-image-builder-inventory-image-tool">Step 2: Open Image Builder (Inventory Image Tool)</h2>
<p>From the Red Hat Console dashboard:</p>
<ol>
<li><p>Navigate to Inventory</p>
</li>
<li><p>Select Image Builder</p>
</li>
</ol>
<p>Image Builder allows you to create cloud-ready RHEL images for multiple platforms, including OCI.</p>
<h2 id="heading-step-3-create-a-blueprint-for-rhel-8-9-10">Step 3: Create a Blueprint for RHEL 8 / 9 / 10</h2>
<p>In Image Builder:</p>
<ol>
<li><p>Click Create Blueprint</p>
</li>
<li><p>Select the required RHEL version:</p>
<ul>
<li><p>RHEL 8</p>
</li>
<li><p>RHEL 9</p>
</li>
<li><p>RHEL 10</p>
</li>
</ul>
</li>
<li><p>Choose architecture (x86_64 is recommended for OCI)</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294150701/35172a76-f05d-4cde-90ce-a1c9be417e84.png" alt class="image--center mx-auto" /></p>
<p>Customize the blueprint:</p>
<ul>
<li><p>Install required RPM packages</p>
</li>
<li><p>Configure users and SSH public keys</p>
</li>
<li><p>Enable repositories</p>
</li>
<li><p>Apply security hardening or baseline configurations</p>
</li>
</ul>
<p>This blueprint acts as your gold image definition.</p>
<h2 id="heading-step-4-build-the-custom-rhel-image">Step 4: Build the Custom RHEL Image</h2>
<p>After saving the blueprint:</p>
<ol>
<li><p>Click Build Image</p>
</li>
<li><p>Select QCOW2 as the image output format</p>
</li>
</ol>
<blockquote>
<p>QCOW2 is the recommended and supported format for OCI custom image imports.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294244143/9840df7d-6c1d-4148-8274-e8cb2e6c5fe4.png" alt class="image--center mx-auto" /></p>
<p>The build process will start and may take several minutes. Wait until the status shows Completed.</p>
<h2 id="heading-step-5-download-the-rhel-qcow2-image">Step 5: Download the RHEL QCOW2 Image</h2>
<p>Once the image build is complete:</p>
<ul>
<li><p>Download the generated QCOW2 image file</p>
</li>
<li><p>(Optional) Validate checksum or file integrity</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294274984/1281771b-a592-4414-aa9a-df2c9517954a.png" alt class="image--center mx-auto" /></p>
<p>Ensure sufficient disk space before downloading, as image sizes can be multiple GBs.</p>
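<p>Checksum validation can be done with <code>sha256sum</code>. A minimal sketch (the filename is a placeholder, and the first command merely fabricates a stand-in file so the snippet is runnable end to end; in practice, compare against the checksum published alongside the build):</p>
<pre><code class="lang-plaintext"># stand-in file for demonstration; use your real downloaded QCOW2 filename
printf 'demo-image-bytes' &gt; rhel9-golden.qcow2

# record the checksum, then verify the file against it
sha256sum rhel9-golden.qcow2 &gt; rhel9-golden.qcow2.sha256
sha256sum -c rhel9-golden.qcow2.sha256   # prints: rhel9-golden.qcow2: OK
</code></pre>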
<h2 id="heading-step-6-upload-the-image-to-oci-object-storage">Step 6: Upload the Image to OCI Object Storage</h2>
<p>Log in to the OCI Console:</p>
<ol>
<li><p>Navigate to Object Storage → Buckets</p>
</li>
<li><p>Select an existing bucket or create a new one (example: <code>RHEL-Images</code>)</p>
</li>
<li><p>Upload the downloaded QCOW2 image</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294393269/160d1094-f543-4ba4-87e7-0d4b1d8366d7.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p>Make sure the bucket is in the same region where you plan to import the custom image.</p>
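<p>If the OCI CLI is configured (listed as optional in the prerequisites), the upload can also be scripted instead of using the console. A sketch with placeholder names (the bucket, file, and object names are examples):</p>
<pre><code class="lang-plaintext">oci os object put \
  --bucket-name RHEL-Images \
  --file rhel9-golden.qcow2 \
  --name rhel9-golden.qcow2
</code></pre>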
<h2 id="heading-step-7-import-the-custom-rhel-image-into-oci">Step 7: Import the Custom RHEL Image into OCI</h2>
<p>After uploading the image:</p>
<ol>
<li><p>Go to Compute → Custom Images</p>
</li>
<li><p>Click Import Image</p>
</li>
<li><p>Select Object Storage as the source</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294447692/ac722315-53c9-46f9-980c-0d47ec496541.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Choose the bucket and QCOW2 image object</p>
</li>
<li><p>Set image type to QCOW2</p>
</li>
<li><p>Provide a descriptive name (example: <code>RHEL9-Golden-Image</code>)</p>
</li>
</ol>
<p>The import process may take several minutes depending on image size.</p>
<h2 id="heading-step-8-launch-an-oci-instance-using-the-custom-image">Step 8: Launch an OCI Instance Using the Custom Image</h2>
<p>Once the image import completes:</p>
<ol>
<li><p>Go to Compute → Instances</p>
</li>
<li><p>Click Create Instance</p>
</li>
<li><p>Choose Custom Image</p>
</li>
<li><p>Select your imported RHEL image</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768294496995/c7815a5d-8e21-4a0a-bc7b-caf516f11641.png" alt class="image--center mx-auto" /></p>
<p>Configure shape, networking, and SSH access, then launch the instance.</p>
<h2 id="heading-post-deployment-validation">Post-Deployment Validation</h2>
<p>After the instance starts:</p>
<ul>
<li><p>Confirm OS version:</p>
<pre><code class="lang-plaintext">cat /etc/redhat-release
</code></pre>
</li>
<li><p>Verify installed packages and custom configurations</p>
</li>
<li><p>Validate SSH, networking, and application readiness</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768295030725/90b84ee0-c994-4754-ad31-bd98f1fbb3ac.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-benefits-of-using-custom-rhel-images-on-oci">Benefits of Using Custom RHEL Images on OCI</h2>
<ul>
<li><p>Faster VM provisioning</p>
</li>
<li><p>Consistent OS builds across environments</p>
</li>
<li><p>Improved security and compliance</p>
</li>
<li><p>Reduced configuration drift</p>
</li>
<li><p>Enterprise-ready golden image strategy</p>
</li>
<li><p>In-place OS conversion during migration.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Exporting a custom RHEL 8/9/10 image from Red Hat Image Builder and importing it into Oracle Cloud Infrastructure is a reliable way to standardize Linux deployments. By using QCOW2 images and OCI Custom Images, organizations can achieve scalable, secure, and repeatable infrastructure builds.</p>
]]></content:encoded></item><item><title><![CDATA[ClamAV on Oracle Linux 9: Complete DIY Antivirus & Malware Protection Guide]]></title><description><![CDATA[Introduction
Malware protection on Linux servers is often overlooked, especially in enterprise environments where systems host critical applications, databases, and shared storage. While Linux is inherently secure, it is not immune to malware, ransom...]]></description><link>https://blog.pratiknborkar.com/clamav-on-oracle-linux-9-complete-diy-antivirus-and-malware-protection-guide</link><guid isPermaLink="true">https://blog.pratiknborkar.com/clamav-on-oracle-linux-9-complete-diy-antivirus-and-malware-protection-guide</guid><category><![CDATA[cybersecurity]]></category><category><![CDATA[linux-security]]></category><category><![CDATA[malwareprotection]]></category><category><![CDATA[AntiVirus]]></category><category><![CDATA[Oracle Linux]]></category><category><![CDATA[Security]]></category><category><![CDATA[cloud security]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[OCI]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Wed, 17 Dec 2025 14:45:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765982404663/4b7f0a01-92be-4962-8569-42c6993b781b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Malware protection on Linux servers is often overlooked, especially in enterprise environments where systems host critical applications, databases, and shared storage. While Linux is inherently secure, it is not immune to malware, ransomware, or infected files introduced through users, shared mounts, or file transfers.</p>
<p>ClamAV is a widely adopted open-source antivirus engine designed for Linux and Unix-based systems. This article provides a step-by-step, production-ready guide to installing and configuring ClamAV on Oracle Linux 9 (OL9) using the upstream RPM, with on-access scanning, SELinux support, automatic updates, quarantine handling, and scheduled scans.</p>
<h3 id="heading-what-is-clamav-and-why-use-it">What is ClamAV and Why Use It?</h3>
<p>ClamAV is an open-source antivirus toolkit primarily used on Linux systems to:</p>
<ul>
<li><p>Detect malware, trojans, and ransomware</p>
</li>
<li><p>Scan uploaded or shared files</p>
</li>
<li><p>Provide on-access (real-time) malware protection</p>
</li>
<li><p>Protect Linux servers that interact with Windows clients</p>
</li>
</ul>
<h3 id="heading-key-features-of-clamav">Key Features of ClamAV</h3>
<ul>
<li><p>Signature-based malware detection</p>
</li>
<li><p>On-demand and on-access scanning</p>
</li>
<li><p>Automatic virus definition updates (<code>freshclam</code>)</p>
</li>
<li><p>Lightweight and server-friendly</p>
</li>
<li><p>SELinux-compatible</p>
</li>
<li><p>CLI-driven (ideal for automation and cron jobs)</p>
</li>
</ul>
<p>In enterprise Linux environments, ClamAV is commonly deployed to:</p>
<ul>
<li><p>Scan <code>/home</code> directories</p>
</li>
<li><p>Protect shared mounts</p>
</li>
<li><p>Meet compliance and security baseline requirements</p>
</li>
<li><p>Prevent malware propagation across platforms</p>
</li>
</ul>
<h3 id="heading-architecture-overview">Architecture Overview</h3>
<p>A standard ClamAV deployment consists of:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>clamd</td><td>Main scanning daemon</td></tr>
<tr>
<td>clamonacc</td><td>On-access (real-time) scanner</td></tr>
<tr>
<td>freshclam</td><td>Virus database updater</td></tr>
<tr>
<td>clamscan</td><td>On-demand manual scanner</td></tr>
</tbody>
</table>
</div><p>This guide configures all components correctly for OL9 with SELinux enforcing.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before starting, verify:</p>
<ul>
<li><p>Oracle Linux 9.x</p>
</li>
<li><p>SELinux in Enforcing mode</p>
</li>
<li><p>Root or sudo access</p>
</li>
<li><p>Internet connectivity for virus updates</p>
</li>
</ul>
<p>Verify:</p>
<pre><code class="lang-plaintext">cat /etc/os-release
getenforce
</code></pre>
<h3 id="heading-step-1-download-and-install-clamav-upstream-rpm">Step 1: Download and Install ClamAV (Upstream RPM)</h3>
<p>Oracle Linux repositories often lag behind upstream ClamAV releases. For security and stability, install the official upstream RPM.</p>
<pre><code class="lang-plaintext">cd /tmp
wget https://www.clamav.net/downloads/production/clamav-1.5.1.linux.x86_64.rpm
sudo dnf install -y ./clamav-1.5.1.linux.x86_64.rpm
</code></pre>
<p>Verify installation:</p>
<pre><code class="lang-plaintext">rpm -q clamav
</code></pre>
<h3 id="heading-step-2-create-required-system-users"><strong>Step 2: Create Required System Users</strong></h3>
<p>The upstream RPM does not create service users automatically.<br />ClamAV separates responsibilities using two system accounts:</p>
<ul>
<li><p><code>clamscan</code> → scanning daemon</p>
</li>
<li><p><code>clamupdate</code> → virus database updates</p>
</li>
</ul>
<p>Create users:</p>
<pre><code class="lang-plaintext">sudo useradd -r -s /sbin/nologin -d /var/lib/clamav clamscan
sudo useradd -r -s /sbin/nologin -d /var/lib/clamav clamupdate
</code></pre>
<p>Verify:</p>
<pre><code class="lang-plaintext">id clamscan
id clamupdate
</code></pre>
<h3 id="heading-step-3-create-required-directories">Step 3: Create Required Directories</h3>
<p>Create directories for:</p>
<ul>
<li><p>Virus databases</p>
</li>
<li><p>Runtime sockets</p>
</li>
<li><p>PID files</p>
</li>
<li><p>Logs</p>
</li>
</ul>
<pre><code class="lang-plaintext">sudo mkdir -p \
  /usr/local/share/clamav \
  /var/log/clamav \
  /run/clamd \
  /run/clamav
</code></pre>
<p>Set ownership:</p>
<pre><code class="lang-plaintext">sudo chown -R clamupdate:clamupdate /usr/local/share/clamav /run/clamav
sudo chown -R clamscan:clamscan /run/clamd
</code></pre>
<p>Set permissions:</p>
<pre><code class="lang-plaintext">sudo chmod 755 /var/log/clamav /run/clamd
</code></pre>
<blockquote>
<p>⚠️ Incorrect permissions are the most common reason ClamAV fails to start.</p>
</blockquote>
<h3 id="heading-step-4-create-log-files-manually-critical">Step 4: Create Log Files Manually (CRITICAL)</h3>
<p>ClamAV will not create log files automatically.</p>
<pre><code class="lang-plaintext">sudo touch /var/log/clamav/clamd.log
sudo touch /var/log/clamav/freshclam.log
</code></pre>
<p>Set ownership:</p>
<pre><code class="lang-plaintext">sudo chown clamscan:clamscan /var/log/clamav/clamd.log
sudo chown clamupdate:clamupdate /var/log/clamav/freshclam.log
</code></pre>
<p>Set permissions:</p>
<pre><code class="lang-plaintext">sudo chmod 640 /var/log/clamav/*.log
sudo chmod 755 /var/log/clamav
</code></pre>
<h3 id="heading-step-5-configure-freshclam-virus-updates">Step 5: Configure freshclam (Virus Updates)</h3>
<p>Copy the sample configuration:</p>
<pre><code class="lang-plaintext">sudo cp /usr/local/etc/freshclam.conf.sample /usr/local/etc/freshclam.conf
sudo vi /usr/local/etc/freshclam.conf
</code></pre>
<p>Use only the following content:</p>
<pre><code class="lang-plaintext">DatabaseDirectory /usr/local/share/clamav
UpdateLogFile /var/log/clamav/freshclam.log
PidFile /run/clamav/freshclam.pid
DatabaseMirror database.clamav.net
</code></pre>
<p>Important:<br />Remove the <code>Example</code> line completely.</p>
<p>Set ownership:</p>
<pre><code class="lang-plaintext">sudo chown clamupdate:clamupdate /usr/local/etc/freshclam.conf
</code></pre>
<h3 id="heading-step-6-configure-clamd-scanning-daemon">Step 6: Configure clamd (Scanning Daemon)</h3>
<p>Copy sample file:</p>
<pre><code class="lang-plaintext">sudo cp /usr/local/etc/clamd.conf.sample /usr/local/etc/clamd.conf
sudo vi /usr/local/etc/clamd.conf
</code></pre>
<h3 id="heading-minimal-production-configuration">Minimal Production Configuration</h3>
<pre><code class="lang-plaintext">DatabaseDirectory /usr/local/share/clamav

LogFile /var/log/clamav/clamd.log
LogTime yes

LocalSocket /run/clamd/clamd.sock
LocalSocketMode 666
PidFile /run/clamd/clamd.pid

User root
Foreground yes

# On-access scanning
OnAccessIncludePath /home
OnAccessExcludeRootUID yes
OnAccessPrevention yes

# Mandatory exclusions
OnAccessExcludePath ^/proc
OnAccessExcludePath ^/sys
OnAccessExcludePath ^/run
OnAccessExcludePath ^/dev
OnAccessExcludePath ^/var/lib
OnAccessExcludePath ^/var/log
OnAccessExcludePath ^/tmp

# Performance
MaxQueue 200
MaxThreads 20
OnAccessMaxThreads 10
</code></pre>
<p>Remove the <code>Example</code> line.</p>
<p>Set ownership:</p>
<pre><code class="lang-plaintext">sudo chown clamscan:clamscan /usr/local/etc/clamd.conf
</code></pre>
<h3 id="heading-step-7-selinux-configuration-mandatory">Step 7: SELinux Configuration (MANDATORY)</h3>
<p>Allow antivirus scanning in SELinux enforcing mode:</p>
<pre><code class="lang-plaintext">sudo restorecon -Rv /var/log/clamav /run/clamd
sudo setsebool -P antivirus_can_scan_system 1
</code></pre>
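<p>To confirm the boolean took effect, it can be read back (expected state is <code>on</code>):</p>
<pre><code class="lang-plaintext">getsebool antivirus_can_scan_system
</code></pre>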
<h3 id="heading-step-8-download-virus-definitions-first-time">Step 8: Download Virus Definitions (First Time)</h3>
<pre><code class="lang-plaintext">sudo -u clamupdate /usr/local/bin/freshclam
</code></pre>
<p>Verify:</p>
<pre><code class="lang-plaintext">ls -lh /usr/local/share/clamav

total 108M
-rw-r--r--. 1 clamupdate clamupdate 8.9K Dec 17 21:18 bytecode-339.cvd.sign
-rw-r--r--. 1 clamupdate clamupdate 276K Dec 17 21:18 bytecode.cvd
-rw-r--r--. 1 clamupdate clamupdate 8.9K Dec 17 21:18 daily-27853.cvd.sign
-rw-r--r--. 1 clamupdate clamupdate  23M Dec 17 21:17 daily.cvd
-rw-r--r--. 1 clamupdate clamupdate   90 Dec 17 21:17 freshclam.dat
-rw-r--r--. 1 clamupdate clamupdate 8.9K Dec 17 21:18 main-63.cvd.sign
-rw-r--r--. 1 clamupdate clamupdate  85M Dec 17 21:18 main.cvd
</code></pre>
<p>You should see:</p>
<ul>
<li><p><code>daily.cvd</code></p>
</li>
<li><p><code>main.cvd</code></p>
</li>
<li><p><code>bytecode.cvd</code></p>
</li>
</ul>
<h3 id="heading-step-9-create-systemd-service-files">Step 9: Create systemd Service Files</h3>
<p>clamd.service</p>
<pre><code class="lang-plaintext">sudo vi /etc/systemd/system/clamd.service
</code></pre>
<pre><code class="lang-plaintext">[Unit]
Description=ClamAV Daemon
After=network.target

[Service]
Type=simple
User=clamscan
Group=clamscan
ExecStart=/usr/local/sbin/clamd --config-file=/usr/local/etc/clamd.conf --foreground
Restart=on-failure
RestartSec=10
RuntimeDirectory=clamd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
</code></pre>
<p>clamav-freshclam.service</p>
<pre><code class="lang-plaintext">sudo vi /etc/systemd/system/clamav-freshclam.service
</code></pre>
<pre><code class="lang-plaintext">[Unit]
Description=ClamAV Virus Database Updater
After=network.target

[Service]
Type=oneshot
User=clamupdate
Group=clamupdate
ExecStart=/usr/local/bin/freshclam

[Install]
WantedBy=multi-user.target
</code></pre>
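<p>The unit above is <code>Type=oneshot</code>, so it updates definitions only when started. To refresh signatures on a schedule, a systemd timer can trigger it periodically; a minimal sketch (the 4-hour interval is an example, adjust to taste):</p>
<pre><code class="lang-plaintext">sudo vi /etc/systemd/system/clamav-freshclam.timer
</code></pre>
<pre><code class="lang-plaintext">[Unit]
Description=Periodic ClamAV virus database update

[Timer]
OnBootSec=15min
OnUnitActiveSec=4h

[Install]
WantedBy=timers.target
</code></pre>
<p>Enable it with <code>sudo systemctl enable --now clamav-freshclam.timer</code>.</p>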
<h3 id="heading-clamonaccservice-on-access-scanner">Clamonacc.service (On-Access Scanner)</h3>
<pre><code class="lang-plaintext">sudo vi /etc/systemd/system/clamonacc.service
</code></pre>
<pre><code class="lang-plaintext">[Unit]
Description=ClamAV On-Access Scanner
After=clamd.service
Requires=clamd.service

[Service]
Type=simple
ExecStart=/usr/local/sbin/clamonacc --foreground --fdpass
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
</code></pre>
<h3 id="heading-step-10-enable-and-start-services">Step 10: Enable and Start Services</h3>
<pre><code class="lang-plaintext">sudo systemctl daemon-reexec
sudo systemctl daemon-reload

sudo systemctl enable clamd clamav-freshclam clamonacc
sudo systemctl start clamd
sudo systemctl start clamav-freshclam
sudo systemctl start clamonacc
</code></pre>
<h3 id="heading-step-11-verify-services">Step 11: Verify Services</h3>
<pre><code class="lang-plaintext">systemctl is-active clamd
systemctl is-active clamonacc
systemctl status clamav-freshclam
</code></pre>
<p>Expected:</p>
<ul>
<li><p><code>clamd</code> → active</p>
</li>
<li><p><code>clamonacc</code> → active</p>
</li>
<li><p><code>freshclam</code> → inactive (0/SUCCESS)</p>
</li>
</ul>
<h3 id="heading-step-12-malware-validation-eicar-test">Step 12: Malware Validation (EICAR Test)</h3>
<pre><code class="lang-plaintext">[opc@#### ~]$ wget https://secure.eicar.org/eicar_com.zip
--2025-12-17 22:27:37--  https://secure.eicar.org/eicar_com.zip
Resolving secure.eicar.org (secure.eicar.org)... 89.238.73.97, 2a00:1828:1000:2497::2
Connecting to secure.eicar.org (secure.eicar.org)|89.238.73.97|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 184 [application/zip]
Saving to: ‘eicar_com.zip’

eicar_com.zip                    100%[=========================================================&gt;]     184  --.-KB/s    in 0s

2025-12-17 22:27:38 (3.26 MB/s) - ‘eicar_com.zip’ saved [184/184]

[opc@#### ~]$ unzip eicar_com.zip
error:  cannot open zipfile [ eicar_com.zip ]
        Operation not permitted
unzip:  cannot find or open eicar_com.zip, eicar_com.zip.zip or eicar_com.zip.ZIP.
[opc@OHS ~]$
</code></pre>
<p>Expected behavior:</p>
<ul>
<li><p>Root can read the file (expected)</p>
</li>
<li><p>Non-root users are blocked</p>
</li>
<li><p>Detection logged in <code>clamd.log</code></p>
</li>
</ul>
<p>Check logs:</p>
<pre><code class="lang-plaintext">[opc@#### ~]$ journalctl -u clamonacc | tail
Dec 17 22:04:10 OHS clamonacc[4297]: ERROR: ClamClient: Could not connect to clamd, Could not connect to server
Dec 17 22:04:10 OHS clamonacc[4297]: ERROR: Clamonacc: daemon is local, but a connection could not be established
Dec 17 22:04:10 OHS systemd[1]: clamonacc.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 17 22:04:10 OHS systemd[1]: clamonacc.service: Failed with result 'exit-code'.
Dec 17 22:04:15 OHS systemd[1]: clamonacc.service: Scheduled restart job, restart counter is at 3.
Dec 17 22:04:15 OHS systemd[1]: Stopped ClamAV On-Access Scanner.
Dec 17 22:04:15 OHS systemd[1]: Started ClamAV On-Access Scanner.
Dec 17 22:04:16 OHS clamonacc[4726]: ClamInotif: watching '/home' (and all sub-directories)
Dec 17 22:05:44 OHS clamonacc[4726]: /home/opc/eicar_com.zip: Eicar-Test-Signature FOUND
Dec 17 22:27:45 OHS clamonacc[4726]: /home/opc/eicar_com.zip: Eicar-Test-Signature FOUND


[opc@#### ~]$ sudo tail -f /var/log/clamav/clamd.log
Wed Dec 17 22:04:11 2025 -&gt; SWF support enabled.
Wed Dec 17 22:04:11 2025 -&gt; HTML support enabled.
Wed Dec 17 22:04:11 2025 -&gt; XMLDOCS support enabled.
Wed Dec 17 22:04:11 2025 -&gt; HWP3 support enabled.
Wed Dec 17 22:04:11 2025 -&gt; OneNote support enabled.
Wed Dec 17 22:04:11 2025 -&gt; Self checking every 600 seconds.
Wed Dec 17 22:05:44 2025 -&gt; /home/opc/eicar_com.zip: Eicar-Test-Signature FOUND
Wed Dec 17 22:15:44 2025 -&gt; SelfCheck: Database status OK.
Wed Dec 17 22:25:44 2025 -&gt; SelfCheck: Database status OK.
Wed Dec 17 22:27:45 2025 -&gt; /home/opc/eicar_com.zip: Eicar-Test-Signature FOUND
</code></pre>
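<p>If you want to pull detections out of these logs programmatically (for alerting or reporting), the format is easy to parse. A minimal Python sketch, assuming the <code>clamonacc</code>/<code>clamd.log</code> line format shown above:</p>

```python
import re

# Matches lines like:
#   Dec 17 22:05:44 OHS clamonacc[4726]: /home/opc/eicar_com.zip: Eicar-Test-Signature FOUND
# and clamd.log lines ending in "FOUND".
DETECTION = re.compile(r"(?P<path>/\S+): (?P<signature>\S+) FOUND$")

def find_detections(log_lines):
    """Return (path, signature) tuples for every detection line."""
    hits = []
    for line in log_lines:
        m = DETECTION.search(line.strip())
        if m:
            hits.append((m.group("path"), m.group("signature")))
    return hits
```

Feeding it the output of <code>journalctl -u clamonacc</code> or <code>clamd.log</code> yields one tuple per detection, which is convenient to forward to a monitoring system.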
<h3 id="heading-step-13-quarantine-and-scheduled-scans">Step 13: Quarantine and Scheduled Scans</h3>
<p>Create quarantine directory:</p>
<pre><code class="lang-plaintext">sudo mkdir -p /var/quarantine/clamav
sudo chmod 700 /var/quarantine/clamav
</code></pre>
<p>Daily scan example:</p>
<pre><code class="lang-plaintext">/usr/local/bin/clamscan -r --infected \
  --move=/var/quarantine/clamav \
  --log=/var/log/clamav/daily_scan.log \
  /home /Data
</code></pre>
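<p><code>clamscan --move</code> handles quarantining for you. If you ever need the same behavior from a custom cleanup script, the core logic is just a permission-restricted move — a sketch with hypothetical paths, for illustration only:</p>

```python
import os
import shutil

def quarantine(infected_paths, quarantine_dir):
    """Move flagged files into a restricted quarantine directory."""
    os.makedirs(quarantine_dir, exist_ok=True)
    os.chmod(quarantine_dir, 0o700)  # owner-only, matching the chmod 700 above
    moved = []
    for path in infected_paths:
        dest = os.path.join(quarantine_dir, os.path.basename(path))
        shutil.move(path, dest)
        moved.append(dest)
    return moved
```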
<h3 id="heading-conclusion">Conclusion</h3>
<p>This DIY ClamAV setup provides enterprise-grade malware protection on Oracle Linux 9, including:</p>
<ul>
<li><p>Real-time scanning</p>
</li>
<li><p>SELinux enforcement</p>
</li>
<li><p>Automated updates</p>
</li>
<li><p>Safe quarantine handling</p>
</li>
</ul>
<p>When deployed correctly, ClamAV becomes a silent, reliable security layer that protects Linux servers without impacting performance.</p>
]]></content:encoded></item><item><title><![CDATA[How to Build an AWS Lambda Function to List EC2 Instances Across All Regions (Python 3.14 DIY Guide)]]></title><description><![CDATA[Managing EC2 instances across multiple AWS regions can quickly become complicated, especially in large or multi-account environments.If you’ve ever wondered:

“How can I get a list of all EC2 instances across all AWS regions?”

“How do I build a Lamb...]]></description><link>https://blog.pratiknborkar.com/aws-lambda-python-3-14-list-all-ec2-instances-across-regions</link><guid isPermaLink="true">https://blog.pratiknborkar.com/aws-lambda-python-3-14-list-all-ec2-instances-across-regions</guid><category><![CDATA[#python314]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[triggers]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[multi-cloud]]></category><category><![CDATA[Cloud Governance]]></category><category><![CDATA[Cloud infrastructure]]></category><category><![CDATA[cloud architecture]]></category><category><![CDATA[awstutorial]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Thu, 11 Dec 2025 06:11:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765433464949/4a550f2e-009e-4e89-a278-5c2be6813d8a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing EC2 instances across multiple AWS regions can quickly become complicated, especially in large or multi-account environments.<br />If you’ve ever wondered:</p>
<ul>
<li><p><em>“How can I get a list of all EC2 instances across all AWS regions?”</em></p>
</li>
<li><p><em>“How do I build a Lambda function that scans every region safely?”</em></p>
</li>
<li><p><em>“Why do I get AuthFailure errors when calling DescribeInstances?”</em></p>
</li>
</ul>
<p>…then this DIY AWS Lambda tutorial is exactly what you need.</p>
<p>In this article, we will walk through creating a fully working Python 3.14 AWS Lambda function that scans every AWS region, safely skips restricted regions, and returns a clean JSON list of all EC2 instances.</p>
<p>This guide is written for operations engineers, cloud admins, DevOps teams, and AWS learners who want practical, hands-on scenarios.</p>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before you start, make sure you have:</p>
<ul>
<li><p>An AWS account</p>
</li>
<li><p>IAM permissions to create and run Lambda functions</p>
</li>
<li><p>Basic knowledge of Python and AWS Console</p>
</li>
<li><p>Access to CloudWatch Logs</p>
</li>
</ul>
<h3 id="heading-step-1-create-the-lambda-function">Step 1: Create the Lambda Function</h3>
<ol>
<li><p>Go to AWS Console → Lambda</p>
</li>
<li><p>Click Create Function</p>
</li>
<li><p>Choose:</p>
<ul>
<li><p>Author from scratch</p>
</li>
<li><p>Runtime: Python 3.14</p>
</li>
<li><p>Architecture: x86_64 or ARM64</p>
</li>
</ul>
</li>
<li><p>Click Create Function</p>
</li>
</ol>
<h3 id="heading-step-2-add-required-iam-permissions">Step 2: Add Required IAM Permissions</h3>
<p>Your Lambda role must include these permissions:</p>
<pre><code class="lang-plaintext">{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions"
      ],
      "Resource": "*"
    }
  ]
}
</code></pre>
<p>Additionally, CloudWatch logging permissions:</p>
<pre><code class="lang-plaintext">{
  "Effect": "Allow",
  "Action": [
    "logs:CreateLogGroup",
    "logs:CreateLogStream",
    "logs:PutLogEvents"
  ],
  "Resource": "*"
}
</code></pre>
<p>These policies allow Lambda to read EC2 information and write logs.</p>
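<p>The two snippets above can live as separate statements in one policy document. A small, purely illustrative sketch that assembles and prints the combined JSON:</p>

```python
import json

# Combined policy: EC2 read access plus CloudWatch logging.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances", "ec2:DescribeRegions"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```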
<h3 id="heading-step-3-paste-the-python-314-ec2-scanning-lambda-code">Step 3: Paste the Python 3.14 EC2-Scanning Lambda Code</h3>
<p><strong>This version safely handles restricted regions, preventing common errors like:</strong></p>
<p><code>AuthFailure: AWS was not able to validate the provided access credentials</code> and <code>UnauthorizedOperation</code></p>
<p><strong>Fully Working Python 3.14 Code</strong></p>
<pre><code class="lang-python">import json
import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    ec2list = []
    ec2 = boto3.client('ec2')

    # Get all AWS regions
    regions = ec2.describe_regions(AllRegions=True).get('Regions', [])

    for region in regions:
        reg = region['RegionName']
        print(f"* Checking region -- {reg}")

        try:
            client = boto3.client('ec2', region_name=reg)
            paginator = client.get_paginator('describe_instances')

            for page in paginator.paginate():
                for reservation in page.get("Reservations", []):
                    for instance in reservation.get("Instances", []):
                        ec2list.append({
                            "InstanceId": instance.get("InstanceId"),
                            "Region": reg
                        })

        except ClientError as e:
            # Skip restricted or disabled regions
            if "AuthFailure" in str(e):
                print(f"Skipping region {reg}: Not enabled for this account.")
                continue
            else:
                print(f"Error in region {reg}: {e}")
                continue

    return {
        "statusCode": 200,
        "body": json.dumps(ec2list)
    }
</code></pre>
<p>With pagination and restricted-region handling built in, this is a safe, production-ready pattern for multi-region EC2 discovery on Python 3.14.</p>
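<p>The flattening of <code>Reservations</code> → <code>Instances</code> is easy to unit-test in isolation. Here is the same logic as a pure function, with the boto3 calls stubbed out:</p>

```python
def flatten_instances(pages, region):
    """Flatten describe_instances pages into [{InstanceId, Region}, ...]."""
    result = []
    for page in pages:
        for reservation in page.get("Reservations", []):
            for instance in reservation.get("Instances", []):
                result.append({
                    "InstanceId": instance.get("InstanceId"),
                    "Region": region,
                })
    return result
```

Because the function takes plain dictionaries, you can verify the shape of the output without touching AWS at all.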
<h3 id="heading-step-4-test-the-lambda-function">Step 4: Test the Lambda Function</h3>
<ol>
<li><p>Click Test</p>
</li>
<li><p>Choose Create Test Event</p>
</li>
<li><p>Use this simple test JSON:</p>
</li>
</ol>
<pre><code class="lang-plaintext">{}
</code></pre>
<ol start="4">
<li>Run the test.</li>
</ol>
<p>You will see logs such as:</p>
<pre><code class="lang-plaintext">Test Event Name
hello-world

Response
{
  "statusCode": 200,
  "body": "[]"
}

Function Logs
START RequestId: c036f21d-83e5-4fa7-b26c-5b286a55ac74 Version: $LATEST
* Checking region -- eu-north-1
* Checking region -- eu-west-3
* Checking region -- eu-west-2
* Checking region -- eu-west-1
* Checking region -- ap-northeast-3
* Checking region -- ap-northeast-2
* Checking region -- me-south-1
</code></pre>
<p>If any instances are running, the returned body will contain entries like (the code above collects <code>InstanceId</code> and <code>Region</code>):</p>
<pre><code class="lang-plaintext">[
  {
    "InstanceId": "i-1234567890abcd",
    "Region": "us-east-1"
  }
]
</code></pre>
<p>If your account has no instances, it will return:</p>
<pre><code class="lang-plaintext">[]
</code></pre>
<h3 id="heading-troubleshooting">Troubleshooting</h3>
<p><strong>Error: AuthFailure</strong></p>
<p>This means the region is not enabled for your AWS account.<br />The provided code already skips these regions safely.</p>
<p><strong>Error: UnauthorizedOperation</strong></p>
<p>You are missing IAM permissions.<br />Add:</p>
<pre><code class="lang-plaintext">ec2:DescribeInstances
ec2:DescribeRegions
</code></pre>
<h3 id="heading-timeout-errors">Timeout Errors</h3>
<p>Increase Lambda timeout to 30–60 seconds:</p>
<p>Lambda → Configuration → General → Edit → Timeout</p>
]]></content:encoded></item><item><title><![CDATA[Full Stack DR for OKE: Complete Guide to Backup, Image Replication, and Disaster Recovery]]></title><description><![CDATA[Disaster Recovery (DR) is a critical requirement for Kubernetes workloads running on Oracle Container Engine for Kubernetes (OKE). Oracle Full Stack DR provides an automated and reliable mechanism to protect clusters, replicate required artifacts, an...]]></description><link>https://blog.pratiknborkar.com/oracle-cloud-full-stack-dr-for-kubernetes</link><guid isPermaLink="true">https://blog.pratiknborkar.com/oracle-cloud-full-stack-dr-for-kubernetes</guid><category><![CDATA[Disaster recovery]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[#kubernetes #container ]]></category><category><![CDATA[OCI]]></category><category><![CDATA[FSDR]]></category><category><![CDATA[Oracle Cloud]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Wed, 03 Dec 2025 10:38:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764758165132/147adc9d-d1c7-4eb9-8380-e6849374f326.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Disaster Recovery (DR) is a critical requirement for Kubernetes workloads running on Oracle Container Engine for Kubernetes (OKE). Oracle Full Stack DR provides an automated and reliable mechanism to protect clusters, replicate required artifacts, and restore workloads in a standby region with minimal downtime. This article explains how Full Stack DR handles backup, image replication, scaling actions, and the overall workflow during a DR event.</p>
<h3 id="heading-how-full-stack-dr-creates-and-stores-backups">How Full Stack DR Creates and Stores Backups</h3>
<p>Full Stack DR uses OCI Container Instances to manage both backup and restore operations:</p>
<ul>
<li><p>During backup, a Container Instance is created in the primary region, and a backup container runs inside it to capture Kubernetes resources, images, and metadata.</p>
</li>
<li><p>The generated backup artifacts are stored securely in OCI Object Storage.</p>
</li>
<li><p>A log file is also created for every operation, enabling auditability and troubleshooting.</p>
</li>
</ul>
<p>For a Restore Operation:</p>
<ul>
<li>A Container Instance is created in the standby region, where a restore container retrieves the previously stored data from Object Storage and restores the cluster state.</li>
</ul>
<p>This isolated container-driven approach ensures that backup/restore tasks are efficient, secure, and do not interfere with your live cluster.</p>
<h3 id="heading-scheduling-backup-operations">Scheduling Backup Operations</h3>
<p>DR plans support flexible backup frequency options:</p>
<ul>
<li><p>Hourly</p>
</li>
<li><p>Daily</p>
</li>
<li><p>Weekly</p>
</li>
<li><p>Monthly</p>
</li>
</ul>
<p>These schedules enable businesses to design RPO/RTO values that match their compliance and operational needs.</p>
<p><img src="https://blogs.oracle.com/wp-content/uploads/sites/83/2025/10/FSDR-20250326-oke-sw-before.png" alt="Full Stack DR failover process for Kubernetes applications" /></p>
<h3 id="heading-image-replication-for-dr">Image Replication for DR</h3>
<p>Full Stack DR automatically replicates private OCIR images used by workloads in the primary OKE cluster:</p>
<ul>
<li><p>Only <em>private images</em> can be replicated.</p>
</li>
<li><p>Public Docker Hub or public OCIR images cannot be replicated as part of Full Stack DR.</p>
</li>
<li><p>Users may choose to supply a custom image replication secret stored in OCI Vault instead of the default.</p>
</li>
</ul>
<p>This ensures that all required images exist in the standby region, enabling applications to start smoothly after failover.</p>
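<p>Before a DR drill, it can be useful to check which of your workload images are even candidates for replication (OCIR-hosted) versus external public images. A rough sketch — the private/public distinction still has to be checked in the registry itself:</p>

```python
def is_ocir_image(image_ref):
    """True if the image is hosted in an OCIR registry (e.g. iad.ocir.io/...)."""
    registry = image_ref.split("/")[0]
    return registry.endswith(".ocir.io")

def replication_candidates(images):
    """Split image refs into OCIR-hosted (replicable if private) and external."""
    ocir = [i for i in images if is_ocir_image(i)]
    external = [i for i in images if not is_ocir_image(i)]
    return ocir, external
```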
<h3 id="heading-node-pool-scaling-during-dr">Node Pool Scaling During DR</h3>
<p>Full Stack DR allows you to define node pool scaling actions, which can automatically adjust the number of nodes during failover or switchover events.<br />You can scale up or down each node pool based on the expected workload load in the standby region.</p>
<h3 id="heading-instance-jump-host-api-access-instance">Instance Jump Host (API Access Instance)</h3>
<p>Full Stack DR requires access to the OKE cluster’s API endpoint to execute backup or recovery tasks.</p>
<ul>
<li><p>You can specify an existing Instance Jump Host, which must have network access to the cluster's public or private API endpoint.</p>
</li>
<li><p>If you do not provide an instance, Full Stack DR will automatically create an ephemeral Container Instance to handle API operations.</p>
</li>
<li><p>Using a dedicated jump host is recommended for stable and controlled access.</p>
</li>
</ul>
<p>This ensures consistent connectivity during DR operations, especially in private or restricted networks.</p>
<h3 id="heading-load-balancer-mapping">Load Balancer Mapping</h3>
<p>If your workloads use the OCI Native Ingress Controller, Full Stack DR requires mapping each primary region Load Balancer with a corresponding Load Balancer in the standby region.</p>
<p>This establishes consistent routing in the restored environment and ensures services remain accessible post-failover.</p>
<h3 id="heading-vault-mapping-for-secrets">Vault Mapping for Secrets</h3>
<p>If your applications store Kubernetes secrets in OCI Vault, Full Stack DR supports:</p>
<ul>
<li><p>Mapping primary-region vaults to standby-region vaults</p>
</li>
<li><p>Enabling vault replication</p>
</li>
<li><p>Or manually copying secrets to the standby vault</p>
</li>
</ul>
<p>This keeps all secret data synchronized so restored applications function without manual fixes.</p>
<h3 id="heading-namespace-backup-policies">Namespace Backup Policies</h3>
<p>You have flexible options for selecting namespaces during backup:</p>
<ul>
<li><p>Include all namespaces</p>
</li>
<li><p>Include specific namespaces</p>
</li>
<li><p>Exclude selected namespaces</p>
</li>
</ul>
<p>A maximum of 32 namespaces can be selected for fine-grained control.</p>
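<p>The 32-namespace cap is worth enforcing before you create the DR plan. A small illustrative helper that applies include/exclude rules and validates the limit:</p>

```python
MAX_NAMESPACES = 32

def select_namespaces(all_namespaces, include=None, exclude=None):
    """Apply include/exclude rules and enforce the 32-namespace limit."""
    if include is not None:
        selected = [ns for ns in all_namespaces if ns in set(include)]
    else:
        selected = list(all_namespaces)
    if exclude:
        selected = [ns for ns in selected if ns not in set(exclude)]
    if len(selected) > MAX_NAMESPACES:
        raise ValueError(f"{len(selected)} namespaces selected; maximum is {MAX_NAMESPACES}")
    return selected
```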
<h3 id="heading-conclusion">Conclusion</h3>
<p>Oracle Full Stack DR for OKE provides a comprehensive, container-driven approach to Kubernetes disaster recovery. By leveraging Container Instances, Object Storage, image replication, load balancer mapping, and vault replication, it ensures a smooth and predictable failover process.</p>
<p>From scheduled backups to automated scaling and image synchronization, Full Stack DR eliminates complexity and helps organizations maintain business continuity with confidence.</p>
]]></content:encoded></item><item><title><![CDATA[How to Migrate AWS ECR Images to OCI Container Registry (OCIR) Using Skopeo – Complete Guide]]></title><description><![CDATA[As organizations move toward multi-cloud or decide to shift workloads between cloud providers, migrating container images becomes a critical task. One common requirement is transferring images from AWS Elastic Container Registry (ECR) to Oracle Cloud...]]></description><link>https://blog.pratiknborkar.com/migrate-ecr-to-ocir-using-skopeo</link><guid isPermaLink="true">https://blog.pratiknborkar.com/migrate-ecr-to-ocir-using-skopeo</guid><category><![CDATA[ocir]]></category><category><![CDATA[skopeo]]></category><category><![CDATA[MigrateClouds]]></category><category><![CDATA[Container Registry]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Mon, 01 Dec 2025 02:55:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764557584895/cc442354-454c-4c7e-99de-a8f255a1bf38.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As organizations move toward multi-cloud or decide to shift workloads between cloud providers, migrating container images becomes a critical task. One common requirement is transferring images from AWS Elastic Container Registry (ECR) to Oracle Cloud Infrastructure Registry (OCIR).</p>
<p>In this article, I used Skopeo, a powerful image-management tool, to migrate entire repositories, including all tags, from ECR to OCIR. This guide also includes a practical script that uses Docker login + Skopeo for secure image transfer.</p>
<h3 id="heading-what-is-skopeo"><strong>What is Skopeo?</strong></h3>
<p>Skopeo is an open-source CLI tool that performs operations on container registries without needing Docker or Podman running in the background.</p>
<h3 id="heading-why-skopeo"><strong>Why Skopeo?</strong></h3>
<ul>
<li><p>No local image pull needed – copies images directly from registry to registry</p>
</li>
<li><p>Fast layer transfers – only missing layers are pushed</p>
</li>
<li><p>Daemonless – reduces overhead, avoids Docker dependency</p>
</li>
<li><p>Supports all major registries – AWS ECR, OCIR, GCR, DockerHub, Quay, GitHub Container Registry, etc.</p>
</li>
<li><p>Secure authentication – works with AWS IAM token and OCIR auth token</p>
</li>
</ul>
<p>Because of these advantages, Skopeo is an excellent fit for migrating images across cloud providers.</p>
<h3 id="heading-migrating-ecr-to-ocir-using-skopeo"><strong>Migrating ECR to OCIR Using Skopeo</strong></h3>
<p>Below is the method I used:<br />✔ Authenticate to AWS ECR using Docker<br />✔ Authenticate to OCIR using Docker<br />✔ Get all tags from the ECR repository<br />✔ Loop through each tag<br />✔ Copy from ECR → OCIR using Skopeo</p>
<p>This is a recommended and production-ready approach.</p>
<h3 id="heading-1-authenticate-to-aws-ecr">1. Authenticate to AWS ECR</h3>
<p>AWS ECR uses temporary auth tokens. We log in using Docker:</p>
<pre><code class="lang-plaintext">aws ecr get-login-password --region $AWS_REGION | \
docker login --username AWS --password-stdin $ECR_DOMAIN
</code></pre>
<ul>
<li><p><code>$AWS_REGION</code> – Example: <code>us-east-1</code></p>
</li>
<li><p><code>$ECR_DOMAIN</code> – Example: <a target="_blank" href="http://896077038029.dkr.ecr.us-east-1.amazonaws.com"><code>896077038029.dkr.ecr.us-east-1.amazonaws.com</code></a></p>
</li>
</ul>
<h3 id="heading-2-authenticate-to-ocir">2. Authenticate to OCIR</h3>
<p>OCIR requires an auth token, not your console password.</p>
<pre><code class="lang-plaintext">docker login $OCI_REGISTRY \
  --username "${OCI_NAMESPACE}/oracleidentitycloudservice/${OCIR_EMAIL}" \
  --password "${OCIR_TOKEN}"
</code></pre>
<p>Where:</p>
<ul>
<li><p><code>$OCI_REGISTRY</code> → <a target="_blank" href="http://ap-sydney-1.ocir.io"><code>ap-sydney-1.ocir.io</code></a></p>
</li>
<li><p><code>$OCI_NAMESPACE</code> → Your tenancy namespace</p>
</li>
<li><p><code>$OCIR_EMAIL</code> → Your login email</p>
</li>
<li><p><code>$OCIR_TOKEN</code> → OCIR auth token</p>
</li>
</ul>
<h3 id="heading-3-fetch-all-image-tags-from-the-ecr-repository">3. Fetch All Image Tags from the ECR Repository</h3>
<pre><code class="lang-plaintext">TAGS=$(aws ecr list-images \
    --region $AWS_REGION \
    --repository-name $ECR_REPO \
    --query 'imageIds[*].imageTag' \
    --output text)
</code></pre>
<p>This captures all tags so every version of your image is copied.</p>
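<p>If you prefer JSON output over <code>--output text</code>, the same tag list can be extracted in Python; untagged images come back without an <code>imageTag</code> key. A sketch, not tied to any SDK:</p>

```python
import json

def extract_tags(list_images_json):
    """Pull usable tags from `aws ecr list-images` JSON output, skipping untagged images."""
    data = json.loads(list_images_json)
    tags = []
    for image in data.get("imageIds", []):
        tag = image.get("imageTag")
        if tag and tag != "None":  # "None" appears when text output is re-parsed
            tags.append(tag)
    return tags
```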
<h3 id="heading-4-loop-through-tags-and-copy-images-using-skopeo">4. Loop Through Tags and Copy Images Using Skopeo</h3>
<p>The loop below copies every tag from ECR to OCIR:</p>
<pre><code class="lang-plaintext">for TAG in $TAGS; do
    if [ "$TAG" == "None" ]; then
        echo "Skipping untagged image in $ECR_REPO"
        continue
    fi

    SRC="docker://${ECR_DOMAIN}/${ECR_REPO}:${TAG}"
    DEST="docker://${OCI_REGISTRY}/${OCI_NAMESPACE}/${OCI_REPO}:${TAG}"

    echo "Copying $SRC → $DEST"

    skopeo copy --all \
      --src-creds "AWS:$(aws ecr get-login-password --region $AWS_REGION)" \
      --dest-creds "$OCIR_CREDS" \
      "$SRC" "$DEST"
done
</code></pre>
<h3 id="heading-what-happens-here">What happens here?</h3>
<ul>
<li><p><code>--src-creds</code> authenticates to AWS ECR using a freshly generated token</p>
</li>
<li><p><code>--dest-creds</code> authenticates to OCIR (<code>$OCIR_CREDS</code> is in <code>username:auth-token</code> form)</p>
</li>
<li><p>The <code>--all</code> flag ensures all image manifests and multi-arch data are copied</p>
</li>
<li><p>Skopeo moves layers directly registry → registry without local storage</p>
</li>
</ul>
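<p>The loop boils down to constructing one <code>skopeo copy</code> invocation per tag. Building the argv list in code makes the references easy to verify before running anything; credentials are omitted here and injected exactly as in the script above:</p>

```python
def skopeo_copy_args(ecr_domain, ecr_repo, oci_registry, oci_namespace, oci_repo, tag):
    """Build the skopeo argv for copying one tag from ECR to OCIR."""
    src = f"docker://{ecr_domain}/{ecr_repo}:{tag}"
    dest = f"docker://{oci_registry}/{oci_namespace}/{oci_repo}:{tag}"
    return ["skopeo", "copy", "--all", src, dest]
```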
<h3 id="heading-why-this-approach-is-effective">Why This Approach Is Effective</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benefit</td><td>Explanation</td></tr>
</thead>
<tbody>
<tr>
<td>Secure</td><td>No plaintext passwords; AWS token auto-rotates</td></tr>
<tr>
<td>Fast</td><td>Direct registry-to-registry transfer</td></tr>
<tr>
<td>Automated</td><td>Loop copies all image tags</td></tr>
<tr>
<td>No local pull/push</td><td>Saves disk space; ideal for CI/CD</td></tr>
<tr>
<td>Multi-arch compatible</td><td>Works with ARM + AMD images</td></tr>
</tbody>
</table>
</div><p>This method is commonly used in cloud migrations, DR planning, and multi-cloud container strategies.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Migrating container images from AWS ECR to OCI Container Registry (OCIR) is simple and efficient when using Skopeo. Its ability to copy images directly between registries, combined with Docker authentication, makes it ideal for large migrations where reliability, speed, and automation are essential.</p>
]]></content:encoded></item><item><title><![CDATA[OKE Cluster Autoscaler Explained: Installation, Scaling Test, and Best Practices]]></title><description><![CDATA[OKE Cluster Autoscaler is a Kubernetes component that automatically adjusts the number of nodes in your Oracle Kubernetes Engine (OKE) node pools based on pod demand.
The OKE Cluster Autoscaler supports two authentication methods: Instance Principals...]]></description><link>https://blog.pratiknborkar.com/oke-cluster-autoscaler-explained-installation-scaling-test-and-best-practices</link><guid isPermaLink="true">https://blog.pratiknborkar.com/oke-cluster-autoscaler-explained-installation-scaling-test-and-best-practices</guid><category><![CDATA[OCI]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[#kubernetes #container ]]></category><category><![CDATA[oke]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[containerization]]></category><category><![CDATA[observability]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Fri, 21 Nov 2025 20:42:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763759284348/1fbdfc86-7a22-4084-8dec-2c6b4208912f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OKE Cluster Autoscaler is a Kubernetes component that automatically adjusts the number of nodes in your Oracle Kubernetes Engine (OKE) node pools based on pod demand.</p>
<p>The OKE Cluster Autoscaler supports two authentication methods: Instance Principals and Workload Identity Principals. Instance Principal means the autoscaler uses the identity of the OCI compute instance it runs on, requiring no secrets and offering the simplest, most secure setup.</p>
<p>Workload Identity Principal uses the identity of a Kubernetes workload, allowing more granular access but requiring additional configuration.</p>
<p>In this article, we configured and tested the autoscaler using Instance Principals.<br />With this setup, your OKE cluster can automatically scale up and down based on workload demand, improving efficiency while reducing operational overhead.</p>
<p><strong>Scales Up (adds nodes) when:</strong></p>
<ul>
<li><p>There are pending pods that cannot be scheduled<br />  (e.g., not enough CPU/memory on existing nodes).</p>
</li>
<li><p>Autoscaler requests OCI to create new worker nodes in the node pool.</p>
</li>
</ul>
<p><strong>Scales Down (removes nodes) when:</strong></p>
<ul>
<li><p>Nodes are under-utilized for a long time<br />  (default = 10 minutes unless changed).</p>
</li>
<li><p>No critical pods are running on that node.</p>
</li>
<li><p>Autoscaler safely drains the node and deletes it from OCI.</p>
</li>
</ul>
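<p>Conceptually, the autoscaler's per-scan decision reduces to the two rules above. A deliberately simplified sketch — the real logic is far more involved, and the thresholds here just mirror the defaults mentioned in this article:</p>

```python
def scale_decision(pending_pods, node_utilizations, unneeded_minutes,
                   utilization_threshold=0.5, unneeded_time=10):
    """Return 'up', 'down', or 'hold' based on the simplified rules above."""
    if pending_pods > 0:
        return "up"  # unschedulable pods: add nodes
    underused = [u for u in node_utilizations if u < utilization_threshold]
    if underused and unneeded_minutes >= unneeded_time:
        return "down"  # node under-utilized long enough: drain and remove it
    return "hold"
```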
<h3 id="heading-step-1-create-dynamic-group-for-oke-nodes"><strong>STEP 1 — Create Dynamic Group for OKE Nodes</strong></h3>
<p>Go to: <strong>IAM → Dynamic Groups → Create Dynamic Group</strong></p>
<p>Example: <code>oke-nodepool-dg</code></p>
<p>Rule (recommended):</p>
<pre><code class="lang-plaintext">Any {instance.compartment.id = '&lt;COMPARTMENT_OCID&gt;'}
</code></pre>
<h3 id="heading-step-2-create-iam-policies"><strong>STEP 2 — Create IAM Policies</strong></h3>
<p>Go to: <strong>IAM → Policies → Create Policy</strong></p>
<p>Choose the compartment where your OKE node pool exists.</p>
<p>Paste the required policy:</p>
<pre><code class="lang-plaintext">Allow dynamic-group &lt;dynamic-group-name&gt; to manage cluster-node-pools in compartment &lt;compartment-name&gt;
Allow dynamic-group &lt;dynamic-group-name&gt; to manage instance-family in compartment &lt;compartment-name&gt;
Allow dynamic-group &lt;dynamic-group-name&gt; to use subnets in compartment &lt;compartment-name&gt;
Allow dynamic-group &lt;dynamic-group-name&gt; to read virtual-network-family in compartment &lt;compartment-name&gt;
Allow dynamic-group &lt;dynamic-group-name&gt; to use vnics in compartment &lt;compartment-name&gt;
Allow dynamic-group &lt;dynamic-group-name&gt; to inspect compartments in compartment &lt;compartment-name&gt;
</code></pre>
<p>This allows nodes to scale their own node pool.</p>
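<p>Since the six statements differ only in verb and resource type, they can be templated when you manage policies as code. A purely illustrative helper:</p>

```python
# (verb, resource) pairs from the policy block above.
GRANTS = [
    ("manage", "cluster-node-pools"),
    ("manage", "instance-family"),
    ("use", "subnets"),
    ("read", "virtual-network-family"),
    ("use", "vnics"),
    ("inspect", "compartments"),
]

def autoscaler_policies(dynamic_group, compartment):
    """Render the IAM policy statements for the autoscaler dynamic group."""
    return [
        f"Allow dynamic-group {dynamic_group} to {verb} {resource} in compartment {compartment}"
        for verb, resource in GRANTS
    ]
```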
<h3 id="heading-step-3-deploy-oke-cluster-autoscaler-using-add-on"><strong>STEP 3 — Deploy OKE Cluster Autoscaler using Add-On</strong></h3>
<ul>
<li><p>Go to <strong>OKE → Cluster → Add-Ons</strong></p>
</li>
<li><p>Select Cluster Autoscaler → Enable</p>
</li>
<li><p>Set Replicas = 3</p>
</li>
<li><p>Add node pool scaling config:</p>
<pre><code class="lang-plaintext">  &lt;min&gt;:&lt;max&gt;:&lt;NODEPOOL_OCID&gt;
</code></pre>
<p>  Example:</p>
<pre><code class="lang-plaintext">  3:5:ocid1.nodepool.oc1...
</code></pre>
</li>
<li><p>Save the Add-On</p>
</li>
<li><p>Verify pods:</p>
<pre><code class="lang-plaintext">  [root@jump-host ~]# kubectl -n kube-system get pods | grep autoscaler
  cluster-autoscaler-64bf849b78-hxqm7   1/1     Running   0          11d
  cluster-autoscaler-64bf849b78-jd8vs   1/1     Running   0          11d
  cluster-autoscaler-64bf849b78-t4fzh   1/1     Running   0          11d
</code></pre>
<p>  Deploy OKE Cluster Autoscaler (Manual YAML Method)</p>
</li>
<li><p>In a text editor, create a file called <code>cluster-autoscaler.yaml</code> with the following content:</p>
</li>
<li><pre><code class="lang-plaintext">  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    labels:
      k8s-addon: cluster-autoscaler.addons.k8s.io
      k8s-app: cluster-autoscaler
    name: cluster-autoscaler
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: cluster-autoscaler
    labels:
      k8s-addon: cluster-autoscaler.addons.k8s.io
      k8s-app: cluster-autoscaler
  rules:
    - apiGroups: [""]
      resources: ["events", "endpoints"]
      verbs: ["create", "patch"]
    - apiGroups: [""]
      resources: ["pods/eviction"]
      verbs: ["create"]
    - apiGroups: [""]
      resources: ["pods/status"]
      verbs: ["update"]
    - apiGroups: [""]
      resources: ["endpoints"]
      resourceNames: ["cluster-autoscaler"]
      verbs: ["get", "update"]
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["watch", "list", "get", "patch", "update"]
    - apiGroups: [""]
      resources:
        - "pods"
        - "services"
        - "replicationcontrollers"
        - "persistentvolumeclaims"
        - "persistentvolumes"
      verbs: ["watch", "list", "get"]
    - apiGroups: ["extensions"]
      resources: ["replicasets", "daemonsets"]
      verbs: ["watch", "list", "get"]
    - apiGroups: ["policy"]
      resources: ["poddisruptionbudgets"]
      verbs: ["watch", "list"]
    - apiGroups: ["apps"]
      resources: ["statefulsets", "replicasets", "daemonsets"]
      verbs: ["watch", "list", "get"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses", "csinodes", "volumeattachments"]
      verbs: ["watch", "list", "get"]
    - apiGroups: ["batch", "extensions"]
      resources: ["jobs"]
      verbs: ["get", "list", "watch", "patch"]
    - apiGroups: ["coordination.k8s.io"]
      resources: ["leases"]
      verbs: ["create"]
    - apiGroups: ["coordination.k8s.io"]
      resourceNames: ["cluster-autoscaler"]
      resources: ["leases"]
      verbs: ["get", "update"]
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["watch", "list"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["csidrivers", "csistoragecapacities"]
      verbs: ["watch", "list"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: cluster-autoscaler
    namespace: kube-system
    labels:
      k8s-addon: cluster-autoscaler.addons.k8s.io
      k8s-app: cluster-autoscaler
  rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["create","list","watch"]
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
      verbs: ["delete", "get", "update", "watch"]

  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: cluster-autoscaler
    labels:
      k8s-addon: cluster-autoscaler.addons.k8s.io
      k8s-app: cluster-autoscaler
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-autoscaler
  subjects:
    - kind: ServiceAccount
      name: cluster-autoscaler
      namespace: kube-system

  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: cluster-autoscaler
    namespace: kube-system
    labels:
      k8s-addon: cluster-autoscaler.addons.k8s.io
      k8s-app: cluster-autoscaler
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: cluster-autoscaler
  subjects:
    - kind: ServiceAccount
      name: cluster-autoscaler
      namespace: kube-system

  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cluster-autoscaler
    namespace: kube-system
    labels:
      app: cluster-autoscaler
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: cluster-autoscaler
    template:
      metadata:
        labels:
          app: cluster-autoscaler
        annotations:
          prometheus.io/scrape: 'true'
          prometheus.io/port: '8085'
      spec:
        serviceAccountName: cluster-autoscaler
        containers:
          - image: iad.ocir.io/oracle/oci-cluster-autoscaler:{{ image tag }}
            name: cluster-autoscaler
            resources:
              limits:
                cpu: 100m
                memory: 300Mi
              requests:
                cpu: 100m
                memory: 300Mi
            command:
              - ./cluster-autoscaler
              - --v=0
              - --stderrthreshold=info
              - --cloud-provider=oci
              - --scale-down-enabled=true
              - --scale-down-delay-after-add=10m
              - --scale-down-delay-after-delete=10s
              - --scale-down-delay-after-failure=3m
              - --scale-down-unneeded-time=10m
              - --scale-down-unready-time=20m
              - --scale-down-utilization-threshold=0.5
              - --scale-down-non-empty-candidates-count=30
              - --scale-down-candidates-pool-ratio=0.1
              - --scale-down-candidates-pool-min-count=50
              - --scan-interval=10s
              - --max-nodes-total=0
              - --cores-total=0:320000
              - --memory-total=0:6400000
              - --max-graceful-termination-sec=600
              - --max-total-unready-percentage=45
              - --ok-total-unready-count=3
              - --max-node-provision-time=15m
              - --nodes=3:5:{{ node pool ocid 1 }}
              - --emit-per-nodegroup-metrics=false
              - --estimator=binpacking
              - --expander=random
              - --ignore-daemonsets-utilization=false
              - --ignore-mirror-pods-utilization=false
              - --write-status-configmap=true
              - --status-config-map-name=cluster-autoscaler-status
              - --max-inactivity=10m
              - --max-failing-time=15m
              - --balance-similar-node-groups=false
              - --unremovable-node-recheck-timeout=5m
              - --expendable-pods-priority-cutoff=-10
              - --daemonset-eviction-for-empty-nodes=false
              - --daemonset-eviction-for-occupied-nodes=true
              - --cordon-node-before-terminating=false
              - --record-duplicated-events=false
              - --max-nodes-per-scaleup=1000
              - --new-pod-scale-up-delay=0s
              - --max-scale-down-parallelism=10
              - --max-bulk-soft-taint-count=10
              - --max-pod-eviction-time=2m0s
              - --debugging-snapshot-enabled=false
              - --enforce-node-group-min-size=false
              - --skip-nodes-with-system-pods=true
              - --skip-nodes-with-local-storage=true
              - --min-replica-count=0
              - --skip-nodes-with-custom-controller-pods=true
            imagePullPolicy: "Always"
</code></pre>
<p>  <strong>Important Note</strong></p>
<ul>
<li><p><code>--scale-down-unneeded-time=10m</code></p>
</li>
<li><p><code>--scan-interval=10s</code></p>
</li>
<li><p><code>--scale-down-delay-after-add=10m</code></p>
</li>
</ul>
</li>
</ul>
<p>    After a new node is added, the autoscaler waits 10 minutes before checking for scale-down.<br />    Then the node must remain unused for another 10 minutes before being removed.</p>
<ul>
<li><p>Worst-case scale-down time:</p>
<p>  10m (delay-after-add) + 10m (unneeded-time) = 20 minutes.</p>
</li>
</ul>
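<p>The arithmetic above can be expressed directly with the two flag values (a simplified model; it ignores <code>--scan-interval</code> granularity and node drain time):</p>

```shell
# Worst-case delay before a freshly added node can be removed:
DELAY_AFTER_ADD=10   # minutes, from --scale-down-delay-after-add=10m
UNNEEDED_TIME=10     # minutes, from --scale-down-unneeded-time=10m
echo "$((DELAY_AFTER_ADD + UNNEEDED_TIME)) minutes"   # prints: 20 minutes
```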
<h3 id="heading-step-4-deploy-test-workload-amp-trigger-autoscaler-scaling"><strong>STEP 4 — Deploy Test Workload &amp; Trigger Autoscaler Scaling</strong></h3>
<p><strong>1️⃣ Create the NGINX deployment</strong></p>
<p>Apply the following manifest:</p>
<pre><code class="lang-plaintext">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: syd.ocir.io/#######/ocir-repo:nginx-latest
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "500Mi"
        imagePullPolicy: Always
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f nginx.yaml
</code></pre>
<p><strong>2️⃣ Scale the deployment to create load</strong></p>
<p>Increase replicas to <strong>50</strong> to force scheduling pressure:</p>
<pre><code class="lang-plaintext">kubectl scale deploy nginx-deployment --replicas=50
</code></pre>
<p>This will leave pods in <strong>Pending</strong> state → the autoscaler should <strong>scale up</strong> your node pool.</p>
<p><strong>3️⃣ Observe Autoscaler events (scale up / scale down)</strong></p>
<p>Run this loop to review recent autoscaler scale-up/scale-down decisions from the logs:</p>
<pre><code class="lang-plaintext">for p in $(kubectl get pods -n kube-system -l app=cluster-autoscaler -o name); do
  echo "=== $p ==="
  kubectl logs -n kube-system $p --since=1h | grep -Ei 'scale[- ]?up|scale[- ]?down' || true
done
</code></pre>
<p><strong>Observe Autoscaler events (scale up)</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763755588133/51b77ea4-92ea-476e-a35f-8a7a32a5a426.png" alt class="image--center mx-auto" /></p>
<p><strong>Observe Autoscaler events (scale Down)</strong></p>
<p>Scale the deployment down to 3 replicas to free capacity, so the autoscaler can mark nodes as unneeded and remove them.</p>
<pre><code class="lang-plaintext">kubectl scale deploy nginx-deployment --replicas=3
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763756356187/a1db3270-60b1-491b-824b-8c37bba2bba2.png" alt class="image--center mx-auto" /></p>
<p>In this article, we configured the OKE Cluster Autoscaler, deployed test workloads, and validated scale-up/scale-down events. With this setup, your cluster will continuously maintain the right size while reducing manual operational tasks.</p>
]]></content:encoded></item><item><title><![CDATA[Migrating from Flannel to Cilium on Oracle Kubernetes Engine (OKE)]]></title><description><![CDATA[Cilium is an open-source, cloud-native networking, security, and observability platform built on top of eBPF (Extended Berkeley Packet Filter), a revolutionary Linux kernel technology that allows programs to run safely and efficiently inside the kerne...]]></description><link>https://blog.pratiknborkar.com/replace-flannel-with-cilium-on-oke</link><guid isPermaLink="true">https://blog.pratiknborkar.com/replace-flannel-with-cilium-on-oke</guid><category><![CDATA[cilium]]></category><category><![CDATA[OCI]]></category><category><![CDATA[kube-proxy]]></category><category><![CDATA[eBPF]]></category><category><![CDATA[calico]]></category><category><![CDATA[cni]]></category><category><![CDATA[CNCF]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Mon, 10 Nov 2025 08:15:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762762497550/216b7b68-21ef-4d60-a8b1-e401fff4bebd.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Cilium</strong> is an open-source, cloud-native networking, security, and observability platform built on top of <strong>eBPF (Extended Berkeley Packet Filter)</strong>, a revolutionary Linux kernel technology that allows programs to run safely and efficiently inside the kernel without changing its source code.</p>
<p>Unlike traditional CNIs (like Flannel, Calico, or Weave) that rely on <strong>iptables</strong> for packet forwarding and filtering, Cilium uses <strong>eBPF</strong> to dynamically inject logic into the kernel’s networking stack. This enables <strong>faster performance</strong>, <strong>fine-grained visibility</strong>, and <strong>deep network security enforcement</strong>, all without the limitations of legacy Linux networking mechanisms.</p>
<p>Cilium can completely replace:</p>
<ul>
<li><p><strong>Flannel</strong> as the CNI plugin for pod networking</p>
</li>
<li><p><strong>kube-proxy</strong> as the Kubernetes Service load balancer</p>
</li>
<li><p><strong>NetworkPolicy</strong> engines, by enforcing security policies using eBPF</p>
</li>
<li><p><strong>Monitoring tools</strong>, through its built-in observability layer, <strong>Hubble</strong></p>
</li>
</ul>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ul>
<li><p>Access to an <strong>OKE cluster</strong> (v1.27 or later recommended)</p>
</li>
<li><p><code>kubectl</code> configured on your jump server or OCI Cloud Shell</p>
</li>
<li><p><code>helm</code> installed (v3.8+)</p>
</li>
<li><p>Administrator privileges in the cluster</p>
</li>
</ul>
<h3 id="heading-step-0-disable-oke-addon-flannel">Step 0 – Disable OKE Addon Flannel</h3>
<p>OKE automatically provisions <strong>Flannel</strong> as the default CNI.<br />Before installing Cilium, you must disable this addon and remove the existing DaemonSet.</p>
<p>Run the following commands:</p>
<pre><code class="lang-plaintext"># Disable Flannel addon from OKE console

# Remove the Flannel DaemonSet
kubectl delete ds flannel -n kube-system
</code></pre>
<p>This ensures Flannel pods are deleted and networking configuration is ready for Cilium to take over.</p>
<h3 id="heading-step-1-add-the-cilium-helm-repository">Step 1 – Add the Cilium Helm Repository</h3>
<p>Add and update the official <strong>Cilium Helm chart repository</strong>:</p>
<pre><code class="lang-plaintext">helm repo add cilium https://helm.cilium.io/
helm repo update
</code></pre>
<p>You can verify available versions with:</p>
<pre><code class="lang-plaintext">helm search repo cilium/cilium --versions
</code></pre>
<h3 id="heading-step-2-install-cilium-cni">Step 2 – Install Cilium CNI</h3>
<p>Install the latest <strong>Cilium</strong> release (1.18.3 at the time of writing) using Helm:</p>
<pre><code class="lang-plaintext">helm install cilium cilium/cilium \
  --version 1.18.3 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set kubeProxyReplacementHealthzBindAddr="0.0.0.0:10256" \
  --set ipam.mode=kubernetes \
  --set ipv4.enabled=true \
  --set cluster.name=oke-cilium \
  --set k8sServiceHost=$(kubectl config view \
      --minify -o jsonpath='{.clusters[0].cluster.server}' \
      | sed 's#https://##;s#:.*##') \
  --set k8sServicePort=6443 \
  --set cluster.podCIDRList="{10.230.0.0/16}" \
  --set cluster.serviceCIDR="10.96.0.0/16"
</code></pre>
<p>Explanation</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Parameter</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>kubeProxyReplacement=true</code></td><td>Enables full eBPF-based kube-proxy replacement</td></tr>
<tr>
<td><code>ipam.mode=kubernetes</code></td><td>IPAM controlled by Kubernetes</td></tr>
<tr>
<td><code>cluster.podCIDRList</code></td><td>Your Pod CIDR range</td></tr>
<tr>
<td><code>cluster.serviceCIDR</code></td><td>Your Service CIDR range</td></tr>
<tr>
<td><code>k8sServiceHost</code> &amp; <code>k8sServicePort</code></td><td>Points Cilium to your cluster’s API server</td></tr>
</tbody>
</table>
</div><p>After installation, Cilium will replace both <strong>Flannel (CNI)</strong> and <strong>kube-proxy</strong> at the dataplane level.</p>
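<p>The <code>k8sServiceHost</code> value in the install command is derived by stripping the scheme and port from the kubeconfig API-server URL. The same <code>sed</code> expression applied to a sample URL (the endpoint below is hypothetical, for illustration only):</p>

```shell
# Extract the bare host from an API server URL, as the helm command does.
API_SERVER="https://10.0.0.12:6443"   # sample; yours comes from kubectl config view
HOST=$(echo "$API_SERVER" | sed 's#https://##;s#:.*##')
echo "$HOST"   # prints 10.0.0.12
```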
<h3 id="heading-step-3-enable-hubble-relay-ui-and-metrics">Step 3 – Enable Hubble Relay, UI, and Metrics</h3>
<p>Hubble provides powerful observability for network flows and policies in your cluster.</p>
<p>Upgrade the Helm release to enable Hubble features:</p>
<pre><code class="lang-plaintext">helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set bandwidthManager.enabled=true \
  --set hubble.enabled=true \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set hubble.ui.service.type=LoadBalancer \
  --set hubble.relay.service.type=ClusterIP \
  --set hubble.relay.resources.limits.cpu=200m \
  --set hubble.relay.resources.limits.memory=256Mi \
  --set hubble.relay.resources.requests.cpu=100m \
  --set hubble.relay.resources.requests.memory=128Mi \
  --set policyEnforcementMode=default
</code></pre>
<h3 id="heading-step-4-verify-cilium-installation">Step 4 – Verify Cilium Installation</h3>
<p>Check Pod Status</p>
<pre><code class="lang-plaintext">kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system get pods -l k8s-app=hubble-relay
kubectl -n kube-system get pods -l k8s-app=hubble-ui
</code></pre>
<p>All pods should be in the <strong>Running</strong> state.</p>
<p>Check Cilium Agent Health</p>
<pre><code class="lang-plaintext">kubectl -n kube-system exec -ti ds/cilium -- cilium status
</code></pre>
<p>Expected output (summary):</p>
<pre><code class="lang-plaintext">Kubernetes:              Ok         1.34 (v1.34.1) [linux/arm64]
Cilium:                  Ok   1.18.3 (v1.18.3-c1601689)
Cilium health daemon:    Ok
Proxy Status:            OK, ip 10.230.1.88, 0 redirects active on ports 10000-20000, Envoy: external
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 49.54   Metrics: Ok
</code></pre>
<h3 id="heading-step-5-verify-kube-proxy-replacement">Step 5 – Verify Kube-Proxy Replacement</h3>
<p>Cilium replaces kube-proxy by handling Kubernetes Service routing via eBPF.<br />Verify this at both <strong>deployment</strong> and <strong>runtime</strong> levels.</p>
<p>Deployment Level (Helm Values)</p>
<pre><code class="lang-plaintext">helm get values cilium -n kube-system | grep kubeProxyReplacement
</code></pre>
<p>Expected output:</p>
<pre><code class="lang-plaintext">kubeProxyReplacement: true
</code></pre>
<p>Runtime Level (eBPF Map)</p>
<pre><code class="lang-plaintext">kubectl -n kube-system exec -ti ds/cilium -- cilium bpf lb list
</code></pre>
<p>If you see ClusterIP, NodePort, or LoadBalancer entries, Cilium’s eBPF load balancer is active.</p>
<p>Example output:</p>
<pre><code class="lang-plaintext">10.96.0.10:53/UDP -&gt; 10.230.0.5:53/UDP
0.0.0.0:30001/TCP -&gt; 10.230.0.15:8080/TCP
</code></pre>
<h3 id="heading-step-6-verify-hubble-functionality">Step 6 – Verify Hubble Functionality</h3>
<p>Check Hubble components:</p>
<pre><code class="lang-plaintext">kubectl -n kube-system get pods -l k8s-app=hubble-relay
kubectl -n kube-system get pods -l k8s-app=hubble-ui
</code></pre>
<p>Check the Hubble UI service:</p>
<pre><code class="lang-plaintext">kubectl get svc -n kube-system | grep hubble-ui
</code></pre>
<p>Then open:</p>
<pre><code class="lang-plaintext">http://&lt;LoadBalancer&gt;
</code></pre>
<p>You’ll see a real-time topology of all pod-to-pod communication in your cluster. This confirms that <strong>Cilium is replacing kube-proxy</strong> at the kernel level.</p>
<h3 id="heading-step-7-allow-hubble-to-capture-all-flows">Step 7 - <strong>Allow Hubble to Capture All Flows</strong></h3>
<p>By default, Cilium aggregates network-flow events to reduce load.<br />To allow <strong>Hubble to record every individual flow</strong>, disable aggregation:</p>
<pre><code class="lang-plaintext">kubectl -n kube-system set env daemonset/cilium \
  CILIUM_MONITOR_AGGREGATION=none \
  CILIUM_MONITOR_AGGREGATION_INTERVAL=5s
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762761765397/b594bc7c-daff-44d3-887c-20583b7ed391.png" alt class="image--center mx-auto" /></p>
<p>This ensures <strong>fine-grained visibility</strong> for debugging, audit, and metrics.</p>
<h3 id="heading-step-8-validation-checklist">Step 8 - Validation Checklist</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Check</td><td>Command</td><td>Expected</td></tr>
</thead>
<tbody>
<tr>
<td>Flannel removed</td><td><code>kubectl get ds -n kube-system</code></td><td>No <code>flannel</code> DS</td></tr>
<tr>
<td>Cilium running</td><td><code>kubectl -n kube-system get pods -l k8s-app=cilium</code></td><td>Running</td></tr>
<tr>
<td>Hubble running</td><td><code>kubectl -n kube-system get pods -l k8s-app=hubble-relay</code></td><td>Running</td></tr>
<tr>
<td>kube-proxy replaced</td><td><code>helm get values cilium -n kube-system</code></td><td><code>kubeProxyReplacement: true</code></td></tr>
<tr>
<td>eBPF services active</td><td><code>cilium bpf lb list</code></td><td>ClusterIP/NodePort/LB entries</td></tr>
<tr>
<td>iptables clean</td><td><code>iptables -t nat -L KUBE-SERVICES</code></td><td>“No chain”</td></tr>
<tr>
<td>Hubble captures all flows</td><td>Env vars on DS</td><td><code>CILIUM_MONITOR_AGGREGATION=none</code></td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Deploying n8n on Oracle Kubernetes Engine (OKE)]]></title><description><![CDATA[This guide walks you through deploying the n8n automation platform on Oracle Cloud Infrastructure (OCI) using Oracle Kubernetes Engine (OKE) — including persistent storage, authentication, and public access via an OCI Load Balancer.
Overview
n8n (“no...]]></description><link>https://blog.pratiknborkar.com/deploying-n8n-on-oracle-kubernetes-engine-oke</link><guid isPermaLink="true">https://blog.pratiknborkar.com/deploying-n8n-on-oracle-kubernetes-engine-oke</guid><category><![CDATA[n8n]]></category><category><![CDATA[n8n workflows]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[#OCI #OracleCloud #CloudComputing #DataManagement #BigData #CloudStorage #Database #Analytics #AIAutomation #DataEngineering #MachineLearning #DataIntegration #OCIFoundations #CloudData #OracleDatabase #TechLearning]]></category><category><![CDATA[OCI]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Wed, 05 Nov 2025 16:19:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762359493453/5d718238-e713-427d-bc14-30904fac5bde.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This guide walks you through <strong>deploying the n8n automation platform on Oracle Cloud Infrastructure (OCI)</strong> using <strong>Oracle Kubernetes Engine (OKE)</strong> — including persistent storage, authentication, and public access via an OCI Load Balancer.</p>
<h3 id="heading-overview">Overview</h3>
<p><strong>n8n</strong> (“nodemation”) is an open-source workflow automation tool that lets you visually connect services and automate tasks.</p>
<p><strong>Oracle Kubernetes Engine (OKE)</strong> is a managed Kubernetes service on OCI that provides scalable, highly available clusters with integrated networking and load balancing.</p>
<p>In this guide, you’ll:</p>
<ul>
<li><p>Create a namespace and persistent volume for n8n data</p>
</li>
<li><p>Deploy n8n to OKE</p>
</li>
<li><p>Expose it through an OCI LoadBalancer</p>
</li>
<li><p>Fix permission and cookie issues for production readiness</p>
</li>
</ul>
<h3 id="heading-prerequisites">Prerequisites</h3>
<p>Before you begin, ensure you have:</p>
<ul>
<li><p>An existing <strong>OKE cluster</strong> (v1.34.1 or newer)</p>
</li>
<li><p><code>kubectl</code> configured to connect to the cluster (<code>kubectl get nodes</code> works)</p>
</li>
<li><p>One <strong>public subnet OCID</strong> (for the OCI Load Balancer)</p>
</li>
<li><p>Basic understanding of Kubernetes YAML manifests</p>
</li>
</ul>
<h3 id="heading-step-1-create-the-namespace-and-persistent-volume-claim">Step 1 — Create the Namespace and Persistent Volume Claim</h3>
<p>Create a namespace for organizational isolation and a PVC to persist n8n configuration and workflows.</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Namespace
metadata:
  name: n8n
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-pvc
  namespace: n8n
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f n8n-storage.yaml
</code></pre>
<h3 id="heading-step-2-create-authentication-secret">Step 2 — Create Authentication Secret</h3>
<p>Create a Kubernetes secret for n8n’s basic authentication.</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Secret
metadata:
  name: n8n-secret
  namespace: n8n
type: Opaque
stringData:
  username: admin
  password: strongpassword123
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f n8n-secret.yaml
</code></pre>
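<p>Because the manifest uses <code>stringData</code>, the API server base64-encodes the values on write; the stored <code>data</code> field is equivalent to encoding them yourself:</p>

```shell
# What Kubernetes stores under .data for the stringData value above:
printf '%s' 'strongpassword123' | base64
# prints c3Ryb25ncGFzc3dvcmQxMjM=
```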
<h3 id="heading-step-3-deploy-n8n-application">Step 3 — Deploy n8n Application</h3>
<p>Below is a production-ready Deployment that:</p>
<ul>
<li><p>Uses the fully qualified Docker image</p>
</li>
<li><p>Mounts the PVC for persistence</p>
</li>
<li><p>Fixes file-permission issues (<code>fsGroup: 1000</code>)</p>
</li>
<li><p>Disables secure cookie enforcement (for HTTP access via LoadBalancer)</p>
</li>
</ul>
<pre><code class="lang-plaintext">apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: n8n
          image: docker.io/n8nio/n8n:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5678
          env:
            - name: N8N_BASIC_AUTH_ACTIVE
              value: "true"
            - name: N8N_BASIC_AUTH_USER
              valueFrom:
                secretKeyRef:
                  name: n8n-secret
                  key: username
            - name: N8N_BASIC_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: n8n-secret
                  key: password
            - name: N8N_HOST
              value: "n8n"
            - name: N8N_PORT
              value: "5678"
            - name: N8N_PROTOCOL
              value: "http"
            - name: N8N_SECURE_COOKIE
              value: "false"
            - name: GENERIC_TIMEZONE
              value: "Asia/Kolkata"
            - name: TZ
              value: "Asia/Kolkata"
          volumeMounts:
            - name: n8n-data
              mountPath: /home/node/.n8n
      volumes:
        - name: n8n-data
          persistentVolumeClaim:
            claimName: n8n-pvc
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f n8n-deployment.yaml
</code></pre>
<h3 id="heading-step-4-expose-n8n-via-oci-loadbalancer">Step 4 — Expose n8n via OCI LoadBalancer</h3>
<p>Now expose n8n externally so you can access it through a public IP.</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: n8n-service
  namespace: n8n
spec:
  type: LoadBalancer
  selector:
    app: n8n
  ports:
    - name: http
      port: 80
      targetPort: 5678
      protocol: TCP
</code></pre>
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f n8n-service.yaml
</code></pre>
<p>After a few minutes:</p>
<pre><code class="lang-plaintext">kubectl get svc -n n8n
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
n8n-service     LoadBalancer   10.96.123.45   140.238.xxx.xxx  80:31234/TCP   2m
</code></pre>
<h3 id="heading-step-5-verify-and-troubleshoot">Step 5 — Verify and Troubleshoot</h3>
<p>Check pod logs:</p>
<pre><code class="lang-plaintext">kubectl logs -f -n n8n deploy/n8n
</code></pre>
<p>Expected startup log:</p>
<pre><code class="lang-plaintext">Editor is now accessible:
 ▸  http://0.0.0.0:5678/
</code></pre>
<p>If you see permission errors like <code>EACCES: permission denied</code>, confirm your <code>fsGroup: 1000</code> is present in the YAML.</p>
<p>If you get a “secure cookie” warning, ensure <code>N8N_SECURE_COOKIE=false</code> is set, or enable HTTPS on your LoadBalancer.</p>
<h3 id="heading-step-6-optional-add-https-with-oci-loadbalancer">Step 6 — (Optional) Add HTTPS with OCI LoadBalancer</h3>
<p>For production, you can enable HTTPS by adding these annotations to the Service:</p>
<pre><code class="lang-plaintext">metadata:
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: "n8n/n8n-tls"
</code></pre>
<p>Then create a TLS secret:</p>
<pre><code class="lang-plaintext">kubectl create secret tls n8n-tls \
  --cert=server.crt --key=server.key -n n8n
</code></pre>
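<p>If you don’t already have a certificate, a self-signed pair is enough for testing (the CN below is a placeholder; use your own domain):</p>

```shell
# Generate a self-signed certificate and key, valid for 365 days.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=n8n.example.com"
```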
<h3 id="heading-step-7-optional-use-postgresql-for-production">Step 7 — (Optional) Use PostgreSQL for Production</h3>
<p>For better performance and reliability, you can connect n8n to an external PostgreSQL database (for example, OCI Database with PostgreSQL):</p>
<pre><code class="lang-plaintext">- name: DB_TYPE
  value: "postgresdb"
- name: DB_POSTGRESDB_HOST
  value: "&lt;your-db-host&gt;"
- name: DB_POSTGRESDB_USER
  value: "&lt;your-db-user&gt;"
- name: DB_POSTGRESDB_PASSWORD
  value: "&lt;your-db-password&gt;"
- name: DB_POSTGRESDB_DATABASE
  value: "n8n"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762358844592/a815c23a-16f1-4a6b-8548-a4d2b09be778.png" alt="n8n" class="image--center mx-auto" /></p>
<h3 id="heading-summary">Summary</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Namespace</strong></td><td>Isolates resources</td></tr>
<tr>
<td><strong>PVC</strong></td><td>Persists workflow data</td></tr>
<tr>
<td><strong>Secret</strong></td><td>Stores login credentials</td></tr>
<tr>
<td><strong>Deployment</strong></td><td>Runs n8n container securely</td></tr>
<tr>
<td><strong>Service (LoadBalancer)</strong></td><td>Exposes n8n externally</td></tr>
<tr>
<td><strong>SecurityContext</strong></td><td>Fixes PVC permission issues</td></tr>
<tr>
<td><strong>Optional TLS</strong></td><td>Enables HTTPS via OCI LoadBalancer</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Automate OCI Object Storage with Terraform]]></title><description><![CDATA[Object Storage in Oracle Cloud Infrastructure (OCI) provides durable and scalable storage for any type of data. Using Terraform, we can declaratively create and manage Object Storage Buckets, including lifecycle rules for intelligent data management ...]]></description><link>https://blog.pratiknborkar.com/automate-oci-object-storage-with-terraform</link><guid isPermaLink="true">https://blog.pratiknborkar.com/automate-oci-object-storage-with-terraform</guid><category><![CDATA[OCI]]></category><category><![CDATA[object storage]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[#OCI #OracleCloud #CloudComputing #DataManagement #BigData #CloudStorage #Database #Analytics #AIAutomation #DataEngineering #MachineLearning #DataIntegration #OCIFoundations #CloudData #OracleDatabase #TechLearning]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Thu, 30 Oct 2025 13:10:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761829531177/2c9128f5-6a6c-44ee-87f9-f9dfe9a65529.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Object Storage in <strong>Oracle Cloud Infrastructure (OCI)</strong> provides durable and scalable storage for any type of data. Using <strong>Terraform</strong>, we can declaratively create and manage Object Storage Buckets, including lifecycle rules for intelligent data management — such as moving old files to <strong>Infrequent Access</strong>, <strong>Archiving</strong>, or <strong>Deleting</strong> them after a certain time.</p>
<p>In this article, we’ll create:</p>
<ul>
<li><p><strong>Single Bucket without lifecycle</strong></p>
</li>
<li><p><strong>Single Bucket with lifecycle rules</strong></p>
</li>
<li><p><strong>Multiple Buckets using a Terraform loop</strong></p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin, make sure you have:</p>
<ul>
<li><p>An OCI tenancy with Object Storage access</p>
</li>
<li><p>Compartment OCID</p>
</li>
<li><p>Terraform ≥ 1.5</p>
</li>
<li><p>OCI provider ≥ 7.3.0</p>
</li>
<li><p>Configured OCI CLI or API keys</p>
</li>
</ul>
<h3 id="heading-1-single-bucket-without-lifecycle">1. Single Bucket Without Lifecycle</h3>
<p>(<code>main.tf</code>)</p>
<pre><code class="lang-plaintext">provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}

resource "oci_objectstorage_bucket" "bucket" {
  compartment_id = var.compartment_ocid
  name           = var.bucket_name
  namespace      = var.namespace
  storage_tier   = var.storage_tier
}
</code></pre>
<p>Define Variables (<code>variables.tf</code>)</p>
<pre><code class="lang-plaintext">variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

variable "compartment_ocid" {}
variable "bucket_name" {}
variable "namespace" {}
variable "storage_tier" {
  default = "Standard" # Options: Standard or Archive
}
</code></pre>
<p>Provide Variable Values (<code>terraform.tfvars</code>)</p>
<pre><code class="lang-plaintext">private_key_path = "C:/Users/Pratik N Borkar/.oci/xxxx.pem"
user_ocid        = "ocid1.user.oc1..aaaaabmza"
fingerprint      = "3a:d9f:71"
tenancy_ocid     = "ocid1.tenancy.ookmjm7dq"
region           = "ap-sydney-1"


compartment_ocid  = "ocid1.compartr2vdqa"
bucket_name      = "example-bucket"
namespace        = "XXXXXXX"
</code></pre>
<h3 id="heading-2-multiple-bucket-without-lifecycle">2. Multiple Buckets Without Lifecycle</h3>
<p>(<code>main.tf</code>)</p>
<pre><code class="lang-plaintext">provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}

data "oci_objectstorage_namespace" "ns" {
  compartment_id = var.compartment_ocid
}

locals {
  bucket_names = [for i in range(1, var.bucket_count + 1) : format("AIS-%03d", i)]
}

resource "oci_objectstorage_bucket" "bucket" {
  for_each = toset(local.bucket_names)

  compartment_id     = var.compartment_ocid
  name               = each.value
  namespace          = data.oci_objectstorage_namespace.ns.namespace
  storage_tier       = var.storage_tier
}
</code></pre>
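<p>The <code>locals</code> expression expands to ten zero-padded names, <code>AIS-001</code> through <code>AIS-010</code>. Terraform’s <code>format</code> follows printf-style rules, so the same naming can be sketched in shell:</p>

```shell
# Mirror of: [for i in range(1, var.bucket_count + 1) : format("AIS-%03d", i)]
BUCKET_COUNT=10
for i in $(seq 1 "$BUCKET_COUNT"); do
  printf 'AIS-%03d\n' "$i"
done
# first line: AIS-001, last line: AIS-010
```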
<p>Define Variables (<code>variables.tf</code>)</p>
<pre><code class="lang-plaintext">variable "tenancy_ocid" {
  type = string
}

variable "user_ocid" {
  type = string
}

variable "fingerprint" {
  type = string
}

variable "private_key_path" {
  type = string
}

variable "region" {
  type = string
}

variable "compartment_ocid" {
  type = string
}

variable "storage_tier" {
  type    = string
  default = "Standard"
}

variable "bucket_count" {
  type    = number
  default = 10
}
</code></pre>
<p>Provide Variable Values (<code>terraform.tfvars</code>)</p>
<pre><code class="lang-plaintext">private_key_path = "C:/Users/Pratik N Borkar/.oci/xxxx.pem"
user_ocid        = "ocid1.user.oc1..aaaaabmza"
fingerprint      = "3a:d9f:71"
tenancy_ocid     = "ocid1.tenancy.ookmjm7dqxxxxxx"
region           = "ap-sydney-1"


compartment_ocid  = "ocid1.compartr2vdqaxxxxx"
storage_tier     = "Standard"
bucket_count     = 10
</code></pre>
<h3 id="heading-3-single-bucket-with-lifecycle">3. Single Bucket With Lifecycle</h3>
<p>(<code>main.tf</code>)</p>
<pre><code class="lang-plaintext">provider "oci" {
  tenancy_ocid        = var.tenancy_ocid
  user_ocid           = var.user_ocid
  fingerprint         = var.fingerprint
  private_key_path    = var.private_key_path
  region              = var.region
}

resource "oci_objectstorage_bucket" "this" {
  count          = var.object_storage_bucket_deploy ? 1 : 0
  compartment_id = var.object_storage_bucket_compartment_ocid
  name           = var.object_storage_bucket_name
  namespace      = var.object_storage_bucket_namespace

  access_type           = "NoPublicAccess"
  auto_tiering          = var.object_storage_bucket_storage_tier == "Standard" ? "Disabled" : null
  metadata              = var.object_storage_bucket_metadata
  freeform_tags         = var.object_storage_bucket_freeform_tags
  object_events_enabled = var.object_storage_bucket_object_events_enabled
  storage_tier          = var.object_storage_bucket_storage_tier
  versioning            = var.object_storage_bucket_versioning
}

resource "oci_objectstorage_object_lifecycle_policy" "this" {
  # Create the policy only when the bucket exists and rules were supplied;
  # without the deploy check, the [0] index below fails when the bucket is disabled.
  count     = var.object_storage_bucket_deploy &amp;&amp; length(var.object_storage_bucket_lifecycle_policy_rules) &gt; 0 ? 1 : 0
  bucket    = oci_objectstorage_bucket.this[0].name
  namespace = oci_objectstorage_bucket.this[0].namespace

  dynamic "rules" {
    for_each = var.object_storage_bucket_lifecycle_policy_rules
    content {
      action      = rules.value.action
      is_enabled  = rules.value.is_enabled
      name        = rules.value.name
      time_amount = rules.value.time_amount
      time_unit   = rules.value.time_unit
      target      = rules.value.target

      dynamic "object_name_filter" {
        for_each = (
          length(rules.value.object_name_filter.inclusion_patterns) &gt; 0 ||
          length(rules.value.object_name_filter.exclusion_patterns) &gt; 0 ||
          length(rules.value.object_name_filter.inclusion_prefixes) &gt; 0
        ) ? [1] : []

        content {
          exclusion_patterns = toset(rules.value.object_name_filter.exclusion_patterns)
          inclusion_patterns = toset(rules.value.object_name_filter.inclusion_patterns)
          inclusion_prefixes = toset(rules.value.object_name_filter.inclusion_prefixes)
        }
      }
    }
  }
}

# Note: OCI Object Storage keeps a single lifecycle policy per bucket, so a
# second policy resource on the same bucket would overwrite the one above.
# This resource is therefore only created when no custom rules are supplied;
# otherwise, append the ABORT rule to the rules variable instead.
resource "oci_objectstorage_object_lifecycle_policy" "multipart_uploads" {
  count     = var.object_storage_bucket_deploy &amp;&amp; length(var.object_storage_bucket_lifecycle_policy_rules) == 0 ? 1 : 0
  bucket    = oci_objectstorage_bucket.this[0].name
  namespace = oci_objectstorage_bucket.this[0].namespace

  rules {
    action      = "ABORT"
    is_enabled  = true
    name        = "Delete uncommitted or failed multipart uploads Rule"
    time_amount = "7"
    time_unit   = "DAYS"
    target      = "multipart-uploads"
  }
}
</code></pre>
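<p>Because the bucket resource is guarded by <code>count</code>, downstream references need an index. As an optional sketch (an <code>outputs.tf</code> is not part of the original layout), the splat-plus-<code>one()</code> pattern keeps outputs valid even when the bucket is not deployed:</p>
<pre><code class="lang-plaintext"># outputs.tf (optional sketch)
# one() returns the single element of the splat list, or null when the
# bucket was not created (object_storage_bucket_deploy = false).
output "bucket_name" {
  value = one(oci_objectstorage_bucket.this[*].name)
}

output "bucket_namespace" {
  value = one(oci_objectstorage_bucket.this[*].namespace)
}
</code></pre>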
<p>Define Variables (<code>variables.tf</code>)</p>
<pre><code class="lang-plaintext">variable "tenancy_ocid" {}
variable "user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

variable "object_storage_bucket_compartment_ocid" {}
variable "object_storage_bucket_name" {}
variable "object_storage_bucket_namespace" {}

variable "object_storage_bucket_metadata" {
  type = map(string)
  default = {}
}

variable "object_storage_bucket_freeform_tags" {
  type = map(string)
  default = {}
}

variable "object_storage_bucket_object_events_enabled" {
  type    = bool
  default = false
}

variable "object_storage_bucket_versioning" {
  default = "Enabled"
}

variable "object_storage_bucket_storage_tier" {
  default = "Standard"
}

variable "object_storage_bucket_deploy" {
  type    = bool
  default = true
}

variable "object_storage_bucket_lifecycle_policy_rules" {
  type = list(object({
    action      = string
    is_enabled  = bool
    name        = string
    time_amount = number
    time_unit   = string
    target      = string
    object_name_filter = object({
      inclusion_patterns = list(string)
      exclusion_patterns = list(string)
      inclusion_prefixes = list(string)
    })
  }))
}
</code></pre>
<p>Provide Variable Values (<code>terraform.tfvars</code>)</p>
<pre><code class="lang-plaintext">private_key_path = "C:/Users/Pratik N Borkar/.oci/xxxx.pem"
user_ocid        = "ocid1.user.oc1..aaaaabmza"
fingerprint      = "3a:d9f:71"
tenancy_ocid     = "ocid1.tenancy.ookmjm7dqxxxxxx"
region           = "ap-sydney-1"

object_storage_bucket_compartment_ocid = "ocid1.compartmei3ier2vdqa"
object_storage_bucket_name             = "my-logs-bucket"
object_storage_bucket_namespace        = "XXXXXXX"

object_storage_bucket_metadata = {}
object_storage_bucket_freeform_tags = {
  Environment = "Dev"
}
object_storage_bucket_object_events_enabled = false
object_storage_bucket_versioning            = "Enabled"
object_storage_bucket_storage_tier          = "Standard"

object_storage_bucket_lifecycle_policy_rules = [
  {
    action      = "INFREQUENT_ACCESS"
    is_enabled  = true
    name        = "Move Infrequent Access Objects Rule"
    time_amount = 45
    time_unit   = "DAYS"
    target      = "objects"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  },
  {
    action      = "ARCHIVE"
    is_enabled  = true
    name        = "Archive Objects Rule"
    time_amount = 90
    time_unit   = "DAYS"
    target      = "objects"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  },
  {
    action      = "DELETE"
    is_enabled  = true
    name        = "Delete Objects Rule"
    time_amount = 120
    time_unit   = "DAYS"
    target      = "objects"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  },
  {
    action      = "INFREQUENT_ACCESS"
    is_enabled  = true
    name        = "Move Infrequent Access Previous Versions Rule"
    time_amount = 45
    time_unit   = "DAYS"
    target      = "previous-object-versions"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  },
  {
    action      = "ARCHIVE"
    is_enabled  = true
    name        = "Archive Previous Versions Rule"
    time_amount = 90
    time_unit   = "DAYS"
    target      = "previous-object-versions"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  },
  {
    action      = "DELETE"
    is_enabled  = true
    name        = "Delete Previous Versions Rule"
    time_amount = 240
    time_unit   = "DAYS"
    target      = "previous-object-versions"
    object_name_filter = {
      inclusion_patterns = []
      exclusion_patterns = []
      inclusion_prefixes = []
    }
  }
]
</code></pre>
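<p>With the variables and values in place, the workflow is the same as for any Terraform configuration:</p>
<pre><code class="lang-plaintext">terraform init
terraform plan
terraform apply
</code></pre>
<p>Review the plan output carefully, and confirm with <code>yes</code> when prompted.</p>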
<h2 id="heading-summary">Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Folder</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>Single-Without-Lifecycle</code></td><td>Creates a single basic Object Storage bucket</td></tr>
<tr>
<td><code>Single-With-Lifecycle</code></td><td>Adds 6 lifecycle rules for object and version management</td></tr>
<tr>
<td><code>Multiple Bucket</code></td><td>Creates multiple buckets dynamically using Terraform loops</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Creating an Oracle Cloud VCN with Terraform]]></title><description><![CDATA[A Virtual Cloud Network (VCN) in Oracle Cloud Infrastructure (OCI) is a customizable, software-defined network that hosts your cloud resources such as compute instances, databases, and load balancers.
Using Terraform, you can automate the creation an...]]></description><link>https://blog.pratiknborkar.com/creating-an-oracle-cloud-vcn-with-terraform</link><guid isPermaLink="true">https://blog.pratiknborkar.com/creating-an-oracle-cloud-vcn-with-terraform</guid><category><![CDATA[OCI]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[#OCI #OracleCloud #CloudComputing #DataManagement #BigData #CloudStorage #Database #Analytics #AIAutomation #DataEngineering #MachineLearning #DataIntegration #OCIFoundations #CloudData #OracleDatabase #TechLearning]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Wed, 29 Oct 2025 11:18:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761736590222/54a90ee5-ea0f-439d-9f0f-fcbfe0b4224b.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A <strong>Virtual Cloud Network (VCN)</strong> in Oracle Cloud Infrastructure (OCI) is a customizable, software-defined network that hosts your cloud resources such as compute instances, databases, and load balancers.</p>
<p>Using <strong>Terraform</strong>, you can automate the creation and management of VCNs, ensuring repeatable and consistent infrastructure deployments.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before you begin, make sure you have:</p>
<ol>
<li><p><strong>OCI Account</strong> with permissions to create networking resources.</p>
</li>
<li><p><strong>Terraform</strong> installed (v1.10.0 or higher recommended).</p>
</li>
<li><p><strong>OCI Terraform Provider</strong> configured.</p>
</li>
<li><p>Your OCI credentials:</p>
<ul>
<li><p>Tenancy OCID</p>
</li>
<li><p>User OCID</p>
</li>
<li><p>Compartment OCID</p>
</li>
<li><p>Fingerprint</p>
</li>
<li><p>Private key path</p>
</li>
<li><p>Region</p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-project-structure">Project Structure</h2>
<p>Create a working directory, for example:</p>
<pre><code class="lang-plaintext">oci-vcn-terraform/
│
├── main.tf
├── variables.tf
└── terraform.tfvars
</code></pre>
<p>Step 1: Define Provider and Resources (<code>main.tf</code>)</p>
<pre><code class="lang-plaintext"># -------------------------------
# Provider
# -------------------------------
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}

# -------------------------------
# VCN
# -------------------------------
resource "oci_core_vcn" "oke_vcn" {
  cidr_block     = var.vcn_cidr
  compartment_id = var.compartment_ocid
  display_name   = "OKE-VCN-Sydney"
  dns_label      = "okevcn"
}

# -------------------------------
# Gateways
# -------------------------------
resource "oci_core_internet_gateway" "igw" {
  display_name   = "Internet-Gateway"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.oke_vcn.id
  enabled        = true
}

resource "oci_core_nat_gateway" "nat_gw" {
  display_name   = "NAT-Gateway"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.oke_vcn.id
  block_traffic  = false
}

# -------------------------------
# Subnet Definitions
# -------------------------------
locals {
  subnets = {
    jump_host = { name = "Jump-Host-Subnet", cidr = var.jump_host_cidr, public = true }
    master    = { name = "Master-Subnet", cidr = var.master_cidr, public = false }
    node      = { name = "Node-Subnet", cidr = var.node_cidr, public = false }
    lb        = { name = "LB-Subnet", cidr = var.lb_cidr, public = true }
    pod       = { name = "POD-Subnet", cidr = var.pod_cidr, public = false }
  }
}

# -------------------------------
# Security Lists
# -------------------------------
resource "oci_core_security_list" "sl" {
  for_each = local.subnets

  display_name   = "Security List for ${each.value.name}"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.oke_vcn.id

  egress_security_rules {
    destination = "0.0.0.0/0"
    protocol    = "all"
  }

  ingress_security_rules {
    source   = "0.0.0.0/0"
    protocol = "all"
  }
}

# -------------------------------
# Route Tables
# -------------------------------
resource "oci_core_route_table" "rt" {
  for_each = local.subnets

  display_name   = "Route Table for ${each.value.name}"
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.oke_vcn.id

  # Attach Internet Gateway for public subnets
  # Attach NAT Gateway for private subnets
  route_rules {
    description       = each.value.public ? "Route to Internet via IGW" : "Route to Internet via NAT"
    destination       = "0.0.0.0/0"
    destination_type  = "CIDR_BLOCK"
    network_entity_id = each.value.public ? oci_core_internet_gateway.igw.id : oci_core_nat_gateway.nat_gw.id
  }
}

# -------------------------------
# Subnets
# -------------------------------
resource "oci_core_subnet" "subnet" {
  for_each = local.subnets

  cidr_block                 = each.value.cidr
  display_name               = each.value.name
  dns_label                  = replace(lower(each.key), "_", "")
  prohibit_public_ip_on_vnic = each.value.public ? false : true
  vcn_id                     = oci_core_vcn.oke_vcn.id
  route_table_id             = oci_core_route_table.rt[each.key].id
  security_list_ids          = [oci_core_security_list.sl[each.key].id]
  compartment_id             = var.compartment_ocid
}
</code></pre>
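<p>Note that the security lists above allow all traffic in both directions, which keeps the example simple but is not suitable for production. As a purely hypothetical hardening step, the jump-host ingress could be narrowed to SSH only (protocol <code>"6"</code> is TCP in OCI security rules):</p>
<pre><code class="lang-plaintext"># Example only: a restrictive ingress rule for the jump host subnet,
# allowing inbound SSH (TCP/22) instead of all protocols from 0.0.0.0/0.
ingress_security_rules {
  protocol = "6" # TCP
  source   = "0.0.0.0/0"

  tcp_options {
    min = 22
    max = 22
  }
}
</code></pre>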
<p>Step 2: Define Variables (<code>variables.tf</code>)</p>
<pre><code class="lang-plaintext">variable "tenancy_ocid" {
  type        = string
  description = "Tenancy OCID"
}

variable "user_ocid" {
  type        = string
  description = "User OCID"
}

variable "fingerprint" {
  type        = string
  description = "API Key Fingerprint"
}

variable "private_key_path" {
  type        = string
  description = "Path to private key"
}

variable "region" {
  type        = string
  description = "OCI region"
}

variable "compartment_ocid" {
  type        = string
  description = "Compartment OCID"
}

# CIDR ranges referenced by main.tf. The defaults below are examples;
# adjust them to fit your own network plan.
variable "vcn_cidr" {
  type        = string
  description = "CIDR block for the VCN"
  default     = "10.0.0.0/16"
}

variable "jump_host_cidr" {
  type        = string
  description = "CIDR block for the jump host subnet"
  default     = "10.0.1.0/24"
}

variable "master_cidr" {
  type        = string
  description = "CIDR block for the master subnet"
  default     = "10.0.2.0/24"
}

variable "node_cidr" {
  type        = string
  description = "CIDR block for the node subnet"
  default     = "10.0.3.0/24"
}

variable "lb_cidr" {
  type        = string
  description = "CIDR block for the load balancer subnet"
  default     = "10.0.4.0/24"
}

variable "pod_cidr" {
  type        = string
  description = "CIDR block for the pod subnet"
  default     = "10.0.5.0/24"
}
</code></pre>
<p>Step 3: Provide Variable Values (<code>terraform.tfvars</code>)</p>
<pre><code class="lang-plaintext"># -------------------------------
# Provider Authentication
# -------------------------------
tenancy_ocid     = "ocid1.tenancy.oc1.xxx"
user_ocid        = "ocid1.user.oc1..axxx"
fingerprint      = "3a:ds.xx.xx"
private_key_path = "path of key"
region           = "ap-sydney-1"

# -------------------------------
# Networking Details
# -------------------------------
compartment_ocid = "ocid1.compartment.oc1.."
</code></pre>
<p>Step 4: Initialize and Apply</p>
<pre><code class="lang-plaintext">terraform init
terraform plan
terraform apply
</code></pre>
<p>Confirm with <code>yes</code> when prompted.<br />Terraform will provision:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Resource Type</td><td>Count</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>VCN</td><td>1</td><td>Core network</td></tr>
<tr>
<td>Internet Gateway</td><td>1</td><td>Public internet access</td></tr>
<tr>
<td>NAT Gateway</td><td>1</td><td>Private outbound internet access</td></tr>
<tr>
<td>Security Lists</td><td>5</td><td>One per subnet</td></tr>
<tr>
<td>Route Tables</td><td>5</td><td>One per subnet</td></tr>
<tr>
<td>Subnets</td><td>5</td><td>Logical network segments</td></tr>
</tbody>
</table>
</div>
<h2 id="heading-verification">Verification</h2>
<p>After the deployment:</p>
<ol>
<li><p>Log in to the <strong>OCI Console</strong>.</p>
</li>
<li><p>Navigate to <strong>Networking → Virtual Cloud Networks</strong>.</p>
</li>
<li><p>You’ll see the newly created <strong>OKE-VCN-Sydney</strong> and related resources.</p>
</li>
</ol>
<p>You can also verify via CLI:</p>
<pre><code class="lang-plaintext">oci network vcn list --compartment-id &lt;compartment_ocid&gt;
</code></pre>
<h2 id="heading-clean-up">Clean Up</h2>
<p>To remove all resources:</p>
<pre><code class="lang-plaintext">terraform destroy
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Kops Kubernetes Cluster]]></title><description><![CDATA[Configure the AWS CLI and create a user. The user must have administrative access, or for better security you can create a user manually with only the following policies: AmazonEC2FullAccess, AmazonRoute53FullAccess, AmazonS3FullAccess, AmazonVPCFullAccess
{
    "Ver...]]></description><link>https://blog.pratiknborkar.com/kop-kubernetes-custer</link><guid isPermaLink="true">https://blog.pratiknborkar.com/kop-kubernetes-custer</guid><category><![CDATA[AWS]]></category><category><![CDATA[k8s]]></category><category><![CDATA[Kops]]></category><dc:creator><![CDATA[Pratik N Borkar]]></dc:creator><pubDate>Sun, 17 Jul 2022 19:40:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1658087488273/3Zb2v-vV6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Configure the AWS CLI and create a user. The user must have administrative access, or for better security you can create a user manually with only the following policies: AmazonEC2FullAccess, AmazonRoute53FullAccess, AmazonS3FullAccess, AmazonVPCFullAccess</p>
<pre><code class="lang-plaintext">{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
</code></pre>
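<p>If you prefer the scoped-policy route, the four managed policies can be attached from the AWS CLI. This is a sketch, not from the original article, and the user name <code>kops</code> is just an example:</p>
<pre><code class="lang-plaintext"># Create the user, attach the four AWS-managed policies, then
# generate access keys for use with "aws configure".
aws iam create-user --user-name kops

for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
              AmazonS3FullAccess AmazonVPCFullAccess; do
  aws iam attach-user-policy \
    --user-name kops \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done

aws iam create-access-key --user-name kops
</code></pre>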
<p>Create an S3 bucket for the kops state store, in my case <code>state.handsonk8s.ga</code>:</p>
<pre><code class="lang-plaintext">aws s3 mb s3://state.handsonk8s.ga
</code></pre>
<p>Install kops and kubectl. In my case, installing kops on macOS:</p>
<pre><code class="lang-plaintext">curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
</code></pre>
<p>Installing kubectl on macOS:</p>
<pre><code class="lang-plaintext">curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
</code></pre>
<p>Add your domain/cluster name as a hosted zone in Route 53. Make sure the TTL is 60 seconds or less for quick DNS propagation.</p>
<p>Create an SSH key pair (kops uses the public key for node access):</p>
<pre><code class="lang-plaintext">Lucifers-MacBook-Pro:.ssh lucifer$ ls
known_hosts
Lucifers-MacBook-Pro:.ssh lucifer$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/lucifer/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/lucifer/.ssh/id_rsa.
Your public key has been saved in /Users/lucifer/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ZChUKEFJAERAbmpDRsmWi0jC5YWZso8QwgKA+nYyjU0 lucifer@Lucifers-MacBook-Pro.local
The key's randomart image is:
+---[RSA 2048]----+
|^OB+=+.          |
|OX+=o  .         |
|X*+o. . o        |
|Xo  E. o         |
|o+o=    S        |
|..B.+            |
| . +             |
|                 |
|                 |
+----[SHA256]-----+
Lucifers-MacBook-Pro:.ssh lucifer$ 
Lucifers-MacBook-Pro:.ssh lucifer$ ls
id_rsa        id_rsa.pub    known_hosts
</code></pre>
<p>Create the kops cluster:</p>
<pre><code class="lang-plaintext">kops create cluster \
--state "s3://state.handsonk8s.ga" \
--zones "us-east-1a,us-east-1b"  \
--master-count 3 \
--master-size=t2.micro \
--node-count 3 \
--node-size=t2.micro \
--name handsonk8s.ga  \
--yes
</code></pre>
<ul>
<li><p><code>--state</code>: your S3 state bucket</p>
</li>
<li><p><code>--master-count</code>: number of master nodes, set to 3 here</p>
</li>
<li><p><code>--master-size</code>: instance type for the master nodes</p>
</li>
<li><p><code>--node-count</code>: number of worker nodes, set to 3 here</p>
</li>
<li><p><code>--node-size</code>: instance type for the worker nodes</p>
</li>
<li><p><code>--name</code>: your hosted zone name</p>
</li>
</ul>
<p>Validate the kops cluster:</p>
<pre><code class="lang-plaintext">kops validate cluster \
       --state "s3://state.handsonk8s.ga" \
       --name handsonk8s.ga
</code></pre>
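<p>Once validation passes, you can point kubectl at the cluster and confirm the nodes are ready. kops writes the kubeconfig during cluster creation; <code>kops export kubecfg</code> refreshes it if needed (newer kops versions may also require the <code>--admin</code> flag to export credentials):</p>
<pre><code class="lang-plaintext">kops export kubecfg \
       --state "s3://state.handsonk8s.ga" \
       --name handsonk8s.ga

kubectl get nodes
</code></pre>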
<p>Update the kops cluster (always update the cluster after changing security groups):</p>
<pre><code class="lang-plaintext">kops update cluster \
       --state "s3://state.handsonk8s.ga" \
       --name handsonk8s.ga  \
       --yes
</code></pre>
<p>Destroy the kops cluster:</p>
<pre><code class="lang-plaintext">kops delete cluster \
       --state "s3://state.handsonk8s.ga" \
       --name handsonk8s.ga  \
       --yes
</code></pre>
]]></content:encoded></item></channel></rss>