Migrating from Flannel to Cilium on Oracle Kubernetes Engine (OKE)


Cilium is an open-source, cloud-native networking, security, and observability platform built on top of eBPF (Extended Berkeley Packet Filter), a revolutionary Linux kernel technology that allows programs to run safely and efficiently inside the kernel without changing its source code.

Unlike traditional CNIs (such as Flannel, Calico, or Weave) that rely on iptables for packet forwarding and filtering, Cilium uses eBPF to dynamically inject logic into the kernel’s networking stack. This enables faster performance, fine-grained visibility, and deep network security enforcement, all without the limitations of legacy Linux networking mechanisms.

Cilium can completely replace:

  • Flannel, as the CNI plugin for pod networking

  • kube-proxy, as the Kubernetes Service load balancer

  • NetworkPolicy engines, by enforcing security policies with eBPF

  • Monitoring tools, through its built-in observability layer, Hubble

Prerequisites

  • Access to an OKE cluster (v1.27 or later recommended)

  • kubectl configured for the cluster (from a jump server or OCI Cloud Shell)

  • helm installed (v3.8+)

  • Administrator privileges in the cluster

Step 0 – Disable OKE Addon Flannel

OKE automatically provisions Flannel as the default CNI.
Before installing Cilium, you must disable this addon and remove the existing DaemonSet.

Run the following commands:

# Disable the Flannel addon from the OKE console (manual step)

# Remove the Flannel DaemonSet
kubectl delete ds flannel -n kube-system

This ensures Flannel pods are deleted and networking configuration is ready for Cilium to take over.
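This check can also be scripted. The helper below reads `kubectl get ds -n kube-system -o name` output on stdin and reports whether any Flannel DaemonSet remains; the sample input at the end is illustrative, not output from a real cluster:

```shell
# flannel_present: reads DaemonSet names on stdin, prints "yes" if any
# Flannel DaemonSet remains, "no" otherwise
flannel_present() {
  if grep -qi flannel; then echo "yes"; else echo "no"; fi
}

# On a live cluster:
#   kubectl get ds -n kube-system -o name | flannel_present

# Illustrative sample (a cluster where Flannel was already removed):
printf 'daemonset.apps/cilium\ndaemonset.apps/csi-oci-node\n' | flannel_present   # -> no
```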

Step 1 – Add the Cilium Helm Repository

Add and update the official Cilium Helm chart repository:

helm repo add cilium https://helm.cilium.io/
helm repo update

You can verify available versions with:

helm search repo cilium/cilium --versions

Step 2 – Install Cilium CNI

Install the latest Cilium release (1.18.3 at the time of writing) using Helm:

helm install cilium cilium/cilium \
  --version 1.18.3 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set kubeProxyReplacementHealthzBindAddr="0.0.0.0:10256" \
  --set ipam.mode=kubernetes \
  --set ipv4.enabled=true \
  --set cluster.name=oke-cilium \
  --set k8sServiceHost=$(kubectl config view \
      --minify -o jsonpath='{.clusters[0].cluster.server}' \
      | sed 's#https://##;s#:.*##') \
  --set k8sServicePort=6443 \
  --set cluster.podCIDRList="{10.230.0.0/16}" \
  --set cluster.serviceCIDR="10.96.0.0/16"

Explanation

Parameter                          Description
kubeProxyReplacement=true          Enables full eBPF-based kube-proxy replacement
ipam.mode=kubernetes               IPAM controlled by Kubernetes
cluster.podCIDRList                Your Pod CIDR range
cluster.serviceCIDR                Your Service CIDR range
k8sServiceHost & k8sServicePort    Points Cilium to your cluster’s API server

After installation, Cilium will replace both Flannel (CNI) and kube-proxy at the dataplane level.
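The `k8sServiceHost` value above is derived by stripping the scheme and port from the API server URL in your kubeconfig. The sed expression can be sanity-checked in isolation; the URL below is a sample, not a real endpoint:

```shell
# extract_host: strip "https://" and the ":port" suffix from an API server URL
extract_host() {
  echo "$1" | sed 's#https://##;s#:.*##'
}

# Sample URL; on a live cluster the input comes from `kubectl config view`
extract_host "https://203.0.113.10:6443"   # -> 203.0.113.10
```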

Step 3 – Enable Hubble Relay, UI, and Metrics

Hubble provides powerful observability for network flows and policies in your cluster.

Upgrade the Helm release to enable Hubble features:

helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set bandwidthManager.enabled=true \
  --set hubble.enabled=true \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set hubble.ui.service.type=LoadBalancer \
  --set hubble.relay.service.type=ClusterIP \
  --set hubble.relay.resources.limits.cpu=200m \
  --set hubble.relay.resources.limits.memory=256Mi \
  --set hubble.relay.resources.requests.cpu=100m \
  --set hubble.relay.resources.requests.memory=128Mi \
  --set policyEnforcementMode=default

Step 4 – Verify Cilium Installation

Check Pod Status

kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system get pods -l k8s-app=hubble-relay
kubectl -n kube-system get pods -l k8s-app=hubble-ui

All pods should be in the Running state.
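This check can be scripted as well. The helper below reads `kubectl get pods --no-headers` output on stdin and counts pods whose STATUS column is not Running; the sample lines are illustrative:

```shell
# count_not_running: counts pods whose third column (STATUS) is not "Running"
count_not_running() {
  awk '$3 != "Running"' | wc -l | tr -d ' '
}

# On a live cluster:
#   kubectl -n kube-system get pods -l k8s-app=cilium --no-headers | count_not_running

# Illustrative sample with one pod still initializing:
printf 'cilium-abc 1/1 Running 0 5m\ncilium-def 0/1 Init:0/5 0 1m\n' | count_not_running   # -> 1
```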

Check Cilium Agent Health

kubectl -n kube-system exec -ti ds/cilium -- cilium status

Expected output (summary):

Kubernetes:              Ok         1.34 (v1.34.1) [linux/arm64]
Cilium:                  Ok   1.18.3 (v1.18.3-c1601689)
Cilium health daemon:    Ok
Proxy Status:            OK, ip 10.230.1.88, 0 redirects active on ports 10000-20000, Envoy: external
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 49.54   Metrics: Ok
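For scripted health checks, it can be handy to filter the status summary for anything not reported healthy. The helper below prints only the lines missing an Ok/OK marker, so empty output means all components are healthy; the sample input is illustrative:

```shell
# unhealthy_lines: reads `cilium status` output on stdin and prints any line
# that does not contain "Ok" or "OK"; no output means everything is healthy
unhealthy_lines() {
  grep -Ev 'Ok|OK' || true
}

# On a live cluster:
#   kubectl -n kube-system exec -ti ds/cilium -- cilium status | unhealthy_lines

# Illustrative sample with one degraded component:
printf 'Cilium:  Ok\nHubble:  Warning  relay unreachable\n' | unhealthy_lines
```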

Step 5 – Verify Kube-Proxy Replacement

Cilium replaces kube-proxy by handling Kubernetes Service routing via eBPF.
Verify this at both deployment and runtime levels.

Deployment Level (Helm Values)

helm get values cilium -n kube-system | grep kubeProxyReplacement

Expected output:

kubeProxyReplacement: true

Runtime Level (eBPF Map)

kubectl -n kube-system exec -ti ds/cilium -- cilium bpf lb list

If you see ClusterIP, NodePort, or LoadBalancer entries, Cilium’s eBPF load balancer is active.

Example output:

10.96.0.10:53/UDP -> 10.230.0.5:53/UDP
0.0.0.0:30001/TCP -> 10.230.0.15:8080/TCP
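The same output lends itself to a scripted check: counting the `->` mappings gives the number of backends Cilium has programmed into its eBPF maps, and a non-zero count means the eBPF load balancer is in use. The sample below feeds the example output above through the counter:

```shell
# count_lb_entries: counts backend mappings ("->") in `cilium bpf lb list` output
count_lb_entries() {
  grep -c -- '->'
}

# On a live cluster:
#   kubectl -n kube-system exec -ti ds/cilium -- cilium bpf lb list | count_lb_entries

# Illustrative sample (the example output above):
printf '10.96.0.10:53/UDP -> 10.230.0.5:53/UDP\n0.0.0.0:30001/TCP -> 10.230.0.15:8080/TCP\n' | count_lb_entries   # -> 2
```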

Step 6 – Verify Hubble Functionality

Check Hubble components:

kubectl -n kube-system get pods -l k8s-app=hubble-relay
kubectl -n kube-system get pods -l k8s-app=hubble-ui

Check the Hubble UI service:

kubectl get svc -n kube-system | grep hubble-ui

Then open:

http://<LoadBalancer>

You’ll see a real-time topology of all pod-to-pod communication in your cluster, which confirms that Hubble is observing the traffic flowing through Cilium’s eBPF datapath.

Step 7 – Allow Hubble to Capture All Flows

By default, Cilium aggregates network-flow events to reduce load.
To allow Hubble to record every individual flow, disable aggregation:

kubectl -n kube-system set env daemonset/cilium \
  CILIUM_MONITOR_AGGREGATION=none \
  CILIUM_MONITOR_AGGREGATION_INTERVAL=5s

This ensures fine-grained visibility for debugging, audit, and metrics.
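To confirm the setting landed on the DaemonSet, you can pull the value back out of the pod template. The helper below reads `kubectl get ds cilium -o yaml` output on stdin and prints the `CILIUM_MONITOR_AGGREGATION` value; the `$`-anchored pattern avoids also matching `CILIUM_MONITOR_AGGREGATION_INTERVAL`, and the sample input is illustrative:

```shell
# aggregation_value: reads DaemonSet YAML on stdin and prints the value of the
# CILIUM_MONITOR_AGGREGATION env var (anchored so the _INTERVAL var is skipped)
aggregation_value() {
  grep -A1 'name: CILIUM_MONITOR_AGGREGATION$' | awk '/value:/ { print $2 }'
}

# On a live cluster:
#   kubectl -n kube-system get ds cilium -o yaml | aggregation_value   # expect: none

# Illustrative sample:
printf '  - name: CILIUM_MONITOR_AGGREGATION\n    value: none\n' | aggregation_value   # -> none
```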

Step 8 – Validation Checklist

Check                       Command                                                   Expected
Flannel removed             kubectl get ds -n kube-system                             No flannel DaemonSet
Cilium running              kubectl -n kube-system get pods -l k8s-app=cilium         Running
Hubble running              kubectl -n kube-system get pods -l k8s-app=hubble-relay   Running
kube-proxy replaced         helm get values cilium -n kube-system                     kubeProxyReplacement: true
eBPF services active        cilium bpf lb list                                        ClusterIP/NodePort/LB entries
iptables clean              iptables -t nat -L KUBE-SERVICES                          “No chain”
Hubble captures all flows   env vars on the cilium DaemonSet                          CILIUM_MONITOR_AGGREGATION=none
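The checklist can be bundled into a single function. The sketch below assumes `kubectl` and `helm` are configured for the target cluster and that the labels match the install above; it only defines the function, so it is safe to paste into a shell and invoke when ready:

```shell
# run_checks: automated version of the validation checklist above.
# Define it in your shell, then invoke `run_checks` against the live cluster.
run_checks() {
  # 1. Flannel must be gone
  if kubectl get ds -n kube-system -o name | grep -qi flannel; then
    echo "FAIL: Flannel DaemonSet still present"; return 1
  fi
  # 2. All Cilium pods must be Running
  if ! kubectl -n kube-system get pods -l k8s-app=cilium --no-headers \
       | awk '$3 != "Running" { bad = 1 } END { exit bad }'; then
    echo "FAIL: some Cilium pods are not Running"; return 1
  fi
  # 3. kube-proxy replacement must be enabled in the Helm values
  if ! helm get values cilium -n kube-system | grep -q 'kubeProxyReplacement: true'; then
    echo "FAIL: kube-proxy replacement not enabled"; return 1
  fi
  echo "All checks passed"
}
```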
