OCI · 2024-12-20 · 6 min read

Going Serverless on OCI: When Functions Save Money (And When They Don't)

OCI Functions can dramatically reduce costs for event-driven workloads — but they're not always the cheapest option. Here's how to decide.

OCIFinOps Team

Serverless computing promises you only pay for what you use. On OCI, Functions (based on the open-source Fn Project) provides this model. But "pay for what you use" doesn't automatically mean "pay less."

How OCI Functions Pricing Works

OCI Functions charges based on two factors:

Invocations: First 2 million per month are free, then $0.0000002 per invocation

Execution time: Measured in GB-seconds (memory allocated × duration). First 400,000 GB-seconds/month are free.

The free tier is generous — many low-to-medium traffic workloads fit entirely within it.
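The two-factor model above can be sketched as a small estimator. The free-tier thresholds and per-invocation rate come from this post; the GB-second overage rate is a placeholder assumption, so check the current OCI price list for your region before relying on the output.

```python
FREE_INVOCATIONS = 2_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_INVOCATION = 0.0000002   # after the free tier, per the post
GB_SECOND_RATE = 0.00001417        # assumed overage rate -- verify on the OCI price list

def monthly_cost(invocations: int, memory_gb: float, avg_seconds: float) -> float:
    """Estimate one function's monthly bill under the two-factor model."""
    gb_seconds = invocations * memory_gb * avg_seconds
    invocation_cost = max(0, invocations - FREE_INVOCATIONS) * PRICE_PER_INVOCATION
    duration_cost = max(0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_RATE
    return invocation_cost + duration_cost

# 1M invocations/month at 256 MB and 500 ms each: inside both free tiers.
print(monthly_cost(1_000_000, 0.25, 0.5))  # 0.0
```

Note that both meters have to be exceeded separately: a chatty function with tiny payloads can blow past the invocation tier while staying well inside the GB-second tier, and vice versa.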

When Functions Save Money

Event-Driven Workloads

If your code runs in response to events (file uploads, queue messages, API calls) and is idle most of the time, Functions is ideal. You pay nothing during idle periods.

Example: Processing uploaded cost reports. If reports arrive 24 times per day and each takes 10 seconds to process, your monthly cost is essentially zero (well within the free tier).
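Plugging in the numbers from this example (the 256 MB memory size and 30-day month are assumptions, since the post doesn't specify them):

```python
# 24 reports/day, 10 s each, over a 30-day month, at an assumed 256 MB (0.25 GB)
invocations = 24 * 30                 # 720 invocations/month
gb_seconds = invocations * 10 * 0.25  # 1,800 GB-seconds/month

# Both figures sit far below the free tier
# (2,000,000 invocations and 400,000 GB-seconds), so the bill is $0.
print(invocations, gb_seconds)  # 720 1800.0
```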

Variable Traffic

Workloads with unpredictable traffic patterns benefit from Functions' automatic scaling. Instead of provisioning for peak traffic and paying for idle capacity, Functions scales to zero between requests.

Scheduled Tasks

Cron-style jobs that run briefly and infrequently (daily reports, cleanup tasks, health checks) are perfect for Functions. Running a VM 24/7 for a task that executes for 30 seconds per day is wasteful.
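The waste is easy to quantify. A rough sketch (the 512 MB size and 30-day month are assumptions for illustration):

```python
# A daily cleanup job: one 30-second run per day at an assumed 512 MB (0.5 GB).
runs_per_month = 30
gb_seconds = runs_per_month * 30 * 0.5  # 450 GB-seconds/month -- effectively free

# A VM kept up 24/7 for the same job bills for every idle second too:
idle_seconds = 30 * 86_400 - runs_per_month * 30  # ~2.59 million billed-but-idle seconds
print(gb_seconds, idle_seconds)
```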

When Functions Don't Save Money

Sustained Workloads

If your code runs continuously (an always-on API server), a dedicated compute instance is almost always cheaper. The per-GB-second pricing adds up quickly at high utilization.

Break-even analysis: A function with 256MB memory running continuously costs approximately $11.52/day. A VM.Standard.E4.Flex with 1 OCPU (which includes far more memory) costs about $1.54/day. For always-on workloads, compute wins by ~7.5x.
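A sanity check worth running yourself, since rates vary by region and change over time. The helper below parameterizes the GB-second rate rather than hard-coding a price; the rate passed in is a placeholder, and the final ratio simply reproduces the arithmetic from the figures quoted above.

```python
def function_cost_per_day(memory_gb: float, gb_second_rate: float) -> float:
    """Daily duration cost of a function that is busy 24/7 (free tier ignored)."""
    return memory_gb * 86_400 * gb_second_rate

def cost_ratio(fn_daily: float, vm_daily: float) -> float:
    """How many times more the serverless option costs than the VM."""
    return fn_daily / vm_daily

# Example with a placeholder rate -- substitute the current OCI rate:
daily = function_cost_per_day(0.25, 0.00001)

# Ratio from the figures quoted in this post: $11.52/day vs. $1.54/day.
print(round(cost_ratio(11.52, 1.54), 1))  # 7.5
```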

Memory-Intensive Processing

Functions are billed by memory × time. If your workload needs 2+ GB of memory and runs for minutes at a time, the GB-second costs can exceed a dedicated instance.

Cold Start Sensitivity

Functions that haven't been invoked recently experience "cold starts" — initialization delays of 1-5 seconds. For latency-sensitive APIs, this is often unacceptable. You'd need provisioned concurrency, which erodes much of the cost advantage.

Hybrid Approach

The most cost-effective architecture often combines both:

Always-on API tier: Compute instances with auto-scaling for the main application

Event processing: Functions for asynchronous tasks, file processing, notifications

Scheduled jobs: Functions for periodic maintenance and reporting

Monitoring Function Costs

In OCIFinOps, track your Functions costs alongside compute costs. Compare the total cost of your serverless components against what they would cost as dedicated instances. This data-driven approach ensures you're using the right tool for each workload.

The serverless decision shouldn't be religious — it should be mathematical. Calculate the costs both ways, factor in operational benefits, and choose what makes sense for each workload.

Ready to optimize your OCI costs?

Start with a free demo and see how OCIFinOps can help.