Hey Ruby friends! This year I’m super excited to offer a PostgreSQL Performance Workshop for Rubyists. I already ran the first one at the Ruby Community Conference, and it was a blast!
I wanted to share some key learnings we discovered during the workshop and reflect on how our technology choices impact performance. I’m particularly interested in how we can mix technologies to keep all the conveniences Rails offers while still achieving fast execution and low memory footprint for those high-throughput bottlenecks we all struggle with.
With Ruby 3.3’s impressive performance improvements and Rails 8’s focus on speed, I’ve been watching the ORM performance landscape evolve rapidly. The benchmarks I’ve been running show up to 70% faster Ruby code execution with YJIT in Ruby 3.3, but I kept wondering: how does this translate to real-world database operations? As our applications scale and data grows, choosing the right ORM strategy becomes more critical than ever.
In this post, I’ll dive into the latest performance differences I found between ActiveRecord and Sequel when working with TimescaleDB, a specialized PostgreSQL extension for time-series data. My benchmarks were conducted using Ruby 3.3.6 and the latest versions of both ORMs running on an Apple M4 chip, and I’ve made special efforts to ensure the comparisons are fair and accurate.
The ORM Performance Benchmark Challenge
My no-brainer choice is always ActiveRecord, but for the performance workshop I also started collecting input from the community and decided to build a comparison between different ORMs. Each ORM offers unique features, syntax, and performance characteristics. The benchmarks I’m sharing aim to provide insight into these performance differences to help you make informed decisions for your projects.
UPDATE (March 21, 2025): After receiving valuable feedback from the community (special thanks to Maurício Szabo!), I’ve updated these benchmarks to ensure the queries being compared are equivalent. This post now contains the updated results from these improved benchmarks.
It’s important to note that these benchmarks focus purely on execution time and memory usage - they don’t account for developer productivity, which is a crucial factor when choosing an ORM. While Sequel may outperform ActiveRecord in raw speed for certain operations, ActiveRecord’s integration with Rails and familiar syntax can significantly reduce development time. The true cost of an ORM includes both execution performance and the time developers spend writing and maintaining code.
Updated Benchmark Results (March 2025)
After ensuring that the SQL queries generated by both ORMs are equivalent, here are the updated benchmark results. All tests were performed on a MacBook Pro with an Apple M4 chip, running Ruby 3.3.6 with TimescaleDB 2.18 on PostgreSQL 17.2 through the timescaledb-ha docker image.
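For context, both ORMs talk to the same database in every benchmark below. Here’s a minimal sketch of the connection setup (the URL and credentials are assumptions - point them at your own TimescaleDB container):

require "active_record"
require "sequel"

# Both ORMs share one PostgreSQL/TimescaleDB instance so they compete
# on equal footing. The URL below is an assumption - adjust as needed.
DATABASE_URL = ENV.fetch("DATABASE_URL", "postgres://postgres:password@localhost:5432/benchmark")

ActiveRecord::Base.establish_connection(DATABASE_URL)
DB = Sequel.connect(DATABASE_URL)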
Here’s the full output of the benchmark; you can grab the full 03_orm_comparison.rb script and run it yourself:
ruby 03_orm_comparison.rb
Fetching gem metadata from https://rubygems.org/.......
Resolving dependencies...
[✔] Generating sample data ... Done!
Generated:
┌──────────┬────────┐
│ Users │ 1,000 │
│ Posts │ 10,000 │
│ Comments │ 50,000 │
└──────────┴────────┘
=== ORM Comparison Examples (Significant Differences >10%) ===
Dataset: 1000 users, 10000 posts, 50000 comments
1. Simple Query Performance (1000 users, limit: 100)
Query: WHERE created_at < current_time LIMIT 100
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
ActiveRecord - simple query
34.000 i/100ms
Sequel - simple query
275.000 i/100ms
Calculating -------------------------------------
ActiveRecord - simple query
372.005 (±23.7%) i/s (2.69 ms/i) - 1.802k in 5.038347s
Sequel - simple query
2.777k (± 4.6%) i/s (360.12 μs/i) - 14.025k in 5.063636s
Comparison:
Sequel - simple query: 2776.9 i/s
ActiveRecord - simple query: 372.0 i/s - 7.46x slower
2. Aggregation Performance (10000 posts)
Query: GROUP BY user_id HAVING COUNT(*) > 5
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
ActiveRecord - aggregation with HAVING
15.000 i/100ms
Sequel - aggregation with HAVING
22.000 i/100ms
Calculating -------------------------------------
ActiveRecord - aggregation with HAVING
152.437 (± 7.9%) i/s (6.56 ms/i) - 765.000 in 5.046387s
Sequel - aggregation with HAVING
208.504 (± 4.8%) i/s (4.80 ms/i) - 1.056k in 5.075211s
Comparison:
Sequel - aggregation with HAVING: 208.5 i/s
ActiveRecord - aggregation with HAVING: 152.4 i/s - 1.37x slower
3. Query Building Approaches (1000 users, limit: 100)
Query: Complex conditions with LIKE pattern '%test%', date range, order by created_at desc
ruby 3.3.6 (2024-11-05 revision 75015d4c1f) [arm64-darwin24]
Warming up --------------------------------------
ActiveRecord - method chain
234.000 i/100ms
Sequel - method chain
252.000 i/100ms
Calculating -------------------------------------
ActiveRecord - method chain
2.221k (± 6.0%) i/s (450.22 μs/i) - 11.232k in 5.075242s
Sequel - method chain
2.449k (± 2.8%) i/s (408.39 μs/i) - 12.348k in 5.046805s
Comparison:
Sequel - method chain: 2448.7 i/s
ActiveRecord - method chain: 2221.2 i/s - 1.10x slower
Ok, now let’s go case by case and check what’s going on.
Simple Queries Performance
Benchmark.ips do |x|
  # Using proper warmup and measurement times for stability
  x.config(time: 5, warmup: 2)

  x.report("ActiveRecord - simple query") do
    User.where("created_at < ?", Time.current).limit(100).to_a
  end

  x.report("Sequel - simple query") do
    DB[:users].where(Sequel.lit("created_at < ?", Time.current)).limit(100).all
  end

  x.compare!
end
Results:
Implementation | Performance (i/s) | Comparison |
---|---|---|
Sequel | 2,774.5 i/s | baseline |
ActiveRecord | 417.0 i/s | 6.65x slower |
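Much of that 6-7x gap comes from ActiveRecord instantiating a full model object for every row. When all I need is raw values, ActiveRecord itself can skip that work - a sketch (I haven’t benchmarked these variants above, so treat the savings as an expectation, not a measurement):

# Returns an array of [id, name] pairs - no User objects are allocated
User.where("created_at < ?", Time.current).limit(100).pluck(:id, :name)

# Or fetch raw hashes through the connection adapter
sql = User.where("created_at < ?", Time.current).limit(100).to_sql
ActiveRecord::Base.connection.select_all(sql).to_a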
Complex Join Performance
Benchmark.ips do |x|
  x.config(time: 5, warmup: 2)

  x.report("ActiveRecord - complex join") do
    # Both queries select the same columns and use the same GROUP BY
    User.joins(posts: :comments)
        .select('users.id, users.name, COUNT(DISTINCT posts.id) as posts_count, COUNT(comments.id) as comments_count')
        .group('users.id, users.name')
        .to_a
  end

  x.report("Sequel - complex join") do
    DB[:users]
      .join(:posts, user_id: :id)
      .join(:comments, post_id: Sequel[:posts][:id])
      .select(
        Sequel[:users][:id],
        Sequel[:users][:name],
        Sequel.lit('COUNT(DISTINCT posts.id) as posts_count'),
        Sequel.function(:count, Sequel[:comments][:id]).as(:comments_count)
      )
      .group(Sequel[:users][:id], Sequel[:users][:name])
      .all
  end

  x.compare!
end
Results:
Implementation | Performance (i/s) | Comparison |
---|---|---|
Sequel | 62.9 i/s | baseline |
ActiveRecord | 60.3 i/s | 1.04x slower |
Aggregation Performance
Benchmark.ips do |x|
  x.config(time: 5, warmup: 2)

  x.report("ActiveRecord - aggregation with HAVING") do
    Post.group(:user_id)
        .select('user_id, COUNT(*) as posts_count, AVG(LENGTH(content)) as avg_content_length')
        .having('COUNT(*) > 5')
        .to_a
  end

  x.report("Sequel - aggregation with HAVING") do
    DB[:posts]
      .select(:user_id)
      .select_append { [
        count(id).as(:posts_count),
        avg(length(content)).as(:avg_content_length)
      ] }
      .group(:user_id)
      .having { count(id) > 5 }
      .all
  end

  x.compare!
end
Results:
Implementation | Performance (i/s) | Comparison |
---|---|---|
Sequel | 211.7 i/s | baseline |
ActiveRecord | 154.0 i/s | 1.38x slower |
Bulk Update Performance
Benchmark.ips do |x|
  x.config(time: 5, warmup: 2)

  x.report("ActiveRecord - bulk update") do
    # Update all users who were created before now
    User.where("created_at < ?", Time.current)
        .limit(100)
        .update_all(updated_at: Time.current)
  end

  x.report("Sequel - bulk update") do
    # Sequel doesn't support updates with limit, so we use a different approach:
    # first get the ids of 100 users...
    ids = DB[:users]
      .where(Sequel.lit("created_at < ?", Time.current))
      .limit(100)
      .select(:id)
      .map(:id)

    # ...then update those specific users
    DB[:users]
      .where(id: ids)
      .update(updated_at: Time.current)
  end

  x.compare!
end
Results from previous run:
Implementation | Performance (i/s) | Comparison |
---|---|---|
ActiveRecord | 1,643.7 i/s | baseline |
Sequel | 1,474.6 i/s | 1.11x slower |
Memory Usage Patterns (MB)
Operation Type | ActiveRecord | Sequel | Key Insights |
---|---|---|---|
Large Result Set | 125 | 45 | ActiveRecord objects consume ~2.8x more memory than Sequel |
Batch Processing | 60 | 35 | Using find_each with ActiveRecord helps control memory usage |
JSON Processing | 80 | 50 | JSONB is more memory-efficient than standard JSON |
Aggregations | 40 | 35 | Memory patterns are similar for aggregation operations |
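The batch-processing row assumes you actually batch, of course. ActiveRecord’s tool for that is find_each; Sequel’s counterpart is paged_each, which needs an ordered dataset to page through. A quick sketch of both (process is a placeholder for your own logic):

# ActiveRecord: loads users in batches of 1,000 by default
User.find_each do |user|
  process(user)
end

# Sequel: pages through the ordered dataset instead of loading all rows
DB[:users].order(:id).paged_each do |row|
  process(row)
end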
Key Findings from the Updated Benchmarks
Before diving into the detailed results, I want to share some revised insights from my benchmarking study. These findings represent patterns I observed across multiple test scenarios with properly equivalent queries and adequate warmup periods.
I had the incredible privilege of meeting Jeremy Evans (that’s us in the photo above), the creator of Sequel and author of “Polished Ruby Programming,” at RubyConf Thailand 2023. His insights have been invaluable in helping me understand the design decisions that make Sequel so performant in many scenarios.
Jeremy recently shared some valuable insights on Twitter about why Sequel performs so well:
Sequel does not use async queries, it has a lot of optimizations and a more efficient design. Some of the Sequel examples shown in the blog post could be tuned even further (where/cond.all -> where_all/cond, storing static intermediate datasets instead of rebuilding them)
This confirms what my benchmarks revealed - Sequel’s design emphasizes efficiency and optimization at every level, from query building to execution.
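Here’s my reading of those two tips as runnable code (a sketch, not Jeremy’s exact snippets):

# 1. where_all builds and runs the query in one call, skipping the
#    intermediate dataset object that where(...).all would allocate:
DB[:users].where_all(Sequel.lit("created_at < ?", Time.current))

# 2. Build a static dataset once and reuse it, instead of rebuilding it
#    on every call. Using CURRENT_TIMESTAMP keeps the SQL truly static:
RECENT_USERS = DB[:users].where { created_at < Sequel::CURRENT_TIMESTAMP }.limit(100)
RECENT_USERS.all # Sequel 5 datasets are frozen and cache their SQL

Note the deliberate switch from Time.current to CURRENT_TIMESTAMP: a Ruby timestamp baked into a stored dataset would be frozen at definition time, while the SQL function is evaluated by PostgreSQL on every execution.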
My updated benchmarks revealed several consistent patterns:
- Sequel still excels at raw speed for simple queries - For basic CRUD operations with properly warmed-up benchmarks, Sequel consistently outperforms ActiveRecord by 6-7x for simple selects, though this is less dramatic than my initial tests showed.
- With equivalent complex queries, the performance gap narrows dramatically - When comparing truly equivalent complex joins and aggregations, the performance difference is much smaller than I initially reported (1.04-1.38x).
- ActiveRecord leads in bulk updates - ActiveRecord appears to have better optimizations for bulk update operations, showing about an 11% performance advantage over Sequel in this specific use case.
- Proper benchmark methodology is crucial - Using adequate warmup periods (2+ seconds) and multiple runs produces much more stable and reliable results than quick benchmarks.
- Memory usage remains a significant differentiator - In high-throughput applications, memory consumption frequently becomes the bottleneck before query speed, making lightweight ORMs particularly valuable.
Let’s examine these findings in more detail with some additional benchmark results:
Batch Processing Comparison
# Benchmark setup
users = User.limit(30000)

Benchmark.measure do
  # Processing all records at once
  users.each { |user| user.touch }
end

Benchmark.measure do
  # Using find_each with default batch size
  User.limit(30000).find_each do |user|
    user.touch
  end
end

Benchmark.measure do
  # Using update_all
  User.limit(30000).update_all(updated_at: Time.current)
end
Results:

Operation | Time | Queries | Rate | Relative Speed |
---|---|---|---|---|
Processing all records at once | 10.342 s | 30,001 | 2.9k/s | baseline |
Using find_each with default batch size | 9.657 s | 30,031 | 3.1k/s | 1.1x faster |
Using update_all | 0.282 s | 1 | 106.4k/s | 36.6x faster |
The results are clear: bulk operations like update_all are dramatically faster (36.6x) but bypass ActiveRecord callbacks and validations. This is a common trade-off between performance and application logic: sometimes I choose performance, sometimes I need those callbacks!
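When all I need is the timestamp bump, there’s a handy middle step (Rails 6+) - though note it still skips callbacks, just like update_all:

# touch_all issues a single UPDATE for the whole relation, without
# instantiating records. Like update_all, it bypasses callbacks.
User.where("created_at < ?", Time.current).touch_all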
Multiple Runs for Stability
3.times do |i|
  puts "\nRun #{i+1}/3:"

  Benchmark.ips do |x|
    x.config(time: 5, warmup: 2)

    x.report("ActiveRecord - simple query") do
      User.where("created_at < ?", Time.current).limit(100).to_a
    end

    x.report("Sequel - simple query") do
      DB[:users].where(Sequel.lit("created_at < ?", Time.current)).limit(100).all
    end

    x.compare!
  end
end
Results:

Run | ActiveRecord | Sequel | Difference |
---|---|---|---|
1 | 414.1 i/s | 2,823.8 i/s | 6.82x faster |
2 | 405.2 i/s | 2,739.4 i/s | 6.76x faster |
3 | 431.8 i/s | 2,760.3 i/s | 6.39x faster |
This demonstrates that with proper warmup periods, benchmark results become more stable across runs, providing more reliable data for decision-making.
Memory Optimization with OccamsRecord
One of the most impressive optimizations I’ve found for memory usage is OccamsRecord:
# Standard ActiveRecord vs. OccamsRecord
Benchmark.memory do |x|
  x.report("ActiveRecord with associations") do
    users_data = User.includes(:posts).map do |user|
      {
        name: user.name,
        email: user.email,
        post_count: user.posts.size
      }
    end
  end

  x.report("OccamsRecord") do
    users_data = OccamsRecord
      .query(User.all)
      .eager_load(:posts)
      .run
      .map { |user| {
        name: user.name,
        email: user.email,
        post_count: user.posts.size
      }}
  end

  x.compare!
end
Results:

Implementation | Memory Allocated | Memory Retained |
---|---|---|
ActiveRecord with associations | 27.30 MB | 68.34 KB |
OccamsRecord | 16.03 MB | 16.30 KB |
This is a dramatic improvement in memory usage - OccamsRecord uses 41% less allocated memory and 76% less retained memory compared to standard ActiveRecord!
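And if you don’t need objects at all, plain pluck goes even further, returning raw arrays straight from the database. A sketch of the same report (left_joins so users without posts still show up; Arel.sql marks the raw aggregate as intentional):

# No User or Post objects are allocated here - just arrays of values
user_rows = User.left_joins(:posts)
                .group('users.id, users.name, users.email')
                .pluck('users.name', 'users.email', Arel.sql('COUNT(posts.id)'))
# => [["Alice", "alice@example.com", 12], ...]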
Best Practices for ORM Performance
Based on my updated benchmarks, I’ve revised some best practices for optimizing ORM performance:
1. Ensure Benchmark Accuracy
# Poor benchmark methodology
Benchmark.ips do |x|
  x.config(time: 1, warmup: 0.5)
  # This will produce unstable results...
end

# Better benchmark methodology
Benchmark.ips do |x|
  x.config(time: 5, warmup: 2)
  # This provides more stable and realistic results
end
2. Compare Equivalent Queries
# Inequivalent queries lead to misleading comparisons
ActiveRecord: User.select('*').joins(:posts).group('users.id')
Sequel: DB[:users].join(:posts, user_id: :id).select(:name).group(:id, :name)
# Equivalent queries for fair comparison
ActiveRecord: User.select('users.id, users.name').joins(:posts).group('users.id, users.name')
Sequel: DB[:users].join(:posts, user_id: :id).select(Sequel[:users][:id], Sequel[:users][:name]).group(Sequel[:users][:id], Sequel[:users][:name])
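The simplest way I’ve found to verify equivalence is to print the SQL both sides generate and diff it:

# ActiveRecord relations expose #to_sql, Sequel datasets expose #sql
puts User.select('users.id, users.name')
         .joins(:posts)
         .group('users.id, users.name')
         .to_sql

puts DB[:users]
       .join(:posts, user_id: :id)
       .select(Sequel[:users][:id], Sequel[:users][:name])
       .group(Sequel[:users][:id], Sequel[:users][:name])
       .sql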
3. Use Index-Friendly Query Patterns
# Poor index usage (can't use index on name column)
User.where("name LIKE ?", "%smith%")
# Better index usage (can use index on name column)
User.where("name LIKE ?", "smith%")
4. Batch Processing
# Inefficient
User.all.each { |user| user.update(status: 'active') }
# Efficient (36x faster)
User.update_all(status: 'active')
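For very large tables I combine both ideas - batch the relation, then issue one bulk UPDATE per batch, which keeps each transaction and its locks short (assuming a status column, as above):

# One UPDATE per 1,000 rows: bulk speed without one giant transaction
User.in_batches(of: 1_000) do |batch|
  batch.update_all(status: 'active')
end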
Join Me for PostgreSQL Performance Optimization at Tropical on Rails 2025!
Want to dive deeper into these concepts and learn how to optimize your Ruby applications with PostgreSQL? Come hang out with me at my upcoming PostgreSQL Performance for Ruby Developers workshop at Tropical on Rails 2025!
The workshop is based on our PostgreSQL Performance course, which has helped hundreds of developers optimize their database interactions and improve application performance. I’ve refined it based on feedback from previous sessions.
Workshop Details:
- When: April 2nd, 2025 - 2 PM to 6 PM
- Language: English
- Location: São Paulo, Brazil
My Updated Conclusions
After addressing the feedback from the community and running more accurate benchmarks, my conclusions have evolved. The choice between ActiveRecord and Sequel isn’t as clear-cut as my initial benchmarks suggested. While Sequel still generally offers better raw performance for simple queries, the gap is much smaller for complex operations when comparing equivalent queries.
Here’s what I consider when choosing an ORM:
- I use Sequel for performance-critical, simple data operations where raw speed matters
- I stick with ActiveRecord for standard CRUD and Rails integration
- Sometimes I use both in the same application where appropriate (yes, this works!)
- I always profile my specific use case with equivalent queries before committing to an ORM
- I use bulk operations whenever possible for better performance
Remember that the best ORM for your application depends on your specific requirements and constraints. My updated benchmarks provide a more realistic starting point for your decision-making process, but your mileage may vary based on your specific architecture and usage patterns.
What’s your experience with ORM performance? I’d love to hear your thoughts in the comments. And if you’re interested in diving deeper, join me at Tropical on Rails 2025 to continue the conversation!
Resources
- TimescaleDB Ruby Gem
- PostgreSQL Performance Workshop for Rubyists
- Sequel Documentation
- ActiveRecord Query Interface
- OccamsRecord
- Benchmark-ips Gem
- Ruby Performance Optimization Book
- Polished Ruby Programming by Jeremy Evans
Take Your PostgreSQL Performance to the Next Level!
Looking for even more dramatic performance improvements in your Ruby + PostgreSQL applications? Check out the timescaledb gem that I maintain! This powerful extension enables time-series data optimization, hypertables, and advanced query capabilities that can deliver 10-100x performance gains for time-series workloads.
As a maintainer, I’ve seen teams transform their application performance with minimal code changes. Drop me a message if you have questions or need implementation advice - I love helping people optimize their database performance!