Most developers I know only use redis as a queue for resque or sidekiq. However, you can do so much more with redis. It enables elegant solutions for tasks that would be a chore with other kinds of databases. I will give you three examples, but first I will show you how easy it is to work with redis directly.

Using the redis gem

Below is an example of using the redis gem (be sure to gem install redis first) to perform some set operations.

2.1.6 :001 > require 'redis'
 => true
2.1.6 :002 > r = Redis.new host: 'localhost', port: 6379, db: 0
 => #<Redis client v3.2.1 for redis://localhost:6379/0>
2.1.6 :003 > r.sadd 'workers', 'worker-3.1234'
 => true
2.1.6 :004 > r.sadd 'workers', 'worker-2.6543'
 => true
2.1.6 :005 > r.sismember 'workers', 'worker-3.1234'
 => true
2.1.6 :006 > r.srem 'workers', 'worker-3.1234'
 => true
2.1.6 :007 > r.sismember 'workers', 'worker-3.1234'
 => false
2.1.6 :008 > r.smembers 'workers'
 => ["worker-2.6543"]

As you can see, all you have to do is instantiate a redis object and follow a 1-to-1 mapping from redis commands to ruby methods. It's worth taking some time to explore the available redis commands because some of them are really useful and interesting. Once you realize that redis is a data structure server, you will find more interesting use cases for it.
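To make the "data structure server" idea concrete, here is a small sketch of a leaderboard built on a sorted set. The helper names and the 'highscores' key are my own examples, and `client` stands for any connected client from the redis gem (e.g. Redis.new).

```ruby
# Sorted sets keep members ordered by score, which makes a leaderboard
# a two-command affair. Helper names and the 'highscores' key are
# illustrative; `client` is any connected redis gem client.
def record_score(client, player, score)
  # zadd inserts the member or updates its score
  client.zadd('highscores', score, player)
end

def top_scores(client, limit)
  # zrevrange walks the set from the highest score down
  client.zrevrange('highscores', 0, limit - 1, with_scores: true)
end
```

With a real client, `top_scores(Redis.new, 3)` returns pairs of member and score, highest first, without you ever writing a sort.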

Throttle alert emails

I've maintained apps that need to alert a customer of a required action. We ran a job that periodically queried for unacknowledged alerts and emailed the customer. However, saving and checking timestamps for each potential email delivery in SQL required a lot of extra code, and it would deadlock sometimes because the workload was both read and write heavy. When I rewrote the email throttling code with redis, these problems went away.

def send_email(message)
  # e.g. one throttle key per recipient/subject pair
  key = "#{message.recipient} #{message.subject}"
  frequency = 60 * 60 # 1 hour

  throttle_operation(key, frequency) do
    # actually deliver the message here, e.g. with your mailer
    Mailer.deliver(message)
  end
end

def throttle_operation(key, frequency_in_seconds)
  # only perform the operation when the throttle key does not exist
  unless redis.exists key
    yield if block_given?

    # perform the next two redis commands in a transaction
    redis.multi do
      redis.set key, 1
      redis.expire key, frequency_in_seconds
    end
  end
end
In this example, I create a #throttle_operation method to ensure I never deliver an email more than once an hour. The method relies on the expire command, which will delete a key after a given interval. The message only gets sent if the key does not exist, and the key gets reset after sending the message. Notice that I use the multi command to wrap multiple commands in a transaction, so a failure scenario won't result in a key that never expires!
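If your redis server is 2.6.12 or newer, the exists/multi/expire dance can be collapsed into a single atomic SET with the NX and EX options. This variant is my own sketch rather than the code above, and it makes a slightly different trade-off: it claims the throttle key before yielding, so a delivery that raises still uses up the hour.

```ruby
# Sketch of the same throttle using SET's NX ("only if absent") and
# EX ("expire after n seconds") options in one atomic command.
# `redis` is assumed to be a connected client from the redis gem.
def throttle_operation(redis, key, frequency_in_seconds)
  # set returns a truthy value only when we actually created the key
  if redis.set(key, 1, nx: true, ex: frequency_in_seconds)
    yield if block_given?
  end
end
```

Because one command does everything, there is no window where the key exists without an expiry, so the multi transaction becomes unnecessary.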

Aggregate and flush metrics

When you record a lot of metrics, making an API call for every single one is neither performant nor cost-efficient. Instead, you can temporarily aggregate the metrics in redis.

# adds one to the current value, or sets to 1 if there is no current value
redis.incr 'metrics.user.logins'

# the same, except you can add/set larger values
redis.incrby 'metrics.photo.uploads', uploaded_photos.size

Later, in a background process, you can read and reset the current metric values and flush them to your metrics service. I like librato, so I'll use the librato-metrics gem in this example.

# In a cron that runs once a minute
require 'redis'
require 'librato/metrics'

# first, set up your connections
redis = Redis.new
Librato::Metrics.authenticate ENV['LIBRATO_EMAIL'], ENV['LIBRATO_API_KEY']

# get the current values and reset to zero atomically
user_logins, photo_uploads = redis.multi do
  redis.getset('metrics.user.logins', 0)
  redis.getset('metrics.photo.uploads', 0)
end

# submit to your favorite metrics service
Librato::Metrics.submit(
  "user.logins" => user_logins.to_i,
  "photo.uploads" => photo_uploads.to_i
)

It's worth noting that you could potentially lose metrics if the metric submission fails because you have already read the current values and reset them to zero. In that case, you may want to log an error with the current metric values and retry metric submissions later (or just log the error and don't retry if they're not that critical).
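The requeue-on-failure idea can be sketched like this. The `flush_metrics` name, the key list, and the yielded submit step are my own; `redis` again stands for a connected client from the redis gem.

```ruby
# Read-and-reset the counters, then hand them to a block that submits
# them (e.g. Librato::Metrics.submit counts). If submission raises, add
# the counts back so the next run picks them up.
def flush_metrics(redis, keys)
  values = redis.multi do
    keys.each { |k| redis.getset(k, 0) }
  end
  counts = keys.zip(values.map(&:to_i)).to_h

  begin
    yield counts
  rescue StandardError => e
    # requeue: add each count back onto its (possibly non-zero) key
    counts.each { |key, value| redis.incrby(key, value) }
    warn "metric submission failed, requeued counts: #{e.message}"
  end
end
```

Using incrby rather than set for the requeue matters: new increments may have landed on the key since the reset, and incrby preserves them.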

Lock operations

I've seen some really unwieldy solutions built on SQL transactions to prevent a background job from running concurrently with itself. Here is how I use redis to create a lock around the operation: if the operation is already running, the new invocation simply skips it. As in the last example, we're using the getset command to read the current value and write a new one in a single step.

class LongRunningUnreliableOperation
  include Sidekiq::Worker

  def perform
    # the lock key can be any name unique to this job
    lock_operation(self.class.name) do
      # the actual long-running work goes here
    end
  end

  def lock_operation(key)
    Sidekiq.redis do |conn|
      # getset returns nil when the key was not set, so we hold the lock
      unless conn.getset(key, 'yes')
        begin
          yield if block_given?
        ensure
          # release the lock even if the operation raises
          conn.del key
        end
      end
    end
  end
end

I created the #lock_operation method, which accepts a block that performs the actual work. This makes the code a lot more readable, and I could move the method into its own module and include it in other workers if I needed to reuse it. Also, if you use sidekiq for processing background jobs, this is how you safely borrow a redis connection from sidekiq's connection pool.
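One caveat with a plain getset lock: the key has no expiry, so a worker that dies before deleting it leaves the lock stuck forever. On redis 2.6.12 or newer, SET with the NX and EX options gives you a lock that frees itself after a timeout, even after a hard crash. The helper below is my own sketch; `conn` stands for the pooled connection, and the ttl should comfortably exceed the worst-case runtime.

```ruby
# Lock sketch: nx grabs the key only when nobody holds it; ex makes the
# lock evaporate after ttl seconds even if the process dies mid-run.
def lock_operation(conn, key, ttl_in_seconds)
  if conn.set(key, 'yes', nx: true, ex: ttl_in_seconds)
    begin
      yield if block_given?
    ensure
      # normal release; the ttl is only the crash backstop
      conn.del key
    end
  end
end
```

Pick the ttl carefully: if the job can outlive it, a second invocation may acquire the lock while the first is still running.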


I've shown you three different ways to use redis for simple tasks that are not so simple with other kinds of databases. Hopefully, this inspires you to see redis as more than just a backend for resque/sidekiq. I would love to hear about other uses you find for redis!