Ridesharing Algorithms in TransLoc OnDemand

Michael Fogleman · TransLoc TechLog · Mar 23, 2016

In August 2015, TransLoc launched a new product to serve the demand-responsive transit market. TransLoc OnDemand lets passengers request a ride with their mobile device and automatically dispatches a transit vehicle within seconds. The driver has an iPad that tells them where to go and who they’ll be picking up or dropping off. Rides are shared when possible.

Currently, six universities are using the product to automate their Safe Ride services. Students can get a ride anywhere on campus within minutes. Manual dispatching is now a thing of the past, and there’s no more awkward communication over walkie talkies to figure out which vehicles should serve which rides. Looking forward, TransLoc will soon deploy the service in larger municipal areas.

The algorithms that power TransLoc OnDemand are quite sophisticated and interesting, so we thought the world might want to learn more.

The Problem

At the core of TransLoc OnDemand is the scheduling algorithm. The inputs to the algorithm are rides and vehicles; a rough sketch of how they might be modeled in code follows the two lists below.

Each vehicle has the following attributes:

  • Current Location
  • Current Passenger Load
  • Total Passenger Capacity

Each ride has the following attributes:

  • Origin & Destination
  • Number of Passengers
  • Desired Pickup or Dropoff Time (if scheduled in advance)
  • Candidate Vehicles (determined by other parts of our system)
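To make the inputs concrete, here's roughly how they might be modeled in code. The field names are invented for this sketch rather than copied from our actual system:

```go
package schedule

import "time"

// Point is a latitude/longitude pair.
type Point struct {
	Lat, Lng float64
}

// Vehicle and Ride mirror the attributes listed above.
type Vehicle struct {
	Location Point // current location
	Load     int   // current passenger load
	Capacity int   // total passenger capacity
}

type Ride struct {
	Origin, Destination Point
	Passengers          int
	DesiredPickup       *time.Time // only set when scheduled in advance
	DesiredDropoff      *time.Time // only set when scheduled in advance
	Candidates          []int      // indexes of candidate vehicles
}
```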

The problem to solve is two-fold:

Which vehicles should serve which rides, and in what order should each vehicle make its stops?

Determining an optimal schedule for the entire system is our goal. But this is, perhaps unintuitively, extremely difficult. Consider the case of two rides and one vehicle. How many different ways can we serve the rides?

  • P1, P2, D1, D2
  • P1, P2, D2, D1
  • P2, P1, D1, D2
  • P2, P1, D2, D1
  • P1, D1, P2, D2
  • P2, D2, P1, D1

Note that each ride has two stops: P denotes a pickup and D denotes a dropoff. Of course, a ride’s pickup must come before its dropoff. There are six different ways for one vehicle to service two rides. That doesn’t seem so bad, but look at how the number of permutations grows as the number of rides grows:

  • 1 ride: 1
  • 2 rides: 6
  • 3 rides: 90
  • 4 rides: 2520
  • 5 rides: 113400
  • 6 rides: 7484400
  • 7 rides: 681080400
  • 8 rides: 81729648000
  • 9 rides: 12504636144000
  • 10 rides: 2375880867360000

Yikes! This is factorial growth, which is even worse than exponential. For a single vehicle, the number of valid stop orderings is given by the formula:

p = (2n)! / 2^n

At 33 rides, the number of permutations exceeds the number of atoms in the universe!
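If you'd like to verify those numbers yourself, here's a quick Go snippet (a throwaway check, not part of our codebase) that evaluates the formula with arbitrary-precision integers:

```go
// Sanity check of the single-vehicle formula p = (2n)! / 2^n.
package main

import (
	"fmt"
	"math/big"
)

// orderings returns the number of valid stop orderings for n rides
// served by a single vehicle.
func orderings(n int64) *big.Int {
	p := new(big.Int).MulRange(1, 2*n) // (2n)!
	return p.Rsh(p, uint(n))           // divide by 2^n
}

func main() {
	for n := int64(1); n <= 10; n++ {
		fmt.Printf("%2d rides: %s\n", n, orderings(n))
	}
}
```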

We can also derive the size of the search space when there are more vehicles, but the number of rides dominates as you can see below.

[Chart: size of the search space as a function of the number of vehicles and the number of rides (log scale)]

So it’s clear that a brute-force approach is out of the question.

Simulated Annealing

The search space is huge. But consider this…

If we start with some initial state, we can mutate it by changing a ride’s vehicle assignment or moving a stop to a different position in the ordering to produce a new, neighboring state.

And from any state, we can reach any other state in the whole search space with no more than 3N mutations, where N is the number of rides: one mutation per ride to change its vehicle assignment, plus two more to move its pickup and dropoff into position in the stop ordering.

This means that the search space, while enormous, has a small graph diameter.
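To make that concrete, here's a rough Go sketch of one way to represent the state, along with one of the two mutation types: moving a stop within a vehicle's ordering. The representation and names are illustrative, not our actual data model, and the other mutation (reassigning a ride to a different candidate vehicle, which moves both of its stops into that vehicle's ordering) is omitted for brevity.

```go
package schedule

import "math/rand"

// Stop is a single pickup or dropoff belonging to a ride.
type Stop struct {
	Ride   int  // index of the ride this stop belongs to
	Pickup bool // true for the pickup, false for the dropoff
}

// State is one candidate schedule: a vehicle assignment for every ride
// plus an ordered stop list for every vehicle.
type State struct {
	Assignment []int    // Assignment[ride] = vehicle index
	Orderings  [][]Stop // Orderings[vehicle] = that vehicle's stops, in order
}

// moveStop is one kind of mutation: shift a single stop to a new position
// within its vehicle's ordering. It rejects (returns false) any move that
// would put a ride's dropoff before its pickup.
func (s *State) moveStop(rnd *rand.Rand, vehicle int) bool {
	old := s.Orderings[vehicle]
	if len(old) < 2 {
		return false
	}
	from, to := rnd.Intn(len(old)), rnd.Intn(len(old))
	moved := old[from]

	// build the candidate ordering with the chosen stop relocated
	rest := make([]Stop, 0, len(old)-1)
	rest = append(rest, old[:from]...)
	rest = append(rest, old[from+1:]...)
	candidate := make([]Stop, 0, len(old))
	candidate = append(candidate, rest[:to]...)
	candidate = append(candidate, moved)
	candidate = append(candidate, rest[to:]...)

	// enforce the pickup-before-dropoff constraint
	pickedUp := make(map[int]bool)
	for _, st := range candidate {
		if st.Pickup {
			pickedUp[st.Ride] = true
		} else if !pickedUp[st.Ride] {
			return false // dropoff would come before its own pickup
		}
	}
	s.Orderings[vehicle] = candidate
	return true
}
```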

Simulated annealing is one of my favorite algorithms, and it’s perfect for this type of problem. If you are unfamiliar with this algorithm, the Wikipedia article is quite good — check it out.

So the approach is to apply simulated annealing. This means we need to model the state and provide these methods (a generic sketch of how they fit together follows the list):

  • DoMove (make a random mutation)
  • UndoMove (undo the previous mutation because it was bad)
  • Energy (compute the energy, or score, of this state)
  • Copy (make a copy of the state because it’s the best found so far)
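Here's a generic annealing loop in Go showing how those four operations fit together. The interface, the method signatures, and the geometric cooling schedule are illustrative, not our production code:

```go
package schedule

import (
	"math"
	"math/rand"
)

// Annealable is the contract the optimizer needs from a state.
type Annealable interface {
	DoMove(rnd *rand.Rand) // apply a random mutation
	UndoMove()             // revert the last mutation
	Energy() float64       // score of the current state (lower is better)
	Copy() Annealable      // snapshot the best state found so far
}

// Anneal cools geometrically from maxTemp to minTemp over the given number
// of steps and returns the best state seen along the way.
func Anneal(state Annealable, maxTemp, minTemp float64, steps int, rnd *rand.Rand) Annealable {
	factor := -math.Log(maxTemp / minTemp)
	energy := state.Energy()
	best, bestEnergy := state.Copy(), energy
	for i := 0; i < steps; i++ {
		t := maxTemp * math.Exp(factor*float64(i)/float64(steps))
		state.DoMove(rnd)
		e := state.Energy()
		if e > energy && rnd.Float64() >= math.Exp((energy-e)/t) {
			// worse move rejected by the Metropolis criterion: undo it
			state.UndoMove()
		} else {
			energy = e
			if energy < bestEnergy {
				best, bestEnergy = state.Copy(), energy
			}
		}
	}
	return best
}
```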

We’ve already talked about the types of mutations, but how do we compute a score for a given set of vehicle assignments and stop orderings?

Scoring

We’re traversing a massive search space and looking for the global minimum. What are we trying to minimize?

  • Wait Time (how long each passenger waits for their vehicle to arrive)
  • Ride Duration (how long each passenger is on their vehicle)
  • Vehicle Mileage (how many miles the vehicles travel)

Different weights can be applied to these individual factors, and the factors aren't necessarily linear. Given a choice between shaving two minutes off a 5-minute wait or one minute off a 10-minute wait, all else equal, we might prefer the latter.
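As a rough illustration of that kind of weighting, here's a sketch of an energy function. The weights and the squared wait penalty are made-up example values, not our tuned ones:

```go
package schedule

// energy sketches how the factors above might be combined into one score.
// Squaring the wait makes long waits disproportionately expensive, which
// captures the preference described in the text.
func energy(waitMinutes, rideMinutes []float64, vehicleMiles float64) float64 {
	const (
		waitWeight     = 1.0
		durationWeight = 0.5
		mileageWeight  = 0.25
	)
	e := mileageWeight * vehicleMiles
	for i := range waitMinutes {
		e += waitWeight * waitMinutes[i] * waitMinutes[i] // nonlinear wait penalty
		e += durationWeight * rideMinutes[i]
	}
	return e
}
```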

Whenever the state is mutated, we need to compute all of these factors for each ride. Computing the energy is usually the most performance-intensive part of simulated annealing. We search through literally millions of states when computing a schedule, so it needs to be fast. A lot of effort went into caching values where possible and reducing memory allocations to avoid excessive garbage creation and collection. A code profiler is your friend here!

We could potentially add other scoring factors, like penalizing routes that bring a passenger near their destination and then travel away from it. Even if such a route is technically optimal, psychological factors can make it feel inefficient.

[Chart: multiple annealing runs for the same input]

The chart above shows the minimum energy found as the annealing algorithm iterates over an input with 12 rides and 1 vehicle. Each run converges on the same solution even though they started with different random seeds. We can be fairly certain that this is the global minimum. Quite impressive considering the search space contains 151,476,660,579,404,160,000 unique states and each run took under 15 seconds to complete!

This chart also reveals something else. Simulated annealing can't really be parallelized by running independent workers and picking the winner: each run progresses at about the same rate, so there's no benefit, even if the race is repeated multiple times throughout a single optimization. (I tried it. Sad face.)

Routing

In order to know the wait time and ride duration, we need to know how long it takes a vehicle to drive from point A to point B. If there are 10 stops, there are 100 (N^2) A-to-B segments to compute. (Our annealing algorithm will probably stumble on most of these pairs.) Several APIs provide route-matrix functionality to do just this, but the terms of use for the Google Directions API, for example, don't really permit this type of backend usage.

So we ended up writing our own router on top of OpenStreetMap (OSM) data using a relatively simple A* search algorithm. We compute all of the segment distances and durations up front before the simulated annealing begins.
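Here's a sketch of that precomputation step, reusing the Point type from the earlier data-model sketch and assuming a hypothetical Router interface in place of our actual A* implementation:

```go
package schedule

import "time"

// Router answers point-to-point driving time. In our case that's an A*
// search over OpenStreetMap data; the interface here is an assumption
// made for the sketch.
type Router interface {
	Duration(from, to Point) time.Duration
}

// durationMatrix precomputes the driving time between every ordered pair
// of stop locations so the annealer never calls the router in its inner loop.
func durationMatrix(router Router, stops []Point) [][]time.Duration {
	n := len(stops)
	matrix := make([][]time.Duration, n)
	for i := range matrix {
		matrix[i] = make([]time.Duration, n)
		for j := range matrix[i] {
			if i != j {
				matrix[i][j] = router.Duration(stops[i], stops[j])
			}
		}
	}
	return matrix
}
```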

More recently, I discovered the Open Source Routing Machine (OSRM), a self-hosted OSM router that's super fast. It's possible we'll switch to this in the future instead of maintaining our own router.

Scheduling Rides in Advance

There’s another tricky problem we had to solve, and it deals with scheduling rides in advance.

For the on-demand use case, we simply want to complete all of the rides as soon as possible. For the in-advance use case, passengers can specify a desired departure or arrival time. We want to honor these times as best as we can.

In this situation, it’s possible that a vehicle will have some idle time in between rides. It might take 10 minutes to drive to the next stop, but the passenger doesn’t need to be picked up until an hour from now. What time should we aim to be there?

The naive answer would be to show up at exactly their desired pickup time. But what if there are a lot of upcoming stops to make in a short amount of time? We might quickly get behind schedule. It would be better to pick up the first passenger a little early to get a head start. So our new problem is:

How much idle time, if any, should be inserted in between each stop to minimize the error between the desired stop times and the actual stop times?

[Diagram: a generic example I made while working out the problem, which I posted to Stack Exchange]

This problem had me stumped for a few days. I had code that could solve it, but not quickly. This operation needs to be run for each state during annealing, so again, it has to be very fast. I created the diagram above to distill the problem with a generic example. And then I posted it on the Computer Science Stack Exchange, in search of an efficient algorithm…

The following day, Chao Xu identified the problem as isotonic regression, which has an O(n)-time algorithm. I had never heard of it, but it was indeed the solution. I found more details online and implemented the pool adjacent violators algorithm to quickly calculate the solution for any given stop ordering.
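The algorithm itself is surprisingly short. Here's a generic, unweighted sketch of it in Go (not our production code): given a sequence of target values, it returns the closest non-decreasing sequence in the least-squares sense. Roughly speaking, the stop-time problem maps onto it by subtracting each stop's cumulative drive time from its desired time, fitting a non-decreasing sequence to those shifted targets, and then adding the drive times back.

```go
package schedule

// isotonic fits the closest non-decreasing sequence to ys in the
// least-squares sense using the pool adjacent violators algorithm.
func isotonic(ys []float64) []float64 {
	type block struct {
		sum   float64
		count int
	}
	blocks := make([]block, 0, len(ys))
	for _, y := range ys {
		blocks = append(blocks, block{sum: y, count: 1})
		// merge backwards while adjacent block means violate the ordering
		for len(blocks) > 1 {
			a, b := blocks[len(blocks)-2], blocks[len(blocks)-1]
			if a.sum/float64(a.count) <= b.sum/float64(b.count) {
				break
			}
			blocks = blocks[:len(blocks)-2]
			blocks = append(blocks, block{sum: a.sum + b.sum, count: a.count + b.count})
		}
	}
	// expand the pooled blocks back out to one fitted value per input
	fit := make([]float64, 0, len(ys))
	for _, b := range blocks {
		mean := b.sum / float64(b.count)
		for i := 0; i < b.count; i++ {
			fit = append(fit, mean)
		}
	}
	return fit
}
```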

Execution Model

The scheduling algorithm runs whenever a new ride is submitted, a ride is canceled, or a driver reports a no-show. It also runs every two minutes to refine the schedule. This way, the system dynamically adjusts to real-time conditions and keeps the schedule at or near the global optimum.
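As a sketch of that execution model (the events channel and reschedule callback here are assumptions for illustration, not our actual service code):

```go
package schedule

import "time"

// runScheduler re-runs the scheduler on every triggering event
// (new ride, cancellation, no-show) and on a periodic timer.
func runScheduler(events <-chan struct{}, reschedule func()) {
	ticker := time.NewTicker(2 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case _, ok := <-events:
			if !ok {
				return
			}
			reschedule()
		case <-ticker.C: // periodic refinement
			reschedule()
		}
	}
}
```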

We record every estimate that the algorithm makes. We can compare these to actual wait times and ride durations, so it’s possible we can further improve the algorithm in the future with this data.

The Results

Several transit agencies are already using TransLoc OnDemand with great results. Since August, we’ve transported 60,000 passengers with an average wait time of 8 minutes. For one of our clients, ridership increased 32% and wait time decreased 15% after switching from a competing product. Built upon a solid foundation of powerful algorithms, our product is ready to grow and make an even bigger impact in the demand-responsive market.

Thanks for reading!

Michael Fogleman
Software Engineer at TransLoc

