Thursday, March 29, 2012

PK models in R and in Julia

1.1 Pharmacokinetic models with an analytic solution

Pharmacokinetics is the study of the absorption and elimination of drugs and their metabolites in the body. As described in the Wikipedia article, there are several parameters, such as rate constants, volumes of distribution and clearances, that we wish to estimate from pharmacokinetic data, which usually consist of measurements of blood serum concentrations following given doses at given times. My particular interest is in the analysis of population pharmacokinetic data using nonlinear mixed-effects models and in the design of experiments for population pharmacokinetic studies.

For many of the models based on linear elimination kinetics, the concentration at time t after a dose d has an analytic solution. As part of the PFIM software developed in France Mentré's lab at Inserm, Julie Bertrand created a catalogue of these expressions for 1-, 2- or 3-compartment models after oral, bolus or infusion administration of a single dose, a fixed number of doses, or in steady state after multiple doses. Julie and I created the PKPDmodels package for R to provide these expressions as functions that also evaluate the gradient with respect to the parameters and, optionally, the Hessian.

At the recent PODE (Population Optimum Design of Experiments) conference organized by France Mentré, the wonderful Emmanuelle Comets mentioned that in her saemix package for R she only needs the predicted concentration values, and asked whether evaluating the gradient at every function evaluation was slowing things down. I decided to benchmark.

1.2 Model function and benchmarks in R

One of the (regrettably few) published examples of population pharmacokinetic data is from a study of theophylline kinetics. These data are available as the Theoph data set in R. The experimental subjects were given a single oral dose of the drug. Assuming a 1-compartment model with linear elimination kinetics, the expression for the model is

> library(PKPDmodels)
> PKexpr("oral", "sd")
~(dose/V) * (ka/(ka - k)) * (exp(-k * t) - exp(-ka * t))

in terms of the parameters V, the volume of distribution, ka, the absorption rate constant, and k, the elimination rate constant.

It turns out that the combination of V and k is not a particularly good choice of parameters, as their estimates usually end up being highly correlated. A better choice is V and Cl, the clearance, defined as Cl = kV. We can achieve this parameterization by providing a list of transformation expressions from the new parameters to the old. Nothing sophisticated is going on, just a substitution of expressions in another expression. While we are at it we can switch to the logarithms of the pharmacokinetic parameters, which provides a log-likelihood function that is closer to quadratic.

> PKexpr("oral", "sd", list(ka ~ exp(lka), V ~ exp(lV), k ~ exp(lCl)/V))
~(dose/exp(lV)) * (exp(lka)/(exp(lka) - exp(lCl)/V)) * (exp(-(exp(lCl)/V) * 
    t) - exp(-exp(lka) * t))

A call to PKmod with similar arguments produces a byte-compiled function that evaluates both the model function and the gradient. (Note that the transformation for k is now written as exp(lCl - lV), so that the old parameter V does not survive the substitution.)

> (mfg <- PKmod("oral", "sd", list(ka ~ exp(lka), V ~ exp(lV), k ~ exp(lCl - lV))))
function (dose, t, lV, lka, lCl) 
{
    .expr1 <- exp(lV)
    .expr2 <- dose/.expr1
    .expr3 <- exp(lka)
    .expr5 <- exp(lCl - lV)
    .expr6 <- .expr3 - .expr5
    .expr7 <- .expr3/.expr6
    .expr8 <- .expr2 * .expr7
    .expr11 <- exp(-.expr5 * t)
    .expr14 <- exp(-.expr3 * t)
    .expr15 <- .expr11 - .expr14
    .expr19 <- .expr8 * (.expr11 * (.expr5 * t))
    .expr21 <- .expr6^2
    .expr23 <- .expr2 * (.expr3 * .expr5/.expr21)
    .value <- .expr8 * .expr15
    .grad <- array(0, c(length(.value), 3L), list(NULL, c("lV", 
        "lka", "lCl")))
    .grad[, "lV"] <- .expr19 - (.expr23 + dose * .expr1/.expr1^2 * 
        .expr7) * .expr15
    .grad[, "lka"] <- .expr2 * (.expr7 - .expr3 * .expr3/.expr21) * 
        .expr15 + .expr8 * (.expr14 * (.expr3 * t))
    .grad[, "lCl"] <- .expr23 * .expr15 - .expr19
    attr(.value, "gradient") <- .grad
    .value
}
<bytecode: 0x2bd7c88>

Because the symbolic differentiation in R that produces the gradient also performs common subexpression elimination, this function gives a cleaner evaluation of the model function and a comparatively compact expression for the gradient.
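
The same machinery is available directly in base R through deriv; a small standalone sketch (not the PKPDmodels code) shows the shared subexpressions:

## deriv() returns a function whose body reuses common subexpressions
## (.expr1, .expr2, ...) in both the value and the gradient
deriv(~ (dose/V) * exp(-k * t), c("V", "k"),
      function.arg = c("dose", "t", "V", "k"))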

For comparison I use two types of evaluations of the model function itself, one with a nested function call and one with the transformations performed internally.

> mf0 <- cmpfun(function(dose, t, V, ka, k) (dose/V) * (ka/(ka - k)) * (exp(-k * t) - exp(-ka * t)))
> mf1 <- cmpfun(function(dose, t, lV, lka, lCl) mf0(dose, t, exp(lV), exp(lka), exp(lCl - lV)))
> mf2 <- cmpfun(function(dose, t, lV, lka, lCl) {V <- exp(lV); ka <- exp(lka); k <- exp(lCl - lV); (dose/V) * (ka/(ka - k)) * (exp(-k * t) - exp(-ka * t))})

Now would be a good time to check that I haven't made transcription mistakes:

> Dose <- Theoph$Dose
> str(Dose)
 num [1:132] 4.02 4.02 4.02 4.02 4.02 4.02 4.02 4.02 4.02 4.02 ...
> Time <- Theoph$Time
> lV <- rep.int(-1, length(Dose))
> lka <- rep.int(0.6, length(Dose))
> lCl <- rep.int(-4, length(Dose))
> all.equal(mf1(Dose, Time, lV, lka, lCl), mf2(Dose, Time, lV, lka, lCl))
[1] TRUE
> all.equal(mf1(Dose, Time, lV, lka, lCl), as.vector(mfg(Dose, Time, lV, lka, lCl)))
[1] TRUE

Finally, we benchmark:

> library(rbenchmark)
> cols <- c("test", "elapsed", "relative", "user.self", "sys.self")
> benchmark(mf1(Dose, Time, lV, lka, lCl), mf2(Dose, Time, lV, lka, lCl), mfg(Dose, Time, lV, lka, lCl), replications=1000L, columns=cols, order="elapsed")
                           test elapsed relative user.self sys.self
2 mf2(Dose, Time, lV, lka, lCl)   0.081 1.000000     0.080        0
1 mf1(Dose, Time, lV, lka, lCl)   0.084 1.037037     0.084        0
3 mfg(Dose, Time, lV, lka, lCl)   0.219 2.703704     0.220        0

So the conclusion is that there is very little difference between the nested function calls and the explicit assignment of transformed values, and that evaluating both the fitted values and the gradient takes about 2.7 times as long as evaluating only the fitted values. However, the actual execution times for 1000 function evaluations are sufficiently small that this operation should not be a bottleneck.

1.3 Translation into Julia

I decided to translate this code into Julia, partly to gain experience with the language. I have said that the most valuable character trait for a programmer is unbounded pessimism, because you spend your working time thinking "now what can go wrong here?". Because of this tendency I was careful to distinguish scalar and vector arguments, to test for correct sizes of vector arguments, and so on. A bit of testing showed me that this was unnecessary: write it the simple way and let the functions you call check for numeric arguments and handle the vector-scalar and vector-vector cases correctly, which they will do.

The only changes I needed to make were to replace '*' by '.*' and '/' by './'. The "dot" forms of these operators perform componentwise vector-vector operations. Because they are also defined for scalar-scalar and scalar-vector operations, there is no harm in using them throughout.

The result is anticlimactic. The function definitions are

mf0(dose, t, V, ka, k) = (dose ./ V) .* (ka ./ (ka - k)) .* (exp(-k .* t) - exp(-ka .* t))
mf1(dose, t, lV, lka, lCl) = mf0(dose, t, exp(lV), exp(lka), exp(lCl - lV))

function mfg(dose, t, lV, lka, lCl)
    V      = exp(lV)
    expr2  = dose ./ V
    ka     = exp(lka)
    Cl     = exp(lCl)
    k      = Cl ./ V
    expr6  = ka - k
    expr7  = ka ./ expr6
    expr8  = expr2 .* expr7
    expr11 = exp(-k .* t)
    expr14 = exp(-ka .* t)
    expr15 = expr11 - expr14
    expr18 = V .* V
    expr19 = Cl .* V ./ expr18
    expr24 = expr6 .* expr6
    expr8 .* expr15, hcat(expr8 .* (expr11 .* (expr19 .* t)) - (expr2 .* (ka .* expr19 ./ expr24) + dose .* V ./ expr18 .* expr7) .* expr15,
                          expr2 .* (expr7 - ka .* ka ./ expr24) .* expr15 + expr8 .* (expr14 .* (ka .* t)),
                          expr2 .* (ka .* k ./ expr24) .* expr15 - expr8 .* (expr11 .* (k .* t)))
end

and timing the same calculations on the same machine produced

elapsed time: 0.08809614181518555 seconds
elapsed time: 0.17201519012451172 seconds

which are very close to the timings for the byte-compiled functions in R.

Monday, March 26, 2012

Julia functions for the Rmath library

1.1 Signatures of the d-p-q-r functions in libRmath

Users of R are familiar with the functions for evaluating properties of and for sampling from probability distributions, the so-called d-p-q-r functions. The designation reflects the fact that, for each distribution, there are up to 4 such functions, each beginning with one of these letters followed by an abbreviated name of the distribution. The prefixes indicate (an R example for the χ2 distribution follows the list):

d: the density of a continuous random variable or the probability mass function of a discrete random variable
p: the cumulative probability function, also called the cumulative distribution function or c.d.f.
q: the quantile function, the inverse of the c.d.f., defined on the interval (0, 1)
r: random sampling from the distribution
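
For the χ2 distribution in R, for example:

## the four d-p-q-r functions for the chi-squared distribution
dchisq(3.84, df = 1)   # density at 3.84
pchisq(3.84, df = 1)   # P(X <= 3.84), about 0.95
qchisq(0.95, df = 1)   # the 0.95 quantile, about 3.84
rchisq(5, df = 1)      # five random draws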

Members of R-core, notably Martin Maechler, have devoted considerable effort to ensuring that these functions are both reliable and general. Because they are part of an Open Source system they can be used in other Open Source projects; in fact, there is the capability to collect these functions into a stand-alone library. On Debian Linux, and distributions derived from it such as Ubuntu, this stand-alone library is provided by the r-mathlib package. The header file for the library is $RHOME/include/Rmath.h on most systems (/usr/share/R/include/Rmath.h on Debian and derived distributions, because of the rules for Debian packaging).

The names and arguments of these functions follow well-defined patterns. For example, those for the χ2 distribution have signatures

double  dchisq(double, double, int);
double  pchisq(double, double, int, int);
double  qchisq(double, double, int, int);
double  rchisq(double);

The first argument of dchisq is x, the abscissa, of pchisq is q, the quantile, and of qchisq is p, the probability. The next argument of the d-p-q functions, and the only argument of rchisq, is the distribution parameter, df, the degrees of freedom of the χ2. The last argument of the d-p-q functions controls whether the probability is on the logarithm scale; it is named give_log for the density function and log_p for the probability and quantile functions. (At first glance this argument may seem unnecessary, as the density, say, could be calculated on the probability scale and then transformed to the logarithm scale. However, operations that are mathematically equivalent do not always produce the same result in floating point arithmetic: the log-scale versions retain precision in the extreme tails, where the probability-scale values would underflow.)
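
A quick illustration in R of why the log scale matters, far out in the normal tail:

## the density at 40 underflows on the probability scale, but the
## log-scale version keeps full precision
dnorm(40, log = TRUE)  # about -800.92
log(dnorm(40))         # -Inf, because dnorm(40) underflows to 0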

The second-to-last argument of the p and q functions determines whether probability is accumulated from the left, the lower tail, or from the right, the upper tail. The default is the lower tail. Again there is an apparent redundancy, because one can always subtract the lower-tail probability from 1 to get the upper-tail probability. However, when the upper-tail probability is tiny the lower-tail probability is extremely close to 1, and the subtraction loses all precision: if x is extremely small but positive, 1 - x rounds to 1. (Remember Kernighan and Plauger's aphorism that "10 times 0.1 is hardly ever 1"; that's what you live with in floating point computations.)
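
The same effect in R:

## the upper tail computed directly keeps its precision; computed
## by subtraction from 1 it vanishes entirely
pnorm(10, lower.tail = FALSE)  # about 7.62e-24
1 - pnorm(10)                  # exactly 0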

1.2 Calling these functions from Julia

It is surprisingly easy to call C functions with simple signatures like these from within Julia. First you call the Julia function dlopen to access the shared object file and save the handle:

julia> _jl_libRmath = dlopen("libRmath")
Ptr{Void} @0x0000000003461050

I use an awkward, patterned name for this variable to avoid potential name conflicts. At present the Julia namespace is flat (but that is likely to change as namespaces are already being discussed).

Then a call to, say, pchisq needs only

julia> ccall(dlsym(_jl_libRmath,:pchisq),Float64,(Float64,Float64,Int32,Int32), 1.96^2, 1, true, false)
0.950004209703559

The first argument to the ccall function is the symbol extracted with dlsym, followed by the type of the return value, a tuple of the argument types, and then the actual arguments. The actual arguments are converted to the types in the signature before being passed to the C function; thus true is converted to 1 and false to 0.

The value of this function call, 0.95000, is what we would expect, because a χ2 random variable on 1 degree of freedom is the square of a standard normal and the interval [-1.960, 1.960] contains approximately 95% of the probability of the standard normal distribution.
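
The same check in R:

## a chi-squared on 1 df is the square of a standard normal, so
## these two probabilities agree
pchisq(1.96^2, df = 1)      # 0.9500042
pnorm(1.96) - pnorm(-1.96)  # 0.9500042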

1.3 Creating a Julia function for scalar arguments

Obviously, calling such a C function from the Rmath library is simpler if we wrap the call in a Julia function. We can define such a function as a one-liner (well, two lines but only because it would be a very long single line)

pchisq(q::Number, df::Number, lower_tail::Bool, log_p::Bool) =
   ccall(dlsym(_jl_libRmath,:pchisq),Float64,(Float64,Float64,Int32,Int32), q, df, lower_tail, log_p)

This illustrates one of the methods of defining a function in Julia,

fname(arg1, arg2, ...) = expression

This form, which reads much like the mathematical description f(x) = value, is useful when the function body is a single expression.

This code defines a particular method, with a reasonably general signature, for pchisq. The C function pchisq requires double precision values (called Float64 in Julia) for the first two arguments and integers for the last two. However, these integers are treated as Boolean values, so we require the arguments passed to the Julia function to be Boolean. The Number type in Julia is an abstract type incorporating all the integer and real number types (including complex and rational numbers, as it turns out). The Julia function, pchisq, can have methods of many different signatures. In a sense a function in Julia is just a name used to index into a method table.

We could have allowed unrestricted arguments, as in

pchisq(q, df, lower_tail, log_p) = <something>

but then we would need to check the argument types in the function body, to ensure that they are appropriate and to distinguish vector from scalar arguments. It is better to use the method signature to check for the general form of the arguments and to distinguish the scalar and vector cases. (Vectorized methods are defined in section 1.5.)

This is one of the organizing principles of Julia: the language is built around functions, but every function is really a collection of methods chosen via multiple dispatch. So thinking in terms of signatures right off the bat is a good idea.

1.4 Default argument values

In R we can match arguments by name in a function call and we can define default values for arguments that are not given a value. At present neither of these capabilities is available in Julia. However, we can provide defaults for trailing arguments by creating methods with reduced signatures:

pchisq(q::Number, df::Number, lower_tail::Bool) = pchisq(q, df, lower_tail, false)
pchisq(q::Number, df::Number) = pchisq(q, df, true, false)

Because methods are selected by the types of positional arguments, not by named actual arguments, whenever log_p is true we must also specify lower_tail, even if it takes its default value, true.
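
For contrast, R covers every combination with a single definition because trailing arguments can be skipped by name. A hypothetical wrapper (pchisq2 is an invented name) makes the point:

## one definition supplies all the defaults, and any argument can
## be given by name without specifying the ones before it
pchisq2 <- function(q, df, lower.tail = TRUE, log.p = FALSE)
    pchisq(q, df, lower.tail = lower.tail, log.p = log.p)
pchisq2(3.84, 1)                # both defaults
pchisq2(3.84, 1, log.p = TRUE)  # lower.tail keeps its default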

In the case of pchisq the distribution parameter, df, does not have a default value. For many other distributions there are default values of distribution parameters. For example, the distribution parameters for the logistic distribution are location and scale with defaults 0 and 1. Having defined the 5-argument scalar version of plogis

plogis(q::Number, l::Number, s::Number, lo::Bool, lg::Bool) = 
    ccall(dlsym(_jl_libRmath,:plogis),Float64,(Float64,Float64,Float64,Int32,Int32), q, l, s, lo, lg)

(for brevity we have used the names l, s, lo and lg for the last four arguments) we can define the methods with default values as

plogis(q::Number, l::Number, lo::Bool, lg::Bool) = plogis(q, l, 1, lo, lg)
plogis(q::Number, lo::Bool, lg::Bool) = plogis(q, 0, 1, lo, lg)
plogis(q::Number, l::Number, s::Number, lo::Bool) = plogis(q, l, s, lo, false)
plogis(q::Number, l::Number, lo::Bool) = plogis(q, l, 1, lo, false)
plogis(q::Number, lo::Bool) = plogis(q, 0, 1, lo, false)
plogis(q::Number, l::Number, s::Number) = plogis(q, l, s, true, false)
plogis(q::Number, l::Number) = plogis(q, l, 1, true, false)
plogis(q::Number) = plogis(q, 0, 1, true, false)

Because a Bool is not a Number, a signature such as (Number, Bool) can be distinguished from (Number, Number).

These method definitions allow for some combinations of default values but not all combinations. Having named arguments, possibly with default values, is still a more flexible system but these method signatures do handle the most important cases.

Defining all those methods can get tedious (not to mention providing the opportunity for many transcription errors), so it is worthwhile scripting the process using Julia's macro facility. See the file extras/Rmath.jl in the Julia distribution for the macros that generate both these methods and the vectorized methods described in the next section.

1.5 Vectorizing the function

In R there are no scalars, only vectors of length 1, so, in that sense, all functions are defined for vector arguments. In addition, many functions, including the d-p-q functions, are vectorized in the sense that they create a result of the appropriate form from vector arguments whose length is greater than 1. Thus providing a vector of quantiles to pchisq will return a vector of probabilities. We would want the same to be true for the Julia functions.
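
In R, for instance, no extra work is needed:

## a vector of quantiles yields a vector of probabilities
pchisq(c(1, 3.84, 10), df = 1)  # approximately 0.683 0.950 0.998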

In Julia scalars are distinct from vectors, and we use method dispatch to distinguish the scalar and vector cases. I think a general programming paradigm for Julia is first to define a function for scalar arguments and then build the vectorized version from that. However, I don't yet have enough experience to say if this really is a general paradigm.

We can write methods for a vector q using parametric signatures (in Julia a Vector{Float64} is not a Vector{Number}, so the element type must be a type parameter):

pchisq{T<:Number}(q::Vector{T}, df::Number, lower_tail::Bool, log_p::Bool) =
    [ pchisq(q[i], df, lower_tail, log_p) | i=1:length(q) ]
pchisq{T<:Number}(q::Vector{T}, df::Number, lower_tail::Bool) =
    [ pchisq(q[i], df, lower_tail, false) | i=1:length(q) ]
pchisq{T<:Number}(q::Vector{T}, df::Number) =
    [ pchisq(q[i], df, true, false) | i=1:length(q) ]

These methods use a "comprehension" to specify the loop over the individual elements of the array, producing another array. They could, of course, be written in a more conventional looping notation, but the comprehension notation is powerful and compact, similar to the "apply" family of functions in R, so we use it here. Like the one-line function definition, the comprehension reads much like a mathematical expression creating an array of values of a certain form for i = 1,…,n.
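
The R counterpart of such a comprehension is sapply over the indices:

## sapply plays the role of the comprehension: apply the scalar
## function to each index and collect the results in a vector
q <- c(1, 3.84, 10)
sapply(seq_along(q), function(i) pchisq(q[i], df = 1))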

By the way, experienced R programmers, who know not to write 1:length(q) because of unexpected results when length(q) is zero, need not worry. The sequence notation a:b doesn't count down in Julia when a is greater than b, so 1:0 has length zero. That is, 1:length(q) in Julia always produces the same answer as seq_along(q) does in R.
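
The R trap, for anyone who has not been bitten by it:

q <- numeric(0)
1:length(q)   # c(1, 0): counts down, almost never what was intended
seq_along(q)  # integer(0), the safe idiom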

We could also vectorize the df argument, which leads us to consider the case of vector df and scalar q, the case of vector df and vector q, and so on. At this point we realize that we are seeing many variations on a theme and decide that it is best to script this vectorization. Fortunately the file base/operators.jl has macros _jl_vectorize_1arg and _jl_vectorize_2arg which we can adapt for this purpose.

1.6 The result

As of this writing, loading the file extras/Rmath.jl

julia> load("../extras/Rmath.jl")

defines

julia> whos()
R_pow                         Function
_jl_libRmath                  Ptr{None}
dbeta                         Function
dbinom                        Function
dcauchy                       Function
dchisq                        Function
dexp                          Function
df                            Function
dgamma                        Function
dgeom                         Function
dlnorm                        Function
dlogis                        Function
dnbinom                       Function
dnchisq                       Function
dnorm                         Function
dpois                         Function
dsignrank                     Function
dt                            Function
dunif                         Function
dweibull                      Function
dwilcox                       Function
pbeta                         Function
pbinom                        Function
pcauchy                       Function
pchisq                        Function
pexp                          Function
pf                            Function
pgamma                        Function
pgeom                         Function
plnorm                        Function
plogis                        Function
pnbinom                       Function
pnchisq                       Function
pnorm                         Function
ppois                         Function
psignrank                     Function
pt                            Function
punif                         Function
pweibull                      Function
pwilcox                       Function
qbeta                         Function
qbinom                        Function
qcauchy                       Function
qchisq                        Function
qexp                          Function
qf                            Function
qgamma                        Function
qgeom                         Function
qlnorm                        Function
qlogis                        Function
qnbinom                       Function
qnchisq                       Function
qnorm                         Function
qpois                         Function
qsignrank                     Function
qt                            Function
qunif                         Function
qweibull                      Function
qwilcox                       Function
rchisq                        Function
rgeom                         Function
rpois                         Function
rsignrank                     Function
rt                            Function
set_seed                      Function

and a function like plogis provides many method signatures

julia> plogis
Methods for generic function plogis
plogis(Number,Number,Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:275
plogis(Number,Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:281
plogis(Number,Number,Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:277
plogis(Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:287
plogis(Number,Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:283
plogis(Number,Number,Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:279
plogis(Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:289
plogis(Number,Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:285
plogis(Number,) at /home/bates/build/julia/extras/../extras/Rmath.jl:291
plogis{T1<:Number,T2<:Number}(T1<:Number,AbstractArray{T2<:Number,N}) at operators.jl:162
plogis{T1<:Number,T2<:Number}(AbstractArray{T1<:Number,N},T2<:Number) at operators.jl:165
plogis{T1<:Number,T2<:Number}(AbstractArray{T1<:Number,N},AbstractArray{T2<:Number,N}) at operators.jl:169
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},T2<:Number,T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:7
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,AbstractArray{T2<:Number,N},T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:10
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,T2<:Number,AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:13
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},AbstractArray{T2<:Number,N},T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:16
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,AbstractArray{T2<:Number,N},AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:20
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},T2<:Number,AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:24

You will see that the vectorized methods have slightly more general signatures involving AbstractArrays of Numbers, which also cover cases such as matrix arguments.

The methods allow for calling plogis in a way that is natural to R programmers

julia> plogis(-1:0.2:1, 0)
[0.268941, 0.310026, 0.354344, 0.401312, 0.450166, 0.5, 0.549834, 0.598688, 0.645656, 0.689974, 0.731059]

julia> plogis(-1:0.2:1, 0, 2)
[0.377541, 0.401312, 0.425557, 0.450166, 0.475021, 0.5, 0.524979, 0.549834, 0.574443, 0.598688, 0.622459]

(In running that example I realized that I had omitted some cases from my macros. That will be repaired "in the fullness of time" as Bill Venables is wont to say.)

Monday, March 12, 2012

A Julia version of the multinomial sampler

In the previous post on RcppEigen I described an example of sampling from a collection of multinomial distributions represented by a matrix of probabilities. In the timing example the matrix was 100000 by 5, with each of the 100000 rows summing to 1. The objective is to create a vector of 100000 1-based indices representing a sample from the probabilities in each row.

For each row we take the cumulative sums and, for safety, normalize by dividing by the last element, then compare these values to a random draw from a standard uniform distribution. The number of elements of the cumulative sums that are less than the uniform draw is the 0-based index of the result; we add 1 to convert it to a 1-based index.
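
In R, the scheme for a single row looks like this (a sketch with an invented helper name, not code from any package):

## 0-based index = number of normalized cumulative sums that fall
## below a uniform draw; adding 1 gives the 1-based index
samp1R <- function(p) {
    cs <- cumsum(p)
    sum(cs / cs[length(cs)] < runif(1)) + 1L
}
samp1R(c(0.10, 0.20, 0.30, 0.25, 0.15))  # an index in 1:5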

I have been experimenting a bit with a very interesting new language called Julia and decided to write a similar function in it. The version shown here has been updated according to suggestions from Jeff Bezanson, Stefan Karpinski and Viral Shah on the julia-dev list at Google Groups.

function samp1(x::Array{Float64, 2})
    cs = cumsum(reshape(x, length(x)))
    sum(cs/cs[end] < rand()) + 1
end

function samp(X::Array{Float64, 2})
    if any(X < 0)
        error("Negative probabilities not allowed")
    end
    [samp1(X[i,:]) | i = 1:size(X,1)]
end

This version is about 10 times as fast as the pure R version but about 9 times slower than the RcppEigen version.

Update:
In the thread on the julia-dev list about this example, Stefan Karpinski showed that in Julia you can enhance performance by de-vectorizing your code. He came up with the following version, which is much faster than the RcppEigen version:

function SKsamp(X::Matrix{Float64})
  if any(X < 0)
    error("Negative probabilities not allowed")
  end
  s = Array(Int, size(X,1))
  for i = 1:size(X,1)
    r = rand()
    for j = 1:size(X,2)
      r -= X[i,j]
      if r <= 0.0
        s[i] = j
        break
      end
    end
  end
  return s
end

To a longtime R/S programmer the concept of de-vectorizing your code seems heretical, but I can understand that code compiled by a JIT will be happier with the loop-and-break style.

In any case, I think this example shows that R programmers should take a look at Julia. Two immediate applications I can imagine are MCMC methods and large-scale iterative fits such as generalized linear models.

Sunday, March 11, 2012

An RcppEigen example

R is an Open Source project providing an interactive language and environment for statistical computing. It has become the lingua franca for research in statistical methods. Because R is an interpreted language it is comparatively easy to develop applications (called packages) for it. However, the resulting code can sometimes be slow, which is a problem in compute-bound methods like Markov chain Monte Carlo.

From its inception R, and its predecessor S, have allowed calls to compiled code written in C or Fortran, but such programming is not for the faint-hearted; there are many ways in which you can trip yourself up. Over the past several years Dirk Eddelbuettel and Romain François developed a package called Rcpp that provides C++ classes and methods for encapsulating R objects. I recently started using these in the lme4 package for mixed-effects models that I develop with Martin Maechler and Ben Bolker.

High-performance numerical linear algebra is particularly important in the mixed-effects model calculations, which use both dense and sparse matrices. I ran across a wonderful linear algebra system called Eigen, a templated library of C++ classes and methods, and, with Dirk and Romain, wrote the RcppEigen package to interface it with R.

This posting shows an example of code that can be made to run much faster using RcppEigen than the original R code.

The example, from Dongjun Chung, requires sampling from a collection of multinomial random variables as part of an iterative method for parameter estimation in his R package for the analysis of ChIP-sequencing data. There are generally a small number of categories (5 is typical) and a relatively large number (say 10,000) of instances, available as a 10000 by 5 matrix of non-negative elements whose rows sum to 1. At each iteration a sample of size 10000, consisting of indices in the range 1 to 5, is to be generated from the current matrix of probabilities. Dongjun wrote an R function for this:

Rsamp <- function(X) {
  stopifnot(is.numeric(X <- as.matrix(X)),
            (nc <- ncol(X)) > 1L,
            all(X >= 0))
  apply(X, 1L, function(x) sample(nc, size=1L, replace=FALSE, prob=x+1e-10))
}

which is careful R code (e.g. using apply instead of an explicit loop) but, even so, this function is the bottleneck in the method.

A method using RcppEigen requires a similar R function

RcppSamp <- function(X) {
  stopifnot(is.numeric(X <- as.matrix(X)),
            (nc <- ncol(X)) > 1L,
            all(X >= 0))
  .Call(CppSamp, X)
}

and a C++ function

#include <RcppEigen.h>  // pulls in Rcpp and Eigen
#include <numeric>      // std::partial_sum

using namespace Rcpp;

SEXP CppSamp(SEXP X_) {
  typedef Eigen::ArrayXd   Ar1;
  typedef Eigen::ArrayXXd  Ar2;
  typedef Eigen::Map<Ar2>  MAr2;

  const MAr2 X(as<MAr2>(X_));   // map the R matrix without copying
  int m(X.rows()), n(X.cols()), nm1(n - 1);
  Ar1      samp(m);
  RNGScope sc;                  // handles GetRNGstate()/PutRNGstate()
  for (int i = 0; i < m; ++i) {
    Ar1 ri(X.row(i));
    std::partial_sum(ri.data(), ri.data() + n, ri.data()); // cumulative sums
    ri /= ri[nm1];              // normalize so the last element is 1
    samp[i] = (ri < ::unif_rand()).count() + 1;  // 1-based index
  }
  return wrap(samp);
}

The general idea is that the Eigen::ArrayXd class is a one-dimensional array of doubles and the Eigen::ArrayXXd class is a two-dimensional array of doubles. Operations on the array classes are componentwise operations or reductions. There are corresponding classes, Eigen::VectorXd and Eigen::MatrixXd, that provide linear algebra operations. An Eigen::Map of another structure behaves like that structure but takes a pointer to existing storage instead of allocating its own. One of the idioms of writing with RcppEigen is to create const Eigen::Map<klass> objects from the input arguments, thus avoiding unnecessary copying. The templated function as and the function wrap are the parts of RcppEigen that generalize the methods in Rcpp for converting back and forth between R objects and C++ classes.

Here the approach is to form the cumulative sums in each row, normalize them by dividing by the last element, and compare them to a draw from the standard uniform distribution. The number of elements less than the uniform variate is the 0-based index of the result, and we add 1 to get the 1-based index.

Creating the RcppEigen code is more work than the pure R code, but not orders of magnitude more. You can operate on entire arrays, as shown here, and, best of all, you don't need to worry about protecting storage from the garbage collector. And the RcppEigen code is much faster:

> set.seed(1234321)
> X <- matrix(runif(100000 * 5), ncol=5)
> benchmark(Rsamp(X), RcppSamp(X), replications=5,
+           columns=c("test", "elapsed", "relative", "user.self"))
         test elapsed relative user.self
2 RcppSamp(X)   0.058        1      0.06
1    Rsamp(X)   5.162       89      5.12

Update: I have corrected the spelling of Dongjun Chung's name.  My apologies for mis-spelling it.


Update: The next posting discusses a Julia implementation of this function. An initial version was somewhat slower than the RcppEigen version, but then Stefan Karpinski got to work and created an implementation that was over twice as fast as the RcppEigen version. In what may seem like a bizarre approach to an R programmer, the trick is to de-vectorize the code.
