The Canadian Journal of Statistics
Vol. 42, No. 3, 2014, Pages 365–383
La revue canadienne de statistique

Semiparametric methods for survival analysis
of case-control data subject to dependent
censoring
Douglas E. SCHAUBEL1 *, Hui ZHANG2 , John D. KALBFLEISCH1 and Xu SHU1
1 Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA
2 U.S. Food and Drug Administration, Silver Spring, MD, USA

Key words and phrases: Case-control study; Cox regression; dependent censoring; estimating equation;
inverse weighting
MSC 2010: Primary 62N01; secondary 62D05
Abstract: Case-control sampling can be an efficient and cost-saving study design, wherein subjects are
selected into the study based on the outcome of interest. It was established long ago that proportional hazards
regression can be applied to case-control data. However, each of the various estimation techniques available
assumes that failure times are independently censored. Since independent censoring is often violated in
observational studies, we propose methods for Cox regression analysis of survival data obtained through
case-control sampling, but subject to dependent censoring. The proposed methods are based on weighted
estimating equations, with separate inverse weights used to account for the case-control sampling and to
correct for dependent censoring. The proposed estimators are shown to be consistent and asymptotically
normal, and consistent estimators of the asymptotic covariance matrices are derived. Finite-sample properties
of the proposed estimators are examined through simulation studies. The methods are illustrated through an
analysis of pre-transplant mortality among end-stage liver disease patients obtained from a national organ
failure registry. The Canadian Journal of Statistics 42: 365–383; 2014 © 2014 Statistical Society of Canada
Résumé : L'échantillonnage cas-témoins peut constituer un plan d'expérience efficace et économique dans le cadre duquel les sujets sont choisis pour l'étude en fonction du phénomène étudié. Il est établi depuis longtemps que le modèle de régression à risques proportionnels peut s'appliquer à des données cas-témoins. Cependant, toutes les techniques d'estimation existantes supposent que les temps de défaillance sont censurés de façon indépendante. Étant donné que l'indépendance de la censure est souvent bafouée dans le cadre d'études observationnelles, les auteurs proposent des méthodes pour la régression de Cox de données de survie sujettes à la censure dépendante obtenues par un échantillonnage cas-témoins. Les méthodes proposées se fondent sur des équations d'estimation pondérées dont les poids séparés et inverses permettent de tenir compte de l'échantillonnage cas-témoins et de corriger le biais lié à la censure dépendante. Les auteurs montrent que les estimateurs proposés sont convergents et asymptotiquement normaux. Ils obtiennent également des estimateurs convergents pour les matrices de covariance asymptotique. Ils examinent les propriétés de ces estimateurs sur des échantillons de taille finie par voie de simulation et illustrent les méthodes au moyen d'une analyse de données sur le taux de mortalité prétransplantation chez les patients atteints d'une maladie hépatique en phase terminale provenant d'un registre national d'organes défaillants. La revue canadienne de statistique 42: 365–383; 2014 © 2014 Société statistique du Canada

1. INTRODUCTION
Case-control and case-cohort sampling schemes are designed to over-sample subjects expected
to provide relatively large amounts of information in terms of parameter estimation. In the case-control study, subjects observed to experience the event of interest (the cases) are over-sampled, while only a fraction of subjects not experiencing the event of interest (the controls) are selected. In the original formulation of the case-control study, all cases are selected but, more generally, cases are over-sampled, such that their sampling fraction would be many times greater than that expected through random sampling. In the context of proportional hazards regression (Cox, 1972), the case-cohort design (Prentice, 1986) was proposed, in which one selects all cases, as well as a random sample of the study population (the sub-cohort). The case-cohort design is especially well-suited to Cox regression since the sub-cohort representing the risk set can be used to compute partial likelihood contributions at each observed event.

* Author to whom correspondence may be addressed. E-mail: deschau@umich.edu
A number of methods have been proposed for the regression analysis of case-control and
case-cohort studies under the proportional hazards model. Prentice & Breslow (1978) adapted
the Cox model for use in case-control studies, and studied in detail the correspondence between
Cox regression in the presence of prospective and retrospective sampling. Oakes (1981) provided a partial likelihood interpretation for the likelihood function used by Prentice & Breslow
(1978) and, hence, a theoretical justification of the estimation procedure. Self & Prentice (1988)
provided a rigorous theoretical evaluation of the case-cohort design of Prentice (1986), including
detailed asymptotic derivations and various efficiency calculations. Kalbfleisch & Lawless (1988)
developed pseudo-likelihood methods for semiparametric estimation of the illness-death model,
applicable to case-cohort and other retrospective sampling designs. Lin & Ying (1993) considered
general missing data problems in the context of Cox regression, with the case-cohort design cast
as a special case. The asymptotic development by Lin & Ying (1993) leads to a different variance
estimator for the Prentice (1986) case-cohort estimator than that derived by Self & Prentice (1988).
Much of the subsequent research on the case-cohort design was focussed on modifications
to the sampling scheme and/or analysis in order to increase efficiency. For example, Chen & Lo
(1999) derived a class of estimators for case-cohort sampling; essentially, the methods involve
adding future cases to the risk set (in addition to the sub-cohort) in order to improve efficiency
relative to the original method by Prentice (1986). Borgan et al. (2000) proposed stratified case-cohort designs to improve efficiency. Motivated by family-based studies, Moger, Pawitan, &
Borgan (2008) developed case-cohort methods for fitting frailty models to large clustered data sets.
The methods were inspired by the work of Kalbfleisch & Lawless (1988) and Borgan et al. (2000).
The motivation for the work of Moger, Pawitan, & Borgan (2008) was to reduce computational
burden, rather than dollar cost.
With respect to the case-control study, much of the development in the last 15 years has been
directed at clustered data. For instance, in the case-control family study, one selects a random
sample of cases, random sample of controls, as well as the family members of the cases and
controls. Li, Yang, & Schwartz (1998) developed parametric methods for ?tting copula models to
clustered survival data obtained through case-control family sampling. Shih & Chatterjee (2002)
extended this work to the semiparametric setting through quasi-likelihood procedures. Hsu &
Gor?ne (2006) developed pseudo-likelihood methods for ?tting frailty model to case-control
family data using an approach similar to that of Shih & Chatterjee (2002). Hsu et al. (2004)
developed semiparametric methods for frailty models based on case-control family sampling,
with the focus being on the conditional regression parameter. Gor?ne, Zucker, & Hsu (2009)
further evaluated frailty models based on case-control family data through a pseudo-full likelihood
approach, and derived rigorous large-sample theory.
There have been several evaluations of cohort sampling in a general sense. Chen & Lo (1999)
described the close theoretical connection between the case-cohort and case-control study within
the framework of Cox regression. Chen (2001) considered “generalized case-cohort sampling”
(nested case-control, case-cohort and case-control, all within the same framework) in the context
of Cox regression. Gray (2009) studied various cohort sampling designs, focusing on Inverse
Probability of Selection Weighted (IPSW) estimators applied to the Cox model (e.g., Binder,
1992; Barlow, 1994; Borgan et al., 2000; Lin, 2000; Pan & Schaubel, 2008; Nan, Kalbfleisch, &
Yu, 2009).
The motivation for subsampling of large epidemiologic databases is well-established in the
literature. Nothing prevents such data from being subject to dependent censoring. However, each
of the subsampling methods cited thus far (and, to the best of our knowledge, all existing methods
for Cox regression analysis of case-control data) assumes that subjects are censored in a manner
independent of the failure rate. In most practical applications in which the independent censoring
assumption is violated, censoring is actually a mixture of independent (e.g., administrative, or
random loss to follow-up) and dependent (e.g., censoring of pre-treatment death by the initiation of
treatment). In such cases, it is necessary to distinguish between subjects who were independently
and dependently censored. The most popular method for handling dependent censoring is Inverse
Probability of Censoring Weighting (IPCW) proposed by Robins & Rotnitzky (1992). From this
perspective, case-control studies appear to extend nicely to the dependent censoring setting. In
particular, since IPCW entails modelling the dependent censoring hazard, there is an incentive to
over-sample dependently censored subjects.
We propose semiparametric methods for the analysis of failure time data generated by case-control sampling and subject to dependent censoring. In the setting of interest, we over-sample
dependently censored subjects, since correcting for dependent censoring through IPCW requires
that the dependent censoring process be modelled. The dependent censoring hazard is modelled
using existing case-control survival analysis methods featuring Inverse Probability of Sampling
Weighting (IPSW). Parameter estimation for the failure time hazard model then proceeds through
weighted estimating equations, with the weights correcting for both the case-control sampling and
the dependent censoring. We derive asymptotic properties of the proposed estimator, in a manner
which accounts for the randomness in the estimated IPCW and IPSW components. Since the
derived asymptotic variance is complicated, we suggest an alternative variance estimator that is
much easier to program and much faster to compute. Finite-sample performance of the proposed
procedures is evaluated through simulation.
In the evaluation of a proposed subsampling method, it is useful to assess not only if the method
can be implemented in a real data set, but, in addition, whether the method gives reasonable answers
in practice. Along these lines, we apply the proposed methods to an analysis of pre-transplant
survival among end-stage liver disease (ESLD) patients. In this application, the full cohort is
available, meaning that the results based on the full-cohort can serve as a target for the results
obtained through the proposed case-control sampling methods. With respect to background, liver
transplantation is the preferred method of treatment. However, there are thousands more patients
needing a liver transplant than there are available donor organs. As such, patients clinically suitable
for receiving a deceased-donor liver transplant are put on a wait-list, with the pertinent time origin
(t = 0) then being the date of wait listing. The receipt of a liver transplant does not censor death
but, naturally, does censor pre-transplant death (i.e., the time until death in the absence of liver
transplantation). In liver transplantation, patients are ranked on the wait-list based on their most
recent Model for End-stage Liver Disease (MELD) score. MELD was chosen as the scoring system
largely because it is a very strong predictor of pre-transplant mortality (e.g., Wiesner et al., 2001;
Kremers et al., 2004; Huo et al., 2005; Merion et al., 2005; Basto et al., 2008; Subramanian et al.,
2010). Since MELD is time-dependent, an analysis of the effect of baseline risk factors (i.e.,
measured at time t = 0) on the pre-transplant death hazard could result in substantial bias if liver
transplantation were treated as independent censoring. This and other related issues in the liver
transplant setting are discussed by Schaubel et al. (2009).
In certain cases, special exceptions are made under which a wait-listed patient may be assigned
a MELD score which is higher than that calculated, in an attempt to reflect the patient's perceived
medical urgency. The most frequent occurrence of such MELD exceptions is for patients with
hepatocellular carcinoma (HCC), a form of liver cancer. HCC patients are typically assigned a
MELD score of 22, which is often considerably higher than the score based on their laboratory
measures. To our knowledge, no existing analyses in the liver transplant literature have quantified
whether the MELD score of 22 accurately reflects the true "MELD-equivalent" wait-list mortality
risk faced by HCC patients. As a primary example in this article, we compare HCC versus
non-HCC patients, with the latter group categorized by baseline MELD score. Since MELD (at
time 0 but, also, after time 0) affects both death and liver transplantation probabilities, liver
transplantation is handled as dependent censoring of pre-transplant death time.
It should be noted that MELD exception scores for HCC are usually assigned at initial wait
listing (time 0). Therefore, given our analytic objective and in the interests of interpretation, it is
appropriate to compare HCC patients to non-HCC patients, with the latter categorized specifically
by time 0 MELD score. To use MELD at time > 0 would be tantamount to having the HCC and
comparator patients established at different times, which would render the resulting parameter
estimates uninterpretable. More generally, the perils of adjusting for time-dependent factors (such
as MELD at time >0) are well-described in the causal inference literature (e.g., Hernán, Brumback,
& Robins, 2000). Another general setting for which adjustment for time-dependent factors is
explicitly avoided concerns the setting wherein there is a mediator. In evaluating a baseline factor
(e.g., treatment assignment), adjusting for the mediator (e.g., through a time-dependent indicator
covariate) would generally distort the resulting treatment effect; it might then be necessary to
treat the mediator as dependent censoring.
The remainder of this article is organized as follows. In Section 2, we describe the proposed
estimation procedures. In Section 3, we derive large sample properties for the proposed estimators.
We conduct simulation studies in Section 4 to investigate the finite-sample properties of the
proposed estimators. Section 5 provides an application of the methods to the afore-described
end-stage liver disease data. The article concludes with a discussion in Section 6.
2. METHODS
Let $Z_{1i}$ denote the $q_1$-vector of time-constant covariates for subject $i$ ($i = 1, \ldots, n$). Let $Z_{2i}(t)$ be the $q_2$-vector of time-dependent covariates at time $t$, $Z_i(t) = \{Z_{1i}^T, Z_{2i}(t)^T\}^T$, and let $\bar{Z}_i(t) = \{Z_i(u) : 0 \le u \le t\}$ denote the history of $Z_i(\cdot)$ up to time $t$. Let $T_i$ and $C_i$ be the potential failure and censoring times, respectively. We suppose that $C_i = C_{1i} \wedge C_{2i}$, where $a \wedge b = \min\{a, b\}$, $C_{1i}$ is the censoring time due to mechanisms that are independent of $T_i$ given $Z_i(0)$, and $C_{2i}$ denotes the dependent censoring time; that is, $C_{2i}$ is dependent on $T_i$ given $Z_i(0)$. Let $X_i = T_i \wedge C_i$, $Y_i(t) = I(X_i \ge t)$, $\Delta_{1i} = I(T_i \le C_i)$, $\Delta_{2i} = I(C_{2i} \le C_{1i}, C_{2i} < T_i)$, $\Delta_{3i} = (1 - \Delta_{1i})(1 - \Delta_{2i})$, $N_i(t) = I(X_i \le t, \Delta_{1i} = 1)$ and $N_i^C(t) = I(X_i \le t, \Delta_{2i} = 1)$, where $I(\cdot)$ is the indicator function. The observable data are assumed to be $n$ independently and identically distributed copies of $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$. Let $\xi_i$ indicate whether or not subject $i$ is sampled. The variate $\xi_i$ is allowed to depend on $\Delta_{1i}$, $\Delta_{2i}$ and $\Delta_{3i}$, so that the sampling probability can be different for subjects who fail, subjects who are dependently censored and those who are independently censored. Let the cohort be divided into three strata according to the outcome $(\Delta_1, \Delta_2, \Delta_3)$, such that $L_k = \{i : \Delta_{ki} = 1\}$, $k = 1, 2, 3$. Let $p_k = \mathrm{pr}(\xi_i = 1 \mid i \in L_k)$, $p = (p_1, p_2, p_3)^T$ and $\pi_i(p) = \sum_{k=1}^{3} \Delta_{ki}\, \xi_i / p_k$. Note that $\pi_i(p)$ weights the $i$th subject by the inverse probability that the subject is sampled.

We assume that the hazard of failure for individual $i$ is specified by the following proportional hazards model (Cox, 1972),
$$\lambda_i\{t \mid Z_i(0)\} = \lambda_0(t) \exp\{\beta_0^T Z_i(0)\}, \qquad (1)$$

where $\lambda_0(t)$ is an unspecified baseline hazard function for failure time, and $\beta_0$ is a $(q_1 + q_2)$-dimensional regression parameter. Note that we are chiefly interested in inferring the role of $Z_i(0)$ on the failure time hazard, as opposed to $\{Z_i(t) : t > 0\}$, for reasons of interpretation. For example, it is straightforward to predict survival probability from time 0 using a pre-specified value of $Z_i(0)$ along with parameter estimates from model (1). To do so using a model based on $\bar{Z}_i(t)$ would be much more complicated, and would generally require modelling the stochastic properties of the process $Z_i(t)$.
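To illustrate the point about prediction, the following is a minimal numerical sketch (all numbers hypothetical, not taken from the paper) of computing $S(t \mid Z(0)) = \exp[-\Lambda_0(t) \exp\{\beta^T Z(0)\}]$ under a model of the form (1), from a piecewise-constant baseline hazard and a coefficient vector:

```python
import numpy as np

def predict_survival(t_grid, lam0, beta, z0):
    """S(t | Z(0)) = exp[-Lambda_0(t) exp{beta' Z(0)}] under model (1),
    with a piecewise-constant baseline hazard lam0 evaluated on t_grid."""
    dt = np.diff(np.concatenate(([0.0], t_grid)))  # interval widths
    Lambda0 = np.cumsum(lam0 * dt)                 # cumulative baseline hazard
    return np.exp(-Lambda0 * np.exp(beta @ z0))

# hypothetical fitted values: constant baseline hazard 0.1, beta = (0.406, 0.406)
t_grid = np.linspace(0.1, 5.0, 50)
surv = predict_survival(t_grid, np.full(50, 0.1),
                        np.array([0.406, 0.406]), np.array([1.0, 0.0]))
```

The analogous computation under a model in $\bar{Z}_i(t)$ would require a model for the future covariate path, which is precisely the complication the text notes.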
If it were also the case that $C_{2i}$ was independent of $T_i$ given $Z_i(0)$ (unlike the setting of interest), then $\beta_0$ could be consistently estimated by $\hat{\beta}$, the root of the estimating equation $U(\beta) = 0$, where
$$U(\beta) = \sum_{i=1}^{n} \int_0^{\tau} \pi_i(p)\{Z_i(0) - \bar{Z}(\beta, t)\}\, dN_i(t), \qquad (2)$$
where $\tau < \infty$ is the maximum follow-up time, $\bar{Z}(\beta, t) = S^{(1)}(\beta, t)/S^{(0)}(\beta, t)$, and $S^{(d)}(\beta, t) = \sum_{i=1}^{n} \pi_i(p) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\}$, with $a^{\otimes 0} = 1$, $a^{\otimes 1} = a$, and $a^{\otimes 2} = a a^T$. Estimating equations of the same general structure as (2) and arising from IPSW have been proposed by several previous authors, for example, Kalbfleisch & Lawless (1988), Binder (1992), Borgan et al. (2000) and Lin (2000) for the Cox model; Kulich & Lin (2000) for the additive hazards model; and Nan, Kalbfleisch, & Yu (2009) for the accelerated failure time model.
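As a concrete sketch of an estimating equation with the structure of (2), the following simulates a single binary baseline covariate, applies case-control sampling that retains every failure and a 30% sample of censored subjects, and solves the inverse-sampling-weighted score by bisection. Everything here (rates, sample size, sampling fractions) is hypothetical, and censoring is taken to be independent purely for this illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.binomial(1, 0.5, n).astype(float)                 # baseline covariate Z_i(0)
beta_true = 0.5
t = rng.exponential(1.0 / (0.1 * np.exp(beta_true * z)))  # failure times
c = rng.exponential(10.0, n)                              # independent censoring
x = np.minimum(t, c)
delta = (t <= c).astype(float)                            # failure indicator

# case-control sampling: keep every failure, a 30% sample of censored subjects
p = np.where(delta == 1, 1.0, 0.3)
xi = rng.binomial(1, p).astype(float)                     # sampling indicator
w = xi / p                                                # pi_i(p): inverse sampling weight

def score(beta):
    """Weighted score: sum over sampled failures of w_i {z_i - Zbar(beta, x_i)}."""
    u = 0.0
    for i in np.flatnonzero((delta == 1) & (xi == 1)):
        risk = x >= x[i]                                  # risk set at the event time
        s0 = np.sum(w * risk * np.exp(beta * z))
        s1 = np.sum(w * risk * z * np.exp(beta * z))
        u += w[i] * (z[i] - s1 / s0)
    return u

lo, hi = -2.0, 2.0          # bisection; the score is decreasing in beta
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if score(mid) > 0 else (lo, mid)
beta_hat = 0.5 * (lo + hi)
```

With these settings, `beta_hat` should land near the generating value 0.5, up to sampling variability; the point of the weights `w` is that unweighted estimation on the subsample would not target the full-cohort parameter.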
Since $Z_i(t)$ affects both the event and censoring times, and since $Z_i(t)$ is not incorporated into model (1), $C_{2i}$ would generally not be independent of $T_i$ given $Z_i(0)$. In this case, the estimate $\hat{\beta}$ derived from (2) could be substantially biased because (2) does not accommodate the dependence between $C_{2i}$ and $T_i$. We assume that, conditional on the covariate history $\bar{Z}_i(t)$, the hazard of dependent censoring $C_{2i}$ at time $t$ does not further depend on the possibly unobserved failure time $T_i$; that is,
$$\lambda_i^C\{t \mid \bar{Z}_i(T_i), C_i \ge t, T_i \ge t, T_i\} = \lambda_i^C\{t \mid \bar{Z}_i(t), C_i \ge t, T_i \ge t\}. \qquad (3)$$
This fundamental assumption is called "no unmeasured confounders for censoring" (Rubin, 1977; Robins, 1993). Borrowing terminology from the competing risks literature (Kalbfleisch and Prentice, 2002), assumption (3) allows us to identify the cause-specific hazard for $C_{2i}$. We assume a time-dependent Cox proportional hazards model for the right-hand side of Equation (3),
$$\lambda_i^C\{t \mid \bar{Z}_i(t), X_i \ge t\} = \lambda_0^C(t) \exp\{\alpha_0^T V_i(t)\}, \qquad (4)$$
where $\lambda_0^C(t)$ is an unspecified baseline hazard function for dependent censoring, $V_i(t)$ is an $s$-vector consisting of functions of $\bar{Z}_i(t)$, and $\alpha_0$ is an $s$-dimensional regression parameter.
We propose the following estimating function,
$$U_R(\beta) = \sum_{i=1}^{n} \int_0^{\tau} R_i(t)\{Z_i(0) - \bar{Z}_R(\beta, R, t)\}\, dN_i(t), \qquad (5)$$
where
$$\bar{Z}_R(\beta, R, t) = \frac{S^{(1)}(\beta, R, t)}{S^{(0)}(\beta, R, t)}, \qquad S^{(d)}(\beta, R, t) = n^{-1} \sum_{i=1}^{n} R_i(t) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\},$$
$$R_i(t) = \pi_i(p) W_i(t), \qquad W_i(t) = e^{\Lambda_i^C(t)} \phi_i(t),$$
where $\Lambda_i^C(t) = \int_0^t \exp\{\alpha^T V_i(u)\}\, d\Lambda_0^C(u)$ and the function $\phi_i(t)$ in the weight $W_i(t)$ is a stabilization factor. We consider three choices of $\phi_i(t)$. One choice is $\phi_{1i}(t) = 1$. However, when the censoring is heavy, $e^{\Lambda_i^C(t)}$ could be quite large and lead to instability in the estimation. In this case, the choice of $\phi_{2i}(t) = \exp[-\int_0^t \exp\{\alpha^T V_i(0)\}\, d\Lambda_0^C(u)]$ or $\phi_{3i}(t) = \exp[-\Lambda_i^\dagger\{t \mid Z_i(0)\}]$ may be more appropriate, where $\Lambda_i^\dagger(t)$ is based on a time-to-censoring model that uses only the baseline covariate values, $Z_i(0)$. Hereafter, we denote $W_{ji}(t) = e^{\Lambda_i^C(t)} \phi_{ji}(t)$, $j = 1, 2, 3$, and correspondingly estimate $\beta_0$ with $\hat{\beta}_{W_1}$, $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$, the solutions to $U_R(\beta) = 0$ with weights $W_{1i}(t)$, $W_{2i}(t)$ and $W_{3i}(t)$, respectively.
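To see why stabilization matters, the following toy computation (assumed hazard paths, not taken from the paper's data) contrasts the unstabilized weight $W_{1i}(t) = e^{\Lambda_i^C(t)}$ with the two stabilized versions, where $\phi_{2i}$ reuses the fitted censoring model with covariates frozen at baseline and $\phi_{3i}$ uses a separately specified baseline-only model:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]

# assumed subject-specific censoring hazard: rises after a covariate change at t = 2
lam_td = 0.2 + 0.3 * (t > 2.0)
Lam_td = np.concatenate(([0.0], np.cumsum(lam_td[:-1]) * dt))  # Lambda_i^C(t)

# baseline-only cumulative hazards for the two stabilizers (hypothetical values)
Lam_base = 0.2 * t      # same model with covariates frozen at Z_i(0)  -> phi_2i
Lam_dagger = 0.25 * t   # separately fitted baseline-only model        -> phi_3i

W1 = np.exp(Lam_td)                 # phi_1i(t) = 1: grows unchecked under heavy censoring
W2 = np.exp(Lam_td - Lam_base)      # phi_2i(t) = exp{-Lambda_i^C(t | Z_i(0))}
W3 = np.exp(Lam_td - Lam_dagger)    # phi_3i(t) = exp{-Lambda_i^dagger(t)}
```

At the end of follow-up, `W1` has grown to roughly $e^{1.9}$, while the stabilized weights remain much closer to 1, which is exactly the behaviour motivating $\phi_{2i}$ and $\phi_{3i}$.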
The weight $W_{1i}(t)$ can be estimated using $\exp\{\hat{\Lambda}_i^C(t)\}$, where
$$\hat{\Lambda}_i^C(t) = \int_0^t \exp\{\hat{\alpha}^T V_i(s)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}),$$
$$\hat{\Lambda}_0^C(t, \alpha) = \sum_{i=1}^{n} \int_0^t \left\{ \sum_{j=1}^{n} \pi_j(p) Y_j(s) \exp\{\alpha^T V_j(s)\} \right\}^{-1} \pi_i(p)\, dN_i^C(s),$$
where $\hat{\alpha}$, the partial likelihood estimate of $\alpha_0$, is the root of $U_C(\alpha) = 0$, where
$$U_C(\alpha) = \sum_{i=1}^{n} \int_0^{\tau} \{V_i(t) - \bar{V}(\alpha, p, t)\}\, \pi_i(p)\, dN_i^C(t)$$
is an IPSW-based estimating function, with $\bar{V}(\alpha, p, t) = S_C^{(1)}(\alpha, p, t)/S_C^{(0)}(\alpha, p, t)$ and $S_C^{(d)}(\alpha, p, t) = n^{-1} \sum_{i=1}^{n} \pi_i(p) Y_i(t) V_i(t)^{\otimes d}\, e^{\alpha^T V_i(t)}$.
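A small simulation sketch of the inverse-sampling-weighted Breslow estimator $\hat{\Lambda}_0^C$ follows. All rates and sampling fractions are hypothetical, and $\alpha$ is treated as known rather than estimated from $U_C(\alpha) = 0$, to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 800
v = rng.normal(size=n)                    # censoring-model covariate V_i (time-constant here)
alpha = 0.3                               # treated as known for this sketch
c2 = rng.exponential(1.0 / (0.2 * np.exp(alpha * v)))  # dependent censoring, baseline hazard 0.2
tf = rng.exponential(1.0 / 0.15, n)       # failure times
x = np.minimum(tf, c2)
d2 = (c2 < tf).astype(float)              # dependent-censoring indicator Delta_2i

# keep every dependently censored subject, a 50% sample of the rest
keep = (d2 == 1) | (rng.random(n) < 0.5)
w = np.where(d2 == 1, 1.0, 2.0) * keep    # pi_i(p): inverse sampling weights

def breslow_C(t_eval):
    """IPSW Breslow estimate of Lambda_0^C(t_eval)."""
    out = 0.0
    for i in np.flatnonzero((d2 == 1) & keep):
        if x[i] <= t_eval:
            denom = np.sum(w * (x >= x[i]) * np.exp(alpha * v))
            out += w[i] / denom
    return out

est = breslow_C(1.0)   # estimand: Lambda_0^C(1) = 0.2 under the generating model
```

Over-sampling the dependently censored subjects, as the text advocates, is what keeps the numerator events of this estimator plentiful in the subsample.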
The weight $W_{2i}(t)$ can be estimated using $\hat{\phi}_{2i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{2i}(t) = \exp[-\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid Z_i(0)\}]$; that is, $\phi_{2i}(t)$ is estimated using the same model (4), but only using baseline covariate values,
$$\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid Z_i(0)\} = \int_0^t \exp\{\hat{\alpha}^T V_i(0)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}).$$

The weight $W_{3i}(t)$ can be estimated by $\hat{\phi}_{3i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{3i}(t) = \exp\{-\hat{\Lambda}_i^\dagger(t)\}$, with $\phi_{3i}(t)$ estimated using an additional baseline model for $C_{2i}$,
$$\lambda_i^\dagger\{t \mid Z_i(0), C_i \ge t, T_i, T_i \ge t\} = \lambda_0^\dagger(t) \exp\{\alpha^{\dagger T} V_i(0)\},$$
such that we have
$$\hat{\Lambda}_i^\dagger(t) = \int_0^t \exp\{\hat{\alpha}^{\dagger T} V_i(0)\}\, d\hat{\Lambda}_0^\dagger(s, \hat{\alpha}^\dagger),$$
$$\hat{\Lambda}_0^\dagger(t, \alpha^\dagger) = \sum_{i=1}^{n} \int_0^t \left\{ \sum_{j=1}^{n} \pi_j(p) Y_j(s) \exp\{\alpha^{\dagger T} V_j(0)\} \right\}^{-1} \pi_i(p)\, dN_i^C(s),$$


and $\hat{\alpha}^\dagger$ is the partial likelihood estimate of $\alpha^\dagger$ under the model for dependent censoring with hazard $\lambda_i^\dagger(t)$. Weight stabilizers analogous to $\phi_{3i}(t)$ have been suggested, for example, by Robins & Finkelstein (2000) and Hernán, Brumback, & Robins (2000). We propose the stabilizer $\phi_{2i}(t)$ as an alternative. The performance of each of $W_{1i}(t)$, $W_{2i}(t)$ and $W_{3i}(t)$ is compared through simulation studies described in Section 4.

To summarize, $\hat{\beta}_{W_j}$ ($j = 1, 2, 3$) is computed as the root of the estimating equation in (5), with $R_i(t)$ replaced by $\hat{R}_i(t)$. After estimating $\beta_0$, the cumulative baseline hazard, $\Lambda_0(t)$, can then be estimated by
$$\hat{\Lambda}_0^W(t) = \sum_{i=1}^{n} \int_0^t \left[ \sum_{\ell=1}^{n} \hat{R}_\ell(s) Y_\ell(s) \exp\{\hat{\beta}_{W_j}^T Z_\ell(0)\} \right]^{-1} \hat{R}_i(s)\, dN_i(s).$$
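On a tiny hand-built dataset (four subjects, with a hypothetical weight path standing in for $\hat{R}_i(t)$ and an assumed fitted coefficient), the cumulative baseline hazard estimator above reduces to a weighted Breslow sum over the observed failure times:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])    # follow-up times
dN = np.array([1, 0, 1, 1])           # failures observed at t = 1, 3, 4
z0 = np.array([0.0, 1.0, 0.0, 1.0])   # baseline covariate Z_i(0)
beta_hat = 0.4                        # assumed fitted coefficient

def R(i, s):
    """Hypothetical estimated weight path R_hat_i(s)."""
    return 1.0 + 0.1 * s * (i % 2)

def Lambda0_W(t_eval):
    """Sum over failure times s <= t_eval of
    R_i(s) / sum_l R_l(s) Y_l(s) exp{beta' z_l}."""
    out = 0.0
    for i in range(len(x)):
        s = x[i]
        if dN[i] == 1 and s <= t_eval:
            denom = sum(R(l, s) * (x[l] >= s) * np.exp(beta_hat * z0[l])
                        for l in range(len(x)))
            out += R(i, s) / denom
    return out
```

The first increment, at $t = 1$ with all four subjects at risk, works out to $1 / (2 + 2.2\,e^{0.4})$, which the function reproduces.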

3. ASYMPTOTIC PROPERTIES
The following conditions are assumed throughout this section.

(a) $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$, $i = 1, \ldots, n$ are independently and identically distributed.
(b) $P\{Y_i(\tau) = 1\} > 0$.
(c) $|Z_{ij}(0)| + \int_0^{\tau} |dZ_{ij}(t)| < B_Z < \infty$, where $Z_{ij}$ is the $j$th component of $Z_i$ and $B_Z$ is a constant.
(d) There exists a neighbourhood $\mathcal{B}$ of $\beta_0$ such that $\sup_{u \in [0,\tau],\, \beta \in \mathcal{B}} \| S^{(d)}(\beta, R, u) - s^{(d)}(\beta, R, u) \| \to 0$ in probability for $d = 0, 1, 2$, where $s^{(d)}(\beta, R, u) = E\{S^{(d)}(\beta, R, u)\}$ is absolutely continuous, for $\beta \in \mathcal{B}$, uniformly in $u \in (0, \tau]$, and $E(\cdot)$ denotes expectation. Moreover, $s^{(0)}(\beta, R, u)$ is assumed to be bounded away from zero.
(e) There exists a neighbourhood $\mathcal{B}_C$ of $\alpha_0$ such that $\sup_{u \in [0,\tau],\, \alpha \in \mathcal{B}_C} \| S_C^{(d)}(\alpha, p, u) - s_C^{(d)}(\alpha, u) \| \to 0$ in probability for $d = 0, 1, 2$, where $s_C^{(d)}(\alpha, u) = E\{S_C^{(d)}(\alpha, p, u)\}$ is absolutely continuous, for $\alpha \in \mathcal{B}_C$, uniformly in $u \in (0, \tau]$. Moreover, $s_C^{(0)}(\alpha, u)$ is assumed to be bounded away from zero.
(f) The matrices $A(\beta_0)$ and $A_C(\alpha_0)$ are positive definite, where
$$A(\beta) = \int_0^{\tau} \left[ s^{(2)}(\beta, R, u)/s^{(0)}(\beta, R, u) - \bar{z}(\beta, R, u)^{\otimes 2} \right] dF(u),$$
$$A_C(\alpha) = \int_0^{\tau} \left[ s_C^{(2)}(\alpha, u)/s_C^{(0)}(\alpha, u) - \bar{v}(\alpha, u)^{\otimes 2} \right] dF^C(u),$$
with $\bar{z}(\beta, R, u) = s^{(1)}(\beta, R, u)/s^{(0)}(\beta, R, u)$, $\bar{v}(\alpha, u) = s_C^{(1)}(\alpha, u)/s_C^{(0)}(\alpha, u)$, $F(u) = E\{R_i(u) N_i(u)\}$, and $F^C(u) = E\{\pi_i(p_0) N_i^C(u)\}$.
(g) $\Lambda_0(\tau) < \infty$, $\Lambda_0^C(\tau) < \infty$.
We describe the asymptotic properties of the proposed estimators in the following theorems.

Theorem 1. Under conditions (a)–(g), as $n \to \infty$, $n^{1/2}(\hat{\alpha} - \alpha_0)$ converges to a mean zero Normal distribution with covariance $A_C(\alpha_0)^{-1} \Sigma(\alpha_0) A_C(\alpha_0)^{-1}$, where $A_C(\alpha_0)$ is as defined by Condition (f) and $\Sigma(\alpha) = E\{\psi_i(\alpha, p)^{\otimes 2}\}$, with the $\psi_i(\alpha, p)$ being asymptotically independent and identically distributed for $i = 1, \ldots, n$; we defer the definition of $\psi_i(\alpha, p)$ to the Supplementary Materials document.

In the Supplementary Materials document, we show that $n^{1/2}(\hat{\alpha} - \alpha_0) = A_C(\alpha_0)^{-1} n^{-1/2} \sum_{i=1}^{n} \psi_i(\alpha_0, p_0) + o_p(1)$; hence, $n^{1/2}(\hat{\alpha} - \alpha_0)$ is essentially a scaled sum of $n$ independent and identically distributed random quantities with mean zero and finite variance. By the Multivariate Central Limit Theorem (MCLT) and empirical process theory, the asymptotic normality is proved.
Theorem 2. Under conditions (a)–(g), as $n \to \infty$, $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ converges to a mean zero Normal distribution with covariance $A(\beta_0)^{-1} \Sigma(\beta_0, R) A(\beta_0)^{-1}$, with $A(\beta)$ having been defined in Condition (f) and where $\Sigma(\beta, R) = E\{\eta_i(\beta, R)^{\otimes 2}\}$, with the $\eta_i(\beta, R)$ being independent and identically distributed mean 0 variates ($i = 1, \ldots, n$) asymptotically. The explicit definition of $\eta_i(\beta, R)$ is provided in the Supplementary Materials.
The proof of Theorem 2 (provided in the Supplementary Materials) begins by decomposing $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ into $n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, \hat{p}) - \hat{\Lambda}_0^C(t; \hat{\alpha}, p_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, p_0) - \hat{\Lambda}_0^C(t; \alpha_0, p_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \alpha_0, p_0) - \Lambda_0^C(t)\}$. Then $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ can be expressed asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$. Combining this result and the Functional Delta Method, we can show that $n^{1/2}\{\hat{R}_i(t) - R_i(t)\}$ can be written asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$. Finally, through the Functional Delta Method, the asymptotic normality of $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ is obtained.
The expression for the asymptotic covariance of $\hat{\beta}_{W_1}$ is very complicated and difficult to implement numerically. A practical way to estimate the variance of the proposed estimators is to treat the weights $R_i(t)$ as known rather than estimated. In the setting where the weight function is known, results derived in the Supplementary Materials show that
$$n^{1/2}(\hat{\beta} - \beta_0) = A(\beta_0)^{-1} n^{-1/2} \sum_{i=1}^{n} U_i(\beta_0, R) + o_p(1), \qquad (6)$$
with $U_i(\beta_0, R) = \int_0^{\tau} \{Z_i(0) - \bar{z}(\beta_0, R, t)\} R_i(t)\, dM_i(t)$ and $dM_i(t) = dN_i(t) - Y_i(t)\, d\Lambda_i(t)$. Hence, $n^{1/2}(\hat{\beta} - \beta_0)$ is asymptotically a scaled sum of independent and identically distributed zero-mean random quantities with finite variance. Therefore, the variance of $\hat{\beta}_{W_1}$ is estimated by $\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, where $\Sigma^{\ddagger}(\beta, R) = E\{U_i(\beta, R)^{\otimes 2}\}$, and where $\hat{A}(\hat{\beta})$ and $\hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R})$ are calculated by replacing limiting values with their corresponding empirical counterparts.
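Once per-subject score contributions are in hand, the "weights treated as fixed" sandwich estimator $\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$ assembles in a few lines. In this sketch both the contributions `U` and the information matrix `A_hat` are stand-in toy values, not outputs of the full fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# stand-in per-subject weighted score contributions U_i(beta_hat, R_hat), q = 1 covariate
U = rng.normal(0.0, 1.3, size=(n, 1))
A_hat = np.array([[2.5]])              # stand-in empirical information matrix

Sigma_hat = (U.T @ U) / n              # empirical counterpart of E{U_i U_i^T}
A_inv = np.linalg.inv(A_hat)
V_hat = A_inv @ Sigma_hat @ A_inv / n  # estimated Var(beta_hat)
se = np.sqrt(np.diag(V_hat))
```

The extra factor of `1/n` reflects the scaling in (6): the sandwich $A^{-1} \Sigma A^{-1}$ is the variance of $n^{1/2}(\hat{\beta} - \beta_0)$, so the variance of $\hat{\beta}$ itself divides by $n$.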
By similar arguments, the asymptotic normality holds for $n^{1/2}(\hat{\beta}_{W_2} - \beta_0)$ and $n^{1/2}(\hat{\beta}_{W_3} - \beta_0)$. However, the covariances will be even more complicated than that of $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$. Therefore, similarly, we can treat $R_i(t)$ as fixed to calculate the variances of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$. Note that (6) holds when using $W_2$ or $W_3$, such that the variances of each of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$ can be estimated by $\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, with $\hat{R}_i(t)$ being replaced by $\pi_i(\hat{p}) \hat{W}_{2i}(t)$ and $\pi_i(\hat{p}) \hat{W}_{3i}(t)$, respectively.
Our final asymptotic result pertains to the proposed cumulative baseline hazard function estimator.

Theorem 3. Under conditions (a)–(g), $n^{1/2}\{\hat{\Lambda}_0^W(t) - \Lambda_0(t)\}$ converges to a zero-mean Gaussian process as $n \to \infty$, with an explicit covariance function estimator.

A proof of Theorem 3 is provided in the Supplementary Materials, including definitions pertinent to the limiting covariance function.

Table 1: Simulation study: data configurations.

Parameter     1A       1B       2A       2B       3A       3B
λ0            0.1      0.1      0.1      0.1      0.05     0.05
β1            0.406    0.406    0.406    0.406    -0.406   -0.406
β2            0.406    0.406    0.406    0.406    0.406    0.406
λ0^C          0.1      0.1      0.05     0.05     0.05     0.05
α1            -0.5     -0.5     -0.406   -0.406   -0.406   -0.406
α2            0.693    0.693    1.010    1.010    1.010    1.010
α3            0.095    0.095    0.095    0.095    0.095    0.095
%T            51%      51%      62%      62%      27%      27%
%C2           44%      44%      28%      28%      45%      45%
%C1           5%       5%       10%      10%      28%      28%
corr(T, C2)   0.13

You can leave a response, or trackback from your own site.

Leave a Reply

The Canadian Journal of Statistics

The Canadian Journal of Statistics
Vol. 42, No. 3, 2014, Pages 365–383
La revue canadienne de statistique

365

Semiparametric methods for survival analysis
of case-control data subject to dependent
censoring
Douglas E. SCHAUBEL1 *, Hui ZHANG2 , John D. KALBFLEISCH1 and Xu SHU1
1 Department
2 U.S.

of Biostatistics, University of Michigan, Ann Arbor, MI, USA
Food and Drug Administration, Silver Spring, MD, USA

Key words and phrases: Case-control study; Cox regression; dependent censoring; estimating equation;
inverse weighting
MSC 2010: Primary 62N01; secondary 62D05
Abstract: Case-control sampling can be an ef?cient and cost-saving study design, wherein subjects are
selected into the study based on the outcome of interest. It was established long ago that proportional hazards
regression can be applied to case-control data. However, each of the various estimation techniques available
assumes that failure times are independently censored. Since independent censoring is often violated in
observational studies, we propose methods for Cox regression analysis of survival data obtained through
case-control sampling, but subject to dependent censoring. The proposed methods are based on weighted
estimating equations, with separate inverse weights used to account for the case-control sampling and to
correct for dependent censoring. The proposed estimators are shown to be consistent and asymptotically
normal, and consistent estimators of the asymptotic covariance matrices are derived. Finite-sample properties
of the proposed estimators are examined through simulation studies. The methods are illustrated through an
analysis of pre-transplant mortality among end-stage liver disease patients obtained from a national organ
failure registry. The Canadian Journal of Statistics 42: 365–383; 2014 © 2014 Statistical Society of Canada
Résumé : L'échantillonnage cas-témoins peut constituer un plan d'expérience efficace et économique dans
le cadre duquel les sujets sont choisis pour l'étude en fonction du phénomène étudié. Il est établi depuis
longtemps que le modèle de régression à risques proportionnels peut s'appliquer à des données cas-témoins.
Cependant, toutes les techniques d'estimation existantes supposent que les temps de défaillance sont
censurés de façon indépendante. Étant donné que l'indépendance de la censure est souvent bafouée dans le
cadre d'études observationnelles, les auteurs proposent des méthodes pour la régression de Cox de données
de survie sujettes à la censure dépendante obtenues par un échantillonnage cas-témoins. Les méthodes
proposées se fondent sur des équations d'estimation pondérées dont les poids séparés et inverses permettent
de tenir compte de l'échantillonnage cas-témoins et de corriger le biais lié à la censure dépendante. Les
auteurs montrent que les estimateurs proposés sont convergents et asymptotiquement normaux. Ils obtiennent
également des estimateurs convergents pour les matrices de covariance asymptotique. Ils examinent les
propriétés de ces estimateurs sur des échantillons de taille finie par voie de simulation et illustrent les méthodes
au moyen d'une analyse de données sur le taux de mortalité prétransplantation chez les patients atteints
d'une maladie hépatique en phase terminale provenant d'un registre national d'organes défaillants. La revue
canadienne de statistique 42: 365–383; 2014 © 2014 Société statistique du Canada

1. INTRODUCTION
Case-control and case-cohort sampling schemes are designed to over-sample subjects expected
to provide relatively large amounts of information in terms of parameter estimation. (*Author to whom
correspondence may be addressed. E-mail: deschau@umich.edu.) In the case-control study, subjects
observed to experience the event of interest (the cases) are over-sampled,
while only a fraction of subjects not experiencing the event of interest (the controls) are selected. In
the original formulation of case-control study, all cases are selected but, more generally, cases are
over-sampled, such that their sampling fraction would be many times greater than that expected
through random sampling. In the context of proportional hazards regression (Cox, 1972), the case-cohort design (Prentice, 1986) was proposed, in which one selects all cases, as well as a random
sample of the study population (the sub-cohort). The case-cohort design is especially well-suited
to Cox regression since the sub-cohort representing the risk set can be used to compute partial
likelihood contributions at each observed event.
A number of methods have been proposed for the regression analysis of case-control and
case-cohort studies under the proportional hazards model. Prentice & Breslow (1978) adapted
the Cox model for use in case-control studies, and studied in detail the correspondence between
Cox regression in the presence of prospective and retrospective sampling. Oakes (1981) provided a partial likelihood interpretation for the likelihood function used by Prentice & Breslow
(1978) and, hence, a theoretical justification of the estimation procedure. Self & Prentice (1988)
provided a rigorous theoretical evaluation of the case-cohort design of Prentice (1986), including
detailed asymptotic derivations and various efficiency calculations. Kalbfleisch & Lawless (1988)
developed pseudo-likelihood methods for semiparametric estimation of the illness-death model,
applicable to case-cohort and other retrospective sampling designs. Lin & Ying (1993) considered
general missing data problems in the context of Cox regression, with the case-cohort design cast
as a special case. The asymptotic development by Lin & Ying (1993) leads to a different variance
estimator for the Prentice (1986) case-cohort estimator than that derived by Self & Prentice (1988).
Much of the subsequent research on the case-cohort design was focussed on modifications
to the sampling scheme and/or analysis in order to increase efficiency. For example, Chen & Lo
(1999) derived a class of estimators for case-cohort sampling; essentially, the methods involve
adding future cases to the risk set (in addition to the sub-cohort) in order to improve efficiency
relative to the original method by Prentice (1986). Borgan et al. (2000) proposed stratified
case-cohort designs to improve efficiency. Motivated by family-based studies, Moger, Pawitan, &
Borgan (2008) developed case-cohort methods for fitting frailty models to large clustered data sets.
The methods were inspired by the work of Kalbfleisch & Lawless (1988) and Borgan et al. (2000).
The motivation for the work of Moger, Pawitan, & Borgan (2008) was to reduce computational
burden, rather than dollar cost.
With respect to the case-control study, much of the development in the last 15 years has been
directed at clustered data. For instance, in the case-control family study, one selects a random
sample of cases, a random sample of controls, as well as the family members of the cases and
controls. Li, Yang, & Schwartz (1998) developed parametric methods for fitting copula models to
clustered survival data obtained through case-control family sampling. Shih & Chatterjee (2002)
extended this work to the semiparametric setting through quasi-likelihood procedures. Hsu &
Gorfine (2006) developed pseudo-likelihood methods for fitting frailty models to case-control
family data using an approach similar to that of Shih & Chatterjee (2002). Hsu et al. (2004)
developed semiparametric methods for frailty models based on case-control family sampling,
with the focus being on the conditional regression parameter. Gorfine, Zucker, & Hsu (2009)
further evaluated frailty models based on case-control family data through a pseudo-full likelihood
approach, and derived rigorous large-sample theory.
There have been several evaluations of cohort sampling in a general sense. Chen & Lo (1999)
described the close theoretical connection between the case-cohort and case-control study within
the framework of Cox regression. Chen (2001) considered “generalized case-cohort sampling”
(nested case-control, case-cohort and case-control, all within the same framework) in the context
of Cox regression. Gray (2009) studied various cohort sampling designs, focusing on Inverse
Probability of Selection Weighted (IPSW) estimators applied to the Cox model (e.g., Binder,
1992; Barlow, 1994; Borgan et al., 2000; Lin, 2000; Pan & Schaubel, 2008; Nan, Kalbfleisch, &
Yu, 2009).
The motivation for subsampling of large epidemiologic databases is well-established in the
literature. Nothing prevents such data from being subject to dependent censoring. However, each
of the subsampling methods cited thus far (and, to the best of our knowledge, all existing methods
for Cox regression analysis of case-control data) assumes that subjects are censored in a manner
independent of the failure rate. In most practical applications in which the independent censoring
assumption is violated, censoring is actually a mixture of independent (e.g., administrative, or
random loss to follow-up) and dependent (e.g., censoring of pre-treatment death by the initiation of
treatment). In such cases, it is necessary to distinguish between subjects who were independently
and dependently censored. The most popular method for handling dependent censoring is Inverse
Probability of Censoring Weighting (IPCW) proposed by Robins & Rotnitzky (1992). From this
perspective, case-control studies appear to extend nicely to the dependent censoring setting. In
particular, since IPCW entails modelling the dependent censoring hazard, there is an incentive to
over-sample dependently censored subjects.
We propose semiparametric methods for the analysis of failure time data generated by case-control sampling and subject to dependent censoring. In the setting of interest, we over-sample
dependently censored subjects, since correcting for dependent censoring through IPCW requires
that the dependent censoring process be modelled. The dependent censoring hazard is modelled
using existing case-control survival analysis methods featuring Inverse Probability of Sampling
Weighting (IPSW). Parameter estimation for the failure time hazard model then proceeds through
weighted estimating equations, with the weights correcting for both the case-control sampling and
the dependent censoring. We derive asymptotic properties of the proposed estimator, in a manner
which accounts for the randomness in the estimated IPCW and IPSW components. Since the
derived asymptotic variance is complicated, we suggest an alternative variance estimator that is
much easier to program and much faster to compute. Finite-sample performance of the proposed
procedures is evaluated through simulation.
In the evaluation of a proposed subsampling method, it is useful to assess not only if the method
can be implemented in a real data set, but, in addition, whether the method gives reasonable answers
in practice. Along these lines, we apply the proposed methods to an analysis of pre-transplant
survival among end-stage liver disease (ESLD) patients. In this application, the full cohort is
available, meaning that the results based on the full-cohort can serve as a target for the results
obtained through the proposed case-control sampling methods. With respect to background, liver
transplantation is the preferred method of treatment. However, there are thousands more patients
needing a liver transplant than there are available donor organs. As such, patients clinically suitable
for receiving a deceased-donor liver transplant are put on a wait-list, with the pertinent time origin
(t = 0) then being the date of wait listing. The receipt of a liver transplant does not censor death
but, naturally, does censor pre-transplant death (i.e., the time until death in the absence of liver
transplantation). In liver transplantation, patients are ranked on the wait-list based on their most
recent Model for End-stage Liver Disease (MELD) score. MELD was chosen as the scoring system
largely because it is a very strong predictor of pre-transplant mortality (e.g., Wiesner et al., 2001;
Kremers et al., 2004; Huo et al., 2005; Merion et al., 2005; Basto et al., 2008; Subramanian et al.,
2010). Since MELD is time-dependent, an analysis of the effect of baseline risk factors (i.e.,
measured at time t = 0) on the pre-transplant death hazard could result in substantial bias if liver
transplantation were treated as independent censoring. This and other related issues in the liver
transplant setting are discussed by Schaubel et al. (2009).
In certain cases, special exceptions are made under which a wait-listed patient may be assigned
a MELD score which is higher than that calculated, in an attempt to reflect the patient's perceived

medical urgency. The most frequent occurrence of such MELD exceptions is for patients with
hepatocellular carcinoma (HCC), a form of liver cancer. HCC patients are typically assigned a
MELD score of 22, which is often considerably higher than the score based on their laboratory
measures. To our knowledge, no existing analyses in the liver transplant literature have quantified
whether the MELD score of 22 accurately reflects the true "MELD-equivalent" wait-list mortality
risk faced by HCC patients. As a primary example in this article, we compare HCC versus
non-HCC patients, with the latter group categorized by baseline MELD score. Since MELD (at
time 0 but, also, after time 0) affects both death and liver transplantation probabilities, liver
transplantation is handled as dependent censoring of pre-transplant death time.
It should be noted that MELD exception scores for HCC are usually assigned at initial wait
listing (time 0). Therefore, given our analytic objective and in the interests of interpretation, it is
appropriate to compare HCC patients to non-HCC patients, with the latter categorized specifically
by time 0 MELD score. To use MELD at time > 0 would be tantamount to having the HCC and
comparator patients established at different times, which would render the resulting parameter
estimates uninterpretable. More generally, the perils of adjusting for time-dependent factors (such
as MELD at time >0) are well-described in the causal inference literature (e.g., Hernán, Brumback,
& Robins, 2000). Another general setting for which adjustment for time-dependent factors is
explicitly avoided concerns the setting wherein there is a mediator. In evaluating a baseline factor
(e.g., treatment assignment), adjusting for the mediator (e.g., through a time-dependent indicator
covariate) would generally distort the resulting treatment effect; it might then be necessary to
treat the mediator as dependent censoring.
The remainder of this article is organized as follows. In Section 2, we describe the proposed
estimation procedures. In Section 3, we derive large sample properties for the proposed estimators.
We conduct simulation studies in Section 4 to investigate the finite-sample properties of the
proposed estimators. Section 5 provides an application of the methods to the afore-described
end-stage liver disease data. The article concludes with a discussion in Section 6.
2. METHODS
Let $Z_{1i}$ denote the $q_1$-vector of time-constant covariates for subject $i$ ($i = 1, \ldots, n$). Let $Z_{2i}(t)$
be the $q_2$-vector of time-dependent covariates at time $t$, $Z_i(t) = \{Z_{1i}^T, Z_{2i}(t)^T\}^T$, and let $\bar{Z}_i(t) =
\{Z_i(u) : 0 \le u \le t\}$ denote the history of $Z_i(\cdot)$ up to time $t$. Let $T_i$ and $C_i$ be the potential failure
and censoring times, respectively. We suppose that $C_i = C_{1i} \wedge C_{2i}$, where $a \wedge b = \min\{a, b\}$,
$C_{1i}$ is the censoring time due to mechanisms that are independent of $T_i$ given $Z_i(0)$, and $C_{2i}$
denotes the dependent censoring time; that is, $C_{2i}$ is dependent on $T_i$ given $Z_i(0)$. Let $X_i = T_i \wedge
C_i$, $Y_i(t) = I(X_i \ge t)$, $\Delta_{1i} = I(T_i \le C_i)$, $\Delta_{2i} = I(C_{2i} \le C_{1i}, C_{2i} < T_i)$, $\Delta_{3i} = (1 - \Delta_{1i})(1 -
\Delta_{2i})$, $N_i(t) = I(X_i \le t, \Delta_{1i} = 1)$ and $N_i^C(t) = I(X_i \le t, \Delta_{2i} = 1)$, where $I(\cdot)$ is the indicator
function. The observable data are assumed to be $n$ independently and identically distributed copies
of $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$. Let $\delta_i$ indicate whether or not subject $i$ is sampled. The variate $\delta_i$
is allowed to depend on $\Delta_{1i}$, $\Delta_{2i}$ and $\Delta_{3i}$, so that the sampling probability can be different
for subjects who fail, subjects who are dependently censored and those who are independently
censored. Let the cohort be divided into three strata according to the outcome $(\Delta_1, \Delta_2, \Delta_3)$,
such that $\mathcal{L}_k = \{i : \Delta_{ki} = 1\}$, $k = 1, 2, 3$. Let $p_k = \mathrm{pr}(\delta_i = 1 \mid i \in \mathcal{L}_k)$, $\mathbf{p} = (p_1, p_2, p_3)^T$ and
$\pi_i(\mathbf{p}) = \sum_{k=1}^{3} \Delta_{ki}\,\delta_i / p_k$. Note that $\pi_i(\mathbf{p})$ weights the $i$th subject by the inverse probability that
the subject is sampled.
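To make the weighting concrete, here is a minimal sketch (our own, not the authors' code; the array and function names are hypothetical) of how $\pi_i(\mathbf{p})$ could be computed from stratum membership and the sampling indicators:

```python
import numpy as np

def sampling_weights(stratum, sampled, p):
    """Inverse-probability-of-sampling weights pi_i(p).

    stratum : int array with values 1, 2, 3 (failed, dependently
              censored, independently censored), i.e. which Delta_ki = 1
    sampled : 0/1 array, the selection indicator delta_i
    p       : (p1, p2, p3), the stratum-specific sampling probabilities
    """
    p = np.asarray(p, dtype=float)
    # sum_k Delta_ki * delta_i / p_k reduces to delta_i / p_{stratum_i}
    return np.asarray(sampled) / p[np.asarray(stratum) - 1]

w = sampling_weights([1, 2, 3, 1], [1, 1, 0, 1], (0.5, 0.25, 0.1))
# sampled subjects receive weight 1/p_k; unsampled subjects receive 0
```

Subjects not selected into the case-control sample get weight zero and therefore drop out of any weighted sum over the cohort.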
We assume that the hazard of failure for individual $i$ is specified by the following proportional
hazards model (Cox, 1972),

$$\lambda_i\{t \mid Z_i(0)\} = \lambda_0(t) \exp\{\beta_0^T Z_i(0)\}, \qquad (1)$$
where $\lambda_0(t)$ is an unspecified baseline hazard function for failure time, and $\beta_0$ is a $(q_1 + q_2)$-dimensional regression parameter. Note that we are chiefly interested in inferring the role of $Z_i(0)$
on the failure time hazard, as opposed to $\{Z_i(t) : t > 0\}$, for reasons of interpretation. For example,
it is straightforward to predict survival probability from time 0 using a pre-specified value of $Z_i(0)$
along with parameter estimates from model (1). To do so using a model based on $\bar{Z}_i(t)$ would be
much more complicated, and would generally require modelling the stochastic properties of the
process $Z_i(t)$.
If it were also the case that $C_{2i}$ was independent of $T_i$ given $Z_i(0)$ (unlike the setting of interest),
then $\beta_0$ could be consistently estimated by $\hat{\beta}$, the root of the estimating equation $U(\beta) = 0$, where

$$U(\beta) = \sum_{i=1}^{n} \int_0^{\tau} \pi_i(\mathbf{p}) \{Z_i(0) - \bar{Z}(\beta, t)\}\, dN_i(t), \qquad (2)$$

where $\tau < \infty$ is the maximum follow-up time, $\bar{Z}(\beta, t) = S^{(1)}(\beta, t)/S^{(0)}(\beta, t)$ and $S^{(d)}(\beta, t) =
\sum_{i=1}^{n} \pi_i(\mathbf{p}) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\}$, with $a^{\otimes 0} = 1$, $a^{\otimes 1} = a$, and $a^{\otimes 2} = a a^T$. Estimating equations of the same general structure as (2) and arising from IPSW have been proposed by several
previous authors, for example, Kalbfleisch & Lawless (1988), Binder (1992), Borgan et al. (2000)
and Lin (2000) for the Cox model; Kulich & Lin (2000) for the additive hazards model; and Nan,
Kalbfleisch, & Yu (2009) for the accelerated failure time model.
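An estimating equation of this form can be solved directly. The following is an illustrative sketch (ours, under simplifying assumptions: time-constant covariates, continuous event times with no ties, and a generic weight vector standing in for $\pi_i(\mathbf{p})$; the function names are hypothetical):

```python
import numpy as np

def weighted_cox_score(beta, X, delta, Z, w):
    """Weighted Cox score U(beta): sum_i w_i * delta_i * {Z_i - Zbar(beta, X_i)}."""
    U = np.zeros(Z.shape[1])
    risk = np.exp(Z @ beta)
    for i in np.flatnonzero(delta):
        at_risk = X >= X[i]                      # risk set at subject i's event time
        s0 = np.sum(w[at_risk] * risk[at_risk])
        s1 = (w[at_risk] * risk[at_risk]) @ Z[at_risk]
        U += w[i] * (Z[i] - s1 / s0)
    return U

def solve_score(X, delta, Z, w, iters=30, eps=1e-6):
    """Newton-Raphson on U(beta) = 0 with a finite-difference Jacobian."""
    beta = np.zeros(Z.shape[1])
    for _ in range(iters):
        U = weighted_cox_score(beta, X, delta, Z, w)
        J = np.column_stack([
            (weighted_cox_score(beta + eps * e, X, delta, Z, w) - U) / eps
            for e in np.eye(len(beta))])
        beta = beta - np.linalg.solve(J, U)
    return beta

rng = np.random.default_rng(0)
n = 200
Z = rng.binomial(1, 0.5, size=(n, 2)).astype(float)
T = rng.exponential(1.0 / (0.1 * np.exp(Z @ np.array([0.4, 0.4]))))
X, delta = np.minimum(T, 12.0), (T <= 12.0).astype(int)
beta_hat = solve_score(X, delta, Z, np.ones(n))  # full-cohort weights, for illustration
```

With all weights equal to one this reduces to the ordinary Cox partial-likelihood score; replacing the weight vector by the inverse sampling probabilities gives the IPSW version.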
Since $Z_i(t)$ affects both the event and censoring times, and since $Z_i(t)$ is not incorporated into
model (1), $C_{2i}$ would generally not be independent of $T_i$ given $Z_i(0)$. In this case, the estimate $\hat{\beta}$
derived from (2) could be substantially biased, because (2) does not accommodate the dependence
between $C_{2i}$ and $T_i$. We assume that, conditional on the covariate history $\bar{Z}_i(t)$, the hazard of
dependent censoring $C_{2i}$ at time $t$ does not further depend on the possibly unobserved failure time
$T_i$; that is,

$$\lambda_i^C\{t \mid \bar{Z}_i(T_i), C_i \ge t, T_i \ge t, T_i\} = \lambda_i^C\{t \mid \bar{Z}_i(t), C_i \ge t, T_i \ge t\}. \qquad (3)$$

This fundamental assumption is called "no unmeasured confounders for censoring" (Rubin,
1977; Robins, 1993). Borrowing terminology from the competing risks literature (Kalbfleisch
and Prentice, 2002), assumption (3) allows us to identify the cause-specific hazard for $C_{2i}$. We
assume a time-dependent Cox proportional hazards model for the right-hand side of Equation (3),

$$\lambda_i^C\{t \mid \bar{Z}_i(t), X_i \ge t\} = \lambda_0^C(t) \exp\{\alpha_0^T V_i(t)\}, \qquad (4)$$

where $\lambda_0^C(t)$ is an unspecified baseline hazard function for dependent censoring, $V_i(t)$ is an
$s$-vector consisting of functions of $\bar{Z}_i(t)$, and $\alpha_0$ is an $s$-dimensional regression parameter.
We propose the following estimating function,

$$U_R(\beta) = \sum_{i=1}^{n} \int_0^{\tau} R_i(t) \{Z_i(0) - \bar{Z}_R(\beta, R, t)\}\, dN_i(t), \qquad (5)$$

where

$$\bar{Z}_R(\beta, R, t) = \frac{S^{(1)}(\beta, R, t)}{S^{(0)}(\beta, R, t)}, \qquad
S^{(d)}(\beta, R, t) = n^{-1} \sum_{i=1}^{n} R_i(t) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\},$$

$$R_i(t) = \pi_i(\mathbf{p}) W_i(t), \qquad W_i(t) = e^{\Lambda_i^C(t)} \phi_i(t),$$

where $\Lambda_i^C(t) = \int_0^t \exp\{\alpha^T V_i(u)\}\, d\Lambda_0^C(u)$ and the function $\phi_i(t)$ in the weight $W_i(t)$ is a stabilization
factor. We consider three choices of $\phi_i(t)$. One choice is $\phi_{1i}(t) = 1$. However, when the censoring
is heavy, $e^{\Lambda_i^C(t)}$ could be quite large and lead to instability in the estimation. In this case, the
choice of $\phi_{2i}(t) = \exp[-\int_0^t \exp\{\alpha^T V_i(0)\}\, d\Lambda_0^C(u)]$ or $\phi_{3i}(t) = \exp[-\tilde{\Lambda}_i\{t \mid Z_i(0)\}]$ may be more
appropriate, where $\tilde{\Lambda}_i(t)$ is based on a time-to-censoring model that uses only the baseline covariate values, $Z_i(0)$. Hereafter, we denote $W_{ji}(t) = e^{\Lambda_i^C(t)} \phi_{ji}(t)$, $j = 1, 2, 3$, and correspondingly
estimate $\beta_0$ with $\hat{\beta}_{W_1}$, $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$, the solutions to $U_R(\beta) = 0$ with weights $W_{1i}(t)$, $W_{2i}(t)$ and
$W_{3i}(t)$, respectively.
The weight $W_{1i}(t)$ can be estimated using $\exp\{\hat{\Lambda}_i^C(t)\}$, where

$$\hat{\Lambda}_i^C(t) = \int_0^t \exp\{\hat{\alpha}^T V_i(s)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}),$$

$$\hat{\Lambda}_0^C(t, \alpha) = \int_0^t \Big[\sum_{j=1}^{n} \pi_j(\mathbf{p}) Y_j(s) \exp\{\alpha^T V_j(s)\}\Big]^{-1} \sum_{i=1}^{n} \pi_i(\mathbf{p})\, dN_i^C(s),$$

where $\hat{\alpha}$, the partial likelihood estimate of $\alpha_0$, is the root of $U_C(\alpha) = 0$, where

$$U_C(\alpha) = \sum_{i=1}^{n} \int_0^{\tau} \{V_i(t) - \bar{V}(\alpha, \mathbf{p}, t)\}\, \pi_i(\mathbf{p})\, dN_i^C(t)$$

is an IPSW-based estimating function, with $\bar{V}(\alpha, \mathbf{p}, t) = S_C^{(1)}(\alpha, \mathbf{p}, t)/S_C^{(0)}(\alpha, \mathbf{p}, t)$ and
$S_C^{(d)}(\alpha, \mathbf{p}, t) = n^{-1} \sum_{i=1}^{n} \pi_i(\mathbf{p}) Y_i(t) V_i(t)^{\otimes d} e^{\alpha^T V_i(t)}$.
The weight $W_{2i}(t)$ can be estimated using $\hat{\phi}_{2i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{2i}(t) = \exp[-\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid
Z_i(0)\}]$; that is, $\phi_{2i}(t)$ is estimated using the same model (4), but only using baseline covariate
values,

$$\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid Z_i(0)\} = \int_0^t \exp\{\hat{\alpha}^T V_i(0)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}).$$

The weight $W_{3i}(t)$ can be estimated by $\hat{\phi}_{3i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{3i}(t) = \exp\{-\hat{\tilde{\Lambda}}_i^{\dagger}(t)\}$, with
$\hat{\phi}_{3i}(t)$ estimated using an additional baseline model for $C_{2i}$,

$$\tilde{\lambda}_i\{t \mid Z_i(0), C_i \ge t, T_i, T_i \ge t\} = \tilde{\lambda}_0(t) \exp\{\alpha^{\dagger T} V_i(0)\},$$

such that we have

$$\hat{\tilde{\Lambda}}_i^{\dagger}(t) = \int_0^t \exp\{\hat{\alpha}^{\dagger T} V_i(0)\}\, d\hat{\tilde{\Lambda}}_0(s, \hat{\alpha}^{\dagger}),$$

$$\hat{\tilde{\Lambda}}_0(t, \alpha^{\dagger}) = \int_0^t \Big[\sum_{j=1}^{n} \pi_j(\mathbf{p}) Y_j(s) \exp\{\alpha^{\dagger T} V_j(0)\}\Big]^{-1} \sum_{i=1}^{n} \pi_i(\mathbf{p})\, dN_i^C(s),$$

and $\hat{\alpha}^{\dagger}$ is the partial likelihood estimate of $\alpha^{\dagger}$ under the model for dependent censoring with
hazard $\tilde{\lambda}_i(t)$. Weight stabilizers analogous to $\phi_{3i}(t)$ have been suggested, for example, by Robins
& Finkelstein (2000) and Hernán, Brumback, & Robins (2000). We propose the stabilizer $\phi_{2i}(t)$
as an alternative. The performance of each of $W_{1i}(t)$, $W_{2i}(t)$ and $W_{3i}(t)$ is compared through
simulation studies described in Section 4.
To summarize, $\hat{\beta}_{W_j}$ ($j = 1, 2, 3$) is computed as the root of the estimating equation in (5),
with $R_i(t)$ replaced by $\hat{R}_i(t)$. After estimating $\beta_0$, the cumulative baseline hazard, $\Lambda_0(t)$, can then
be estimated by

$$\hat{\Lambda}_0^W(t) = \sum_{i=1}^{n} \int_0^t \Big[\sum_{\ell=1}^{n} \hat{R}_{\ell}(s) Y_{\ell}(s) \exp\{\hat{\beta}_{W_j}^T Z_{\ell}(0)\}\Big]^{-1} \hat{R}_i(s)\, dN_i(s).$$
3. ASYMPTOTIC PROPERTIES
The following conditions are assumed throughout this section.

(a) $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$, $i = 1, \ldots, n$, are independently and identically distributed.
(b) $P\{Y_i(\tau) = 1\} > 0$.
(c) $|Z_{ij}(0)| + \int_0^{\tau} |dZ_{ij}(t)| < B_Z < \infty$, where $Z_{ij}$ is the $j$th component of $Z_i$ and $B_Z$ is a constant.
(d) There exists a neighbourhood $\mathcal{B}$ of $\beta_0$ such that $\sup_{u \in [0, \tau], \beta \in \mathcal{B}} \|S^{(d)}(\beta, R, u) -
s^{(d)}(\beta, R, u)\| \to 0$ in probability for $d = 0, 1, 2$, where $s^{(d)}(\beta, R, u) = E\{S^{(d)}(\beta, R, u)\}$
is absolutely continuous, for $\beta \in \mathcal{B}$, uniformly in $u \in (0, \tau]$; $E(\cdot)$ denotes expectation. Moreover, $s^{(0)}(\beta, R, u)$ is assumed to be bounded away from zero.
(e) There exists a neighbourhood $\mathcal{B}_C$ of $\alpha_0$ such that $\sup_{u \in [0, \tau], \alpha \in \mathcal{B}_C} \|S_C^{(d)}(\alpha, \mathbf{p}, u)
- s_C^{(d)}(\alpha, u)\| \to 0$ in probability for $d = 0, 1, 2$, where $s_C^{(d)}(\alpha, u) = E\{S_C^{(d)}(\alpha, \mathbf{p}, u)\}$ is absolutely continuous, for $\alpha \in \mathcal{B}_C$, uniformly in $u \in (0, \tau]$. Moreover, $s_C^{(0)}(\alpha, u)$ is assumed to
be bounded away from zero.
(f) The matrices $A(\beta_0)$ and $A_C(\alpha_0)$ are positive definite, where

$$A(\beta) = \int_0^{\tau} \big[s^{(2)}(\beta, R, u)/s^{(0)}(\beta, R, u) - \bar{z}(\beta, R, u)^{\otimes 2}\big]\, dF(u),$$

$$A_C(\alpha) = \int_0^{\tau} \big[s_C^{(2)}(\alpha, u)/s_C^{(0)}(\alpha, u) - \bar{v}(\alpha, u)^{\otimes 2}\big]\, dF^C(u),$$

with $\bar{z}(\beta, R, u) = s^{(1)}(\beta, R, u)/s^{(0)}(\beta, R, u)$, $\bar{v}(\alpha, u) = s_C^{(1)}(\alpha, u)/s_C^{(0)}(\alpha, u)$, $F(u) = E\{R_i(u) N_i(u)\}$ and $F^C(u) = E\{\pi_i(\mathbf{p}_0) N_i^C(u)\}$.
(g) $\Lambda_0(\tau) < \infty$, $\Lambda_0^C(\tau) < \infty$.
We describe the asymptotic properties of the proposed estimators in the following theorems.

Theorem 1. Under conditions (a)–(g), as $n \to \infty$, $n^{1/2}(\hat{\alpha} - \alpha_0)$ converges to a mean zero
Normal distribution with covariance $A_C(\alpha_0)^{-1} \Sigma(\alpha_0) A_C(\alpha_0)^{-1}$, where $A_C(\alpha_0)$ is as defined by
Condition (f) and $\Sigma(\alpha) = E\{\psi_i(\alpha, \mathbf{p})^{\otimes 2}\}$, with $\psi_i(\alpha, \mathbf{p})$ being asymptotically independent and
identically distributed for $i = 1, \ldots, n$; we defer the definition of $\psi_i(\alpha, \mathbf{p})$ to the Supplementary
Materials document.
In the Supplementary Materials document, we show that $n^{1/2}(\hat{\alpha} - \alpha_0) =
A_C(\alpha_0)^{-1} n^{-1/2} \sum_{i=1}^{n} \psi_i(\alpha_0, \mathbf{p}_0) + o_p(1)$; hence, $n^{1/2}(\hat{\alpha} - \alpha_0)$ is essentially a scaled
sum of $n$ independent and identically distributed random quantities with mean zero and finite
variance. By the Multivariate Central Limit Theorem (MCLT) and empirical process theory, the
asymptotic normality is proved.
Theorem 2. Under conditions (a)–(g), as $n \to \infty$, $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ converges to a mean
zero Normal distribution with covariance $A(\beta_0)^{-1} \Sigma(\beta_0, R) A(\beta_0)^{-1}$, with $A(\beta)$ having been defined in Condition (f) and where $\Sigma(\beta, R) = E\{\Phi_i(\beta, R)^{\otimes 2}\}$, with $\Phi_i(\beta, R)$ being independent
and identically distributed mean 0 variates ($i = 1, \ldots, n$) asymptotically. The explicit definition
of $\Phi_i(\beta, R)$ is provided in the Supplementary Materials.
The proof of Theorem 2 (provided in the Supplementary Materials) begins by decomposing $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ into $n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, \hat{\mathbf{p}}) - \hat{\Lambda}_0^C(t; \hat{\alpha}, \mathbf{p}_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, \mathbf{p}_0) -
\hat{\Lambda}_0^C(t; \alpha_0, \mathbf{p}_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \alpha_0, \mathbf{p}_0) - \Lambda_0^C(t)\}$. Then $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ can be expressed
asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$.
Combining this result and the Functional Delta Method, we can show that $n^{1/2}\{\hat{R}_i(t) - R_i(t)\}$ can
be written asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$. Finally, through the Functional Delta Method, the asymptotic normality of
$n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ is obtained.
The expression for the asymptotic covariance of $\hat{\beta}_{W_1}$ is very complicated and difficult to
implement numerically. A practical way to estimate the variance of the proposed estimators is to
treat the weights $R_i(t)$ as known rather than estimated. In the setting where the weight function
is known, results derived in the Supplementary Materials show that

$$n^{1/2}(\hat{\beta} - \beta_0) = A(\beta_0)^{-1} n^{-1/2} \sum_{i=1}^{n} U_i(\beta_0, R) + o_p(1), \qquad (6)$$

with $U_i(\beta_0, R) = \int_0^{\tau} \{Z_i(0) - \bar{z}(\beta, R, t)\} R_i(t)\, dM_i(t)$ and $dM_i(t) = dN_i(t) - Y_i(t)\, d\Lambda_i(t)$. Hence,
$n^{1/2}(\hat{\beta} - \beta_0)$ is asymptotically a scaled sum of independent and identically distributed zero-mean random quantities with finite variance. Therefore, the variance of $\hat{\beta}_{W_1}$ is estimated by
$\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, where $\Sigma^{\ddagger}(\beta, R) = E\{U_i^{\ddagger}(\beta, R)^{\otimes 2}\}$; $\hat{A}(\hat{\beta})$ and $\hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R})$ are calculated by
replacing limiting values with their corresponding empirical counterparts.
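Once per-subject score contributions are in hand, this fixed-weight sandwich form is mechanical to compute. A generic sketch (ours, with hypothetical argument names; the contributions themselves must come from the fitted model):

```python
import numpy as np

def sandwich_variance(U_contrib, A_hat):
    """Sandwich covariance A^{-1} Sigma A^{-1} / n for a root-n consistent
    estimator, with Sigma the empirical second moment of the n per-subject
    score contributions U_contrib (an n-by-q array)."""
    n = U_contrib.shape[0]
    Sigma = (U_contrib.T @ U_contrib) / n        # empirical E{U_i U_i^T}
    Ainv = np.linalg.inv(A_hat)
    return Ainv @ Sigma @ Ainv / n

U = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
V = sandwich_variance(U, np.eye(2))
```

Treating the weights as fixed means the extra variability from estimating the IPCW and IPSW components is ignored; the simulation results below suggest this simplification still yields coverage near the nominal level.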
By similar arguments, the asymptotic normality holds for $n^{1/2}(\hat{\beta}_{W_2} - \beta_0)$ and $n^{1/2}(\hat{\beta}_{W_3} - \beta_0)$.
However, the covariance will be even more complicated than that of $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$. Therefore,
similarly, we can treat $R_i(t)$ as fixed to calculate the variances of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$. Note that (6)
holds when using $W_2$ or $W_3$, such that the variances of each of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$ can be estimated by
$\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, with $\hat{R}_i(t)$ being replaced by $\pi_i(\hat{\mathbf{p}}) \hat{W}_{2i}(t)$ and $\pi_i(\hat{\mathbf{p}}) \hat{W}_{3i}(t)$, respectively.
Our final asymptotic result pertains to the proposed cumulative baseline hazard function
estimator.

Theorem 3. Under conditions (a)–(g), $n^{1/2}\{\hat{\Lambda}_0^W(t) - \Lambda_0(t)\}$ converges to a zero-mean Gaussian process as $n \to \infty$, with an explicit covariance function estimator.

A proof of Theorem 3 is provided in the Supplementary Materials, including definitions
pertinent to the limiting covariance function.
Table 1: Simulation study: data configurations.

Parameter       1A      1B      2A      2B      3A      3B
λ0              0.1     0.1     0.1     0.1     0.05    0.05
β1              0.406   0.406   0.406   0.406   -0.406  -0.406
β2              0.406   0.406   0.406   0.406   0.406   0.406
λ0^C            0.1     0.1     0.05    0.05    0.05    0.05
α1              -0.5    -0.5    -0.406  -0.406  -0.406  -0.406
α2              0.693   0.693   1.010   1.010   1.010   1.010
α3              0.095   0.095   0.095   0.095   0.095   0.095
%T              51%     51%     62%     62%     27%     27%
%C2             44%     44%     28%     28%     45%     45%
%C1             5%      5%      10%     10%     28%     28%
corr(T, C2)     0.13    0.13    0.20    0.20    0.27    0.27
Full-cohort     2,500   2,500   2,500   2,500   2,500   2,500
p1              0.15    0.10    0.15    0.10    0.15    0.10
p2              0.15    0.10    0.15    0.10    0.15    0.10
p3              0.15    0.10    0.15    0.10    0.15    0.10
Case-control    375     250     375     250     375     250

4. SIMULATION STUDY
We investigated the finite-sample properties of the proposed estimators through a series of simulation studies. In each setting, the study population (i.e., the full cohort, prior to case-control sampling)
was of size $n = 2{,}500$. The baseline covariate vector, to be used in the failure time hazard model, consisted
of two independent Bernoulli variates, $Z_i(0) = [Z_{1i}, Z_{2i}(0)]^T$, where $Z_{1i}$ does not change over
time. Each of $Z_{1i}$ and $Z_{2i}(0)$ took the value 1 with probability 0.5. The independent censoring
times $C_{1i}$ were constant and equal to 12. After generating $U_T$ from a Uniform(0,1) distribution,
the failure time $T$ was generated from a Cox model with hazard function

$$\lambda_i(t) = \lambda_0 \exp\{\beta_1 Z_{1i} + \beta_2 Z_{2i}(0)\}, \qquad (7)$$

by solving the equation $\int_0^T \lambda_0(u) \exp\{\beta_1 Z_{1i} + \beta_2 Z_{2i}(0)\}\, du = -\log U_T$ for $T$, so that $U_T$ corresponds to the survival function at $T$. The time-dependent covariate $Z_{2i}(t)$ was generated as

$$Z_{2i}(t) = Z_{2i}(0) I(U_T \le 0.3) + \{Z_{2i}(0) + U_T \times \mathrm{int}(t)\}\, I(0.3 < U_T \le 0.6)
+ \{Z_{2i}(0) + U_T/2 \times \mathrm{int}(t)\}\, I(U_T > 0.6), \qquad (8)$$

where $\mathrm{int}(t)$ is the integer part of $t$. The dependent censoring time $C_{2i}$ was then generated from a
Cox model with hazard function

$$\lambda_i^C(t) = \lambda_0^C \exp[\alpha_1 Z_{1i} + \alpha_2 Z_{2i}(t) Z_{2i}(0) + \alpha_3 Z_{2i}(t)\{1 - Z_{2i}(0)\}]. \qquad (9)$$

We evaluated three data (full-cohort) configurations and, within each, two different case-control
sampling schemes. For each of the six scenarios, 1,000 replicates were generated. Parameter
specifications are given in Table 1. Note that the $W_i(t)$ component of the weight was bounded by
10 in all runs.
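The data-generating steps above can be sketched as follows (a minimal reconstruction under our reading of (7)–(8); the function names are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(2014)

def simulate_failure(lam0=0.1, beta=(0.406, 0.406)):
    """Draw (Z1, Z2(0), U_T, T) under model (7) with constant baseline hazard."""
    z1, z20 = rng.binomial(1, 0.5, size=2)
    u = rng.uniform()
    # A constant baseline hazard makes the inversion closed-form:
    #   T * lam0 * exp(b1*z1 + b2*z20) = -log U_T
    T = -np.log(u) / (lam0 * np.exp(beta[0] * z1 + beta[1] * z20))
    return z1, z20, u, T

def z2_path(z20, u, t):
    """Time-dependent covariate Z2(t) from Equation (8)."""
    k = int(t)                                   # int(t): integer part of t
    if u <= 0.3:
        return float(z20)
    if u <= 0.6:
        return z20 + u * k
    return z20 + u / 2 * k
```

The dependent censoring time would then be drawn from the hazard in (9) evaluated along this covariate path, and the case-control sample obtained by stratified Bernoulli selection with the probabilities in Table 1.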
Table 2: Simulation results based on 1,000 replications: settings 1A–1B.

Setting 1A: p1 = p2 = p3 = 0.15
Wi(t)   Estimator    BIAS     ESD     ASE     CP
1       β̂1           0.059    0.152   0.152   0.928
W1      β̂1^W1        0.054    0.169   0.169   0.930
W2      β̂1^W2        0.015    0.152   0.154   0.955
W3      β̂1^W3        0.014    0.150   0.152   0.957
1       β̂2           -0.087   0.150   0.149   0.905
W1      β̂2^W1        -0.040   0.171   0.168   0.930
W2      β̂2^W2        -0.003   0.151   0.153   0.948
W3      β̂2^W3        -0.002   0.150   0.152   0.950

Setting 1B: p1 = p2 = p3 = 0.10
Wi(t)   Estimator    BIAS     ESD     ASE     CP
1       β̂1           0.061    0.190   0.187   0.933
W1      β̂1^W1        0.060    0.213   0.206   0.925
W2      β̂1^W2        0.021    0.190   0.189   0.951
W3      β̂1^W3        0.020    0.188   0.187   0.947
1       β̂2           -0.085   0.187   0.183   0.922
W1      β̂2^W1        -0.041   0.212   0.203   0.927
W2      β̂2^W2        -0.005   0.185   0.187   0.947
W3      β̂2^W3        -0.004   0.184   0.186   0.948

By the specifications above, the time-dependent covariate $Z_{2i}(t)$ is correlated with the event
time $T_i$ through Equation (8). In addition, $Z_{2i}(t)$ also affects the censoring time $C_{2i}$ via model
(9). Since only the baseline value of $Z_{2i}(t)$, that is, $Z_{2i}(0)$, is adjusted for in model (7), the
censoring time $C_{2i}$ is dependent on $T_i$ given $Z_{1i}$ and $Z_{2i}(0)$.
For each data generation, we selected failures, dependently censored subjects and independently
censored subjects by Bernoulli sampling, with $p_1 = p_2 = p_3 = 0.15$ (Settings 1A, 2A, 3A),
which, on average, resulted in a case-control sample of $n = 375$ subjects; or $p_1 = p_2 = p_3 = 0.10$
(Settings 1B, 2B, 3B), which resulted in an average of $n = 250$ subjects.
For each scenario, we report bias; empirical standard deviation (ESD) across the 1,000 replicates; average asymptotic standard error (ASE); and empirical coverage probability (CP) for 95%
confidence intervals. Four estimators were evaluated, which differ with respect to the handling
of the IPCW component. The first estimator did not make an adjustment for dependent censoring
($W_i(t) = 1$). The second, third and fourth estimators are the three proposed estimators, with $W_1$
being based on the unstabilized correction for dependent censoring, while $W_2$ and $W_3$ refer to the
different versions of the proposed stabilized estimators based on $W_{2i}(t)$ or $W_{3i}(t)$, respectively.
Results of the simulation study are provided in Tables 2, 3 and 4. Some general conclusions are
as follows. The estimator which did not account for dependent censoring is notably biased in all
configurations. In some but not all cases, the bias is partly corrected by the estimator using $W_{1i}(t)$
(i.e., the unstabilized estimator). In certain cases, the bias is actually more pronounced than for
the $W_i(t) = 1$ estimator. The bias due to dependent censoring is largely corrected by the stabilized
versions of the proposed estimator, with the ASE being similar to the ESD and, correspondingly,
Table 3: Simulation results based on 1,000 replications: settings 2A–2B.

Setting 2A: p1 = p2 = p3 = 0.15
Wi(t)   Estimator    BIAS     ESD     ASE     CP
1       β̂1           0.096    0.136   0.139   0.901
W1      β̂1^W1        0.090    0.149   0.151   0.911
W2      β̂1^W2        0.042    0.140   0.146   0.953
W3      β̂1^W3        0.028    0.136   0.143   0.954
1       β̂2           -0.104   0.133   0.134   0.876
W1      β̂2^W1        -0.082   0.150   0.147   0.911
W2      β̂2^W2        -0.034   0.138   0.141   0.937
W3      β̂2^W3        -0.021   0.135   0.139   0.947

Setting 2B: p1 = p2 = p3 = 0.10
Wi(t)   Estimator    BIAS     ESD     ASE     CP
1       β̂1           0.101    0.169   0.171   0.916
W1      β̂1^W1        0.100    0.182   0.184   0.916
W2      β̂1^W2        0.053    0.172   0.177   0.947
W3      β̂1^W3        0.040    0.168   0.174   0.955
1       β̂2           -0.102   0.167   0.164   0.912
W1      β̂2^W1        -0.083   0.184   0.179   0.926
W2      β̂2^W2        -0.036   0.171   0.171   0.944
W3      β̂2^W3        -0.023   0.168   0.170   0.948

coverage probabilities approximating the nominal level. Of the three proposed estimators, the
stabilized version based on W3i (t) has the best performance in the simulation studies.
5. APPLICATION
We applied the proposed methods to analyse pre-transplant mortality for patients with end-stage liver disease (ESLD). Data were obtained from the Scientific Registry of Transplant Recipients (SRTR). The study population consisted of patients who were initially wait-listed for liver transplantation in the United States at age ≥ 18 between March 1, 2002 and December 31, 2008. Patients were followed from the date of initial wait-listing until the earliest of death, liver transplantation, loss to follow-up, or the last day of the observation period (December 31, 2008). The time scale was in days. The primary outcome of interest is pre-transplant mortality. Loss to follow-up, living-donor transplantation and administrative censoring were considered to be independent censoring. Dependent censoring occurred through deceased-donor liver transplantation.
Table 4: Simulation results based on 1,000 replications: settings 3A–3B.

Setting 3A: p1 = p2 = p3 = 0.15

| Wi(t) | Estimator | BIAS   | ESD   | ASE   | CP    |
|-------|-----------|--------|-------|-------|-------|
| 1     | β̂1        |  0.065 | 0.200 | 0.202 | 0.945 |
| W1    | β̂1(W1)    |  0.119 | 0.235 | 0.231 | 0.909 |
| W2    | β̂1(W2)    |  0.044 | 0.220 | 0.215 | 0.938 |
| W3    | β̂1(W3)    |  0.015 | 0.207 | 0.205 | 0.952 |
| 1     | β̂2        | -0.260 | 0.216 | 0.206 | 0.757 |
| W1    | β̂2(W1)    | -0.173 | 0.248 | 0.240 | 0.881 |
| W2    | β̂2(W2)    | -0.065 | 0.241 | 0.230 | 0.935 |
| W3    | β̂2(W3)    | -0.029 | 0.231 | 0.222 | 0.943 |

Setting 3B: p1 = p2 = p3 = 0.10

| Wi(t) | Estimator | BIAS   | ESD   | ASE   | CP    |
|-------|-----------|--------|-------|-------|-------|
| 1     | β̂1        |  0.064 | 0.252 | 0.248 | 0.935 |
| W1    | β̂1(W1)    |  0.120 | 0.296 | 0.280 | 0.906 |
| W2    | β̂1(W2)    |  0.045 | 0.275 | 0.262 | 0.926 |
| W3    | β̂1(W3)    |  0.017 | 0.259 | 0.252 | 0.944 |
| 1     | β̂2        | -0.259 | 0.264 | 0.255 | 0.825 |
| W1    | β̂2(W1)    | -0.170 | 0.297 | 0.292 | 0.905 |
| W2    | β̂2(W2)    | -0.064 | 0.288 | 0.280 | 0.941 |
| W3    | β̂2(W3)    | -0.028 | 0.279 | 0.273 | 0.955 |

The Model for End-stage Liver Disease (MELD) score is time-dependent and is updated at a frequency that ranges from weekly to yearly, depending on the last reported MELD. In the current liver allocation system, patients are ordered on the wait-list primarily by descending MELD. That is, patients with higher MELD are considered to be at greater medical urgency and, therefore, get higher priority for transplantation. However, for hepatocellular carcinoma (HCC) patients, the calculated MELD based on laboratory measures has generally been thought to understate actual medical urgency. As such, a MELD score of 22 is usually assigned to an HCC patient if the laboratory MELD is less than 22. The primary objective of our analysis is to determine which range of (calculated) MELD score is actually consistent with the HCC pre-transplant mortality hazard.
The factor of interest is, in a general sense, underlying liver disease at wait-listing (the time origin), classified as hepatocellular carcinoma (HCC) versus not. The objective is to determine the MELD score category to which pre-transplant HCC mortality is equivalent. Therefore, as will be detailed later in this section, the pre-transplant death model will have HCC as the reference category, to which all non-HCC patients (classified by baseline MELD category) are compared. The use of the baseline (i.e., t = 0) MELD score is consistent with HCC diagnosis being applied at baseline. Note that the use of baseline versus time-dependent MELD depends on the analytic objectives. In Section 6, we describe a setting where use of time-dependent MELD is indicated.

Many studies have shown that MELD is the dominant risk factor for liver pre-transplant mortality. Moreover, as stated in the previous paragraph, MELD also strongly affects the liver transplant hazard. Therefore, since the death model does not adjust for time-dependent MELD, pre-transplant mortality and deceased-donor liver transplantation will be correlated, such that it is necessary to account for liver transplantation as dependent censoring.
It is necessary to account for geography in our analysis. There are 60 Organ Procurement Organizations (OPOs) in the United States, each of which maintains a liver transplant wait-list covering, on average, a population roughly the size of a state. Adjustment for OPO is important since it reflects geography, which can affect mortality in ways not reflected by the remainder of the adjustment covariates. With respect to transplantation, the transplant hazard differs markedly by OPO owing in part to differences in organ availability. For both the time-to-death and time-to-transplant models, we adjusted for OPO through stratification, since it may not be appropriate to assume proportionality with respect to the 60 OPO-specific baseline transplant or death hazard functions.

Specifics regarding the full cohort (n = 55,951), the sampling fractions and the case-control sample are provided in Table 5.

Table 5: Sampling design for the analysis of liver pre-transplant mortality.

Full cohort:

| Patients | OPO size  | Deaths | Transplants | Censored | Total  |
|----------|-----------|--------|-------------|----------|--------|
| HCC      | All       | 418    | 3,511       | 546      | 4,475  |
| non-HCC  | ≤400      | 418    | 1,499       | 490      | 2,407  |
| non-HCC  | 401–1,300 | 3,676  | 10,816      | 5,230    | 19,722 |
| non-HCC  | >1,300    | 6,076  | 12,794      | 10,477   | 29,347 |
| All      | Total     | 10,588 | 28,620      | 16,743   | 55,951 |

Sampling fractions:

| Patients | OPO size  | Deaths | Transplants | Censored |
|----------|-----------|--------|-------------|----------|
| HCC      | All       | 1      | 1           | 1        |
| non-HCC  | ≤400      | 1      | 1           | 1        |
| non-HCC  | 401–1,300 | 0.30   | 0.10        | 0.10     |
| non-HCC  | >1,300    | 0.15   | 0.10        | 0.10     |

Case-control sample:

| Patients | OPO size  | Deaths | Transplants | Censored | Total  |
|----------|-----------|--------|-------------|----------|--------|
| HCC      | All       | 418    | 3,511       | 546      | 4,475  |
| non-HCC  | ≤400      | 418    | 1,499       | 490      | 2,407  |
| non-HCC  | 401–1,300 | 1,079  | 1,075       | 496      | 2,650  |
| non-HCC  | >1,300    | 893    | 1,295       | 1,018    | 3,206  |
| All      | Total     | 2,808  | 7,380       | 2,550    | 12,738 |
Since a relatively small fraction of the full cohort was diagnosed with HCC (8%; 4,475 patients), HCC patients were over-sampled. In addition, we wanted sufficient representation from OPOs of various sizes in the case-control sample. The size of an OPO was classified based on the number of patients wait-listed in that OPO in the full cohort. Random sampling within end-point would lead to inadequate representation of patients from smaller OPOs. Therefore, we assigned greater (lesser) sampling fractions to the small (large) OPOs. After Bernoulli sampling, the case-control sample consisted of 12,738 patients.
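To make the selection mechanism concrete, the stratified Bernoulli sampling can be sketched as follows. This is a hypothetical illustration using the fractions reported in Table 5; the function names and record layout are ours and are not part of the SRTR data.

```python
import random

# Hypothetical sampling fractions mirroring Table 5: all HCC patients and
# all patients in small OPOs (<= 400 wait-listed) are retained; elsewhere,
# deaths (cases) are over-sampled relative to transplanted/censored controls.
def sampling_fraction(hcc, opo_size, died):
    if hcc or opo_size <= 400:
        return 1.0
    if died:
        return 0.30 if opo_size <= 1300 else 0.15
    return 0.10

def bernoulli_sample(patients, rng):
    """Independent Bernoulli selection; each sampled record carries the
    inverse probability of sampling weight (IPSW) 1/pi_i."""
    sample = []
    for p in patients:
        pi = sampling_fraction(p["hcc"], p["opo_size"], p["died"])
        if rng.random() < pi:
            sample.append({**p, "ipsw": 1.0 / pi})
    return sample
```

Because deaths and HCC patients in the fraction-one strata are always selected, they enter the case-control sample with weight 1, while controls from larger OPOs are down-weighted at selection and correspondingly up-weighted (e.g., IPSW = 1/0.10 = 10) in the estimating equation.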
There are additional issues regarding the data structure which must be taken into account. In particular, a patient who is too sick to receive a transplant can be inactivated (usually a temporary measure) or removed (permanently) from the wait-list. During these intervals, the patient is ineligible to receive a transplant. Therefore, an appropriate Cox model for the transplant hazard in this setting is given by

λ^C_ij(t) = Ai(t) λ^C_0j(t) exp{α^T Vi(t)},        (10)

Table 6: Analysis comparing HCC (assigned MELD of 22; reference) versus non-HCC (by lab MELD) pre-transplant mortality. The first block of columns is the unweighted case-control analysis (W = 1), the second the weighted case-control analysis (W2), and the third the weighted full-cohort analysis (W2).

| Patients | MELD  | β̂ (unw.) | SE   | p       | β̂ (W2) | SE   | p       | β̂ (full) | SE   | p       |
|----------|-------|----------|------|---------|--------|------|---------|----------|------|---------|
| HCC      | "22"  |  0       |      |         |  0     |      |         |  0       |      |         |
| non-HCC  | 6–8   | -1.07    | 0.12 | <0.0001 | -1.02  | 0.15 | <0.0001 | -0.81    | 0.09 | <0.0001 |
| non-HCC  | 9–11  | -0.60    | 0.09 | <0.0001 | -0.48  | 0.11 | <0.0001 | -0.50    | 0.08 | <0.0001 |
| non-HCC  | 12–13 | -0.53    | 0.09 | <0.0001 | -0.40  | 0.11 | 0.0002  | -0.24    | 0.08 | 0.002   |
| non-HCC  | 14–15 | -0.25    | 0.09 | 0.005   |  0.002 | 0.11 | 0.98    |  0.01    | 0.08 | 0.88    |
| non-HCC  | 16–17 |  0.10    | 0.10 | 0.32    |  0.24  | 0.12 | 0.0498  |  0.24    | 0.08 | 0.004   |
| non-HCC  | 18–19 |  0.26    | 0.12 | 0.02    |  0.63  | 0.15 | <0.0001 |  0.52    | 0.09 | <0.0001 |
| non-HCC  | 20–22 |  0.55    | 0.11 | <0.0001 |  0.83  | 0.12 | <0.0001 |  0.81    | 0.09 | <0.0001 |
| non-HCC  | 23–24 |  0.90    | 0.17 | <0.0001 |  1.41  | 0.21 | <0.0001 |  1.12    | 0.11 | <0.0001 |
| non-HCC  | 25–29 |  1.59    | 0.16 | <0.0001 |  1.93  | 0.19 | <0.0001 |  1.44    | 0.11 | <0.0001 |
| non-HCC  | 30–39 |  2.38    | 0.16 | <0.0001 |  2.69  | 0.17 | <0.0001 |  1.86    | 0.12 | <0.0001 |
| non-HCC  | 40    |  3.68    | 0.29 | <0.0001 |  3.65  | 0.37 | <0.0001 |  1.79    | 0.20 | <0.0001 |

where j denotes OPO number (j = 1, ..., 60) and Ai(t) is an indicator of being active on the wait-list (i.e., not being deactivated or previously removed) as of time t. The time-dependent covariate vector Vi(t) includes MELD at time t (grouped into the intervals [6,8], [9,11], [12,13], [14,15], [16,17], [18,19], [20,22], [23,24], [25,29], [30,39] and 40), with HCC patients chosen as the reference group. These MELD categories have been suggested by several previous analyses; the use of a categorical version of MELD negates the need to pin down its precise functional form (which is known not to be linear). The vector Vi(t) also includes the following baseline covariates: age group, gender, race and blood type, with age less than 40, female, Caucasian and blood type O as the reference levels, respectively. Note that, for the intervals where the patient was either inactivated or removed, the transplant hazard was treated as 0, as indicated by equation (10). As such, we delete patient subintervals with Ai(t) = 0 to fit model (10). However, since inactivated or removed patients are still at risk of pre-transplant death, patient subintervals with Ai(t) = 0 are re-included in the fitting of the pre-transplant mortality model.
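In counting-process form, this amounts to building two risk-set datasets from the same (start, stop] records. The sketch below uses an invented record layout for illustration; the actual SRTR data structure differs.

```python
# Each record is one (start, stop] subinterval carrying the patient's
# active status on the wait-list and an event flag for that subinterval.
def transplant_model_records(intervals):
    # Model (10): the transplant hazard is zero while inactive, so
    # subintervals with A_i(t) = 0 are deleted from the risk sets.
    return [r for r in intervals if r["active"]]

def death_model_records(intervals):
    # Death model: inactive patients remain at risk of pre-transplant
    # death, so all subintervals are retained.
    return list(intervals)

history = [
    {"start": 0,  "stop": 30, "active": True,  "event": 0},
    {"start": 30, "stop": 45, "active": False, "event": 0},  # inactivated
    {"start": 45, "stop": 90, "active": True,  "event": 1},
]
```

For the example history, the transplant model uses only the two active subintervals, while the mortality model uses all three.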
The pre-transplant death model is given by

λij(t) = λ0j(t) exp{β^T Zi(0)},        (11)

where j again represents OPO and Zi(0) contains indicators for the afore-listed MELD categories (HCC serving as the reference), as well as the adjustment covariates used in the transplant model (10). Since the IPCW weights could be very large toward the tail of the observation time, we truncated the IPCW weights at 10.
Results of the analysis are shown in Table 6. We present results for two case-control analyses:
the “unweighted” analysis (Wi (t) = 1), in which dependent censoring (liver transplantation) is
treated as independent censoring; and the proposed weighted analysis, which corrects for dependent censoring through W2i (t). We also carried out a full-cohort analysis, again based on W2i (t).
Table 6 shows that when the dependent censoring is treated as independent, HCC appears to

have pre-transplant mortality equal to that of MELD group 16–17 (Table 6). However, accounting for the dependent censoring, HCC is the pre-transplant mortality equivalent of the MELD 14–15 group, a result consistent with the full-cohort analysis.
Based on the simulation studies in Section 4, the unstabilized weight W1i(t) was out-performed by both W2i(t) and W3i(t) in terms of empirical bias and variance. In our application, results based on W1i(t), W2i(t) and W3i(t) were all quite similar. It appeared from the simulations that W3i(t) performed somewhat better than W2i(t); in each case a reduction in bias (for W3i(t) relative to W2i(t)) was accompanied by a concomitant reduction in standard deviation. However, in the full-cohort analysis, standard errors for W2i(t) tended to be less than those for W3i(t). Hence, we chose W2i(t) for the main analysis. As mentioned previously, the IPCW component of the weight is susceptible to unduly large values, which implies the use of a cap. We employed a cap of 10, consistent with the simulation study. In the case-control analyses, the cap affected 16,507 records for W1i(t) (1.55% of the total 1,066,630 records used); 2,616 records (0.25%) for W2i(t) and 3,351 (0.31%) for W3i(t). Full-cohort and case-control results based on W1i(t) and W3i(t) are provided in the Supplementary Materials.
The bias in the analysis which does not account for dependent censoring (Wi(t) = 1) is substantial, particularly upon consideration of the objectives of our analysis. This is clear upon comparing the estimators based on Wi(t) = 1 and W2i(t), either through relative bias, or simply the absolute difference in point estimates relative to the standard error. By either of these measures, the bias is considerable, except for the two most extreme categories: MELD 6–8 and MELD 40. Note that these two extreme categories are of least interest, since it was widely anticipated that the lowest (highest) MELD category would have much lower (higher) pre-transplant mortality risk than HCC. For the unweighted analysis, the HCC-mortality-equivalent is given by MELD 16–17, whereas for the weighted analysis, it appears that MELD 14–15 and HCC have equal mortality. From this perspective, the difference between the unweighted and weighted analyses would be considered very important to the liver transplant field.

6. DISCUSSION
In this article, we propose methods for proportional hazards modelling of survival data subject to dependent censoring and obtained through case-control sampling. The proposed methods employ a double-inverse-weighting scheme, through which the proposed estimators adjust for the sampling bias and overcome dependent censoring. Simulation studies show that the proposed estimators are approximately unbiased and that our asymptotic results are applicable to finite samples. The proposed estimates can be computed using standard software that accommodates left-truncation, weighting and offsets (e.g., PROC PHREG in SAS; coxph in R).
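To illustrate what such a weighted fit is doing, a minimal Newton-Raphson solver for a single-covariate weighted Cox partial likelihood can be sketched; this is our own schematic (Breslow risk sets, no tie correction), not the authors' implementation, with the product of the IPSW and IPCW weights supplied as the case weight:

```python
import numpy as np

def weighted_cox_1d(time, event, z, w, iters=25):
    """Newton-Raphson for a one-covariate weighted Cox partial likelihood.
    `w` plays the role of the combined IPSW x IPCW weight."""
    order = np.argsort(time)
    time, event, z, w = time[order], event[order], z[order], w[order]
    beta = 0.0
    for _ in range(iters):
        score, info = 0.0, 0.0
        for i in range(len(time)):
            if not event[i]:
                continue
            risk = time >= time[i]               # risk set at this event time
            r = w[risk] * np.exp(beta * z[risk])
            s0, s1 = r.sum(), (r * z[risk]).sum()
            s2 = (r * z[risk] ** 2).sum()
            score += w[i] * (z[i] - s1 / s0)     # weighted score contribution
            info += w[i] * (s2 / s0 - (s1 / s0) ** 2)
        beta += score / info                     # Newton update
    return beta
```

In practice one would instead pass the overall weight to coxph (via its weights argument) or to PROC PHREG, with a robust (sandwich) variance; the sketch only shows how the weights enter each risk-set contribution.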
The case-control sampling scheme we consider involves sampling either all observed failures (cases) or a randomly selected fraction of them. The censored subjects (controls) are categorized as either dependently or independently censored. There is an incentive to over-sample dependently censored subjects, such that one can estimate the IPCW weights with sufficient precision. Although the sampling scheme we study involves random sampling based only on final status (dead, censored), the methods extend in a straightforward fashion to the setting in which sampling also depends on fixed covariates, a property illustrated through the application to the liver data in Section 5. Although we consider case-control sampling explicitly, the proposed methods also apply if case-cohort sampling is utilized instead.
One could also view the sampling scheme considered in this article as a form of outcome-dependent sampling (ODS). In an ODS design, one collects covariate information from a sample by allowing selection probabilities to depend on individuals' outcomes (e.g., death, survival). An ODS design concentrates resources on the observations carrying the greatest amount of information.

There is a large literature on analysing data arising from ODS; see, for example, Breslow & Holubkov (1997a), Zhou et al. (2002), Schildcrout & Heagerty (2008), Song, Zhou, & Kosorok (2009) and Wang et al. (2009). It would be possible to generalize the proposed methods to the ODS setting, interpreting the version we develop in this article as a censored-data-specific instance of inverse-weighted generalized estimating equations.
We propose three different weights to correct the bias induced by dependent censoring. In general, when the dependent censoring is light or moderate, the unstabilized weight W1(t) works well. However, when censoring is heavy, W1(t) may become quite large toward the tail of the observation time, resulting in unstable estimates. In this case, the stabilized weights W2(t) and W3(t) may be preferable and usually result in a less biased and more precise estimator than that arising from the unstabilized weight W1(t). With respect to the stabilized weights, we found that W3(t) tended to out-perform W2(t), although not uniformly. An apparent advantage of W2(t) is that only one dependent censoring hazard model is required. However, details regarding the programming and the ability to exploit functions currently available (in SAS or R) lead to W3(t) actually being the easier of the two stabilized weights to compute, even though a second hazard regression model is required. Overall, it would appear that W3(t) would generally be the preferred weight, although the stabilized weight that actually performs best is likely to be application-dependent. For instance, in our analysis of the liver disease data, W2(t) resulted in parameter estimators with much smaller standard errors than W3(t), leading to our preference for the former in this particular case.
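The distinction between unstabilized and stabilized weights can be shown schematically. Assuming the usual IPCW construction (in the spirit of Robins & Rotnitzky, 1992), where the denominator is the subject-specific survival probability of dependent censoring and stabilization replaces the constant numerator by a marginal censoring survival probability, a sketch (not the paper's exact W2/W3 definitions) is:

```python
import numpy as np

def ipcw_weights(cum_haz_cond, cum_haz_marg, cap=10.0):
    """cum_haz_cond: subject-specific cumulative hazard of dependent
    censoring, Lambda_i^C(t), from a covariate-adjusted model;
    cum_haz_marg: its covariate-free (marginal) counterpart.
    Returns (unstabilized, stabilized) weights, truncated at `cap`."""
    surv_cond = np.exp(-np.asarray(cum_haz_cond))
    surv_marg = np.exp(-np.asarray(cum_haz_marg))
    w_unstab = 1.0 / surv_cond           # W1-style: 1 / K_i(t)
    w_stab = surv_marg / surv_cond       # stabilized: K0(t) / K_i(t)
    return np.minimum(w_unstab, cap), np.minimum(w_stab, cap)
```

A subject at high risk of the dependent censoring event receives a large weight; stabilization shrinks the weights toward one, and the cap of 10 used in Section 5 bounds the influence of any single record.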
In the simulation studies, we treated the IPCW and IPSW weights as fixed to simplify the computation, which would result in conservative covariance estimators because those weights are actually estimated rather than known. However, the simulation results suggest that the proposed ASEs obtained by treating the IPCW and IPSW weights as fixed are quite accurate. This simplification of the variance calculation would likely lead to invalid inference when the variance of the estimated weights was substantial, which would occur when the subsample was not large enough or did not contain a sufficient number of failures or dependently censored subjects. In such cases, the variability due to estimating the weight parameters would not be a negligible fraction of the total variability, meaning that the suggested approximation would be inaccurate and coding of the full variance (or, perhaps, application of an appropriate bootstrap method) would be required.
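When the full variance expression is impractical, a nonparametric bootstrap over subjects is one workable fallback. The sketch below is generic; `fit` is a stand-in for the entire re-estimation pipeline, including re-fitting the weight models on each resample.

```python
import numpy as np

def bootstrap_se(fit, data, n_boot=200, seed=0):
    """Resample subjects with replacement and re-run the whole analysis
    (weight estimation plus the weighted fit) on each resample, so that
    variability from the estimated IPCW/IPSW weights is propagated into
    the standard error."""
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = [fit(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return float(np.std(estimates, ddof=1))
```

As a sanity check, applying this to the sample mean of a simple array recovers the usual standard-error order of magnitude.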
The proposed methods require consistency of the IPCW weight. Therefore, the proportional hazards model for dependent censoring should be correctly specified. This may be approximately true when a sufficient number of both baseline and time-dependent covariates are incorporated.
In our analysis of chronic liver disease patients wait-listed for liver transplantation, we found that (calculated) Model for End-stage Liver Disease scores of 14–15 were consistent with the pre-transplant mortality hazard of patients granted an exception due to hepatocellular cancer. The results based on case-control sampling were consistent with those from the full-cohort analysis. In contrast, if no adjustment was made for dependent censoring, it appeared that MELD 16–17 and HCC had equivalent pre-transplant mortality. Therefore, our results indicate that the current MELD exception score of 22 granted to HCC patients overstates the actual medical urgency, and that an assigned MELD score of 14 or 15 may be more appropriate. If this adjustment were actually implemented, it would considerably decrease the frequency with which HCC patients receive liver transplants. The results of our analysis provide the most definitive evidence to date that HCC patients are given inappropriately high priority for liver transplantation in the United States.
In certain settings, the factor of interest (in addition to the adjustment covariates) may be time-dependent. In our analysis of the liver disease data, HCC designations are typically made at time 0, implying that the appropriate comparison is to patients not classified as HCC at time 0. However, if we were interested in evaluating exception scores generally, then the most useful analysis which

addressed the appropriateness of (what would primarily be non-HCC) exception scores would probably use time-dependent factors in the death model. This is because a hepatologist may observe a patient clearly getting worse while the patient's MELD score does not increase, implying that the score does not accurately reflect the patient's need for liver transplantation. This analysis is currently under our consideration; it appears, however, that the proposed methods cannot readily be extended to accommodate such settings.
APPENDIX
Proofs of Theorem 2 and Theorem 3 are provided in the Supplementary Materials document.
ACKNOWLEDGEMENTS
The authors thank the Associate Editor and Referees for their constructive comments and suggestions, which resulted in a substantially improved manuscript. This work was supported in part by National Institutes of Health Grant R01 DK070869. The authors also thank the Scientific Registry of Transplant Recipients (SRTR) for access to the end-stage liver disease data. The SRTR is funded by a contract from the Health Resources and Services Administration, U.S. Department of Health and Human Services.
BIBLIOGRAPHY
Barlow, W. E. (1994). Robust variance estimation for the case-cohort design. Biometrics, 50, 1064–1072.
Basto, S. T., Perez, R., Villela-Nogueira, C., Fernandes, E., Barroso, A., Victor, L., Pereira, B., Ribeiro, J.
& Coelho, H. (2008). MELD as a predictor of long term mortality in liver transplantation waiting list.
Liver Transplantation, 14, S219–S219.
Binder, D. A. (1992). Fitting Cox’s proportional hazards models from survey data. Biometrika, 79, 139–147.
Borgan, Ø., Langholz, B., Samuelsen, S. O., Goldstein, L. & Pogoda, J. (2000). Exposure stratified case-cohort designs. Lifetime Data Analysis, 6, 39–58.
Breslow, N. E. & Holubkov, R. (1997a). Maximum likelihood estimation of logistic regression parameters
under two-phase outcome-dependent sampling. Journal of the Royal Statistical Society Series B, 59,
447–461.
Chen, K. (2001). Generalized case-cohort sampling. Journal of the Royal Statistical Society Series B, 63,
791–809.
Chen, K. & Lo, S. H. (1999). Case-cohort and case-control analysis with Cox’s model. Biometrika, 86,
755–764.
Cox, D. R. (1972). Regression models and life-tables (with discussion). Journal of the Royal Statistical
Society Series B, 34, 187–220.
Gorfine, M., Zucker, D. M., & Hsu, L. (2009). Case-control survival analysis with a general semiparametric
shared frailty model: A pseudo full likelihood approach. The Annals of Statistics, 37, 1489–1517.
Gray, R. J. (2009). Weighted analyses for cohort sampling designs. Lifetime Data Analysis, 15, 24–40.
Hernán, M. A., Brumback, B., & Robins, J. M. (2000). Marginal structural models to estimate the causal
effect of Zidovudine on the survival of HIV-positive men. Epidemiology, 11, 561–570.
Hsu, L. & Gorfine, M. (2006). Multivariate survival analysis for case-control family data. Biostatistics, 7,
387–398.
Hsu, L., Chen, L., Gorfine, M. & Malone, K. (2004). Semiparametric estimation of marginal hazard function
from case-control family studies. Biometrics, 60, 936–944.
Huo, T. I., Wu, J. C., Lin, H. C., Lee, F. Y., Hou, M. C., Lee, P. C., Chang, F. Y. & Lee, S. D. (2005).
Evaluation of the increase in model for end-stage liver disease (MELD) score over time as a prognostic
predictor in patients with advanced cirrhosis: risk factor analysis and comparison with initial MELD
and Child-Turcotte-Pugh score. Journal of Hepatology, 42, 826–832.

Kalbfleisch, J. D. & Lawless, J. F. (1988). Likelihood analysis of multi-state models for disease incidence
and mortality. Statistics in Medicine, 7, 149–160.
Kalbfleisch, J. D. & Prentice, R. L. (2002). The Statistical Analysis of Failure Time Data, 2nd ed., Wiley,
Hoboken, NJ.
Kremers, W. K., van IJperen, M., Kim, W. R., Freeman, R. B., Harper, A. M., Kamath, P. S. & Wiesner,
R. H. (2004). MELD score as a predictor of pretransplant and posttransplant survival in OPTN/UNOS
status 1 patients. Hepatology, 39, 764–769.
Kulich, M. & Lin, D. Y. (2000). Additive hazards regression for case-cohort studies. Biometrika, 87, 73–87.
Li, H., Yang, P., & Schwartz, A. G. (1998). Analysis of age at onset from case-control family studies.
Biometrics, 54, 1030–1039.
Lin, D. Y. (2000). On fitting Cox's proportional hazards models to survey data. Biometrika, 87, 37–47.
Lin, D. Y. & Ying, Z. (1993). Cox regression with incomplete covariate measurements. Journal of the
American Statistical Association, 88, 1341–1349.
Merion, R. M., Schaubel, D. E., Dykstra, D. M., Freeman, R. B., Port, F. K. & Wolfe, R. A. (2005). The
survival benefit of liver transplantation. American Journal of Transplantation, 5, 307–313.
Moger, T. A., Pawitan, Y. & Borgan, Ø. (2008). Case-cohort methods for survival data on families from
routine registers. Statistics in Medicine, 27, 1062–1074.
Nan, B., Kalbfleisch, J. D., & Yu, M. (2009). Asymptotic theory for the semiparametric accelerated failure
time model with missing data. Annals of Statistics, 37, 2351–2376.
Oakes, D. (1981). Survival times: aspects of partial likelihood (with discussion). International Statistical
Review, 49, 235–264.
Pan, Q. & Schaubel, D. E. (2008). Proportional hazards models based on biased samples and estimated
selection probabilities. Canadian Journal of Statistics, 36(1), 111–127.
Prentice, R. L. (1986). A case-cohort design for epidemiologic cohort studies and disease prevention trials.
Biometrika, 73, 1–11.
Prentice, R. L. & Breslow, N. E. (1978). Retrospective studies and failure time models. Biometrika, 65,
153–158.
Robins, J. M. (1993). Information recovery and bias adjustment in proportional hazards regression analysis
of randomized trials using surrogate markers. Proceedings of the Biopharmaceutical Section, American
Statistical Association, 24–33.
Robins, J. M. & Finkelstein, D. M. (2000). Correcting for noncompliance and dependent censoring in an
AIDS clinical trial with inverse probability of censoring weighted (IPCW) log-rank tests. Biometrics,
56, 779–788.
Robins, J. M. & Rotnitzky, A. (1992). Recovery of information and adjustment for dependent censoring using
surrogate markers. In AIDS Epidemiology—Methodological Issues, Jewell N. P., Dietz K., & Farewell
V. T., editors. Birkhäuser, Boston, pp. 297–331.
Rubin, D. B. (1977). Inference and missing data. Biometrika, 63, 581–592.
Schaubel, D. E., Wolfe, R. A., Sima, C. S. & Merion, R. M. (2009). Estimating the effect of a time-dependent treatment by levels of an internal time-dependent covariate. Journal of the American Statistical
Association, 104, 49–59.
Schildcrout, J. S. & Heagerty, P. J. (2008). On outcome-dependent sampling designs for longitudinal binary
response data with time-varying covariates. Biostatistics, 9, 735–749.
Self, S. G. & Prentice, R. L. (1988). Asymptotic distribution theory and efficiency results for case-cohort
studies. Annals of Statistics, 16, 64–81.
Shih, J. H. & Chatterjee, N. (2002). Analysis of survival data from case-control family studies. Biometrics,
58, 502–509.
Song, R., Zhou, H., & Kosorok, M. R. (2009). On semiparametric efficient inference for two-stage outcome-dependent sampling with a continuous outcome. Biometrika, 96, 221–228.
Subramanian, A., Sulkowski, M., Barin, B., Stablein, D., Curry, M., Nissen, N., Dove, L., Roland, M.,
Florman, S., Blumberg, E., Stosor, V., Jayaweera, D. T., Huprikar, S., Fung, J., Pruett, T., Stock, P. &

Ragni, M. (2010). MELD score is an important predictor of pretransplantation mortality in HIV-infected
liver transplant candidates. Gastroenterology, 138, 159–164.
Wang, W., Scharfstein, D., Tan, Z. & MacKenzie, E. J. (2009). Causal inference in outcome-dependent
two-phase sampling designs. Journal of the Royal Statistical Society Series B, 71, 947–969.
Wiesner, R. H., McDiarmid, S. V., Kamath, P. S., Edwards, E. B., Malinchoc, M., Kremers, W. K., Krom,
R. A. & Kim, W. R. (2001). MELD and PELD: Application of survival models to liver allocation. Liver
Transplantation, 7, 567–580.
Zhou, H., Weaver, M. A., Qin, J., Longnecker, M. P. & Wang, M. C. (2002). A semiparametric empirical
likelihood method for data from an outcome-dependent sampling scheme with a continuous outcome.
Biometrics, 58, 413–421.

Received 23 December 2012
Accepted 31 March 2014



1. INTRODUCTION
Case-control and case-cohort sampling schemes are designed to over-sample subjects expected
to provide relatively large amounts of information in terms of parameter estimation.

(* Author to whom correspondence may be addressed. E-mail: deschau@umich.edu)

In the case-control study, subjects observed to experience the event of interest (the cases) are over-sampled,
while only a fraction of subjects not experiencing the event of interest (the controls) are selected. In
the original formulation of case-control study, all cases are selected but, more generally, cases are
over-sampled, such that their sampling fraction would be many times greater than that expected
through random sampling. In the context of proportional hazards regression (Cox, 1972), the casecohort design (Prentice, 1986) was proposed, in which one selects all cases, as well as a random
sample of the study population (the sub-cohort). The case-cohort design is especially well-suited
to Cox regression since the sub-cohort representing the risk set can be used to compute partial
likelihood contributions at each observed event.
A number of methods have been proposed for the regression analysis of case-control and case-cohort studies under the proportional hazards model. Prentice & Breslow (1978) adapted the Cox model for use in case-control studies, and studied in detail the correspondence between Cox regression in the presence of prospective and retrospective sampling. Oakes (1981) provided a partial likelihood interpretation for the likelihood function used by Prentice & Breslow (1978) and, hence, a theoretical justification of the estimation procedure. Self & Prentice (1988) provided a rigorous theoretical evaluation of the case-cohort design of Prentice (1986), including detailed asymptotic derivations and various efficiency calculations. Kalbfleisch & Lawless (1988) developed pseudo-likelihood methods for semiparametric estimation of the illness-death model, applicable to case-cohort and other retrospective sampling designs. Lin & Ying (1993) considered general missing data problems in the context of Cox regression, with the case-cohort design cast as a special case. The asymptotic development by Lin & Ying (1993) leads to a different variance estimator for the Prentice (1986) case-cohort estimator than that derived by Self & Prentice (1988).

Much of the subsequent research on the case-cohort design was focussed on modifications to the sampling scheme and/or analysis in order to increase efficiency. For example, Chen & Lo (1999) derived a class of estimators for case-cohort sampling; essentially, the methods involve adding future cases to the risk set (in addition to the sub-cohort) in order to improve efficiency relative to the original method of Prentice (1986). Borgan et al. (2000) proposed stratified case-cohort designs to improve efficiency. Motivated by family-based studies, Moger, Pawitan, & Borgan (2008) developed case-cohort methods for fitting frailty models to large clustered data sets. The methods were inspired by the work of Kalbfleisch & Lawless (1988) and Borgan et al. (2000). The motivation for the work of Moger, Pawitan, & Borgan (2008) was to reduce computational burden, rather than dollar cost.
With respect to the case-control study, much of the development in the last 15 years has been directed at clustered data. For instance, in the case-control family study, one selects a random sample of cases, a random sample of controls, as well as the family members of the cases and controls. Li, Yang, & Schwartz (1998) developed parametric methods for fitting copula models to clustered survival data obtained through case-control family sampling. Shih & Chatterjee (2002) extended this work to the semiparametric setting through quasi-likelihood procedures. Hsu & Gorfine (2006) developed pseudo-likelihood methods for fitting frailty models to case-control family data using an approach similar to that of Shih & Chatterjee (2002). Hsu et al. (2004) developed semiparametric methods for frailty models based on case-control family sampling, with the focus being on the conditional regression parameter. Gorfine, Zucker, & Hsu (2009) further evaluated frailty models based on case-control family data through a pseudo-full likelihood approach, and derived rigorous large-sample theory.
There have been several evaluations of cohort sampling in a general sense. Chen & Lo (1999) described the close theoretical connection between the case-cohort and case-control study within the framework of Cox regression. Chen (2001) considered "generalized case-cohort sampling" (nested case-control, case-cohort and case-control, all within the same framework) in the context of Cox regression. Gray (2009) studied various cohort sampling designs, focusing on Inverse Probability of Selection Weighted (IPSW) estimators applied to the Cox model (e.g., Binder, 1992; Barlow, 1994; Borgan et al., 2000; Lin, 2000; Pan & Schaubel, 2008; Nan, Kalbfleisch, & Yu, 2009).

DOI: 10.1002/cjs
The motivation for subsampling of large epidemiologic databases is well-established in the
literature. Nothing prevents such data from being subject to dependent censoring. However, each
of the subsampling methods cited thus far (and, to the best of our knowledge, all existing methods
for Cox regression analysis of case-control data) assumes that subjects are censored in a manner
independent of the failure rate. In most practical applications in which the independent censoring
assumption is violated, censoring is actually a mixture of independent (e.g., administrative, or
random loss to follow-up) and dependent (e.g., censoring of pre-treatment death by the initiation of
treatment). In such cases, it is necessary to distinguish between subjects who were independently
and dependently censored. The most popular method for handling dependent censoring is Inverse
Probability of Censoring Weighting (IPCW) proposed by Robins & Rotnitzky (1992). From this
perspective, case-control studies appear to extend nicely to the dependent censoring setting. In
particular, since IPCW entails modelling the dependent censoring hazard, there is an incentive to
over-sample dependently censored subjects.
We propose semiparametric methods for the analysis of failure time data generated by case-control sampling and subject to dependent censoring. In the setting of interest, we over-sample
dependently censored subjects, since correcting for dependent censoring through IPCW requires
that the dependent censoring process be modelled. The dependent censoring hazard is modelled
using existing case-control survival analysis methods featuring Inverse Probability of Sampling
Weighting (IPSW). Parameter estimation for the failure time hazard model then proceeds through
weighted estimating equations, with the weights correcting for both the case-control sampling and
the dependent censoring. We derive asymptotic properties of the proposed estimator, in a manner
which accounts for the randomness in the estimated IPCW and IPSW components. Since the
derived asymptotic variance is complicated, we suggest an alternative variance estimator that is
much easier to program and much faster to compute. Finite-sample performance of the proposed
procedures is evaluated through simulation.
In the evaluation of a proposed subsampling method, it is useful to assess not only if the method
can be implemented in a real data set, but, in addition, whether the method gives reasonable answers
in practice. Along these lines, we apply the proposed methods to an analysis of pre-transplant
survival among end-stage liver disease (ESLD) patients. In this application, the full cohort is
available, meaning that the results based on the full-cohort can serve as a target for the results
obtained through the proposed case-control sampling methods. With respect to background, liver
transplantation is the preferred method of treatment. However, there are thousands more patients
needing a liver transplant than there are available donor organs. As such, patients clinically suitable
for receiving a deceased-donor liver transplant are put on a wait-list, with the pertinent time origin
(t = 0) then being the date of wait listing. The receipt of a liver transplant does not censor death
but, naturally, does censor pre-transplant death (i.e., the time until death in the absence of liver
transplantation). In liver transplantation, patients are ranked on the wait-list based on their most
recent Model for End-stage Liver Disease (MELD) score. MELD was chosen as the scoring system
largely because it is a very strong predictor of pre-transplant mortality (e.g., Wiesner et al., 2001;
Kremers et al., 2004; Huo et al., 2005; Merion et al., 2005; Basto et al., 2008; Subramanian et al.,
2010). Since MELD is time-dependent, an analysis of the effect of baseline risk factors (i.e.,
measured at time t = 0) on the pre-transplant death hazard could result in substantial bias if liver
transplantation were treated as independent censoring. This and other related issues in the liver
transplant setting are discussed by Schaubel et al. (2009).
In certain cases, special exceptions are made under which a wait-listed patient may be assigned
a MELD score which is higher than that calculated, in an attempt to reflect the patient's perceived medical urgency. The most frequent occurrence of such MELD exceptions is for patients with
hepatocellular carcinoma (HCC), a form of liver cancer. HCC patients are typically assigned a MELD score of 22, which is often considerably higher than the score based on their laboratory measures. To our knowledge, no existing analyses in the liver transplant literature have quantified whether the MELD score of 22 accurately reflects the true "MELD-equivalent" wait-list mortality risk faced by HCC patients. As a primary example in this article, we compare HCC versus non-HCC patients, with the latter group categorized by baseline MELD score. Since MELD (at time 0 but, also, after time 0) affects both death and liver transplantation probabilities, liver transplantation is handled as dependent censoring of pre-transplant death time.
It should be noted that MELD exception scores for HCC are usually assigned at initial wait listing (time 0). Therefore, given our analytic objective and in the interests of interpretation, it is appropriate to compare HCC patients to non-HCC patients, with the latter categorized specifically by time 0 MELD score. To use MELD at time > 0 would be tantamount to having the HCC and comparator patients established at different times, which would render the resulting parameter estimates uninterpretable. More generally, the perils of adjusting for time-dependent factors (such as MELD at time > 0) are well described in the causal inference literature (e.g., Hernán, Brumback, & Robins, 2000). Another general setting in which adjustment for time-dependent factors is explicitly avoided is that involving a mediator. In evaluating a baseline factor (e.g., treatment assignment), adjusting for the mediator (e.g., through a time-dependent indicator covariate) would generally distort the resulting treatment effect; it might then be necessary to treat the mediator as dependent censoring.
The remainder of this article is organized as follows. In Section 2, we describe the proposed estimation procedures. In Section 3, we derive large sample properties for the proposed estimators. We conduct simulation studies in Section 4 to investigate the finite-sample properties of the proposed estimators. Section 5 provides an application of the methods to the afore-described end-stage liver disease data. The article concludes with a discussion in Section 6.
2. METHODS
Let $Z_{1i}$ denote the $q_1$-vector of time-constant covariates for subject $i$ ($i = 1, \ldots, n$). Let $Z_{2i}(t)$ be the $q_2$-vector of time-dependent covariates at time $t$, $Z_i(t) = \{Z_{1i}^T, Z_{2i}(t)^T\}^T$, and let $\bar{Z}_i(t) = \{Z_i(u) : 0 \le u \le t\}$ denote the history of $Z_i(\cdot)$ up to time $t$. Let $T_i$ and $C_i$ be the potential failure and censoring times, respectively. We suppose that $C_i = C_{1i} \wedge C_{2i}$, where $a \wedge b = \min\{a, b\}$, $C_{1i}$ is the censoring time due to mechanisms that are independent of $T_i$ given $Z_i(0)$, and $C_{2i}$ denotes the dependent censoring time; that is, $C_{2i}$ is dependent on $T_i$ given $Z_i(0)$. Let $X_i = T_i \wedge C_i$, $Y_i(t) = I(X_i \ge t)$, $\Delta_{1i} = I(T_i \le C_i)$, $\Delta_{2i} = I(C_{2i} \le C_{1i}, C_{2i} < T_i)$, $\Delta_{3i} = (1 - \Delta_{1i})(1 - \Delta_{2i})$, $N_i(t) = I(X_i \le t, \Delta_{1i} = 1)$ and $N_i^C(t) = I(X_i \le t, \Delta_{2i} = 1)$, where $I(\cdot)$ is the indicator function. The observable data are assumed to be $n$ independently and identically distributed copies of $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$. Let $\delta_i$ indicate whether or not subject $i$ is sampled. The variate $\delta_i$ is allowed to depend on $\Delta_{1i}$, $\Delta_{2i}$ and $\Delta_{3i}$, so that the sampling probability can be different for subjects who fail, subjects who are dependently censored and those who are independently censored. Let the cohort be divided into three strata according to the outcome $(\Delta_1, \Delta_2, \Delta_3)$, such that $L_k = \{i : \Delta_{ki} = 1\}$, $k = 1, 2, 3$. Let $p_k = \mathrm{pr}(\delta_i = 1 \mid i \in L_k)$, $\mathbf{p} = (p_1, p_2, p_3)^T$ and $\pi_i(\mathbf{p}) = \sum_{k=1}^{3} \Delta_{ki}\, \delta_i / p_k$. Note that $\pi_i(\mathbf{p})$ weights the $i$th subject by the inverse of the probability that the subject is sampled.
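As a concrete illustration (ours, not from the paper), the inverse-probability-of-sampling weight $\pi_i(\mathbf{p})$ reduces to a simple lookup: a sampled subject in stratum $k$ receives weight $1/p_k$, and an unsampled subject receives weight 0 and drops out of all weighted sums. The stratum labels and probabilities below are hypothetical.

```python
def ipsw_weight(stratum, sampled, p):
    """Inverse probability of sampling weight pi_i(p).

    stratum: k in {1, 2, 3} (failure, dependently censored,
             independently censored); sampled: delta_i in {0, 1};
    p: dict mapping stratum k -> sampling probability p_k.
    """
    return sampled / p[stratum]

# A sampled failure (stratum 1) drawn with probability 0.15 stands in
# for itself plus the unsampled subjects in its stratum.
w = ipsw_weight(1, 1, {1: 0.15, 2: 0.15, 3: 0.15})
```

With equal stratum probabilities the weights are constant among sampled subjects; the design becomes informative when the $p_k$ differ across strata.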
We assume that the hazard of failure for individual $i$ is specified by the following proportional hazards model (Cox, 1972),
$$\lambda_i\{t \mid Z_i(0)\} = \lambda_0(t) \exp\{\beta_0^T Z_i(0)\}, \qquad (1)$$
where $\lambda_0(t)$ is an unspecified baseline hazard function for failure time, and $\beta_0$ is a $(q_1 + q_2)$-dimensional regression parameter. Note that we are chiefly interested in inferring the role of $Z_i(0)$ on the failure time hazard, as opposed to $\{Z_i(t) : t > 0\}$, for reasons of interpretation. For example, it is straightforward to predict survival probability from time 0 using a pre-specified value of $Z_i(0)$ along with parameter estimates from model (1). To do so using a model based on $\bar{Z}_i(t)$ would be much more complicated, and would generally require modelling the stochastic properties of the process $Z_i(t)$.
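To illustrate the prediction step just mentioned: under model (1), $S(t \mid Z(0)) = \exp[-\Lambda_0(t)\exp\{\beta^T Z(0)\}]$, so a survival probability follows directly from estimates of $\beta_0$ and the cumulative baseline hazard. A minimal sketch with illustrative (not paper-derived) numbers:

```python
import math

def predict_survival(cum_base_haz, beta, z0):
    """S(t | Z(0)) = exp[-Lambda_0(t) * exp{beta' Z(0)}] under model (1)."""
    linpred = sum(b * z for b, z in zip(beta, z0))
    return math.exp(-cum_base_haz * math.exp(linpred))

# Hypothetical estimates: Lambda_0(t) = 0.2 at the time of interest,
# beta = (0.406, 0.406), baseline covariates Z(0) = (1, 0).
s = predict_survival(0.2, [0.406, 0.406], [1, 0])
```

The same calculation based on the full covariate history $\bar{Z}_i(t)$ would require simulating or modelling the covariate process itself, which is the complication the paragraph above alludes to.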
If it were also the case that $C_{2i}$ was independent of $T_i$ given $Z_i(0)$ (unlike the setting of interest), then $\beta_0$ could be consistently estimated by $\hat{\beta}$, the root of the estimating equation $U(\beta) = 0$, where
$$U(\beta) = \sum_{i=1}^{n} \int_0^{\tau} \pi_i(\mathbf{p})\{Z_i(0) - \bar{Z}(\beta, t)\}\, dN_i(t), \qquad (2)$$
where $\tau < \infty$ is the maximum follow-up time, $\bar{Z}(\beta, t) = S^{(1)}(\beta, t)/S^{(0)}(\beta, t)$, and $S^{(d)}(\beta, t) = \sum_{i=1}^{n} \pi_i(\mathbf{p}) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\}$, with $a^{\otimes 0} = 1$, $a^{\otimes 1} = a$, and $a^{\otimes 2} = a a^T$. Estimating equations of the same general structure as (2) and arising from IPSW have been proposed by several previous authors, for example, Kalbfleisch & Lawless (1988), Binder (1992), Borgan et al. (2000) and Lin (2000) for the Cox model; Kulich & Lin (2000) for the additive hazards model; and Nan, Kalbfleisch, & Yu (2009) for the accelerated failure time model.
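To make the structure of (2) concrete, the sketch below solves a weighted Cox score of this form by Newton-Raphson for a single scalar covariate, with time-constant weights playing the role of $\pi_i(\mathbf{p})$. This is our own toy implementation under simplifying assumptions (no ties handling beyond Breslow-style risk sets), not the authors' software.

```python
import math

def weighted_cox_score(beta, times, events, z, w):
    """Score U(beta) and its derivative for a scalar-covariate weighted Cox model."""
    u, du = 0.0, 0.0
    for i, ti in enumerate(times):
        if not events[i]:
            continue
        s0 = s1 = s2 = 0.0
        for j in range(len(times)):
            if times[j] >= ti:                      # risk set at event time ti
                e = w[j] * math.exp(beta * z[j])
                s0 += e
                s1 += e * z[j]
                s2 += e * z[j] ** 2
        zbar = s1 / s0                              # weighted risk-set average of z
        u += w[i] * (z[i] - zbar)
        du -= w[i] * (s2 / s0 - zbar ** 2)          # minus weighted risk-set variance
    return u, du

def solve_beta(times, events, z, w, beta=0.0):
    """Newton-Raphson solve of U(beta) = 0."""
    for _ in range(30):
        u, du = weighted_cox_score(beta, times, events, z, w)
        beta -= u / du
    return beta
```

In practice one would use established weighted Cox software; the point here is only that (2) is an ordinary partial-likelihood score in which every subject's contribution is multiplied by its sampling weight.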
Since $Z_i(t)$ affects both the event and censoring times, and since $Z_i(t)$ is not incorporated into model (1), $C_{2i}$ would generally not be independent of $T_i$ given $Z_i(0)$. In this case, the estimate $\hat{\beta}$ derived from (2) could be substantially biased because (2) does not accommodate the dependence between $C_{2i}$ and $T_i$. We assume that, conditional on the covariate history $\bar{Z}_i(t)$, the hazard of dependent censoring $C_{2i}$ at time $t$ does not further depend on the possibly unobserved failure time $T_i$; that is,
$$\lambda_i^C\{t \mid \bar{Z}_i(T_i), C_i \ge t, T_i \ge t, T_i\} = \lambda_i^C\{t \mid \bar{Z}_i(t), C_i \ge t, T_i \ge t\}. \qquad (3)$$
This fundamental assumption is called "no unmeasured confounders for censoring" (Rubin, 1977; Robins, 1993). Borrowing terminology from the competing risks literature (Kalbfleisch & Prentice, 2002), assumption (3) allows us to identify the cause-specific hazard for $C_{2i}$. We assume a time-dependent Cox proportional hazards model for the right-hand side of Equation (3),
$$\lambda_i^C\{t \mid \bar{Z}_i(t), X_i \ge t\} = \lambda_0^C(t) \exp\{\alpha_0^T V_i(t)\}, \qquad (4)$$
where $\lambda_0^C(t)$ is an unspecified baseline hazard function for dependent censoring, $V_i(t)$ is an $s$-vector consisting of functions of $\bar{Z}_i(t)$, and $\alpha_0$ is an $s$-dimensional regression parameter.
We propose the following estimating function,
$$U_R(\beta) = \sum_{i=1}^{n} \int_0^{\tau} R_i(t)\{Z_i(0) - \bar{Z}_R(\beta, R, t)\}\, dN_i(t), \qquad (5)$$
where
$$\bar{Z}_R(\beta, R, t) = \frac{S^{(1)}(\beta, R, t)}{S^{(0)}(\beta, R, t)}, \qquad S^{(d)}(\beta, R, t) = n^{-1} \sum_{i=1}^{n} R_i(t) Y_i(t) Z_i(0)^{\otimes d} \exp\{\beta^T Z_i(0)\},$$
$$R_i(t) = \pi_i(\mathbf{p}) W_i(t), \qquad W_i(t) = e^{\Lambda_i^C(t)}\, \phi_i(t),$$
where $\Lambda_i^C(t) = \int_0^t \exp\{\alpha^T V_i(u)\}\, d\Lambda_0^C(u)$ and the function $\phi_i(t)$ in the weight $W_i(t)$ is a stabilization factor. We consider three choices of $\phi_i(t)$. One choice is $\phi_{1i}(t) = 1$. However, when the censoring is heavy, $e^{\Lambda_i^C(t)}$ could be quite large and lead to instability in the estimation. In this case, the choice of $\phi_{2i}(t) = \exp[-\int_0^t \exp\{\alpha^T V_i(0)\}\, d\Lambda_0^C(u)]$ or $\phi_{3i}(t) = \exp[-\Lambda_i^{\dagger}\{t \mid Z_i(0)\}]$ may be more appropriate, where $\Lambda_i^{\dagger}(t)$ is based on a time-to-censoring model that uses only the baseline covariate values, $Z_i(0)$. Hereafter, we denote $W_{ji}(t) = e^{\Lambda_i^C(t)}\, \phi_{ji}(t)$, $j = 1, 2, 3$, and correspondingly estimate $\beta_0$ with $\hat{\beta}_{W_1}$, $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$, the solutions to $U_R(\beta) = 0$ with weights $W_{1i}(t)$, $W_{2i}(t)$ and $W_{3i}(t)$, respectively.

The weight $W_{1i}(t)$ can be estimated using $\exp\{\hat{\Lambda}_i^C(t)\}$, where
$$\hat{\Lambda}_i^C(t) = \int_0^t \exp\{\hat{\alpha}^T V_i(s)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}),$$
$$\hat{\Lambda}_0^C(t, \alpha) = \sum_{i=1}^{n} \int_0^t \Big[\sum_{j=1}^{n} \pi_j(\mathbf{p}) Y_j(s) \exp\{\alpha^T V_j(s)\}\Big]^{-1} \pi_i(\mathbf{p})\, dN_i^C(s),$$
where $\hat{\alpha}$, the partial likelihood estimate of $\alpha_0$, is the root of $U_C(\alpha) = 0$, where
$$U_C(\alpha) = \sum_{i=1}^{n} \int_0^{\tau} \{V_i(t) - \bar{V}(\alpha, \mathbf{p}, t)\}\, \pi_i(\mathbf{p})\, dN_i^C(t)$$
is an IPSW-based estimating function, with $\bar{V}(\alpha, \mathbf{p}, t) = S_C^{(1)}(\alpha, \mathbf{p}, t)/S_C^{(0)}(\alpha, \mathbf{p}, t)$ and $S_C^{(d)}(\alpha, \mathbf{p}, t) = n^{-1} \sum_{i=1}^{n} \pi_i(\mathbf{p}) Y_i(t) V_i(t)^{\otimes d}\, e^{\alpha^T V_i(t)}$.

The weight $W_{2i}(t)$ can be estimated using $\hat{\phi}_{2i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{2i}(t) = \exp[-\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid Z_i(0)\}]$; that is, $\phi_{2i}(t)$ is estimated using the same model (4), but only using baseline covariate values,
$$\hat{\Lambda}_i^C\{t, \hat{\alpha} \mid Z_i(0)\} = \int_0^t \exp\{\hat{\alpha}^T V_i(0)\}\, d\hat{\Lambda}_0^C(s, \hat{\alpha}).$$

The weight $W_{3i}(t)$ can be estimated by $\hat{\phi}_{3i}(t) \exp\{\hat{\Lambda}_i^C(t)\}$, where $\hat{\phi}_{3i}(t) = \exp\{-\hat{\Lambda}_i^{\dagger}(t)\}$, with $\hat{\phi}_{3i}(t)$ estimated using an additional baseline model for $C_{2i}$,
$$\tilde{\lambda}_i\{t \mid Z_i(0), C_i \ge t, T_i, T_i \ge t\} = \tilde{\lambda}_0(t) \exp\{\alpha^{\dagger T} V_i(0)\},$$
such that we have
$$\hat{\Lambda}_i^{\dagger}(t) = \int_0^t \exp\{\hat{\alpha}^{\dagger T} V_i(0)\}\, d\hat{\Lambda}_0^{\dagger}(s, \hat{\alpha}^{\dagger}),$$
$$\hat{\Lambda}_0^{\dagger}(t, \alpha^{\dagger}) = \sum_{i=1}^{n} \int_0^t \Big[\sum_{j=1}^{n} \pi_j(\mathbf{p}) Y_j(s) \exp\{\alpha^{\dagger T} V_j(0)\}\Big]^{-1} \pi_i(\mathbf{p})\, dN_i^C(s),$$
and $\hat{\alpha}^{\dagger}$ is the partial likelihood estimate of $\alpha^{\dagger}$ under the model for dependent censoring with hazard $\tilde{\lambda}_i(t)$. Weight stabilizers analogous to $\phi_{3i}(t)$ have been suggested, for example, by Robins & Finkelstein (2000) and Hernán, Brumback, & Robins (2000). We propose the stabilizer $\phi_{2i}(t)$ as an alternative. The performance of each of $W_{1i}(t)$, $W_{2i}(t)$ and $W_{3i}(t)$ is compared through the simulation studies described in Section 4.

To summarize, $\hat{\beta}_{W_j}$ ($j = 1, 2, 3$) is computed as the root of the estimating equation in (5), with $R_i(t)$ replaced by $\hat{R}_i(t)$. After estimating $\beta_0$, the cumulative baseline hazard, $\Lambda_0(t)$, can then be estimated by
$$\hat{\Lambda}_0^W(t) = \sum_{i=1}^{n} \int_0^t \Big[\sum_{\ell=1}^{n} \hat{R}_\ell(s) Y_\ell(s) \exp\{\hat{\beta}^T Z_\ell(0)\}\Big]^{-1} \hat{R}_i(s)\, dN_i(s).$$
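The unstabilized IPCW factor $\exp\{\hat{\Lambda}_i^C(t)\}$ can be assembled from a Breslow-type, IPSW-weighted estimate of the baseline dependent-censoring hazard, as in the displayed formula for $\hat{\Lambda}_0^C(t, \alpha)$. The toy sketch below assumes a time-constant $V_i$ and treats $\hat{\alpha}$ as given; the data and estimates are hypothetical, and it is our illustration rather than the authors' code.

```python
import math

def breslow_increments(times, dep_cens, v, pi, alpha):
    """Jumps of Lambda_0^C at each dependent-censoring time (weighted Breslow)."""
    inc = {}
    for i, ti in enumerate(times):
        if dep_cens[i]:
            # IPSW-weighted risk-set denominator at time ti
            denom = sum(pi[j] * math.exp(alpha * v[j])
                        for j in range(len(times)) if times[j] >= ti)
            inc[ti] = inc.get(ti, 0.0) + pi[i] / denom
    return inc

def ipcw_weight(t, vi, alpha, inc):
    """W_1i(t) = exp{Lambda_i^C(t)}, accumulating e^{alpha v_i} dLambda_0^C(s) for s <= t."""
    lam = sum(math.exp(alpha * vi) * d for s, d in inc.items() if s <= t)
    return math.exp(lam)
```

With a genuinely time-dependent $V_i(t)$, the accumulation step would evaluate $V_i(s)$ at each jump time $s$ rather than using a single value, but the bookkeeping is otherwise identical.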

3. ASYMPTOTIC PROPERTIES
The following conditions are assumed throughout this section.

(a) $\{N_i(\cdot), N_i^C(\cdot), Y_i(\cdot), Z_i(\cdot)\}$, $i = 1, \ldots, n$, are independently and identically distributed.
(b) $P\{Y_i(\tau) = 1\} > 0$.
(c) $|Z_{ij}(0)| + \int_0^{\tau} |dZ_{ij}(t)| < B_Z < \infty$, where $Z_{ij}$ is the $j$th component of $Z_i$ and $B_Z$ is a constant.
(d) There exists a neighbourhood $\mathcal{B}$ of $\beta_0$ such that $\sup_{u \in [0,\tau],\, \beta \in \mathcal{B}} \|S^{(d)}(\beta, R, u) - s^{(d)}(\beta, R, u)\| \to 0$ in probability for $d = 0, 1, 2$, where $s^{(d)}(\beta, R, u) = E\{S^{(d)}(\beta, R, u)\}$ is absolutely continuous, for $\beta \in \mathcal{B}$, uniformly in $u \in (0, \tau]$, with $E(\cdot)$ denoting expectation. Moreover, $s^{(0)}(\beta, R, u)$ is assumed to be bounded away from zero.
(e) There exists a neighbourhood $\mathcal{B}_C$ of $\alpha_0$ such that $\sup_{u \in [0,\tau],\, \alpha \in \mathcal{B}_C} \|S_C^{(d)}(\alpha, \mathbf{p}, u) - s_C^{(d)}(\alpha, u)\| \to 0$ in probability for $d = 0, 1, 2$, where $s_C^{(d)}(\alpha, u) = E\{S_C^{(d)}(\alpha, \mathbf{p}, u)\}$ is absolutely continuous, for $\alpha \in \mathcal{B}_C$, uniformly in $u \in (0, \tau]$. Moreover, $s_C^{(0)}(\alpha, u)$ is assumed to be bounded away from zero.
(f) The matrices $A(\beta_0)$ and $A_C(\alpha_0)$ are positive definite, where
$$A(\beta) = \int_0^{\tau} \big[s^{(2)}(\beta, R, u)/s^{(0)}(\beta, R, u) - \bar{z}(\beta, R, u)^{\otimes 2}\big]\, dF(u),$$
$$A_C(\alpha) = \int_0^{\tau} \big[s_C^{(2)}(\alpha, u)/s_C^{(0)}(\alpha, u) - \bar{v}(\alpha, u)^{\otimes 2}\big]\, dF^C(u),$$
with $\bar{z}(\beta, R, u) = s^{(1)}(\beta, R, u)/s^{(0)}(\beta, R, u)$, $\bar{v}(\alpha, u) = s_C^{(1)}(\alpha, u)/s_C^{(0)}(\alpha, u)$, $F(u) = E\{R_i(u) N_i(u)\}$, and $F^C(u) = E\{\pi_i(\mathbf{p}_0) N_i^C(u)\}$.
(g) $\Lambda_0(\tau) < \infty$, $\Lambda_0^C(\tau) < \infty$.
We describe the asymptotic properties of the proposed estimators in the following theorems.

Theorem 1. Under Conditions (a)-(g), as $n \to \infty$, $n^{1/2}(\hat{\alpha} - \alpha_0)$ converges to a mean-zero Normal distribution with covariance $A_C(\alpha_0)^{-1} \Sigma_C(\alpha_0) A_C(\alpha_0)^{-1}$, where $A_C(\alpha_0)$ is as defined in Condition (f) and $\Sigma_C(\alpha) = E\{\xi_i(\alpha, \mathbf{p})^{\otimes 2}\}$, with the $\xi_i(\alpha, \mathbf{p})$ being asymptotically independent and identically distributed for $i = 1, \ldots, n$; we defer the definition of $\xi_i(\alpha, \mathbf{p})$ to the Supplementary Materials document.

In the Supplementary Materials document, we show that $n^{1/2}(\hat{\alpha} - \alpha_0) = A_C(\alpha_0)^{-1} n^{-1/2} \sum_{i=1}^{n} \xi_i(\alpha_0, \mathbf{p}_0) + o_p(1)$; hence, $n^{1/2}(\hat{\alpha} - \alpha_0)$ is essentially a scaled sum of $n$ independent and identically distributed random quantities with mean zero and finite variance. By the Multivariate Central Limit Theorem (MCLT) and empirical process theory, the asymptotic normality is proved.

Theorem 2. Under Conditions (a)-(g), as $n \to \infty$, $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ converges to a mean-zero Normal distribution with covariance $A(\beta_0)^{-1} \Sigma(\beta_0, R) A(\beta_0)^{-1}$, with $A(\beta)$ having been defined in Condition (f) and where $\Sigma(\beta, R) = E\{\eta_i(\beta, R)^{\otimes 2}\}$, with the $\eta_i(\beta, R)$ being asymptotically independent and identically distributed mean-zero variates ($i = 1, \ldots, n$). The explicit definition of $\eta_i(\beta, R)$ is provided in the Supplementary Materials.

The proof of Theorem 2 (provided in the Supplementary Materials) begins by decomposing $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ into $n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, \hat{\mathbf{p}}) - \hat{\Lambda}_0^C(t; \hat{\alpha}, \mathbf{p}_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \hat{\alpha}, \mathbf{p}_0) - \hat{\Lambda}_0^C(t; \alpha_0, \mathbf{p}_0)\} + n^{1/2}\{\hat{\Lambda}_0^C(t; \alpha_0, \mathbf{p}_0) - \Lambda_0^C(t)\}$. Then $n^{1/2}\{\hat{\Lambda}_0^C(t) - \Lambda_0^C(t)\}$ can be expressed asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$. Combining this result and the Functional Delta Method, we can show that $n^{1/2}\{\hat{R}_i(t) - R_i(t)\}$ can be written asymptotically as a sum of independent and identically distributed zero-mean variates, as $n \to \infty$. Finally, through the Functional Delta Method, the asymptotic normality of $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$ is obtained.

The expression for the asymptotic covariance of $\hat{\beta}_{W_1}$ is very complicated and difficult to implement numerically. A practical way to estimate the variance of the proposed estimators is to treat the weights $R_i(t)$ as known rather than estimated. In the setting where the weight function is known, results derived in the Supplementary Materials show that
$$n^{1/2}(\hat{\beta} - \beta_0) = A(\beta_0)^{-1} n^{-1/2} \sum_{i=1}^{n} U_i(\beta_0, R) + o_p(1), \qquad (6)$$
with $U_i(\beta_0, R) = \int_0^{\tau} \{Z_i(0) - \bar{z}(\beta, R, t)\} R_i(t)\, dM_i(t)$ and $dM_i(t) = dN_i(t) - Y_i(t)\, d\Lambda_i(t)$. Hence, $n^{1/2}(\hat{\beta} - \beta_0)$ is asymptotically a scaled sum of independent and identically distributed zero-mean random quantities with finite variance. Therefore, the variance of $\hat{\beta}_{W_1}$ is estimated by $\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, where $\Sigma^{\ddagger}(\beta, R) = E\{U_i(\beta, R)^{\otimes 2}\}$, and $\hat{A}(\hat{\beta})$ and $\hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R})$ are calculated by replacing limiting values with their corresponding empirical counterparts.

By similar arguments, asymptotic normality holds for $n^{1/2}(\hat{\beta}_{W_2} - \beta_0)$ and $n^{1/2}(\hat{\beta}_{W_3} - \beta_0)$. However, the covariance will be even more complicated than that of $n^{1/2}(\hat{\beta}_{W_1} - \beta_0)$. Therefore, similarly, we can treat $R_i(t)$ as fixed to calculate the variances of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$. Note that (6) holds when using $W_2$ or $W_3$, such that the variances of each of $\hat{\beta}_{W_2}$ and $\hat{\beta}_{W_3}$ can be estimated by $\hat{A}(\hat{\beta})^{-1} \hat{\Sigma}^{\ddagger}(\hat{\beta}, \hat{R}) \hat{A}(\hat{\beta})^{-1}$, with $\hat{R}_i(t)$ being replaced by $\pi_i(\hat{\mathbf{p}}) \hat{W}_{2i}(t)$ and $\pi_i(\hat{\mathbf{p}}) \hat{W}_{3i}(t)$, respectively.

Our final asymptotic result pertains to the proposed cumulative baseline hazard function estimator.

Theorem 3. Under Conditions (a)-(g), $n^{1/2}\{\hat{\Lambda}_0^W(t) - \Lambda_0(t)\}$ converges to a zero-mean Gaussian process as $n \to \infty$, with an explicit covariance function estimator.

A proof of Theorem 3 is provided in the Supplementary Materials, including definitions pertinent to the limiting covariance function.
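The practical variance estimator above is a standard sandwich form. As a generic numpy sketch (ours, under the assumption that the per-subject contributions $U_i$ and the empirical matrix $\hat{A}$ have already been computed, and using the $n^{-1}$ averaging convention for the middle term):

```python
import numpy as np

def sandwich_variance(A_hat, U):
    """A^{-1} Omega A^{-1}, with Omega = n^{-1} sum_i U_i U_i^T.

    A_hat: (q, q) empirical information-type matrix;
    U:     (n, q) array of per-subject estimating-function contributions.
    The overall scaling (n vs. n^2) depends on how A_hat is normalized.
    """
    n = U.shape[0]
    omega = U.T @ U / n
    A_inv = np.linalg.inv(A_hat)
    return A_inv @ omega @ A_inv
```

The appeal of treating $R_i(t)$ as fixed is visible here: only the $U_i$ need to be stored, rather than the influence-function corrections for the estimated IPCW and IPSW components.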

Table 1: Simulation study: data configurations.

  Parameter      1A      1B      2A      2B      3A      3B
  λ0             0.1     0.1     0.1     0.1     0.05    0.05
  β1             0.406   0.406   0.406   0.406   -0.406  -0.406
  β2             0.406   0.406   0.406   0.406   0.406   0.406
  λ0^C           0.1     0.1     0.05    0.05    0.05    0.05
  α1             -0.5    -0.5    -0.406  -0.406  -0.406  -0.406
  α2             0.693   0.693   1.010   1.010   1.010   1.010
  α3             0.095   0.095   0.095   0.095   0.095   0.095
  %T             51%     51%     62%     62%     27%     27%
  %C2            44%     44%     28%     28%     45%     45%
  %C1            5%      5%      10%     10%     28%     28%
  corr(T, C2)    0.13    0.13    0.20    0.20    0.27    0.27
  Full-cohort    2,500   2,500   2,500   2,500   2,500   2,500
  p1             0.15    0.10    0.15    0.10    0.15    0.10
  p2             0.15    0.10    0.15    0.10    0.15    0.10
  p3             0.15    0.10    0.15    0.10    0.15    0.10
  Case-control   375     250     375     250     375     250

4. SIMULATION STUDY
We investigated the finite-sample properties of the proposed estimators through a series of simulation studies. In each setting, the study population (i.e., the full cohort, prior to case-control sampling) was of size $n = 2{,}500$. The baseline covariate, to be used in the failure time hazard model, consisted of two independent Bernoulli variates, $Z_i(0) = [Z_{1i}, Z_{2i}(0)]^T$, where $Z_{1i}$ does not change over time. Each of $Z_{1i}$ and $Z_{2i}(0)$ took the value 1 with probability 0.5. The independent censoring times $C_{1i}$ were constant and equal to 12. After generating $U_T$ from a Uniform(0,1) distribution, the failure time $T$ was generated from a Cox model with hazard function
$$\lambda_i(t) = \lambda_0 \exp\{\beta_1 Z_{1i} + \beta_2 Z_{2i}(0)\}, \qquad (7)$$
by solving the equation $\int_0^T \lambda_0(u) \exp\{\beta_1 Z_{1i} + \beta_2 Z_{2i}(0)\}\, du = -\log U_T$ for $T$, so that $U_T$ corresponds to the survival function at $T$. The time-dependent covariate $Z_{2i}(t)$ was generated as
$$Z_{2i}(0)\, I(U_T \le 0.3) + \{Z_{2i}(0) + U_T \times \mathrm{int}(t)\}\, I(0.3 < U_T \le 0.6) + \{Z_{2i}(0) + (U_T/2) \times \mathrm{int}(t)\}\, I(U_T > 0.6), \qquad (8)$$
where $\mathrm{int}(t)$ is the integer part of $t$. The dependent censoring time $C_{2i}$ was then generated from a Cox model with hazard function
$$\lambda_i^C(t) = \lambda_0^C \exp[\alpha_1 Z_{1i} + \alpha_2 Z_{2i}(t) Z_{2i}(0) + \alpha_3 Z_{2i}(t)\{1 - Z_{2i}(0)\}]. \qquad (9)$$
We evaluated three data (full-cohort) configurations and, within each, two different case-control sampling schemes. For each of the six scenarios, 1,000 replicates were generated. Parameter specifications are given in Table 1. Note that the $W_i(t)$ component of the weight was bounded by 10 in all runs.
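Under the constant baseline hazard in (7), the inversion step has closed form: $T = -\log U_T / [\lambda_0 \exp\{\beta_1 Z_{1i} + \beta_2 Z_{2i}(0)\}]$. The failure-time step can be sketched as follows (the time-dependent covariate step (8) and censoring step (9) are omitted; this is our illustration of the standard inversion method, not the authors' simulation code):

```python
import math
import random

def generate_failure_time(lam0, beta1, beta2, z1, z20, rng):
    """Invert S(T) = U_T for the constant-baseline-hazard Cox model (7)."""
    u = rng.random()                                   # U_T ~ Uniform(0, 1)
    rate = lam0 * math.exp(beta1 * z1 + beta2 * z20)   # subject-specific hazard
    return -math.log(u) / rate                         # exponential with that rate

rng = random.Random(1)
t = generate_failure_time(0.1, 0.406, 0.406, 1, 0, rng)
```

Because the hazard is constant given the covariates, each failure time is simply exponential with the subject-specific rate, with mean equal to the reciprocal of that rate.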

Table 2: Simulation results based on 1,000 replications: Settings 1A and 1B.

  Wi(t)   Estimator   BIAS     ESD     ASE     CP

  Setting 1A: p1 = p2 = p3 = 0.15
  1       β̂1          0.059   0.152   0.152   0.928
  W1      β̂1,W1       0.054   0.169   0.169   0.930
  W2      β̂1,W2       0.015   0.152   0.154   0.955
  W3      β̂1,W3       0.014   0.150   0.152   0.957
  1       β̂2         -0.087   0.150   0.149   0.905
  W1      β̂2,W1      -0.040   0.171   0.168   0.930
  W2      β̂2,W2      -0.003   0.151   0.153   0.948
  W3      β̂2,W3      -0.002   0.150   0.152   0.950

  Setting 1B: p1 = p2 = p3 = 0.10
  1       β̂1          0.061   0.190   0.187   0.933
  W1      β̂1,W1       0.060   0.213   0.206   0.925
  W2      β̂1,W2       0.021   0.190   0.189   0.951
  W3      β̂1,W3       0.020   0.188   0.187   0.947
  1       β̂2         -0.085   0.187   0.183   0.922
  W1      β̂2,W1      -0.041   0.212   0.203   0.927
  W2      β̂2,W2      -0.005   0.185   0.187   0.947
  W3      β̂2,W3      -0.004   0.184   0.186   0.948

By the specifications above, the time-dependent covariate $Z_{2i}(t)$ is correlated with the event time $T_i$ through Equation (8). In addition, $Z_{2i}(t)$ also affects the censoring time $C_{2i}$ via model (9). Since only the baseline value of $Z_{2i}(t)$, that is, $Z_{2i}(0)$, is adjusted for in model (7), the censoring time $C_{2i}$ is dependent on $T_i$ given $Z_{1i}$ and $Z_{2i}(0)$.
For each data generation, we selected failures, dependently censored subjects and independently censored subjects by Bernoulli sampling, with $p_1 = p_2 = p_3 = 0.15$ (Settings 1A, 2A, 3A), which, on average, resulted in a case-control sample of $n = 375$ subjects; or $p_1 = p_2 = p_3 = 0.10$ (Settings 1B, 2B, 3B), which resulted in an average of $n = 250$ subjects.
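The stratified Bernoulli sampling step can be sketched as follows; each subject is kept independently with the probability attached to its outcome stratum. The stratum counts below mirror the Setting 1A percentages from Table 1 and are otherwise illustrative:

```python
import random

def bernoulli_case_control(strata, p, rng):
    """Keep subject i with probability p[k] for its stratum k in {1, 2, 3}."""
    return [int(rng.random() < p[k]) for k in strata]

rng = random.Random(2)
# 51% failures, 44% dependently censored, 5% independently censored of 2,500
strata = [1] * 1275 + [2] * 1100 + [3] * 125
selected = bernoulli_case_control(strata, {1: 0.15, 2: 0.15, 3: 0.15}, rng)
```

With a common probability of 0.15, the realized sample size varies around its expectation of 375; unequal stratum probabilities would over-sample, say, the dependently censored subjects needed for the censoring model.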
For each scenario, we report bias; empirical standard deviation (ESD) across the 1,000 replicates; average asymptotic standard error (ASE); and empirical coverage probability (CP) for 95% confidence intervals. Four estimators were evaluated, which differ with respect to the handling of the IPCW component. The first estimator did not make an adjustment for dependent censoring ($W_i(t) = 1$). The second, third and fourth estimators are the three proposed estimators, with $W_1$ being based on the unstabilized correction for dependent censoring, while $W_2$ and $W_3$ refer to the different versions of the proposed stabilized estimators based on $W_{2i}(t)$ or $W_{3i}(t)$, respectively.

Results of the simulation study are provided in Tables 2, 3 and 4. Some general conclusions are as follows. The estimator which did not account for dependent censoring is notably biased in all configurations. In some but not all cases, the bias is partly corrected by the estimator using $W_{1i}(t)$ (i.e., the unstabilized estimator); in certain cases, the bias is actually more pronounced than for the $W_i(t) = 1$ estimator. The bias due to dependent censoring is largely corrected by the stabilized versions of the proposed estimator, with the ASE being similar to the ESD and, correspondingly,

Table 3: Simulation results based on 1,000 replications: Settings 2A and 2B.

  Wi(t)   Estimator   BIAS     ESD     ASE     CP

  Setting 2A: p1 = p2 = p3 = 0.15
  1       β̂1          0.096   0.136   0.139   0.901
  W1      β̂1,W1       0.090   0.149   0.151   0.911
  W2      β̂1,W2       0.042   0.140   0.146   0.953
  W3      β̂1,W3       0.028   0.136   0.143   0.954
  1       β̂2         -0.104   0.133   0.134   0.876
  W1      β̂2,W1      -0.082   0.150   0.147   0.911
  W2      β̂2,W2      -0.034   0.138   0.141   0.937
  W3      β̂2,W3      -0.021   0.135   0.139   0.947

  Setting 2B: p1 = p2 = p3 = 0.10
  1       β̂1          0.101   0.169   0.171   0.916
  W1      β̂1,W1       0.100   0.182   0.184   0.916
  W2      β̂1,W2       0.053   0.172   0.177   0.947
  W3      β̂1,W3       0.040   0.168   0.174   0.955
  1       β̂2         -0.102   0.167   0.164   0.912
  W1      β̂2,W1      -0.083   0.184   0.179   0.926
  W2      β̂2,W2      -0.036   0.171   0.171   0.944
  W3      β̂2,W3      -0.023   0.168   0.170   0.948

coverage probabilities approximating the nominal level. Of the three proposed estimators, the stabilized version based on $W_{3i}(t)$ has the best performance in the simulation studies.
5. APPLICATION
We applied the proposed methods to analyse pre-transplant mortality for patients with end-stage liver disease (ESLD). Data were obtained from the Scientific Registry of Transplant Recipients (SRTR). The study population consisted of patients who were initially wait-listed for liver transplantation in the United States at age ≥ 18 between March 1, 2002 and December 31, 2008. Patients were followed from the date of initial wait-listing until the earliest of death, receipt of a liver transplant, loss to follow-up, or the last day of the observation period (December 31, 2008). The time scale was in days. The primary outcome of interest is pre-transplant mortality. Loss to follow-up, living-donor transplantation and administrative censoring were considered to be independent censoring. Dependent censoring occurred through deceased-donor liver transplantation.

The Model for End-stage Liver Disease (MELD) score is time-dependent and is updated at a frequency that ranges from weekly to yearly, depending on the last reported MELD. In the current liver allocation system, patients are ordered on the wait-list primarily by descending MELD. That is, patients with higher MELD are considered to be at greater medical urgency and, therefore, get higher priority for transplantation. However, for hepatocellular carcinoma (HCC) patients, the calculated MELD based on laboratory measures has generally been thought to understate actual medical urgency. As such, a MELD score of 22 is usually assigned to an
SCHAUBEL ET AL. | The Canadian Journal of Statistics / La revue canadienne de statistique, Vol. 42, No. 3 | DOI: 10.1002/cjs

Table 4: Simulation results based on 1,000 replications: settings 3A–3B.

Setting 3A: p1 = p2 = p3 = 0.15

Wi(t)   Estimator   BIAS      ESD     ASE     CP
1       β̂1           0.065   0.200   0.202   0.945
W1      β̂1^W1        0.119   0.235   0.231   0.909
W2      β̂1^W2        0.044   0.220   0.215   0.938
W3      β̂1^W3        0.015   0.207   0.205   0.952
1       β̂2          -0.260   0.216   0.206   0.757
W1      β̂2^W1       -0.173   0.248   0.240   0.881
W2      β̂2^W2       -0.065   0.241   0.230   0.935
W3      β̂2^W3       -0.029   0.231   0.222   0.943

Setting 3B: p1 = p2 = p3 = 0.10

Wi(t)   Estimator   BIAS      ESD     ASE     CP
1       β̂1           0.064   0.252   0.248   0.935
W1      β̂1^W1        0.120   0.296   0.280   0.906
W2      β̂1^W2        0.045   0.275   0.262   0.926
W3      β̂1^W3        0.017   0.259   0.252   0.944
1       β̂2          -0.259   0.264   0.255   0.825
W1      β̂2^W1       -0.170   0.297   0.292   0.905
W2      β̂2^W2       -0.064   0.288   0.280   0.941
W3      β̂2^W3       -0.028   0.279   0.273   0.955

HCC patient if the laboratory MELD is less than 22. The primary objective of our analysis is to determine which range of (calculated) MELD scores is actually consistent with the HCC pre-transplant mortality hazard.
The factor of interest is, in a general sense, underlying liver disease at wait-listing (the time origin), classified as hepatocellular carcinoma (HCC) versus not. The objective is to determine the MELD score category to which pre-transplant HCC mortality is equivalent. Therefore, as will be detailed later in this section, the pre-transplant death model has HCC as the reference category, to which all non-HCC patients (classified by baseline MELD category) are compared. The use of baseline (i.e., t = 0) MELD score is consistent with the HCC diagnosis being applied at baseline. Note that the choice of baseline versus time-dependent MELD depends on the analytic objectives. In Section 6, we describe a setting where use of time-dependent MELD is indicated.
Many studies have shown that MELD is the dominant risk factor for liver pre-transplant mortality. Moreover, as stated in the previous paragraph, MELD also strongly affects the liver transplant hazard. Therefore, since the death model does not adjust for time-dependent MELD, pre-transplant mortality and deceased-donor liver transplantation will be correlated, such that it is necessary to account for liver transplantation as dependent censoring.
It is necessary to account for geography in our analysis. There are 60 Organ Procurement Organizations (OPOs) in the United States, each of which maintains a liver transplant wait-list covering, on average, a population roughly the size of a state. Adjustment for OPO is important since it reflects geography, which can affect mortality in ways not reflected by the remainder of the adjustment covariates. With respect to transplantation, the transplant hazard differs markedly by

Table 5: Sampling design for the analysis of liver pre-transplant mortality.

Patients      OPO size      Deaths   Transplants   Censored     Total

Full-cohort
HCC           All              418         3,511        546      4,475
non-HCC       ≤400             418         1,499        490      2,407
non-HCC       401–1,300      3,676        10,816      5,230     19,722
non-HCC       >1,300         6,076        12,794     10,477     29,347
Total                       10,588        28,620     16,743     55,951

Sampling fractions
HCC           All                1             1          1
non-HCC       ≤400               1             1          1
non-HCC       401–1,300       0.30          0.10       0.10
non-HCC       >1,300          0.15          0.10       0.10

Case-control sample
HCC           All              418         3,511        546      4,475
non-HCC       ≤400             418         1,499        490      2,407
non-HCC       401–1,300      1,079         1,075        496      2,650
non-HCC       >1,300           893         1,295      1,018      3,206
Total                        2,808         7,380      2,550     12,738

OPO owing in part to differences in organ availability. For both the time-to-death and time-to-transplant models, we adjusted for OPO through stratification, since it may not be appropriate to assume proportionality with respect to the 60 OPO-specific baseline transplant or death hazard functions.
Specifics regarding the full cohort (n = 55,951), the sampling fractions and the case-control sample are provided in Table 5. Since a relatively small fraction of the full cohort was diagnosed with HCC (8%; 4,475 patients), HCC patients were over-sampled. In addition, we wanted sufficient representation from OPOs of various sizes in the case-control sample. We classified the size of an OPO by the number of patients wait-listed in that OPO in the full cohort. Random sampling within end-point would lead to inadequate representation of patients from smaller OPOs; therefore, we assigned greater (lesser) sampling fractions to the small (large) OPOs. After Bernoulli sampling, the case-control sample consisted of 12,738 patients.
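To make the sampling mechanics concrete, here is a minimal sketch of stratified Bernoulli sampling with inverse-probability-of-sampling weights (IPSW), using the fractions in Table 5. The data layout and function names are our own illustration, not the authors' code.

```python
import random

# Sampling fractions from Table 5, indexed by (HCC status, OPO size) and
# end-point.  All HCC patients and all patients in small OPOs are retained.
FRACTIONS = {
    ("HCC", "any"):          {"death": 1.00, "transplant": 1.00, "censored": 1.00},
    ("non-HCC", "<=400"):    {"death": 1.00, "transplant": 1.00, "censored": 1.00},
    ("non-HCC", "401-1300"): {"death": 0.30, "transplant": 0.10, "censored": 0.10},
    ("non-HCC", ">1300"):    {"death": 0.15, "transplant": 0.10, "censored": 0.10},
}

def sampling_fraction(hcc, opo_size, endpoint):
    """Look up the Bernoulli selection probability for one subject."""
    key = ("HCC", "any") if hcc else ("non-HCC", opo_size)
    return FRACTIONS[key][endpoint]

def bernoulli_sample(cohort, rng):
    """Select subjects independently; each kept subject gets IPSW weight 1/p."""
    sample = []
    for subj in cohort:
        p = sampling_fraction(subj["hcc"], subj["opo_size"], subj["endpoint"])
        if rng.random() < p:
            sample.append(dict(subj, ipsw=1.0 / p))
    return sample
```

Under this scheme, a sampled non-HCC transplant from a mid-sized OPO is retained with probability 0.10 and therefore carries IPSW weight 10 in the estimating equations.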
There are additional issues regarding the data structure which must be taken into account. In particular, a patient who is too sick to receive a transplant can be inactivated (usually a temporary measure) or removed (permanently) from the wait-list. During these intervals, the patient is ineligible to receive a transplant. Therefore, an appropriate Cox model in this setting is given by the following,
λ_ij^C(t) = A_i(t) λ_0j^C(t) exp{α^T V_i(t)},          (10)


Table 6: Analysis comparing HCC (assigned MELD of 22; reference) versus non-HCC (by lab MELD) pre-transplant mortality.

                     Case-control                Case-control                Full-cohort
                     Unweighted: W = 1           Weighted: W2                Weighted: W2
Patients   MELD      β̂       SE     p            β̂       SE     p            β̂       SE     p

HCC        "22"       0                           0                           0
non-HCC    6–8       -1.07   0.12   <0.0001      -1.02   0.15   <0.0001      -0.81   0.09   <0.0001
           9–11      -0.60   0.09   <0.0001      -0.48   0.11   <0.0001      -0.50   0.08   <0.0001
           12–13     -0.53   0.09   <0.0001      -0.40   0.11   0.0002       -0.24   0.08   0.002
           14–15     -0.25   0.09   0.005         0.002  0.11   0.98          0.01   0.08   0.88
           16–17      0.10   0.10   0.32          0.24   0.12   0.0498        0.24   0.08   0.004
           18–19      0.26   0.12   0.02          0.63   0.15   <0.0001       0.52   0.09   <0.0001
           20–22      0.55   0.11   <0.0001       0.83   0.12   <0.0001       0.81   0.09   <0.0001
           23–24      0.90   0.17   <0.0001       1.41   0.21   <0.0001       1.12   0.11   <0.0001
           25–29      1.59   0.16   <0.0001       1.93   0.19   <0.0001       1.44   0.11   <0.0001
           30–39      2.38   0.16   <0.0001       2.69   0.17   <0.0001       1.86   0.12   <0.0001
           40         3.68   0.29   <0.0001       3.65   0.37   <0.0001       1.79   0.20   <0.0001

where j denotes OPO number (j = 1, . . . , 60) and A_i(t) is an indicator of being active on the wait-list (i.e., not being deactivated or previously removed) as of time t. The time-dependent covariate vector V_i(t) includes MELD at time t (grouped into intervals: [6,8], [9,11], [12,13], [14,15], [16,17], [18,19], [20,22], [23,24], [25,29], [30,39] and 40), with HCC patients chosen as the reference group. These MELD categories have been suggested by several previous analyses; the use of a categorical version of MELD negates the need to pin down its precise functional form (which is known to not be linear). The vector V_i(t) also includes the following baseline covariates: age group, gender, race and blood type, with age less than 40, female, Caucasian and blood type O as the reference levels, respectively. Note that, for the intervals in which the patient was either inactivated or removed, the transplant hazard was treated as 0, as indicated by equation (10). As such, we delete patient subintervals with A_i(t) = 0 to fit model (10). However, since inactivated or removed patients remain at risk of pre-transplant death, patient subintervals with A_i(t) = 0 are re-included in the fitting of the pre-transplant mortality model.
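The two risk sets can be illustrated with a small counting-process sketch (our own illustration; the field names are hypothetical). Rows with A_i(t) = 0 are deleted when fitting the transplant model (10), but all rows are retained for the pre-transplant mortality model:

```python
def risk_rows(intervals, model):
    """Filter (start, stop, active, event) rows for model fitting.

    For the transplant model, rows with active == 0 contribute no transplant
    risk (the hazard in (10) is zero there), so they are deleted.  For the
    death model, all rows are kept, since inactive patients can still die.
    """
    if model == "transplant":
        return [r for r in intervals if r["active"] == 1]
    if model == "death":
        return list(intervals)
    raise ValueError(model)

# One patient's follow-up, split at the inactivation and reactivation dates:
patient = [
    {"start": 0,   "stop": 120, "active": 1, "event": 0},
    {"start": 120, "stop": 150, "active": 0, "event": 0},  # temporarily inactivated
    {"start": 150, "stop": 230, "active": 1, "event": 1},
]
```

For this patient, the middle subinterval is dropped from the transplant-model risk set but kept in the mortality-model risk set.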
The pre-transplant death model is given by

λ_ij(t) = λ_0j(t) exp{β^T Z_i(0)},          (11)

where j again represents OPO and Z_i(0) contains indicators for the afore-listed MELD categories (HCC serving as the reference), as well as the adjustment covariates used in the transplant model (10). Since the IPCW weights could be very large toward the tail of the observation time, we truncated the IPCW weights at 10.
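As a sketch of how such truncation operates (the cap of 10 is from the text, while the hazard increments below are made-up values), the unstabilized IPCW weight inverts the subject's estimated probability of remaining free of dependent censoring and is then capped:

```python
import math

def survival_from_hazard(hazard_increments):
    """K_i(t) = exp(-cumulative censoring hazard up to t), from a fitted model."""
    return math.exp(-sum(hazard_increments))

def ipcw_weight(hazard_increments, cap=10.0):
    """Unstabilized IPCW weight 1/K_i(t), truncated at `cap` (10 in the text)."""
    k = survival_from_hazard(hazard_increments)
    return min(1.0 / k, cap)
```

A subject with a modest cumulative censoring hazard gets a weight near 1, while a subject facing heavy transplant risk late in follow-up would get a very large weight were it not for the cap.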
Results of the analysis are shown in Table 6. We present results for two case-control analyses: the "unweighted" analysis (Wi(t) = 1), in which dependent censoring (liver transplantation) is treated as independent censoring; and the proposed weighted analysis, which corrects for dependent censoring through W2i(t). We also carried out a full-cohort analysis, again based on W2i(t). Table 6 shows that when the dependent censoring is treated as independent, HCC appears to

have pre-transplant mortality equal to that of MELD group 16–17 (see the bolded entries in Table 6). However, after accounting for the dependent censoring, HCC is the pre-transplant mortality equivalent of the MELD 14–15 group, a result consistent with the full-cohort analysis.
In the simulation studies of Section 4, the unstabilized weight W1i(t) was out-performed by both W2i(t) and W3i(t) in terms of empirical bias and variance. In our application, results based on W1i(t), W2i(t) and W3i(t) were all quite similar. It appeared from the simulations that W3i(t) performed somewhat better than W2i(t); in each case a reduction in bias (for W3i(t) relative to W2i(t)) was accompanied by a concomitant reduction in standard deviation. However, in the full-cohort analysis, standard errors for W2i(t) tended to be less than those for W3i(t). Hence, we chose W2i(t) for the main analysis. As mentioned previously, the IPCW component of the weight is susceptible to unduly large values, which motivates the use of a cap. We employed a cap of 10, consistent with the simulation study. In the case-control analyses, the cap affected 16,507 records for W1i(t) (1.55% of the total 1,066,630 records used); 2,616 records (0.25%) for W2i(t); and 3,351 (0.31%) for W3i(t). Full-cohort and case-control results based on W1i(t) and W3i(t) are provided in the Supplementary Materials.
The bias in the analysis which does not account for dependent censoring (Wi(t) = 1) is substantial, particularly in light of the objectives of our analysis. This is clear upon comparing the estimators based on Wi(t) = 1 and W2i(t), either through relative bias or simply the absolute difference in point estimates relative to the standard error. By either of these measures, the bias is considerable, except for the two most extreme categories: MELD 6–8 and MELD 40. Note that these two extreme categories are of least interest, since it was widely anticipated that the lowest (highest) MELD category would have much lower (higher) pre-transplant mortality risk than HCC. For the unweighted analysis, the HCC mortality equivalent is MELD 16–17, whereas for the weighted analysis, MELD 14–15 and HCC appear to have equal mortality. From this perspective, the difference between the unweighted and weighted analyses would be considered very important to the liver transplant field.

6. DISCUSSION
In this article, we propose methods for proportional hazards modelling of survival data subject to dependent censoring and obtained through case-control sampling. The proposed methods employ a double-inverse-weighting scheme, through which the proposed estimators adjust for the sampling bias and overcome dependent censoring. Simulation studies show that the proposed estimators are approximately unbiased and that our asymptotic results are applicable to finite samples. The proposed estimates can be computed using standard software that accommodates left-truncation, weighting and offsets (e.g., PROC PHREG in SAS; coxph in R).
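To illustrate what such software computes, the following is a minimal pure-Python sketch of solving a weighted Cox partial-likelihood score equation for a single covariate by Newton-Raphson. The data are toy values, and a real analysis would use coxph or PROC PHREG with the combined sampling and censoring weights.

```python
import math

def score_and_info(beta, times, events, z, w):
    """Weighted Cox score U(beta) and information I(beta), one covariate.

    U(beta) = sum over events i of w_i * (z_i - S1/S0), where
    S_k = sum over the risk set {j : T_j >= T_i} of w_j * z_j**k * exp(beta*z_j).
    """
    score = info = 0.0
    for i, t in enumerate(times):
        if not events[i]:
            continue
        s0 = s1 = s2 = 0.0
        for j, tj in enumerate(times):
            if tj >= t:                      # subject j is at risk at time t
                r = w[j] * math.exp(beta * z[j])
                s0 += r
                s1 += r * z[j]
                s2 += r * z[j] ** 2
        score += w[i] * (z[i] - s1 / s0)
        info += w[i] * (s2 / s0 - (s1 / s0) ** 2)
    return score, info

def fit_weighted_cox(times, events, z, w, tol=1e-10, max_iter=50):
    """Newton-Raphson solution of the weighted score equation U(beta) = 0."""
    beta = 0.0
    for _ in range(max_iter):
        u, info = score_and_info(beta, times, events, z, w)
        beta += u / info
        if abs(u) < tol:
            break
    return beta
```

Each event contributes its weighted covariate value minus a weighted risk-set average; the inverse weights are what remove the sampling and censoring biases from that average.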
The case-control sampling scheme we consider involves sampling either all observed failures (cases) or a randomly selected fraction of them. The censored subjects (controls) are categorized as either dependently or independently censored. There is an incentive to over-sample dependently censored subjects, so that the IPCW weights can be estimated with sufficient precision. Although the sampling scheme we study involves random sampling based only on final status (dead, censored), the methods extend in a straightforward fashion to settings in which sampling also depends on fixed covariates, a property illustrated through the application to the liver data in Section 5. Although we consider case-control sampling explicitly, the proposed methods also apply if case-cohort sampling is used instead.
One could also view the sampling scheme considered in this article as a form of outcome-dependent sampling (ODS). In an ODS design, one collects covariate information from a sample by allowing selection probabilities to depend on individuals' outcomes (e.g., death, survival). An ODS design concentrates resources on the observations carrying the greatest amount of information.

There is a large literature on analysing data arising from ODS; see, for example, Breslow & Holubkov (1997a), Zhou et al. (2002), Schildcrout & Heagerty (2008), Song, Zhou, & Kosorok (2009) and Wang et al. (2009). It would be possible to generalize the proposed methods to the ODS setting, interpreting the version we develop in this article as a censored-data-specific instance of inverse-weighted generalized estimating equations.
We propose three different weights to correct the bias induced by dependent censoring. In general, when the dependent censoring is light or moderate, the unstabilized weight W1(t) works well. However, when censoring is heavy, W1(t) may be quite large toward the tail of the observation time, resulting in unstable estimates. In this case, the stabilized weights W2(t) and W3(t) may be preferable and usually result in a less biased and more precise estimator than that arising from the unstabilized weight W1(t). With respect to the stabilized weights, we found that W3(t) tended to out-perform W2(t), although not uniformly. An apparent advantage of W2(t) is that only one dependent-censoring hazard model is required. However, details regarding the programming, and the ability to exploit functions currently available in SAS or R, make W3(t) the easier of the two stabilized weights to compute, even though a second hazard regression model is required. Overall, W3(t) would generally appear to be the preferred weight, although the stabilized weight that actually performs best is likely to be application-dependent. For instance, in our analysis of the liver disease data, W2(t) resulted in parameter estimators with much smaller standard errors than W3(t), leading to our preference for the former in this particular case.
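Numerically, stabilization tempers this tail behaviour. In the toy sketch below (the values are made up, and the paper's exact weight definitions involve fitted censoring hazard models), a W1-type weight inverts a small conditional uncensored probability, while a W2-type weight divides a marginal (covariate-free) uncensored probability by the conditional one:

```python
def unstabilized(k_cond):
    """W1-type weight: inverse of the subject-specific uncensored probability."""
    return 1.0 / k_cond

def stabilized(k_cond, k_marg):
    """W2-type weight: marginal uncensored probability over the conditional one."""
    return k_marg / k_cond

# Late in follow-up, heavy dependent censoring drives k_cond toward zero:
k_cond, k_marg = 0.04, 0.20
```

Here the unstabilized weight is 25 while the stabilized weight is 5; since both probabilities are estimated, the unstabilized weight is also far more variable late in follow-up.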
In the simulation studies, we treated the IPCW and IPSW weights as fixed to simplify the computation, which would be expected to yield conservative covariance estimators because those weights are actually estimated rather than known. However, the simulation results suggest that the ASEs obtained by treating the IPCW and IPSW weights as fixed are quite accurate. This simplification of the variance calculation would likely lead to invalid inference when the variance of the estimated weights is substantial, as would occur when the subsample is not large enough or does not contain a sufficient number of failures or dependently censored subjects. In such cases, the variability due to estimating the weight parameters would not be a negligible fraction of the total variability, meaning that the suggested approximation would be inaccurate, and coding of the full variance (or, perhaps, application of an appropriate bootstrap method) would be required.
The proposed methods require consistent estimation of the IPCW weight. Therefore, the proportional hazards model for dependent censoring should be correctly specified. This may be approximately true when a sufficient number of both baseline and time-dependent covariates are incorporated.
In our analysis of chronic liver disease patients wait-listed for liver transplantation, we found that (calculated) Model for End-stage Liver Disease scores of 14–15 were consistent with the pre-transplant mortality hazard of patients granted an exception due to hepatocellular carcinoma. The results based on case-control sampling were consistent with those from the full-cohort analysis. In contrast, if no adjustment was made for dependent censoring, it appeared that MELD 16–17 and HCC had equivalent pre-transplant mortality. Therefore, our results indicate that the current MELD exception score of 22 granted to HCC patients overstates the actual medical urgency, and that an assigned MELD score of 14 or 15 may be more appropriate. If this adjustment were actually implemented, it would considerably decrease the frequency with which HCC patients receive liver transplants. The results of our analysis currently provide the most definitive evidence that HCC patients are given inappropriately high priority for liver transplantation in the United States.
In certain settings, the factor of interest (in addition to the adjustment covariates) may be time-dependent. In our analysis of the liver disease data, HCC designations are typically made at time 0, implying that the appropriate comparison is to patients not classified as HCC at time 0. However, if we were interested in evaluating exception scores generally, then the most useful analysis which

addressed the appropriateness of (what would primarily be non-HCC) exception scores would probably use time-dependent factors in the death model. This is because a hepatologist may observe a patient clearly getting worse while the patient's MELD score does not increase, implying that the score does not accurately reflect the patient's need for liver transplantation. Such an analysis is currently under our consideration, although it appears that the proposed methods cannot readily be extended to accommodate such settings.
APPENDIX
Proofs of Theorem 2 and Theorem 3 are provided in the Supplementary Materials document.
ACKNOWLEDGEMENTS
The authors thank the Associate Editor and Referees for their constructive comments and suggestions, which resulted in a substantially improved manuscript. This work was supported in part by National Institutes of Health Grant R01 DK070869. The authors also thank the Scientific Registry of Transplant Recipients (SRTR) for providing access to the end-stage liver disease data. The SRTR is funded by a contract from the Health Resources and Services Administration, U.S. Department of Health and Human Services.
BIBLIOGRAPHY
Barlow, W. E. (1994). Robust variance estimation for the case-cohort design. Biometrics, 50, 1064–1072.
Basto, S. T., Perez, R., Villela-Nogueira, C., Fernandes, E., Barroso, A., Victor, L., Pereira, B., Ribeiro, J.
& Coelho, H. (2008). MELD as a predictor of long term mortality in liver transplantation waiting list.
Liver Transplantation, 14, S219–S219.
Binder, D. A. (1992). Fitting Cox’s proportional hazards models from survey data. Biometrika, 79, 139–147.
Borgan, Ø., Langholz, B., Samuelsen, S. O., Goldstein, L. & Pogoda, J. (2000). Exposure stratified case-cohort designs. Lifetime Data Analysis, 6, 39–58.
Breslow, N. E. & Holubkov, R. (1997a). Maximum likelihood estimation of logistic regression parameters
under two-phase outcome-dependent sampling. Journal of the Royal Statistical Society Series B, 59,
447–461.
Chen, K. (2001). Generalized case-cohort sampling. Journal of the Royal Statistical Society Series B, 63,
791–809.
Chen, K. & Lo, S. H. (1999). Case-cohort and case-control analysis with Cox’s model. Biometrika, 86,
755–764.
Cox, D. R. (1972). Regression models and life-tables (with discussion). Journal of the Royal Statistical
Society Series B, 34, 187–220.
Gorfine, M., Zucker, D. M., & Hsu, L. (2009). Case-control survival analysis with a general semiparametric
shared frailty model: A pseudo full likelihood approach. The Annals of Statistics, 37, 1489–1517.
Gray, R. J. (2009). Weighted analyses for cohort sampling designs. Lifetime Data Analysis, 15, 24–40.
Hernán, M. A., Brumback, B., & Robins, J. M. (2000). Marginal structural models to estimate the causal
effect of Zidovudine on the survival of HIV-positive men. Epidemiology, 11, 561–570.
Hsu, L. & Gorfine, M. (2006). Multivariate survival analysis for case-control family data. Biostatistics, 7,
387–398.
Hsu, L., Chen, L., Gorfine, M. & Malone, K. (2004). Semiparametric estimation of marginal hazard function
from case-control family studies. Biometrics, 60, 936–944.
Huo, T. I., Wu, J. C., Lin, H. C., Lee, F. Y., Hou, M. C., Lee, P. C., Chang, F. Y. & Lee, S. D. (2005).
Evaluation of the increase in model for end-stage liver disease (MELD) score over time as a prognostic
predictor in patients with advanced cirrhosis: risk factor analysis and comparison with initial MELD
and Child-Turcotte-Pugh score. Journal of Hepatology, 42, 826–832.
DOI: 10.1002/cjs

The Canadian Journal of Statistics / La revue canadienne de statistique

382

SCHAUBEL ET AL.

Vol. 42, No. 3

Kalbfleisch, J. D. & Lawless, J. F. (1988). Likelihood analysis of multi-state models for disease incidence
and mortality. Statistics in Medicine, 7, 149–160.
Kalbfleisch, J. D. & Prentice, R. L. (2002). The Statistical Analysis of Failure Time Data, 2nd ed., Wiley,
Hoboken, NJ.
Kremers, W. K., van IJperen, M., Kim, W. R., Freeman, R. B., Harper, A. M., Kamath, P. S. & Wiesner,
R. H. (2004). MELD score as a predictor of pretransplant and posttransplant survival in OPTN/UNOS
status 1 patients. Hepatology, 39, 764–769.
Kulich, M. & Lin, D. Y. (2000). Additive hazards regression for case-cohort studies. Biometrika, 87, 73–87.
Li, H., Yang, P., & Schwartz, A. G. (1998). Analysis of age at onset from case-control family studies.
Biometrics, 54, 1030–1039.
Lin, D. Y. (2000). On fitting Cox's proportional hazards models to survey data. Biometrika, 87, 37–47.
Lin, D. Y. & Ying, Z. (1993). Cox regression with incomplete covariate measurements. Journal of the
American Statistical Association, 88, 1341–1349.
Merion, R. M., Schaubel, D. E., Dykstra, D. M., Freeman, R. B., Port, F. K. & Wolfe, R. A. (2005). The
survival bene?t of liver transplantation. American Journal of Transplantation, 5, 307–313.
Moger, T. A., Pawitan, Y. & Borgan, Ø. (2008). Case-cohort methods for survival data on families from
routine registers. Statistics in Medicine, 27, 1062–1074.
Nan, B., Kalbfleisch, J. D., & Yu, M. (2009). Asymptotic theory for the semiparametric accelerated failure
time model with missing data. Annals of Statistics, 37, 2351–2376.
Oakes, D. (1981). Survival times: aspects of partial likelihood (with discussion). International Statistical
Review, 49, 235–264.
Pan, Q. & Schaubel, D. E. (2008). Proportional hazards models based on biased samples and estimated
selection probabilities. Canadian Journal of Statistics, 36(1), 111–127.
Prentice, R. L. (1986). A case-cohort design for epidemiologic cohort studies and disease prevention trials.
Biometrika, 73, 1–11.
Prentice, R. L. & Breslow, N. E. (1978). Retrospective studies and failure time models. Biometrika, 65,
153–158.
Robins, J. M. (1993). Information recovery and bias adjustment in proportional hazards regression analysis
of randomized trials using surrogate markers. Proceedings of the Biopharmaceutical Section, American
Statistical Association, 24–33.
Robins, J. M. & Finkelstein, D. M. (2000). Correcting for noncompliance and dependent censoring in an
AIDS clinical trial with inverse probability of censoring weighted (IPCW) log-rank tests. Biometrics,
56, 779–788.
Robins, J. M. & Rotnitzky, A. (1992). Recovery of information and adjustment for dependent censoring using
surrogate markers. In AIDS Epidemiology—Methodological Issues, Jewell N. P., Dietz K., & Farewell
V. T., editors. Birkhäuser, Boston, pp. 297–331.
Rubin, D. B. (1977). Inference and missing data. Biometrika, 63, 581–592.
Schaubel, D. E., Wolfe, R. A., Sima, C. S. & Merion, R. M. (2009). Estimating the effect of a time-dependent treatment by levels of an internal time-dependent covariate. Journal of the American Statistical
Association, 104, 49–59.
Schildcrout, J. S. & Heagerty, P. J. (2008). On outcome-dependent sampling designs for longitudinal binary
response data with time-varying covariates. Biostatistics, 9, 735–749.
Self, S. G. & Prentice, R. L. (1988). Asymptotic distribution theory and efficiency results for case-cohort
studies. Annals of Statistics, 16, 64–81.
Shih, J. H. & Chatterjee, N. (2002). Analysis of survival data from case-control family studies. Biometrics,
58, 502–509.
Song, R., Zhou, H., & Kosorok, M. R. (2009). On semiparametric efficient inference for two-stage outcome-dependent sampling with a continuous outcome. Biometrika, 96, 221–228.
Subramanian, A., Sulkowski, M., Barin, B., Stablein, D., Curry, M., Nissen, N., Dove, L., Roland, M., Florman, S., Blumberg, E., Stosor, V., Jayaweera, D. T., Huprikar, S., Fung, J., Pruett, T., Stock, P. & Ragni, M. (2010). MELD score is an important predictor of pretransplantation mortality in HIV-infected liver transplant candidates. Gastroenterology, 138, 159–164.
Wang, W., Scharfstein, D., Tan, Z. & MacKenzie, E. J. (2009). Causal inference in outcome-dependent
two-phase sampling designs. Journal of the Royal Statistical Society Series B, 71, 947–969.
Wiesner, R. H., McDiarmid, S. V., Kamath, P. S., Edwards, E. B., Malinchoc, M., Kremers, W. K., Krom,
R. A. & Kim, W. R. (2001). MELD and PELD: Application of survival models to liver allocation. Liver
Transplantation, 7, 567–580.
Zhou, H., Weaver, M. A., Qin, J., Longnecker, M. P. & Wang, M. C. (2002). A semiparametric empirical
likelihood method for data from an outcome-dependent sampling scheme with a continuous outcome.
Biometrics, 58, 413–421.

Received 23 December 2012
Accepted 31 March 2014


