Objectives: To describe the long-term results of a previously developed sirolimus-based sequential immunosuppression protocol for kidney transplant comprising 2 phases: sirolimus + cyclosporine + prednisolone for 3 months, followed by sirolimus + prednisolone + mycophenolate mofetil with steroid minimization during the first year. Two-year outcomes of patients on this protocol (group A) showed patient and graft survival equivalent to, yet graft function significantly better than, those on cyclosporine + mycophenolate mofetil + prednisolone (group B).
Materials and Methods: We report the 8-year outcomes in the same cohort (76 patients in group A and 37 in group B).
Results: Forty-two percent of patients switched from group A to B versus 43% from B to A. Intent-to-treat patient survival at 5 and 8 years was 88% and 85.5% for group A, and 78% and 73% for group B. Death-censored graft survival was 93% for group A and 95% for group B. Graft function was significantly better at 8 years, with 91% of group A patients, compared with 50% in group B, having estimated glomerular filtration rates > 45 mL/min/1.73 m2, and the incidence of chronic allograft nephropathy was significantly lower in the former. Secondary parameters, including blood pressure control, new onset diabetes mellitus, proteinuria, and other drug-related adverse events, showed no significant differences between the groups.
Conclusions: The sirolimus-based sequential immunosuppression protocol was well tolerated in 58% of patients. The intent-to-treat and patients-on-therapy analyses revealed that it was equivalent to the widely used cyclosporine + mycophenolate mofetil + prednisolone protocol regarding patient and graft survival. It was associated with better graft function and a lower incidence of chronic allograft nephropathy over 8 years of follow-up. The incidence of drug-related adverse reactions was not statistically different from that in the comparator group.
Key words: Kidney transplant, Sirolimus, Sequential immunosuppression, Calcineurin inhibitor toxicity
Introduction
The introduction of the mammalian target of rapamycin inhibitor (mTORi) sirolimus (SIR), into the arena of experimental immunosuppression has provided outstanding theoretical rationale for its use in solid-organ transplant in humans. Its main benefits in mice are powerful proliferation signal inhibition and promotion of tolerance.1-3 The latter specifically addresses the negative effect of calcineurin inhibitors (CNIs)4,5 on the interleukin-2 dependent tolerogenic pathway, a major drawback to their clinical use, despite significant overall advantages. Several early clinical trials have confirmed the efficacy of de novo cyclosporine (CyA)/SIR combination in the prevention of renal allograft rejection,6 yet at the expense of an adverse drug-drug interaction with significant nephrotoxicity. On the other hand, SIR permits fairly safe CNI minimization, elimination, or even avoidance with significant preservation or even improvement of long-term graft function.7,8
We developed a de novo SIR-based protocol in which CyA and prednisolone (P) are coadministered for the first 3 months posttransplant, and mycophenolate mofetil (MMF) + P is given thereafter. At 2 years, the mean graft function in 76 patients who received this protocol was significantly better than in 37 control patients who received CyA + MMF + P. There were no significant differences in patient or graft survival between the 2 groups.9
In this study, we report the 8-year outcomes of the same patients to determine if the benefits would be maintained in long-term follow-up.
Materials and Methods
This extension study included all 113 patients enrolled in the original trial who received kidney transplants at the Cairo Kidney Centre between July 2002 and July 2006.9 Their mean age upon enrollment was 44.7 years, and 39 were female. They were randomly assigned in a proportion of 2:1 to either group A (study group, 76 patients) or group B (controls, 37 patients). Table 1 shows the baseline characteristics; there were no statistically significant differences for any of the listed variables.
All patients had received transplants from live kidney donors after due evaluation and approval by the independent Hospital Ethics Committee and legal authorization by the official health authorities. All of the protocols conformed to the ethical guidelines of the 1975 Helsinki Declaration. Written informed consent was obtained from all subjects. Table 1 shows the donor/recipient tissue type mismatches and relevant virologic characteristics, which were not statistically different between groups. No panel reactive antibodies were detected pretransplant in any recipient and crossmatches were negative.
No antibody induction was used in either group. Group A patients were assigned to our sequential SIR-based protocol. They had received preoperative cyclosporine for 48 hours, intraoperative induction with pulse methylprednisolone, and prophylactic immunosuppression with oral SIR + CyA + P for 3 months.
The CyA dosage was tailored targeting a blood level of 600 ng/mL 2 hours after administration, and the SIR dosage was tailored to a trough target level of 5 to 10 ng/mL. Unless an acute rejection occurred, CyA was replaced by MMF, and P was gradually withdrawn or minimized to 0 to 5 mg/day by the end of the study at the attending physician’s discretion (Table 2). If an acute rejection occurred, the switch was delayed for 3 more months.
Group B patients were assigned to the conventional P + CyA + MMF protocol from the start. The target CyA level was 1600 ng/mL 2 hours after administration (C2) and was gradually reduced as shown in Table 2. As in group A, P was gradually withdrawn or minimized to 0 to 5 mg/day by the end of the study at the attending physician's discretion.
In the original study, the primary endpoints were patient and graft survival at 2 years. Graft failure was defined as either death or return to dialysis. The secondary endpoints were early and late graft function, liver function, the number of drugs needed to control blood pressure at 130/80 mm Hg, proteinuria, other complications, and adverse reactions as described in the original publication.9
In this extension study, all patients were followed up for 6 more years, at quarterly intervals for 3 years and biannually thereafter. We accommodated noncompliance by accepting data recorded within 8 weeks before or after the assigned follow-up dates. Primary follow-up parameters were patient and graft survival and graft function according to estimated glomerular filtration rate, calculated with the 4-variable Modification of Diet in Renal Disease (MDRD-4) equation. Secondary parameters included clinical evaluation, including blood pressure and peripheral edema; urinary protein/creatinine ratio; fasting blood sugar and glycated hemoglobin; blood cholesterol (with determination of high- and low-density lipoproteins) and triglycerides; peripheral blood hemoglobin, red cell counts and indices, white cell total counts and differentials, and platelet count; the liver enzymes alanine aminotransferase, aspartate aminotransferase, gamma-glutamyl transpeptidase, and alkaline phosphatase; and CyA level 2 hours after administration and/or SIR trough blood levels where applicable. Other routine or individually customized follow-up studies were available but were not included in the extension study.
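For reference, the MDRD-4 equation estimates glomerular filtration rate from serum creatinine, age, sex, and race. The sketch below is an illustrative implementation only (the 175 coefficient assumes IDMS-calibrated creatinine) and is not part of the study's software.

```python
def egfr_mdrd4(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR (mL/min/1.73 m2) by the 4-variable MDRD equation.
    Uses the 175 coefficient for IDMS-calibrated serum creatinine."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742  # sex correction factor
    if black:
        egfr *= 1.212  # race factor in the original equation
    return egfr

# A 50-year-old man with serum creatinine 1.2 mg/dL -> roughly 64 mL/min/1.73 m2
```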
Graft biopsies were obtained according to clinical indications. Acute rejection was defined and classified according to contemporary Banff criteria. While chronic histopathologic abnormalities were described as “chronic allograft nephropathy” in the original study, they were classified by an interstitial fibrosis/tubular atrophy (IFTA) score in the extension study.
All data were subjected to intent-to-treat (ITT) analysis. Because the drop-out rate was relatively high, a further patients-on-therapy (POT) per-protocol analysis was performed. Statistical analyses were carried out with SPSS software (version 20, IBM Corporation, Armonk, NY, USA). Qualitative comparisons were made using the chi-square or Fisher exact test, while quantitative data were compared using the t test. Time-to-event data (death or graft failure) were studied using Kaplan-Meier survival analyses and compared using log-rank tests. In all tests, a P value less than .05 was considered significant.
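The Kaplan-Meier product-limit estimate underlying the survival comparisons can be sketched in a few lines. This is a minimal pure-Python illustration of the estimator, not the SPSS procedure actually used in the study.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up duration (e.g., years); events: 1 = death/graft failure, 0 = censored.
    Returns (time, survival probability) pairs at each observed event."""
    # Sort by time; by convention, events precede censorings at tied times
    order = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    at_risk = len(order)
    s = 1.0
    curve = []
    for t, e in order:
        if e:  # the curve steps down only at observed events
            s *= (at_risk - 1) / at_risk
            curve.append((t, s))
        at_risk -= 1  # censored subjects also leave the risk set
    return curve
```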
Results
By the end of July 2012, 22 patients (16 from group A [21%] and 6 from group B [16%]) were lost to follow-up. The difference between the 2 groups was not significant (P = .542).
Thirty-two patients from group A (42%) were converted to the control protocol (B) for the reasons shown in Table 3 and Figure 1. Sixteen patients from group B (43%) were switched to the SIR + MMF + P protocol based on evidence of functionally significant, biopsy-confirmed CyA toxicity. The time to conversion is shown in Figure 2. There was no statistically significant difference between the 2 groups (P = .855, log-rank).
After 8 years, 8% of patients in group A and 44% in group B were still receiving steroids (P = .034), at average daily doses of 3.75 and 8.13 mg, respectively.
Twenty-one patients had died since enrollment (including those reported in the original study) (Figure 3): 11 from group A (14.5%) and 10 from group B (27%) (Table 4). The difference was not statistically significant by either ITT or POT analysis (log-rank P = .150 and .663).
Of those who remained alive, 5 patients from group A (6.5%) and 2 from group B (5.4%) lost their grafts. There was no statistically significant difference in death-censored graft survival at 8 years by either ITT or POT analysis (log-rank P = .585 and .455) (Figure 4).
Graft function at 8 years was numerically better in group A (mean 61.9, SD 17.95 mL/min/1.73 m2) compared with group B (mean 47.7, SD 27.64 mL/min/1.73 m2), but the difference was not statistically significant (P = .069). However, when graft function was stratified into chronic kidney disease stages, significantly more patients from group A were in stages II and IIIa (Figure 5).
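The chronic kidney disease stages referred to here follow the standard eGFR bands; the exact boundaries in the sketch below are the conventional KDOQI/KDIGO cutoffs, assumed for illustration rather than stated in the paper.

```python
def ckd_stage(egfr):
    """Map eGFR (mL/min/1.73 m2) to a CKD stage using conventional cutoffs."""
    if egfr >= 90:
        return "I"
    if egfr >= 60:
        return "II"
    if egfr >= 45:
        return "IIIa"
    if egfr >= 30:
        return "IIIb"
    if egfr >= 15:
        return "IV"
    return "V"

# The 8-year group means fall in different stages:
# ckd_stage(61.9) -> "II" (group A); ckd_stage(47.7) -> "IIIa" (group B)
```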
The relative frequencies of secondary follow-up parameters are shown in Table 5. None of these showed a statistically significant difference between the 2 groups by ITT or POT analysis. However, the incidence of chronic allograft nephropathy/IFTA was 13% in group A and 38% in group B (P = .003) by ITT analysis.
Discussion
The study participants were kept on the same protocol to which they were initially assigned or switched during the first 2 years.9 Less than one-fifth of patients were lost to follow-up, with no significant difference between the 2 groups, so this was unlikely to bias the analysis. Likewise, the proportions of patients switching from one group to the other were almost identical.
Because there was no significant difference in patient or graft survival between the 2 groups, the outcomes of ITT and POT analyses showed the same trend. It is noteworthy that the frequency of SIR discontinuation in our study was similar to that in the 5-year follow-up of the SPIESSER study (46%). However, that study reported less frequent CyA discontinuation (30%), which may be attributable to the use of smaller doses than in our series.8
The 2-year trends remain unchanged at 8 years. Patient and graft survival were not significantly different, while graft function was significantly better in the SIR group, despite the lower exposure to corticosteroids.
Eight-year patient survival calculated by ITT was 85.5% and 73% in groups A and B, respectively (P = .150). The mean patient survival time for protocol A was 7.18 years (95% confidence interval [CI]: 6.70-7.66 y), compared with 6.53 years (95% CI: 5.70-7.37 y) for protocol B. By POT analysis, the patient survival rates were 86% and 81% in groups A and B (P = .663). The mean patient survival time for group A was 7.10 years (95% CI: 6.40-7.79 y) compared with 6.89 years (95% CI: 5.83-7.95 y) for group B. The numeric advantage of POT outcomes may be explained by the selection bias of excluding complicated cases that required switching between protocols. Results similar to our ITT analysis were reported in a recent study of 1967 living-donor transplants using different immunosuppressive protocols; their 10-year graft survival rate was 77.8%.10
The long-term numeric survival advantage of the SIR-based protocol in our study is supported by 10-year data on 592 patients, in which patient survival was 80% in patients receiving a reduced-dose CyA + SIR combination, 73% in those receiving full-dose CyA + SIR, and only 68% in those on conventional CyA + MMF.11
On the other hand, no positive effect of SIR on patient survival was observed in other trials. For example, in a large historical cohort of United States kidney transplant recipients with 8-year follow-up,12 the risk of death was highest in patients who received mTORis without CNIs (3237 patients), intermediate in those who received a combination of mTORis and CNIs (10 510 patients), and lowest in those on CNIs without mTORis (125 623 patients). A higher risk of death also was reported in a recent meta-analysis of data on 5876 patients from 21 clinical trials, including our initial publication.13 In contrast to our cohort, neither study accounted for pretransplant cardiovascular risk factors or dormant infections, which may explain the discrepancy.12,13 Nevertheless, this issue clearly calls for further careful evaluation.
Because of a lack of consensus on patient survival outcomes under SIR treatment, we opted to limit graft survival data analysis to surviving patients. Our 8-year graft survival rates censored for patient death were 93% and 95% in groups A and B by ITT (P = .585). The mean graft survival time for protocol A was 7.58 years (95% CI: 7.23-7.94 y), compared with 7.57 years (95% CI: 6.99-8.15 y) for group B. Death-censored graft survival rates by POT analysis were 95.5% and 90.5% for groups A and B (P = .455). The mean graft survival time for group A was 7.64 years (95% CI: 7.16-8.12 y) compared with 7.24 years (95% CI: 6.24-8.24 y) for group B. These results compare favorably with those of Isakova and associates (2013), who reported a 5-year graft survival of 88.9% in patients maintained on SIR-based immunosuppression.12
On the other hand, graft function was significantly better in group A despite lower steroid exposure. Thus, the breakdown of estimated glomerular filtration rate at 8 years by ITT analysis showed that 26.3% of group A patients were in stage II chronic kidney disease compared with 18.9% of group B patients (P = .037). These observations are strikingly close to those reported in the 5-year SPIESSER study follow-up, where the proportion of patients with glomerular filtration rate > 60 mL/min/1.73 m2 was 31% in the SIR group versus 15% in the CyA group (P = .047).8
The mean estimated glomerular filtration rates in our study were 61.9 ± 17.95 mL/min/1.73 m2 in group A and 47.7 ± 27.64 mL/min/1.73 m2 in group B, a numeric difference that did not reach statistical significance (P = .069). The difference was even more impressive in the POT analysis, in which 91.7% of group A patients compared with 62.5% of group B patients had an estimated glomerular filtration rate > 60 mL/min/1.73 m2. Again, this difference did not reach statistical significance, likely because of the relatively small number of patients who completed 8 years on the same protocol.
Analysis of the secondary endpoints at 8 years yielded results similar to those reported at 2 years. Thus, the cumulative incidences of biopsy-proven acute rejection at 8 years were 30% in group A and 32% in group B (ie, averages of 3.75% and 4%/y). This reflects the usual decline in the incidence of biopsy-proven acute rejection with time; the respective values were 6.6% and 9.45%/y in our initial 2-year report.9
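The per-year figures quoted above are simple averages of the cumulative incidence over the follow-up period (not actuarial rates), as this quick check illustrates:

```python
def annualized_rate(cumulative_pct, years):
    """Average yearly incidence from a cumulative incidence, assuming an even spread."""
    return cumulative_pct / years

assert annualized_rate(30, 8) == 3.75  # group A: 30% over 8 years
assert annualized_rate(32, 8) == 4.0   # group B: 32% over 8 years
```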
Most cases of acute rejection were classified as Banff IA or IB, and very few cases were IIA. Two patients in group A developed acute vascular rejection, 1 in the first 2 weeks and the other 5 years after transplant. Only 1 patient in group B experienced acute vascular rejection. Thus, the numeric advantage of SIR + CyA in preventing acute rejection was lost on CyA withdrawal. This is not unexpected, as many studies have documented the importance of CNIs in the prevention of this complication, particularly during the early posttransplant course. In fact, an average biopsy-proven acute rejection rate as low as 2.7%/y was achieved in another study when a SIR + low-dose CyA combination was continued for 10 years.11 In the final analysis, we believe that the advantage of preserved glomerular filtration rate achieved in our protocol outweighs the significance of a 1%/y reduction in the chances of acute rejection.
This view is further supported by the observation that the incidence of IFTA (including CNI toxicity) was significantly lower in group A (13%) than group B (38%) at 8 years (P = .003). Patients-on-therapy analysis revealed the same trend, with IFTA rates of 11% and 24% in groups A and B. However, because of the small number of cases, this difference did not reach statistical significance (P = .271). Similar observations were made in many other studies with long-term mTORi use, including SIR or everolimus.14,15
The incidence of new onset diabetes mellitus was higher in group A, affecting 15% of ITT patients compared with 11% in group B (P = .5). The diabetogenic effect of SIR was even more pronounced in the POT analysis, with new onset diabetes occurring in 20% of group A patients versus 5% of group B patients; although numerically striking, this difference was not statistically significant (P = .148).
The 8-year analysis did not show statistically significant differences in the number of drugs needed to control blood pressure, the incidence of edema, wound healing, anemia, hyperlipidemia, infections, or malignancy. Interestingly, there was also no difference in the incidence of proteinuria, because SIR was withdrawn immediately whenever increasing protein excretion crossed the albumin/creatinine ceiling of 500 mg/g. Proteinuria regressed in most of these patients with the use of an angiotensin-converting enzyme inhibitor.
Conclusions
This 8-year analysis confirms the advantages of our SIR-based sequential protocol in living-donor renal transplant. While it proved equivalent to more commonly used CNI-based protocols with regard to patient and graft survival, the clear benefit was sustained superior graft function. Based on current knowledge, this would predict a lower incidence of cardiovascular complications and longer patient and graft survival.
However, the incidence of adverse events often has been a compelling reason for withdrawal in favor of a CNI protocol. It appears that only the 58% of patients who do not develop significant adverse events will benefit from the advantages of the SIR protocol.
The safe use of mTORis requires familiarity with their adverse events and drug-drug interactions. Patients must be carefully monitored for proteinuria, hyperlipidemia, hyperglycemia, bone marrow suppression, and other adverse events.
Although we did not observe any decline in patient survival at 2 or 8 years, the recently raised concerns about a possible increase in mortality risk need to be carefully resolved before wider use of our protocol can be recommended. Trials are underway in our institution to test different CNIs and mTORis and identify the optimal protocol with regard to drug combinations and timing.
References:
Volume : 13
Issue : 1
Pages : 23 - 29
DOI : 10.6002/ect.mesot2014.L36
From the 1Department of Internal Medicine, Cairo University; the 2Cairo Kidney Centre; the 3National Research Center; the 4National Institute of Urology and Nephrology; the 5Department of Clinical and Chemical Pathology, Cairo University; and the 6Urology Department, Cairo University
Acknowledgements: The authors declare that they have no sources of funding for this study, and they have no conflicts of interest to declare.
Corresponding author: Tarek Fayad, MD, Professor of Internal Medicine, Kasr-El-Aini School of Medicine - Cairo University, The Cairo Kidney Centre, 3 Hussein El Meimar Str., Cairo, Egypt
Phone: +20 122 215 5371
E-mail: tfayad@Kasralainy.edu.eg or tfayad@yahoo.com
Table 1. Main Baseline Characteristics of the Patients Enrolled in the Original Study9
Table 2. Treatment Protocol in the Original Study9
Table 3. Causes of Conversion from Protocol A to B
Table 4. Causes of Death in the Study Groups
Table 5. Secondary Follow-Up Parameters by ITT and POT analyses
Figure 1. Proctoscopic Image of SIR-Induced Rectal Ulcers
Figure 2. Cumulative Conversion Between Protocols
Figure 3. Kaplan-Meier Survival Curves
Figure 4. Kaplan-Meier Graft Survival Curves Censored for Death
Figure 5. eGFR (by MDRD) by ITT analysis