Volume: 9 Issue: 2 April 2011

LECTURE

The Mystique of Organ Transplantation
Ernst Knobil Distinguished Lecture, University of Texas Medical School at Houston, December 8, 2010

Dean Colasurdo, ladies and gentlemen. I am honored more than I can say by the privilege of presenting a lecture dedicated to the memory of Ernst Knobil. He is, of course, well-remembered in Pittsburgh (Figure 1).

In 1961, the then 35-year-old Ernst Knobil was recruited from Harvard to the University of Pittsburgh to Chair the Department of Physiology. Many of his classic papers on the neuroendocrine control of the reproductive cycle were published from Pittsburgh during the next 20 years (Figure 2).

After moving to Houston in 1981 to become your third Dean, he instituted changes that clarified the academic mission of the new school and set the stage for its current pre-eminent position—all the while maintaining a research laboratory (Figure 3).

Knobil’s research was not fixated on esoteric details in isolation. The term “integrative biology” that was applied to his body of work has the same meaning that “systems biology” has today. His emphasis was on the development of the whole scientific context rather than only the acquisition of details (Figure 4).

When assembled, the puzzle pieced together by Knobil’s studies resulted in a new chapter in reproductive physiology and obstetrics—a major paradigm shift, defined as: “...the ability to envision a reality that is entirely different from the accepted view.” That brings me to the subject of my talk today: The Mystique of Organ Transplant. The mystique was caused by a pervasive early error that precluded the orderly development of transplant immunology and limited progress almost exclusively to the development of more-potent immunosuppressive drugs (Figure 5).

To understand how an error of this magnitude could have occurred, it is necessary to go back to the birth of modern-day transplant. The midwife was an English Zoologist named Peter Medawar (Figure 6).

The seed from which all else derived was Medawar’s demonstration in 1943 that skin graft rejection is an immunologic event. In the 10 years that followed, efforts to weaken the immune response with irradiation or steroids had little or no effect on experimental graft survival (Figure 7).

During this time, a study by Medawar’s team of the natural tolerance in freemartin cattle revealed a chink in the immunologic armor. In freemartin cattle, fusion of the placentas allows the 2 animals’ circulations to mix during gestation. After birth and throughout life, the animals shared each other’s blood cells (blood chimerism). Moreover, the cattle were tolerant to each other’s tissues and organs, as shown at the bottom (Figure 8).

Inspired by the freemartin findings, Medawar and his colleagues demonstrated in 1953 that similar chimerism-associated tolerance could be deliberately induced. In their experimental model, splenic or bone marrow leukocytes were infused from adult mouse donors into newborn mouse recipients, whose immune system was not developed enough to reject the cells. With leukocyte engraftment, the neonatal recipients had lifetime tolerance to skin (or other tissues) from the original leukocyte donor, but not to tissues from any other donor. These chimeric mice were analogues of future patients with immune deficiency diseases who could be treated with bone marrow transplant (Figure 9).

Two years later at the National Institutes of Health (NIH), Main and Prehn extended these observations to adult mouse recipients whose whole immune system was weakened by high-dose total body irradiation before the cell infusion (the mouse in the middle). These mouse chimeras were analogues of future cytoablated human bone marrow recipients (Figure 10).

Stable leukocyte chimerism in both mouse models was achievable only when donors and recipients had a good histocompatibility match. Otherwise, the donor leukocytes were rejected or they turned the tables and rejected the immunologically defenseless recipient: graft versus host disease (GVHD) (Figure 11).

Because human histocompatibility antigens were yet to be discovered, clinical bone marrow transplant for the treatment of hematologic disorders and other indications was delayed until 1968. As in the mice, donor-specific tolerance was associated with leukocyte chimerism. Graft versus host disease was the most-common and specific complication; it could be avoided or minimized only with a perfect HLA match (Figure 12).

This was a beautiful story. The escalation of the mouse tolerance models to humans with parallel developments in histocompatibility research was heralded as a perfect example of bench-to-bedside research (Figure 13).

In contrast, kidney transplant with survival of at least 1 year was precociously accomplished in 7 humans between 1959 and 1962 without a preceding animal model. The first 6 patients were irradiated before transplant, but had limited therapy afterward because drug immunosuppression was not yet available. The exceptional seventh patient (bolded here) was not irradiated, but was treated daily with azathioprine throughout the 17 months of graft function (Figure 14).

The 7 successful cases were isolated exceptions in more than 300 failures. Nevertheless, they were hailed as a collective breakthrough. The accomplishments were inexplicable. Engraftment had been achieved without donor leukocyte infusion, without HLA matching, and with no hint of GVHD. If there was any connection with Medawar’s mouse models or with the future human bone marrow transplant triumphs, it was not apparent (Figure 15).

Now, the over-arching error that I described at the beginning was introduced. Based largely on a handful of successful human cases, consensus was reached by 1962 that organ engraftment (exemplified by the kidney) did not depend on the donor leukocyte chimerism-associated mechanisms of the mouse tolerance models. Thus, organ transplant was disconnected from the scientific base soon to be occupied by human bone marrow transplant (Figure 16).

Parenthetically, I was not involved in the consensus. Between 1957 and 1961, I was preoccupied with development of canine liver replacement and multivisceral transplant procedures as tools for study of metabolic interactions between visceral organs. But with the early kidney transplant successes, and especially the advent of azathioprine, the potential human use of the visceral transplant operations was obvious. A prerequisite would be an established record of successful kidney transplants (Figure 17).

The anticipated record dematerialized when the early results of kidney transplant with azathioprine were no better than with irradiation. Consequently, I obtained a supply of the drug and combined its use with prednisone in dog models. Based on the canine observations, we launched a clinical kidney transplant program in the autumn of 1962 with an unprecedented 1-year survival of 75%. The clinical results were reported in this article in 1963. The title described the 2 features of the alloimmune response that provided an empirical foundation for development of all kinds of organ transplant (Figure 18).

The features had been dramatically exposed by the use of this treatment algorithm. Azathioprine was started 1 to 4 weeks before kidney transplant from live donors. Large doses of prednisone were added posttransplant only to treat the breakthrough rejections that occurred in almost every patient. In about 85% of cases, the rejections were reversible with prednisone as indicated by the rise of serum creatinine and its subsequent fall. Partial tolerance was inferred from the rapidly declining need for immunosuppression after rejection reversal (shown at the top) (Figure 19).

Since none of the patients was completely off drugs, the most-compelling argument that these patients were tolerant required the passage of time. Nine of the 46 renal allografts (19%) transplanted from genetically related donors functioned continuously for the next 4 decades, each depicted here as a horizontal bar. In 7 of these patients, immunosuppression eventually was stopped, with subsequent drug-free intervals of 12 to 46 years (the red portion of the bars). Now, 46 to 48 years posttransplant, these patients bear the longest-functioning organ allografts in the world. However, no comparable cohort of drug-free kidney recipients has been produced again, anywhere in the world, in the 40 years since (Figure 20).

The probable reason was not recognized until the 1990s: namely, alteration of the immunosuppression strategy. At the end of 1963, the changes shown on your right were made. First, pretreatment with azathioprine was abandoned, in part because it would not be feasible using deceased-donor organs. The second change by early 1964 was administration of large doses of prednisone from the time of surgery instead of being added “as needed.” This modification was made to avoid the 15% to 20% loss of grafts whose rejection could not be reversed (Figure 21).

With the revised use of azathioprine and prednisone, a budding industry of clinical renal transplant was formed. However, further advances were driven almost exclusively by more-versatile or more-potent immunosuppressive drugs. Beginning in 1966, antilymphocyte globulin (ALG) extracted from the serum of horses immunized with human lymphoid tissue was added to the original combination of azathioprine and prednisone in Colorado (Figure 22).

With the triple-drug immunosuppression, my original objective of human liver transplant was finally accomplished in 1967, 10 years after the first steps were taken in dogs. This was followed by the first successful heart transplants in 1968, and in 1969 by the first 1-year survival after pancreas transplant—all with the 3-drug strategy (Figure 23).

As more-potent drugs became available, they were folded into the pre-emptive treatment formula introduced in 1964. Azathioprine was replaced as the baseline drug by cyclosporine, which was replaced in turn by tacrolimus. By the 1990s, a bewildering array of stacked drugs, begun at the time of transplant, had become the worldwide standard, with the stipulated purpose of reducing the incidence of acute rejection to zero (Figure 24).

The pre-emptive strategy allowed better mid-range patient and graft survival with all organs, epitomized here by the liver. With development of increasingly potent baseline drugs, the history of clinical organ transplant came to be written in terms of 3 eras defined by azathioprine-, cyclosporine-, and tacrolimus-based immunosuppression (Figure 25).

Although the golden age of transplant had arrived, there was a dark side. Chronic rejection and the devastating morbidity and mortality of long-term immunosuppression had now become nonresolvable problems. Moreover, drug-free kidney recipients, who had not been rare in the pioneer experience, were almost never seen again (Figure 26).

By 1992, the field of organ transplantation had reached the position of this mountaineer—unable to reach the top, but long since too far committed to go back down (Figure 27). However, during the 30-year climb, tantalizing clues had been encountered that could now be reassessed.

One clue was in the results of studies that had been done in our 1963 kidney transplant cases and had been reported in the Journal of Experimental Medicine. It was found that tuberculin, histoplasmin, and other positive skin tests in the donors were systematically transferred after transplant to their previously skin-test–negative kidney recipients. This evidence of chimerism-dependent adoptive transfer was not correctly interpreted until 30 years later (Figure 28).

Although tolerant kidney recipients had all but disappeared, a trickle of drug-free liver recipients continued to be seen. At my 80th birthday party in March 2006, the kidney longevity winner identified by an arrow (now 48 years posttransplant) was surrounded by drug-free liver recipients who survived from infancy to adult life and currently had follow-ups of 30 to 41 years. They had been off immunosuppression for 14 to 36 years. The woman in the back (Kim Hudson) is the liver frontrunner, at 41 posttransplant years (Figure 29).

The continued production of such liver recipients was not surprising. The unusual ability of the liver to self-induce tolerance with the aid of a short course of azathioprine was recognized in our earliest dog experiments. After 100 days of azathioprine treatment in 1963, this dog lived for the next 10 drug-free years (Figure 30).

Moreover, permanent liver engraftment without any treatment at all was reported in France and England in the mid-1960s in about 20% of outbred pig recipients. Such spontaneous liver tolerance also is reliably induced in about 10% of rat strain combinations, and in 80% of mouse strain combinations. Importantly, heart and kidney allografts can also induce such spontaneous engraftment, although in many fewer strain combinations. In short, all kinds of organs are potentially tolerogenic without treatment (Figure 31).

These human and experimental exceptions to the usual outcome of rejection were dismissed as something other than tolerance, given descriptive names, and ascribed to various mechanisms. The list of possibilities involved various tolerogenic cells, antibodies, molecules, and other factors. However, experimental evidence for these theories was almost always model-specific. So-called tolerogenic “suppressor” or “regulatory” cells were particularly vulnerable to critical assessment (Figure 32).

In contrast, my contention throughout was that clonal exhaustion-deletion—not of a cell, but of a cell population—was the seminal mechanism of organ alloengraftment. This view was depicted graphically in my 1969 textbook on liver transplant, and described in the highlighted caption. However, neither the existence nor the importance of clonal exhaustion-deletion was formally proved until the early 1990s. Consequently, the hypothesis was difficult to sustain (Figure 33).

Ideas advanced in the late 1960s by Clyde Barker also were ahead of their time. Working with Rupert Billingham at the University of Pennsylvania, Barker demonstrated in 1967 that skin grafts were not normally rejected if they were placed on an island of recipient skin that had been detached from lymphatic drainage and nourished by a vascular pedicle. This simple experiment exposed the fundamental principle that the immune system does not recognize the presence of donor antigen that fails to reach host lymphoid organs. The current term for this circumstance is "immune ignorance" (Figure 34).

Alloengraftment by immune ignorance was diametrically opposite to engraftment by clonal exhaustion-deletion. Between 1967 and 1975, Barker identified other privileged sites that had in common the absence or deficiency of lymphatic drainage. His rodent experiments established the foundation for transplant of pancreatic islets and bits of other endocrine tissues, for example, parathyroid and thyroid. However, Barker’s experiments had broader implications than these immediate objectives. His observations also were crucial in eliminating the mystique of organ transplant (Figure 35).

During the same time Barker was doing his skin island experiments, there was another finding in the clinics that was slow to be understood. In 1967 and 1968, karyotyping studies in human female recipients of livers from male donors showed that while the hepatocytes and other parenchymal components retained their donor sex, the Kupffer cells—and most of the graft’s other bone marrow-derived leukocytes (symbolically depicted here as a bone silhouette)—disappeared and were replaced with female recipient cells of the same lineages (Figure 36).

Twenty-five years passed before it was recognized that the resulting composite structure (part donor/part recipient) was a feature of all other successfully engrafted organs (here, kidney) (Figure 37).

The obvious question was whether the missing donor cells had migrated into the recipient and survived. Studies of serial blood samples in rat and human recipients showed that donor cells in the early days after organ transplantation accounted for between 1% and 20% of the recipients’ circulating mononuclear leukocytes. The upper panel shows that in this human intestine recipient, the circulating donor cells quickly rose to a peak, and then diminished steadily until they were undetectable with flow cytometry after 30 to 60 days. The blood findings coincided with the disappearance of the passenger leukocytes from the graft (lower panel) (Figure 38).

A pivotal clarifying step finally was taken in 1992 with the study of 30 liver or kidney recipients whose allografts had been functioning for up to 3 decades. Biopsies were obtained from the depicted sites and studied with sensitive immunocytochemical and molecular methods. In all 30 patients, small numbers of multilineage donor cells were detected in 1 or more of the sampled sites (Figure 39).

The reports in 1992 and 1993 of these microchimerism discoveries provoked a firestorm. Donor leukocyte chimerism had not, to my knowledge, been proposed to be a factor in organ engraftment a single time in the 30 years of scientific literature between 1962 and 1992. Moreover, if our interpretation of the findings was valid, the perceived conceptual base of transplant immunology had crumbled (Figure 40).

The engrafted organ had been viewed as an island in a hostile sea inhabited solely by leukocytes of the recipient (Panel A). The revised view, with microchimerism in various nonlymphoid and lymphoid recipient sites, is shown just below in Panel C. In the reverse image of bone marrow transplant, the perfect result had been considered to be complete replacement of all hematolymphopoietic cells (Panel B).

However, in 1991 Donna Przepiorka and Donnall Thomas in Seattle detected a trace population of recipient leukocytes in essentially all such “perfect” bone marrow recipients (Panel D). Now, it was evident that organ recipients (Panel C) and bone marrow cell recipients (Panel D) were mirror image chimeras, differing fundamentally only in the reverse proportions of donor and recipient cells (Figure 41).

The surviving cells of the minority populations in both kinds of recipients obviously were the progeny of lymphopoietic stem cells that had survived a violent double immune reaction during the first few days or weeks after transplant. Alloengraftment was explained in our report by “. . . responses of co-existing donor and recipient cells, each to the other, resulting in reciprocal clonal exhaustion, followed by peripheral clonal deletion.” These mechanisms coincided with the reversal of rejection, and the development of variable tolerance first observed in kidney and liver recipients 30 years earlier (Figure 42).

The host response (the upright blue curve) was the dominant one in most cases of organ transplant. But there also was a graft-versus-host reaction (the yellow-inverted curve) that in exceptional cases was expressed as clinical GVHD. The GVHD complication usually was in recipients of a leukocyte-rich organ (a liver or intestine), but it also has been seen, albeit rarely, in kidney recipients (Figure 43).

In bone marrow recipients, the naturally weak or deliberately enfeebled immune system inverted the scale, explaining all of the major differences between bone marrow and organ transplant (Figure 44).

After 30 years of estrangement, bone marrow and organ transplant were united. However, a fundamental question remained about both kinds of transplant: “Why was the leukocyte the indispensable tolerogenic cell?” And how could such a small minority cell population survive, much less be the key factor in long-term graft survival? The answers could be found in the studies of Barker and Billingham that I described earlier (Figure 45).

Remember that the fundamental principle demonstrated by the Barker-Billingham experiment was that antigen that does not reach the host lymphoid organs is not recognized to be present (immune ignorance). The only mobile antigen in organs consists of passenger leukocytes (here the brown cells leaving a liver graft). The en masse migration of these leukocytes to organized lymphoid collections was a prerequisite for the seminal tolerance mechanism of clonal activation, exhaustion, and deletion (Figure 46).

Our studies in rodents and humans showed that the cell migration occurs in 2 stages and by the same pathways as those of the infused cells of bone marrow transplant. In stage 1 (your left), the donor leukocytes go selectively to host lymphoid destinations, where immune activation occurs. The second stage (your right) begins after 1 to 3 weeks. Cells that have escaped initial immune destruction move on to the skin and other nonlymphoid destinations that are relatively inaccessible to humoral and cellular effector mechanisms. Thus, thousands of tiny islands of multilineage donor leukocytes are established body-wide in protected locations—analogous to the privileged sites studied by Barker and his associates (Figure 47).

Colonization of the donor leukocytes is graphically summarized in the circle on your left: from the organ via blood to the lymphoid compartment (the green halo). However, some of the donor cells can escape to nonlymphoid areas (the light-brown outer rim). We postulated that donor leukocytes percolated back from these protected sites to host lymphoid organs (the inward pointing red arrows in the right circle) and maintained the clonal exhaustion-deletion achieved at the outset. Despite much supporting evidence, our tolerance paradigm was viewed skeptically or repudiated outright by many critics (Figure 48).

A notable exception was Rolf Zinkernagel in Zurich, shown here on your left with Peter Doherty, his co-Nobel Laureate of 1996. In the 1970s, Zinkernagel and Doherty had elucidated the mechanisms of the MHC-restricted T-cell immunity induced by noncytopathic microorganisms—and by inference by allografts. However, the opposite outcome of tolerance had remained enigmatic. In 1993, and unaware of our 1992 publications, Zinkernagel independently proposed a paradigm of acquired tolerance to pathogens that was almost identical to our organ tolerance paradigm (Figure 49).

With the mutual recognition that the Pittsburgh and Zurich investigations were on parallel pathways, a crossover review was published in a December 1998 issue of the New England Journal of Medicine. Equivalent roles were attributed to allogeneic leukocytes and noncytopathic microorganisms. Consequently, much of the article consisted of descriptions of a range of transplant outcomes and their infection analogues (Figure 50).

However, the main purpose of the article was to propose the 2 generalizable rules of immunology that are printed here in white type. The most-fundamental rule is that the immune response is regulated by the migration and localization of antigen. The secondary principle is that the outcome of an immune response is determined by balances that I will explain by example. Bear in mind there had been no answers to the immune regulation issue of Rule 1 in the more than 100 years since the humoral and cellular effector mechanisms of the immune system were discovered by von Behring, Metchnikoff, Bordet, and Ehrlich (Figure 51).

In this example of migration and localization, acute viral hepatitis and its analogue, acute liver allograft rejection, are compared side by side. With hepatitis (shown on your left), the tropism of the hepatitis virus makes the liver the primary immune target. However, that is not where the immune response is generated. Instead, small amounts of virus travel to host lymphoid organs, where a virus-specific clonal T-cell response is induced that attacks the infected cells in the liver and elsewhere. In the transplant analogue (your right), most of the passenger leukocytes of the allograft migrate to host lymphoid organs and induce a specific response against all donor cells, most of which again are in the outlying graft. What about the balance outcome (Figure 52)?

All infection and transplant outcomes—no matter what the pathogen or what kind of allograft—can be reduced to simple diagrams. The diagrams display the balance reached between antigen with access to host lymphoid organs (solid line) and the number of cytolytic T cells induced by the antigen at these lymphoid sites (dotted line). In the left panel, the antigen-specific clonal response catches up with the replicating antigen and gains ascendance. With complete elimination of a virus, or of the analogous donor leukocytes, the antigen-specific immune response is terminated without memory. Neither the infected patient nor the recipient of the failed allograft has been immunized. However, with the usual outcome shown in the right panel, viruses, or the analogous donor leukocytes that have survived in protected sites, can leak back into the host lymphoid organs and perpetuate cellular plus antibody memory (Figure 53).

A reverse balance, in which the quantity of mobile donor leukocytes is consistently greater than the number of anti-donor T cells, is the necessary precondition for chimerism and alloengraftment; it also defines carrier disease states in the infection analogue. The top solid line (the lateral arrow) defines the nearly complete chimerism of the bone marrow recipient—or the viral load of a heavily infected hepatitis carrier. It goes without saying that stable dominance of antigen without a need for maintenance immunosuppression is more likely with higher percentages of donor cells (the macrochimerism depicted by the other lateral arrows), but with an increased risk of GVHD. However, antigen dominance also is possible with microchimerism (as shown by the wavy blue line at the bottom). The spontaneous organ tolerance models and the anecdotal drug-free human organ recipients I have been describing throughout this lecture are examples. In most patients, maintenance of such a slender leukocyte advantage requires immunosuppression (to push down the cytolytic T cells, represented by the large downward arrow). Finding just the right amount of immunosuppression is the art of organ transplant medicine practiced worldwide (Figure 54).
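To make the balance rule concrete, the following is a minimal numerical sketch; it is not taken from the lecture, and the difference equations and every parameter value are arbitrary assumptions chosen only to reproduce the qualitative shapes of Figures 53 and 54, in which the outcome is set by whether the antigen or the antigen-specific cytolytic T-cell clone stays ahead.

# Illustrative toy model only (not from the lecture). The hypothetical
# difference equations track an antigen load A (virus or mobile donor
# leukocytes) against antigen-specific cytolytic T cells E; all units and
# parameter values are arbitrary assumptions.
def simulate(antigen0, growth, kill_rate, response_rate, decay, steps=100):
    A, E = antigen0, 0.0
    history = []
    for _ in range(steps):
        A = A + growth * A - kill_rate * E * A           # antigen replicates and is destroyed by E
        A = 0.0 if A < 1e-3 else A                       # treat a negligible load as eliminated
        E = max(0.0, E + response_rate * A - decay * E)  # E is induced by antigen reaching lymphoid organs, then decays
        history.append((A, E))
    return history

# Left panel of Figure 53: the clonal response overtakes the antigen and
# eliminates it; with nothing left to restimulate it, the response then fades.
cleared = simulate(antigen0=1.0, growth=0.05, kill_rate=0.1, response_rate=1.0, decay=0.02)

# Figure 54 direction: a large, sustained donor-leukocyte load facing a weak
# (for example, immunosuppressed) clonal response is never eliminated and
# persists, the chimerism or carrier-state analogue.
dominant = simulate(antigen0=50.0, growth=0.02, kill_rate=0.001, response_rate=0.02, decay=0.05)

print("cleared:  final (antigen, T cells) =", cleared[-1])
print("dominant: final (antigen, T cells) =", dominant[-1])

In the first run the antigen falls to zero and the T-cell curve then decays, the clearance-without-memory picture of the left panel; in the second the antigen level remains high throughout, the antigen-dominant balance that corresponds to stable chimerism or a carrier state.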

It is axiomatic that long-term organ engraftment means that the recipient has developed some degree of chimerism-dependent donor-specific tolerance—with or without the aid of immunosuppression. The completeness of the tolerance can be inferred from the amount of immunosuppression required to maintain stable function and structure of the graft (Figure 55).

However, immunosuppression, without which human transplant would not be possible, is a 2-edged sword. In this second review with Zinkernagel in 2001, our foremost conclusion was that the widespread practice of heavy, multiple-drug immunosuppression during the early days and weeks after organ transplant is antitolerogenic (Figure 56).

Ever since the switch to such heavy prophylactic treatment in 1964, the extent to which acute rejection could be avoided had been considered the most-important criterion in judging the quality of immunosuppression regimens. The counter-argument in our 2001 article was that this policy systematically undermines the clonal activation, exhaustion, and deletion induced by the massive migration of the graft’s passenger leukocytes during the first 30 or 60 postoperative days. To the extent that this one-time only window of opportunity is closed by overtreatment, patients are committed at the outset to permanent dependence on unnecessarily heavy immunosuppression (Figure 57).

We proposed in our 2001 article that immunosuppression could be made more tolerance-friendly by applying 2 therapeutic principles, singly or together. The first principle was recipient pretreatment to weaken the immune response in advance of transplant, making clonal deletion of the impending donor-specific response easier. The second was the use of as little posttransplant immunosuppression as possible (Figure 58).

It will not have escaped your notice that these recommendations turned the clock back 40 years to the first-ever genuine series of successful kidney transplants of 1962-63 that I described at the beginning of this talk. The only difference was that the availability of more-potent drugs made this strategy safe. Immune enfeeblement in advance of transplant could now be accomplished with a single large dose of a lymphoid-depleting agent such as the super-ALG Campath, followed by tacrolimus monotherapy from which the patient could subsequently be weaned. This strategy allowed efficient clonal exhaustion-deletion, both acutely and later, with minimal risk of graft loss (Figure 59).

Both components of the strategy were implemented on the Pittsburgh organ transplant service in July 2001. The results and quality of life of all kinds of organ recipients were improved. The greatest effect was on the procedures with the most-troubled histories. Intestinal and lung transplant had been targets of criticism for years because of their high mortality. With a 25% gain in survival and greatly improved quality of life with tolerogenic immunosuppression (the yellow survival curves), both of these operations became reliable clinical services (Figure 60).

The extent to which long-term immunosuppression could be minimized by the improved management is epitomized by this intestinal recipient who was 68 years old at the time of her transplant on August 5, 2001. Now, in her 10th posttransplant year, she has been on 2 tacrolimus doses per week, with no added immunosuppression from the second year onward. Her only medical problem, shared with her loving husband, is obesity (Figure 61).

Elucidation of engraftment mechanisms and recognition of the stultifying effect of immune suppression on these mechanisms had other therapeutic implications. The obvious next step was to tip the antigen/T-cell balance that defines outcome toward antigen dominance (Figure 62).

This could be readily done in organ recipients with a properly timed adjunct infusion of donor leukocytes. The objective was to begin the donor-specific tolerization well before arrival of the transplanted organ. The second cell dose, delivered by the organ’s own passenger leukocytes, is depicted by the secondary antigen hump (Figure 63).

This strategy was used with encouraging results in Pittsburgh in 2006 in a small number of kidney and liver recipients whose organs were obtained from live donors. Three weeks before the organ transplant, cells were obtained from the donors by leukapheresis and infused into the recipients who had been lymphoid depleted the previous day with campath. The second bolus of leukocytes came 3 weeks later with the surge of passenger leukocytes from the transplanted organ. Immune suppression throughout was with tacrolimus, from which weaning was considered after about 4 months (Figure 64).

To achieve the effect of 2 leukocyte dosages in recipients of deceased-donor grafts, it would be necessary to reverse the order of events. Lymphoid depletion followed by transplant of the deceased-donor graft would have to be the first step, with the infusion 2 or 3 weeks later of stored leukocytes obtained from the deceased donor at the time of the original tissue and organ retrieval. Our original plan was to test this protocol in organ recipients, and then apply it for hand and face transplant. This eventually was done the other way round (Figure 65).

As it turned out, the leadership for the deceased-donor transplant initiative was taken by Andy Lee, the exceptionally able and intelligent Chief of our plastic surgical division, who has a long-standing interest in hand transplant. In this photo, Dr. Lee is on the right. His 2 equally able and intelligent associates—Gerry Brandacher and Stefan Schneeberger—are in the middle and at the left (Figure 66).

The pure technical challenge of hand transplant makes organ transplant, even of the liver, look like child’s play. I don’t have time to go through a list of the structures that must be reconnected (Figure 67).

Suffice it to say the challenge was met by Dr. Lee and his associates in the 5 patients listed here in whom bilateral or unilateral hand transplants were successfully done with the immunosuppression protocol that included cell infusion. All of these patients are on low-dose single-drug immunosuppression (Figure 68).

Because pictures are more persuasive than words, here are 3 of the recipients bearing new limbs. The man with the double forearm is nearing the 2-year milestone. Note that his right-sided graft includes the elbow joint—the only successful one in the world to date (Figure 69).

The first recipient, a marine who lost his hand and wrist in an explosion, is shown here being greeted by a handshake with one of his former commanding generals. On that high note, and with the knowledge that the proverbial red light has gone on, I will close with a short synopsis of the history of clinical transplant and then with a conclusion (Figure 70).

The history of transplant can be summarized by a short list of empirical steps, almost all taken by clinicians. Bone marrow transplant was conceptually anchored by mouse models that revealed the essential but unexplained association of donor leukocyte chimerism and tolerance. However, the evolution of organ transplant resembled the piecemeal construction of the floors of a house without an architectural blueprint. Both bone marrow and organ engraftment were made incrementally more practical, as the years passed, by the immunosuppressive agents shown on your left. For the most part, the drugs were used like sledgehammers in organ recipients without dependence on tissue matching, and like scalpels in scrupulously HLA-matched bone marrow recipients. Although there was early evidence that organs are inherently tolerogenic, this was not recognized to be related to the donor leukocyte chimerism of bone marrow recipients for the next third of a century. During all this time, the self-defeating antitolerogenic effects of immunosuppression were not appreciated. Nevertheless, and in spite of their fragile and almost entirely empirical foundation, bone marrow and organ transplant became 2 of medicine’s greatest triumphs. They also generated literature so vast and complicated that it resembles an assembly of all of the phone books in the world, each in its own language (Figure 71).

So much for this history. I believe that the mystical and incomprehensible transplant literature can reach consilience—the unity of knowledge described by the great Harvard biologist E.O. Wilson in his 1998 book. With unity of knowledge (consilience), a few simple natural laws accommodate observations, principles, and facts in all models, in all disciplines, and in all domains of knowledge (Figure 72).

Our observations in transplant and Zinkernagel’s studies of infection analogues have exposed 2 such natural laws of immunology. First, the immune responsiveness or nonresponsiveness to an antigen is governed by migration and localization of the antigen. The second law is that the outcome of immune activation is determined by the balance reached between the antigen and the antigen-reactive host cells (Figure 73).

With these 2 laws, consilience can be reached for all of the diverse observations in all of the tolerance and alloengraftment circumstances I have discussed today, and for that matter in all circumstances of transplant failures. The linkage throughout the tolerance and alloengraftment spectrum is, of course, donor leukocyte chimerism. Start with the freemartin cattle, the mouse models of Billingham, Brent, and Medawar, and parabiosis. Then, continue with human bone marrow, organ, hand, and face transplant. They are all the same (Figure 74). Thank you for your attention.


Dean Colasurdo, ladies and gentlemen. I am honored more than I can say by the privilege of presenting a lecture dedicated to the memory of Ernst Knobil. He is, of course, well-remembered in Pittsburgh (Figure 1).

In 1961, the then 35-year-old Ernst Knobil was recruited from Harvard to the University of Pittsburgh to Chair the Department of Physiology. Many of his classic papers on the neuroendocrine control of the reproductive cycle were published from Pittsburgh during the next 20 years (Figure 2).

After moving to Houston in 1981 to become your third Dean, he instituted changes that clarified the academic mission of the new school and set the stage for its current pre-eminent position—all the while maintaining a research laboratory (Figure 3).

Knobil’s research was not fixated on esoteric details in isolation. The term “integrative biology” that was applied to his body of work has the same meaning that “systems biology” has today. His emphasis was on the development of the whole scientific context rather than only the acquisition of details (Figure 4).

When assembled, the puzzle pieced together by Knobil’s studies resulted in a new chapter in reproductive physiology and obstetrics—a major paradigm shift, defined as: “...the ability to envision a reality that is entirely different from the accepted view.” That brings me to the subject of my talk today: The Mystique of Organ Transplant. The mystique was caused by a pervasive early error that precluded the orderly development of transplant immunology and limited progress almost exclusively to the development of more-potent immunosuppressive drugs (Figure 5).

To understand how an error of this magnitude could have occurred, it is necessary to go back to the birth of modern-day transplant. The midwife was an English Zoologist named Peter Medawar (Figure 6).

The seed from which all else derived, was Medawar’s demonstration in 1943, that skin graft rejection is an immunologic event. In the 10 years that followed, efforts to weaken the immune response with irradiation or steroids had little or no effect on experimental graft survival (Figure 7).

During this time, a study by Medawar’s team of the natural tolerance in freemartin cattle revealed a chink in the immunologic armor. In freemartin cattle, fusion of their placentas allowed the mixture of 2 animal circulations during gestation. After birth and throughout life, the animals shared each others blood cells (blood chimerism). Moreover, the cattle were tolerant to each others tissues and organs as shown at the bottom (Figure 8).

Inspired by the freemartin findings, Medawar and his colleagues demonstrated in 1953 that similar chimerism-associated tolerance could be deliberately made. In their experimental model, splenic or bone marrow leukocytes were infused from adult mouse donors to newborn mouse recipients, whose immune system was not developed enough to reject the cells. With leukocyte engraftment, neonatal recipients had a lifetime tolerance to skin (or other tissues) from the original leukocyte donor, but not to tissues from any other donor. These chimeric mice were analogues of future patients with immune deficiency diseases who could be treated with bone marrow transplant (Figure 9).

Two years later at the National Institutes of Health (NIH), Main and Prehn extended these observations to adult mouse recipients whose whole immune system was weakened by high-dose total body irradiation before the cell infusion (the mouse in the middle). These mouse chimeras were analogues of future cytoablated human bone marrow recipients (Figure 10).

Stable leukocyte chimerism in both mouse models was achievable only when donors and recipients had a good histocompatibility match. Otherwise, the donor leukocytes were rejected or they turned the tables and rejected the immunologically defenseless recipient: graft versus host disease (GVHD) (Figure 11).

Because human histocompatibility antigens were yet to be discovered, clinical bone marrow transplant for the treatment of hematologic disorders and other indications was delayed until 1968. As in the mice, donor-specific tolerance was associated with leukocyte chimerism. Graft versus host disease was the most-common and specific complication that could be avoided or minimized with only a perfect HLA match (Figure 12).

This was a beautiful story. The escalation of the mouse tolerance models to humans with parallel developments in histocompatibility research was heralded as a perfect example of bench-to-bedside research (Figure 13).

In contrast, kidney transplant with survival of at least 1 year was precociously accomplished in 7 humans between 1959 and 1962 without a preceding animal model. The first 6 patients were irradiated before transplant, but had limited therapy afterward because drug immunosuppression was not yet available. The exceptional seventh patient (bolded here) was not irradiated, but was treated daily with azathioprine throughout the 17 months of graft function (Figure 14).

The 7 successful cases were isolated exceptions in more than 300 failures. Nevertheless, they were hailed as a collective breakthrough. The accomplishments were inexplicable. Engraftment had been achieved without donor leukocyte infusion, without HLA matching, and with no hint of GVHD. If there was any connection with Medawar’s mouse models or with the future human bone marrow transplant triumphs, it was not apparent (Figure 15).

Now, the over-arching error that I described at the beginning was introduced. Based largely on a handful of successful human cases, consensus was reached by 1962 that organ engraftment (exemplified by the kidney) did not depend on the donor leukocyte chimerism-associated mechanisms of the mouse tolerance models. Thus, organ transplant was disconnected from the scientific base soon to be occupied by human bone marrow transplant (Figure 16).

Parenthetically, I was not involved in the consensus. Between 1957 and 1961, I was preoccupied with development of canine liver replacement and multivisceral transplant procedures as tools for study of metabolic interactions between visceral organs. But with the early kidney transplant successes, and especially the advent of azathioprine, the potential human use of the visceral transplant operations was obvious. A prerequisite would be a record of kidney transplants (Figure 17).

The anticipated record dematerialized when the early results of kidney transplant with azathioprine were no better than with irradiation. Consequently, I obtained a supply of the drug and combined its use with prednisone in dog models. Based on the canine observations, we launched a clinical kidney transplant program in the autumn of 1962 with an unprecedented 1-year survival of 75%. The clinical results were reported in this article in 1963. The title described the 2 features of the alloimmune response that provided an empirical foundation for development of all kinds of organ transplant (Figure 18).

The features had been dramatically exposed by the use of this treatment algorithm. Azathioprine was started 1 to 4 weeks before kidney transplant from live donors. Large doses of prednisone were added posttransplant only to treat the breakthrough rejections that occurred in almost every patient. In about 85% of cases, the rejections were reversible with prednisone as indicated by the rise of serum creatinine and its subsequent fall. Partial tolerance was inferred from the rapidly declining need for immuno­suppression after rejection reversal (shown at the top) (Figure 19).

Since none of the patients was completely off drugs, the most-compelling argument that these patients were tolerant required the passage of time. Nine of the 46 renal allografts (19%) transplanted from genetically related donors functioned continuously for the next 4 decades, each depicted here as a horizontal bar. In 7 of these patients, immunosuppression eventually was stopped with subsequent drug-free intervals of 12 to 46 years (the red portion of the bars). Now, after 46 to 48 posttransplant years, these patients currently bear the longest functioning organ allografts in the world. However, no comparable cohort of drug-free kidney recipients has ever been produced again anywhere in the world in the following 40 years (Figure 20).

The probable reason was not recognized until the 1990s: namely, alteration of the immunosuppression strategy. At the end of 1963, the changes shown on your right were made. First, pretreatment with azathioprine was abandoned, in part because it would not be feasible using deceased-donor organs. The second change by early 1964 was administration of large doses of prednisone from the time of surgery instead of being added “as needed.” This modification was made to avoid the 15% to 20% loss of grafts whose rejection could not be reversed (Figure 21).

With the revised use of azathioprine and prednisone, a budding industry of clinical renal transplant was formed. However, further advances were driven almost exclusively by more-versatile or more-potent immunosuppressive drugs. Beginning in 1966, antilymphocyte globulin (ALG) extracted from the serum of horses immunized with human lymphoid tissue was added to the original combination of azathioprine and prednisone in Colorado (Figure 22).

Using the triple-drug immunosuppression, my original objective of human liver transplant was finally accomplished in 1967, ten years after the first steps were taken in dogs. This was followed by the first successful heart transplants in 1968, and in 1969 by the first 1-year survival after pancreas transplant—all with the 3-drug strategy (Figure 23).

As more-potent drugs became available, they were folded into the pre-emptive treatment formula introduced in 1964. Azathioprine was replaced as the baseline drug by cyclosporine, which was replaced in turn by tacrolimus. By the 1990s, a bewildering array of stacked drugs, begun at the time of transplant, had become the worldwide standard, with the stipulated purpose of reducing the incidence of acute rejection to zero (Figure 24).

The pre-emptive strategy allowed better mid-range patient and graft survival with all organs, epitomized here by the liver. With development of increasingly potent baseline drugs, the history of clinical organ transplant came to be written in terms of 3 eras defined by azathioprine-, cyclosporine-, and tacrolimus-based immunosuppression (Figure 25).

Although the golden age of transplant had arrived, there was a dark side. Chronic rejection and the devastating morbidity and mortality of long-term immunosuppression had now become nonresolvable problems. Moreover, the anticipated increase in drug-free kidney recipients that had not been rare in the pioneer experience was almost never seen again (Figure 26).

By 1992, the field of organ transplantation had reached the position of this mountaineer—unable to reach the top, but long since too far committed to go back down (Figure 27). However, during the 30-year climb, tantalizing clues had been encountered that could now be reassessed.

One clue was in the results of studies that had been done in our 1963 kidney transplant cases and had been reported in the Journal of Experimental Medicine. It was found that tuberculin, histoplasmin, and other positive skin tests in the donors were systematically transferred after transplant to their previously skin-test–negative kidney recipients. This evidence of chimerism-dependent adoptive transfer was not correctly interpreted until 30 years later (Figure 28).

Although tolerant kidney recipients had all but disappeared, a trickle of drug-free liver recipients continued to be seen. At my 80th birthday party in March 2006, the kidney longevity winner identified by an arrow (now 48 years posttransplant) was surrounded by drug-free liver recipients who survived from infancy to adult life and currently had follow-ups of 30 to 41 years. They had been off immunosuppression for 14 to 36 years. The woman in the back (Kim Hudson) is the liver frontrunner, at 41 posttransplant years (Figure 29).

The continued production of such liver recipients was not surprising. The unusual ability of the liver to self-induce tolerance with the aid of a short course of azathioprine was recognized in our earliest dog experiments. After 100 days of azathioprine treatment in 1963, this dog lived for the next 10 drug-free years (Figure 30).

Moreover, permanent liver engraftment without any treatment at all was reported in France and England in the mid-1960s in about 20% of outbred pig recipients. Moreover, such spontaneous liver tolerance is reliably induced in about 10% of rat strain combinations, and in 80% of mouse strain combinations. Importantly, heart and kidney allografts can also induce such spontaneous engraftment, although in many fewer strain combinations. In short, all kinds of organs are potentially tolerogenic without treatment (Figure 31).

These human and experimental exceptions to the usual outcome of rejection were dismissed as something other than tolerance, given descriptive names, and ascribed to various mechanisms. The list of possibilities involved various tolerogenic cells, antibodies, molecules, and other factors. However, experimental evidence for these theories was almost always model-specific. So-called “tolerogenic suppressor”, or “regulatory cells” were particularly vulnerable to critical assessment (Figure 32).

In contrast, my contention throughout was that clonal exhaustion-deletion—not of a cell, but of a cell population—was the seminal mechanism of organ alloengraftment. This view was depicted graphically in my 1969 textbook on liver transplant, and described in the highlighted caption. However, neither the existence nor the importance of clonal exhaustion-deletion was formally proved until the early 1990s. Consequently, the hypothesis was difficult to sustain (Figure 33).

Ideas advanced in the late 1960s by Clyde Barker also were ahead of their time. Working with Rupert Billingham at the University of Pennsylvania, Barker demonstrated in 1967 that skin grafts were not normally rejected if they were placed on an island of recipient skin that had been detached from lymphatic drainage and nourished by a vascular pedicle. This simple experiment exposed the fundamental principle that the immune system does not recognize the presence of donor antigen that fails to reach host lymphoid organs. The current term for this circumstance is "immune ignorance" (Figure 34).

Alloengraftment by immune ignorance was diametrically opposite to engraftment by clonal exhaustion-deletion. Between 1967 and 1975, Barker identified other privileged sites that had in common the absence or deficiency of lymphatic drainage. His rodent experiments established the foundation for transplant of pancreatic islets and bits of other endocrine tissues, for example, parathyroid, and thyroid. However, Barker’s experiments had broader implications than these immediate objectives. His observations also were crucial in eliminating the mystique of organ transplant (Figure 35).

During the same time Barker was doing his skin island experiments, there was another finding in the clinics that was slow to be understood. In 1967 and 1968, karyotyping studies in human female recipients of livers from male donors showed that while the hepatocytes and other parenchymal components retained their donor sex, the Kupffer cells—and most of the graft’s other bone marrow-derived leukocytes (symbolically depicted here as a bone silhouette)—disappeared and were replaced with female recipient cells of the same lineages (Figure 36).

Twenty-five years passed before it was recognized that the resulting composite structure (part donor/part recipient) was a feature of all other successfully engrafted organs (here, kidney) (Figure 37).

The obvious question was whether the missing donor cells had migrated into the recipient and survived. Studies of serial blood samples in rat and human recipients showed that donor cells in the early days after organ transplanting accounted for between 1% and 20% of the recipients’ circulating mononuclear leukocytes. The upper panel shows that in this human intestine recipient, the circulating donor cells quickly rose to a peak, and then diminished steadily until they were undetectable with flow cytometry after 30 to 60 days. The blood findings coincided with the disappearance of the passenger leukocytes from the graft (lower panel) (Figure 38).

A pivotal clarifying step finally was taken in 1992 with the study of 30 liver or kidney recipients whose allografts had been functioning for up to 3 decades. Biopsies were obtained from the depicted sites and studied with sensitive immunocytochemical and molecular methods. In all 30 patients, small numbers of multilineage donor cells were detected in 1 or more of the sampled sites (Figure 39).

The reports in 1992 and 1993 of these microchimerism discoveries provoked a firestorm. Donor leukocyte chimerism had not, to my knowledge, been proposed to be a factor in organ engraftment a single time in the 30 years of scientific literature between 1962 and 1992. Moreover, if our interpretation of the findings was valid, the perceived conceptual base of transplant immunology had crumbled (Figure 40).

An engrafted organ viewed as an island in a hostile sea inhabited solely by leukocytes of the recipient (Panel A). The revised view with microchimerism in various nonlymphoid and lymphoid recipient sites is shown just below in Panel C. In the reverse image of bone marrow transplant, the perfect result has been complete replacement of all hematolymphopoietic cells (Panel B).

However, in 1991 Donna Przepiorka and Donnall Thomas in Seattle detected a trace population of recipient leukocytes in essentially all such “perfect” bone marrow recipients (Panel D). Now, it was evident that organ recipients (Panel C) and bone marrow cell recipients (Panel D) were mirror image chimeras, differing fundamentally only in the reverse proportions of donor and recipient cells (Figure 41).

The surviving cells of the minority populations in both kinds of recipients obviously were the progeny of lymphopoietic stem cells, which had survived a violent double immune reaction the first few days or weeks after transplant. Alloengraftment was explained in our report by “. . . responses of co-existing donor and recipient cells, each to the other, resulting in reciprocal clonal exhaustion, followed by peripheral clonal deletion.” These mechanisms coincided with the reversal of rejection, and the development of variable tolerance first observed in kidney and liver recipients 30 years earlier (Figure 42).

The host response (the upright blue curve) was the dominant one in most cases of organ transplant. But there also was a graft-versus-host reaction (the yellow-inverted curve) that in exceptional cases was expressed as clinical GVHD. The GVHD complication usually was in recipients of a leukocyte-rich organ (a liver or intestine), but it also has been seen, albeit rarely, in kidney recipients (Figure 43).

In bone marrow recipients, the naturally weak or deliberately enfeebled immune system inverted the scale, explaining all of the major differences between bone marrow and organ transplant (Figure 44).

After 30 years of estrangement, bone marrow and organ transplant were united. However, a fundamental question remained about both kinds of transplant: “Why was the leukocyte the indispensable tolerogenic cell?” And how could such a small minority cell population survive, much less be the key factor in long-term graft survival? The answers could be found in the studies of Barker and Billingham that I described earlier (Figure 45).

Remember that the fundamental principle demonstrated by the Barker-Billingham experiment was that antigen that does not reach the host lymphoid organs is not recognized to be present (immune ignorance). The only mobile antigen in organs consists of passenger leukocytes (here the brown cells leaving a liver graft). The en masse migration of these leukocytes to organized lymphoid collections was a prerequisite for the seminal tolerance mechanism of clonal activation, exhaustion, and deletion (Figure 46).

Our studies in rodents and humans showed that the cell migration occurs in 2 stages and by the same pathways as those of the infused cells of bone marrow transplant. In stage 1 (your left), the donor leukocytes go selectively to host lymphoid destinations, where immune activation occurs. The second stage (your right) begins after 1 to 3 weeks. Cells that have escaped initial immune destruction move on to the skin and other nonlymphoid destinations that are relatively inaccessible to humoral and cellular effector mechanisms. Thus, thousands of tiny islands of multilineage donor leukocytes are established body-wide in protected locations—analogous to the privileged sites studied by Barker and his associates (Figure 47).

Colonization of the donor leukocytes is graphically summarized in the circle on your left: from the organ via blood to the lymphoid compartment (the green halo). However, some of the donor cells can escape to nonlymphoid areas (the light-brown outer rim). We postulated that donor leukocytes percolated back from these protected sites to host lymphoid organs (the inward-pointing red arrows in the right circle) and maintained the clonal exhaustion-deletion achieved at the outset. Despite much supporting evidence, our tolerance paradigm was viewed skeptically or repudiated outright by many critics (Figure 48).

A notable exception was Rolf Zinkernagel in Zurich, shown here on your left with Peter Doherty, his co-Nobel Laureate of 1996. In the 1970s, Zinkernagel and Doherty had elucidated the mechanisms of the MHC-restricted T-cell immunity induced by noncytopathic microorganisms—and, by inference, by allografts. However, the opposite outcome of tolerance had remained enigmatic. In 1993, and unaware of our 1992 publications, Zinkernagel independently proposed a paradigm of acquired tolerance to pathogens that was almost identical to our organ tolerance paradigm (Figure 49).

With the mutual recognition that the Pittsburgh and Zurich investigations were on parallel pathways, a crossover review was published in a December 1998 issue of the New England Journal of Medicine. Equivalent roles were attributed to allogeneic leukocytes and noncytopathic microorganisms. Consequently, much of the article consisted of descriptions of a range of transplant outcomes and their infection analogues (Figure 50).

However, the main purpose of the article was to propose the 2 generalizable rules of immunology that are printed here in white type. The most-fundamental rule is that the immune response is regulated by the migration and localization of antigen. The secondary principle is that the outcome of an immune response is determined by balances that I will explain by example. Bear in mind there had been no answers to the immune regulation issue of Rule 1 in the more than 100 years since the humoral and cellular effector mechanisms of the immune system were discovered by von Behring, Metchnikoff, Bordet, and Ehrlich (Figure 51).

In this example of migration and localization, acute viral hepatitis and its analogue, acute liver allograft rejection, are compared side by side. With hepatitis (shown on your left), the tropism of the hepatitis virus makes the liver the primary immune target. However, that is not where the immune response is generated. Instead, small numbers of virus particles travel to host lymphoid organs, where a virus-specific clonal T-cell response is induced that attacks the infected cells in the liver and elsewhere. In the transplant analogue (your right), most of the passenger leukocytes of the allograft migrate to host lymphoid organs and induce a specific response against all donor cells, most of which again are in the outlying graft. What about the balance outcome (Figure 52)?

All infection and transplant outcomes—no matter what the pathogen or what kind of allograft—can be reduced to simple diagrams. The diagrams display the balance reached between antigen with access to host lymphoid organs (solid line) and the number of cytolytic T cells induced by the antigen at these lymphoid sites (dotted line). In the left panel, the antigen-specific clonal response catches up with the replicating antigen and gains ascendance. With complete elimination of a virus, or of the analogous donor leukocytes, the antigen-specific immune response is terminated without memory. Neither the infected patient nor the recipient of the failed allograft has been immunized. However, with the usual outcome shown in the right panel, viruses, or the analogous donor leukocytes that have survived in protected sites, can leak back into the host lymphoid organs and perpetuate cellular plus antibody memory (Figure 53).

A reverse balance, in which the quantity of mobile donor leukocytes consistently exceeds the number of anti-donor T cells, is the necessary precondition for chimerism and alloengraftment; it also defines carrier disease states in the infection analogue. The top solid line (the lateral arrow) depicts the nearly complete chimerism of the bone marrow recipient—or the viral load of a heavily infected hepatitis carrier. It goes without saying that stable dominance of antigen without a need for maintenance immunosuppression is more likely with higher percentages of donor cells (the macrochimerism depicted by the other lateral arrows), but with an increased risk of GVHD. However, antigen dominance also is possible with microchimerism (as shown by the wavy blue line at the bottom). The spontaneous organ tolerance models and the anecdotal drug-free human organ recipients I have been describing throughout this lecture are examples. In most patients, maintenance of such a slender leukocyte advantage requires immunosuppression (to push down the cytolytic T cells represented by the large downward arrow). Finding just the right amount of immunosuppression is the art of organ transplant medicine practiced worldwide (Figure 54).
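
If it helps to see the balance in numbers rather than pictures, the toy calculation below caricatures the dynamic just described. It is only a sketch: the equations, parameter values, and names (simulate, immunosuppression, exhaustion) are assumptions introduced for illustration, not a published model. It shows that when induction of the clonal response outpaces antigen-driven exhaustion, the antigen is cleared and the response then fades; when induction is damped, the antigen, and with it chimerism, stays dominant.

# Minimal toy model of the antigen/T-cell balance; illustrative assumptions only,
# not a model published in the lecture or in the reviews discussed above.

def simulate(days=300, dt=0.05, immunosuppression=0.0,
             antigen0=0.01, growth=0.1, capacity=1.0, kill=1.0,
             induction=0.5, exhaustion=1.0, decay=0.05):
    """Crude two-variable balance: antigen (migrating donor leukocytes, or virus,
    in host lymphoid organs) expands toward a ceiling and is destroyed by cytolytic
    T cells; T cells are induced by antigen, decay, and are exhausted/deleted in
    proportion to the antigen load. Immunosuppression (0..1) damps T-cell induction."""
    antigen, t_cells = antigen0, 0.0
    for _ in range(int(days / dt)):
        d_antigen = growth * antigen * (1.0 - antigen / capacity) - kill * antigen * t_cells
        d_t = ((1.0 - immunosuppression) * induction * antigen
               - decay * t_cells - exhaustion * antigen * t_cells)
        antigen = max(antigen + d_antigen * dt, 0.0)
        t_cells = max(t_cells + d_t * dt, 0.0)
    return antigen, t_cells

# Left-panel outcome: the clonal response catches up, the antigen is eliminated,
# and the response itself then fades (no persistent chimerism, no carrier state).
print("clearance:", simulate(immunosuppression=0.0))

# Figure 54 outcome: damped induction plus exhaustion-deletion leaves the antigen
# dominant, i.e., persistent donor leukocyte chimerism (or the carrier-state analogue).
print("dominance:", simulate(immunosuppression=0.9))
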

It is axiomatic that long-term organ engraftment means that the recipient has developed some degree of chimerism-dependent donor-specific tolerance—without or with the aid of immunosuppression. The completeness of the tolerance can be inferred from the amount of immunosuppression required to maintain stable function and structure of the graft (Figure 55).

However, immunosuppression, without which human transplant would not be possible, is a 2-edged sword. In this second review with Zinkernagel in 2001, our foremost conclusion was that the widespread practice of heavy, multiple-drug immunosuppression during the early days and weeks after organ transplant is antitolerogenic (Figure 56).

Ever since the switch to such heavy prophylactic treatment in 1964, the extent to which acute rejection could be avoided had been considered the most-important criterion in judging the quality of immunosuppression regimens. The counter-argument in our 2001 article was that this policy systematically undermines the clonal activation, exhaustion, and deletion induced by the massive migration of the graft’s passenger leukocytes during the first 30 or 60 postoperative days. To the extent that this one-time only window of opportunity is closed by overtreatment, patients are committed at the outset to permanent dependence on unnecessarily heavy immunosuppression (Figure 57).

We proposed in our 2001 article that immunosuppression could be made more tolerance-friendly by applying 2 therapeutic principles, singly or together. The first principle was recipient pretreatment to weaken the immune response in advance of transplant, making clonal deletion of the impending donor-specific response easier. The second was the use of as little posttransplant immunosuppression as possible (Figure 58).

It will not have escaped your notice that these recommendations turned the clock back 40 years to the first-ever genuine series of successful kidney transplants of 1962-63 that I described at the beginning of this talk. The only difference was that the availability of more-potent drugs made this strategy safe. Immune enfeeblement in advance of transplant could now be accomplished with a single, large dose of a lymphoid-depleting agent such as the super-ALG campath, followed by tacrolimus monotherapy from which the patient could subsequently be weaned. This strategy allowed efficient clonal exhaustion-deletion, both acutely and later, with minimal risk of graft loss (Figure 59).

Both components of the strategy were implemented on the Pittsburgh organ transplant service in July 2001. The results and quality of life of all kinds of organ recipients were improved. The greatest effect was on the procedures with the most-troubled histories. Intestinal and lung transplant had been targets of criticism for years because of their high mortality. With a 25% gain in survival and greatly improved quality of life under tolerogenic immunosuppression (the yellow survival curves), both of these operations became reliable clinical services (Figure 60).

The extent to which long-term immunosuppression could be minimized by the improved management is epitomized by this intestinal recipient who was 68 years old at the time of her transplant on August 5, 2001. Now, in her 10th posttransplant year, she has been on 2 tacrolimus doses per week, with no added immunosuppression from the second year onward. Her only medical problem, shared with her loving husband, is obesity (Figure 61).

Elucidation of engraftment mechanisms and recognition of the stultifying effect of immune suppression on these mechanisms had other therapeutic implications. The obvious next step was to tip the antigen/T-cell balance that defines outcome toward antigen dominance (Figure 62).

This could be readily done in organ recipients with a properly timed adjunct infusion of donor leukocytes. The objective was to begin the donor-specific tolerization well before arrival of the transplanted organ. The second cell dose, delivered by the organ's own passenger leukocytes, is depicted by the secondary antigen hump (Figure 63).

This strategy was used with encouraging results in Pittsburgh in 2006 in a small number of kidney and liver recipients whose organs were obtained from live donors. Three weeks before the organ transplant, cells were obtained from the donors by leukapheresis and infused into the recipients who had been lymphoid depleted the previous day with campath. The second bolus of leukocytes came 3 weeks later with the surge of passenger leukocytes from the transplanted organ. Immune suppression throughout was with tacrolimus, from which weaning was considered after about 4 months (Figure 64).

To achieve the effect of 2 leukocyte dosages in recipients of deceased-donor grafts, it would be necessary to reverse the order of events. Lymphoid depletion followed by transplant of the deceased-donor graft would have to be the first step, with the infusion 2 or 3 weeks later of stored leukocytes obtained from the deceased donor at the time of the original tissue and organ retrieval. Our original plan was to test this protocol in organ recipients, and then apply it for hand and face transplant. This eventually was done the other way round (Figure 65).

As it turned out, the leadership for the deceased-donor transplant initiative was taken by Andy Lee, the exceptionally able and intelligent Chief of our plastic surgical division, who has a long-standing interest in hand transplant. In this photo, Dr. Lee is on the right. His 2 equally able and intelligent associates—Gerry Brandacher and Stefan Schneeberger—are at the middle and left (Figure 66).

The pure technical challenge of hand transplant makes organ transplant, even of the liver, look like child’s play. I don’t have time to go through a list of the structures that must be reconnected (Figure 67).

Suffice it to say the challenge was met by Dr. Lee and his associates in the 5 patients listed here, in whom bilateral or unilateral hand transplants were successfully done with the immunosuppression protocol that included cell infusion. All of these patients are on low-dose single-drug immunosuppression (Figure 68).

Because pictures are more persuasive than words, here are 3 of the recipients bearing new limbs. The man with the double forearm transplant is nearing the 2-year milestone. Note that his right-sided graft includes the elbow joint—the only successful one in the world to date (Figure 69).

The first recipient, a Marine who lost his hand and wrist in an explosion, is shown here being greeted with a handshake by one of his former commanding generals. On that high note, and with the knowledge that the proverbial red light has gone on, I will close with a short synopsis of the history of clinical transplant and then with a conclusion (Figure 70).

The history of transplant can be summarized by a short list of empirical steps, almost all taken by clinicians. Bone marrow transplant was conceptually anchored by mouse models that revealed the essential but unexplained association of donor leukocyte chimerism and tolerance. However, the evolution of organ transplant resembled the piecemeal construction of the floors of a house without an architectural blueprint. Both bone marrow and organ engraftment were made incrementally more practical over the years by the immunosuppressive agents shown on your left. For the most part, the drugs were used like sledgehammers in organ recipients, without dependence on tissue matching, and like scalpels in scrupulously HLA-matched bone marrow recipients. Although there was early evidence that organs are inherently tolerogenic, this was not recognized to be related to the donor leukocyte chimerism of bone marrow recipients for the next third of a century. During all this time, the self-defeating antitolerogenic effects of immunosuppression were not appreciated. Nevertheless, and in spite of their fragile and almost entirely empirical foundation, bone marrow and organ transplant became 2 of medicine’s greatest triumphs. They also generated literature so vast and complicated that it resembles an assembly of all of the phone books in the world, each in its own language (Figure 71).

So much for this history. I believe that the mystical and incomprehensible transplant literature can reach consilience—the unity of knowledge described by the great Harvard biologist E.O. Wilson in his 1998 book. With unity of knowledge (consilience), a few simple natural laws accommodate observations, principles, and facts in all models, in all disciplines, and in all domains of knowledge (Figure 72).

Our observations in transplant and Zinkernagel’s studies of infection analogues have exposed 2 such natural laws of immunology. First, the immune responsiveness or nonresponsiveness to an antigen is governed by migration and localization of the antigen. The second law is that the outcome of immune activation is determined by the balance reached between the antigen and the antigen-reactive host cells (Figure 73).

With these 2 laws, consilience can be reached for all of the diverse observations in all of the tolerance and alloengraftment circumstances I have discussed today, and for that matter in all circumstances of transplant failures. The linkage throughout the tolerance and alloengraftment spectrum is, of course, donor leukocyte chimerism. Start with the freemartin cattle, the mouse models of Billingham, Brent, and Medawar, and parabiosis. Then, continue with human bone marrow, organ, hand, and face transplant. They are all the same (Figure 74). Thank you for your attention.





Volume: 9
Issue: 2
Pages: 75-93



From the Thomas E. Starzl Transplantation Institute, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
Address reprint requests to: Thomas Starzl, MD, PhD, Professor of Surgery, University of Pittsburgh, Thomas E. Starzl Transplantation Institute, UPMC Montefiore 7th Floor, South 3549 5th Ave., Pittsburgh, Pennsylvania, USA
Phone: +1 412 647 5800
Fax: +1 412 624 2010
E-mail: mangantl@upmc.edu