Collecting and Applying Data to Reduce Risks: Life Insurance Companies and Public Health Agencies, 1900-1950
"Life insurance companies" may not be the first thing to come to mind when "public health work" is mentioned, but during the first half of the twentieth century, several of the biggest American life insurance companies became closely involved in developing--and funding--public health initiatives. The policyholder data they gathered and analyzed year after year revealed patterns of risk far broader than the medical community could study at the time. Although their aims were different--life insurance companies had a strong interest in reducing life insurance claims by keeping death rates lower, while physicians and other public health workers wanted to strengthen the health of communities and reduce the spread of disease--both had a stake in fostering state and local public health infrastructure and developing systems for gathering health-related information. Some insurance company staff, such as Louis Dublin, chief statistician for the Metropolitan Life Insurance Company, were active participants in the American Public Health Association, and worked with medical and public health colleges to educate doctors and nurses about vital statistics.
Life insurance developed in the mid-1700s in England, which already had fire and marine insurance providers. As early developers of actuarial methods (using statistics to predict life expectancy, often based on the survival rates of their own policyholders), insurance companies used factors such as age, occupation, geographic residence, race, and sex to select clients. (The risks could be substantial, however, and the resulting cost of premiums limited the number of policyholders for many years.) Starting in the 1860s, they also pioneered the use of medical examinations to screen applicants.

Industrial life insurance policies, which provided inexpensive sickness and death benefits to those who couldn’t afford or qualify for regular insurance, were first developed in England in the mid-1800s. Two decades later, three small American life insurance companies--Prudential Insurance, Metropolitan Life, and John Hancock--began offering industrial policies. Insuring low-income urban workers (who tended to have high mortality rates and periods of unemployment) required a different business model. Because actuarial knowledge and tools had become more accurate, the companies could use more liberal underwriting standards; they graded applicants on a scale of risks and set premiums accordingly. They assigned agents their own territories in each city and made them responsible for sales, premium collection, and bookkeeping for that area. The agents went door-to-door each week to collect premiums and developed relationships with families and communities. These close contacts helped agents sell more policies and also built trust, which would be key to some of the health education work the companies sponsored later. Insurance companies also changed their rules to allow easier renewal of lapsed policies. Soon they had millions of policyholders enrolled, each paying a few cents weekly for benefits that covered final illnesses and burials. By the early 1900s, these policies had become a standard feature of working-class life.
Although at first policy applicants could pick their own doctor to do the medical exam, insurance companies soon contracted with private physicians for this work. Because the risks were higher among industrial policyholders, insurance companies expanded and sharpened their medical exam criteria. The goals of the insurance examiners were necessarily different from those of physicians in private practice: rather than reaching a diagnosis and determining how a particular medical impairment would affect an individual patient’s survival, insurance companies grouped large pools of policyholders with a particular medical impairment and used statistics to estimate how many extra deaths would occur in the group over a specified time period.
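The excess-mortality calculation described here can be illustrated with a short sketch. Everything in it is hypothetical--the mortality table, the ages, the pool size, and the observed deaths are invented for illustration--but it shows the basic actual-to-expected comparison: total the deaths a standard life table predicts for a pool of applicants sharing an impairment, then compare the deaths actually observed.

def expected_deaths(ages, standard_rates_per_1000):
    # Sum each policyholder's chance of dying in the period, looked up by age
    # in a standard mortality table expressed as deaths per 1,000.
    return sum(standard_rates_per_1000[age] / 1000.0 for age in ages)

def mortality_ratio(observed_deaths, ages, standard_rates_per_1000):
    # Actual-to-expected ratio: 1.0 means the pool dies at the standard rate;
    # 1.5 means 50% "excess" mortality, which could be priced into the premium
    # or used as grounds for rating up or declining the risk.
    return observed_deaths / expected_deaths(ages, standard_rates_per_1000)

# Illustrative standard table (deaths per 1,000 per year at each age).
standard_rates_per_1000 = {40: 8.0, 45: 10.5, 50: 14.0}

# A hypothetical pool of 1,000 applicants with a given impairment, by age.
pool_ages = [40] * 300 + [45] * 400 + [50] * 300

# With 16 observed deaths against about 10.8 expected, the ratio is about 1.48.
print(mortality_ratio(16, pool_ages, standard_rates_per_1000))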
Company medical directors, physicians employed in the home office, appointed and supervised the local medical examiners, and determined the medical exam criteria. Much more knowledgeable than actuaries and underwriters about the medical examination, these directors standardized its content and met regularly with medical examiners in the field to oversee their activities. They also introduced diagnostic innovations on a company-wide basis and made sure that the examiners had the skills to use them. Since many physicians in private practice worked part time as insurance examiners (about half of all American practitioners by 1911), they adopted new diagnostic tools sooner than they might have otherwise.
The earliest medical tool adopted by the insurance examiners was urinalysis. Sugar in the urine had long been known to indicate diabetes, and in the early 1800s Richard Bright demonstrated that albumin in the urine (revealed by a simple chemical test) was associated with kidney disease. Although clinicians rarely did urinalyses in their practices, diabetes and kidney diseases were important causes of death, and insurance companies included the urine tests in their exams after 1885. As William Rothstein notes, "Urinalysis was a revolutionary advance in life insurance medicine. It predicted the development of life-threatening chronic diseases in apparently healthy applicants. It was inexpensive, could be obtained by even the least competent physician, and produced reasonably accurate results. If necessary, the test could be easily repeated. Statistical differences in levels of risk could be calculated by comparing the survival rates of groups with different test results."
As insurance companies continued to analyze policyholder data, they were able to expand the range of risks they could insure. The Actuarial Society of America published a study in 1903, based on the pooled experience of 98 classes of risks in 38 insurance companies, which showed that high-risk applicants might be approved if they were evaluated with life tables that quantified the risks of factors such as build, occupation, medical history, and residence. This study, and subsequent ones by a Joint Committee on Mortality formed by actuaries and medical directors, uncovered several new risk factors, including "build," which combined height and weight. The studies showed higher mortality rates for people who were more than 25% overweight for their height. The results also surprised many physicians: slightly underweight groups proved to have a significantly lower mortality rate. (In this era, being underweight was associated with malnutrition or wasting diseases like tuberculosis--a slight plumpness indicated someone well-fed and healthy.) By 1911, insurance company research had identified a variety of relevant medical and nonmedical risk factors and quantified the statistical risk of excess mortality associated with them. These included occupation, residence, build, physical condition, family medical history (including premature death), insanity, stroke, personal habits, and medical history.
Life insurance companies also pioneered the routine use of blood pressure measurement in medical exams. Both high and low blood pressure had interested physicians for many decades, but it wasn’t until the development of the modern sphygmomanometer in the late 1890s that blood pressure was routinely used as a clinical indicator and a risk factor. Sphygmomanometers were first used by surgeons to track the low blood pressure that often indicated the onset of shock during operations. At the same time, insurance companies added blood pressure measurement to their medical exams, quickly accumulating data that showed its utility in diagnosing kidney diseases and predicting early deaths. Using data from both accepted and rejected applicants, insurance statisticians were able to determine the average systolic blood pressure levels most associated with early mortality. By 1925, incorporating data from clinical studies, they were able to define hypertension as a systolic measurement of 140 to 160 mm Hg or higher with a diastolic level of 90 to 95 mm Hg or higher. The connections between hypertension and cardiovascular diseases took a long time to work out, because hypertension was often "silent," because it seemed a normal part of aging, and because many people lived long healthy lives despite it. Large-scale clinical trials were still decades away, and only the insurance industry had the statistical expertise and large pools of data to conduct long-term studies of thousands of blood pressure measurements. The companies worked to improve the accuracy of blood pressure measurement and to standardize the methods used, both among their own examiners and in medical schools. Medical directors also evaluated the quality of available sphygmomanometers, and even invented better ones.
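The kind of tabulation described above--grouping applicant records by systolic pressure and comparing death rates across the groups--can be sketched in a few lines. The record format, the 20 mm Hg band width, and all of the numbers below are assumptions made up for illustration; they are not the companies' actual data or procedures.

from collections import defaultdict

def pressure_band(systolic):
    # Assign a systolic reading to a 20 mm Hg band, e.g. 147 -> "140-159".
    lower = (systolic // 20) * 20
    return f"{lower}-{lower + 19}"

def death_rates_by_band(records):
    # records: (systolic_mm_hg, died_during_follow_up) pairs.
    # Returns deaths per 1,000 records within each blood pressure band.
    totals, deaths = defaultdict(int), defaultdict(int)
    for systolic, died in records:
        band = pressure_band(systolic)
        totals[band] += 1
        deaths[band] += int(died)
    return {band: 1000.0 * deaths[band] / totals[band] for band in sorted(totals)}

# Purely illustrative records: a lower-pressure group with few deaths and a
# higher-pressure group with many.
records = ([(125, False)] * 95 + [(125, True)] * 5 +
           [(165, False)] * 80 + [(165, True)] * 20)

print(death_rates_by_band(records))  # {'120-139': 50.0, '160-179': 200.0}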
Insurance medical examiners also were some of the first large-scale users of medical technologies such as chest X-rays to detect tuberculosis in the 1920s, and of electrocardiographs (EKG) to test for heart disease in the 1930s. One important result of the statistical identification of risk factors was that physicians began to accept the idea that certain physical and behavioral factors detectable in an exam could indicate long-term likelihood of developing diseases, even when the patient was asymptomatic.
Public health experts and statisticians in large American cities were also analyzing data--mainly vital statistics--to understand how morbidity and mortality levels in a given region were influenced by national origin, culture, and environment. Along with differences in living and working conditions that put people at risk, studies focused heavily on differences between population groups, especially their nationalities, because in many cities of the Northeast and Midwest, between 50 and 80 percent of the population were either foreign-born or of foreign stock. (Louis Dublin and Gladden Baker did an early study: "The Mortality of Race Stocks in Pennsylvania and New York, 1910," Quarterly Publications of the American Statistical Association 17 (1920): 13-44.) These studies turned up both expected and unexpected findings. While native-born Americans with native parents had lower overall mortality rates than immigrants, death rates among immigrant groups varied widely, and the nationalities the researchers expected to have higher rates (Italians, Russians, and Austro-Hungarians) in fact had the lowest, even for specific diseases such as tuberculosis, and even when income and living conditions were the same. Many other variables were examined: childhood nutrition, size and weight, home cleanliness, occupation, and so on. Researchers concluded that many of the differences must be cultural, though they couldn’t point to specific beliefs or practices. This suggested that improving immigrants’ knowledge and changing their beliefs through educational health programs would be a good approach to reducing disease and extending lifespans.
Urban public health departments, particularly in large cities such as New York City, launched many programs to track disease, using recent bacteriological methods to identify disease organisms in patients, water and food supplies, and buildings. Early initiatives tried to identify infected people and separate them from the community for treatment and to prevent the spread of disease. When these strategies proved inadequate, public health leaders developed health education programs to reach those at greatest risk (usually working-class immigrants living in crowded tenements). Health education flowed through a number of channels, starting with visiting nurses, whom New York City began using in 1902 to work with tuberculosis patients and their families.
Tuberculosis patients were often sick for months or years, and thus could be a continuing source of infection, especially to family members. Visiting nurses could check on a patient’s situation--condition of the home, nutrition, ventilation, etc.--as well as the progress of the disease, and educate patients and their families in basic hygienic measures to reduce the spread of the disease, and in the early signs of infection. With repeated visits over time, they could observe what measures were effective, and what other factors might complicate a patient’s progress. Nurses gathered much relevant data about patients and their families, including basic vital statistics, which helped the health department track the disease and assess control efforts. Urban health departments also distributed educational pamphlets produced by the National Tuberculosis Association and Metropolitan Life.
Starting in 1909, Metropolitan Life developed its own visiting nurse service, the first of many social welfare programs for its millions of industrial life insurance policyholders. Agents, who visited policyholders weekly, could notify the local visiting nurse service whenever they found sick customers who weren’t getting adequate care. The nurses, in turn, provided care along with health education, and collected data. Metropolitan soon became the largest employer of nurses in the United States, providing substantial support for the expansion of visiting nurse services. In its recruiting efforts, the company encouraged young women to enter the nursing profession through a series of school health pamphlets featuring "health heroes."
Federal, state, and local public health agencies, along with private organizations, also used pamphlets on health topics in their education campaigns: prevention and control of infectious diseases through sanitation and other hygienic practices, care of infants and children, nutrition, dental care, foot health, posture, vision care, and many other subjects were covered. As with visiting nurse services, Metropolitan Life vastly outpaced state and local governments in providing health-related publications, distributing over 500 million pamphlets between 1912 and 1929. These four-to-eight-page works were written by professionals, in clear, simple language, and updated periodically. The more popular pamphlets were offered in as many as ten different languages. Though initially intended only for Metropolitan policyholders, millions of pamphlets were distributed by schools, clinics and hospitals, physicians, state and municipal health departments, and private businesses. The company also employed the advertising and public relations methods becoming common in other industries, placing public service announcements in magazines and other outlets. By the late 1920s, many Americans were encountering these educational materials frequently, and becoming used to thinking about "preventive maintenance" as well as learning more about identifying and managing various diseases. They were also accepting the idea that their own behaviors and choices could improve their health or put it at risk.
Insurance company statisticians continued to conduct studies of morbidity and mortality, using industry and public data. With their common interests in public health data, insurance company staff also participated in public health professional organizations, and the companies (especially Metropolitan Life) helped fund many studies and public health projects. One of these was a survey of public health departments in U.S. cities, carried out by the American Public Health Association to better understand the level of services and to develop standards for public health work. This survey functioned as an assessment of risk factors within urban areas rather than in individuals. Correspondence in the Louis Dublin Papers from 1919 to 1922 illustrates how the study evolved and the obstacles it encountered. The survey led to the establishment of a permanent APHA Committee on Municipal Public Health Practices, funded in part by Metropolitan Life. Later correspondence shows how the APHA struggled to continue funding surveys and supporting the efforts of municipal health departments during the economic depression of the 1930s.
Metropolitan Life also co-sponsored one of the first community health studies, the Framingham Community Health and Tuberculosis Demonstration, which ran from 1917 to 1923. This project brought together physicians, nurses, statisticians, and other personnel from the National Tuberculosis Association, the U.S. Public Health Service, the Massachusetts State Department of Health, Metropolitan Life, private health agencies, and the health agencies and citizens of Framingham, Massachusetts. Their mission was to evaluate the overall health of the town’s 17,000 residents, with a special focus on finding and treating cases of tuberculosis. Active public participation was a key feature of the plan, and the project began by enlisting the cooperation of local political, business, and community leaders, as well as health professionals and the press. Investigators carried out a sickness survey of about one-third of households and gave complete medical exams to about 11,000 citizens during the course of the study. As part of this effort, researchers analyzed the community’s statistical background and studied the general sanitary conditions in schools, factories, and the wider community. They developed and applied a wide range of community-level interventions--including intensive health-education campaigns--and measured their impact on rates of illness and death. A second goal of the project was to demonstrate community-based methods of disease control and health administration, and to establish a model for effective public health systems.
The Framingham Community Health and Tuberculosis Demonstration program produced measurable and impressive results: tuberculosis mortality rates per 1,000 population in Framingham declined from an average of 1.2 in 1907-1916 to 0.4 in 1923, a much larger change than in the seven cities used as controls. Framingham continued to have a considerably lower tuberculosis mortality rate than the control towns for more than two decades, along with a 40% decrease in infant mortality rates. As historian William Rothstein has noted, "[T]he Framingham study contributed to growing national recognition that community-wide public health programs could have an impressive impact on health. The study was at the forefront in applying the principle that a wide range of community-level changes was required for effective control of tuberculosis or any widespread disease. It demonstrated the importance of public education and participation. By the end of the study, Framingham residents understood that they were collectively responsible for and could improve the health of their community.
"The Framingham study confirmed three conclusions that many public health officials had reached. Effective public health programs required the active participation of the public. Active participation occurred only when the public was systematically educated about the nature of the programs and the role of the public. Last, the programs could be evaluated only by statistical analyses. Statistics had become the final arbiter of all public health programs, and it would play an equally significant role in the campaign to reduce infant mortality."
In 1948, twenty-five years after the Framingham tuberculosis study ended, another community health project began in Framingham. The Framingham Heart Study (FHS) was (and is) conducted by the National Heart Institute at the National Institutes of Health. Coronary heart disease among older adults had emerged as a leading cause of death by the 1920s, as infectious diseases were reduced by better sanitation, vaccines, education, and rising standards of living. Long regarded as an inevitable consequence of age and genetic constitution, coronary heart disease wasn’t extensively studied or classified until after World War I. It was clearly becoming more common, but it was also difficult to diagnose, treat, or prevent. Physicians knew that diseases such as syphilis and rheumatic fever damaged the heart, but the causes of heart attacks in seemingly healthy patients over age 40 were much less clear. Atherosclerotic plaque in arteries (caused by high levels of cholesterol), high blood pressure, excess weight, and other factors were correlated with heart attacks in some studies, but did not seem to lead to heart disease in all patients. During the 1920s and 1930s, Louis Dublin and others began doing limited studies with physicians’ clinical records to gather more information on the types and rates of heart disease in America. The death of President Franklin D. Roosevelt from cardiovascular disease in 1945 highlighted the destructive power of this often latent and "silent" condition, and the urgent need to determine which of many risk factors were most at work.

The FHS investigators recruited about 5,000 Framingham residents between the ages of 29 and 62 (45% men and 55% women) who had no heart disease, then examined them every two years for 14 years, recording all aspects of their health and correlating that data with heart disease morbidity and mortality. The initial period of the study found that age and sex were risk factors independent of other factors, and it confirmed many risk factors identified earlier, such as blood pressure, serum cholesterol levels, body weight, and smoking. It also found correlations between heart disease and both diabetes and EKG abnormalities. An important finding was that individuals with multiple risk factors had much higher coronary heart disease rates than the sum of the individual risks would suggest.
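That last finding--combined risk factors exceeding the sum of the individual risks--can be made concrete with a small, entirely hypothetical calculation; none of the numbers below are Framingham results.

# All rates are invented, expressed as cases per 1,000 people over some period.
baseline_rate = 10.0      # hypothetical rate with neither risk factor
rate_with_a = 20.0        # hypothetical rate with factor A only
rate_with_b = 25.0        # hypothetical rate with factor B only

# An "additive" prediction adds each factor's excess over the baseline.
additive_prediction = (baseline_rate
                       + (rate_with_a - baseline_rate)
                       + (rate_with_b - baseline_rate))   # 35 per 1,000

# A hypothetical observed rate in people with both factors.
observed_with_both = 55.0

print(additive_prediction, observed_with_both)
# The gap (35 predicted vs. 55 observed in this made-up example) is the kind
# of excess the Framingham investigators reported when risk factors combined.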
The initial phase of the Framingham Heart Study was limited in several ways: the patients were mainly middle-class and of European ancestry, and other ethnic groups were not added to the study until the 1990s. Also, unlike the insurance companies, the FHS did not track physical activity or social factors (occupation, income, education, marital status, living conditions, etc.) to any great extent in the first cohort study. Though inspired in part by the earlier Framingham community health study, the FHS had a narrower scope. The Framingham Community Health and Tuberculosis Demonstration had aimed to reduce tuberculosis but also, in the process, to improve the environment and health services so that the community’s overall health improved, and it provided treatment along with disease tracking. The FHS was conceived as a long-term study focused solely on identifying the risk factors operating in cases of heart disease, and it did not provide treatment to its subjects.
Although insurance companies had been defining and analyzing risk factors for many years, the first major report of the FHS, in 1961, marked the point at which the term "risk factor" began to be adopted more widely by medical practitioners and the general public. As ongoing studies generated statistical evidence in the following decades, health agencies worked to educate the public about risks for a wide range of diseases and conditions, and about how those risks could be managed. In the early twentieth century, most individuals thought about their health only when they became sick. By the end of the century, health professionals and laypeople alike had accepted that health risks could be defined, and that they could be reduced, at least in part, through changes in individual and community behaviors.