The 1996 welfare reform law seemed, at the time, like a very big deal. Critics denounced it as a savage assault on those Americans whose lives were already precarious. Supporters hailed it as the first reversal since 1932 of a relentless trend: individual government welfare programs had grown steadily more numerous, even as each one spent more money and enrolled more recipients. Moreover, they contended, the law put the government and the republic fully on the side of personal responsibility and work, as the name of the act (the Personal Responsibility and Work Opportunity Reconciliation Act) made clear, and endorsed the proposition that welfare should be a temporary measure to help the needy, as the name of the program it created (Temporary Assistance for Needy Families) made clear.
At the signing ceremony with which Michael Tanner begins his Liberty Forum essay, President Clinton repeated a phrase he had used many times during his 1992 campaign: welfare should be “a second chance, not a way of life.”
Tanner demonstrates persuasively, however, that from the vantage point of 2016 the 1996 law is in fact quite a small deal. Federal and state governments spend much more on anti-poverty programs today than they did in 1996. Poverty rates have fluctuated in a narrow range over the past 20 years, the same range in which they fluctuated prior to 1996. Such changes as there have been in the number of Americans living in poverty appear to have more to do with the overall state of the economy, and with exogenous social variables like the out-of-wedlock birthrate, than with changes in welfare policy. Nor is there evidence that the law catalyzed a renewed commitment to personal responsibility and honest work. Families appear no more likely to form and cohere than they did 20 years ago, and growing numbers of able-bodied adults are becoming disengaged from the workforce.
Welfare “reform,” then, appears not to have reformed welfare. It certainly didn’t re-form it—make it into something fundamentally different from the pre-1996 approach to addressing the problem of poverty. Basically, the law slightly modified but largely affirmed the fundamental premises and commitments of welfare policy that had been built up over the six decades prior to 1996.
Good policies are better than bad policies, and Tanner offers proposals that sound as though they would create a welfare system more successful than today’s at assisting the poor, promoting self-reliance, and fortifying the nation’s commitment to limited government. It’s important to remember, however, that the 1996 welfare reform once seemed promising, too. Why did it make so little difference?
Policy design is one part of the answer, and policy implementation another. But in a democracy, the importance of politics should never be underestimated. Humans are prone to believe things they find congenial. "Confirmation bias" names our tendency to attach particular weight to those bits of information that fit with and confirm the ideas we already hold about how the world does and should work. Many conservatives, accordingly, believed that the 1996 welfare reform was but the first victory of an aggressive, sweeping campaign to simplify, diminish, and even delegitimize the welfare state.
After 20 years, that beginning is looking more like a culmination. A lack of political will, not just in the governing class but in the public at large, has to be one factor explaining the dog that didn't bark: the comprehensive reconstruction of the welfare state that America never pursued. The 1996 law has no progeny; no subsequent legislation has attempted further, similar reforms. Not only that, it was implemented in ways that largely vitiated its announced purposes, because the democratic desire to avoid big changes in the welfare state was stronger than the desire to effect them.
No one would design from scratch the sprawling, expensive, ineffective, redundant, inefficient, mystifying, and frequently counterproductive welfare state that exists today. But, it also appears, there is no consensus about how to change it. Even when we enact legislation that appears significant, as in 1996, the political determination to make sure the law is administered in accordance with its original purposes is overmatched by: inertia; the strength of interest groups, such as social workers, and their clients, such as the poor; and a widespread reluctance to pursue the arguments against the welfare state to their logical, and practical, conclusions.
Since The Political Beliefs of Americans (1967) by Lloyd A. Free and Hadley Cantril, it has been a mainstay of political science that Americans are ideologically conservative but operationally liberal. We dislike Big Government but we also think that most of what Big Government does, or says it intends to do, needs doing.
This ambivalence is especially pronounced regarding the welfare state. Ideological conservatism remains potent in this regard, partly because people are dubious about expanding government's ambit, but mostly because we oppose dependence and irresponsibility, which we don't want to sanction and subsidize. One of the many blunders that kept Senator George McGovern (D-S.D.) from the presidency in 1972 was his embrace of a "demo-grant" income redistribution program, and his subsequent explication of its underlying philosophy: "I would just provide that every person in this country is given a certain minimum income. If he wants to work in addition to that, he keeps what he earns." The rationale, McGovern explained, was that "you can't force somebody to work if they don't want to work." Such opinions offended voters who took pride in providing for their families by doing their jobs, day in and year out, even jobs that were boring, infuriating, demeaning, or dangerous.
At the same time that we are ideologically conservative, we appear to be operationally liberal. We don't want people to starve or freeze. In particular, we don't want children to suffer as a result of their parents' bad choices or habits. We look to the government to prevent these personal catastrophes, sometimes when all else fails but usually well before all else fails.
In February 2013, the Pew Research Center conducted a public opinion survey on government spending for various purposes:
Would you increase, decrease, or keep spending the same for:

| | Increase | Keep the same | Decrease |
|---|---|---|---|
| Aid for world's needy | 21% | 28% | 48% |
| Aid to needy in U.S. | 27% | 44% | 24% |
| Roads and infrastructure | 38% | 43% | 17% |
| Food and drug inspection | 33% | 50% | 14% |
| Natural disaster relief | 34% | 50% | 12% |
Clearly, one reason conservatives have failed to reduce government is that Americans don't want it reduced. Pew could not find a majority in favor of less spending for any of the 19 purposes it asked about. In only three areas (foreign aid, the U.S. State Department, and helping the unemployed) did more people favor less spending than more. For the social welfare functions among the remaining 16, the preference to increase rather than decrease spending ranged from the slight (aid to U.S. needy), to the clear (health care and Medicare), to the overwhelming (education and Social Security). Among self-identified Republicans in the survey, spending increases were more popular than cuts for Medicare (24 percent to 21 percent), Social Security (35 percent to 17 percent), and education (46 percent to 15 percent).
An important disclaimer: Public opinion surveys are biased in favor of government spending in a way that elections for officeholders or ballot propositions are not. The survey, that is, invites respondents to enter an artificial world where spending more on X never entails spending less on Y, and where additional spending does not require the government to borrow more or raise taxes.
Every Democratic presidential nominee and platform promises dramatic expansions in government's social welfare spending. But since Walter Mondale lost 49 states to Ronald Reagan in 1984 after pledging higher taxes, all Democratic nominees have also been at pains to say that the only federal tax increases they favor are those that will affect only the most affluent 5 percent (or less) of the population. In the state of Washington, which President Obama carried easily in 2008 and 2012, the voters rejected by 64 percent to 36 percent a ballot proposition that would have instituted the state's first income tax. The terms of the proposition, which was on the ballot in the midterm election of 2010, made clear that it would apply only to households with incomes over $200,000 a year. Even so, the possibility that the state legislature might someday decide to apply the tax below the $200,000-a-year line doomed the measure.
Is it any surprise our welfare state is a mess, given our confused and contradictory ideas about what we want for and from it? We want the government to do more but don’t want to pay for it, either because we childishly refuse to face basic mathematical realities or, more creditably, because we want convincing evidence that the government is effectively spending the money we already give it before we consent to give it more.
Many of our ideas, proposed and implemented, about the welfare state are efforts to square the circle: to affirm individual responsibility and self-reliance while, at the same time, pursuing a commitment to alleviating suffering. The New Deal's emphasis on social insurance is the most important effort to synthesize these two very different aspirations. According to J. Douglas Brown, an economist who helped author and implement the Social Security Act of 1935, the government benefits provided by social insurance programs constituted "the honest fulfillment of a contract between the citizen and the state." Benefits are calibrated according to complex formulas that reflect the severity of need, but also, more prominently, the so-called contributions to social insurance programs, "since we still believe in America that a man should be rewarded for his own efforts."
Americans have eagerly embraced the logic of social insurance, to the point that even Republican voters favor expanding rather than reducing the biggest, most famous programs. More broadly, we have endorsed the premise that has shaped the welfare state: Government programs that honor and promote work in particular, and personal responsibility in general, are worthwhile; any that dishonor or discourage these virtues are unacceptable.
Aid to Families with Dependent Children (AFDC), an entitlement program widely and correctly viewed as doing very little to encourage recipients to be responsible for their own welfare or that of the children they brought into the world, became politically vulnerable over time, and by 1996 was untenable. At the same time, however, the Earned Income Tax Credit (EITC) grew steadily, with the approval of Republicans and Democrats at both ends of Pennsylvania Avenue. It has only gotten larger since 1996. Both are “means-tested,” available only to people with low incomes. What sets the tax credit apart from AFDC is that its benefits are confined to those who . . . earn income. EITC, that is, operates as a negative income tax or wage supplement, helping those—and only those—who help themselves.
We can see in the Pew results that Americans’ opinions about government spending reflect the importance we attach to reciprocity. One of the most popular spending items is assistance for military veterans. Another is education, which recommends itself as a way to help people trying to help themselves. A third is Social Security, which Americans really have come to regard as “the honest fulfillment of a contract between the citizen and the state.”
In one important respect, this contract is not honest: Social Security and Medicare do not just give back to Americans what they paid into the system, but in most cases provide a great deal more than they paid in, as C. Eugene Steuerle of the Urban Institute has shown. As a matter of politics, however, the legend of reciprocity has overwhelmed the reality of large subsidies to some taxpayers from other taxpayers for social insurance. The problem of unsustainable entitlements is less one of “greedy geezers” than of beneficiaries who have completely embraced the logic of social insurance. Having paid what they were told to pay, they demand the benefits they’re convinced they’ve earned.
Reciprocity also explains why other parts of the welfare state are less popular than social insurance programs. We are, according to the Pew survey, less enthused about government spending for people whose claim is based on their need rather than their contributions. If you’re simply sick, or unemployed, or poor, then the margins in favor of helping you are modest rather than overwhelming.
One of Bill Clinton’s favorite expressions was that no American who “works hard and plays by the rules” should be impoverished. This rhetoric was intended to direct our attention to the parts of the welfare state that we were encouraged to feel good about, the parts that helped people to help themselves. And it was designed to convince us that such laudable endeavors were pretty much the essence of the welfare state, that its beneficiaries in general were seeking only a second chance, not dependence as a way of life.
Clinton’s formulation does not, however, point us in any particular direction when we try to figure out what to do about people who don’t work hard or play by the rules. To say that welfare should be a second chance is not the same as saying it should be a last chance, that we’ll ever follow through on our threat to cut you and your dependents off from public support if you don’t get your act together.
And this ambivalence is spread across the ideological spectrum. Thus, the Cato Institute’s Michael Tanner favors replacing in-kind welfare benefits with cash transfers, in part because of the administrative simplicity, but mostly because doing so “treats recipients like adults.” Food stamps cannot be used to purchase alcohol or tobacco, for example, but cash can. The message of substituting the latter for the former would be that we trust you to know and do what’s best for your situation.
In the next breath, Tanner correctly points out that you’re very unlikely to be poor—and in particular to be chronically rather than temporarily poor—if you finish high school, avoid having children you’re unable to care for, and stay in the labor force. These, however, are all basic, respectable—but by no means heroic—adult responsibilities. Since most poor people are poor because they failed to discharge them, “treating them like adults” is wishful thinking. It’s perilous for a republic to govern itself by embracing the dubious maxim that trust edifies, and absolute trust edifies absolutely.
In short, the 1996 welfare reform didn’t amount to much because Americans had no clear ideas or strong convictions about what they wanted it to amount to. A significant portion of our population has grown detached from participating in the economy and adhering to basic social norms. We threw money and policy ideas at the problem before 1996, and nothing worked. With the Personal Responsibility and Work Opportunity Reconciliation Act, we threw more money and somewhat different ideas at the problem. It didn’t work, either.