National Research Council; Division of Behavioral and Social Sciences and Education; Commission on Behavioral and Social Sciences and Education; Committee on Basic Research in the Behavioral and Social Sciences; Gerstein DR, Luce RD, Smelser NJ, et al., editors. The Behavioral and Social Sciences: Achievements and Opportunities. Washington (DC): National Academies Press (US); 1988.
This chapter is mainly about situations in which people make choices and goods and services are distributed. The most familiar such situation is the market, but choices and allocation play essential roles in all organizational and political contexts. Among the questions that have come to dominate recent research in this area are the following: What are the distinctive features of collective—in contrast with individual—choice and decision making? Is it true that “who controls the agenda controls the decision,” and if so, in what sense? What do electorates really choose, and how does the choice situation presented to them affect the outcome? What are the forms and consequences of internal political struggles within organizations? How do external, institutional constraints (such as constituencies interested in the fate of organizations) affect these processes?
In looking more narrowly at markets and related economic activities, current research considers such questions as: How are choices in markets affected by the availability of information and the structure of incentives? When and how do efforts to influence incentives through regulation contribute to more or less efficient market systems? To what extent are the expectations of economic agents based on “rational” factors and to what extent are they influenced by other factors?
What considerations do economic agents take into account when they negotiate and strike bargains with one another? How exactly do repeated interactions and stability of a relationship among agents change these processes?
How do complex patterns of information, incentives, constraints, and discrimination affect wage rates and other outcomes in the labor market?
The issues of social groups selecting among alternative possibilities and deciding on allocations of scarce goods and resources, coupled with the use of power to enforce certain decisions, are central to two disciplines, political science and economics. But other kinds of knowledge are also involved. Since organizations are often the key to carrying out those choices and allocations, insights from social psychology and sociology are needed, and recent work has incorporated laboratory experimental methods that evolved in behavioral psychology. Many issues are addressed most effectively by combining all of these perspectives.
Collective Choice and Organizational Behavior
Decisions are inevitably affected by faulty memory, limited capacity to process information, and uncertainty about various factors that affect the outcome. Within these inescapable constraints, people who must make decisions should take into account all of the possible consequences of choice, good and bad, and their probabilities of occurring, not overlooking events whose probabilities are extremely small. The growth in understanding of individual decision making was discussed in Chapter 1. But important decisions are often assigned to groups—boards of directors, committees, legislatures, and the like. Such decisions, or collective choices, involve a social process that introduces a range of considerations quite different from those involved in individual decision making.
Research on collective choice has focused mostly on formal decision mechanisms of voting and on resource allocation, especially of public goods. Of particular interest is a mathematical approach that illuminates and yields strong predictive power in analyzing legislative agenda formation and electoral procedures. A major challenge at present is to understand the group processes that underlie many private decisions in the business world as well as issues that arise in public voting. This work on group process, carried out mainly by investigators in social psychology and organizational behavior, has amassed a great deal of data and has led to a considerable body of informal theory, but so far detailed formal or computer-simulated models have been rare. Work in this area is expected to grow, in part because of the enormous economic and social importance of improved decision making to both government and business.
Setting Agendas
Many organizations have the capacity to make certain choices that are likely to be disadvantageous or even oppressive to some members, and even in the face of dissent or dissatisfaction they can, to a degree, enforce these choices. At the highest level of social organization, taxation without reciprocal services, conscription to fight in a war that some individuals, alone or as part of an organized minority, regard as illegitimate, and imprisonment are extreme examples. Most organizations, even fundamentally political ones, are formally voluntary—citizens can, in principle, leave the city, state, or nation; stockholders can sell their stock; workers can quit their unions—but these “exit” options may be so costly, unattractive, extreme, or ineffective that they are hardly real options. More serious are the options of vocal contention, disruption, or withdrawal of support, active participation, enthusiasm, and diligence. The need to maintain such support in spite of disagreements and conflicts with members leads to the establishment of complex and sophisticated procedures for legitimating decisions, generating loyalty, and resolving intraorganizational disputes.
Because majority-rule voting systems are explicit, formal, and very common, they are the best understood of such procedures. A centrally important, undesirable feature of such voting systems is that they typically exhibit a particular kind of fundamental indeterminacy. For example, suppose that a group of three people (or three voting factions of equal size) uses a majority-rule voting procedure to select one of three alternatives—a, b, or c. Suppose further that the first voter or group prefers a to b and b to c, and so a to c; the second voter, b to c and c to a, and so b to a; and the third voter prefers c to a and a to b, and so c to b. Thus, one simple majority (voters 1 and 3) prefers a to b, another majority (voters 1 and 2) prefers b to c, and a third majority (voters 2 and 3) prefers c to a. No alternative is preferred to both of the other two. So the choice of any one of the three alternatives appears to be wholly arbitrary and will be determined by the order in which the alternatives are considered.
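The cycle in this example can be verified mechanically. The short Python sketch below (an illustrative aid, not part of the original text) tallies the pairwise majorities for the three preference orders given above and then runs each possible two-stage agenda; every alternative wins under some ordering, anticipating the agenda results discussed next.

```python
from itertools import combinations

# The three preference orders from the example, best alternative first.
profiles = {
    1: ["a", "b", "c"],   # voter 1: a over b over c
    2: ["b", "c", "a"],   # voter 2: b over c over a
    3: ["c", "a", "b"],   # voter 3: c over a over b
}

def majority_winner(x, y):
    """Return whichever of x and y a simple majority prefers."""
    votes_for_x = sum(1 for ranking in profiles.values()
                      if ranking.index(x) < ranking.index(y))
    return x if votes_for_x > len(profiles) / 2 else y

# Pairwise majorities exhibit the cycle: a beats b, b beats c, c beats a.
for x, y in combinations("abc", 2):
    print(f"{x} vs {y}: majority prefers {majority_winner(x, y)}")

# Two-stage agendas: vote on the first pair, then pit the survivor
# against the remaining alternative. Each agenda yields a different winner.
for first, second, third in [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]:
    survivor = majority_winner(first, second)
    print(f"agenda ({first} vs {second}), then {third}:",
          "winner is", majority_winner(survivor, third))
```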
A major theoretical finding of the 1950s, whose significance is only gradually being recognized in practice, was a general proof that whenever a system of majority rule is used to choose among more than two alternatives, the outcome will in general depend on the order in which pairs of options are considered, the so-called voting agenda. This general mathematical property of majority rule under these conditions makes it highly susceptible to manipulation. It was shown, under plausible assumptions about the distribution of individual preferences, that if the agenda for voting can have any effect at all, then it completely dominates all other effects. The voting order can be selected so as to lead a group to choose nearly any option on the table, including those options that virtually everyone would initially consider to be undesirable. The logical structure of majority rule thus entails the possibility that anything can happen, depending on the agenda sequence. This means that control over the agenda—over the order of voting or other procedures—is extraordinarily important, as is obvious in the simple example above.
These theoretical observations have been shown to apply empirically to many complex and realistic situations. Whenever the number of alternatives is sizable, the probability that there is a single, universal majority winner over all other options is very small, and the outcome depends largely on the procedures for determining the order of consideration of the alternatives—the agenda. Recent experimental work clearly demonstrates that an agenda of a particular form, implemented and rigidly followed in situations in which individuals are unaware of others’ preferences (as in secret-ballot elections), can succeed in controlling a group decision. Furthermore, this phenomenon has been shown to hold not only for majority rule, but for most of the voting rules in common use that entail subdividing the set of alternatives into a series of votes.
Two avenues of research in this area are now under intensive study: explicit treatments of improved, fairer agenda setting for those social decisions that involve sequential voting among alternatives, and the development of amalgamated-preference decision schemes that require only a single, simultaneous vote and are not subject to serious distortions due to strategic voting. One important approach in this area, approval voting, is discussed below.
Sequential and Simultaneous Votes
Since the source of power in organizations rests heavily on the ability to affect agendas, both their constitutions (or other fundamental contracts) and the strategic behavior of conflicting parties (for example, their rhetorical presentation of issues) must be analyzed in terms of agenda processes. Basic theoretical research on agendas begins with the much simplified case in which agenda items arise in a known order and the preferences of all participants are fully known to everyone. As research knowledge about such extreme cases has increased, it has become possible to study rigorously cases much closer to the real world, in which knowledge of the preferences of others is imperfect. Ultimately the test of any attempt to design a better agenda-setting procedure is that it should be both easy to implement and likely to reduce the probability of disastrous outcomes, such as electing candidates who are low on everyone’s ranking or candidates who are backed by an intense, organized minority even though they are disapproved by the majority.
The natural alternative to sequential voting procedures is one in which each voter must report, at one time, something about his or her preference ordering of all of the alternatives, and this information is amalgamated by some specific rule to generate the social choice. A famous result, much studied and elaborated, shows that if all voters provide a detailed ranking of their alternatives, it is mathematically impossible to devise a fully satisfactory procedure, one that meets all of the usual requirements of fairness, to yield an amalgamated group ranking. The most pernicious feature of most procedures for amalgamating preference orderings is that they invite strategic voting, in which a voter reports deceptively about alternatives other than his or her most preferred one.
For example, in one procedure widely used by small committees attempting to rank several alternatives, each voter ranks the alternatives from best to worst, and the group ranking is obtained by adding the numerical rankings assigned to each alternative. A common strategic move is for a voter to put his or her second (third, and so on) choices at the bottom of the list if those options appear to be the first choices of a substantial number of other voters. The effect of such a move is to increase the likelihood that the strategic voter’s first choice will win out despite the fact that more people prefer one of the other choices.
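A small numerical illustration (hypothetical ballots, not from the original text) shows how this rank-sum procedure rewards such “burying.” Here five voters rank three candidates; ranks run from 1 (best) to 3 (worst), and the lowest total wins.

```python
def rank_sum_winner(ballots):
    """Rank-sum rule: rank 1 is best; the lowest total rank wins."""
    candidates = ballots[0]
    totals = {c: sum(ballot.index(c) + 1 for ballot in ballots)
              for c in candidates}
    return min(totals, key=totals.get), totals

# Sincere ballots: three voters rank A > B > C, two rank B > A > C.
honest = [["A", "B", "C"]] * 3 + [["B", "A", "C"]] * 2
print(rank_sum_winner(honest))      # A wins (total 7 against B's 8)

# The two B-supporters bury A, their true second choice, at the bottom.
strategic = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(rank_sum_winner(strategic))   # B now wins, although 3 of 5 voters prefer A
```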
Such strategic voting clearly creates a distortion in the social process, and realization of that fact has prompted a search for procedures that are far less susceptible to such manipulation, although they will necessarily have some other undesirable feature. One example, not yet fully understood, is approval voting, in which each voter partitions all of the candidates into only two sets, approved and not approved. The procedure seems to work well in practice if each voter splits the approvals and nonapprovals about equally, but additional theoretical, field, and experimental work is needed to understand more fully its properties—both limitations and virtues—when the approval/nonapproval split is not equal. To some degree, progress has been hampered by limited resources since the experimental and observational work involved must be quite extensive. But because voting is such a pervasive feature of modern societies, especially ours, it seems to be a wise investment to understand better how best to carry it out.
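By contrast, an approval ballot carries only a two-way partition, so there is no lower ranking in which to bury a rival. A minimal tally, on assumed ballots, looks like this:

```python
from collections import Counter

# Each (hypothetical) ballot is simply the set of candidates the voter approves.
ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A"}, {"B", "C"}]

tally = Counter(c for ballot in ballots for c in ballot)
print(tally.most_common())   # B wins with 4 approvals, against 2 each for A and C
```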
Electorates
One problem of institutional design that is important for contemporary society is to render electoral systems fair to individuals and to significant groups during periods of social change as well as during more stable times. In the United States, Great Britain, Australia, and Canada—unlike continental Europe and most other areas of the world—the most common election procedure is plurality: the candidate with the most votes wins. While single-member-district plurality elections predominate at the national and state level in the United States, multimember-district plurality elections are quite common at the municipal or county level. Plurality multimember districts have come under increasing attack in federal courts for diluting the voting strength of racial and linguistic groups, and testimony based on collective choice theory has played a critical role in these challenges to multimember districts. Certain provisions for majority runoff elections have also come under challenge as racially discriminatory in their effects. A body of research has been conducted during the past decade on the properties of runoff systems, and future work to fuse this analytic theory with other approaches may lead to definitive knowledge about these systems.
No country in the world makes greater use of balloting of one sort or another as a decision mechanism than does the United States, which has by far the highest ratio of elected officials to citizens. Only rarely is the choice between just two alternatives to be decided by majority. Not only are there often more than two choices, but much balloting rests on a complex federal system whose multitiered structure also generates layers in political parties. In addition, special majorities are required for certain kinds of actions; concurrence of more than one voting body is usually required for legislation (bicameralism); veto powers of various sorts govern the relationships between legislature and executive; one unit may have the power to propose and another the power to block; and so on.
Understanding the properties of such complex, layered organizational arrangements is no easy task, but much progress has been made in the past two decades. In particular, models based in part on game theory have permitted researchers to reexamine and develop new understanding about many complex institutional arrangements (for example, the veto powers of the “big five” in the United Nations Security Council, the character of the U.S. presidential electoral college, voting rules in the European Economic Community) in terms of a common framework. Such models have made it possible to discover when apparently minor changes in procedures may actually have major effects that otherwise would not be anticipated.
Research has also illuminated the link between types of electoral systems, distribution of partisan support (as measured in votes received by a party’s candidates), and legislative seat shares. For example, an important hypothesis, the algebraic cube law describing the relationship between the proportion of votes cast for a party across all districts and the proportion of seats it will win in the legislature, has been reformulated in a very general fashion, incorporating factors such as the average number of seats being contested per district and the effective number of political parties contesting them. This revised model permits far more accurate predictions than were previously available of the probable consequences of changes in election procedures, as in France’s shifts between plurality and proportional representation. A number of key issues, such as changes in candidate and voter behavior in response to changes in election rules, await additional observations and theory to be incorporated into the model.
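In its classical form the cube law can be stated compactly (a standard formulation supplied here for reference, not quoted from the chapter):

```latex
\frac{s}{1-s} = \left(\frac{v}{1-v}\right)^{3}
```

where v is a party’s overall share of the two-party vote and s its resulting share of legislative seats. The generalized reformulation described above, in effect, replaces the exponent 3 with a parameter estimated from features such as the number of seats per district and the effective number of competing parties.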
Going beyond matters of particular decision making to the overall theory of democracy, one central and long-standing problem concerns the relation between voting and the ideals of popular participation and control over government. In one popular vision of democracy, voting is expected to produce programmatically coherent results so that popular participation is sensible and effective. Yet analyses of multiple-option voting agendas reveal that it is quite possible for voting results to be persistently incoherent. This discovery challenges the town meeting view of the fundamental basis of democracy. In particular, the concept of direct voting as a means to enact directly a consistent popular will into law, which is rapidly becoming technically feasible because of electronic networks, simply may not be a coherent way to legislate. In contrast, the concept of voting as a means of changing officials and thus affecting the law at a greater remove—embodied in most U.S. legislative procedures—seems quite consistent with the discoveries of collective choice theory.
Founding Political Systems
A final major area of research in collective choice is the founding of political systems. Virtually all societies experience periods when the fundamental procedures of voting, agenda control, and dispute resolution become matters of conscious collective choice, perhaps exercised through an ad hoc representative body such as a constitutional assembly, a party convention, or a committee convened under military auspices and subject to plebiscitary ratification. The adoption of the Constitution is the most familiar example in the United States, but there are many others in recent history, including Japan, West Germany, and the numerous states that emerged from the territories of European colonies after World War II. At these moments in history, society (or at least parts of it) becomes the designer of its own organizational structures, and its choices determine in large part the future viability, effectiveness, and justice of those structures. Continued historical and comparative analyses of the creation of such political organizations—including studies of the alternatives then available—will lead to understanding the information, incentives, and conflicts that were pertinent to those who created them, as well as the effects of the procedures by which those organizations were created. It is also important to understand how later generations of leaders and citizens have read and interpreted those earlier historical moments and how they brought them to bear—successfully or not—on the economic, social, and political developments that could not have been envisioned by the organization’s original designers.
Although overall research progress in the study of agendas and voting systems has been driven largely by theoretical work, a substantial commitment exists to empirical testing, observation, and application. The descriptive literature on political parties, interest groups, committees, and related organizational forms is rich, but needs further codification in terms of collective-choice models. Progress has been made in developing laboratory and systematic observational methods for studying collective choice processes, but this work is in a relatively early stage and will benefit from additional, focused research.
Organizational Design and Change
The past 20 years have seen considerable progress in research on the determinants of organizational structure. The first phase in this program of research developed what has come to be known as contingency theory. According to this perspective, optimal organizational design buffers the technological core, which is the material process of production, from external shocks. It does so by creating peripheral structures designed to deflect or absorb such environmental turbulence as market volatility, political change, major legal rulings, and the like. The optimal design depends on the detailed needs of the technical production system and the nature of the environmental variations and uncertainties.
In this model, if the technology is relatively stable and the environment varies along a limited spectrum of possibilities, the needed organizational structure is highly routinized and unchanging. Most organizational change is thus contingent on fairly revolutionary shifts in either the technological base or in the economic, political, or legal environment. More recent research has broadened the theoretically admissible sources of change—and sources of stability in the face of pressures for changes—by focusing on less formalized variables in the organization and its environment and studying closely the evolutionary movements that bring organizations more slowly but just as surely into new alignments with new capabilities.
Organizational Politics and Institutional Constraints
Research in the past decade has focused on two informal features of organization that have far-reaching implications: organizational politics and institutional constraints. Resource allocation within organizations is subject to intense political contest among agents within the organization, in business as in government. There are two main causes of such struggles. First, in large modern firms information and decision making are in fact decentralized, whatever the formal structure, due to the very limited capacities of decision makers, even aided by large computers, to observe, process, and communicate information efficiently. The largest private employer in the United States, General Motors, employs some 660,000 people, enough to fill every job in metropolitan San Diego or in the state of West Virginia. While most firms, even in the Fortune 500, are much smaller than General Motors, the median size of these firms is still 13,000 employees, far too large an internal economy to be managed effectively without substantially decentralized information and decision making.
The private information of a decision maker yields a measure of power to pursue private goals that may be in conflict with corporate goals. Moreover, the many players in the internal corporate economy—shareholders, directors, managers, workers, and, sometimes, creditors—typically have at least partly divergent interests; hence, it is difficult even to impute to a firm a single overall objective. Among the objectives of the several types of players in the firm are profits, market share, growth, monetary compensation for managers and workers, quality of work, perquisites, and status. In addition, the different players may have different attitudes toward risk. Resource allocation and, thus, ultimately, organizational structure and strategy depend, at least in part, on processes of coalition formation and contest, especially when the costs and benefits of alternative allocations are difficult to measure and forecast. To go beyond the insights provided by case studies, the systematic analysis of these organizational coalitions and contests now requires a move toward large-scale data collection among representative samples of organizations.
An active line of research has been concentrating on institutional constraints on organizations. Organizational designs are constructed and evaluated in a sociocultural context. Some designs have extensive social backing, that is, they are codified and promulgated by professional associations and schools or by government agencies. Designs also stand as markers of difficult-to-observe competencies, such as managerial acumen, and are therefore used strategically to signal such competencies. And seemingly neutral arrangements tend to become infused with moral value by members of organizations, turning means into ends. Designs may proliferate even when they make little or no contribution to productive efficiency if they serve the political or institutional purposes of subgroups within organizations or other powerful agents in the environment. Ethical and religious factors continue to play important roles as they have throughout history, such as the centuries-long effect on economic organizations of religious views about usury.
Research on organizational politics and institutional processes has made clear that organizations face strong inertial pressures. Attempts at radical redesign, especially in large, established organizations, spark political opposition and activate institutional resistance. Even without those pressures, there are bound to be transaction costs, that is, the costs of change. Opposition and costs can delay reorganizations that would take advantage of changing opportunities or enable better response to competitive threats. A core problem in explaining the spread of organizational forms is to learn how structural arrangements affect the speed and flexibility of response of large organizations.
Organizational Evolution
Although the dynamics of organizational evolution are more difficult to understand than the maintenance of existing organizational structures, a number of new developments are noteworthy. Using both theoretical and empirical techniques, researchers have developed insights into the formation of commodity and financial markets, the evolution of regulatory structure, the emergence of legal rules, the development of political institutions, and the principles of organizational change generally.
Much of the theorizing and empirical research on organizational evolution focuses on environmental factors. Some theories rest essentially on the diffusion of technological change as a driving force: when historically inherited organizations are unable to function in newer, technologically reshaped environments, new organizations arise to take their place. Some researchers have argued that the limited liability corporation evolved in direct response to the increasing pace of technological change. Other environmental factors are political in character: for example, it has been suggested that the rise of the seniority system in the House of Representatives is, in part, the result of changes in congressional constituencies that led to long incumbencies. When change is rapid, it becomes important to learn how new forms of organization are linked with processes of entrepreneurial activity. Since most entrepreneurs come from existing organizations, the dynamics of organizations undoubtedly affect the rates at which entrepreneurs are spun off and the likelihood that new organizational forms will establish footholds in competitive environments.
Some recent lines of theory and research emphasize rational-adaptive learning and copying; others emphasize competitive selection. Rational-adaptation theory accounts for both the structure and performance of organizations by focusing on the way in which large and powerful organizations respond to threats and capitalize on opportunities in their environments, which include not only the well-understood problems of availability of resources and markets but also the strategies of other organizations. Such research suggests the importance of growth by planned accretion or acquisition of new components by existing organizations, leading to the many-armed giants that seem to be a characteristic of modern times, including multicampus university systems, diversified corporations with many component subdivisions, and increasingly complex systems of military organization.
The competitive or ecological approach takes the notion of a population or system even more explicitly into account. Its central feature is that the diversity of organizations in society is seen as a kind of outpouring of new (and residue of old) organizational experiments and variations, much as the diversity of biological species must be regarded. Survival depends on the ways in which new organizations are created, the advantages they might possess in a changing environment, the kinds of environmental niches they might find, and whether previously existing organizations fail to adapt successfully to changing environmental conditions, including the arrival of new organizations.
Theory and research on organizational dynamics have developed considerable momentum. In particular, appropriate dynamic models are now in use for studying life histories of single organizations and groups of organizations. Likewise, promising starts have been made in modeling organizational learning and copying. Some convergences with other lines of social and economic dynamics have become clear, but they have not yet been exploited. Progress in understanding the issues discussed here will grow significantly as a result of building explicit bridges with work at the frontiers of dynamic modeling. In addition, this kind of approach strongly reinforces the necessity for data that use the universe of organizations, rather than of individuals or households, as a sampling frame across time (see discussion below, “New Sources of Data on Jobs and Careers”). Only with such data bases can theories of ecological process and structure be tested and new ideas based on greatly enhanced observational capabilities—rather than analogies with biological evolution or older historical theories—begin to appear.
Markets and Economic Systems
Among the principal features of contemporary economic systems—markets, firms, and various forms of centralized, bureaucratic planning—markets may be the most important subtype, especially in Western industrialized countries, partly because they entail relatively little overall, expensive, organization. Markets are a fundamental type of arrangement whereby allocations and exchanges of goods and services take place. The characteristics of any market specify how individuals who enter it as agents (on their own behalf or as representatives of others) attach value to items or services to be exchanged and how they assign related costs and responsibilities. Market agents can, in principle, be anonymous: that is, the rules of the market relationship between agents do not depend upon their individual identity (except, of course, in the event of discovery of fraud or coercion), but rather on formal rules of exchange. In theory, a pure market form of organization is fully characterized by a language (or logic) of communication and choice, a set of process rules that govern communication between the agents, and a set of allocation procedures that carry agents’ intentions and choices into final effect, thus clearing the market at the end of the trading period.
A trend toward markets is noticeable at the present time even in some economies strongly committed to central planning. Therefore, it is only natural that considerable current research is focused on market phenomena. At the same time, a great deal of attention is also being paid to nonmarket phenomena, especially those arising in the allocation of public goods and in handling costs that are not borne directly by the transacting parties (for example, pollution). The study of nonmarket phenomena is also essential to understanding the behavior and performance of most third-world, as well as socialist, economies. One central research problem in market and nonmarket economies—and in other organizational structures—is that of analyzing the interaction between incentives and information.
Prices, the sine qua non of markets, accomplish two things. First, as the common denominator of economic exchange, they make individual agents reveal information about their relative preferences, that is, the relative values they place on commodities, through their offers or lack of offers to buy and sell goods and services. Second, the price system aggregates all of this information so as to allocate commodities among agents. In a fully efficient market, the price mechanism facilitates a set of trades such that, when all accounts are settled, the available commodities will have been allocated so that, overall, owners are satisfied in the sense that no further trades could improve the general level of satisfaction (utility) of all agents. This is the general definition of efficiency in the economic sense.
The neoclassical theory of market prices, with its normative implications for how markets should be organized in order to become fully efficient, rests on strong assumptions about the stability of agents’ preferences, the ease of exchange of information among them, and their spontaneous willingness to express their true preferences in bids, offers, rejections, and acceptances. During the past decade, theoretical work and empirical findings have accumulated about the extent to which these assumptions, often grouped together under the rubric of “perfect information,” can be verified, the circumstances under which they need to be relaxed or otherwise altered, and the results of doing so. A strong effort is in progress to formalize the character and results of market operations and other allocative processes involving information that is less than perfect, information that is privately held, strategically misrepresented, or intrinsically costly to obtain. This kind of research has been especially useful in analyzing policy issues related to the regulation and deregulation of major consumer and producer markets. At the same time, equally strong efforts have been made to extend the theory in a new direction, to explain financial market decisions that are predicated on inferences about future prices rather than on responses to present ones. In both of these areas, leading theoretical work has been followed up and engaged in a vigorous dialogue with empirical findings in a variety of contexts.
Public Goods and Strategic Revelation
Collective or public goods are goods whose benefits cannot be closely partitioned according to who paid how much for them—police, roads, and welfare are common examples. A very general problem is that whenever collective or public goods are to be provided through voluntary agreement, as in a marketplace or a system of taxation that includes a degree of shared decision making and voluntary compliance, public goods will be chronically undersupplied or underfinanced relative to actual individual preferences for them. There are several reasons why this is the case, but one that has received particular attention in recent years is the problem of “strategic revelation of preferences.”
To illustrate strategic revelation, suppose that two adjacent property owners are both interested in building a new fence along their common property line. One owner is really willing to pay as much as $5,000 to have the fence built, and the other owner is willing to pay as much as $3,000, for a total of $8,000. Now suppose that the actual cost of construction is only $4,000. If the first neighbor would pay $2,500, and the second $1,500, each would clearly be better off with the fence than without it. But each would also remain better off than without the fence if they split the cost evenly, or even if their shares were reversed. So each neighbor has a monetary incentive to try to convince the other that the fence is worth less to him or her than it really is. Given the incentive to feign indifference toward a new fence, to try to “bluff thy neighbor” and get the same benefit at a lower personal cost, the fence may not be built for quite a while, or at all, even though both parties may privately prefer otherwise and even though, if each could find out the other’s real valuation, they could readily reach an agreement.
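The arithmetic of the example is easy to verify; this small sketch (illustrative only) checks that every division of the $4,000 cost leaves both owners with a positive surplus relative to having no fence, which is exactly why each can afford to bluff.

```python
value_1, value_2, cost = 5000, 3000, 4000   # true valuations and fence cost

# Three possible divisions of the cost: as stated, even, and reversed.
for share_1, share_2 in [(2500, 1500), (2000, 2000), (1500, 2500)]:
    surplus_1 = value_1 - share_1   # owner 1's gain over having no fence
    surplus_2 = value_2 - share_2   # owner 2's gain over having no fence
    print(f"shares {share_1}/{share_2}: surpluses {surplus_1} and {surplus_2}")
# Both surpluses stay positive under every split, so each owner gains by
# pushing as much of the cost as possible onto the other.
```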
As the example suggests, there is always an incentive to understate one’s true demand for a collective good—whether it be a simple fence, national defense, public education, police or fire protection, environmental protection, or civic beauty—in order to reduce one’s personal tax burden. At the same time, political forces often lead to considerable instability in the support of public goods due, in part, to the fact that some goods benefit one segment of society at the expense of a different, sometimes nonoverlapping, segment. The creation and elimination of water or air pollution are both examples of such an asymmetry. And support for public goods will depend on the entrepreneurial ability of political leaders to design lumps of public goods capable of attracting enough consensus.
At one time many researchers believed that no voluntary organizational arrangement or allocation mechanism could effectively solve the problem of strategic revelation or selective nondisclosure. In fact, it was proven in the 1970s that no mechanism can possibly achieve both full revelation and full allocative efficiency at the same time; in a broad class of collective choice situations, it is always in someone’s interest to disguise or misrepresent his or her true preferences. However, the most extreme alternative to voluntary financing—centralized allocation—has also been shown often to lead to inefficient allocation, such as overinvestment relative to actual demand for water projects, mass transit, or other items.
Theoretical and experimental research has shown that allocation systems can be designed that can induce more efficient results in many situations. Although, in general, agents have incentives to understate their valuations of public goods, it is possible to design rules (including property rights) such that these incentives diminish or sometimes disappear. These mechanisms make possible the production of a more efficient amount of public goods (in the example above, the fence would be built; in another, a reservoir would be constructed). However, these mechanisms involve additional costs and usually still do not allow the complete voluntary financing of the public good. One such mechanism requires each participant to state the level of public service desired and the share of expenditures he or she is willing to bear. The incentive structure is so designed as to push the group toward unanimity with respect to the level of public service. Unanimity will only be attained at an efficient level of the service.
Systems of this type might conceivably be used in arriving at decisions concerning areawide services, such as airports or waste disposal arrangements, in which various communities have both common and conflicting interests.
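One classic member of this demand-revealing family, offered here as a hedged sketch rather than as the specific unanimity procedure described above, is the Clarke “pivot” mechanism for a yes-or-no public project: each participant reports a valuation, the project is built if reported valuations cover the cost, and a participant pays an extra tax only if his or her report alone swings the decision. Under this scheme truthful reporting is a weakly dominant strategy, though the pivot taxes themselves illustrate the additional costs such mechanisms impose.

```python
def clarke(reports, shares):
    """Clarke pivot mechanism for a binary public project.

    reports[i]: agent i's stated dollar value for the project
    shares[i]:  agent i's preassigned share of the cost (shares sum to the cost)
    Returns (build decision, list of pivot taxes).
    """
    net = [v - c for v, c in zip(reports, shares)]
    build = sum(net) >= 0
    taxes = []
    for i in range(len(net)):
        others = sum(net) - net[i]
        # Agent i pays only if the decision would differ without his report.
        pivotal = (others >= 0) != build
        taxes.append(abs(others) if pivotal else 0.0)
    return build, taxes

# The fence example: true values $5,000 and $3,000, cost $4,000 split evenly.
print(clarke([5000, 3000], [2000, 2000]))  # built, no taxes: no one is pivotal
print(clarke([5000, 1500], [2000, 2000]))  # owner 2 understates: the fence is
                                           # still built and owner 2's bill is
                                           # unchanged, so the misreport gains nothing
print(clarke([500, 3000], [2000, 2000]))   # owner 1 understates heavily: the fence
                                           # is not built and owner 1 even owes a
                                           # pivot tax, instead of the $3,000 surplus
                                           # truthful reporting would have secured
```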
Information Asymmetry and Transmission
The problem of strategic revelation and, more generally, of privately held or asymmetrically distributed information, extends to markets for private goods as well. Consider the following recent, but already classic, analysis. Suppose that each potential seller in a dispersed market for used cars knows from experience the value or quality of his or her own car (which may be thought of as an index of how long that car is likely to continue running reliably) and that this quality cannot be directly observed by nonowners. Buyers thus do not have direct information about the quality of individual cars, though they can acquire statistical information (say, from a consumer organization) about the overall proportion of “lemons” on the market. If any high-quality cars are offered, buyers would be willing, based on the statistical odds, to pay a price for a used car that is higher than the value of the lowest quality car, but lower than the price that potential sellers of highest quality cars would consider acceptable. At this price, owners of most used cars will keep their cars, but owners of lemons will be happy to sell theirs. In this analysis, only lemons will ever be put up for sale voluntarily, purchasers who might prefer to buy higher quality used cars will find none for sale, and the market prices for used cars will end up being equal to the value of lemons.
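A stylized numerical simulation (with assumed numbers: qualities uniformly distributed, and every buyer valuing a car at 1.2 times its quality, so that all trades would be mutually beneficial) makes the unraveling mechanical:

```python
# Sellers know quality; buyers know only the distribution. At any price p,
# only cars of quality below p are offered, so the average quality offered
# is p / 2, and buyers will pay at most 1.2 * (p / 2) = 0.6 * p.

price = 2000.0                              # buyers start fully optimistic
for _ in range(20):
    average_quality_offered = price / 2     # only sellers below p stay in
    price = 1.2 * average_quality_offered   # buyers revise their offer downward
print(price)   # about 0.07 after 20 rounds: the price spirals toward zero,
               # and only the worst "lemons" would ever change hands
```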
This analysis of how asymmetrical information can shape a market explains why used car buyers look for owners who “must sell”; why people pay so much attention to whether a used car is “clean,” which has the virtue of giving visible evidence in a circumstance in which the most important information is hidden; and why used car dealers, whatever their business ethics, have so much trouble maintaining or establishing a respectable reputation. Moreover, this analysis shows that the used car market (the “market for lemons”) is necessarily an inefficient one, since asymmetry of information leaves unexploited gains (mutually worthwhile deals) still unmade after trading is completed.
To be sure, the analysis assumes an extreme case, because in reality buyers can gain some information about the nature of a particular car by taking it to a mechanic for testing, and sellers operate under social constraints such as product liability and warranty requirements. Each of these factors affects the structure of information and prices in the market. Yet such remedies for imperfect information involve real economic costs, which are called transaction costs. Such costs have been the subject of intense efforts at measurement. Insofar as they arise out of the basic conditions of the market for “lemons,” transaction costs redistribute or simply amortize to some degree the inefficiency that is due to informational constraints.
These several cases illuminate a general principle about the effects of private or imperfect information: it is costly to extract information about the characteristics of individual economic agents or commodities, and the relative costs of gathering and monitoring such information strongly affect the performance and efficiency of the market.
The extension of this work on market information and, especially, its fuller integration into neoclassical price theory and principles of market design, is an important challenge to theoreticians and experimenters; it may also prove of future value to agents, managers, and regulators.
In this context, an outstanding difference between market mechanisms and central planning is decentralization versus centralization of information. Markets require relatively limited transmission of information about preferences and costs because prices are extremely efficient in encoding the information that is relevant to transactions. In fact, it has been proved that, in a steady-state economy, no informationally decentralized mechanism which has any less “transmission capacity” than the price mechanism can assure efficient resource allocation.
But for individuals to make good decisions based on such decentralized information is not always easy. An informational aspect of economic mechanisms that has been studied recently is their computational complexity. As one example, the complexity of linear programming problems and algorithms has received much attention, and a beginning has been made in measuring the complexity of competitive markets. In its more abstract aspects the study of complexity is a bridge connecting current research in economics (for example, problems of decentralized resource allocation) with research in information and computer sciences (for example, problems of distributed computing), and certain branches of pure mathematics.
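For concreteness, a toy allocation problem of the kind whose computational complexity is at issue can be posed as a linear program (hypothetical numbers; the SciPy solver is used purely for illustration):

```python
from scipy.optimize import linprog

# Choose production levels x and y to maximize profit 3x + 2y subject to
# two resource constraints:  x + y <= 10 (labor), 2x + y <= 15 (capital).
# linprog minimizes, so the objective is negated.
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [2, 1]],
                 b_ub=[10, 15],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal plan (5, 5) with profit 25
```

A centralized planner must gather all of the technology and resource data and solve such a program directly; the theoretical work described above asks how much of that informational and computational burden a price mechanism can avoid.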
Regulation and Deregulation
During the past 20 years, largely fueled by theoretical developments of the sort discussed above, the study of regulation, deregulation, and the consequences for market performance of variations in the governance of commercial exchange has accelerated. Traditional regulatory efforts to maximize the public welfare are based on shifting the incentives, or extracting the private information, that otherwise might lead firms to maximize profits at the expense of environmental or consumer health and safety. The question is to what degree those regulatory solutions are flawed because they distort the incentives of firms to minimize production costs, inviting costly efforts to circumvent regulation or requiring costly monitoring to ensure compliance. In a variety of industries—airlines, telecommunications, trucking, broadcasting—the view that public regulation is necessarily or probably the most effective remedy for inefficiencies or monopolistic domination and manipulation of prices and output has been particularly questioned. Many of the questions have arisen from studies on how the costs and incentives of firms are affected by a variety of alternative mechanisms for disciplining them, including quasicompetitive mechanisms and the availability of legal recourse to their customers.
Regulation is now known to be an inherently complicated political and legal process, influenced not only by Congress, the President, the courts, and the regulatory agencies, but also by the industries subject to regulation, their competitors, and consumers. Private interests often spend considerable resources in efforts to deflect the impact of regulation by trying to influence the regulatory agencies or the political officials who supervise them.
An example of how research advances can inform regulatory policy is found in the case of cable television. The main issue of the 1970s was what effect cable television would have on the viability of “free” over-the-air broadcasters, especially independent stations using UHF channels. The underlying cost structure of cable television, the demand for subscriptions to it, and the viewing patterns of subscribers were analyzed. On the basis of these studies, the winners and losers under various scenarios for cable development were determined. The research indicated that cable television would primarily increase the likelihood of effective competition with the three national networks, in part by improving the market position of UHF independent stations. These results influenced subsequent decisions by the Federal Communications Commission and by the several congressional oversight committees to relax regulatory constraints on cable television.
Similar kinds of research on costs, demand, and competition in the airline industry had a bearing on policy questions concerning the potential effects of deregulation on fares, safety, quality, and the availability of service to smaller communities. This research played an important role in convincing Congress that airline deregulation would not lead to a substantial reduction in service, but would lead to substantially lower fares, except for some smaller cities.
Both of the preceding examples referred to the use of research in making decisions to regulate or deregulate. Research has also played a significant role in addressing questions related to the consequences of changes in regulatory methods. For example, rapidly rising energy costs during the 1970s caused most electric power executives and regulatory officials to examine new methods for pricing electricity. The central issue was whether and how to introduce “peak-load” pricing: prices that vary over time to reflect differences in cost and demand conditions. In order to estimate the consequences of alternative pricing methods, the federal government financed several field experiments with peak-load pricing, to find out which form of pricing produced the greatest efficiency gains (prices could vary literally from moment to moment at one extreme or could change only seasonally at the other), and for which classes of customers a switch of pricing methods would yield a gain that exceeded the implementation costs.
Rational Expectations
A very active area of contemporary research deals with market decisions about the movement of capital. One common feature of such financial transactions is the inherent lack of definitive information regarding the future payoffs or values of the assets changing hands. Theories of asset pricing and risk sharing have depended greatly on identifying the expectations about future events that are held by participants in financial markets and have usually assumed that future expectations were largely based on—or biased by—the trend of past events.
An important innovation of the 1970s was a set of theories about “rational expectations.” These theories hold that individuals form expectations about the future that are based on all of the currently available information and that agents in financial markets can be presumed to use the best techniques available for drawing prospective inferences from these data. Rational expectations have the property of being “optimal” (that is, the best possible statistical forecasts) on the average. Stated most strongly, the hypothesis of rational expectations implies that financial markets are fully efficient, with the asset prices prevailing at any time taking into account all available information on possible price movements. The innovative feature of such theories is that past price movements are taken to have no particular value in predicting future prices. The only way to “beat the market” consistently under these assumptions is to secure inside information. When applied to financial markets and to those nonfinancial markets that affect the economy as a whole, these efficient-market hypotheses lead to the further conclusion that planned government action to counteract the business cycle will prove ineffective because, by the time such action is undertaken, it will have been anticipated and appropriately discounted or adjusted to by the market. If correct, which is a matter of ongoing research and debate, such a conclusion would have major implications for government economic policy.
The early testing of rational-expectations theories on actual market behavior has not supported those theories in their strongest form. For example, the stock market fluctuates excessively in relation to the underlying determinants of stock value: that is, the market is more volatile than efficient-market hypotheses permit. Related research on the maturity structure of financial markets—the distribution of times when credit instruments bearing various interest rates come due—has also rejected the strong rational-expectations hypothesis. This kind of empirical testing has been made possible by improved models of asset pricing with testable empirical implications, development of econometric methods of dealing with rational-expectations hypotheses, and the availability of excellent data on asset returns.
The failures of a strong version of rational-expectations theories at the aggregate level of stock price movements have led to intensive examination of their underlying statistical assumptions and to their revision. With respect to the stock market, innovations have included new theories of rational-expectations equilibria, which can produce random cyclical price movements without an external cause, including, in particular, theories of “speculative bubbles.” A speculative bubble exists when the market price of an asset diverges significantly from the price that would be indicated by those fundamental factors that should determine its value. But in addition to its basic value, a stock’s price is determined by the possibility of reselling it to someone else who, in turn, may expect to resell it again at an even higher price. This speculative process has been shown to be theoretically consistent with rational-expectations equilibria in which the probability of asset-value growth is high in comparison with the probability that the bubble will burst. Bubbles can develop in any economy in which the interest rate is less than the growth rate. In such economies bubbles may actually improve the allocation of resources—which is the underlying issue, not whether smart people can make easy money on the stock market.
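One standard construction of such a bubble (a stylized sketch with assumed parameters, drawn from the rational-bubble literature rather than from this chapter) has the bubble component either growing or bursting each period, with the growth rate calibrated so that its expected return equals the interest rate, keeping it consistent with rational expectations:

```python
import random

r, survive_prob = 0.03, 0.98   # interest rate; per-period survival probability
bubble, path = 1.0, []
random.seed(1)

for t in range(200):
    if random.random() < survive_prob:
        bubble *= (1 + r) / survive_prob   # grows faster than r while alive
    else:
        bubble = 0.0                       # the bubble bursts and stays at zero
    path.append(bubble)
    # Expected value next period is (1 + r) * bubble: the no-arbitrage return.

print(max(path), path[-1])   # typically explosive growth, then collapse
```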
The rational-expectations approach has been applied in areas other than financial markets. For example, the life-cycle and permanent-income theories of consumption and savings behavior imply that a person’s consumption rate is determined by income and wealth considered over the lifetime. A person’s current level of consumption thus depends not only on present income, but also on the person’s expectations concerning the present value of future income. Assuming that these expectations are rational, the life-cycle hypothesis implies that since decisions are based on overall expectations, levels of consumption in one time period have no direct impact on consumption in the next. Empirical studies at both national and household levels are largely consistent with this hypothesis of independent variations through time, but they also show that consumption is nevertheless more closely related to fluctuations in current income than the rational-expectations, life-cycle assumptions imply. This finding has shifted theoretical and empirical investigation to the effects on consumption of constraints that make it more or less difficult (or impossible) to borrow against future income.
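One standard formalization of this implication (supplied here for reference; under simplifying assumptions, rational expectations make consumption follow a random walk) is:

```latex
E_t\left[c_{t+1}\right] = c_t
```

that is, the change in consumption from one period to the next should be unforecastable from any information available at time t, including current income.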
An important related issue is the effect of government budget deficits on real interest rates. The rational-expectations hypothesis implies that taxpayers know that the government will have to impose taxes in the future to retire the debt. Since the present value of future taxes is equal to the amount of the debt incurred, it follows that government borrowing is equivalent to future taxation. Thus, according to the hypothesis, explicit taxation and deficit financing should have exactly the same effect on savings and real interest rates. However, direct empirical tests of this theory have proven inconclusive. If further research using better data on private consumption and taxpayer behavior can establish that real interest rates are affected by the method the government uses to finance its spending, a major empirical issue in present debates over government deficits will be resolved.
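The equivalence claim rests on a present-value identity (standard notation supplied here): a debt D issued today must be retired by future taxes T_t whose discounted value matches it,

```latex
D = \sum_{t=1}^{\infty} \frac{T_t}{(1+r)^{t}}
```

where r is the real interest rate. Forward-looking taxpayers therefore treat a deficit as deferred taxation of equal present value.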
The rational-expectations approach has been theoretically rigorous and innovative, has generated controversy, and has stimulated systematic empirical research. It has not survived fully intact; a number of empirical tests have produced results that either run counter to its boldest hypotheses or are inconclusive. This outcome poses both strategic and methodological challenges. The strategic challenge is to abandon an “either-or” approach to rational expectations and to press theory and empirical research forward by identifying the conditions under which rational expectations hold sway and the conditions under which other forces impinge or dominate. The methodological challenge is to make more concerted efforts to measure directly how expectations are actually formed by individuals in empirical settings. Research up to now has been largely based on market outcomes from which plausible expectations of actors are inferred. A more direct approach could be effected by using combinations of aggregate data analysis, experimental simulations, and panel studies to learn directly how individual agents form expectations and use them in decisions on financial holdings, savings, and borrowing.
Contracts
In theoretical terms, contracting relationships become important where classical market assumptions or requirements fail or are infeasible—as, for example, where the products or services to be bought or exchanged are not well defined, the risk of precommitment for uncertain exchanges is too great, transactions are nonrecurring, inputs are highly specific, or collective goods are at stake that must be shared over a long term. In short, conditions of imperfect or asymmetric information often lead to making contracts. The study of contractual relationships has become a major research front, with a diversity of methods and theoretical perspectives. It has proved to be an important area concerning the interplay of individual preferences with organizational structures, an area composed principally of analyses of information flows and the information underlying decisions.
Principal-Agent Models
One main line of work on contractual relationships is the study of the principal-agent contract. In many situations, individuals contract for goods or services that cannot be described fully prior to delivery or performance—for example, a landowner seeking the services of someone to farm his or her land, or a client seeking the services of a lawyer, doctor, or financial manager. In these cases the individual who seeks the service, called the principal, does not ordinarily find it worthwhile or feasible to specify in advance exactly what constitutes adequate performance. This contractual vagueness provides opportunities for the provider of the service, the agent, to gain an advantage under some circumstances. At the heart of principal-agent research is the careful modeling of precisely what information each of the parties has at the outset and what information each is able to obtain during operations, which is particularly important when asymmetry of information exists about some aspect of the environment in which the contract is operating.
One of the first applications of principal-agent modeling was to sharecropper contracts. This type of arrangement between landowners and tenant farmers, commonly observed in both the United States and other countries, commits the parties to share the agricultural output in fixed proportions. This pattern was difficult to explain using previous generally accepted theories of economic organization, because such a contract results in the landowner and the tenant farmer bearing the same amount of risk in regard to their total compensation from the activity. Neoclassical utility theory predicted that the risk should be borne differently, with the landowner bearing more risk than the tenant, since a bad crop (and, hence, a small share to each party) would be much more hazardous to the relatively low-income tenant farmer than to the landowner.
Under this theory, the only explanation for the observed pattern of equally shared risks seemed to be that sharecropping is not a contract voluntarily entered into between equals. In this country it was most notoriously an arrangement that became common after the Civil War between white southern landowners and poor, illiterate former slaves in a social setting that terrorized and oppressed black farm families. But the system has appeared in much less coercive settings as well, which suggested that an economic rationale aside from exploitative subjugation might also be at work. Principal-agent considerations identified such a rationale. Consider the information-monitoring difficulties that would arise in a contract in which landowners bore most of the risk. Also consider that a contract with unequal risk would give tenant farmers a return that does not vary in proportion to the variation in total output, and so there would be incentive losses in the sense that tenant farmers would not expect their return to depend greatly on their efforts. A tenant farmer obviously knows the amount of effort exerted, but a landowner typically can observe in detail the level of activity of the farmer only at rather high cost. Having modeled carefully the information-monitoring difficulties in this problem, researchers then asked what kind of contract would lead to the largest net output (total output minus the cost and effort of producing it). It has been shown that the kinds of sharecropper contracts and related organizational structures observed were quite consistent with the model’s predictions.
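The incentive logic can be made concrete with a small numerical sketch. The following Python fragment is illustrative only; the utility function, effort cost, and parameter values are our assumptions, not a model from the research described here. It shows how the tenant's chosen effort rises with the tenant's share of output, which is why a pure wage contract demands costly monitoring while a share contract does not:

    import random

    random.seed(0)
    SHOCKS = [random.gauss(1.0, 0.3) for _ in range(2_000)]  # multiplicative weather risk

    def tenant_utility(effort, share, transfer):
        # Risk-averse tenant: square-root utility over income, quadratic effort cost.
        mean_u = sum(max(share * effort * s + transfer, 0.0) ** 0.5
                     for s in SHOCKS) / len(SHOCKS)
        return mean_u - 0.5 * effort ** 2

    def best_effort(share, transfer):
        # The tenant privately picks the effort that maximizes expected utility.
        grid = [e / 100 for e in range(1, 201)]
        return max(grid, key=lambda e: tenant_utility(e, share, transfer))

    contracts = [
        ("fixed wage (share = 0)  ", 0.0, 1.0),   # landowner bears all output risk
        ("sharecrop  (share = 1/2)", 0.5, 0.0),   # risk and incentives are split
        ("fixed rent (share = 1)  ", 1.0, -0.5),  # tenant bears all output risk
    ]
    for label, share, transfer in contracts:
        print(f"{label}: tenant's chosen effort = {best_effort(share, transfer):.2f}")

Under the wage contract the tenant's pay does not depend on effort at all, so effort collapses unless the landowner monitors; under fixed rent effort is highest but so is the low-income tenant's exposure to a bad crop. The intermediate share contract trades these forces off, which is the rationale the principal-agent analysis identified.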
The sharecropper problem is an example in which the principal-agent paradigm is helpful in understanding why a particular type of contract evolved. The analysis discloses that, depending on the assets available to the two parties, the possibilities of close monitoring, and other specifiable factors, sharecropping may be a very efficient organizational form relative to the alternatives (such as land rents or wage labor) developed across centuries of agriculture. There are many problems of organization for which there simply do not exist centuries of experience, and the careful analysis of such problems, using the principal-agent paradigm, allows one to identify, at least in crude form, what kinds of contractual structures may be optimal. For example, many questions of public policy concern the kinds of contracts and transactions firms should be allowed to engage in, and these problems can only be analyzed with some theory in mind as to why firms want to use particular contractual arrangements. A specific example, addressed in recent years, is resale price maintenance, in which a firm sells goods wholesale to a retailer but controls the price at which they can be resold to the retail customer. This practice has been both explicitly legal and illegal at various times and places. The question of whether it is in the customer’s interest that such practices be prohibited cannot be answered without some explanation of why the original firm would want to control the retail price. Consistent explanations were lacking until the advent of principal-agent models, which define the business incentives and facilitate careful consideration of the conditions under which such practices benefit or cost the ultimate consumer.
In a related matter, the question of whether it is in the general interest to encourage or discourage mergers between large firms requires an understanding of the purpose that mergers might serve. In the case of a vertical combination (that is, a merger between a firm and one of its suppliers), it has long been asked what advantage such a merger has over a sophisticated contract between the two firms. A focus on the difficulty of monitoring contracts between separate firms, in contrast to internal processes in a single (merged) one, has led to a better understanding of the purposes of vertical combinations. This understanding can provide the groundwork for more informed policy regarding such business practices.
One striking success of the principal-agent model is in managerial accounting, which deals primarily with how firms use accounting information for internal operations, management, and compensation (as opposed to financial accounting, which has to do with the use of a firm’s accounting data by people outside the firm). The principal-agent model has revolutionized the way managerial accounting is taught in business schools; to a large extent, the principal-agent model has become managerial accounting.
The opportunities for further extension and application of the principal-agent model are considerable. It has begun to be used extensively in international trade problems to analyze relationships between industries in various countries and the governments that must set rules for international competition with only partial information as to the internal structure of the industries. There have been initial attempts to study the evolution of organizations over time and to understand what forms of principal-agent contracts best allow organizations to have the flexibility to respond to changes in their environments. And many of the techniques of the principal-agent model are applicable to bargaining and arbitration problems in which an arbitrator is only partially aware of the costs and benefits of various alternatives.
Bargaining, Negotiation, and Repeated Interaction
A bargaining situation is one in which agents make offers to one another and accept or reject them in the context of agreed-upon organizational rules and understandings. Some of the variables involved in bargaining are the level of agents’ uncertainty about the value of an offer, how impatient they are to come to an agreement, and their personal attributes or preferences. Much of the work in this area relates to information problems in markets, discussed above.
The broad outlines of a theory of bargaining are becoming clear. For example, game theory—the mathematical analysis of interdependent decision situations involving two or more agents whose interests partly conflict—indicates that if all information is known to all parties, the first offer made will always be acceptable and accepted, and any trade that is efficient will be made. If all agents know the value of all offers, there is no point in bluffing or holding out. In this full-information case, the consequences of different kinds of rules for making offers can be easily calculated, which sets of rules give which players an advantage can be determined, and players’ preferences for different sets of rules can then be predicted.
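A standard way to make the full-information case concrete is the alternating-offers model, in which the proposer's first offer leaves the responder exactly the value of continuing, and is accepted at once. The discount factors in this Python sketch are invented for illustration; the closed-form split is the familiar textbook result, not a formula taken from the research reviewed here:

    def rubinstein_split(d1, d2):
        """First-mover's share when both parties know everything.

        The responder accepts anything at least as good as her
        continuation value, so the very first offer is accepted: the
        proposer keeps x = (1 - d2) / (1 - d1 * d2) and offers the rest.
        """
        x = (1 - d2) / (1 - d1 * d2)
        return x, 1 - x

    for d1, d2 in [(0.9, 0.9), (0.99, 0.9), (0.9, 0.99)]:
        x, y = rubinstein_split(d1, d2)
        print(f"proposer patience {d1}, responder patience {d2}: split {x:.3f} / {y:.3f}")

Note how the division shifts toward the more patient party: an example of calculating which sets of rules (here, who proposes, and how costly delay is to each side) give which players an advantage.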
In contrast, when there is private or proprietary information, bargaining takes longer because the agents’ efforts to elicit information about one another become part of the action. In this case, bluffing and holding out—or believing that the other player is doing so—may be a useful strategy, but it may also prevent the completion of efficient transactions. Recent theoretical results yield models of sequences of offers in private-information situations that make it possible to estimate the value to agents of alternative rules. It has also been possible to account mathematically for the fact that when a bargaining situation occurs repeatedly, bargainers may develop reputations for toughness that affect outcomes.
Applications of this new research to breakdowns in bargaining relationships, such as in litigation, wars, and strike arbitration, are tentative but promising. Compulsory arbitration, for example, generally tends to reduce the cost of failing to reach a bilateral agreement and may have a chilling effect on bargaining. Final-offer arbitration schemes—ones in which, as in professional baseball disputes, the arbitrator is required to choose the offer of one side or the other rather than produce a compromise—create a powerful incentive for both sides to produce realistic offers. Further insights into these systems can be generated by the study of histories of union negotiations from the standpoint of contract theory, bargaining theory, and alternative theories of behavior in structured, repetitive situations.
While the early formulations of principal-agent theory dealt mainly with static situations, theorists have recently been able to take dynamics explicitly into account, that is, to model how an individual, when deciding how to act at a given moment, anticipates future interactions as well as immediate payoffs. For example, even if an individual has a short-term incentive to cheat on a contract, he or she may choose not to do so because the long-term losses from cheating outweigh the short-term gains. Whether this is so depends in turn on how the cheated party reacts to cheating. These kinds of considerations have led to the formulation of a theory of reciprocity (that is, contracting in cases of repeated interaction).
One key to this theory centers on the capacity of contracting parties to detect and punish cheaters. If cheating is easy to discover and is regularly punished, the incentive to cheat will lessen; occasional repetition of the punishment may secure high levels of compliance. But whether or not cheating is punished depends on whether contracting parties are committed to carrying out punishment even if it turns out that punishing the violator is very costly. The theory has important implications for such problems as nuclear deterrence. If a nuclear power were to use a few nuclear weapons to obtain some specific foreign policy goal while retaining the capacity to initiate a general nuclear war, other powers would find the initiation of punishment very costly, since it would involve the risk of beginning a general war with consequent massive nuclear destruction. No matter what the powers promise themselves and others in advance, a potential cheater still can reasonably wonder whether nuclear threats would actually be carried out. Nevertheless, simulations indicate that even if the chance that swift and severe punishment might be carried out is relatively small, that chance is likely to sustain deterrence.
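The deterrence logic can be written compactly; the notation is ours, not the report's. Let g be the one-period gain from cheating, \ell the per-period loss once punishment begins, \delta the discount factor on future periods, and p the probability that punishment is actually carried out. Cheating is deterred whenever

    g < p \cdot \frac{\delta}{1-\delta} \, \ell

Because the factor \delta/(1-\delta) grows without bound as \delta approaches 1, even a small probability p of swift and severe punishment can satisfy the inequality for parties that weight the future heavily, which is the simulation result noted above.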
The notion of reputation or credibility is also central to the theory of repeated interactions. People in such interactions are sensitive not only to the future and their understanding of how long the relationship may continue, but also to the existence of private information. For example, in an international disarmament negotiation, both parties usually feign a large degree of indifference as to whether or not an agreement is reached. But such feigned indifference gains little advantage in the negotiations unless the other party believes it to be real—which it ordinarily will not. When a party with a reputation for feigning indifference refrains from expressing it, however, that restraint is highly credible.
Further progress on theories of repeated interaction and bargaining and negotiating sequences will require developments in several techniques. The mathematical skills required for their fullest development are not yet widespread, and the underlying extensions of game and bargaining theories are only beginning to be well understood. Moreover, many techniques of field observation and statistical analysis appropriate for work in this area are not adequately developed. Finally, laboratory methods that promise to be useful for research in this area have to be refined. In particular, the training of researchers in the empirical research design, mathematical methods, and statistical techniques requisite for experimentation has to be expanded, and links to researchers in cognition, language, and artificial intelligence have to be strengthened. A major avenue of future research lies in the application of computer technology, which offers the opportunity to conduct more thorough interactive experiments. The development of standardized software and operating systems would decrease the costs of performing such experiments and generate a large body of empirical data free from the effects of uncontrollably differing experimental designs.
Jobs, Wages, and Careers
The study of jobs and labor markets incorporates virtually all the issues that arise with respect to individual and collective choice: imperfect information, varying incentives and expectations, problems of negotiating and enforcing contractual agreements, and pressures of organizational inertia and change. Theoretical and empirical study has focused on the effects of the business cycle, the protocols used by individuals in preparing and searching for jobs, the practices of employers in recruiting workers, implicit understandings between employers and workers, demographic differences in earnings and job assignments, migration, and changes in the content of work. When viewed closely, employment contracts turn out to be an idiosyncratic and imperfect match between workers’ abilities and employers’ needs. Complex systems of formal and informal labor-management and labor-capital relations develop within organizations as ways of trying to ensure that future actions and events will be consonant with earlier agreements and decisions. This kind of study has been cross-pollinating with studies of other kinds of long-term agreements whose implementation is a matter of subsequent bargaining, monitoring, and exchange, heavily influenced by wider political circumstances, such as the contracts that electrical utilities make for fuel deliveries or that owners and tenants make for rental housing.
The depth and breadth of research relevant to jobs, wages, and careers and the extent of public interest in this set of topics lend the area a perennial interdisciplinary vitality. At the same time, a convergence of research interest and political imperative suggests the usefulness of a sustained effort to generate substantially better and more accessible collections of observational data on these topics.
Unemployment
Few topics in the behavioral and social sciences have attracted as much interest as unemployment. The subject is complex, not least because the term covers a number of very different ways in which unemployment may come about, and a distinct line of research corresponds to each of them.
First is cyclical unemployment, in which jobs disappear and reappear due to overall swings in the current or anticipated profitability of production, in short, due to the business cycle. Second is frictional unemployment, in which workers, after either voluntarily quitting or being fired from a job, seek different jobs and therefore are unemployed while the job search takes place. Third is structural unemployment, in which jobs disappear or do not exist because of such features as declines in the demand for particular kinds of labor, sometimes resulting from technological change; gaps between the skill demands of jobs and the skills of available workers; employers’ negative beliefs about some workers’ capacities for particular jobs or some forms of prejudice, which tend to produce an unemployable underclass; and other factors that create a long-term gap between the demand for and supply of particular workers. The degree to which cyclical, frictional, or structural factors predominate in a given level of unemployment has important implications for the type and effectiveness of programs or policies that would be appropriate for reducing unemployment.
Recent research with aggregate data on the business cycle and its temporal covariates indicates that overall wage rates respond only slowly to swings in productivity associated with downturns in the business cycle. Most of the adjustment to such downturns takes the form of layoffs and reductions in hours worked rather than lowering of wages. Some studies of contract structures for employment, marketing, sales, and delivery, especially focused on the lags and staggering of price adjustments, suggest that the reason output and employment often are highly sensitive to unanticipated shifts in demand is what is called price inertia—that prices, like wages, respond only slowly to demand shifts, possibly because of the costs associated with making changes in prices. This hypothesis certainly is consistent with the fact that both overall money prices and wage levels are relatively “sticky” and unemployment is highly sensitive to the business cycle. However, as yet there is no theoretical explanation for price and wage inertia. Clearly, arriving at such an explanation should have high priority.
Better data on contract terms—both explicit and implicit—and on the details of price setting by firms and industries are essential for increased understanding of the dynamics of wage adjustments. Such data would also increase the ability to assess the effectiveness of economy-wide strategies for damping swings in unemployment and planning compensation schemes and layoff policies. Theoretical model-building is necessary in order to find ways to capture and test underlying ideas about how business cycles work. This kind of knowledge is also critical in understanding the degree to which both employers and workers profit and lose from unanticipated swings in the economy, and how these matters affect the commitment and well-being of managers and workers. Knowledge about these labor-market dynamics may help both employers and employees develop principles to guard against disastrous losses in difficult times and yet provide adequate incentives for performance.
A separate realm of research focuses on frictional unemployment, that due to between-job searches. Survey data on manufacturing jobs and on households reveal that average spells of unemployment are short, with the highest turnover rates found among young and secondary workers within families. This finding tends to support the view that frictional unemployment is rather common. Policies aimed at improving the efficiency of search procedures and job information could reduce this kind of unemployment, although such positive effects might also increase the attractiveness of undertaking searches for better job matches, thereby increasing frictional unemployment.
The same surveys show that, despite the shortness of the average period of unemployment, most of the total days of unemployment are accounted for by a minority of individuals who are out of work for prolonged periods each year: this finding indicates that some structural factors are responsible. Although there has been little empirical work on structural unemployment, a striking finding of theoretical studies on job searching and recruiting is that relatively minor mistakes by individuals in assessing the job market can be expected to result in excessively long periods of unemployment for some workers.
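The sensitivity of unemployment spells to small search mistakes is easy to demonstrate. In the stylized sketch below (our construction, with an invented offer distribution), a searcher receives one wage offer per period and accepts any offer at or above a reservation wage, so the expected spell length is geometric:

    import random

    random.seed(1)

    def expected_spell(res_wage, mu=1.0, sigma=0.1, n=100_000):
        # Offers ~ Normal(mu, sigma), one per period; any offer at or above
        # the reservation wage is accepted, so the expected spell length is
        # 1 / P(offer >= reservation wage).
        accepted = sum(1 for _ in range(n) if random.gauss(mu, sigma) >= res_wage)
        return n / accepted if accepted else float("inf")

    # Holding out for 5 percent above the true market mean lengthens the
    # expected spell by more than half; holding out for 10 percent more
    # than triples it.
    for res in (1.00, 1.05, 1.10):
        print(f"reservation wage {res:.2f}: expected spell = {expected_spell(res):4.1f} periods")

The parameters are illustrative, but the nonlinearity is the point: a modest overestimate of what the market will pay translates into a disproportionately long expected spell of unemployment.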
Research on unemployment has been limited by the scarcity of detailed and reliable data on search protocols and decision behavior. Additional data on these topics from the United States and other industrial economies (see Chapter 4) could be used to investigate incentives associated with income-security programs. Such data would make it possible to assess how effective—if at all—providing better job-information mechanisms or building different incentives into unemployment insurance programs or labor contracts might be in reducing unemployment.
Implicit Labor Contracts
The search for good matches between workers’ abilities and firms’ needs is costly for both employers and potential employees. It would be very costly—if not impossible—for employers to try to improve the quality of such matches by supervising workers in every detail of their jobs or attempting to measure their productivity more precisely. As an alternative method, employment contracts explicitly or, more often, implicitly provide incentives for workers to be loyal and diligent and to acquire the specific job-related skills that can improve their performance. However, these contracts are imperfect, and monitoring and enforcing them can be costly. Examination of the incentive and information properties of such contracts as well as their relation to the legal system and the actual behavior of employers and employees shows that they often have unintended consequences, some desirable and some not.
Some significant findings have emerged from the analysis of long-term employment histories that have become available from research projects that were initiated in the 1960s. In contrast to the stereotype of “Americans on the move,” it has been found that following an initial period of considerable job mobility in their 20s, most workers settle into stable employment patterns and change jobs infrequently after age 30. (There is far more job mobility in the United States than in European countries and Japan.) Other data show that the ratio of earned wages to measured productivity rises with age. One possible explanation for this steady age-related growth in wages is that workers acquire skills on the job that are not measured by productivity data, and apparent wage premiums for simple seniority actually reflect payment for these unmeasured, experiential skills. An alternative explanation is that high wages for older workers hold out promises of future (delayed) compensation to younger workers in return for staying with the firm, thereby strongly discouraging younger workers from looking for jobs elsewhere, which would require employers to engage in costly searches for replacements.
Neither of these proposed explanations has been definitively confirmed or rejected, and the truth may turn out to be some mixture of the two. But the alternative theories have very different implications for such policies as anti-layoff legislation, minimum wage laws, and vesting of pensions. For example, neoclassical economic theory suggests that a minimum wage will exclude from short-term employment markets those workers—particularly young ones—whose productivity is below that minimum. The resulting reduction in the competitive pool of workers would increase the wages of somewhat more skilled workers, who would therefore support minimum wage legislation. But more significantly, in labor markets that provide long-term incentives for workers’ diligence and learning (in the form of delayed compensation such as premiums for seniority), a minimum wage may serve to discourage long-term employment and related skill development among just those workers whose immediate and potential long-term productivity are both above the minimum—because the higher wages that need to be offered at the outset must be financed by reducing the delayed wage premium for long-term employment. The resulting wage profile from the minimum on up may not increase steeply enough with seniority to keep many people from moving around among employers, preventing them from learning the unique job-specific skills that permit highest productivity.
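The deferred-compensation mechanism can be illustrated with invented numbers. In the Python sketch below (entirely our construction), a contract pays low wages early and a seniority premium later; imposing a wage floor while holding the employer's present-value labor cost fixed forces the premium to shrink, flattening the profile that rewards staying:

    def profile(start, slope, T=10):
        # Wage in each of T periods of tenure.
        return [start + slope * t for t in range(T)]

    def present_value(wages, r=0.05):
        return sum(w / (1 + r) ** t for t, w in enumerate(wages))

    # Deferred-pay contract: low early wages, rising premium for seniority.
    deferred = profile(start=0.7, slope=0.08)
    target_pv = present_value(deferred)

    # Impose a wage floor of 0.9 in every period while holding the employer's
    # total present-value labor cost fixed: the seniority premium must shrink.
    floor, slope = 0.9, 0.08
    while present_value(profile(floor, slope)) > target_pv and slope > 0:
        slope -= 0.001

    print("deferred :", " ".join(f"{w:.2f}" for w in deferred))
    print("flattened:", " ".join(f"{w:.2f}" for w in profile(floor, slope)))
    print(f"seniority premium per period falls from 0.080 to {slope:.3f}")

With these illustrative parameters the seniority slope falls by more than half, weakening precisely the incentive for long-term attachment and on-the-job skill acquisition that the text describes.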
Similar issues of implicit contracts involving compensation also affect the question of how to stimulate corporate managers to achieve high levels of productivity. Research using the theory of implicit contracts and incentives has shed some light on why the very highest corporate executives receive much higher salaries—two to three times as much—than those not far below them in the corporate hierarchy. These differences are not consistent with explanations that match compensation with current or even future expected productivity, but they can be understood as delayed compensation for past productivity.
Although there are some existing data that can illuminate competing theories of job markets and the productivity of workers and executives, knowledge would be greatly improved by longitudinal data on employee behavior as a function of the initial contract of employment and related benefits, customs, and expectations in the employer-employee relationship.
Job Segregation and the Gender Wage Gap
The social issue of job discrimination by race or gender is of long-standing interest. Persistent gaps in wages and job status are found between white males and other demographic groups. According to the neoclassical economic theory of this labor-market phenomenon, these gaps reflect intergroup differences in average individual experience and investment in human capital (education and training). According to the theory, firms have little or no incentive to discriminate for noneconomic reasons because they would be punished by economic competition and lose profits if they did so. Research on exchange under asymmetric information provides a different perspective on the observed phenomena. If firms must forecast the productivity of potential employees from readily observable characteristics, then any historical differences in average productivity due to different levels of investment in human capital by race or gender, to negative effects of irrational prejudice, or to active hostility in the workplace may lead firms to use criteria of race or gender for screening and job assignment. The motivation for such discriminatory behavior, even if it is illegal, may be economically rational from the firm’s point of view in the absence of specific information on the skills of the individuals who face such discrimination—unless, of course, the dangers of expensive litigation are sufficiently large. Consequently, the incentives for hiring or promoting disadvantaged people may well be slight, thereby reducing even further the opportunities for people from disadvantaged groups to improve their skills. Such a pattern results in a self-fulfilling prophecy: self-perpetuating discrimination.
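The screening mechanism can be sketched in a few lines. In this illustrative Python fragment (our construction, with invented parameters), an employer sees one noisy test score and forms a Bayesian estimate of productivity by blending the score with the group average; two individuals with identical scores can then receive different decisions:

    def posterior_mean(signal, prior_mean, prior_var=0.04, noise_var=0.04):
        # Bayesian estimate of a worker's productivity from one noisy test
        # score: a precision-weighted blend of the group prior and the signal.
        w = prior_var / (prior_var + noise_var)   # weight placed on the signal
        return w * signal + (1 - w) * prior_mean

    THRESHOLD = 0.55
    signal = 0.60                                 # identical test score

    for group, prior in [("group A", 0.60), ("group B", 0.45)]:
        est = posterior_mean(signal, prior)
        print(f"{group}: estimated productivity {est:.3f} -> "
              f"{'hire' if est >= THRESHOLD else 'reject'}")

The employer's rule is statistically "rational" given the priors, yet it denies the group-B applicant the job, and with it the chance to build the record that would revise the prior. That is the self-fulfilling character of the pattern described above.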
In recent years, research on discriminatory processes concerning where people work and how much they are paid has advanced more rapidly with respect to gender differences than racial or ethnic differences. The gender distribution of the paid labor force in the United States has shifted substantially in the last century. During the years of heavy industrialization (1880–1940) men constituted more than four-fifths of the total labor force, and only one-tenth of all married women worked outside the home. With the rapid expansion of service and white-collar employment after 1945, the proportion of women employed increased steadily. Women now constitute 43 percent of the paid labor force, and more than 50 percent of all married women are employed. Moreover, the character of women’s working careers is becoming more like men’s in two major ways: women are working many more years than before in full-time rather than part-time jobs, and they are working uninterruptedly, in spite of childbearing.
Despite these changes in women’s participation in the paid labor force, their average earnings have remained remarkably stable at about 60 percent of men’s for full-time, year-round workers. This wage gap is due largely to the segregation of labor by sex: that is, men tend to be concentrated in occupations that receive higher wages, and, within occupations, at jobs that are at the higher end of the wage scale for a particular occupation. Like the wage gap, sex segregation across occupations has been remarkably persistent. Its level in the United States did not change much from the turn of the century until the 1970s, even though particular occupations came and went and others changed radically in composition from predominantly male to predominantly female or vice versa. Of 503 occupational categories in the 1980 U.S. census, 275 exhibited a sex ratio of four to one or more, and about half of all workers were employed in such relatively segregated jobs.
Accounting for wage differences and job segregation in ways that withstand the test of empirical evidence has proved a recalcitrant problem. One theory tries to account for the disproportionate number of women in lower paying occupations on the basis of personal characteristics and rational choice: in anticipation that they will interrupt their employment to bring up a family, women make different occupational choices from men, selecting occupations that qualify them for jobs they can leave and reenter easily. Since men do not interrupt employment to bring up a family, these “easy-come, easy-go” jobs would be highly segregated, and since experience premiums would not be sought, the jobs would carry low wages. However, research has seriously challenged this trait-based theory. For example, empirical studies show that women whose working careers were interrupted are no more likely than women with continuous careers to be concentrated in heavily female-dominated occupations. In addition, the negative effect on earnings of time out of the labor force appears to be no different for male-dominated occupations than for other occupations. Moreover, the rates at which the earnings of women in predominantly female occupations increase with experience do not differ from those for women in less segregated occupations. In short, women in male-dominated occupations earn more than women of measurably equivalent talent and training in female-dominated ones and experience no greater tendency to be “punished” for family interruptions. Thus, whatever their family plans, it does not appear economically rational for women freely to choose careers in “women’s work.”
An alternative theory about why women enter segregated occupations, also based on personal characteristics, is that sex role socialization leads women strongly to prefer certain occupations for reasons unrelated to economic rationality and therefore to choose the training and education appropriate for them. However, research evidence has also accumulated against this theory. Young men and women both display considerable movement within the labor force, including a moderate amount of mobility across sex-typed occupations. Women have, in fact, proven quite responsive to newly available opportunities at increased wage rates for at least 100 years, including the dramatic increase of women clerical workers relative to men from 1880 to 1900; the flood of women into previously male jobs during World War II; the rapid movement of black women out of domestic service and into clerical work after 1950; and the more recent sharp increases in the proportions and numbers of women becoming doctors, lawyers, and coal miners. The evidence is that women’s aspirations are shaped by their expectations about what kinds of occupations are accessible to them, rather than by fixed preferences for particular jobs.
Since the actual preferences of women have been shown to play a limited role in explaining their segregation into lower paying occupations, research attention has turned to the ways in which other influences affect the assignment of women to “women’s work.” For example, one line of accumulating research has investigated the role of people who make available information about various occupations, especially about their entrance requirements; the research shows that such information is presented differently to male and female students, from preschool to vocational training programs. Other research focuses on the detailed behavior of employers who characteristically do not hire women into men’s jobs and vice versa, or who steer prospective applicants into gender-typed openings, and of unions, particularly those that have histories of excluding women or of not representing demands for pregnancy leave and other benefits that favor women, while supporting seniority benefits and other demands that favor men.
Analytic attention has also focused on the effects of husbands’ discouraging their wives from job training or employment that would modify their regular home activities, or persuading their wives to leave their jobs when the husbands relocate. These effects may be reinforced by differential job ladders and training programs for men’s and women’s positions in firms, firm-wide job evaluation systems that underestimate the training and conditions of jobs held primarily by women, and institutionalized requirements in many jobs held primarily by men to work overtime or relocate at the employer’s bidding. The wage gap may itself perpetuate a pattern of household decision making in which a husband’s occupational opportunities and choices come first because they are more critical for household income. The lower valuation accorded to women’s work by the structure of wage rates becomes, in effect, a self-fulfilling prophecy.
What is especially promising in many of these new lines of research is the focus on developing longitudinal or process data, which permit researchers to discriminate between competing theories that may all be inferentially consistent with data on outcomes alone.
Technology, Migration, and Mobility
The organization of work and the evolution of working careers have long been an active and productive research area. One area of current controversy concerns how present trends of technological change affect the quality of workplaces and career opportunities. Do these changes increase or diminish skills, responsibilities, and commensurate rewards on the job? A second area of controversy surrounds the factors that shape the organization of work. One traditional view is that, when technologies change, firms adopt the work arrangements that best achieve administrative and technical efficiency, generally subject to a degree of bureaucratic lag. However, recent studies of technological change document considerable discretion in how specific jobs are organized, indicating that imperatives other than technical efficiency may systematically shape work redesign. Some researchers suggest that detailed distinctions among jobs serve to reduce the importance of high skill levels, diminishing workers’ control of job activities and dividing various subgroups of the labor force. Moreover, there is evidence that powerful people in organizations often redefine work roles to suit their idiosyncratic interests and abilities, not necessarily those of the firm.
Yet another controversial area of research involves the link between schooling and career. One view is that employers define jobs in terms of productivity requirements and then use educational credentials to decide whether individuals are likely to satisfy those requirements. An opposing view reverses the causality: organizations define jobs and career paths around their workers’ educational attainments. Research has yet to provide evidence for selecting between these two views.
Recent analyses of labor markets focus attention both on career movement within organizations, including the manner in which vacancies occur at the top of an organizational ladder and move down it as a chain of promotions, and on movement among firms. Different determinants and consequences are associated with these two types of career mobility. For example, women, racial and ethnic minorities, the young, and the old are thought by some to move from firm to firm within the “secondary labor market,” where skills are nonspecific, job tenure is precarious, and few career advantages are obtained by switching firms. In this view, the “primary labor market” includes both career movement within firms (internal labor markets) and movement among high-skilled jobs across firms.
The effects of immigration and of international competition on domestic work organization, including wage rates, are another area of substantial interest. Because of substantial reductions in fertility in the United States in recent decades, immigration and regional migration have become major components of population change. Neoclassical economic theory, which viewed migration as an equilibrating response to differences in prices and wages, has proved inadequate to explain the observed population flows. Recent findings are that expectations about future earnings and changes in nonearnings income, rather than current interregional wage-rate differentials, induce migration, which in turn induces new business investment. Furthermore, research shows that areas with high inmigration also experience high outmigration, and that regional differences in income and unemployment change slowly despite high levels of place-to-place migration. Studies of migration decisions at the household level find that they are based on surprisingly little information. Better longitudinal data on expectations, perceived alternatives, and actual mobility decisions of households are needed to refine and detail the dynamic processes of expectation formation and decision making that influence migration. Beyond the household level, data on the migration-related behavior of firms, the role of regional government competition for private investment and development, and the mutual effects between migration flows and the operation of local labor and housing markets are all sparse at this time.
At the international level, recent research has established that migration not only adds a new set of workers to the existing production organization, but also changes the way in which work itself is organized. Immigrants, especially illegal ones, are much less likely to be employed in large factories than indigenous workers; they work in small, highly mobile shops, at home under a piece-rate system, as sharecroppers in agriculture, or as itinerant wage laborers under the gang system. Old industries like garment and footwear production and new ones like electronics have become increasingly “informalized” through their reliance on immigrant labor. These changes in the organization of work as a result of international migration are of fundamental theoretical as well as policy significance. Research is badly needed to assess the results of the new federal legislation on illegal immigrants in the context of labor markets that rely on immigrant labor.
New Sources of Data on Jobs and Careers
A central limitation facing researchers interested in explaining labor market phenomena is the lack of longitudinal data (on individual work histories, work arrangements within firms, and the ways both change over time) detailed enough to distinguish among competing theories of employment contracting, job search, employment, job design, career patterning, and wage allocation. At present, sizable research investments in dynamic data bases are largely devoted to samples based on households and families.
An example that illustrates the rich returns of these kinds of data is the finding on poverty from the Panel Study of Income Dynamics (PSID), which has been collecting data since 1968 on 5,000 families, chosen to be representative of the national population. (Prior to this research investment, the empirical information base for understanding the well-being of the population came almost entirely from cross-sectional or “freeze-frame” studies, which gathered information from independent samples at one or more points in time.) The PSID results confirmed the cross-sectional finding that, in each year over a 10-year period, about 7 percent of people were in families whose incomes fell below the poverty line. But the PSID data showed that nearly 25 percent of the sample fell below that line in at least 1 of the 10 years, approximately 5 percent during 5 or more years, and approximately 3 percent during 8 or more years.
The PSID results made it possible to distinguish between people who are temporarily poor and those who are persistently poor and to assess the size and character of each of these populations. More generally, the PSID data showed that only about half of the best-off Americans were still best-off 7 years later, and only about half of the worst-off were still worst-off 7 years later. The major determinants of such changes in income level were transitions in marital status and similar familial events. Such a precise—and unexpected—finding of the relatively large degree of movement that exists within the seemingly stable American income distribution could not have arisen from earlier cross-sectional statistics of income distribution. Now, of course, on the basis of the new knowledge, new cross-sectional studies may be developed to confirm and extend such findings with retrospective queries.
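The arithmetic relationship between the annual and the ten-year figures is worth seeing directly. The Monte Carlo sketch below is our illustration, not the PSID design: the two subpopulations and their poverty probabilities are invented solely to show how an annual cross-sectional rate near 7 percent can coexist with a much larger ever-poor share and a small persistently poor core:

    import random

    random.seed(42)
    N, YEARS = 100_000, 10

    # Two illustrative subpopulations (parameters invented to make the point):
    # a small chronically poor group and a large group only transiently at risk.
    ever = persist5 = persist8 = 0
    person_years_poor = 0
    for _ in range(N):
        p = 0.85 if random.random() < 0.04 else 0.03
        years_poor = sum(random.random() < p for _ in range(YEARS))
        person_years_poor += years_poor
        ever += years_poor >= 1
        persist5 += years_poor >= 5
        persist8 += years_poor >= 8

    print(f"poor in an average single year: {person_years_poor / (N * YEARS):.1%}")
    print(f"poor in at least 1 of 10 years: {ever / N:.1%}")
    print(f"poor in 5 or more years:        {persist5 / N:.1%}")
    print(f"poor in 8 or more years:        {persist8 / N:.1%}")

Only a panel that follows the same families over time can separate such subpopulations; a cross-section observes nothing more than the single-year rate.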
To achieve a similarly enriched picture of job and career dynamics in the context of the employers and firms that produce income and invest in new technologies—in contrast with the households that consume them—requires longitudinal research data comparable in scale. Longitudinal data for large representative samples of firms, jobs, and workers would allow researchers to see how external forces on organizations, including business cycles and attitudinal and technological changes, interact with factors inside organizations, including managerial structures, promotion practices, and compositions of work forces, to shape hiring and recruitment, the design of jobs, and career outcomes. In addition, data could be obtained on decision making that affects employment in the face of sometimes rapidly changing technologies, on contractual arrangements with employees, suppliers, and customers of firms, and on organizational perceptions, politics, and cultures in firms over time. Such data would permit much sharper empirical tests than have so far been carried out concerning theories of job segregation, wage inequalities, unemployment, productivity, and organizational dynamics and might lead to completely new knowledge about the nature of work and organizations.
A second source of potentially rich data lies in historical knowledge on such matters as how the composition of the labor force responded to past immigration and how the nature of work was transformed by changing technologies and organizational structures in earlier periods. While good longitudinal and comparative data on work arrangements in the past are hard to find, researchers have recently identified several large-scale sources that could significantly enrich historical understanding of work and careers. Several large corporations have maintained detailed data describing employees’ job histories over many decades: these records, which are classed as inactive and are no longer of any practical value to the companies, represent a largely untapped source of data for researchers to assess how changing technologies, organizational structures, and labor market conditions affected job design and workers’ career outcomes in an earlier era.
Such archives have potential value not only for the study of organizational change and internal labor markets but also for the study of industrial science and technology (discussed in Chapter 4). Some work has already begun in these directions. This work would be well served by a general initiative to develop joint public/private sponsorship of data development and analysis projects to convert an appropriate sample of archives of major U.S. corporations into social and historical research centers or repositories.
During the 1930s the U.S. Employment Service began gathering data on the staffing patterns, promotion ladders, and job requirements of various establishments. Until the program was eliminated several years ago, these data were collected throughout the country (for some organizations, at more than one time) in order to prepare the Dictionary of Occupational Titles and other government publications. Microfilm or original documents exist in Washington, D.C., and in the program’s central repository in Raleigh, North Carolina. Machine-readable versions of these files would provide researchers with invaluable longitudinal and comparative information on the organization of work and opportunity in American industry over the last half century. Researchers have already converted a very small subset of these data (for example, California enterprises analyzed since 1959) into machine-readable format, developing coding procedures that could be used in a larger effort.
Other data sources in government records might also be useful for studies of work and careers. However, the need to protect the confidentiality of individual respondents places important restrictions on data about organizations that are in the files of the Census Bureau, the Internal Revenue Service, and the Bureau of Labor Statistics, among other agencies. And because of the growth of extensive privately held computer records regarding individuals, households, and firms—such as files maintained by business information companies, direct-mail firms, political action committees, health insurance consortia, credit bureaus, and the like—and the availability of increasingly sophisticated records-linkage software and fast, powerful computer workstations and supercomputers, federal agencies have become even less willing to make edited data accessible to researchers, even after identifying information has been deleted. In fact, there is no instance on record of a qualified scientific researcher using such records-linkage possibilities to identify individuals, much less using such information inimically; however, researchers do share the concern that less benign interests may exploit public-access files in ways that would not be acceptable either to those who have disclosed the information or to the agencies that have ultimate fiduciary responsibility for it. Given this situation, it is worthwhile to explore arrangements that might provide access to disaggregated data files (microdata) on firms and other kinds of organization without unacceptable risks to confidentiality. For example, contractual arrangements for temporary use of screened files by qualified researchers, providing penalties against breaches of confidentiality similar to those that bind government employees, may be an appropriate and acceptable solution.
Questions about individual career patterns that cannot be easily addressed by organization-based or job-based samples—in particular, occupational aspirations, mobility between organizations, and geographic migration—may best be addressed through work-event histories that are based on existing longitudinal panels, such as the Panel Study of Income Dynamics, the National Longitudinal Survey of Labor Market Experience, and the Survey of Income and Program Participation.
There is no simple formula to determine what kinds of research questions can best be answered through new data collection efforts rather than through concerted efforts to gain carefully protected access to file data. In many cases the two resources are complementary. In order to advance this field of study rapidly and efficiently, the agencies involved and representative researchers should work to formulate systematic long-term investment plans for the data needed for the research opportunities discussed above.
Opportunities and Needs
Three kinds of work are important to advance the understanding of decisional, allocative, and organizational phenomena: (1) theoretical analyses of choice, information, incentives, and behavior in markets and other organizational contexts; (2) empirical studies, especially of policy-relevant issues, using panel and longitudinal data on organizations and individuals in organizational context; and (3) refinement and extension of laboratory and field experiments to inform both theoretical and policy questions.
In spite of many past advances, knowledge about choice and allocation is still fragmentary. Experimental work has been limited to a few topics, and observational studies in policy settings have been limited by the availability of data. The advances do suggest, however, that continued research along the lines already initiated is fully warranted, and can be expected to generate valuable new knowledge in such diverse areas as regulatory and legislative reform, the design of financial markets, job segregation and wage gaps, and assessment of the implications and effects of corporate mergers and takeovers. We recommend a total of $56 million annually to support research on these topics.
Certain themes are virtually certain to continue in the next decade: the relationship between information, incentives, and the performance of organizations and market systems; the constraints imposed by human and technological limitations on information processing; the effects of divergent goals and dispersed information on the design of organizations; and the formal properties and purposes of contracts over time. New knowledge can be expected from further development and application of theories of individual decision making under conditions of uncertainty, as it arises from incomplete information about the environment and about the motivations and decisions of other actors in the situation. Important work can also be expected on theories of the mechanisms of collective choice, including strategic behavior, the manipulation of agendas, and the possibility of self-enforcing provisions against cheating. Such research will involve all three kinds of research work: theory development; empirical studies, including the collection of longitudinal data, particularly when repeated interactions are the issue; and laboratory and field experiments. For the study of mechanisms and institutions that promote organizational durability, flexibility, and effectiveness (as well as their opposites), a mix of historical, demographic, and ethnographic research methods is needed.
Much of the most valuable research on the topics covered in this chapter has been supported entirely by traditional investigator-initiated grants, and such grants can be expected to continue to yield rich results. We therefore recommend a substantial expansion of investigator grants, in an annual amount of approximately $20 million. While, overall, new equipment needs are not large by comparison with some other kinds of research, improved computer hardware and development of advanced software for investigators are essential, and we recommend, accordingly, approximately $4 million annually above current expenditure levels for these categories of equipment and support.
We are especially concerned with the diversion of young research talent at the postdoctoral level away from research careers, due to the attractiveness of opportunities in nonacademic careers and the paucity of postdoctoral research positions. We therefore recommend that an additional $5 million annually be added to the support of postdoctoral research fellows and an additional $1 million to predoctoral fellowships.
One of the most powerful devices for encouraging deep and rapid theoretical research has been the fostering, on a continuing basis, of networks of individuals working on closely related issues. The research at the Cowles Commission at the University of Chicago in the 1940s and 1950s offered many examples of the major breakthroughs that can occur if like-minded scientists interact on a regular basis. Much of the recent work on incentives and information has come from initial breakthroughs made by a group of researchers from across the country and abroad, meeting in regular colloquia twice a year. One or two week-long conferences during the academic year and a 2- to 4-week summer workshop, coupled with some resources for graduate students or postdoctoral trainees, have proved able to encourage very rapid breakthroughs on well-chosen subjects. Expenditure for such research workshops should be expanded with an additional $3 million annually.
One of the most important developments over the past two decades is the emergence of controlled experimentation on decision making and the design of market and other types of organizations for allocating resources. Even a simple market transaction is governed by many rules and understandings about property and contract, the value and stability of money, and credit. Political institutions of elections and representation are likewise embedded in a web of rules and procedures meant to regulate the process and produce recognizable and accepted outcomes. These processes have traditionally been studied in real-world settings, but successful efforts have been made to bring the study of organizational and market behavior into the laboratory. In a number of cases, researchers have succeeded strikingly in isolating the phenomenon of major interest without losing or distorting it.
To investigate the importance of rules and procedures, laboratory experimenters have people engage in imaginary but familiar transactions. Experimenters can systematically change the rules of the transaction game, varying the procedures, incentives, information, or objectives given different groups of subjects. The terms of the transactions, for example, may be defined as simple one-shot bartering, as auction or bidding situations, as short-term, high-risk situations, or as long-term, predictable relations between trading partners. The results indicate that these properties are powerful determinants of traders’ preferences and show in detail how they can be expected to work. Laboratory experiments, although far from a substitute for field research—which, among other things, is needed to check on the idealizations introduced in the laboratory—constitute an efficient complement that permits much greater control over research variables and overcomes the need to wait for many events to occur in the real world in order to test every plausible hypothesis.
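A flavor of such rule variation is given by the following Python sketch of two sealed-bid auction institutions. The setup (uniform private values and the standard equilibrium bidding strategies) is a textbook illustration we supply here, not an experiment reported in this chapter:

    import random

    random.seed(7)

    def auction_revenue(rule, n_bidders=4, rounds=20_000):
        # Sealed-bid auction with private values uniform on [0, 1].
        # Bidders follow the standard equilibrium strategies: bid one's
        # value under the second-price rule; shade to value * (n-1)/n
        # under the first-price rule.
        total = 0.0
        for _ in range(rounds):
            values = [random.random() for _ in range(n_bidders)]
            if rule == "second-price":
                bids = sorted(values, reverse=True)
                total += bids[1]                 # winner pays second-highest bid
            else:  # first-price
                bids = [v * (n_bidders - 1) / n_bidders for v in values]
                total += max(bids)               # winner pays own bid
        return total / rounds

    for rule in ("first-price", "second-price"):
        print(f"{rule}: mean revenue = {auction_revenue(rule):.3f}")

Bidding behavior differs sharply across the two rule sets (full-value bids under one, shaded bids under the other) while average revenue coincides, the kind of regularity that laboratory markets allow researchers to confirm or refute directly with paid human subjects.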
Experimental work is critical to theory development. Experimental methods require specification of the detailed structure of the processes presumed to be operating in the markets or other forms of organization under study. The impact of the new data on theory has also been dramatic. Many basic principles and assumptions have come under close examination, leading to new theories and important revisions of older ones.
Application of experimental methods requires some long-term investments. First, support is needed for the development of additional laboratories: for equipment, space, and communication devices such as interactive computers, and, more important, for professional staff who can develop software and maintain and improve hardware. We believe that the establishment of new laboratories and the improvement of existing laboratories requires an increased annual expenditure of $2 million.
Second, theorists from a variety of disciplines must be able to participate in the design, evaluation, and interpretation of experiments. The theoretical issues are often so detailed or subtle that sustained communication is necessary to design appropriate experiments. The phenomena of interest often require input from several disciplinary sources so that the emerging set of principles can find use in applications. Colloquia, released time, and provisions for visiting scientists are needed.
Moreover, training in experimental methods must be expanded. A major strength of the experimental method is the opportunity for different researchers to replicate results. Replication necessitates standardization of procedures and methods. Such standardization has been facilitated in other laboratory sciences through decades of teaching laboratory methods in high school and college. Experimental methods to study markets, contracts, organizations, and agent behavior have not had the advantage of such large-scale background support. Training is needed for researchers to learn the procedures of laboratories where experimentation is being conducted, replicate the original results of others, and thereby consolidate scientific advances while gaining high-quality experimental skills. The additional training can be effected in part by postdoctoral fellowships and opportunities to spend periods of a month or two at existing facilities. All of these ancillary activities—interdisciplinary and interinstitutional collaboration in the design of experiments, periodic visits, and training in experimental procedures—could be sustained by a new program of experimental centers at $4 million per year.
This chapter identified specific areas where new kinds of empirical data are needed. Data on expectation formation at the individual level, for example, possibly including programs of laboratory experiments, would help isolate the causes of the departures from rational-expectations hypotheses that are observed in financial markets. Study of the rigidities underlying unemployment at the macroeconomic level requires detailed data on contract structure and the stickiness of price adjustments. Chronological data on collective bargaining and arbitration, possibly collected in field studies in which the arbitration rules vary, would permit much more accurate assessment of the role of rules and precommitments in successful collective bargains.
This chapter also singled out two types of longitudinal research that promise significant knowledge. First, longitudinal data are needed on the behavior of firms, particularly promotion practices and the trickle-down of vacancies, procedures for evaluating and rewarding performance, and the nature and extent of on-the-job training. These data need to be matched with panel data on workers. Detailed information about contracts, with attention to the provisions for wage and hours adjustments and layoff rules, needs to be collected to assess the impact of economy-wide and firm-specific risk on the welfare of workers. The incidence and effects of multiyear labor contracts with unions should be studied in connection with the observed stickiness of wages. Second, longitudinal data on organizations as such—in contrast to individuals or contracts—are required to gain a deeper understanding of their dynamics and their strategies of decision making; their response to changes in the economic, political, and legal environments; and their strategies for survival, expansion, and change in general.
Collecting longitudinal data is as expensive as it is important. The initiation of a number of appropriately designed large-scale longitudinal core projects, sustained over the requisite multiyear period, with solid support for archiving, documentation, dissemination, and technical and analytic assistance to users, will require an additional annual expenditure of $12 million. For maximum benefit of such expenditure these data collections should become the core to which are attached research projects that involve other methods, particularly ethnographic studies of the workplace and of occupations, network analyses of job opportunities, and field experiments.
A necessary complementary strategy to collecting new data is to cultivate, supplement, and disseminate existing research data and other potentially valuable data files more thoroughly. To do so requires establishing more effective, better supported lines of communication among academic research centers. An example of such communication is specialty-area computer networks, which have been established largely in psychological and human-developmental areas of research. This strategy also requires the establishment and maintenance of better lines of communication between researchers and the relevant data-collecting administrative agencies and private firms, to ensure that records assembled for purposes other than science or research can be made as useful as possible to the scientific community. We estimate that the total range of appropriate efforts to improve access to the most useful data that now exist in a series of academic research centers, government agencies, and private sources will cost approximately $5 million annually, of which $1 million should be especially directed to the exploration and cultivation of private record centers (such as insurance clearinghouses) and unused corporate and local government archives for research purposes.