
Structural Change and Economic Dynamics 12 (2001) 29–57

Technological choice and network externalities: a catastrophe model analysis of firm software adoption for competing operating systems

Rense Lange ᵃ, Sean McDade ᵇ, Terence A. Oliva ᶜ,*

ᵃ Illinois Department of Education, Springfield, Illinois, USA
ᵇ Managing Consulting Director, Gallup Organization, Princeton, New Jersey, USA
ᶜ Department of Marketing, School of Business, Temple University, 1810 N. 13th Street, Philadelphia, PA 19122-6038, USA

Received 6 February 2000; received in revised form 23 June 2000; accepted 8 September 2000

Abstract

This paper presents an empirical estimation of a catastrophe model of organizational adoptions of a high-technology product when network externalities are present. As such, it integrates work from the economics literature and the catastrophe literature to provide a broader look at adoption issues. Additionally, it is one of the few empirical studies we are aware of that attempts to model organizational adoption of high-technology products 'for use' rather than 'for manufacture'. © 2001 Elsevier Science B.V. All rights reserved.

JEL classification: D23, O33, L15, C19

Keywords: Nonlinear dynamics; Network externalities; Organizational adoption; High-technology products; Bandwagons

1. Introduction

Modern economies can be called network economies in at least two ways. Firstly, in dealing with new technologies and innovation, most firms are dependent on third parties with which they have to interact (to network) in order to obtain necessary resources.

* Corresponding author. Tel.: +1-215-2048150; fax: +1-215-7267808. E-mail address: [email protected] (T.A. Oliva).




Secondly, in addition to this more or less intangible part of the network economy, more and more products (and technologies) can be seen as important and form the tangible part of the modern network economy. Hence, high technology and its associated products have become increasingly important to organizations seeking to gain an edge in today's competitive environment. It is often critical for these organizations that the products they adopt are capable of interfacing with other internal and external organizational systems, have complementary products available, and have high levels of support available. In such situations, a community of users develops around the technology or product which provides increased benefits to the members of the community. The added benefits which can accrue to organizations affect the way that they adopt such high-technology products. In particular, firms will tend to switch from one high-technology product (standard or innovation) to another only if they believe that other firms will switch as well. Hence, an organization's own high-technology preferences may be outweighed by its expectations concerning what other firms might do. This desire for community creates a market that is characterized as having network externalities.

The importance of understanding the nature of organizational adoptions when network externalities are present will continue to increase. Unfortunately, to date relatively little empirical research has been done to examine this issue. Part of the reason for this dearth of research is the inherent complexity of modeling the organizational adoption process when network externalities are present. In particular, since adoption by firms is influenced by the tradeoffs between the anticipated benefits and the likelihood that others will adopt, the market is characterized by bandwagons as firms move from the old product to the new product. This creates a discontinuity in the market adoption curve, and makes modeling efforts using more traditional techniques difficult (e.g. Norton and Bass, 1987, 1992).

The research issue addressed in this paper is to examine the market-adoption dynamic for high-technology products when network externalities are present, using a catastrophe model (Thom, 1975) to handle the discontinuity issue mentioned above. The application used for this study is derived from panel data on purchases of Lotus Freelance Graphics software as the DOS and Windows operating systems compete for dominance. In this context, the paper provides empirically based information on the issue of organizational adoption when network externalities are present, an area in which relatively little empirical work has been done. Furthermore, by using a nonlinear technique we are able to overcome some limitations of more traditional approaches to the research problem. The result is that the paper both supports and extends the current literature on adoption when network externalities are present, and provides additional insight into the organizational adoption dynamic.

In summary, the main contributions of this paper are threefold: (1) models of network externalities normally assume high externalities; in this paper we propose a model in which high as well as low levels of externalities are possible, so that developing markets can also be analyzed; (2) the technique used allows us to model jump processes, whereas traditional techniques have difficulties dealing with non-smooth processes, which usually violate their core assumptions; (3) because little empirical work has been done in this field, the model is tested with panel data from the Techtel, Inc. database.



1.1. Relevant literature

In terms of the literature, relatively consistent descriptions of technological choice when network externalities are important have been developed by different researchers (Farrell and Saloner, 1985; Katz and Shapiro, 1985; Farrell and Saloner, 1986; Xie and Sirbu, 1995; Brynjolfsson and Kemerer, 1996). One important characteristic of such situations is that when firms switch from one product, standard, or innovation to another, they tend to move in a bandwagon from the old choice to the new one. From an observer's standpoint there is a discontinuity in the market as firms move in unison from the current standard to another. While the models developed have provided interesting observations regarding firm behavior, modeling efforts often had to rely on somewhat restrictive assumptions. For example, Farrell and Saloner (1985) assume high network externalities are present and technological choice is irreversible. The assumption of 'high' network externalities in the market means that the market has already developed. Hence, this assumption precludes researchers from using the Farrell and Saloner (1985) model for newly evolving markets where externalities are nascent. Additionally, modeling of jump processes when the underlying processes are not smooth is difficult for most traditional techniques. Further limiting the impact of this research is that there has been a relative dearth of empirical support for the models that have been developed. Given the increasing economic importance of technological products in the market (whose success often depends on the development of a network), much more work in the area is needed. Toward this end, we propose to use a catastrophe model (Thom, 1975; Zeeman, 1976) to deal with the problem of describing discontinuous jumps in market behavior. Such models allow us to relax the typical assumptions so we can consider both developing and currently existing high network externality situations.

The specific application examines a sample of firm adoptions of PC presentation software (Lotus Freelance) for competing operating-system standards (DOS versus Windows) drawn from a panel of firms in 14 industries. Data cover the years from 1987 to 1995, which is the most dynamic time frame for the DOS versus Windows wars. To economize the effort and justification needed for the variables used in the study, three key indicators identified in the literature (Farrell and Saloner, 1985) were used. Specifically, we assume that firm adoption behavior is primarily driven by the expected discounted cost/benefits of switching from one product standard to another, which are conditional on the level of network externalities present (Farrell and Saloner, 1985). Since our intent is to look at the adoption process over time, we depart from the literature and allow network externalities to vary from low to high. This is reasonable given that nascent markets often have only limited externalities as various technological formats compete for dominance. For example, in the beginning, numerous VCR formats were considered before the market resolved into the dichotomous BETA versus VHS battle, eventually won by VHS. Similar examples abound for competing PC operating systems during the early 1980s, and these continue until today (e.g. LINUX and UNIX). Oliva (1994) argues that for technological products the level of network externalities can change



as products move through the stages of the product life-cycle (Kotler, 1997). He argues that this is the result of different adopter types, with different characteristics, entering the market at different times (Oliva, 1994). Initially, 'innovative adopters'¹ pay a premium to be first and help firms recover their development costs. But innovators often use different versions of the formats offered. Given their small numbers, they do not determine the standard by themselves. Also, complementary goods and post-purchase support are limited, since vendors have no real incentive to join the technological community given the small installed base (Teece, 1986; Rosenkopf and Tushman, 1994; Wade, 1995). However, as the product life-cycle progresses, successively larger adopter segments enter the market, picking the product that is perceived to be better. Over time a standard is settled on by the newly evolving technological community². At this point new adopters will join the network, thereby strengthening the hold of the standard.

In time, a challenger may emerge that promises better benefits. The decision to stay with the current standard or switch to a new standard will become inherently risky for firms. This choice introduces chaos into the structure of the technological community because it creates a dilemma for organizations as they struggle to answer two crucial questions identified by Rosenkopf and Tushman (1994). They argue that adopting firms must worry about the following: (1) ‘What if we adopt (or switch to) a new standard and no one else does?’; and, (2) ‘What if we do not adopt (or switch to) the new standard and everyone else does?’ Hence, when externalities are large, the competing standard choice becomes a sort of zero-sum game. This produces an instability in the market that can result in a sudden bandwagon shift to the new standard if the benefits for making the switch are significant enough.

Using the firm adoptions, cost/benefits to switch, and network externalities, it is possible to conceptualize the situation in the form of a catastrophe model of firms’ adoption behavior (market behavior). In the sections that follow we present the description of the model, the approach used to estimate the model, a description of the data, how the variables were operationalized, and an analysis of the findings with conclusions.

2. The catastrophe model

We assume that catastrophe modeling is not new to this audience. We note that economists have used catastrophe theory to examine the following topics, to name a few: the business cycle (Varian, 1979); an extension of the Phillips Curve (Fischer and Jammernegg, 1986); the stability of stock exchange behavior (Zeeman, 1974); and a model of bank failures (Ho and Saunders, 1980). In this paper, the use of the cusp model is suggested as one way to examine jump behavior from a different perspective. This effort is an attempt to get around the more standard modeling assumption that the underlying economic behavior results from smooth decision rules. Hence, we recognize the difficulty of modeling jump behavior, while recognizing there are endogenous decision rules vis-à-vis the process. In short, we present the canonical form of the cusp model in Eq. (1) below as a reasonable way to view the market, because the observed behavior meets the expected qualitative requirements for the use of such models. As such, the equation represents another way to view market behavior, rather than a definitive description of the underlying economic processes. To the degree our results are consistent with the existing literature, we are adding one more view of a very complex process.

¹ The concept of an 'innovative adopter' is taken from the marketing literature, which categorizes buyers (i.e. product adopters) by category depending on their characteristics. For example, Innovators purchase first, as opposed to Laggards, who enter last in terms of the product's life-cycle (Kotler, 1997).
² This view is also supported by organizational theorists who describe technological change evolving over time as a process of variation, selection, and retention (Tushman and Anderson, 1986; Rosenkopf and Tushman, 1994).

Fig. 1 presents a description of a catastrophe model using the aforementioned three variables (Oliva, 1994). Movement occurs on the curved portion of the model shown in Fig. 1. Changes in the control or independent variables (X: right/left movement, and Y: back/front movement) cause changes in the behavior or dependent variable (Z: vertical movement). If Y is low, smooth changes in Z occur in proportion to changes in X, as shown by examining the travel of points A and B in Fig. 1. When Y is high (past the singularity), changes in X produce relatively small changes in Z until a threshold is reached, when there is a sudden discontinuous shift in Z. This is depicted by the path from points C to D in Fig. 1. Note that a reversal in X back to the point of the shift in Z will not cause Z to return to its original position, since X will have to move well past that point to cause Z to shift back. This is shown by the movement from point D to E. The locus of shift points is defined by the cusp points in the figure. The various moves on the surface are characterized by five qualities that Thom (1975) described as divergence, catastrophe, hysteresis, bimodality, and inaccessibility, which are briefly reviewed in the appendix. The canonical form of the basic model in Fig. 1 is given by Eq. (1) below,

Z³ − X − YZ = 0,  (1)

where the dependent variable is Z, and the independent variables are X and Y. While the cubic is basically simple, the model's implicit form creates difficulties from an estimation standpoint (Oliva et al., 1987), since it generates an area of overlap which is both multivalued and discontinuous.
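To make the overlap region concrete, the short sketch below (our illustration in Python; the grid values are hypothetical and not drawn from the data) solves Eq. (1) numerically and shows that a single (X, Y) pair can carry either one or three equilibrium values of Z.

```python
import numpy as np

def cusp_equilibria(x, y):
    """Real roots of Eq. (1): Z^3 - X - Y*Z = 0."""
    # numpy.roots expects polynomial coefficients in descending powers of Z.
    roots = np.roots([1.0, 0.0, -y, -x])
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Low Y (below the singularity): a single equilibrium, smooth response.
print(cusp_equilibria(x=0.1, y=-1.0))
# High Y (past the singularity): three roots -- two stable sheets plus
# the inaccessible middle sheet, i.e. the overlap region of Fig. 1.
print(cusp_equilibria(x=0.1, y=3.0))
```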

2.1. Dependent variable

In the present conceptualization, the dependent variable Z represents the percent of firms choosing one of two standards, where one choice is the old or current standard and the second choice is the new or competing standard. The number of firms adopting either choice depends on the X-level of cost/benefits resulting from abandoning the old standard for the new standard, when the Y-level of compatibility is required or desired in the market by firms. Unlike more standard approaches, the model has an area where a bimodal response is possible for a given independent variable pair (i.e. a given X, Y pair can have two different Z-values associated with it): one of the Z-values represents the distribution of firms at the old technology standard, and the other Z-value represents the number of firms at the new standard (this is the overlap area of Fig. 1). It is also this area of the model that gives it the ability to describe a wide variety of interesting behaviors like bandwagons, hysteresis (lags in adoption), first-mover advantages, or a predisposition towards a given standard. Determination of which value of Z is the appropriate one to use in a given situation is made by examining the history or trajectory of the market (Zeeman, 1976). Hence, once you have identified a historical point for the market, knowledge of the time series from that point forward enables you to resolve any ambiguity about which of two competing standards the market adopted or will adopt³. There is no restriction on the movement of firms, though in practice there is a tendency to move only in one direction with regard to technology. They may move back and forth between the old technology standard and the new technology standard depending on changes in benefits and network externality level. An example of a situation where there has been a retreat of sorts is with packaging in fast-food establishments. For environmental reasons McDonald's, Wendy's, Burger King and other major vendors have replaced plastic food containers with paper, moving back to the old packaging standard. In the approach used in this paper, such a switch back would not pose the problem it would with other modeling techniques.

³ We arbitrarily define the market as having 'no standard' when the distribution of firms is at 50/50; the market is at the 'new standard' if X > 0.50, and at the 'old standard' if X < 0.50.

2.2. Independent variables

The X-variable represents the cost/benefits that accrue from switching to the new or competing technology standard. In situations where no externalities are present and the cost/benefits accrued from switching are zero, firms are equally divided across the two technology choices. Like previous studies that have applied a benefit approach to the competing technology-product standards issue, we assume that all adopting firms have similar benefit functions over time (Farrell and Saloner, 1985). For high-technology product standards, adoption benefits are usually framed in terms of perceived technical superiority, ease of use, and manufacturer reputation (Farrell and Saloner, 1985; Katz and Shapiro, 1985; Arthur, 1989).

The Y-variable represents the degree of externality in the marketplace. Small values of Y indicate that low levels of network externalities exist or are desired, while large values of Y indicate that high levels of network externalities exist or are desired. The Y-variable is called the splitting factor because, as Y moves out from the origin, a critical point is reached where the surface bifurcates. Prior to this point, no adoption bandwagons are expected to occur, while after this point, only adoption bandwagons are expected to occur. One implication of the model is that the size of any potential bandwagon is directly associated with the level of network externalities desired or existing in the market.

2.3. The response surface

The response surface is the curved portion of Fig. 1. Vertical and horizontal lines have been added to help orient the reader to the location and direction of the three axes. The origin is located at the back middle of the surface. Values of Y increase from back to front, with low externality represented at the back of the diagram and high externality at the front. Changes in X values are represented by horizontal movement, such that 'negative' benefits (costs) for switching to the new technology are on the left and 'positive' benefits are on the right. The percent of firms adopting the technology standard is represented by vertical movement in the figure. At the bottom left front part of the surface (bottom sheet) no firms are adopting the new technology standard, while at the right front part of the surface (top sheet) all firms have adopted the new standard. Beyond the threshold value, firms will only adopt the competing standard. The locus of bandwagon threshold points is identified by a cusp which has been projected onto the XY-plane for easier visualization. This is the set of points at which bandwagons will occur for given X, Y combinations. By restricting Y-values to the high-externality region, the model corresponds to the situation assumed in the literature (Farrell and Saloner, 1985). That is, network externalities are high and firms will only shift to the new standard in bandwagons when benefits are sufficiently high.

Fig. 2. Planar views (slices) of the catastrophe model.

2.4. Model dynamics

Fig. 2, panels A, B, and C, respectively, show three slices of the ZX-plane for low, moderate, and high Y levels, while panel D presents a top-down projection of the surface onto the XY-plane. When the market prefers choice over compatibility (low Y-values), the adoption function is relatively flat, as shown in Fig. 2A. In this case the market is relatively indifferent and we would expect the distribution of firms to be in the neighborhood of the median (i.e. near 50/50). Points A and B in Fig. 1 trace changes in cost/benefits (X-values) for low Y-values, which corresponds with Fig. 2, panel A.

If the market wants compatibility, network externality increases, and both the shape and vertical dimensions of the adoption function change. Fig. 2B depicts the situation at moderate network externality levels. The emerging S-shape implies that as Y increases, the firm adoption distribution moves increasingly away from the median, as firms favor one standard over the other. In the typical situation (switching from old to new), when benefits increase past the benefit-neutral position, increasingly more firms are willing to switch to the new standard for smaller additional benefits.

When the market desires high compatibility, most of the firms will be at one standard or the other. This phenomenon is depicted by the pronounced S-shape (more like a Z-shape) bifurcation of the surface. At this point the firm adoption ratio approaches (0, 1) or (1, 0) in favor of either the old or new standard (Fig. 2C), depending on which provides the better benefits. The S-shape has now become very pronounced, as has the area of overlap. Depending on which standard is dominant, movement to the benefit-neutral position from a benefit extreme does not cause firms to switch to the alternative standard. In the typical situation, firms will not switch from the old standard to the new until x ≥ x*. In cases where switching back to the old is possible, this will not occur unless benefits are significant in the reverse direction (i.e. x ≤ x′). The locus of x* and x′ forms the benefit thresholds identifying where bandwagons will occur for given combinations of network externalities and cost/benefits, as shown in Fig. 2D. Points C, D, and E in Fig. 1 illustrate such movement when externality levels are high. At point C, there is little willingness to switch from the old standard. If X continues to increase, eventually a point is reached (x ≥ x*) where the benefits of switching are significant and a bandwagon occurs as a group of firms shifts to the new standard at point D. The number of firms that switched is measured by the vertical distance between the bottom and top sheets of the surface. However, in order to return to the old standard, the cost/benefit value must go back past x* to x′ for the system to return to E. This will only occur when there are high negative benefits (costs) from being away from the old standard. The lag in switching represents the hysteresis effects inherent in the process and reasonably represents the kind of inertia found in actual markets.

Firms can be located at one of two states in the overlap area. In part, this captures the stickiness in the market and reflects the unwillingness of firms to leave the standard they are currently at when network externalities are high. This is also consistent with the nature of bandwagons discussed in Farrell and Saloner (1985, p. 76). The boundaries (width) of the bandwagon zone defined by the cusp vary in size with the level of network externality in the market, as depicted in Fig. 1 and Fig. 2D. When network externality increases in the market, the width of the bandwagon zone increases, as does the vertical distance between the sheets. The large number of adoptions helps the new standard by providing a ready-made network of sorts. It also helps ensure that a minimum critical mass exists to cover the degree of network externality needed in the market to establish the standard.

The expected track of firm adoptions in nascent markets is shown by the dotted line in Appendix Fig. 7 and Fig. 8. The two figures show the same trajectory from different perspectives. Both depict how one product, which has a small cost/benefit advantage initially, comes to dominate as the market develops. In Fig. 1 and Figs. 6 and 7 in the appendix, the emerging dominant standard F provides slightly different cost/benefits vis-à-vis G. That is, F is at x₀ > Δx and G is at x₀ < Δx, where Δx is a small change in benefits. Assuming that firms favor F over G, this slight initial difference in position between F and G gets magnified as network externalities grow. This drives F and G farther apart, as shown in Figs. 1, 6 and 7. Interesting market examples of this behavior are provided by the Apple/ATARI competition in the early 1980s and the VHS/BETA competition in VCRs. While the Apple/ATARI battle is more interesting, the VHS/BETA competition is the most often cited. Conventional wisdom argues that in the VHS versus BETA format 'war', the small initial advantage of a 3 versus 2.5 h recording capability ultimately resulted in more consumers picking VHS over BETA. Since American football games run around 3 h, only VHS allowed them to be taped in their entirety. This led to more software (rental movies) being available in VHS format, which, in turn, led new consumers to pick VHS over BETA as they came into the market.

In time a better competitor may enter the market (G) to challenge F in the now-developed market. Since the market now has high network externalities, the challenger must provide benefits in excess of the threshold value (x*) for firms to switch. If it does, then the firms will shift in a bandwagon to the new competitor G, as shown in Figs. 6 and 7.

3. Estimation issues

Estimation of chaos models, in general, and catastrophe models, in particular, is difficult because of their nonlinear dynamic characteristics. For example, Eq. (1) presents a problem because it is implicit, multivalued in the dependent variable, and discontinuous. So while the cusp model is parsimonious in its ability to describe a large variety of complex behaviors, it presents major estimation difficulties.

Initial efforts to estimate catastrophe models were simplistic. The first published empirical social science application of a catastrophe model is generally credited to Zeeman et al. (1976). It focused on institutional disturbances (riots and takeovers) in a United Kingdom prison. The approach used might best be described as quasi-graphical. It was later refined by Sheridan and Abelson (1983), who used an updated version of the graphical approach in a study of employee turnover. Taking a different tack, Oliva et al. (1981) modeled a collective bargaining situation using a set of rule-based predictions about the dependent variable's behavior. Their method made predictions about bargaining system behavior, then used a Chi-square type measure to assess the accuracy of their model. Although more empirically satisfying than the Zeeman et al. (1976) method, it was simple and ad hoc. While these were critical first steps, they tended to provide only limited empirical support.

Important progress was made by Loren Cobb (1978, 1981), who was working on developing statistical distributions for catastrophe models in the biosciences. Drawing on Cobb's analytical work, Guastello (1982, 1995) developed a promising statistical specification for the cusp model by starting with the deterministic equation dz = (Z³ − ZY − X)dt. By inserting beta weights and setting dt equal to 1, he developed his statistical expression:

ΔZ = Z₂ − Z₁ = b₀ + b₁Z₁³ + b₂Z₁Y + b₃X + ε.
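To illustrate how Guastello's specification is fitted in practice, the sketch below (our reconstruction on simulated data, not the paper's analysis) estimates the difference equation by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Z1 = rng.normal(size=n)          # state at time 1 (simulated)
X = rng.normal(size=n)           # asymmetry (benefit) factor
Y = rng.normal(size=n)           # splitting (externality) factor
# Simulate the difference equation with known weights plus noise.
dZ = 0.1 - 0.8 * Z1**3 + 0.5 * Z1 * Y + 0.6 * X + rng.normal(0, 0.1, n)

# Design matrix for dZ = b0 + b1*Z1^3 + b2*Z1*Y + b3*X + e.
D = np.column_stack([np.ones(n), Z1**3, Z1 * Y, X])
b, *_ = np.linalg.lstsq(D, dZ, rcond=None)
print(b)   # recovers approximately [0.1, -0.8, 0.5, 0.6]
```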

A major limitation of this and Cobb's approach is that it does not allow for a priori specification of the control variables. Rather, the technique 'finds' catastrophe if it exists and identifies (in a probabilistic sense) which independent variables are associated with the control factor (X in Eq. (1)) and which are associated with the splitting factor (Y in Eq. (1)). Clearly, this is a problem when the researcher is trying to develop a confirmatory estimate of a specific catastrophe model. Additionally, the dependent variable is required to be univariate. Consequently, the technique's usefulness is limited when the catastrophe model uses or requires a multivariate dependent construct. To deal with the problem, researchers using Cobb's (1981) or Guastello's (1995) techniques have typically averaged or otherwise scaled the measures to get a single dependent measure. Unfortunately, such averaging can cause the loss of valuable information when a true catastrophe model is present, as demonstrated in Oliva et al. (1987).

A solution to the problem was developed by Oliva et al. (1987). Their method, called the General Multivariate Methodology for Estimating Catastrophe Models (GEMCAT), used a scaling approach that allows for the a priori specification of variable type and can handle multivariate constructs in all the variables. In particular, Oliva et al. (1987) generalize the variables Z, Y, and X to their multivariate counterparts. The Zit, Yjt, and Xkt are observable indicator variables with weights αi, βj, and γk, respectively. Hence, we define Eq. (2), Eq. (3), and Eq. (4) below, which allow the catastrophe equation to be rewritten as shown in Eq. (5):

Z*t = Σ (i = 1..I) αi·Zit,  (2)

Y*t = Σ (j = 1..J) βj·Yjt,  (3)

X*t = Σ (k = 1..K) γk·Xkt,  (4)

0 = Z*t³ − X*t − Y*t·Z*t.  (5)

From Eq. (5) the estimation goal is to minimize Eq. (6) over the weights αi, βj, and γk:

F = Σ (t = 1..T) [Z*t³ − X*t − Y*t·Z*t]²,  (6)

where the bracketed term is the error εt for observation t. For a given set of measures on the constructs, the object is to estimate the impact coefficients that define their respective latent variables. This is analogous to the minimization of the error sum-of-squares in a regression analysis by making F as close to zero as possible (Oliva et al., 1987). We explicitly note that from a statistical standpoint the error structure in Eq. (5) would be extremely complex. Hence, the approach reported is a way to make the estimation problem tractable. Extensive simulation results (Lange et al., 1999) indicate the approach can distinguish between linear and nonlinear surfaces, and Bootstrap and Jackknife procedures provide some assurance that the parameter estimates are reasonable. GEMCAT approaches have been successfully applied in a number of different organizational research contexts (e.g. Oliva, 1991; Gresov et al., 1993; Kauffman and Oliva, 1994). More recently, Lange et al. (1999) developed an improved version of the algorithm called GEMCAT II, which provides greater speed, efficiency, utility and flexibility in terms of analysis and testing. For example, the new version has options to perform both Bootstrap and Jackknife testing procedures, and it produces SPSS files for further analysis. In addition, GEMCAT II is slightly more general, as it allows offsets α₀, β₀, and γ₀ to be included in Eqs. (2), (3), and (4).
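A minimal sketch of the GEMCAT estimation idea follows, assuming one indicator per construct plus a Z offset (the configuration ultimately retained below); the optimizer choice and the normalization of the Z1 weight to 1.0 are our assumptions, not the published algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def F(params, Z1, X1, Y1):
    """Eq. (6): sum of squared implicit cusp residuals."""
    a0, gamma, beta = params           # Z offset, X1 weight, Y1 weight
    Zs = Z1 + a0                       # Z* with the Z1 weight normalized to 1.0
    resid = Zs**3 - gamma * X1 - beta * Y1 * Zs
    return np.sum(resid**2)

def fit_cusp(Z1, X1, Y1):
    """Minimize F over the offset and impact coefficients."""
    res = minimize(F, x0=np.zeros(3), args=(Z1, X1, Y1), method="Nelder-Mead")
    return res.x
```

Fixing one weight avoids the trivial all-zero solution of Eq. (6); resampling the panel quarters and refitting gives bootstrap distributions of the coefficients.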

Finally, in their comparison of the Cobb (1981) and Guastello (1995) techniques versus the GEMCAT approach, Alexander et al. (1992) note that for exploratory situations in which theory construction is the focus, or when the existence of catastrophe data is the issue, and univariate dependent measures are sufficient, Cobb-related approaches are the best choice. However, Alexander et al. (1992) argue that GEMCAT is the best choice for theory testing or confirmatory contexts, and for those requiring multivariate indicators in the dependent variable. Given the use of a multivariate dependent construct and the confirmatory nature of this work, the GEMCAT II procedure is the appropriate estimation technique to use.

4. Data

Data for this study were provided by Techtel, Inc., a major marketing research firm in Emeryville, California (www.techtel.com). Techtel has tracked organizational adoption of PCs and PC software since 1984 and network equipment since 1989, surveying over 68 000 firms. The data for our study were developed from a quarterly survey sent to a panel of 2000 end-user firms. Firm officials who respond to the panel are recruited by Techtel and are required to be qualified as having influence in, and knowledge of, the PC buying process within their respective organizations. The issue of who supplies the data conforms to the criterion for quality found in Tornatzky and Klein's (1982) meta-analysis of 75 innovation adoption studies. Respondent firms come from the following 14 industries: agriculture, manufacturing, finance, health care, construction, wholesale trade, public utilities, business services, retail, transportation, education, government, communications, and publishing. The types of products in the database fit the criteria of high technology suggested in the following papers (Shaklin and Ryans, 1984; Moriarty and Kosnik, 1989; Heide and Weiss, 1995). Finally, the quality of the data is also indicated by the quality of the clients who purchase Techtel's data for their own businesses, e.g. Apple, Canon, IBM, Netscape, Compaq, Symantec, Toshiba, Gateway, Sony, Intel, Hewlett-Packard, DirecTV, PAGENET, Visio, PSINet, and WRQ, to name a few (www.techtel.com).



The reports from the Techtel survey, called 'PC/Market Opinion™', are widely used in industry and quoted frequently in business and trade publications. A confidentiality agreement requires that all data provided by Techtel be disguised with respect to: (1) the identity of the firms; and (2) the exact values reported. Examples of the types of products in the Techtel data set include: spreadsheet software, personal computers, communications software, modems, video cards, word processing software, and the like. The data also include multiple product classes (e.g. PCs, CD-ROMs, modems), product forms (e.g. laptops, notebooks), and brands (e.g. IBM, Microsoft, Lotus, Novell).

A random sample of 128 firms who had adopted Freelance over the 25 quarters of the study period from 1988 to 1994 was used. These firms all adopted the products and participated in the survey for the complete time frame. Other firms that did not conform to the foregoing were not used. This eliminates problems associated with firms who dropped out of the survey or joined later in the time period.

The study reported below focuses on the diffusion of competing technology standards rather than the products that are aligned with a given standard. A subset of the Techtel data containing firms who used or adopted the DOS versus Windows versions of the Lotus Freelance Presentation package was drawn. Hence, the competition is between DOS and Windows Operating Systems for Freelance adopters. We note that at the beginning of the study period DOS was the operating system in ascendance and Windows was struggling in the market. In fact, it was not until version 3.1 that Windows dominated the market. The product was held constant because of our focus on the diffusion of competing technology standards (PC operating systems) rather than competing products (e.g. Microsoft PowerPoint versus Lotus Freelance). Finally, we did not want adopter preferences for individual brands (e.g. Microsoft over Lotus and vice versa) to influence the diffusion process in any large measure.

5. Operationalization of variables

Seven indicator measures were developed from the Techtel database based on those found in the literature (e.g. Tornatzky and Klein, 1982; Norton and Bass, 1987, 1992; Brynjolfsson and Kemerer, 1996). These are described below and represent what we believe are the best initial set of measures available for this type of study. Our approach was to start with the maximum information possible within this database, then eliminate indicators as warranted based on the statistical analysis.

5.1. Dependent variable Z (Adoptions)

Techtel’s database provided two variables to measure firm adoption of the new standard (Windows relative to DOS): (1) the relative percentage of firms who bought Freelance for Windows; and (2) the relative percentage of firms that tried Freelance for Windows during the current quarter. Relative rather than absolute


(14)

percentages were used so that the dependent variable could serve to track the adoption of both the DOS and Windows operating system. Trial was included in addition to purchase to provide a more complete measure of actual adoption. The marketing literature (e.g. Kotler, 1997) provides numerous rationales of why trial is a reasonable surrogate. Furthermore, adoption is the key measure found in Tornatzky and Klein (1982)review of innovation papers. Our operationalization of the two dependent indicator measures is as follows:

Z1: relative adoption (% who bought Freelance for Windows − % who bought Freelance for DOS)
Z2: relative trial (% who tried Freelance for Windows − % who tried Freelance for DOS)

5.2. Independent X-variable (Net benefits)

Three measures were selected to get at the net adoption benefits from choosing the new standard. While the basic model shown in Fig. 1 represents the X-variable in terms of cost/benefits, we assume that costs are implicitly accounted for in the development of the following measures. That is, the data are derived from panel data in which firms evaluate the adoption of the new operating-system version relative to the old one. Hence, positive evaluations imply that the benefits of acquiring the new operating-system version exceed the costs. A decision to stick with the old operating system implies the costs of switching are greater than the benefits.

Based on an examination of the literature (Tornatzky and Klein, 1982; Norton and Bass, 1987, 1992; Brynjolfsson and Kemerer, 1996), we chose the following proxies: (1) relative awareness; (2) relative consideration; and (3) relative opinion. In particular, we refer readers to the meta-analysis performed by Tornatzky and Klein (1982). The rationale for each of these indicators is described below.

Relative awareness is a reasonable proxy for mass media communications that inform organizations about the various product standards. In terms of the promotion mix, most agree that mass media advertising is ideal for creating awareness. As more adopters become aware of a new standard compared to the old standard due to increased media spending, the benefits of adopting the new standard increase (or the costs decrease). As awareness grows in the market, adopters tend to feel more secure that the new standard will survive the competition. We note that advertising is often used as a safety heuristic for high-technology products because of the rapid pace of change in the industry (Business Week, 1996b). For example, experts suggest that computer buyers purchase from manufacturers who have run at least a two-page color advertisement for a minimum of six issues in popular computer magazines such as Computer Shopper (Business Week, 1996a). In a sense, relative awareness captures the coefficient of external influence in macro-level diffusion models found in the marketing literature.



Relative consideration measures the number of adopters who are seriously thinking about adopting or switching, but have not yet done so. As consideration increases in favor of Windows relative to DOS, it is reflected in the change in benefits of adopting Windows. Consideration may be related to inertia to the degree that it is the last step before the go-no-go decision is made. High levels of consideration without adoption may indicate that a firm prefers Windows but is not willing to adopt due to unclear or uncertain benefits. At the same time, continued consideration indicates benefits are sufficient to warrant significant organizational effort in terms of evaluation and testing.

Relative opinion is the most direct measure of adoption benefits we have. It represents the relative preference for Windows compared to DOS. The opinion measure captures benefits like technical superiority, ease of use, or manufacturer reputation. It is consistent with the literature on network externalities as an indicator of adoption benefits (Katz and Shapiro, 1985). The operationalization of the independent X-variable measure is defined by the indicators as follows:

X1: relative awareness (% aware of Freelance for Windows − % aware of Freelance for DOS)
X2: relative consideration (% considering Freelance for Windows − % considering Freelance for DOS)
X3: relative opinion (% positive opinion of Freelance for Windows − % positive opinion of Freelance for DOS)

5.3. Independent Y-variable (Externalities)

Wade (1995) points out that there have been few attempts to operationalize network externalities. The following are among the more salient published papers that have attempted to measure the network externalities construct: Greenstein (1993), Gandal (1994), Saloner and Shepard (1995), and Brynjolfsson and Kemerer (1996). Basically, three of these studies impute the existence of network externalities as an unobservable construct, while the fourth (Brynjolfsson and Kemerer, 1996) uses an approach similar to ours. In the first two articles, externalities were assumed if there was a desire for compatibility. In the third study, the number of bank branches is used as a proxy for externalities in the adoption of ATM technology.

In this paper we are interested in measuring changes in the degree or level of network externalities as a market evolves over time. Following Saloner and Shepard (1995), we begin by considering the total percentage of adopters who adopted either Windows or DOS during the previous quarter as a measure of direct network externalities, i.e. a combined installed base. Both are used since the operating systems are competing and their users represent the interested market in the longer run. Clearly, the larger the installed base, the greater the opportunity to communicate directly with other adopters. In addition to the installed-base measure, we also consider the total number of complementary goods adopted for both standards during the current quarter. This provides a measure of an indirect network externality, because the larger the total number of complementary goods available, the greater the opportunity to purchase compatible software. Techtel's data provide a measure of the percentage of members who adopted complementary goods for the Windows or DOS operating systems. Both of these measures were based on the average percentages for the two standards (DOS and Windows). For instance, if 25 and 35% of the panel adopted complementary goods for the DOS and Windows operating systems, respectively, then the total indirect network externality for that quarter would be the average, 30%.

Keep in mind that we are measuring externalities at the product-market level (not the operating-system level). By doing this, the variable provides a measure of how important compatibility is as a whole. The trade-off is that we are not distinguishing between the relative degrees of externality for each operating system. However, this approach is consistent with other research (Gandal, 1994; Saloner and Shepard, 1995; Brynjolfsson and Kemerer, 1996). We have operationalized the independent Y-variable measures as follows:

Y1: complementary goods (average % of firms who adopted complementary goods for either Windows or DOS during the current quarter)
Y2: installed base (average % of firms who adopted either Windows or DOS in the previous quarter)

6. Analysis and findings

Data on 128 organizations covering 25 quarters of Lotus Freelance software adoptions were analyzed using the GEMCAT II procedure. Preprocessing of the data was necessary to prepare them for use by the GEMCAT II program. First, the data were standardized so that the estimated construct parameters could be compared (Oliva, 1991). Since the data are measured over time, a linear constant was added to each score so all numbers would have positive values. The standard practice of taking log transforms to deal with time-related effects in the data (Burns and Wholey, 1993) was also followed. Finally, a standard correlation analysis on the prepared data set was run to examine collinearity among the indicators.
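A sketch of these preprocessing steps as we read them (the exact shift constant and the ordering of the steps are our assumptions):

```python
import numpy as np

def preprocess(series):
    """Standardize, shift positive, and log-transform one indicator series."""
    z = (series - series.mean()) / series.std()  # comparable impact coefficients
    shifted = z + abs(z.min()) + 1.0             # linear constant: all values > 0
    return np.log(shifted)                       # damps time-related effects

# Collinearity screen among the prepared indicators (rows = quarters):
# prepared = np.column_stack([preprocess(s) for s in indicator_series])
# print(np.corrcoef(prepared, rowvar=False))
```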

Given that within-measure collinearity was relatively high, we tested several different models, fully expecting to use a reduced set of the original variables identified. Ultimately, we found that the best model used the following three indicators: X1 (relative awareness), Y1 (complementary goods), and Z1 (relative adoption), with a Z offset. In order to provide a more rigorous test of the cusp model, we also compared the results against linear regression and fold catastrophe models using the same indicators. The data were fitted using version 1.3 of GEMCAT II.



Table 1
GEMCAT II results for the cusp model

Indicator measures   Estimated coefficient   Z-Coefficient   Mean      Z-Mean    SE       Bias      P>0
X1                   0.0073                  1.0705          0.0058    0.8547    0.0068   −0.0015   0.8600
Y1                   0.1833                  3.9782          0.1733    3.7621    0.0461   −0.0100   1.0000
Z1 ᵃ                 1.0000                  –               1.0000    –         0.0000   0.0000    1.0000
Constant             −0.3586                 −7.7068         −0.3747   −8.0538   0.0465   −0.0161   0.0000

Pseudo-R² = 0.99967; df1 = 2; df2 = 22; Pseudo-F = 33294.354.
ᵃ The Z1 weight was fixed at 1.0.



Table 2
Wilcoxon/Friedman tests

                    Cusp model   Fold model   Linear model
Average rank sum    1.56         2.00         2.44

Friedman test = 9.680; Kendall coefficient = 0.194.

The fit yielded a discrepancy of only 0.0000236 between the actual and the predicted Z*. (Note that a perfect fit implies that F in Eq. (6) equals 0.) The corresponding Pseudo-R² index of fit for the cusp was 0.99967, and the residuals were approximately normally distributed. It would seem difficult to improve on this fit, and indeed the linear model and the fold catastrophe (each containing the same number of parameters as the cusp) showed lower R² values. Specifically, the linear regression model produced an R² value of 0.985; however, observations 1 and 13 were outliers and observation 1 had large leverage. Also, the fold catastrophe produced a Pseudo-R² of 0.99752, but again two outlying observations were evident.

Because all three formulations provided high fit indices, we wanted to know whether the differences across the three models were statistically significant. If not, it is arguable that using catastrophe theory is not worth the effort. We therefore analyzed the squared residuals of the three models using Friedman's non-parametric test for multiple groups, as available in SPSS V7.0. The models show significantly different fit (χ²₂ = 9.680, P < 0.008, two-sided), and the average ranks shown in Table 2 confirm the ordering of the R² values. Additional (pairwise) Wilcoxon tests over the squared residuals (not shown) indicated that the cusp significantly outperformed the linear model (Z = 4.671, P < 0.001) as well as the fold catastrophe (Z = 5.086, P < 0.001). Finally, Fig. 3 provides compelling evidence for the same conclusion, as the cusp has no outlying residual values while, as pointed out earlier, the other models do. In other words, the cusp fits all observations, whereas the linear model and the fold cannot deal with at least two of the 25 data points. This is a significant point in favor of our cusp catastrophe formulation, which becomes even more compelling when the market adoption tracks are examined in Fig. 4, Fig. 5, and Fig. 6 below.

Fig. 4. YZ planar view of organizational adoptions.

Given the relative merit of the cusp model in this case, we now look at Table 1 for the GEMCAT II estimates. Note that the indicators are listed in the first column, followed by their weights in the second column ('Estimated coefficient'). Substitution of these weights into Eq. (5) yields the following cusp:

(Z1 − 0.3586)³ − 0.0073X1 − 0.1833Y1(Z1 − 0.3586) = 0.
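Substituting the estimates, the predicted adoption level(s) for an observed (X1, Y1) pair are the real roots of this fitted cubic; a sketch (ours):

```python
import numpy as np

def predicted_adoption(x1, y1):
    """Real roots of (Z - 0.3586)^3 - 0.0073*X1 - 0.1833*Y1*(Z - 0.3586) = 0."""
    # Substitute W = Z - 0.3586, giving W^3 - 0.1833*y1*W - 0.0073*x1 = 0.
    w = np.roots([1.0, 0.0, -0.1833 * y1, -0.0073 * x1])
    w = w[np.abs(w.imag) < 1e-9].real
    return w + 0.3586   # back to the Z scale; market history selects the branch
```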

Table 1 further shows the average bootstrap values ('Mean') and standard errors ('SE'), which can be used to obtain standard (i.e. parametric) tests of significance of the indicator weights ('Z-Coefficient' and 'Z-Mean'). Although such tests are sometimes appropriate, we prefer the bootstrap values listed in the last column ('P>0'), as these provide non-parametric tests of significance (note: 500 replications were used). According to this criterion, the weight of the Y1 indicator (complementary goods) is statistically significant (P < 0.01), whereas that of X1 (relative awareness) is not (P > 0.13). Thus, network externalities have the greatest impact on the adoption process, a result that is consistent with economic theory (Farrell and Saloner, 1985; Katz and Shapiro, 1985).

The latent constructs calculated by the GEMCAT II procedure are given in Table 3, and their plots are shown graphically in Fig. 4, Fig. 5 and Fig. 6, one for each of the three possible planar combinations of the latent dependent and independent variables⁴. The points in the adoption trajectory are labeled in historical sequence by quarter, from 1 to 25, for convenience.

Fig. 4 shows the movement in the YZ-plane, with a parabolic projection superimposed for reference (i.e. a sideways view). DOS has a small advantage in the first quarter, but it is not until after the fourth quarter that it starts to get established. During this early period network externalities are low and the surface is relatively flat. As network externalities increase, the surface bifurcates, and the trajectory moves downward, locking DOS in as the standard. Between the 13th and 14th quarters there is a major discontinuous shift in the market, and a bandwagon of some 60 firms adopts the Windows version of the software in the 14th period. In order to make this purchase the firms must have already converted to Windows or done so at the same time. Note that externalities continue to increase through the remaining periods, as shown in Fig. 4. The shift of 60 firms represents about 47% of the sample firms. While 60 firms is not directly translatable into total U.S. organizational sales of Freelance for Windows, it is a substantial one-time shift that reasonably mirrors historical information regarding Lotus' decision to ultimately move forward with a Windows-only version and abandon the DOS version.

Fig. 5. XY planar view of organizational adoptions.

Fig. 5 presents the same data from a different perspective. In Fig. 5 DOS develops an initial benefit advantage which tends to get locked in as network externalities increase (the user base expands). However, there is a shift in benefits between periods 13 and 14 in favor of Windows, and there is a sudden move to the right, which crosses the right-cusp boundary signaling the switch away from DOS. The behavior shown is consistent with the sales information at the time Windows 3.0 and 3.1 were taking over the market. Note that after the switch both benefits and network externalities increase, but the increases in network externalities are large in comparison to the rather small gains in benefits. This would be expected in a market where network externalities are growing in importance.

⁴ Note how well the trajectories match the expected movement described by the dotted lines in Figs. 6 and 7. The lines show the a priori adoption path expected for a new market, where a given standard evolves, dominates, and is eventually supplanted by a competitor.

Fig. 6 shows movement from the perspective of the ZX-plane. It shows why the fold model did so well in fitting the data: the figure shows a fold catastrophe, which is not surprising given the underlying structure of the basic catastrophe models described in the literature (Thom, 1975; Zeeman, 1976). Taken together, Fig. 4, Fig. 5 and Fig. 6 present a consistent theoretical and empirical picture of the market as we know it at that time.

Turning to the impact coefficients, we see that the coefficients of X1 and Y1 had values of 0.0073 and 0.1833, respectively. Since the data were standardized, we can compare the impacts of these two coefficients, as is shown in Oliva et al. (1987). It is not surprising that the network externality measure dominates the benefit measure. In markets where compatibility is important, firms are better off if they are on the same standard, even if there are only modest benefits. When network externalities matter, they change the shape of the diffusion curve, and we expect that the impact coefficient of the network externality measure would dominate other measures. Of the various variable combinations tested, it is not surprising that awareness turned out to be the best measure of cost/benefits. This makes sense since awareness can be a proxy for external influence, as mentioned earlier. Norton and Bass (1987) note that technological generations can differ in susceptibility to external influence, which can affect the speed and breadth of diffusion. Additionally, the result is consistent with the marketing literature on diffusion, and more broadly with marketing's hierarchy-of-effects assumptions (see, e.g. Kotler, 1997). It appears that in this study, as awareness increases it provides a sense of security regarding the adoption of the new standard. Finally, the result supports Farrell and Saloner's (1985) assumption that whatever a firm's decision is, it wants other firms to make the same choice.

Similarly, we are not surprised that, of the variables tried, Y1 (complementary goods) provided the best results. For software products, operating-system compatibility is the critical issue. Apple Computer has suffered in the market because of a lack of software titles relative to the vast numbers of DOS-compatible, and subsequently Windows-compatible, products that were available.

The foregoing results clearly illustrate the efficacy of using catastrophe theory models for examining firm adoptions in situations where competing standards are at issue. We believe that they add to the literature by providing a more detailed look at the dynamics involved in such processes.



7. Conclusions

The study reported in this paper has used a nonlinear dynamic approach. Specifically, it has developed a description of adoption between competing standards when network externalities are important by adopting a catastrophe model. As such, we have added to the evidence of firm behavior regarding bandwagons in the presence of network externalities suggested by economists (Farrell and Saloner, 1985; Katz and Shapiro, 1985), because the present results strongly support their basic premise regarding the behavior of firms. Additionally, the extension via catastrophe theory has added a dynamic context for examining adoptions of high-technology products where network externalities are present. In particular, to our knowledge, the dynamic aspect of seeing how network externalities develop in nascent markets had not been empirically shown before. Yet, it is this dynamic that is important in helping to understand how networks evolve.

To be sure, the study is not perfect. We had to work with the existing available data. Clearly, we would have liked to have had input into the development of the original questionnaire. This is, of course, impossible, since it was started many years before this study was conceived.

Table 3
GEMCAT II estimated latent variables

Quarter   Adoption latent Z   External latent Y   Benefits latent X
1         0.000000            0.003259            0.021442
2         0.002194            0.051142            0.030442
3         0.002194            0.062690            0.003442
4         0.002063            0.076071            0.021442
6         0.001771            0.087436            −0.036558
7         0.001677            0.092568            −0.036558
8         0.001771            0.096051            −0.057558
9         0.001327            0.096234            −0.079558
10        0.001283            0.095134            −0.079558
11        0.001064            0.097334            −0.128558
12        0.000575            0.099717            −0.212558
13        0.000000            0.119697            −0.358558
14        0.005366            0.135094            0.361442
15        0.005613            0.140410            0.373442
16        0.005584            0.155441            0.401442
17        0.005810            0.163506            0.410442
18        0.005861            0.174505            0.426442
19        0.005920            0.180554            0.433442
20        0.006036            0.185320            0.447442
21        0.006043            0.189719            0.451442
22        0.006240            0.180004            0.459442
23        0.006204            0.190635            0.464442
24        0.006255            0.201817            0.472442



Getting access to better market data is a key issue for this and related research. However, we believe that since very little has been done in measuring and developing empirical examinations of the network externality issue, this effort is a reasonable addition to the existing literature. More importantly, it yields results that are consistent with theory. From this perspective, we feel that we have provided an attack on the problem by using catastrophe theory, which supplements the more standard approaches that have been used in the past. The result has been fruitful and has added a new dimension to our way of thinking about the problem. Such integrated theorizing across modeling approaches is beneficial. While each approach has its strengths and weaknesses, the two together provide more compelling theoretical and empirical support for understanding how organizations adopt high-technology products, standards, or innovations when network externalities are an issue. Additionally, since this is one of the few papers to attempt to measure network externalities directly and to develop a benefit function at the same time, we hope we have stimulated others to improve and extend work in this area. Clearly, others with access to better databases can do better.

In terms of future research, an important result of this type of analysis is that it might eventually lead to predicting when major shifts will occur. If carefully designed questions allow the development of high-quality measures, baseline models become possible for different markets, and the prediction of bandwagons may ultimately be possible. Comparisons via meta-analysis may then provide key insights into parameter values once good measures are developed. Other future research issues include the behavioral implications suggested by the catastrophe model, some of which can be examined using more traditional techniques. For example, the model suggests the following three research propositions that might be examined in other studies: (1) network externalities can vary across markets; (2) the size of bandwagons will be related to the level of network externalities present in the market; and (3) there are hysteresis effects in market adoptions which increase as the level of network externalities increases, and vice versa. These all have important managerial implications, which flow in different directions depending on whether one is an adopter or a seller of the products involved.

Acknowledgements

This work was supported by grants from the Marketing Science Institute (MSI) and Techtel Corporation of Emeryville, California.

Appendix A. A Brief Review of the Cusp Catastrophe Model

This overview is presented as a brief reminder of the basic cusp model properties. Its intent is to economize the reader's effort by providing a brief overview of the approach.

Fig. 6. XZ planar view of organizational adoptions.

We start by pointing out that the cusp catastrophe model is based on Thom's (1975) ideas as popularized by Zeeman (1974, 1976, 1977). General background reading in catastrophe theory and chaos can be found in Woodcock and Davis (1978) and Guastello (1995). Varian (1979, p. 15) points out that catastrophe theory looks at the 'interactions between short-run equilibria and long-run dynamic processes'. Consider the dynamical system below, where g is the potential function and f is the behavior function (Thom, 1975):

g(Z, Y, X) = \tfrac{1}{4}Z^{4} - XZ - \tfrac{1}{2}YZ^{2},   (7)

Z' = f(Z, X, Y).   (8)

It is presumed that Z is a vector of state variables, Z' represents the vector of their derivatives, and X and Y are vectors of parameters (Varian, 1979). For Z' = 0, a two-dimensional surface in the Euclidean three-space R^3 is created, as depicted in Fig. 1. Catastrophes refer to those points where basic processes change toward states of minimum or maximum potential. Examples are, in mechanics, a pendulum stopping, or, in politics, decisions being based on maximum constituent support.

Such a gradient dynamic system may be characterized by dZ/dt = −∂g/∂Z: given f(Z, Y, X), with Z the dependent variable, Eq. (8) indicates that Z changes in the direction of decreasing potential, at a rate proportional to the slope of the field. The equilibrium set comprises those values of Z for which dZ/dt = 0, i.e., for which the partial derivative of g with respect to Z equals zero.
If, for example, the potential is g(Z, Y) = \tfrac{1}{4}Z^{4} - \tfrac{1}{2}YZ^{2}, the equilibrium set is Z^{3} - ZY = 0. For any given Y there is an equilibrium at Z = 0, and for positive values of Y there are additional equilibria at Z = \pm Y^{1/2}. Exact values of Y predict exact values of Z if we know their trajectory (history); for example, the location of a pendulum in the next second depends on where it is this second. Hence, the term state-descriptive is often used to categorize such systems. If we subtract the linear variable X from the left side of the equilibrium equation, we get Eq. (9) (the discussion here is combined from Isnard and Zeeman, 1976; Cobb, 1978; Fararo, 1978). Eq. (10) is the locus of the cusp, which is the boundary of the overlap area projected onto the XY-plane. This is also detailed in Fig. 1 and Fig. 7.

Z^{3} - X - YZ = 0,   (9)

27X^{2} = 4Y^{3}.   (10)
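For the reader's convenience, Eq. (10) can be recovered from Eq. (7) by the standard fold-point argument (our brief reconstruction, consistent with the formulas above): the bifurcation set consists of the degenerate equilibria, where the first and second derivatives of the potential both vanish,

\frac{\partial g}{\partial Z} = Z^{3} - X - YZ = 0, \qquad \frac{\partial^{2} g}{\partial Z^{2}} = 3Z^{2} - Y = 0.

The second condition gives Z = \pm\sqrt{Y/3}; substituting into the first gives X = Z^{3} - YZ = \mp\tfrac{2Y}{3}\sqrt{Y/3}, and squaring both sides yields 27X^{2} = 4Y^{3}, i.e., Eq. (10).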

In turn, these account for all the different ways in which changes in the two equilibria can occur, when controlled by the two independent variables:

Case 1: There is one stable equilibrium point; Case 2: There are two stable and one unstable equilibrium points; or Case 3: There is one stable equilibrium point, and one at which an instantaneous jump in the state variable occurs (a catastrophe event). Those points in the control plane at which catastrophe events can occur are called catastrophe points (Fischer and Jammernegg, 1986, p. 11). The locus of these points forms a cusp in the control space (Eq. (10)) and gives the model its name. Case 1 describes smooth, continuous movement in behavior, given small changes in the independent variables. This may be interpreted as adjustments in the current equilibrium; such adjustments might take the form of a small increment or decrement in the number of firms using the dominant standard. Case 2 implies that there are two likely equilibrium states and one unlikely state. In terms of this paper and Farrell and Saloner's (1985) work, these two equilibria represent the two standards between which firms are switching, such that there is no halfway or middle-ground position. Case 3 describes the locus of points given by Eq. (10), where sudden switches occur between equilibria. Small changes in the independent variables will leave the market at the current standard unless one of these points is crossed; when that happens, a sudden shift in equilibrium occurs and firms switch to the other standard. From a descriptive standpoint, Zeeman (1976) identifies five conditions that are required for use of a cusp model. These are presented below. We point out that in any given empirical study with a finite sample size, not all conditions would necessarily be found.
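These three cases can be checked numerically. The sketch below (our illustration, not the GEMCAT II estimation software used in the body of the paper; it assumes Python with NumPy) classifies a control point (X, Y) by the sign of the cubic discriminant 4Y^3 − 27X^2 of Eq. (9), whose zero set is Eq. (10), and returns the corresponding equilibria. Inside the cusp the two outer roots are the stable equilibria and the middle root is the unstable one, which also previews the bimodality and inaccessibility properties discussed below.

    import numpy as np

    def equilibria(x, y):
        # Real roots of Eq. (9): Z^3 - X - Y*Z = 0
        roots = np.roots([1.0, 0.0, -y, -x])
        return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

    def classify(x, y):
        # Eq. (10), 27X^2 = 4Y^3, is the zero set of this discriminant
        delta = 4.0 * y**3 - 27.0 * x**2
        if delta > 0:
            return "Case 2: inside the cusp (two stable, one unstable)"
        if delta < 0:
            return "Case 1: outside the cusp (one stable equilibrium)"
        return "Case 3: on the cusp boundary (catastrophe point)"

    print(classify(0.0, 1.0), equilibria(0.0, 1.0))  # roots -1, 0, 1
    print(classify(1.0, 0.0), equilibria(1.0, 0.0))  # single root 1.0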

A.1. Bimodality

Bimodality refers to the area of surface overlap which defines the cusp in Fig. 1. Within this area, the dependent variable can take on one of two different possible values (za, zb) for a given set of independent values (x1, y1). Hence, depending on the system's history, either equilibrium is possible within this area. Relating to this paper, bimodality may be an explanation for Arthur's (1989) theory of potential inefficiency, where inferior standards are chosen over superior ones.

Fig. 7. Network externalities and adoption (side view).

In this case, two choices are offered and the 'wrong' one is chosen. It may also explain Norton and Bass's (1992) contention that potential adopters do not immediately adopt a new standard, no matter what the advantages of the new technology are.

A.2. Divergence

As the magnitude of Y increases positively from the origin, small differences in the value of X can translate into movement in opposite directions along the Z-axis. Hence, slight differences in the initial preference for a standard can result in different market choices as externalities increase. This is depicted in the movement of points F and G in Fig. 1 and Fig. 8. Divergence is consistent with Arthur's (1989) 'non-ergodicity' principle, which suggests that small historical events in the beginning of a market are not averaged away by the dynamics of the system, but rather ultimately decide the winning technology. Wade (1995) found empirical evidence for non-ergodicity in his study of the microprocessor market from 1971 to 1988.
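Divergence is easy to reproduce with the gradient form implied by Eqs. (7) and (8). In the minimal sketch below (ours, assuming Python with NumPy; parameter values are arbitrary), two markets that differ only in a small initial benefit X end up on opposite sheets as the externality Y grows:

    import numpy as np

    def settle(z, x, y, dt=0.01, steps=2000):
        # Relax Z under the gradient dynamics dZ/dt = -(Z^3 - X - Y*Z)
        for _ in range(steps):
            z -= dt * (z**3 - x - y * z)
        return z

    for x in (+0.05, -0.05):                   # slight initial preference either way
        z = 0.0
        for y in np.linspace(0.0, 3.0, 61):    # externalities build up gradually
            z = settle(z, x, y)
        print(x, round(z, 3))                  # ends near +1.74 vs -1.74: divergence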

A.3. Catastrophe

Sudden, discontinuous shifts can occur along the dependent variable dimension. Such events can occur if the independent variable values are within the cusp region and change in a direction that takes them across a catastrophe point. Travel on the folded part of the surface can result in sudden shifts (falling up or down) to the other sheet. Points C, D, and E depict such movement in Fig. 1.

Fig. 8. Benefit threshold as externalities increase.

A.4. Hysteresis

If a shift is made from one equilibrium (sheet) to another, a return to the former equilibrium will not occur at the same values of the independent variables at which the original shift was made. Shifts in equilibria do not occur at the same independent-variable point as they would in a step function; there is a lag, or hysteresis, in the process. Line segments CD and DE indicate such lags in Fig. 1. Once a shift is made from C to D, the return to the original equilibrium occurs along the path DE.
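The same assumed dynamics illustrate hysteresis numerically (again our sketch under the gradient form of Eqs. (7) and (8), not the paper's estimation procedure): sweeping the benefit variable X up and then back down at a fixed Y inside the cusp produces jumps at different X values on the two passes, tracing the CD and DE lags described above.

    import numpy as np

    def settle(z, x, y, dt=0.01, steps=2000):
        # Relax Z under the gradient dynamics dZ/dt = -(Z^3 - X - Y*Z)
        for _ in range(steps):
            z -= dt * (z**3 - x - y * z)
        return z

    y = 1.5                                # inside the cusp; folds at X ~ +/-0.71
    xs = np.linspace(-2.0, 2.0, 81)

    z, up = settle(-1.5, xs[0], y), []
    for x in xs:                           # upward sweep: jump near X = +0.71
        z = settle(z, x, y)
        up.append(z)

    z, down = settle(1.5, xs[-1], y), []
    for x in xs[::-1]:                     # downward sweep: jump near X = -0.71
        z = settle(z, x, y)
        down.append(z)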

A.5. Inaccessibility

The middle connecting sheet shown in Fig. 2 is inaccessible and represents the area of least likely behavior. That is, the dependent variable does not take on middle-sheet values as a result of changes in the independent variables. Virtually all firms will stay with the old technology or move en masse to the new standard, with no middle ground. This property of the surface is demonstrated in a numerical example in Oliva et al. (1988).

The dotted lines in Fig. 7 and Fig. 8 show the expected or theoretical trajectory of a market where one product has a slight initial advantage (G over F in Fig. 1). As the market evolves and network externalities increase, the market trajectory moves downward to the left and the product gets locked in as the standard, as shown in Fig. 7 and Fig. 8. If a competing product can provide significantly better benefits over time, such that the right cusp border seen in Fig. 7 is crossed, a bandwagon will occur as firms move to the new product F, as shown in Fig. 8. For established markets the expected tracks would tend to lie in the area of high network externalities and be limited to horizontal motion in Fig. 7.



References

Alexander, R., Herbert, G., DeShon, R., Hanges, P., 1992. An examination of least-squares regression modeling of catastrophe theory. Psychological Bulletin 111, 366–379.
Arthur, B.W., 1989. Competing technologies, increasing returns, and lock-in by historical events. Economic Journal 99, 116–131.
Brynjolfsson, E., Kemerer, C., 1996. Network externalities in microcomputer software: an economic analysis of the spreadsheet market. Management Science 42 (12), 1627–1647.
Burns, L., Wholey, D., 1993. Adoption and abandonment of matrix management programs: effects of organizational characteristics and interorganizational networks. Academy of Management Journal 36, 106–138.
Business Week, 1996a. The new workplace. (April 29), 1–22.
Business Week, 1996b. Here comes the intranet. (February 26), 1–22.
Cobb, L., 1978. Stochastic catastrophe models and multimodal distributions. Behavioral Science 23, 360–374.
Cobb, L., 1981. Parameter estimation for the cusp catastrophe model. Behavioral Science 26, 75–78.
Fararo, T., 1978. An introduction to catastrophes. Behavioral Science 23, 291–317.
Farrell, J., Saloner, G., 1985. Standardization, compatibility, and innovation. The Rand Journal of Economics 16 (Spring), 70–83.
Farrell, J., Saloner, G., 1986. Installed base and compatibility: innovation, product preannouncements, and predation. The American Economic Review 76 (December), 940–955.
Fischer, E., Jammernegg, W., 1986. Empirical investigation of a catastrophe theory extension of the Phillips curve. The Review of Economics and Statistics, 9–17.
Gandal, N., 1994. Hedonic price indexes for spreadsheets and an empirical test for network externalities. RAND Journal of Economics 25, 160–170.
Greenstein, S., 1993. Did installed base give an incumbent any (measurable) advantages in federal computer procurement? RAND Journal of Economics 24, 19–39.
Gresov, C., Haveman, H., Oliva, T., 1993. Organizational design, inertia and the dynamics of competitive response. Organization Science 4 (May), 1–28.
Guastello, S.J., 1982. Moderator regression and the cusp catastrophe: application to two-stage personnel selection, training, therapy, and policy evaluation. Behavioral Science 27, 259–272.
Guastello, S.J., 1995. Chaos, Catastrophe, and Human Affairs. Lawrence Erlbaum Associates, Mahwah, NJ.
Heide, J., Weiss, A., 1995. Vendor consideration and switching behavior for buyers in high-technology markets. Journal of Marketing 59 (July), 30–43.
Ho, T., Saunders, A., 1980. A catastrophe model of bank failures. The Journal of Finance 35 (5), 1189–1207.
Isnard, C., Zeeman, E., 1976. Some models from catastrophe theory in the social sciences. In: Collins, L. (Ed.), The Use of Models in the Social Sciences. Tavistock Publications, London, pp. 44–100.
Katz, M., Shapiro, C., 1985. Network externalities, competition, and compatibility. The American Economic Review 75 (June), 425–440.
Kauffman, R., Oliva, T., 1994. Multivariate catastrophe model estimation: method and application. Academy of Management Journal 37, 206–221.
Kotler, P., 1997. Marketing Management, 9th ed. Prentice Hall, Upper Saddle River, NJ.
Lange, R., Oliva, T., McDade, S., 1999. An algorithm for estimating multivariate catastrophe models: GEMCAT II. Presented at the AMA ART Forum Conference, Santa Fe, New Mexico, June.
Moriarty, R., Kosnik, T., 1989. High-tech marketing: concepts, continuity, and change. Sloan Management Review 30 (Summer), 7–17.
Norton, J., Bass, F., 1987. A diffusion theory model of adoption and substitution for successive generations of high-technology products. Management Science 33 (9), 1069–1086.
Norton, J., Bass, F., 1992. Evolution of technological generations: the law of capture. Sloan Management Review (Winter), 66–77.
Oliva, T., 1994. Technological choice under conditions of changing network externality. The Journal of High Technology Management Research 5, 279–298.



Oliva, T., 1991. Information and profitability estimates: modeling the firm's decision to adopt a new technology. Management Science 37 (May), 607–623.
Oliva, T., Day, D., MacMillan, I., 1988. A generic model of competitive dynamics. Academy of Management Review 13, 374–389.
Oliva, T., DeSarbo, W., Day, D., Jedidi, K., 1987. GEMCAT: a multivariate methodology for estimating catastrophe models. Behavioral Science 32, 121–137.
Oliva, T., Peters, M., Murthy, H., 1981. A preliminary empirical test of a cusp catastrophe model in the social sciences. Behavioral Science 26, 153–162.
Rosenkopf, L., Tushman, M., 1994. The co-evolution of technology and organization. In: Baum, J., Singh, J. (Eds.), Evolutionary Dynamics of Organizations. Oxford University Press, New York.
Saloner, G., Shepard, A., 1995. Adoption of technologies with network effects: an empirical examination of the adoption of automated teller machines. RAND Journal of Economics 26, 479–501.
Shanklin, W.L., Ryans, J.K., 1984. Organizing for high-tech marketing. Harvard Business Review (November–December), 164–171.
Sheridan, J., Abelson, M., 1983. Cusp catastrophe model of employee turnover. Academy of Management Journal 26, 418–436.
Teece, D., 1986. Profiting from technological innovation: implications for integration, collaboration, licensing and public policy. Research Policy 15, 285–305.
Thom, R., 1975. Structural Stability and Morphogenesis. Benjamin, Reading, MA.
Tornatzky, L., Klein, K., 1982. Innovation characteristics and innovation adoption-implementation: a meta-analysis of findings. IEEE Transactions on Engineering Management EM-29 (1), 28–45.
Tushman, M., Anderson, P., 1986. Technological discontinuities and organizational environments. Administrative Science Quarterly 31, 439–465.
Varian, H., 1979. Catastrophe theory and the business cycle. Economic Inquiry 17, 14–28.
Wade, J., 1995. Dynamics of organizational communities and technological bandwagons: an empirical investigation of community evolution in the microprocessor market. Strategic Management Journal 16, 111–133.
Woodcock, A., Davis, M., 1978. Catastrophe Theory. E.P. Dutton, New York.
Xie, J., Sirbu, M., 1995. Price competition and compatibility in the presence of positive demand externalities. Management Science 41 (5), 909–926.
Zeeman, E., 1974. On the unstable behavior of stock exchanges. Journal of Mathematical Economics 1, 39–49.
Zeeman, E., 1976. Catastrophe theory. Scientific American 234 (May), 65–83.
Zeeman, E., 1977. Catastrophe Theory: Selected Papers 1972–1977. Addison-Wesley, Reading, MA.
Zeeman, E., Hall, C., Harrison, P., Marriage, G., Shapland, P., 1976. A model for institutional disturbances. British Journal of Mathematical and Statistical Psychology 29, 66–80.


Dokumen yang terkait

Analisis Komparasi Internet Financial Local Government Reporting Pada Website Resmi Kabupaten dan Kota di Jawa Timur The Comparison Analysis of Internet Financial Local Government Reporting on Official Website of Regency and City in East Java

19 819 7

Analisis Pengendalian Persediaan Bahan Baku Tembakau Dengan Metode Economic Order Quantity (EOQ) Pada PT Mangli Djaya Raya

3 126 8

FAKTOR-FAKTOR PENYEBAB KESULITAN BELAJAR BAHASA ARAB PADA MAHASISWA MA’HAD ABDURRAHMAN BIN AUF UMM

9 176 2

ANTARA IDEALISME DAN KENYATAAN: KEBIJAKAN PENDIDIKAN TIONGHOA PERANAKAN DI SURABAYA PADA MASA PENDUDUKAN JEPANG TAHUN 1942-1945 Between Idealism and Reality: Education Policy of Chinese in Surabaya in the Japanese Era at 1942-1945)

1 29 9

Improving the Eighth Year Students' Tense Achievement and Active Participation by Giving Positive Reinforcement at SMPN 1 Silo in the 2013/2014 Academic Year

7 202 3

Improving the VIII-B Students' listening comprehension ability through note taking and partial dictation techniques at SMPN 3 Jember in the 2006/2007 Academic Year -

0 63 87

The Correlation between students vocabulary master and reading comprehension

16 145 49

Improping student's reading comprehension of descriptive text through textual teaching and learning (CTL)

8 140 133

The correlation between listening skill and pronunciation accuracy : a case study in the firt year of smk vocation higt school pupita bangsa ciputat school year 2005-2006

9 128 37

Transmission of Greek and Arabic Veteri

0 1 22