Case Study 2: Theory-testing research: testing a necessary condition

5.4.8 Measurement

To check whether a case innovation project was successful, and therefore could be included in the study, we determined success with a questionnaire filled out by the project manager of that project. Items on project performance in our questionnaire asked for specific judgements regarding: meeting the time-to-market deadline; adherence to interim project deadlines; quality of the project; and budget performance of the project. A control item asking for an overall judgement of project performance was also included. For each indicator we measured actual performance relative to expectations as perceived by the project managers on a five-point scale ranging from “very disappointing performance” to “a performance level well beyond expectations”. First, the average score for the first four items was calculated. Next, to reduce measurement error even further, we averaged the score for “overall project performance” with the average for the four items. Successful projects were defined as projects with a score of three or higher, which means that the project performed in line with expectations or better (the calculation is sketched below). From the 30 projects that we analysed, we identified 15 successful projects; hence, our cases.

For each case, the type of innovation was determined based on the qualitative project descriptions that we had collected. Additionally, the project manager filled out a questionnaire to determine a project’s degree of interface change, using a four-point rating scale about “the degree of uncertainty regarding the interfaces to connect the application to the network” and “the degree of standardization of the platform to which the application was connected”. The latter scale ranged from “no standards” to “highly standardized”. Usually, newly introduced networks employ tailor-made platforms, whereas over time standardized platforms emerge that manage the development and interconnection of applications. To rate a project’s degree of component technology change, we used a rating scale for “the uncertainty regarding the costs to develop this application”.

For the distinction between core and peripheral projects we also primarily drew on the interview data with the project manager. We followed Gatignon et al. (2002), who characterize core components as strategically important to the firm and/or tightly coupled to the larger system. During the interviews, we assessed the strategic importance of the application to the operator. We could corroborate these findings with data from the questionnaire item asking for “the urgency felt by the network operator to introduce this application quickly”. We hypothesized that operators experience high urgency for strategically important applications in order to build a customer base quickly. The extent of coupling, i.e. the number of interfaces between an application and the network, was determined based on the technical characteristics of the project. Some applications, such as voice services or person-to-person text messaging, are integral parts of the mobile network, i.e. interconnected with many network elements. In contrast, peripheral applications are often connected to the mobile network, or in many cases to an application platform, through a single interface.
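The scoring procedure for project success can be sketched as follows. This is an illustrative sketch only: the function and variable names are ours, and the example ratings are hypothetical rather than data from the study.

```python
def project_success_score(time_to_market, interim_deadlines, quality, budget, overall):
    """Compute the project performance score described above.

    All inputs are ratings on the five-point scale from
    1 ("very disappointing performance") to
    5 ("a performance level well beyond expectations").
    """
    # Step 1: average the four specific performance items.
    specific_avg = (time_to_market + interim_deadlines + quality + budget) / 4
    # Step 2: average that result with the overall-performance control item.
    return (specific_avg + overall) / 2


def is_successful(score, threshold=3):
    """A project counts as successful if it performed in line with
    expectations (a score of three) or better."""
    return score >= threshold


# Hypothetical example (not actual study data): a project rated
# 4, 3, 3, 2 on the four specific items and 3 on overall performance.
score = project_success_score(4, 3, 3, 2, 3)   # (3.0 + 3) / 2 = 3.0
print(score, is_successful(score))             # 3.0 True
```

Note that averaging the control item with the mean of the four specific items gives the overall judgement the same weight as the four specific items combined (one half versus one eighth each).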
For each case, i.e. for each successful project, we determined the organizational configuration by assessing the four dimensions of the organizational form (coordination integration, ownership integration, task integration, and knowledge integration) in a qualitative interview with the project manager. Based on the interview data, we characterized each dimension as a low, medium, or high level of integration. To check the measurement validity of our ratings, we compared the researcher’s ratings of ownership integration and task integration with the ratings by the project manager for these dimensions. The project manager rated these dimensions on a five-point scale using a questionnaire with the statements “the extent that the operator invested in the mobile application development project” and “the extent that the operator performed the project tasks”. No major deviations were found between the assessment of the researcher based on the interview data and the assessment of the project manager in the questionnaire.

We performed the following procedures to collect the data. As indicated above, the project managers of the different projects were our key informants for the dependent concept, the independent concept, and the classification of the project into one of the six types of innovation. For each project performed in a single firm, the project manager was interviewed. If multiple firms were involved in the project, we interviewed only the project manager from the most important firm (although in some cases we did interview project managers from multiple firms). At the project manager’s company, each project manager first completed a questionnaire in the presence of the researcher. Our presence allowed us to clarify the questionnaire if necessary and might also have acted as a barrier to self-report bias. The questionnaire contained questions not only about the organizational dimensions of the project but also about the respondent’s opinion on the performance of the project. After having completed the questionnaire, respondents were interviewed in a semi-structured way, covering the same topics as in the questionnaire and in the same order. The researchers’ prior experience in the mobile telecommunications