Detailed Record

Author: Patricia H. Ju (朱漢璇)
Title: Generalizability in Information Systems Research: A Framework and Methodology (資訊系統之通用性:框架及方法論)
Advisors: Chih-Hung Li (李志宏), Teresa L. Ju (林蘋)
Degree: Master's
Institution: Shu-Te University (樹德科技大學)
Department: Graduate Institute of Information Management
Year of publication: 2007 (ROC year 96)
Graduating academic year: 2006–07 (ROC academic year 95)
Language: English
Pages: 73
Keywords: generalizability, information systems research, design science research, research methodology, research paradigm, software design
Generalizability of research findings increases the usefulness of a research study, allowing its results, theories, and predictions to hold true across multiple populations, geographic places, settings, and times. Generalizability, also referred to as external validity, has been thoroughly researched for certain types of research, namely the positivist (e.g., natural science research) and interpretive (e.g., social science research) paradigms, as well as the postpositivist, critical theory, constructivist, and participatory/cooperative paradigms. For these types of research, extensive literature provides guidelines and methodologies for understanding how research design choices detract from or enhance the generalizability of research findings. For example, the research methodology literature describes factors that threaten external validity and guides researchers in designing empirical experiments and conducting data analyses that mitigate those factors. However, Information Systems Research (ISR) can be conducted as positivist, interpretive, or design science research (DSR), or as a combination of these.

    Information Systems Architects commonly conduct IS-oriented DSR as they design and evaluate artifacts, including programming languages, design patterns, components, software architectures, and frameworks. Although generalizability is often touched upon in the ISR literature, it is usually raised only as an admission of a study's limitations and is rarely confronted head-on with a method for ensuring it. This appears to stem from a limited understanding, and consequently limited literature, of what generalizability actually is in both DSR and ISR, creating a knowledge gap on how to evaluate it.
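
    The notion of a generalizable artifact can be made concrete with a small illustrative sketch (ours, not the thesis's). In the hypothetical Python below, a component whose domain-specific assumptions are hard-wired holds only in one population and setting, while a parameterized redesign of the same component carries across them; all names are invented for illustration.

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

# A domain-specific artifact: hard-wired to one population (hospital
# billing records) and one setting (CSV export). Conclusions about its
# design generalize poorly beyond that scope.
def export_hospital_billing_csv(records: list[dict]) -> str:
    header = "patient_id,amount"
    rows = [f"{r['patient_id']},{r['amount']}" for r in records]
    return "\n".join([header] + rows)

# A generalized artifact: the domain-specific parts (field selection,
# value access, formatting) are parameters, so the same design holds
# across populations and settings.
def export_table(
    items: Iterable[T],
    columns: list[str],
    get: Callable[[T, str], object],
    sep: str = ",",
) -> str:
    header = sep.join(columns)
    rows = [sep.join(str(get(item, col)) for col in columns) for item in items]
    return "\n".join([header] + rows)

# The specific exporter becomes one instantiation of the general design.
def export_hospital_billing(records: list[dict]) -> str:
    return export_table(records, ["patient_id", "amount"], lambda r, c: r[c])
```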

    This study synthesizes three research methodologies to explore ISR generalizability: the Delphi Method, Expert Interviews, and Participant-Observation. Two university professors with extensive experience in the IS industry, both directly involved in the design of a knowledge representation framework, take part in a Delphi Method interview process and in Expert Interviews. The author, also with extensive industry experience as a Software Architect, contributes as a participant-observer. The Delphi Method process allows common, strongly held viewpoints on generalizability to converge, leaving outlier viewpoints that would normally be discarded. The Delphi Method, however, is prone to specialization bias: when one expert specializes in a particular aspect of the domain, his or her contributions may not be understood or valued by experts who do not share that specialization. Because this is an exploratory study, and in light of that limitation, the outliers are further examined using the Expert Interview and participant-observation methods. In this way, outliers previously undervalued by mistake still have a chance to surface in importance, based on expert interview results, participant observation, and continued literature review.
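
    As a minimal sketch of the convergence step (our illustration, assuming Likert-style panel ratings; the thesis does not publish its instrument here), a Delphi facilitator might treat a tight interquartile range with a high median as consensus and route everything else to the follow-up interviews rather than discarding it:

```python
import statistics

# Hypothetical panel ratings (1-5 Likert) for candidate viewpoints about
# generalizability, collected in one Delphi round. Names are invented.
ratings = {
    "parameterize domain assumptions": [5, 5, 4],
    "statistical sampling limits apply": [2, 5, 1],  # specialist outlier
    "document artifact scope explicitly": [4, 4, 5],
}

def quartiles(scores: list[int]) -> tuple[float, float]:
    """Return (Q1, Q3) using the inclusive quantile method."""
    q = statistics.quantiles(scores, n=4, method="inclusive")
    return q[0], q[2]

consensus, outliers = [], []
for viewpoint, scores in ratings.items():
    q1, q3 = quartiles(scores)
    # A tight interquartile range and a high median signal convergence;
    # everything else is routed to expert interviews, not thrown away.
    if (q3 - q1) <= 1 and statistics.median(scores) >= 4:
        consensus.append(viewpoint)
    else:
        outliers.append(viewpoint)

print("converged:", consensus)
print("re-examine via interviews:", outliers)
```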

    Before validating the generalizability of any research undertaking, we need to understand what to assess and how to assess it; thus this study first develops the ISR Generalizability Framework, in which the constructs of Design, Scope, and Generalizability are defined and their relationships explained for a better understanding of generalizability in ISR. Scope varies from study to study, so the domain-specific requirements for generalizability must be determined for each study. Requirements of this kind, such as statistical sampling limits, have been examined in detail for the positivist and other research paradigms, but not for ISR. Therefore this study also develops the ISR Generalizability Analysis Methodology to aid IS researchers in determining the generalizability requirements of their specific ISR study. Because such requirements differ from study to study, the methodology must transcend the domain-specific or system-specific details of any particular research project; care is taken to maintain a generalized approach that is easily applied to any ISR study.
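
    One way to picture the three constructs and their relationships (our reading of the abstract, not the thesis's formal definitions) is as a small data model in which Generalizability is assessed as a relation between a Design and a Scope:

```python
from dataclasses import dataclass

# A hypothetical rendering of the framework's constructs; field names
# and the toy assessment rule are our assumptions, not the thesis's
# formal definitions.

@dataclass
class Scope:
    """Where the artifact's results are claimed to hold."""
    populations: list[str]
    settings: list[str]
    domain_requirements: list[str]   # e.g. "statistical sampling limits"

@dataclass
class Design:
    """The ISR artifact under study and what it accounts for."""
    artifact: str                    # e.g. "framework", "design pattern"
    addressed_requirements: list[str]

@dataclass
class GeneralizabilityAssessment:
    """Generalizability as a relation between a Design and a Scope."""
    design: Design
    scope: Scope

    def threats(self) -> list[str]:
        # Toy rule: any scope requirement the design does not address
        # is a threat to generalizability across that scope.
        return [r for r in self.scope.domain_requirements
                if r not in self.design.addressed_requirements]

assessment = GeneralizabilityAssessment(
    design=Design("knowledge representation framework",
                  addressed_requirements=["explicit artifact scope"]),
    scope=Scope(["enterprise IS teams"], ["industry", "academia"],
                ["explicit artifact scope", "statistical sampling limits"]),
)
print(assessment.threats())  # ['statistical sampling limits']
```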

    Generalizability in ISR both saves costs and improves reliability and maintainability. Previous literature offers little to help IS researchers and IS practitioners understand, control for, and analyze the generalizability of their ISR artifacts in a holistic manner. The results of this study draw on established literature in software reusability and IS design and add a new dimension of domain-specific scoping requirements. It is hoped that the ISR Generalizability Framework and the ISR Generalizability Analysis Methodology developed in this study will aid future researchers and industry information architects in assessing and improving the generalizability and reusability of their ISR studies.

    Keywords: generalizability, information systems research, design science research, research methodology, research paradigm, software design
Abstract  ii
Acknowledgments  iii
Table of Contents  iv
List of Tables  v
List of Figures  vi
Chapter 1  Introduction  1
    Section 1  Research Background - Generalizability in Research  1
    Section 2  Research Goal  5
    Section 3  Research Limitations  5
    Section 4  Thesis Structure and Flow  6
Chapter 2  Literature Review  7
    Section 1  Research Paradigms  7
    Section 2  Generalizability in the Research Paradigms  18
    Section 3  Information Systems Architecture  24
Chapter 3  Research Methodology  30
    Section 1  Synthesizing Three Methodologies  30
    Section 2  Sample Subjects and Study Case  35
Chapter 4  Analysis: Developing the Framework and Methodology  37
    Section 1  Interviews: Subjects and Responses  37
    Section 2  Developing the ISR Generalizability Framework  43
    Section 3  Developing the ISR Generalizability Analysis Methodology  54
Chapter 5  Conclusions  63
Chapter 6  References  68
Autobiography  73