Kültür ve iletişim, 2001, 4(2), s. 185-207

Methodology Issues:
Problems in Published Empirical Research in Turkey

İrfan Erdoğan

Abstract

This article, working mainly within the positivist-empiricist theoretical framework, is an assessment of the present state of empirical research design and statistical analysis in Turkey. The main objective of the study is to illuminate the problem areas in applied and/or administrative social research and to prompt concerned parties to design research that determines the extent of the problem and provides proper suggestions for plausible solutions. Examination of published empirical research indicates that there are widespread problems of design and statistical usage, stemming from a lack of knowledge, expertise, ethics and rigor (from the standpoint of the mainstream theory) and rooted in the dominant mode and relations of academic life (from the perspective of Marxist-oriented critical schools in general).

Introduction

Academic life in Turkey, including the mode and relations of academic and social production, is full of problems waiting for pertinent solutions. Academicians in their studies, master's and doctoral students in their theses, and private research firms doing public opinion and/or marketing research for their clients in Turkey increasingly use empirical research methods and statistics. This article focuses on grave errors made in empirical research designs and statistical analyses in Turkey. The objective is to explore the problem areas in applied and/or administrative social research and, hopefully, to motivate concerned parties to design research that determines the extent of the problem and puts forward the necessary suggestions for corrective measures. The article uses the positivist-empiricist theoretical framework; thus it does not critically evaluate the epistemological foundations of positivist-empiricism, but rather concentrates on problems of design and usage.1

I did not turn to any popular authority (neither god nor any famous professional academician) to seek support for my evaluation, because I have no identity problem; I am not in need of proving my academic knowledge and intellectual ability via the doings and sayings of "advanced and better others." That is why the quality, validity and worth of this article should not be judged according to how extensively the words of acknowledged authorities are used in it. It is time to recognize our own worth and to stop searching for ourselves somewhere else (being deprived of our own history), in those who control our material and intellectual resources (being deprived of self-determination).2

Published research, including books, journals, dissertations and reports, is used for the evaluation. This article can be considered a pilot or opening study urging concerned academicians to design and conduct more specifically targeted and detailed ones.3

The order of presentation follows the general steps of survey research design: problems with problem formulation are analyzed first, followed by the theoretical framework, related studies, research questions and/or hypotheses, research method and findings. Each stage of design is analyzed for errors, inconsistencies and misuses.

Problem Formulation

Scientific investigation begins with asking questions that lead to learning, explaining, predicting, experimenting, observing and, consequently, advancing the limits of the knowledge accumulated to date. The selection and formulation of the research problem affect all subsequent research activities, because it is the starting point of a specific inquiry. A scientific study begins with an introduction that principally includes the problem formulation, the statement of the objective and the importance of the research. Problem formulation is supposed to provide empirically testable and feasible questions. The following are the main problems found in problem formulation:4

It is hard to find a proper problem formulation in any research. There are only statements of some ideas and facts, but no conclusive arguments leading to the identification of a problem (or issue) and to the setting of the goals and importance of the research.

Properly titled research is extremely hard to find. Most titles read like book titles. For instance, titles like "Sustainable Tourism and Turkey," "Democracy and Media," "Sport and Media," "Internet and Democracy," "Olympics and Tourism," "What is Rural Tourism" and "Terrorism and Tourism" are book titles. Some articles do not provide the basic information about the research in the title. Others do not reflect the actual content of the article.

One cannot use concepts like Turkey, Turkish people, Turkish corporations, British tourists, hotels in Turkey or Turkish media in a title unless it is a parametric study covering Turkey, Turkish people, Turkish corporations, British tourists, Turkish hotels or Turkish media. If the title has the word "media" and radio is not included in the content, then there should be a convincing rationale for omitting radio.

The objectives of research are mostly misstated or confused with research procedures. In some studies, there is no relation between the presented objective and the content of the research. Researchers should understand that statements of "what to do" do not constitute the objective of the research. The objective requires a convincing answer to the following question: why do you do what you want to do? For instance, stating that "the objective of this report is to devise a comprehensive and detailed map of media in Turkey" indicates only what is going to be done; it does not state the objective of the research. The objective is to state why you want to devise the map.

Some stated objectives represent deliberate lies or unconscious falsehoods. Most public opinion research purportedly tied to public policy, done by public authorities or private interests, states unrealizable false objectives with ulterior motives.5 For instance, a statement like "the goal of the research is that findings will be used for the determination of policies in information technologies in Turkey" is a surreptitious, if not unsubstantiated, assertion, because public opinion research findings on technology are hardly ever used for the determination of public policy; instead they are used for policy justification. The Committee on Atomic and Nuclear Energy, hiding its identity, designed a survey with leading and ideologically loaded questions, aiming at mind management through questions accompanied by pseudo-informative explanations in the questionnaire. It stated its objective as learning from concerned people and, in return, arming them with the right information on nuclear energy. This is outright, if inconspicuous, chicanery. In short, such survey research serves as a mind management tool for the interests of industrial and state structures.

The importance of the research is very rarely stated in studies. If stated, it is misdirected and tied to the success of, for instance, the tourism industry, a firm, an institution or an organization; thus academic importance is ignored, brushed aside or misunderstood. It is misunderstood in the sense that the so-called academic article has a specific importance and serves a well-known purpose: it is a tool for bureaucratic advancement, because the writer collects points for promotion, for instance from assistant professor to associate professor. This is the dominant importance and unstated goal of the article. There are very few empirical articles written by full professors in academic journals in Turkey. The basic reason is obvious: they are at the top of the bureaucratic ladder, and nobody asks them to produce anything academically.

The statement of a study's importance matters because the ultimate objective of problem formulation is to explain and predict social phenomena not simply as a pure academic exercise, but in order to understand social issues and contribute to their solution. Namely, research problems should have social (economic, political and cultural) relevance. It seems that researchers either have no idea about the social (and ideological) relevance of their study or are perfectly aware that the relevance is to serve the interest of a firm, a specific group or an institution.

Derivation of the research problems, which is one of the most necessary requirements of scientific inquiry, is simply nonexistent. Thus, such research seriously lacks academic rigor and scientific character.

One is unlikely to find any research that integrates the materials used and the opinions presented in the introduction and, consequently, formulates the problems to be studied appropriately.

Related studies are an integral part of scientific research; however, they are either not used or used erroneously. Related studies are supposed to function as a means of problem formulation, objective setting and statement of importance. A study using related studies in an appropriate and correct way is simply nonexistent. Where related studies are used, as in master's and doctoral theses, they are used wrongly, because it means nothing merely to line up a series of studies, their findings and/or theoretical statements in the area of interest.

A descriptive presentation (or promotion) of a measurement or data collection tool or procedure (e.g., ISO 9000, GIS, communication auditing, the critical incidents technique) disguised as a research article cannot have scientific value. Designing a study in order to demonstrate the "critical incident technique" or to show how to conduct "a communication audit in an organization" is not a scientific endeavor at all.

Model building is a serious undertaking that requires deep knowledge of theory and research. One cannot build a model by simply drawing a flow chart and explaining its components.

Use of a model in scientific research ultimately means testing the model, not sales promotion of it via description and qualitative evaluation. For instance, a study on "increasing service quality by using the work character model" should focus not on conceptual definitions and descriptions, but on testing the model via experimental design or longitudinal observation.

One of the gravest design problems is to prepare some questions, collect the data, run some correlations and then try to come up with some findings. Trying to make sense of assorted primary and secondary data after the fact is not the proper way of scientific design and inquiry.

Theoretical Framework

A research issue or problem in a scientific investigation should have theoretical significance. It should be connected to a set of interrelated empirical generalizations (a theory); otherwise it is atheoretical and scientifically insignificant.

A statement of theoretical framework is customarily not expected when an administrative or applied social research project is designed. However, it is necessary to provide a theoretical rationale when doing empirical research for academic purposes. The basic problems with theoretical frameworks are as follows:

  1. A theoretical framework is missing in almost every study, with the exception of some academic studies and theses.

  2. The theoretical framework in empirical academic studies and theses is confused with conceptual definition. A concept is placed within a theoretical framework when it is conceptually defined in a specific way. Conceptual definition is required in order to provide a theoretical framework for a concept so that an operational definition can be formulated for observation. Namely, a concept should first be theoretically and then operationally defined; the operational definition transforms the concept into a measurable variable. Otherwise, measurement (observation) is not possible, and empirical testing or observation cannot be validly and reliably realized (a minimal sketch of this chain from concept to measurement follows this list). Theoretical framing and operational definition require adequate knowledge and expertise that can hardly ever be found in applied and scientific research in Turkey.

  3. Unfortunately, most researchers have little or no idea about the theoretical structure of a study. For instance, it is wrong to state that "the theoretical framework of the study is determined through the gathered information and findings. Then, a field research based on this theoretical framework was devised."

  4. A statement of theoretical rationale seems unnecessary in marketing-oriented public opinion studies, because of the nature and objective of such research. However, the researcher is supposed to be aware of the importance of a theoretical basis, even if it looks completely needless or dispensable.

  5. The integration of the theoretical framework with the derivation of research questions and with the evaluation of findings cannot be found in any research at all.
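To make the chain from concept to conceptual definition to operational definition concrete, the following is a minimal Python sketch. The concept ("television exposure"), its conceptual definition and the particular measurement rule chosen here are hypothetical illustrations, not taken from any of the studies discussed.

    # Concept: "television exposure" (hypothetical example).
    # Conceptual definition: the amount of attention a person routinely
    #   gives to television content.
    # Operational definition (one possible choice among many): self-reported
    #   hours of viewing on an average weekday, recorded as a ratio-level
    #   variable bounded by a plausible range.

    def television_exposure(reported_hours_per_weekday: float) -> float:
        """Return the operationalized value of the concept for one respondent."""
        # The measurement rule is explicit: answers are clipped to 0-24 hours.
        return max(0.0, min(24.0, reported_hours_per_weekday))

    # A hypothetical respondent who reports 3.5 hours of weekday viewing:
    print(television_exposure(3.5))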

Derivation and Statement of Hypotheses or Research Questions

A research question or a hypothesis does not come out of thin air. It cannot simply be stated and then investigated; there should be a rationale for each research question or hypothesis. A researcher should know that hypotheses and research questions are testable statements derived from theoretical reasoning. The primary problems in this respect include the following:

  1. Some studies have no research questions or hypotheses whatsoever. Some just state the research questions or hypotheses without any rationale. Others state them in the method or findings section of the study. No derivation of, or discussion leading to, a hypothesis or research question can be found in any study at all.

  2. Multi-factor relations are presented in some studies, but only bivariate analysis is done. Besides, it is not the number of variables/factors that makes a study a multivariate design, but the nature of the design and of the statistical analysis.

  3. Wrong or baseless causal relationships are established in some designs because of the lack of theoretical reasoning. For example, it takes an urban-prejudiced mind to establish causality between environmental sensitivity and the readership of an environmental magazine by rural and urban dwellers, because the result is obvious (urban people will appear more environmentally sensitive because they read the magazine). One cannot infer environmental sensitivity from the readership of a magazine, because people can be environmentally sensitive and yet, especially in rural areas, have no access to the magazine, not be able to afford it, have no time for it, or see no need to read it. Namely, readership of an environmental magazine does not make a group environmentalist. Similarly, it is ridiculous to assume causality between the existence of a marketing department in a firm and the selection of a marketing channel, or between owning or renting a business building and the selection of marketing channels. Likewise, a convincing rationale is needed in order to hypothesize a positive causal relationship between work performance (as the independent variable) and work attachment (as the dependent variable).

Method

The method section of empirical research is supposed to provide detailed information on the modus operandi of a study. This is the section wherein the researcher explains how the research is conducted in order to collect reliable and valid data for the hypotheses or research questions. The main problems are as follows:

  1. The method section of some studies includes unnecessary conceptual definitions. Some of these definitions, drawing on different theoretical approaches, provide detailed and conflicting accounts of the concept but never reach a synthesis.

  2. The definition of a concept requires a proper statement of the defining characteristics of the word. Concepts are defined at two levels of abstraction: theoretical and observational. Definitions at the theoretical level are called conceptual definitions, which define concepts by means of other abstract concepts. Definitions at the observational level are operational definitions, which make a theoretical concept observable. A concept cannot be measured unless it is operationally defined. Unfortunately, it is hard to find any study with proper theoretical and operational definitions. Some uses are wrong because of the lack of a theoretical definition of a term. For instance, a study finds an increase in the number of newspapers in Turkey and relates the increase to the fact that newspapers engage in consumer goods promotion and sale by using coupons. It then concludes that newspapers have been transformed into tools for the consumption of various consumer products. There are at least two interrelated mistakes: (1) the underlying concept of communication via newspapers is wrong, because newspaper communication is not limited to symbolic interaction through the written word (news, sports, editorials, etc.); it also includes interaction through the written word that orients readers toward commodities and sets the conditions for, starts and completes the exchange of goods. (2) The conclusion is wrong because the causal relationship established between commodity promotion and "transformation into tools of consumption" is not correct. Commodity sale or promotion does not make newspapers tools of consumption, but a commercial enterprise selling and promoting symbolic and material forms that lead to consumption. Newspapers are still tools of communication, because communication is a necessary condition of social interaction of any kind.6

  3. Concepts are used carelessly and wrongly; thus factors and items are not understood correctly. For instance, the "physical and cultural travel motivations of tourists" are equated with various reasons for travel. This is wrong because motivations are not the reasons for travel, but the psychological drives underlying those reasons. Another study indicates that there are 10 thousand radio receivers in Turkey as compared to 20.5 thousand TV receivers. Based on this finding, it concludes that radio is not as widespread a communication tool as television. The statistics, and thus the conclusion, are wrong, basically because "radio receivers" is not operationally defined correctly. People do not listen to the radio only at home, but also at work, outside, on the street, on the way to and from work and especially in their cars; that is, there is radio-set ownership outside the home, and it is wrong to limit radio receivers to the ones at home. Furthermore, another mistake is made by equating media use with ownership of a medium: ownership should not be confused with the extent of use. In yet another study, property relations are confused with ownership. The study refers the reader to a table presented as a map of property relations, but the table shows the distribution of ownership of firms by corporations (who owns what). Property relations include the pattern and structure of ownership, but they are not merely ownership. The mistake is due to a lack of theoretical knowledge or rigor.

  4. Unit terms, character terms, relational terms and constructs should be operationally defined. Almost none of the studies provide operational definitions for the variables to be measured. That is why there are a lot of mismatches, scaling problems and measurement errors. For instance, "media access" is not defined, but the reader sees that it refers to the number of radio and TV receivers. Media access is then correlated (without any statistical test) with "information rich" and "information poor." Herein there is no theoretical framework, no proper theoretical definition of media access or of the audience (information rich and information poor), no operational definition and no statistical test. One cannot become information rich or information poor because of the extent of media access defined as use of the finished media product. Quantitative abundance of media products may indeed mean a profusion of junk; thus "information poor" may indeed mean "junk poor." That is why access should be tied to the means and modes of media production. In another study, two subtitles (access and use) are given, but both are defined as the number of users: access is equated with the frequency distribution of Internet use in six geographical regions of the globe. Researchers should know that access and use are interrelated but separate terms.

  5. A concept is not a variable. A variable is not necessarily “something that changes.”

  6. There should be only one operational definition for a variable. For instance, in one study, two criteria for the operational definition of a variable are given as an instruction to the interviewer: occupation is defined as field of education and as personal ability. This is a grave mistake, because two different definitions of a variable, even if correct, require two different measurements and evaluations. Besides, a concept is not supposed to be operationally defined for the interviewer at the stage of conducting the survey.

  7. Two or more concepts cannot be combined into one variable and operationally defined. For instance, "physical and mental relaxation" cannot be measured as a single variable, because a single operational definition cannot be provided. Either it is defined as relaxation, with relaxation then grouped as physical, mental and so on, or it is treated as two variables, defined and measured separately.

  8. Another measurement problem is that some researchers have little, no or wrong knowledge about levels of measurement. That is why their measurement designs lack consistency, reliability and validity.

  9. The type of research is generally not stated, is misstated or is stated with no explanation. It is not enough to write down that, for example, it is a field study. The type of research should be stated, and a brief discussion should explain why this type was preferred over others.

  10. The difference between, and the importance of, parametric and non-parametric studies, and the relation among population, sampling frame and sample, are not clearly known. Knowledge about sample size is inadequate and generally wrong. For instance, a study indicates that there are 92 five-star hotels and that 84.7 percent of the questionnaires sent to them were filled in and returned. The researcher is concerned with the problem of representativeness because of the 84.7 percent return. He/she need not be concerned, because he/she is not using a sample; he/she is using the population.

  11. Studies talk about a "universe" and indicate that they extracted their sample from this universe. The concept of universe is misunderstood. One cannot extract a research sample from the universe, and one cannot generalize to an undefined and unidentified universe. The population is the theoretical definition of a universe. In a parametric study, findings are generalized only to this defined population, because the sample is drawn from a sampling frame, that is, the accessible population tied to the theoretical one.

  12. Sometimes the type of study is named, but no such research type exists in the literature. For instance, "collecting data via questionnaire" is stated as a research type. Some researchers invent a research type called a "conceptual study"; in fact, their study is a kind of extremely primitive theoretical research. Furthermore, the study is often not of the type that is stated, but something else. For instance, a study titled "a conceptual study on increasing service quality" implies that some kind of conceptual, and thus theoretical, discussion of service quality will be provided. However, the study is nothing more than descriptive publicity for a model of effective management.

  13. Another measurement problem is designing a question that does not measure what it is supposed to measure, e.g., "How well do you know a foreign language?" How can one distinguish one person's language level from another's on the basis of such a self-reported value question? Or how can one measure proficiency in English by asking people to rate themselves on an ordinal scale? It is likewise wrong to ask students to evaluate the advancement opportunities or salaries in a sector, or to evaluate the curriculum of a school, because the students are not the right source of that information.

  14. Data collection procedures are generally stated, but they are either simply named or full of mistakes. It is not enough to state that the study is a content analysis or a discourse analysis. Some studies name the data collection method, but they completely lack systematic analysis, because the method is merely mentioned, not properly and expertly used.

  15. Generally, the wrong sources of data are identified and used. For instance, the objective of a study is stated as "to find the number and extent of cellular phones used," and a sample of phone users is used for data collection. In another study, the objective was to determine the number of cars in use in Istanbul, and the data source was a sample of some 2,000 people. One cannot make a correct estimate by using a sample of phone users or of Istanbul dwellers, because the right source of data lies elsewhere. It is preposterous, if not intentional, to ask municipal administrators whether their solid waste landfill causes a foul smell and annoys the surrounding communities. Asking the wrong people the right question provides only invalid data. Does asking British tour operators "why do British tourists prefer Turkey?" give us reliable and valid data? Absolutely not, unless for some reason we want to know the projections of tour operators.

  16. Problems with questionnaire design are manifold. The gravest is to translate survey questions and scales developed in the United States or elsewhere and use them as they are. Currently the most popular example is value analysis.

  17. Questionnaire development is not done properly. One cannot simply prepare some questions and conduct a survey. But one can in Turkey.

  18. Questionnaire design is packed with double- and even triple-barreled questions. Some examples: "Did your child attend primary, secondary or high school in a private school? Yes or No." "Did you plan and/or implement a study that requires funding? Yes or No." "Physical and mental relaxation," "interest in art-music-architecture and folklore" and "entertainment-excitement" are treated as three variables measured with a Likert-type ordinal scale. In fact, these are double- and triple-barreled questions, and thus completely wrong.

  19. Rules of nominal and ordinal category formation are broken in questionnaire design:

    The mutual exclusiveness rule is not complied with. For example, the categories of a closed-ended question include "social scientist, faculty teaching staff and architect." My wife is a natural scientist, a social scientist, an architect and, at the same time, faculty teaching staff. The forced choices in another study include 1. At home, 2. Outside, 3. Restaurant. Another one: 100-150, 150-200, 200-250, etc. These are all wrong (see the sketch following these rules).

    The exhaustiveness rule, necessary for collecting reliable and valid information, is generally not followed. Instead, predetermined categories or choices that fit the objective of the researcher are offered. This is a common problem in questions forcing respondents to choose among given selections. Adding an "other" choice is not always a proper solution, since the given choices influence people.

    Inconsistent, irrational and/or unrelated categorization is provided. For instance, "What kind of work do you do at present?" (work being defined as an activity that brings income), yet some of the forced selections are student, housewife, retired and unemployed.

    Too many categories are provided. For instance, 22 categories of occupation and 21 categories of income are too many to handle. How can one do univariate and bivariate analysis with so many categories? Technically one can, but one cannot make a meaningful evaluation.

    Unnecessary and/or groundless categorization of interval measurements is provided. For instance, age is grouped into five categories: 25 and less, 26-30, 31-35, etc. Questions like "What makes the difference between a 30-year-old and a 31-year-old? Why five categories and not six?" cannot be answered in such a categorization. There must be a convincing rationale for the group intervals.

    Some categories in some studies are ideologically loaded or deliberately designed, thus subjective and leading.

    Some studies use the wrong criteria for grouping: small-sized business (grocery owner), medium-sized business (at most 10 workers), large-sized business (more than 10 workers). My brother-in-law employs 13 workers in his sweatshop and KOÇ Holding employs tens of thousands of people. Are they both large-sized businesses? Can you put them in the same group?

    Ordinal scales are not properly designed or balanced. For example: 1. good, 2. medium, 3. bad, 4. very bad; 1. not satisfied at all, 2. not very satisfied, 3. partly satisfied, 4. very satisfied; 1. do not agree, 2. generally agree, 3. totally agree. None of these scales is right.

    Inconsistencies between the question and its categories abound. For example: "Do you watch TV every day of the week? 1. Every day; 2. 5-6 nights; 3. 3-4 nights; 4. 1-2 nights; 5. Seldom; 6. Other." The researcher is not aware that the unit of measurement is "days of the week"; thus "seldom" is not appropriate, and "other" cannot be used because no other possibility is left. Besides, the question itself is not properly designed.
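As a concrete illustration of the category rules above, the following minimal Python sketch (with hypothetical income figures and cut points) builds answer categories that are mutually exclusive and exhaustive: each value falls into exactly one half-open interval, unlike the overlapping "100-150, 150-200" scheme criticized above.

    import pandas as pd

    # Hypothetical monthly income responses.
    incomes = pd.Series([95, 100, 150, 151, 199, 200, 260])

    # Bin edges chosen with an explicit rationale in mind; intervals are
    # open on the left and closed on the right: (0, 100], (100, 150], ...
    bins = [0, 100, 150, 200, 250, float("inf")]
    labels = ["100 or less", "101-150", "151-200", "201-250", "251 or more"]

    categories = pd.cut(incomes, bins=bins, labels=labels)
    # Every respondent lands in exactly one category (mutual exclusiveness),
    # and every positive income is covered (exhaustiveness).
    print(categories.value_counts().sort_index())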

  20. Some studies have ideologically loaded questions. For example: "Do your students gain sufficient practical skills by the time they graduate? Yes / No." (The school provides a liberal arts education; it is not a job-training school or a community college.)

  21. Statements about the statistical analysis in some studies are either nonexistent or lack proper explanation. Furthermore, it is not enough to state that SPSS was used for data analysis. SPSS is only a tool, a package program for statistical analysis; it does not analyze the data for us.

  22. The scope and the limitations of research are not understood correctly. The scope, or delimitation, is not the same as the methodological or other limitations of a study.

  23. Some researchers present formulas to explain the test they use (e.g., ANOVA). Others explain how to read a factor analysis table. This is done either because the researcher does not know that there is no need for such an explanation or because he/she wants to impress the reader.

  24. Statistical analyses in some studies are used or interpreted wrongly. For instance, a researcher studying the difference between males and females indicates that "according to the Levene test results, F = 0.835 and p = 0.364 are found; thus, there is no difference between the groups." This is a wrong interpretation, because the Levene test determines whether the group variances differ significantly (or are the same). The test is necessary because the t-test assumes equal variances. Groups can have variances that do not differ significantly and yet still differ in central tendency. Another researcher uses the Mann-Whitney U test to compare two groups of nominal measures. This is the wrong test for nominal measurement, because the Mann-Whitney U test requires at least ordinal-level measurement; thus all the findings and interpretations are invalid. (A brief sketch of the division of labor among these tests follows this list.)
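The sketch below, using hypothetical scores for two groups, illustrates the division of labor described in item 24: Levene's test only answers the question of whether the variances are equal, a t-test (or, for ordinal data, the Mann-Whitney U test) is still needed to compare central tendency, and neither of the latter is appropriate for nominal measurements.

    from scipy import stats

    # Hypothetical interval-level scores for two groups.
    males = [3, 4, 4, 5, 5, 6, 7]
    females = [5, 6, 6, 7, 7, 8, 9]

    # Levene's test: are the group variances significantly different?
    lev_stat, lev_p = stats.levene(males, females)

    # The t-test answers the separate question of a difference in means;
    # the Levene result only tells us which form of the t-test to use.
    t_stat, t_p = stats.ttest_ind(males, females, equal_var=(lev_p > 0.05))
    print(f"Levene p = {lev_p:.3f}, t-test p = {t_p:.3f}")

    # For ordinal (ranked) data the Mann-Whitney U test is the analogue;
    # it is not a test for nominal measurements.
    u_stat, u_p = stats.mannwhitneyu(males, females, alternative="two-sided")
    print(f"Mann-Whitney U p = {u_p:.3f}")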

Findings/Discussions/Conclusions

The basic rule in reporting results is that findings and evaluations should be either presented separately or clearly distinguishable.

    1. One of the most common problems is that unnecessary statistical correlations are computed for no stated reason. Correlation for the sake of correlation is not a proper way of doing research. Correlating every variable with every other variable is meaningless unless it is part of the design.

    2. Studies are full of misstatements and misevaluations of statistical results. Some studies do not even provide the p value needed for a determination. For instance, a researcher has no hypothesis but uses ANOVA to compare income groups on "interest in art-music" measured with a Likert-type ordinal scale, and then states that as income increases, interest in art-music increases. ANOVA is a central tendency test; it is used to find out whether the groups differ in central tendency. If we assume that as one variable changes the other changes too, then we need interval- or ratio-level measurement. Central tendency tests do not tell us anything about a positive causal relationship. Similarly, a hypothesis stating that "as age increases, frequency of travel decreases" cannot be tested with a chi-square test: the chi-square distribution shows the relationship between two grouped variables. Furthermore, we cannot infer a linear relationship by looking at ANOVA or chi-square results (see the sketch at the end of this list).

    3. Univariate analyses of ordinal scales are mostly wrong, because the mean and standard deviation are used instead of a frequency distribution for closed-ended questions with three or five choices.

    4. Univariate analyses of nominal data are stated correctly but misinterpreted. Most of the time no test is used, whereas a Z test is required to determine whether there is a statistically significant difference in the distribution.

    5. Univariate analyses of data are generally correct, but some interpretations are wrong. For instance, the distribution of Internet access across the global regions of the world (wrongly defined as the number of users) is given, and it is then concluded that the distribution is imbalanced. One cannot reach this conclusion unless the distribution of population is set against it. For instance, if half of the world's population lived in North America and half of the Internet users were from there, one could not draw the conclusion of imbalance.

    6. Bivariate statistical tests are improperly used. For example, two groups or two nominal variables are compared on ordinal variables (motivation, attitudes, job satisfaction) using a t-test or ANOVA. Some studies use the Pearson product-moment correlation with two nominal measurements, or with one nominal and one ordinal scale. These are completely wrong uses.

    7. Two or more statistical tests are used for a single bivariate analysis; then the one that serves the researcher's purpose is selected, and an invalid discussion is provided negating the results of the other test(s). For example, a researcher indicates that the test result (r = 0.95) shows a strong relationship, but that the t-test value (-1.55) shows this difference is not significant. There are a few fundamental mistakes here. According to the Pearson product-moment correlation there is a relationship, and it is very strong; one cannot negate this and provide a contrary interpretation. The t-test is used to find out whether there is a significant difference between groups, while the Pearson test assesses a probable relationship; they are different tests for different purposes. Furthermore, one cannot determine the group difference by looking at the t-test value alone, without checking the p value.

    8. Causal relations are inferred from correlation analysis in some studies. This is a grave mistake, because correlation and causality are not the same. Causality is not inferred from statistical results; it is theoretically construed and then tested. Correlation provides information on the significance, direction and strength of a relationship, nothing else.

    9. Correlation and causality are construed by merely looking at a univariate frequency distribution or central tendency measures. This is wrong because a proper test of significance should be used. Furthermore, sometimes grave mistakes are made. For instance, one study states that "the mean income level of cities in Turkey shows a normal distribution." Here we have Turkish cities, a level of income for each city, and a supposedly normal distribution of income among cities. What is the theoretical assumption behind a normal distribution of income? It is not stated. Does a normal distribution of income mean that income is distributed evenly among cities? That is what it implies, since we have a distribution over a nominal scale (cities). This normality statement lacks relevance and factual meaning. The same study states that "in regard to population, normality disappears." What are we supposed to expect: an equal population for each city? This disappearance statement is also meaningless and invalid.

    10. Factor analysis is defined and used wrongly in some studies.

    11. Tables and figures generally are not named and designed properly.

    12. Scientific research has to establish ties between theory, hypotheses and findings, and reach conclusions through integration and synthesis. It is extremely hard to find any research that does such integrating.

    13. Furthermore, no integration of findings with theoretical reasoning and related studies is found in the studies.

    14. The conclusions of some studies have nothing to do with the statistical results and findings. Findings that do not support the researcher's expectations are generally ignored or misinterpreted.

    15. Another grave mistake is that generalizations beyond the research populations are made in some studies.
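As a brief illustration of the point made in item 2 of this list, the following minimal Python sketch uses hypothetical data to examine a directional hypothesis such as "as age increases, frequency of travel decreases." A rank correlation gives the direction and strength of the monotonic association, whereas a chi-square test on grouped data would only indicate whether the two groupings are independent.

    from scipy import stats

    # Hypothetical respondents: age and self-reported trips per year.
    age = [22, 27, 31, 36, 44, 52, 58, 63, 70]
    trips_per_year = [6, 5, 5, 4, 4, 3, 2, 2, 1]

    # Spearman's rank correlation: the sign of rho gives the direction of the
    # monotonic relationship, and p tests its statistical significance.
    rho, p = stats.spearmanr(age, trips_per_year)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")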

It is rare to find a thesis, a report, a book or a journal article in Turkey that correctly designs an empirical study, properly uses statistics, and appropriately presents and systematically analyzes findings by integrating the theoretical rationale, related studies, research questions and hypotheses, and data. This article only indicates that there are grave errors of design and misuses of methodology and statistics in published studies in Turkey. It is extremely important to go beyond this presentation and conduct research in order to find the extent of the problems in each of the problem areas stated in this article and to formulate viable solutions, especially for academia.

Unfortunately, the probability of such a research initiative is extremely low, because very few in academia can take the heat. It is easy, beneficial and rather fulfilling to go along with the dominant flow. Only a professor can dare to do such research, and only if he/she is not planning for a high administrative position in the university or in public institutions in the future (before or after retirement). Dependence and interdependence nurture an intersubjectivity framed as objectivity in the social sciences. That is why nothing (or close to nothing) is done against the dominant work culture within the tightly knit academic community. It is a kind of work culture that reproduces laziness in people and animosity against the few who work hard. There are people in academia, including research assistants, who have not read a book or an article in years. Hence, mistakes, misuses and abuses are maintained and perpetuated. One of the best (surely worst) examples of such perpetuation is the student guidebooks or handbooks for master's and doctoral theses in the universities. These guidebooks (e.g., those of Ankara University, Trabzon Technical University, Gazi University and Hacettepe University) are outdated, methodologically flawed and full of grave mistakes.

The perpetuation persists not only because of the interdependence of interests and dependence, but also because of a lack of concern and involvement that is reproduced by the oppressive mode of production and production relations in academia. Such reproduction inevitably nourishes a functionary bureaucrat masquerading, posing and passing as an academician; it nourishes lackeying, fawning, fakery, hypocrisy, further dependence, fixation, mindless and unquestioning dedication, forgery, recurring mistakes and unproductive stability. If there are gross mistakes in research design, application and evaluation, it is not merely because of the individual academician's fault, incompetence or inability. The individual academician makes himself/herself under ruling, organized social conditions; that is why the problem lies not solely with the individual or with a specific mode of thinking, but with the prevailing conditions of daily academic and wider social production and production relations in a society and among societies.

Footnotes

1. The article includes some critical evaluations based on the Marxist approach throughout, and especially at the end. Otherwise, the main theoretical framework of the article is the mainstream empirical approach.

2. Herein I am not refusing the necessity of knowing others and the accumulated knowledge; I am refusing slavish dependency on authority for a presentation. It is worst when it reflects the fallacy of the research or is part of an intellectual fallacy. When intersubjectivity reigns, science suffers.

3. I am not revealing the sources of the articles analyzed and of the examples taken from them, since I do not believe that the prime responsible party is the individual per se, but rather the educational, editorial and refereeing systems.

4. Problem formulation also means selecting an issue to study; namely, it is not limited to a problem only.

5. Making money, collecting points for academic advancement or solving a corporate problem are not valid goals for scientific research.

6. The newspaper is also a means of consumption in the sense that it is a commodity bought and sold for use; it has both a use value and an exchange value.

 

Suggested Readings

Angus, I.H. & Lannamann, J.W. (1988). Questioning the institutional boundaries of US communication research: An epistemological inquiry. Journal of Communication, 38(3, Summer), 62-74.

Babbie, Earl R. (1998). The Practice of Social Research. 8th ed. Belmont: Wadsworth.

Bechhofer, Frank (2000). Principles of Research Design in the Social Sciences. NY: Routledge.

Becker, H. and I. L. Horowitz (1967). "Whose Side Are We On?" Social Problems, 14(3, Winter): 230-247.

Becker, H. and I. L. Horowitz (1988). "Radical Politics and Sociological Research: Observations on Methodology and Ideology." Ch. 5 in Becker, ed., Doing Things Together. Evanston, Ill.: Northwestern University.

Blackburn, R. (1973). Ideology in social science. NY: Vintage.

Bottomore, T. B. (1974). Sociology as Social Criticism. New York: Random House.

Erdoğan, İ. (1998). Araştırma Dizaynı ve İstatistik Yöntemler (Research Design and Statistical Methods). Ankara: Emel.

Glass, G. V. & Hopkins, K. D. (1996). Statistical methods in education and psychology. NY: Allyn and Bacon.

Horowitz, I. L. (1975). The use and abuse of social sciences. NJ: Transaction Books.

Lang, K. (1979). The critical function of empirical communication research: Observation on German-American influences. Media, Culture and Society, 1(1): 83-96.

Lazarsfeld, P. F. (1972). Qualitative analysis. Boston: Allyn and Bacon. (especially parts 2, 4 and 6 on empirical research).

Levin, J.R. & Levin, M.E. (1993). Methodological problems in research on academic retention programs for at-risk minority college students. Journal of College Student Development, 34, 118-124.

Nowak, S. (1976). Understanding and prediction: Essays in the methodology of social and behavioral theories. Boston: D. Reidel.

Paulos, J.A. (1988). Innumeracy: mathematical illiteracy and its consequences. New York: Hill & Wang.

Rose, H. and Rose, S. (eds.) (1979). Ideology of/in Natural Sciences. Mass.: Schenkman.

Seltzer, Richard A. (1996). Mistakes That Social Scientists Make: Error and Redemption in the Research Process. New York: St. Martin's.

Therborn, G. (1976). Science, Class and Society. London: Routledge.

Wallerstein, I. (1996). Open the social sciences: report of the Gulbenkian Commission on the restructuring of the social sciences. Stanford: Stanford University Press.

Weick, K. E. and L. R. Browning (1991). Fixing with the voice: A research agenda for applied communication. Journal of Applied Communication Research, 19(1-2): 1-19.

Wheeler, M. (1976). Lies, damn lies, and statistics: The manipulation of public opinion in America. NY: Liveright.

Wright, Daniel (1997). Understanding Statistics: an Introduction for the Social Sciences. Thousand Oaks, CA: Sage.
