Monday, January 27, 2014

Questionnaire Development


Initially, 30 department chairs (in English, education,
engineering, chemistry, computer science, or psychology) at 30
graduate institutions were contacted and asked to identify three
faculty members in their departments who would be willing to provide
their insights into the analytical or reasoning skills that are most
critical for successful performance in graduate school. These 30
institutions were chosen from the GRE Directory of Graduate Programs
in such a way as to ensure some degree of geographical representation.
All of these departments require or recommend that applicants submit
GRE General Test scores; it was felt that these departments might be
more interested than nonrequiring departments in efforts to improve
the GRE General Test.
At this preliminary stage, faculty members were informed of the
purpose of the project and asked to give, in an open-ended fashion,
examples of:
(a) the analytical, reasoning, or thinking skills they
perceived as most important for successful graduate
study in their fields (e.g., identifying assumptions
on which an argument is based), particularly as these
skills differentiate successful from marginal students;
(b) specific critical incidents related to thinking or reasoning
that caused them to either raise or lower their estimation
of a student's analytical ability (e.g., failing to qualify
a conclusion as appropriate); and
(c) particular reasoning or thinking "flaws" they have observed
in their students (e.g., using the conclusion as the premise
of an argument).
Usable responses were obtained from 33 faculty members, who
suggested a total of 138 important reasoning or thinking skills, 86
critical incidents, and 75 reasoning "flaws." Some of these responses
were duplicates, and several other respondents did not specify
discrete skills or errors but instead sent helpful discursive replies
to our inquiry. All responses were condensed, edited, and evaluated
with respect to whether they should be included in the larger, more
structured questionnaire that was planned. Some
responses constituted usable questionnaire items essentially as stated
by respondents (e.g., "the ability to break complex problems into
simpler components"). Other responses were revised or eliminated
because they were too general (e.g., "the ability to think
independently"), and others because they were too specific or applied
only to a particular field (e.g., "the ability to resolve into
enthymemic form any argumentative work" or "the ability to take
ecological validity into account").
The structured questionnaire was constructed on the basis of this
preliminary survey, on a review of relevant literature (Duran, Powers,
& Swinton, in press), and on a number of additional books or texts on
reasoning (e.g., Campbell, 1974; Fischer, 1970; Johnson & Blair, 1983;
Kahane, 1984; Nosich, 1982; Salmon, 1984; Scriven, 1976; Toulmin,
Rieke, & Janik, 1984; Wason & Johnson-Laird, 1972; and Weddle, 1978).
Several other articles, e.g., a seminal work by Ennis (1962) and a
list of skills by Arons (1979), proved especially useful. Various
issues of CT News, published by the Critical Thinking Project at
California State University at Sacramento, were also perused.
Previous work on critical incidents in graduate student performance
(Reilly, 1974a, 1974b) was also consulted, and several of the
incidents related to critical facility were included in the present
study. Finally, the list generated by Tucker (1985), who gathered the
impressions of ETS test development staff, philosophers, and cognitive
psychologists, also proved to be a valuable resource.
The final questionnaire (see Appendix A) was structured to
include questions about the importance and frequency of various
reasoning skills, of commonly observed errors in reasoning, and of
specific incidents that may have led faculty to adjust their estimates
of students' analytical abilities. Questions were grouped under
several headings, mainly to give respondents some sense of their
progress in responding to the rather lengthy questionnaire.
The Sample
Six academic fields (English, education, psychology, chemistry,
computer science, and engineering) were included in the final survey.
These fields were thought to represent the variety of fields of
graduate study and the variation in the kinds of reasoning abilities
involved in graduate education. Using the data tapes of the Higher
Education General Information Survey (HEGIS), nonoverlapping samples
of 64 graduate institutions with doctoral programs were drawn for each
of the six graduate fields. A random sampling procedure was used such
that eight institutions from each of the eight HEGIS geographic
regions were selected for each field. This sampling was greatly
facilitated by the work of Oltman (1982). The admission requirements
of these institutions were determined from the Directory of Graduate
Programs (GRE/CGS, 1983), and only those that either required or
recommended GRE General Test scores were included in the sample. In
this manner, 40 institutions were selected for the final sample for
each field. In addition, one institution with a relatively large
proportion of Black students and one with a relatively large
percentage of Hispanic students were included in the samples for each
field, thus raising the total number of institutions to 42 per field.
Letters were then sent to departmental chairpersons, who were asked to
nominate two faculty members who would be willing to complete the
questionnaire. Respondents were paid $25 for their participation.
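To make the two-stage selection concrete, the logic can be sketched
in a few lines of Python. This is an illustration only: the
institution records, the field names, and the
requires_or_recommends_gre flag below are hypothetical stand-ins for
the HEGIS data tapes and the Directory of Graduate Programs (GRE/CGS,
1983) that were actually consulted.

    import random

    # Hypothetical sketch: `institutions` is assumed to be a list of
    # dicts with "region", "doctoral_fields", and
    # "requires_or_recommends_gre" keys, standing in for the HEGIS
    # data tapes used in the study.

    HEGIS_REGIONS = range(1, 9)  # the eight HEGIS geographic regions

    def draw_field_sample(institutions, field, per_region=8,
                          final_size=40):
        """Stratified draw for one graduate field: eight doctoral
        institutions from each of the eight HEGIS regions (64 total),
        then keep only those requiring or recommending GRE General
        Test scores."""
        drawn = []
        for region in HEGIS_REGIONS:
            pool = [inst for inst in institutions
                    if inst["region"] == region
                    and field in inst["doctoral_fields"]]
            drawn.extend(random.sample(pool, per_region))
        eligible = [inst for inst in drawn
                    if inst["requires_or_recommends_gre"]]
        return eligible[:final_size]  # 40 institutions per field

The two additional institutions per field (one with a relatively
large proportion of Black students and one with a relatively large
percentage of Hispanic students) were added by design rather than by
random draw, bringing each field's total to 42.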
