In providing some information on faculty perceptions of the
involvement of various reasoning skills in their disciplines, the
study has, we hope, implications for developing future versions of the
GRE analytical ability measure. Converting this information to
operational test items will represent a significant step, however, and
it is not yet clear how helpful these results will ultimately
prove to be. Nonetheless, the findings do seem to contain
several useful pieces of information:
1. Among the specific reasoning skills perceived as the most
important were several, e.g., "deducing new information from
a set of relationships" and "understanding, evaluating, and
analyzing arguments," that seem well represented in the two
item types (analytical reasoning and logical reasoning)
currently included in the analytical section of the General
Test. This suggests that these item types should continue to
play a role in future editions of the GRE General Test.
2. Some skills that are not measured by the current version of
the analytical measure were rated as very important.
"Reasoning or problem solving in situations in which all the
needed information is not known" was among the skills rated
as most important in each discipline, yet it is not currently
measured, at least in any explicit manner, by the
analytical measure. In this regard, however, the previous
GRE-sponsored work of Ward, Carlson, and Woisetschlager
(1983) is noteworthy. These investigators studied
“ill-structured” problems, i.e., problems that do not provide
all the information necessary for their solution, and noted
the resemblance of these problems to one variant of the
logical reasoning item type used in the analytical measure.
They concluded that there was no indication that “ill-structured”
problems measure different aspects of analytical
ability than do “well-structured” problems, and therefore
that “ill-structured” problems could not be expected to
extend the range of cognitive skills already measured by the
GRE General Test. They did note, however, that the
“ill-structured” item type could be used to increase the
variety of item types in the test. The findings of the
current study suggest that the inclusion of this item type
would probably meet with faculty approval in most fields of
study.
3. With respect to their perceived importance, skills involving
the generation of hypotheses/alternatives/explanations tended
to cluster together, and the inability to generate hypotheses
independently was one of the incidents consistently rated as
having a substantial effect on faculty perceptions of
students’ analytical abilities.
A number of years ago the GRE Board sponsored a series
of studies (Frederiksen & Ward, 1978; Ward, Frederiksen, &
Carlson, 1978; Ward & Frederiksen, 1977; Frederiksen & Ward,
1975) that explored the development and validation of tests
of scientific thinking, including one especially promising
item type called “Formulating Hypotheses,” which required
examinees to generate hypotheses. Although the research
suggested that this item type complemented the GRE verbal and
quantitative measures in predicting success in graduate
school, the work was discontinued, largely because of
problems in scoring items that require examinees to
construct, not merely choose, a correct response. Carlson
and Ward (1986) have proposed to renew work on the
“Formulating Hypotheses” item type in light of recent
advances in evaluating questions that involve constructed
responses. The results of the faculty survey reported here
would appear to support this renewal.
4. Some of the highly important skills that are currently
well represented in the analytical measure are viewed as more
important for success in some disciplines than in others.
For example, “understanding, analyzing, and evaluating
arguments” was seen as more important in English than in
computer science. However, some skills seen as highly
important in some disciplines but not in others may not be as
well represented currently. For example, “breaking down
complex problems into simpler ones” was perceived as
extremely important in computer science and engineering but
not at all important in English. This would suggest,
perhaps, the need to balance the inclusion of items
reflecting particular skills, so that skills thought to be
important (or unimportant) in particular disciplines are
neither over- nor underrepresented.
5. The several dimensions that appear to underlie clusters of
reasoning skills may provide an appropriate way to extend the
current test specifications for the analytical measure,
especially if new item types are developed to represent some
of these dimensions.
6. The reasoning skills that were rated as very important, and
consistently so, across disciplines point to a potential
common core of skills that could be appropriately included in
an “all-purpose” measure like the GRE General Test. Other
skills judged to be very important in only a few disciplines
might best be considered for extending the measurement of
reasoning skills in the GRE Subject Tests. Faculty comments
about the difficulty in separating reasoning from subject
matter knowledge would seem to support this strategy.