
Item-level Results


Tables 1-3 show the mean ratings by discipline for each question
included in the survey instrument. The numbers in the total column
are the grand means for all disciplines. The numbers under each
discipline represent, for each item, the deviation from these means.
The F tests in the right-most column indicate whether the means are
significantly different among the six disciplines. Because the
average ratings, over all respondents, for “frequency of use” and
“importance for success” correlated .99, only the importance ratings
are presented for reasoning skills. Likewise, only the “seriousness”
ratings are presented for reasoning errors, since their correlation
with frequency ratings was .98, and, for critical incidents, only the
average “effect” ratings are presented, since their correlation with
frequency ratings was .94.
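The table construction described above (grand means in the total column, per-discipline deviations from those means, and F tests for differences among disciplines) can be sketched in a few lines of Python. The discipline names below are drawn from the survey, but all ratings are invented for illustration:

```python
# Sketch of the statistics behind Tables 1-3: grand mean, per-discipline
# deviations, and a one-way ANOVA F statistic for a single survey item.
# All ratings below are invented for illustration.

ratings = {
    "chemistry":        [4.1, 4.5, 3.9, 4.2],
    "computer science": [3.0, 3.4, 2.8, 3.2],
    "english":          [4.6, 4.4, 4.8, 4.5],
}

def mean(xs):
    return sum(xs) / len(xs)

# Grand mean over all respondents (the "Total" column in the tables)
all_ratings = [r for group in ratings.values() for r in group]
grand_mean = mean(all_ratings)

# Each discipline's table entry is its deviation from the grand mean
deviations = {d: mean(rs) - grand_mean for d, rs in ratings.items()}

# One-way ANOVA F statistic: between-group vs. within-group variance
k = len(ratings)                       # number of disciplines
n_total = len(all_ratings)             # total number of respondents
ss_between = sum(len(rs) * (mean(rs) - grand_mean) ** 2
                 for rs in ratings.values())
ss_within = sum((r - mean(rs)) ** 2
                for rs in ratings.values() for r in rs)
f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))

print(f"grand mean = {grand_mean:.2f}")
print(f"deviations = {deviations}")
print(f"F = {f_stat:.2f}")
```

A large F relative to its critical value (with k-1 and N-k degrees of freedom) is what the right-most column of Tables 1-3 flags as a significant difference among disciplines.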
Tables 1-3 show a substantial number of significant differences
among disciplines with respect to the importance placed on various
reasoning skills (Table 1), the seriousness with which faculty regard
particular kinds of reasoning errors (Table 2), and the impact that
various critical incidents have on the estimation of students’
analytical abilities (Table 3). Table 4, showing only the very
highest rated skills and most critical errors and incidents, gives a
flavor of the differences among these six disciplines. For example,
chemistry faculty placed a high premium on being able to generate
hypotheses, questions, or experiments, to draw sound inferences from
observations, and to analyze and evaluate previous research. English
faculty, on the other hand, saw greater importance in skills involving
argumentation: being able to understand, evaluate, analyze, elaborate,
recognize, and support aspects of an argument.
Faculty in the six disciplines also appeared to have quite
different views as to the numbers of skills that were important in
their respective disciplines. The numbers of reasoning skills that
received average ratings of 4.0 or higher varied markedly by
discipline as follows: 23 for chemistry, 5 for computer science, 27
for education, 22 for engineering, 29 for English, and 26 for
psychology. These differences may have arisen, for example, from our
particular choice of questions, from differences in standards among
disciplines, or from some other factor(s).
It can be seen, even from Table 4, however, that some skills were
viewed as very important by several disciplines. For example,
“breaking down complex problems into simpler ones” was rated as the
single most important skill (of the 56 skills listed) in both computer
science and engineering. “Determining whether conclusions are
logically consistent with, and adequately supported by, the data” was
rated as one of the three most important skills by both education and
psychology faculty; “drawing sound inferences from observations” was
the highest rated skill in chemistry and nearly the highest in
education.
The extent to which faculty in different disciplines agreed on
the importance of various skills, errors, or incidents can be examined
in a slightly different manner. To get some idea of the skills,
errors, and incidents that were viewed as relatively important, and
for which average ratings did not differ significantly across
disciplines, Table 5 was prepared. This table shows only those skills
that received average ratings of importance of more than 3.5 over all
six disciplines combined, and for which analyses of variance did not
detect any significant differences among disciplines.
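The rule used to construct Table 5 (overall mean importance above 3.5, with no significant difference among disciplines) can be sketched as a simple filter. The item names, p-values, and the .05 significance level below are all assumptions for illustration; the report does not state its alpha level:

```python
# Sketch of the Table 5 selection rule: keep items whose overall mean
# importance exceeds 3.5 AND whose ANOVA detects no significant
# difference among disciplines. All items and numbers are invented.

items = [
    # (item, overall mean importance rating, ANOVA p-value)
    ("reasoning with incomplete information", 4.4, 0.31),
    ("detecting fallacies in arguments",      4.1, 0.12),
    ("memorizing standard formulas",          2.9, 0.45),  # mean too low
    ("constructing formal proofs",            3.8, 0.01),  # disciplines differ
]

ALPHA = 0.05  # assumed significance level

common_core = [name for name, m, p in items
               if m > 3.5 and p >= ALPHA]
print(common_core)
```

Under these invented numbers, only the first two items would survive the filter and appear in a table like Table 5.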
“Reasoning or problem solving in situations in which all the
needed information is not known” was the skill rated as most important
overall. Such skills as “detecting fallacies and logical
contradictions in arguments,” “deducing new information from a set of
relationships,” and “recognizing structural similarities between one
type of problem or theory and another” were the next most highly rated
skills. These were followed closely by “taking well-known principles
and ideas from one area and applying them to a different specialty,”
“monitoring one’s own progress in solving problems,” and “deriving
from the study of single cases structural features or functional
principles that can be applied to other cases.”
Table 6 lists the reasoning errors and critical incidents that
were judged overall to be the most serious or to have the most effect
on the estimation of students’ abilities. Three errors/incidents were
judged to be most serious or critical: “accepting the central
assumptions in an argument without questioning them,” “being unable to
integrate and synthesize ideas from various sources,” and “being
unable to generate hypotheses independently.”
It should be noted that there are many other decision rules,
based on average ratings and differences among disciplines, that could
have been used here to form a “common core” of skills or errors/
incidents. Tables 1-3 could be consulted to apply alternative rules.
