
Georgia State University Qualitative and Quantitative Articles Analysis Essay. There are two articles with two different question sets. All information is in the assignment itself.

“I want to do the right thing but what is it?”: White Teachers’ Experiences with African American Students
Henfield, Malik S.; Washington, Ahmad R.
The Journal of Negro Education, Spring 2012, 81(2), p. 148 (ProQuest).
With the article, “I want to do the right thing but what is it?”: White teachers’ experiences with
African American students, do the following:
1. Outline the article. The outline should include the major components of a qualitative
research article and may be written in outline form (for example, “1. Introduction:” followed
by a sentence or two describing the introduction, “2. [Next component]:” followed by a brief
explanation, and so on). The outline should be no longer than one page.
2. Answer the following questions in relation to the article:
a. What is the theoretical perspective?
b. What is the research question?
c. What is the research methodology?
d. What are the methods of collecting data?
e. What are the methods for analyzing data?
f. What are the ethical issues?
g. How do you know this is quality research?
h. What are your critiques of the article?
The answers to the questions in item 2 should total no more than one page.
Education Policy Analysis Archives
Volume 5 Number 3
January 15, 1997
ISSN 1068-2341
A peer-reviewed scholarly electronic journal.
Editor: Gene V Glass, Glass@ASU.EDU
College of Education
Arizona State University, Tempe, AZ 85287-2411
Copyright 1997, the EDUCATION POLICY ANALYSIS
ARCHIVES. Permission is hereby granted to copy any article
provided that EDUCATION POLICY ANALYSIS ARCHIVES is
credited and copies are not sold.
Testing Writing on Computers:
An Experiment Comparing Student Performance on Tests Conducted via
Computer and via Paper-and-Pencil
Michael Russell
Boston College
Walt Haney
Boston College
Abstract
Computer use has grown rapidly during the past decade. Within the educational community,
interest in authentic assessment has also increased. To enhance the authenticity of tests of
writing, as well as of other knowledge and skills, some assessments require students to respond
in written form via paper-and-pencil. However, as increasing numbers of students grow
accustomed to writing on computers, these assessments may yield underestimates of students’
writing abilities. This article presents the findings of a small study examining the effect that
mode of administration — computer versus paper-and-pencil — has on middle school students’
performance on multiple-choice and written test questions. Findings show that, though
multiple-choice test results do not differ much by mode of administration, for students
accustomed to writing on computer, scores on responses written on computer are substantially
higher than scores on those written by hand (an effect size of 0.9, with relative success rates of
67% versus 30%).
Implications are discussed in terms of both future research and test validity.
Introduction
Two of the most prominent movements in education over the last decade or so are the
introduction of computers into schools and the increasing use of “authentic assessments.” A key
assumption of the authentic assessment movement is that instead of simply relying on multiple
choice tests, assessments should be based on the responses students generate for open-ended
“real world” tasks. “Efforts at both the national and state levels are now directed at greater use of
performance assessment, constructed response questions and portfolios based on actual student
work” (Barton & Coley, 1994, p. 3). At the state level, the most commonly employed kind of
non-multiple-choice test has been the writing test (Barton & Coley, 1994, p. 31) in which
students write their answers long-hand. At the same time, many test developers have explored the
use of computer administered tests, but this form of testing has been limited almost exclusively
to multiple-choice tests. Relatively little attention has been paid to the use of computers to
administer tests which require students to generate responses to open-ended items.
The consequences of the incongruities in these developments may be substantial. As the
use of computers in schools and homes increases and students do more of their writing with word
processors, at least two problems arise. First, performance tests which require students to
produce responses long-hand via paper-and-pencil (as happens not just with large-scale tests
of writing, but also with assessments of other skills evidenced through writing) may violate one
of the key assumptions of the authentic assessment movement. For people who do most of their
writing via computer, writing long-hand via paper-and-pencil is an artificial rather than real-
world task. Second, and more importantly, paper-and-pencil tests which require answers to be
written long-hand to assess students’ abilities (in writing or in other subjects) may yield
underestimates of the actual abilities of students who are accustomed to writing via computer.
In this article, we present the results of a small study on the effect of computer
administration on student performance on writing or essay tests. Specifically, we discuss the
background, design and results of the study reported here. However, before focusing on the study
itself, we present a brief summary of recent developments in computerized testing and authentic
assessment.
In 1968, Bert Green, Jr., predicted “the inevitable computer conquest of testing” (Green,
1970, p. 194). Since then, other observers have envisioned a future in which “calibrated measures
embedded in a curriculum . . . continuously and unobtrusively estimate dynamic changes in
student proficiency” (Bunderson, Inouye & Olsen, 1989, p. 387). Such visions of computerized
testing, however, are far from present reality. Instead, most recent research on computerized
testing has focused on computerized adaptive testing, typically employing multiple-choice tests.
Perhaps the most widely publicized application of this form of testing occurred in 1993 when the
Graduate Record Examination (GRE) was administered nationally in both paper/pencil and
computerized adaptive forms.
Naturally, the introduction of computer administered tests has raised concern about the
equivalence of scores yielded via computer- versus paper-and-pencil-administered test versions.
Although exceptions have been found, Bunderson, Inouye & Olsen (1989) summarize the general
pattern of findings from several studies which examined the equivalence of scores acquired
through computer or paper-and-pencil test forms as follows: “In general it was found more
frequently that the mean scores were not equivalent than that they were equivalent; that is the
scores on tests administered on paper were more often higher than on computer-administered
tests.” However, the authors also state that “[t]he score differences were generally quite small and
of little practical significance” (p. 378). More recently, Mead & Drasgow (1993) reported on a
meta-analysis of 29 previous studies of the equivalence of computerized and paper-and-pencil
cognitive ability tests (involving 159 correlations between computerized and paper-and-pencil
test results). Though they found that computerized tests were slightly harder than
paper-and-pencil tests (with an overall cross-mode effect size of -.04), they concluded that their
results “provide strong support for the conclusion that there is no medium effect for carefully
constructed power tests. Moreover, no effect was found for adaptivity. On the other hand, a
substantial medium effect was found for speeded tests” (Mead & Drasgow, 1993, p. 457).
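For context, a cross-mode effect size of this kind is a standardized mean difference. The exact
computation Mead & Drasgow used is not reproduced here, but measures of this family typically
take the form

    d = (M_computer - M_paper) / SD_pooled

so an overall value of -.04 means scores on computer-administered tests ran about four
hundredths of a pooled standard deviation below scores on the paper-and-pencil versions, a
negligible difference in practical terms.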
Yet, as previously noted, standardized multiple-choice tests, which have been the object of
comparison in previous research on computerized versus paper-and-pencil testing, have been
criticized by proponents of authentic assessment. Among the characteristics which lend
authenticity to an assessment instrument, Darling-Hammond, Ancess & Falk (1995) argue that
the tasks be “connected to students’ lives and to their learning experiences…” and that they
provide insight into “students’ abilities to perform ‘real world’ tasks” (pp. 4-5). Unlike standardized
tests, which may be viewed as external instruments that measure a fraction of what students have
learned, authentic assessments are intended to be closely linked with daily classroom activity so
that they seamlessly “support and transform the process of teaching and learning”
(Darling-Hammond, Ancess & Falk, 1995, p. 4; Cohen, 1990).
In response to this move towards authentic assessment, many developers of nationally
administered standardized tests have attempted to embellish their instruments by including
open-ended items for which students have to write their answers. These changes, however, have
occurred during a period when both the real-world and the school-world have experienced a rapid
increase in the use of computers.
The National Center for Education Statistics reports that the percentage of students in
grades 1 to 8 using computers in school increased from 31.5% in 1984 to 52.3% in 1989 and to
68.9% in 1993 (Snyder & Hoffman, 1990; 1994). In the workplace, the percentage of employees
using computers rose from 36.0% in 1989 to 45.8% in 1993, and during this period writing was
the predominant task adult workers performed on a computer (Snyder & Hoffman, 1993;
1995). Given these trends, tests which require students to answer open-ended items via
paper-and-pencil may decrease a test’s “authenticity” in two ways: (1) the assessments are not
aligned with students’ learning experiences, and (2) they are not representative of “real-world”
tasks. As the remainder of this paper suggests, these shortcomings may be leading to
underestimates of students’ writing abilities.
Background to this Study
In 1993, the Advanced Learning Laboratory School (ALL School,
http://nis.accel.worc.k12.ma.us) of Worcester, Massachusetts decided to adopt the Co-NECT
school design (Cooperative Networked Educational Community for Tomorrow,
http://co-nect.bbn.com). Developed by BBN, Inc., a Boston-based communications technology
firm, Co-NECT is one of nine models for innovative schooling funded by the New American
Schools Development Corporation. Working with BBN, the ALL School restructured many
aspects of its educational environment. Among other reforms, the traditional middle school grade
structure (that is, separately organized grade 6, 7 and 8 classes) was replaced with blocks that
combined students who would otherwise be divided into grades 6, 7 and 8 into a single cluster. In
place of traditional subject-based classes (such as English, math and social studies),
all subjects were integrated and taught through project-based activities. To support this
cooperative learning structure, several networked computers were placed in each classroom,
allowing students to perform research via the Internet and CD-ROM titles, to write reports,
papers and journals, and to create computer based presentations using several software
applications.
To help evaluate the effects of the restructuring at the ALL School on its students as a
whole, the Center for the Study of Testing, Evaluation and Educational Policy (CSTEEP) at
Boston College helped teachers gather baseline data in the fall of 1993 with plans to perform
follow-up assessments in the spring of 1994 and each spring thereafter. To acquire a broad
picture of students’ strengths and weaknesses, the forms of tests included in the baseline
assessment ranged from multiple choice tests to short and long answer open-ended assessments
to hands-on performance assessments covering a wide range of reading, writing, science and
math skills. To acquire insight into how cooperative projects affected the development of group
skills, some of the performance assessments required students to work together to solve a
problem and/or answer specific questions. Finally, to evaluate how the Co-NECT Model, as
implemented in the ALL School, affected students’ feelings about their school, a student survey
was administered. Assessments and surveys were administered to representative samples of the
whole school’s student population.
In the spring of 1994, the same set of assessments was re-administered to different
representative samples of students. While a full discussion of the results is beyond the scope of
this paper, many of the resulting patterns of change were as expected. For example, performance
items which required students to work cooperatively generally showed more improvement than
items which required students to work independently. On items that required students to work
independently, improvement was generally stronger on open-ended items than on multiple-choice
items. But there was one notable exception: open-ended assessments of writing skills suggested
that writing skills had declined.
Although teachers believed that the Co-NECT Model enhanced opportunities for students
to practice writing, performance on both short answer and long answer writing items showed
substantial decreases. For example, on a short answer item which asked students to write a recipe
for peace, the percentage of students who responded satisfactorily decreased from 69% to 51%.
On a long answer item which asked students to imagine a superhero, describe his/her powers, and
write a passage in which the superhero uses his/her powers, the percentage of satisfactory
responses dropped from 71% to 41%. On another long answer item that asked students to write a
story about a special activity done with their friends or family, student performance dropped from
56% to 43%. And on a performance writing item which first asked students to discuss what they
saw in a mural with their peers and then asked them to write a passage independently that
described an element in the mural and explain why they selected it, the percentage of satisfactory
responses decreased from 62% to 47%. These declines were all statistically significant, and more
importantly were substantively troubling.
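The article reports these declines as statistically significant without naming the test used. As a
rough illustration only, a standard two-proportion z-test applied to the superhero item’s drop
(71% to 41%) might look like the following Python sketch; the per-wave sample size of 70 is an
assumption chosen for illustration, not a figure from the study.

    # Illustrative two-proportion z-test; the study does not specify
    # which significance test was actually used.
    from math import sqrt

    def two_proportion_z(p1, n1, p2, n2):
        """Z statistic for the difference between two independent proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled proportion under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Superhero item: satisfactory responses fell from 71% to 41%.
    # n = 70 per wave is hypothetical, chosen only for illustration.
    z = two_proportion_z(0.71, 70, 0.41, 70)
    print(f"z = {z:.2f}")  # |z| > 1.96 is significant at alpha = .05 (two-tailed)

With these assumed sample sizes the statistic lands well beyond 1.96, consistent with the
significance the authors report.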
Since writing was a skill the school had selected as a focus area for the 1993-94 school
year, teachers were surprised and troubled by the apparent decrease in writing performance.
During a feedback session on results in June 1994, teachers and administrators discussed at
length the various writing activities they had undertaken over the past year. Based on these
conversations, it was evident that students were regularly presented with opportunities to practice
their writing skills. But a consistent comment was that teachers in the ALL School were
increasingly encouraging students to use computers and word processing tools in their writing.
As several computers were present in all classrooms, as well as in the library, teachers believed
that students had become accustomed to writing on the computer. When one teacher suggested
that the decrease in writing scores might be due to the fact that all writing items in spring 1994
were administered on paper and required students to write their responses by hand, the theory
was quickly supported by many teachers. With a follow-up assessment scheduled to occur a year
later, several teachers asked if it would be possible for students to perform the writing items on
a computer.
After careful consideration, it was decided that a sub-sample of students in spring 1995
would perform a computer-administered version of the performance writing item and items from
the National Assessment of Educational Progress (NAEP) (items were mostly multiple-choice
with a few short answer items included). But, to preserve comparisons with results from
1993-94, the majority of the student population would perform these assessments as they had in
that year — via the traditional pencil-and-paper medium. Hence, we undertook an experiment to
compare the effect that the medium of administration (computer versus paper-and-pencil) has on
student performance on multiple-choice, short-answer and extended writing test items.
Study Design and Test Instruments
To study the effect the medium of administration has on student performance (that is,
taking assessments on computer versus by hand on paper), two groups of students were randomly
selected from the ALL School Advanced Cluster (grades 6, 7 and 8). For the experimental group,
which performed two of three kinds of assessments on computer, 50 students were selected. The
control group, which performed all tests via pencil-and-paper, was composed of the 70 students
required for the time-trend study described above. The three kinds of assessments performed by
both groups were:
1. An open-ended (OE) assessment comprising 14 items, which included two writing items,
five science items, five math items and two reading items.
2. A test composed of NAEP items, divided into three sections, which included 15
language arts items, 23 science items and 18 math items. The majority of the NAEP items
were multiple-choice; however, 2 language arts items, 3 science items and 1 math item
were open-ended and required students to write a brief response to the item’s prompt.
3. A performance writing assessment which required an extended written response.
Both groups performed the open-ended (OE) assessment in exactly the same manner, by hand via
paper-and-pencil. The experimental group performed the NAEP and writing assessment on
computer, whereas the control group performed both in the traditional manner, by hand on paper.
The performance writing assessment consisted of a picture of a mural and two questions.
Students formed small groups of 2 or 3 to discuss the mural. After 5 to 10 minutes, students
returned to their seats and responded to one of two prompts:
1. Now, it is your turn to pick one thing you found in the mural. Pick one thing that is
familiar to you, that you can recognize from your daily life or that is part of your culture.
Describe it in detail and explain why you chose it.
2. Artists usually try to tell us something through their paintings and drawings. They may
want to tell us about their lives, their culture or their feelings about what is happening in
the neighborhood, community or world. What do you think the artists who made this mural
want to tell us? What is this mural’s message?
Due to absences, the actual number of students who participated in this study was as
follows:
Experimental (Computer) Group: 46
Control (Paper-and-Pencil) Group: 68
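For readers less familiar with the effect-size metric cited in the abstract (0.9), the minimal
Python sketch below shows how a standardized mean difference of that kind (Cohen’s d with a
pooled standard deviation) is typically computed from two groups’ scores; the score vectors are
invented for illustration and are not data from this study.

    # Illustrative Cohen's d computation; the scores below are hypothetical.
    from math import sqrt
    from statistics import mean, stdev

    def cohens_d(a, b):
        """Standardized mean difference using a pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                         / (na + nb - 2))
        return (mean(a) - mean(b)) / pooled_sd

    computer_scores = [3.4, 2.9, 3.8, 3.1, 3.6]  # hypothetical essay scores
    paper_scores = [3.1, 2.6, 3.5, 2.8, 3.3]
    print(round(cohens_d(computer_scores, paper_scores), 2))  # prints 0.82

An effect size near 0.9 means the computer group’s mean score sat nearly a full pooled
standard deviation above the paper-and-pencil group’s mean.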
It should be noted that the study described in this paper was performed as part of a larger
longitudinal study which relied heavily on matrix samplin…