Catalog record (MARCXML, MARC21 slim schema):

Leader: nam Ka
Control number (001): 001478795
Physical description fixed field (007): cr mnu|||uuuuu
Fixed-length data elements (008): 040811s2004 flua sbm s000|0 eng d
Other identifier (024): E14-SFE0000440
Author: Heimisson, Gudmundur Torfi.
Title: The importance of program-delivered differential reinforcement in the development of classical music auditory discrimination [electronic resource] / by Gudmundur Torfi Heimisson.
Publication: [Tampa, Fla.] : University of South Florida, 2004.
Dissertation note: Thesis (M.A.)--University of South Florida, 2004.
Notes: Includes bibliographical references. Text (Electronic thesis) in PDF format. System requirements: World Wide Web browser and PDF reader. Mode of access: World Wide Web. Title from PDF of title page. Document formatted into pages; contains 37 pages.
Abstract: Posttest performances after two forms of Web-based tutorial instruction were compared. Both forms were designed to teach students to identify musical compositions that typify Renaissance, Baroque, Classical, Romantic, and 20th Century music. The first treatment condition was a series of Web pages with text and accompanying hyperlinks to musical selections matched to the text. In this condition, students read and listened at their own discretion -- without Web software program restrictions. The second treatment contained exactly the same text and musical selections, but students in this condition read the text in small portions while being required to fill in missing words in the text presented. No time constraints were placed on participants. The essential difference between the conditions was 1) movement with the instruction content without restriction, and 2) advancement through the program being dependent upon correct responses to the text material (which included discriminative responding to accompanying musical examples). A statistically significant difference between pretest and posttest was found in both experimental conditions, but a difference in posttest scores between the two conditions was not found. Implications of the study and suggestions for future research were discussed.
Adviser: Darrel E. Bostow.
Subjects: experimental analysis of teaching methods; Applied Behavior Analysis; USF Electronic Theses and Dissertations.
The Importance of Program-Delivered Differential Reinforcement in the Development of Classical Music Auditory Discrimination

by

Gudmundur Torfi Heimisson

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts
Department of Applied Behavior Analysis
College of Graduate Studies
University of South Florida

Major Professor: Darrel E. Bostow, Ph.D.
Kevin Murdock, Ph.D.
Ken Richter, Ph.D.

Date of approval: July 12, 2004

Keywords: Programmed instruction, education, computer-assisted learning, web-based instruction, experimental analysis of teaching methods.

Copyright 2004, Gudmundur Torfi Heimisson
Table of Contents

Abstract ii
List of Tables iii
List of Figures iv
Chapter One 1
Chapter Two 6
  Participants 6
  Setting 6
  Materials 6
  Treatments (Independent variables) 7
  Dependent variables 8
  Experimental design and data analysis 8
  Procedure 9
Chapter Three 12
  Pretest data 12
  Pretest-posttest comparisons 14
  Post-experiment survey 15
Chapter Four 20
Appendices 26
  Appendix A: Experimenter instructions to participants 27
  Appendix B: Student survey of music lesson 29
  Appendix C: Demographic Form/Informed Consent Form 31
The Importance of Program-Delivered Differential Reinforcement in the Development of Classical Music Auditory Discrimination

Gudmundur T. Heimisson

ABSTRACT

Posttest performances after two forms of Web-based tutorial instruction were compared. Both forms were designed to teach students to identify musical compositions that typify Renaissance, Baroque, Classical, Romantic, and 20th century music. The first treatment condition was a series of Web pages with text and accompanying hyperlinks to musical selections matched to the text. In this condition, students read and listened at their own discretion, without Web software program restrictions. The second treatment contained exactly the same text and musical selections, but students in this condition read the text in small portions while being required to fill in missing words in the text presented. No time constraints were placed on participants. The essential difference between the conditions was 1) movement with the instruction content without restriction, and 2) advancement through the program being dependent upon correct responses to the text material (which included discriminative responding to accompanying musical examples). A statistically significant difference between pretest and posttest was found in both experimental conditions, but a difference in posttest scores between the two conditions was not found. Implications of the study and suggestions for future research were discussed.
List of Tables

Table 1. Averages for individual programmed instruction tutorials 11
Table 2. Median times taking the pretest and posttests, across treatments 12
Table 3. Prose and PI pretest and posttest averages 12
Table 4. Participants' reports of potential hearing problems 14
List of Figures

Figure 1. Pretest score distribution, prose condition 11
Figure 2. Pretest score distribution, PI treatment 14
Figure 3. Posttest score distribution, prose treatment 13
Figure 4. Posttest score distribution, PI treatment 13
Figure 5. Participants' evaluations of their treatments' effectiveness 15
Figure 6. Participants' reports of how interesting or boring their lesson was 15
Figure 7. Classical music listening habits of participants 16
Figure 8. Participants' estimate of lesson to increase taste for classical music 17
Chapter One

Introduction

Education is probably one of the most hotly debated topics in our time. Most people have an opinion about education: how it should or should not be, and how it should be improved. Sadly, it could be argued that modern education systems do not seem to advance and improve in proportion to the energy spent discussing them. B. F. Skinner argued that modern school systems were inherently punitive and restrictive, frustrating fast and slow learners alike by shaping all students into the same mold of mediocrity (Skinner, 1984; 1968). In Skinner's view, educational development and reform is constantly misguided; money for educational institutions is too often allocated to things of trivial academic consequence (such as buildings and gadgetry), instead of to the heart of instruction: the scientific analysis of the contingencies that shape academic behavior (Skinner, 1984). Later authors have strongly echoed Skinner's sentiment; for example, fingers have been pointed at the educational establishment's purported rigidity against adopting scientific, empirically proven, and effective teaching methods (Engelmann, 1991; Lindsley, 1992). Still, despite the apparent lack of interest from educational establishments, a handful of effective scientific and empirical teaching practices have survived, one of them programmed instruction.

The origins of programmed instruction date back to Sidney Pressey's work in the 1920s (e.g., Pressey, 1927), but B. F. Skinner's work in the 1950s (e.g., Skinner, 1954) is generally recognized as the breakthrough in the field (Holland, 1960). According to Skinner, programming means carefully arranging the relevant contingencies leading to the terminal performances, or outcomes, of education (Skinner, 1963). In programmed instruction, the learner goes through a series of pre-arranged materials presented in short units, and is required to respond overtly to each one to continue.
Holland (1960) elaborated on several important principles of programmed instruction: 1) Reinforcement is provided immediately after the student emits the correct response. 2) The student learns only when he or she emits a response that is reinforced. 3) Complex repertoires are built by starting to reinforce behavior already in the learner's repertoire, and continuing to reinforce successive approximations to the terminal repertoire
in a gradual progression of developmentally ordered steps. At first, the program provides ample stimulus support, prompts or primes that give explicit or subtle hints about the answer. 4) Stimulus support is gradually withdrawn as the student becomes more proficient. This is a procedure known as fading. 5) Since the student needs to constantly interact with the program, his or her study behavior is constantly observed and controlled. Therefore, a minimal part of the learning process is left to chance. 6) Programmed instruction establishes discriminations, such as the difference between simple graphic forms, or complex abstract concepts. 7) The student writes the program. If the student responds incorrectly to a question in the program, or progresses too rapidly through it, the program has failed to accurately assess the student's initial behavior. Revisions to the program must be made contingent on the student's answers, and thus it is said that the student writes it.

Programmed instruction is based on the principles of learning, and is thus an effective teaching method by design. Unfortunately, programmed instruction never started the revolution of which B. F. Skinner dreamed; it never even became popular. Some authors even claim that programmed instruction outright failed and died (e.g., Benjamin, 1988). However, times may be changing, thanks in part to technological advances. Publications on programmed instruction became more prominent again in the late 1980s, when the personal computer, a most convenient delivery machine for programmed instruction (Thomas & Bostow, 1991), started to emerge as a household item. Another spurt in publications came in the late 1990s, probably not least due to educators' apparent increased interest in using the Internet and computers for educational purposes.
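To make Holland's principles concrete, the following is a minimal, hypothetical sketch of a programmed-instruction frame sequence. It is not the software used in this study; the frame contents, field names, and the linear fading rule are invented for illustration. It shows two of the principles above: immediate feedback after each overt response, and gradual withdrawal (fading) of stimulus support as the learner advances.

```python
# Hypothetical programmed-instruction frames; content invented for illustration.
frames = [
    {"text": "Gregorian chant is monophonic ____.", "answer": "music", "hint": "music"},
    {"text": "Baroque music often uses a basso ____.", "answer": "continuo", "hint": "continuo"},
]

def prompt_for(frame, step, total_steps):
    """Fading: show a progressively shorter portion of the hint as the
    learner advances, until no stimulus support remains."""
    hint = frame["hint"]
    keep = max(0, len(hint) * (total_steps - step) // total_steps)
    return hint[:keep]

def run_frame(frame, response):
    """Immediate feedback contingent on the learner's overt response."""
    correct = response.strip().lower() == frame["answer"].lower()
    return "Correct!" if correct else "Incorrect"
```

Fading here is a simple linear truncation of the hint; an actual program would withdraw prompts across carefully sequenced frames rather than by string length.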
As this is written, most hardware problems have been solved, but a programmed instruction revival in educational circles still requires more widespread understanding of the approach's principles (Bostow, Kritch, & Tompkins, 1995). All things considered, a programmed instruction revival may be more imminent than previously thought. With more and more homes becoming connected to the Internet, interactivity has become a buzzword; the emphasis on interactivity, and public acceptance of the Internet, may have opened the door for programmed instruction's second coming.
According to recent research, interactivity (i.e., responding overtly to instructional material, and constructing responses to questions) increases the effectiveness of instruction (Kritch & Bostow, 1998; Miller & Malott, 1997; Kritch, Bostow, & Dedrick, 1995; Tudor, 1995; Tudor & Bostow, 1991). In addition, Miller & Malott (1997) showed that programmed instruction materials are more effective than read-only materials, regardless of whether or not extraneous incentives such as extra credit points are offered. Early and recent research also demonstrates that programmed instruction increases in effectiveness as reinforcement for correct answers becomes more frequent (Terrace, 1966; Kritch & Bostow, 1998). The effectiveness of computer-programmed instruction has already been well established for text-based stimuli (see, e.g., Kritch & Bostow, 1993; Terrace, 1966; Tudor & Bostow, 1991; Kumar, Bostow, Schapira, & Kritch, 1993), but controversy still exists. Kritch & Bostow (1998) found that several prominent investigators do not agree on the definition of programmed instruction. Also, programs designed for commercial use tend to compromise educational properties for the sake of sales-inducing gimmicks (Kritch & Bostow, 1998). In addition, generality issues still exist. For example, Kritch & Bostow (1998) established proficiency in verbal discrimination, but noted that the same approach may not apply to sensory discrimination, an area that certainly merits research, but has seemingly been left untouched by researchers in programmed instruction so far.

This study was a modified replication of Kritch & Bostow's 1998 study: programmed instruction, using general principles of programmed instruction, was compared to non-programmed instruction. Instead of responding primarily to verbal stimuli, as in Kritch & Bostow (1998), participants were required to respond primarily to auditory stimuli.
This was proposed to address generality issues with sensory discrimination (Kritch & Bostow, 1998). The subject matter was classical music history.

Music taste is probably acquired early in life. As children grow up, they are exposed to music on the radio and television, music to which responses are likely to be reinforced differentially, either socially, intraverbally, or via respondent conditioning. However, in our time's consumer culture, not all music is equal. Popular music dominates the mainstream airwaves, and airplay of classical music is mostly left to
publicly funded radio stations with a small share of the market. Worldwide, classical music is estimated to be only 5% of all music sold, and a decline in sales has forced major record labels to downsize their classical divisions (Bambarger, 2002). The pop music industry is well known for using other means, such as sex appeal, to spur sales. To avoid losing their jobs, some classical musicians have resorted to similar tactics, changing their traditional formal appearance to skimpy outfits, and even adding pop music instruments and effects to classical pieces (Clennell, 2004).

It follows from the basic principles of behavior that people cannot acquire behavior if they have no opportunity to be reinforced for responding to it. Classical music is no exception. Public radio is almost the only kind of radio left to play classic works, and has less marketing funds than commercial radio, as is evident from the fundraising drives characteristic of public radio. Adding to classical music's problems is the music itself, more complicated and difficult to the untrained ear than popular music, which often is especially designed to be simple and immediately reinforcing. Another factor may be that classical music is not present where youngsters come together for social functions, and thus classical music misses an opportunity to be reinforced via respondent conditioning. Long reinforcement histories with pop music are not likely to change easily. However, if people unfamiliar with classical music got easy access to it, were told about it in terms they know, and were reinforced differentially for responding to the music, they might eventually start valuing it. A web-based instructional program featuring short instructional text and clips of music might be a way to reach those potential listeners, and two methods of doing so were compared in the present study.
To teach about classical music, it was judged helpful to use the common grouping of musical periods: early music (the Middle Ages and the Renaissance), the Baroque era, the Classical era, the Romantic period, and modern music (20th and 21st century). The first condition featured a web page with short instructional paragraphs about prominent characteristics of each period. Each paragraph was accompanied by a relevant music clip, accessible by clicking on a link by the paragraph. No other reaction to the material was required, and participants could move freely around the web page at their own
discretion. The second condition was programmed instruction, in which participants were taken through the material in a linear, temporal order, from early to modern music.

Despite classical music's problems, there may be grounds for optimism if something is done. The current author finds it important that the science of behavior and the educational establishment join forces to teach about classical cultural treasures that would otherwise go unnoticed.
Chapter Two

Method

Participants

One hundred and sixteen undergraduate and graduate education students in various classes were randomized to experimental conditions with a computer program. Participants worked through the experimental music tutorials for academic course credit.

Setting

Previous research has revealed the importance of directly observing students as they work through computer-delivered materials, this being necessary to ensure treatment integrity (R. Canton, personal communication, February, 2004). Therefore, the experiment was conducted in a computer lab at the university. Participants registered for appointments at a USF computer lab, during which the experimenter administered instructions, maintained appropriate lab conditions, and ensured completion of pretest, tutorials, and posttest.

Institutional Review Board Approval

All procedures were approved by the Institutional Review Board prior to data collection. Participants were informed of the nature of the study at an orientation session for a program course. Taking part in the study was a course requirement, but all participants signed an informed consent form for the course, which included informed consent for taking part in the study (see Appendix C).

Materials

The computer lab contained internet-connected PC computers with headphones. Six computer-delivered programmed instruction tutorials of approximately 30-40 frames each were used to teach auditory discrimination between examples of music from five periods: 1) the Middle Ages and the Renaissance, 2) the Baroque era, 3) the Classical era, 4) the Romantic period, and 5) the 20th/21st century. The tutorials were designed to progressively develop skills by constantly requiring overt responding, but did not explicitly follow the programming guidelines of the Ruleg system designed by Evans, Homme, and Glaser (1960). The tutorials were field tested and revised based on an earlier trial round with volunteer participants.
The tutorials and tests were delivered with software specially developed for this experiment, hosted on a USF server. The music samples for the tutorials were selected using properties widely accepted and argued by musicologists to exemplify and distinguish these periods. A pretest and generalization test were composed from the same clip library as the tutorials. The test contained 30 examples of music that were not part of the tutorial program, yet were assumed to be composed of the defining musical properties taught within the tutorials. No time limits were imposed on participants when taking the tutorials and tests.

An eight-question survey was written to follow the experiment. The purpose of the survey was to give participants an outlet for feedback, and to assess participants' prior experience with classical music.

Treatments (Independent variables)

Programmed Instruction (PI): Five tutorials on music history were used as an independent variable. Music clips and textual instructional material were presented concurrently, frame by frame. In order to advance through the tutorials, participants typed in answers to questions in each frame. Textual feedback, "Correct!" or "Incorrect," was provided contingent on the submission of a typed answer. Two tries were allowed for open-ended questions, but only one try for multiple-choice or yes/no questions, as two tries for yes/no questions would have heightened the probability of haphazard guessing. No minimum score was necessary for advancement within or between tutorials, nor were there any time constraints to finish the tutorials.

Non-programmed Instruction: Instructional text identical to the one used in the programmed instruction tutorials was presented on a single web page, but all blanks and questions were modified to look like coherent instructional paragraphs.
Participants were able to access the material in its entirety, in any order, by scrolling up and down the web page, non-contingent on discriminative responses. When musical clips accompanied a paragraph, they were available as hyperlinks by the respective paragraph. No time constraints were placed on participants.
Dependent variables

Time to complete all tutorials in the sequence, time to complete each tutorial, and time to complete each individual frame were all measured during the programmed instruction condition. During the PI treatment, the percentage of PI frames answered correctly was also measured. For the scrolling web page treatment, individual participant time going through the whole program was recorded. The number of clicks on each music link by each participant was also counted.

A 30-item multiple-choice pretest was constructed, which included five examples from each musical period. This test was in the form of a series of musical clips with a selection of appropriate choices on the screen. Clips were different selections from the tutorials, yet contained key properties of the various musical periods. Thus, a generalization posttest of musical clips was also employed, to assess the amount of learning during the tutorials in this study. The generalization test was the same as the pretest used, but practice effects from the test were considered minimal, since the testing items were not included in the tutorials. Demographic data concerning undergraduate/graduate status, age, gender, etc., were collected at the commencement of the course sections.

A survey was written to follow the experiment. On the survey, participants were asked to provide general feedback about the experiment, and to provide information about potential hearing impairments that could have affected their learning. Participants were also asked about their prior experience with classical music, and about the probability of them seeking to purchase or listen to classical music in the wake of this experiment.

Experimental design and data analysis

As described previously, there were two experimental conditions: 1) Programmed Instruction (PI), and 2) Scrolling web page with short instructional paragraphs.
A one-way analysis of variance (ANOVA) was used to check for a statistically significant difference between the conditions, and a dependent-sample t-test was used as a test of significance between pretesting and posttesting within conditions. A single-subject design was not deemed appropriate, nor even possible, for treatment comparisons. This is due primarily to a changing substrate of discriminations required as the program advanced
and the potential confusion of effects produced by changing content rather than those produced by tutorial delivery differences.

Procedure

Requirements for participation were stated in course syllabi made available at the beginning of the course. Participants were required to sign up for their visit to a computer lab where the experiment was conducted, and a message with available times was posted on the computer-based bulletin board of the course. The experimenter supervised the computer lab at all times. Participants were led to computers as they arrived and were given a detailed instruction sheet (see Appendix A). In addition to the instructions on the sheet, the experimenter read the following message to participants: "When you have earned a score on the pretest, used the tutorial to which you were assigned, and earned a score on the posttest, your final course average will be boosted by 3 additional points in return for your participation. The score you earn on the posttest will NOT impact your course grade." Students who refused to attend or could not attend did not lose academic credit, and were not subjected to other tasks.

Participants accessed a web page per their instructions and logged on with the login names they used for their web-based course management system. A computer program automatically randomized participants to the experimental conditions. Once participants had logged on, a computer program automatically displayed a multiple-choice pretest in which they listened to music clips with headphones, and attempted to identify the period from which the music dated. Upon completion of the pretest, grades were displayed to participants, and a computer program automatically took them to a screen with instructions on their assignment. Once participants pressed a key to continue, the program presented a screen with instructions on what to do next, and after pressing a button, the program opened the respective experimental condition.
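The automatic random assignment described above can be sketched as follows. This is a hedged illustration only: the study's actual assignment program is not published, and the function name, seed, and alternating-assignment rule here are assumptions introduced for the example.

```python
import random

def assign_conditions(participant_ids, seed=0):
    """Shuffle participant IDs with a seeded RNG, then alternate
    assignment between the two treatments so group sizes stay balanced."""
    rng = random.Random(seed)          # seeded for reproducibility of this sketch
    ids = list(participant_ids)
    rng.shuffle(ids)
    conditions = ("PI", "prose")
    return {pid: conditions[i % 2] for i, pid in enumerate(ids)}
```

Alternating after a shuffle yields near-equal group sizes; a purely independent coin flip per participant would not (and indeed the study's groups were 59 and 46, consistent with unconstrained randomization).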
In the PI condition, participants first went through a brief introductory tutorial that taught how to go through the programmed instruction tutorials. Those who received the prose condition first saw an instruction screen and were taken directly to their experimental condition thereafter, as the only skills needed were scrolling up and down a web page and clicking hyperlinks.
In the PI condition, music clips and text were presented automatically, screen frame by screen frame, with a keyword missing within the presented text. Each clip was around 30 seconds long, and the program constantly repeated it while the student was working on the respective frame. To advance from one frame to the next, participants were required to respond discriminately to the music by typing an answer in a text box on the web page. Answer modes were two: completing a statement about the music, or typing in letters corresponding to the correct answer in multiple-choice questions. Two tries were allowed for open-ended questions, and one try for double- or multiple-choice questions. Once participants had answered, they pressed the Enter key, and were taken to a screen where they received feedback. The feedback message for correct answers read: "Your answer [answer here] is correct. Press Enter to continue." The feedback for incorrect answers read: "Your answer [answer here] is incorrect. Press Enter to continue." The music continued playing through the feedback screen. After pressing Enter, participants were either taken to the same frame again for a second try, or to the next question and music clip. If a participant answered all allowed attempts incorrectly, the program would give the feedback for an incorrect answer, provide the correct answer, move on to the next frame, and lower the score. Progression through the program was constantly visible to participants in the upper left corner of the frame window (e.g. "Frame 16 of 30"), and so was their current score in the program (e.g. "Score: 94%"). The number of tries for a given frame was also displayed (e.g. "Try: 1 of 2"). No time constraints were placed on participants. When the participants had finished all five tutorials, the program displayed an instruction screen to prepare them for the posttest to come. Pressing a button started the test.
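The frame-advance rules described above can be summarized in a short sketch. This is not the study's delivery software (which is not published); the function names and data shapes are hypothetical, but the rules follow the text: two tries for open-ended frames, one try for choice frames, and a point forfeited only when every allowed try is exhausted.

```python
def allowed_tries(frame_type):
    """Two tries for open-ended frames, one for choice-type frames."""
    return 2 if frame_type == "open" else 1

def grade_frame(frame_type, answer, responses):
    """Score one frame: return (point earned, feedback lines shown)."""
    feedback = []
    for response in responses[:allowed_tries(frame_type)]:
        if response.strip().lower() == answer.lower():
            feedback.append(f"Your answer '{response}' is correct.")
            return 1, feedback
        feedback.append(f"Your answer '{response}' is incorrect.")
    # All allowed tries used up: reveal the answer, award no point.
    feedback.append(f"The correct answer was '{answer}'.")
    return 0, feedback
```

A running score like the displayed "Score: 94%" would then just be the sum of earned points over frames attempted so far.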
In the prose web page condition, all the material was presented on one scrolling web page. The same instructional text and music as in the PI condition were used, but all blanks had been filled in, and multiple-choice items had been changed into normal paragraphs. Advancement was completely at participants' discretion and not contingent on any overt discriminative response. To hear the music sample relevant to a paragraph, participants clicked on a hyperlink by the paragraph. The clips were not on automatic repeat, but participants could click on each clip as often as they pleased. Clicking a
button reading "Done" would automatically open an instruction screen. After pressing another button, participants could start their posttest. The generalization test was the same across conditions and similar in form to the PI condition, but feedback was not presented after each frame. Upon completion of each condition, the delivery software presented a debriefing page, on which the experiment's details were displayed. After that, participants filled out an eight-question survey on paper to check for hearing impairments, former exposure to classical music, and opinions of the study. Participants had an option to answer the survey anonymously (see Appendix B).
Chapter Three

Results

Pretest data

The total sample was 116 participants, but 109 students showed up for the study. Data from four participants had to be removed due to unanticipated participant errors. Such errors included accidentally closing the web browser window and starting again, thereby distorting the data in the computerized database. The remaining sample was a total of 105 participants, with 46 receiving the prose treatment and 59 receiving programmed instruction. On the pretest, the prose condition scores ranged from 6% to 55% with a standard deviation of 8.86. Learners who received programmed instruction scored from 16% to 58%, with a standard deviation of 8.68. The distributions of the scores are very similar in shape, but the main difference lies in the lower range of the grade scale (see Figures 1 and 2).

[Figure 1. Pretest score distribution, prose condition. X-axis: grade percentage; y-axis: number of learners.]
[Figure 2. Pretest score distribution, PI treatment. X-axis: grade percentage; y-axis: number of learners.]

Analysis of tutorials

Characteristics of programmed instruction tutorials were measured individually, but scoring and measurement of each particular segment of prose material was not possible on the prose page due to the nature of the prose page's free-form design. However, time taken browsing the prose web page was measured. The PI participants took more time, on the average, to finish their material (32 minutes, standard deviation 13.59) than did their counterparts with prose (29 minutes, standard deviation 11.55). This difference proved to be statistically significant, t(58) = 18.03, p = .00. See Table 1 for a listing of PI tutorial averages. Differences in median posttest times between conditions were not remarkable (Table 2).

Table 1. Averages for individual programmed instruction tutorials

                          Renaissance  Baroque  Classical  Romantic  Modern  All tutorials
Mean score (%)                 88         89        90        82       89        88
Mean tutorial time (min)      5.12       6.73      6.32      8.17     5.37      6.33
Mean frame time (sec)        16.69      13.01     12.24     14.03    11.64     13.34
Number of frames                17         26        27        32       25      25.4

Table 2. Median times taking the pretest and posttests, across treatments

Treatment   Pretest (minutes)   Posttest (minutes)
Prose              6                   7
PI                 6                   6

Pretest-posttest comparisons

The difference between pretest and posttest scores was analyzed with paired-sample t-tests. The difference between pretest and posttest scores proved statistically significant for the prose condition, t(45) = 6.11, p = .00, as well as for the PI condition, t(58) = 6.44, p = .00. However, when analyzing posttest score distributions for a possible difference between the prose and programmed instruction methods, a one-way ANOVA did not indicate an effect, F(1,104) = .108, p = .743. A breakdown of pretest and posttest scores is listed in Table 3.

Table 3. Prose and PI pretest and posttest averages

Condition   Pretest score (%)   Posttest score (%)
Prose            29.9                 39.7
PI               32.8                 39.0
Total            31.6                 39.3

The prose group's scores on the posttest had a wide distribution (see Figure 3), ranging from 16% to 77%, with a standard deviation of 12.46. However, students who received programmed instruction produced a considerably narrower range of scores (see Figure 4), from 19% to 65%, standard deviation 9.59. The posttest's Kuder-Richardson reliability score was .66.
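The within-condition pretest-posttest comparison reported above is a dependent-samples t-test. As a hedged illustration of that statistic (not a reanalysis; the study's raw scores are not reproduced here, and the sample lists below are invented), the computation is simply the mean of the paired differences divided by their standard error:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Dependent-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-participant posttest-minus-pretest differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Invented example scores (percentages), for illustration only:
pre = [30, 25, 35, 28, 32]
post = [40, 38, 44, 35, 41]
```

The resulting t would be compared against a t distribution with n - 1 degrees of freedom, matching the df values of 45 and 58 reported for the two conditions.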
02468101214161820110192837465564738291100Score percentageNumber of students Figure 3. Posttest score distribution, prose treatment 02468101214161820110192837465564738291100Score percentageNumber of learners Figure 4. Posttest score distribution, PI treatment Post-experiment survey After the experiment, an eight-question survey was administered to assess participants opinions of their respective learning experiences, their music tastes and classical music listening habits, and to gauge for potential hearing impairments that might 15
have compromised results. One question asked for the outcome on the posttest; since those results are described in more detail above, they will not be recounted in the present subsection. Of 105 participants total, 99 surveys were retrievable. An analysis of the survey follows:

Q. 1: Have you got any major hearing impediments that you feel could have compromised your test results?

Hearing problems were reported in only three cases (see Table 4), but anonymity made filtering of the relevant participants' data impossible.

Table 4. Participants' reports of potential hearing problems

        Frequency   Percent
Yes             3       3.0
No             96      97.0
Total          99     100.0

Q. 2: What kind of instructional strategy did you receive?

Of the 99 participants who completed a questionnaire, 62 answered that they had received the programmed instruction treatment, and 36 marked that they had received the prose condition. This self-reporting is inaccurate and does not match the experiment sample, in which 59 students received programmed instruction and 46 received the prose method. One participant answered as having received both, so her answer was categorized as missing. Due to the discrepancy between these self-reported data and the computer-collected data, this item is deemed unusable for calculations of statistical significance.

Q. 3: In your opinion, the instructional strategy you got was... [effective vs. ineffective]

Regardless of experimental condition, 52 students judged their instructional method to be either very or rather effective, as opposed to 20 who thought it rather or very ineffective. The answers are broken down in Figure 5.
Figure 5. Participants' evaluations of their treatment's effectiveness

Q. 4: How interesting or boring was the lesson?

A majority, a total of 70 learners, found their treatment either interesting or very interesting. Twelve described it as boring or very boring (see Figure 6).

Figure 6. Participants' reports of how interesting or boring their lesson was
Q. 5: How often or seldom do you usually listen to classical music?

Most participants reported not listening often to classical music; only 10 claimed they listened to it often or every day. A third of participants reported listening seldom or never to classical music (Figure 7).

Figure 7. Classical music listening habits of participants

Q. 6: How likely or unlikely is this lesson to increase your taste for classical music, if only slightly (e.g., tune in more often to classical stations, check out the classical selection at your local music store, look for it on the internet, etc.)?

A total of 55 participants reported that they would be more likely than before to listen to classical music as a function of this study. Twenty-six participants claimed that taking part in the study would not influence them to listen more to classical music (Figure 8).
Figure 8. Participants' estimates of the lesson's likelihood to increase their taste for classical music

Q. 7: Is there anything you would like to add?

This was an open-ended question to get feedback from students on the experiment. Fifty-six of the 99 students who turned in the survey answered the question. A content analysis revealed the following most prominent issues:

- Both the programmed instruction and prose tutorials were interesting and fun.
- There was too much material for one session.
- The posttest was difficult considering what students thought they had learned.
Chapter Four

Discussion

This study extended the literature on programmed instruction by experimentally evaluating two approaches to auditory instructional materials. This was a departure from the line of research on textual and visual material that has dominated the field. Programmed instruction, in which participants advanced frame by frame, was compared with a free-form prose web page that allowed the learners to browse all of the material at once. Along with grades on a posttest, posttest times were measured.

When compared with other instructional approaches, programmed instruction generally yields better results than the competing approach. However, previous research has mostly been on verbal and/or visual material, and no research with the current study's particular focus is believed to exist. Therefore, the current study can lay claim only to being a pilot study. In the current study, a difference was not found between experimental conditions in posttest scores, nor in the time taken completing the posttest. Participants in the programmed instruction condition took three minutes longer than their prose condition counterparts to complete the tutorials, a difference that proved statistically significant but bears trivial practical significance, given that the groups performed evenly on the posttest.

The pretest/posttest had a modest Kuder-Richardson reliability score of .66, so inferences about instructional effects of the tutorials are presented with appropriate caution. Items on the test and items on the tutorials were drawn from the same item pool, and the tutorials were pretested and edited after a trial run. However, pretesting and statistical analyses of individual pretest/posttest items were not conducted before the experiment, which may compromise the validity of the test. Such validation is recommended for future research. PI learners averaged a score of 88% on the tutorials, a marked difference from their average posttest outcome of 39%.
This discrepancy, along with the modest reliability of the test, presents a challenge to the tutorials' and/or the test's internal and external validity. There are several possible explanations. The first is that the tutorials may have been too easy or not adequate in scope. The creator of the PI tutorials was not an expert in the field, and further isolation of music characteristics may have been needed to pin down the essential stimulus differences that define the periods of classical music. It is also possible that there were too many echoic frames that made guessing easy. For replications of the current study, it is recommended that the conceptual properties defining the musical periods be isolated in advance before revising the tutorials. Music samples on the posttest may also have been too subtle for discrimination, given the preparation resulting from the tutorials. Music styles tend to transcend time periods, and it is possible that the test was simply too difficult. The material, music history from medieval to modern times, may have been too extensive.

In addition, many aspects of possible participant behavior during the prose tutorial condition were uncontrolled variables, due to the nature of the instructional program. Although the prose condition software recorded how much time prose participants took, it was impossible to monitor them adequately, or to estimate the differential effect of the verbal stimulation received by their counterparts in the PI group (i.e., the resulting feedback to each PI frame).

On the post-experiment surveys, participants frequently commented that there was too much material for one session, an arguably respectable claim, given the scope of the subject matter. For future study, a sensible approach may be to combine the more similar musical periods initially and then gradually create more subtle discriminations through careful programming. This strategy would arguably make initial discrimination easier. For example, instead of using the common five periods, 1) early music/renaissance, 2) baroque, 3) classical, 4) romantic, and 5) modern, it would be possible to group the periods into three: 1) early music/renaissance/baroque, 2) classical/romantic, and 3) modern.
For this purpose, it would be necessary to employ a professional musicologist's help to pick the quintessentially characteristic music for each period and analyze its properties. Those samples would be used as prime examples of a given period, and gradual fading would be based on subtle deviances from their characteristics. The professional's help would also serve as a validity measure possibly lacking in the current study.

Another explanation for the discrepancy between tutorial scores and test scores may be the issue of music characteristics vs. chronology. Classification into the five periods was accomplished by using dates of the compositions rather than musical stimulus properties. In other words, the tutorials focused upon teaching characteristics of music by dates, not stimulus dimensions. This emphasis on chronology may have complicated the music discriminations. Oftentimes, discrimination between musical periods is difficult even for professionals. Even though laypeople may tend to group musical periods by chronology, professionals may be more likely to group them by characteristics of the music. As a post-hoc validation measure, a professional classical music director at a public radio station went through the posttest and got all but two questions correct as defined by the creator of the posttest. However, if categorization by stimulus dimensions rather than date of composition had been used, the music director's score would have been 100%. (For example, a fugue is best known as a baroque musical form, but the composer Ludwig van Beethoven is best known as a classical period composer. When taking the test, the professional listened to a fugue by Beethoven and guessed it was baroque, but the experimenter had coded the piece as classical to be congruent with the composer's time period.) Similar issues arise when comparing any periods with potential overlap: late baroque/early classical, late classical/early romantic, and so on. The third complication with the chronology approach is the definition of modern music. Modern, or 20th century, composers tend to write music that could be classified under any period, especially the romantic period. This makes it very difficult to tell modern from older music.

A related issue is the test design. Thirty-second music clips often do not adequately display characteristics of given musical periods, such as structure or lack of it, changes in dynamic range, orchestration, and so on.
This can partly be solved by using several clips from different sections of a given piece, but the technique still calls for accurate working definitions and rules that describe characteristics of the music.

In general, participants liked the materials, and several reported that they would be more likely to seek out classical music in the future as a function of taking part in the study. For follow-up in future research, collaboration with a music store might be established. This could be accomplished by giving participants discount coupons for classical music CD purchases at the store. The number of coupons used would give an indirect measure of the strength of the new repertoire. Another possibility would be to give discounts on any type of music CD and check how many classical vs. pop CDs are purchased. This would indirectly measure a new repertoire against a more established one.

The results of this pilot study clearly point to better ways of teaching music discrimination. It is quite possible that behavior analysis is a field uniquely prepared to face the challenge. It has the advantage of defining the dimensions of variables in the language of physics rather than metaphor (or chronology, in this case). In addition, behavior analysis focuses upon objective physical dimensions of stimuli in the effort to bring the verbal behavior of identification (or nomination) under the control of abstract qualities of stimulus configurations. The isolation of the stimulus properties that lead to the correct identification of musical periods (or, for that matter, ANY concept) should be greatly advanced by building upon the behavior-analytic discrimination research literature and its conceptual analysis. The next step in research will require the isolation of the stimulus dimensions that are the physical referents of the various periods of classical music.
Appendix A: Experimenter instructions to participants
Experimenter instructions to participants:

Welcome. Read these instructions very carefully. If you have any questions after reading this, please save them until you've finished reading.

Since we are teaching a course on learning and the application of learning principles, we need to practice what we preach and experimentally test our potential instructional programs. You are taking part in our evaluation of teaching methods as a part of this course on learning. Your personal information and everything you submit will be held strictly confidential, as if it were your final exam grades. This evaluation has received IRB approval for protection of personal information.

You will be doing three activities today: a pretest, an instructional program, and a posttest. When you have earned a score on the pretest, gone through the instructional program, and earned a score on the posttest, your final course average will be boosted by 3 additional points in return for your participation. The score you earn on the posttest today will NOT impact your course grade.

1) Plug your headphones into the green jack on the rear of the computer.
2) Open Internet Explorer and enter the URL address: http://sirocco.coedu.usf.edu/mcohen/gummi/music
3) Write your WebCT login name in the text box.
4) Follow the on-screen instructions.
Appendix B: Student survey of music lesson
Student survey of music lesson

This is your opportunity to tell us what you thought of the lesson you just had. Your input is extremely important. For multiple-choice questions, just check one. You may answer anonymously if you want.

Your name (optional): ____________________________________
Your course (optional): ____________________________________

1. What kind of instructional strategy did you receive?
[ ] A programmed tutorial that played music. I filled in blanks to answer.
[ ] A web page with many short paragraphs. I scrolled up and down the page and clicked on links to hear music examples.

2. What grade did you receive on the last test for this lesson? __________

3. In your opinion, the instructional strategy you got was...
[ ] Very effective
[ ] Rather effective
[ ] Neither effective nor ineffective
[ ] Rather ineffective
[ ] Very ineffective

4. How interesting or boring was the lesson?
[ ] Very interesting
[ ] Interesting
[ ] Neither interesting nor boring
[ ] Boring
[ ] Very boring

5. How often or seldom do you usually listen to classical music?
[ ] Every day
[ ] Often, but not every day
[ ] Every now and then
[ ] Seldom
[ ] Never

6. How likely or unlikely is this lesson to increase your taste for classical music, if only slightly (e.g., tune in more often to classical stations, check out the classical selection at your local music store, look for it on the internet, etc.)?
[ ] Very likely
[ ] Likely
[ ] Neither likely nor unlikely
[ ] Unlikely
[ ] Very unlikely

7. Have you got any major hearing impediments that you feel could have compromised your test results?
[ ] Yes
[ ] No

8. Is there anything you would like to add (you can also write on the back of this sheet)?
___________________________________________
___________________________________________
___________________________________________
___________________________________________
___________________________________________
___________________________________________
___________________________________________
Appendix C: Demographic Form/Informed Consent Form
Demographic Form for Dr. Bostow's courses (Summer Term, 2004)

Circle which course:
EDF 3214 (Human Development and Learning)
EDF 3228 (Human Behavior and Environmental Selection)
EDF 6211 (Psych. Foundations of Education)
EDF 6215 (Principles of Learning)
EDF 6217 (Behavior Theory and Classroom Learning)
EDF 7227 (Doctoral Course in Behavioral Analysis)

Your name: _(last)________________(first)________________ Gender: (M/F) ___________
Your Email address [print clearly]: _________________________________________
Social Security No. ________________________________ Age: ___________________
Your mailing address (while at USF): ________________________________________________
City: _______________ Zip: _______________
Home phone: ____________ Work phone: _____________

Various instructional techniques may be tried in this course. The following information will be used to determine whether they correlate in any way with your success in this course. If you feel it is an unwarranted intrusion into your life, simply skip that question. However, we request that you answer all possible questions that could relate to your success in this course. The relationships, if any, will be used to adjust teaching techniques and counsel future students.

How many course hours are you taking this term? ______________________________________
What year are you (fresh., soph., jr., sr., other)? _______________________________________
What is your major or specialization? ________________________________________________
What is your current GPA at USF (if transferred, list last GPA)? _________________________
If you have a computer at home, what type is it (IBM, clone, Apple-Macintosh, other)? __________
Does your computer have a CD-ROM drive? _______
If you work in addition to going to school, how many hours are you working per week? _______
Do you commute to campus classes? (Y/N) _____________ If so, how many miles do you travel each way? _________
What type of internet connection do you have? I.e., broadband _______ or dial-up connection (speed)? ____

I have listened carefully about the course EDF ______ in which I have enrolled. I understand that rules can change within the course, and that I will be notified ahead of time; this may include the deadline dates for quizzes throughout the course during various weeks. I understand that Dr. Bostow may run various sections with slightly different course requirements to compare how students perform. Further, I understand that I will participate in the evaluation of new instructional techniques as part of course requirements. I understand that any differences in the effectiveness of these trial techniques will not influence my final grade in the course. I have been given the opportunity to ask questions about the course operations and possible variations that may occur, and have received satisfactory answers. I understand the course requirements and agree to abide by them.

Student's signature: _______________________________ Date: _________________