

OPERANT (INSTRUMENTAL) CONDITIONING


The major theorists behind operant conditioning are Edward Thorndike, John Watson, and B. F. Skinner. This approach to behaviorism played a major role in the development of the science of psychology, especially in the United States. They proposed that learning is the result of the application of consequences; that is, learners begin to connect certain responses with certain stimuli. This connection causes the probability of the response to change (i.e., learning occurs).


Thorndike labeled this type of learning instrumental: using consequences, he taught kittens to manipulate a latch (i.e., an instrument). Skinner renamed instrumental conditioning operant conditioning because the term is more descriptive (i.e., in this learning, one is operating on, and is influenced by, the environment). Where classical conditioning illustrates S--R learning, operant conditioning is often viewed as R--S learning, since it is the consequence that follows the response that influences whether the response is likely or unlikely to occur again. It is through operant conditioning that voluntary responses are learned.




The three-term model of operant conditioning (S--R--S) incorporates the concept that a response cannot occur without an environmental event (e.g., an antecedent stimulus) preceding it. While the antecedent stimulus in operant conditioning does not elicit or cause the response (as it does in classical conditioning), it can influence it. When the antecedent does influence the likelihood of a response occurring, it is technically called a discriminative stimulus.


It is the stimulus that follows a voluntary response (i.e., the response's consequence) that changes the probability of whether the response is likely or unlikely to occur again. There are two types of consequences: positive (sometimes called pleasant) and negative (sometimes called aversive). These can be added to or taken away from the environment in order to change the probability of a given response occurring again.


General Principles


There are 4 major techniques or methods used in operant conditioning. They result from combining the two major purposes of operant conditioning (increasing or decreasing the probability that a specific behavior will occur in the future), the types of stimuli used (positive/pleasant or negative/aversive), and the action taken (adding or removing the stimulus).


Outcome of Conditioning

                     Increase Behavior                          Decrease Behavior

Positive Stimulus    Positive Reinforcement (add stimulus)      Response Cost (remove stimulus)

Negative Stimulus    Negative Reinforcement (remove stimulus)   Punishment (add stimulus)


Analyzing Examples of Operant Conditioning


There are five basic processes in operant conditioning: positive and negative reinforcement strengthen behavior; punishment, response cost, and extinction weaken behavior.


1. Positive Reinforcement--the term reinforcement always indicates a process that strengthens a behavior; the word positive has two cues associated with it. First, a positive or pleasant stimulus is used in the process, and second, the reinforcer is added (i.e., positive as in the + sign for addition). In positive reinforcement, a positive reinforcer is added after a response and increases the frequency of the response.


2. Negative Reinforcement--the term reinforcement always indicates a process that strengthens a behavior; the word negative has two cues associated with it. First, a negative or aversive stimulus is used in the process, and second, the reinforcer is subtracted (i.e., negative as in the - sign for subtraction). In negative reinforcement, after the response the negative reinforcer is removed, which increases the frequency of the response. (Note: There are two types of negative reinforcement: escape and avoidance. In general, the learner must first learn to escape before he or she learns to avoid.)


3. Response Cost--if positive reinforcement strengthens a response by adding a positive stimulus, then response cost has to weaken a behavior by subtracting a positive stimulus. After the response, the positive reinforcer is removed, which weakens the frequency of the response.


4. Punishment--if negative reinforcement strengthens a behavior by subtracting a negative stimulus, then punishment has to weaken a behavior by adding a negative stimulus. After a response, a negative or aversive stimulus is added, which weakens the frequency of the response.


5. Extinction--No longer reinforcing a previously reinforced response (using either positive or negative reinforcement) results in the weakening of the frequency of the response.


Rules for analyzing examples. The following questions can help in determining whether operant conditioning has occurred.


a. What behavior in the example was increased or decreased?


b. Was the behavior increased (if yes, the process has to be either positive or negative reinforcement) or decreased (if the behavior was decreased, the process is either response cost or punishment)?


c. What was the consequence / stimulus that followed the behavior in the example?


d. Was the consequence / stimulus added or removed? If added the process was either positive reinforcement or punishment. If it was subtracted, the process was either negative reinforcement or response cost.
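
Taken together, questions (b) and (d) form a small decision table. As an illustrative sketch (the function name and labels below are mine, not from the source), the classification can be written as:

```python
def classify_operant_process(behavior_strengthened, stimulus_added):
    """Classify an operant conditioning example from two observations:
    did the behavior's frequency increase, and was the consequence
    added to or removed from the environment?"""
    if behavior_strengthened:
        # Reinforcement always strengthens behavior.
        return "positive reinforcement" if stimulus_added else "negative reinforcement"
    # Both remaining processes weaken behavior.
    return "punishment" if stimulus_added else "response cost"
```

Each of the worked examples that follow answers these two questions and lands in one of the four cells.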


Examples. The following examples are provided to assist you in analyzing examples of operant conditioning.


a. Billy likes to camp out in the backyard. He camped out every Friday during the month of June. The last time he camped out, some older kids snuck up to his tent while he was sleeping and threw a bucket of cold water on him. Billy has not camped out for three weeks.


1. What behavior was changed? camping out


2. Was the behavior strengthened or weakened? weakened (eliminates positive and negative reinforcement)


3. What was the consequence? having water thrown on him


4. Was the consequence added or subtracted? added


Since a consequence was added and the behavior was weakened, the process was punishment.


b. Every time Madge raises her hand in class she is called on. She raised her hand 2 times during the first class, 3 times in the second, and 4 times during the last class.


1. What behavior was changed? hand raising


2. Was the behavior strengthened or weakened? strengthened (eliminates response cost, punishment, and extinction)


3. What was the consequence? being called on


4. Was the consequence added or subtracted? added


Since the consequence was added and the behavior was strengthened, the process is positive reinforcement.


c. Gregory is being reinforced using a token economy. When he follows a direction / command he earns a point. At the end of each day, he can buy free time, TV privileges, etc. with his points. When he misbehaves or doesn't follow a command, he loses points. Gregory used to call his mom names. Since he has been on the point system, his name calling has been reduced to almost zero.


1. What behavior was changed? name calling


2. Was the behavior strengthened or weakened? weakened (eliminates positive and negative reinforcement)


3. What was the consequence? losing points


4. Was the consequence added or subtracted? subtracted


Since the consequence was subtracted and the behavior was weakened, the process is response cost.


d. John did not go to the dentist every 6 months for a checkup. Instead, he waited until a tooth really hurt, then went to the dentist. After two emergency trips to the dentist, John now goes every 6 months.


1. What behavior was changed? going to the dentist


2. Was the behavior strengthened or weakened? strengthened (eliminates response cost and punishment)


3. What was the consequence? tooth no longer hurting


4. Was the consequence added or subtracted? subtracted


Since the consequence was subtracted and the behavior was strengthened, the process is negative reinforcement.


Applications of Operant Conditioning to Education


Our knowledge about operant conditioning has greatly influenced educational practices. Children at all ages exhibit behavior. Teachers and parents are, by definition, behavior modifiers: if a child is behaviorally the same at the end of the academic year, the teacher has not done his or her job, because children are supposed to learn (i.e., produce a relatively permanent change in behavior or behavior potential) as a result of the experiences they have in the school / classroom setting.


Behavioral studies in classroom settings have clearly established ways to organize and arrange the physical classroom to facilitate both academic and social behavior. Teaching itself has also been the focus of numerous studies, and this work has produced a variety of teaching models for educators at all levels. Programmed instruction is only one such model. Programmed instruction requires that learning be done in small steps, with the learner being an active participant (rather than passive), and that immediate corrective feedback be provided at each step.


Developed by W. Huitt and J. Hummel


Last Revised July 1997


http://chiron.valdosta.edu/whuitt/col/behsys/operant.html


Operant Conditioning


Operant conditioning, also called instrumental conditioning, is a method for modifying behavior (an operant) which uses contingencies between a discriminative stimulus, an operant response, and a reinforcer to change the probability of a response occurring again in that situation. This method is based on Skinner's three-term contingency, and it differs from the method of Pavlovian conditioning.


An everyday illustration of operant conditioning is training your dog to shake on command. Using the operant conditioning technique of shaping, you speak the command "shake" (the discriminative stimulus) and then wait until your dog moves one of his forepaws a bit (operant response). Following this behavior, you give your dog a tasty treat (positive reinforcer). After demanding ever closer approximations to shaking your hand, your dog finally comes to perform the desired response to the verbal command "shake."


Skinner is famous for the invention of the Skinner box, an experimental apparatus which he designed to modify animal behavior within an operant conditioning paradigm.


www.psychology.uiowa.edu


The Operant Conditioning of Human Motor Behavior


A very large body of experimental results has accumulated in the field of operant, or instrumental, conditioning of the rat, the pigeon, and other experimental animals. The application to human behavior of the laws generated by such research is most often done by the use of theory. An alternative method is to demonstrate that the manipulation of classes of empirically defined variables that produce specific and highly characteristic changes in the behavior of small experimental animals in Skinner boxes produces similar changes in the behavior of college students.


This paper reports procedures for the direct application of the variables defining the paradigm for operant conditioning to human behavior and shows that human beings act very much indeed like experimental animals when they are subjected to the same experimental treatments. It suggests that direct application of conditioning principles to some categories of human behavior may be justified. The procedures are simple and they may be followed by anyone, with a minimum of equipment.


That it is possible to condition human motor behavior will surprise few who are concerned with behavior theory. Nevertheless, it has not always been clear what behaviors will act as responses, what events will prove to be reinforcing stimuli, or exactly what procedures would most readily yield reproducible results. This paper describes methods that have been worked out for easy and rapid operant conditioning of motor behavior in humans, states characteristic findings, and reports sample results. Developed in a series of exploratory experiments in an elementary laboratory course in psychology, the methods may have a wider utility.


Development of the Method


In one year's class in the introductory laboratory, an attempt was made to reproduce the Greenspoon effect (1), in which the rate of saying plural nouns is brought under experimental control by the use, as a reinforcing stimulus, of a smile by the experimenter, or by his saying "Mmmm" or "Good." The results were indifferent: a few students had good success with some subjects; the majority failed with all their subjects. The successful students seemed, casually, to be the best-looking, most mature, and most socially acceptable; they tended to have prestige. This suggested that the procedure was effective because S cared about E's behavior; that is, he noticed and responded in one way or another to what E said or did.


This observation is consistent with the Guthrian (but Skinner-box-derived) view that if one could isolate any single property shared by reinforcing stimuli (whether primary or secondary), it would prove to be that all reinforcing stimuli produce a vigorous response of very short latency (2). Greenspoon's procedure was therefore modified to force S to respond to the stimuli that E wished to use as reinforcers. Thereafter, the incidence of failures to condition human Ss dropped considerably.


Using these methods, many kinds of stimuli have been found to be reinforcing in the hands of student experimenters, and a wide variety of responses have been conditioned. Data have been gathered on performance under regular reinforcement, and under such other schedules as variable and fixed interval, and variable and fixed ratio (3, 4), both in establishing rates of response and in yielding extinction curves of appropriate form after the termination of reinforcement. Experiments have been done on response differentiation, discrimination training, and chaining. Indeed, there is reason to believe that the whole battery of operant phenomena can be reproduced in a short time. Incidental data have been obtained on awareness, insight, or what-have-you.
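
The fixed schedules mentioned here have a simple mechanical form. As a rough sketch (the function names are mine, and the parameter values are only examples, not the ones used in these experiments):

```python
def fixed_ratio_due(responses_since_reinforcement, ratio):
    """Fixed-ratio schedule: reinforce once S has emitted `ratio`
    responses since the last reinforcement."""
    return responses_since_reinforcement >= ratio

def fixed_interval_due(seconds_since_reinforcement, responded, interval):
    """Fixed-interval schedule: reinforce the first response emitted after
    `interval` seconds have elapsed since the last reinforcement."""
    return responded and seconds_since_reinforcement >= interval
```

Variable schedules replace the fixed `ratio` or `interval` with a value drawn around a mean, which is what makes behavior under them so resistant to extinction.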


Here is a sample set of instructions to E for human conditioning. In presenting the method more fully, we shall amplify each section of these instructions in turn.


Procedure Human Operant Motor Conditioning


1. Instruction to subject: "Your job is to work for points. You get a point every time I tap the table with my pencil. As soon as you get a point, record it immediately. You keep the record of your own points--try to get as many as possible." As necessary: "I'm sorry, I can't answer any questions. Work for points." DO NOT SAY ANYTHING ELSE TO S. Avoid smiling and nodding.


2. Reinforcing stimulus: pencil tap.


3. Response: tapping forefinger to chin. Be sure the tap on the chin is complete before reinforcing--that is, be sure that S has tapped his chin and withdrawn his finger. During regular reinforcement, be sure S does not jump the gun and record a point before you give it to him. If S does this, withhold reinforcement and say, "You got no point that time. You get a point only when I tap the table. Be sure you get a point before recording."


4. Procedures: Observe S; determine the operant level of chin-tapping before giving instructions.


a. Approximation conditioning of chin-tap (described later).


b. 100 regular reinforcements of chin-tap.


c. Shift to


[1/2 of the subjects] 30-second fixed-interval reinforcement.


[1/2 of the subjects] fixed-ratio reinforcement at the ratio given by S's rate per 30 seconds.


[When shifting from regular reinforcement to the schedule, make sure that S doesn't extinguish. If his rate has been high, you'll have to shift him, perhaps, to a 20:1 ratio--with such a change, S will probably extinguish. Prevent this by shifting him first to a 5:1 ratio (for 2 minutes), then to 10:1 (for 2 minutes), then to 20:1. Similarly, put S on a 10-second F. I., then a 20-second F. I., and finally on a 30-second one.]


Continue for 500 responses.


d. Extinguish to a criterion of 12 successive 15-second intervals in which S gives not more than 3 responses in all.


5. Subject's awareness:


[1/4 of Ss] Record any volunteered statement made by S.


[1/4 of Ss] At the end of the experiment, ask, "What do you think was going on during this experiment? How did it work?"


[1/4 of Ss] Add to instructions: "When you think you know why you are getting points, tell me. I won't tell you whether you're right or wrong, but tell me anyway." At about the middle of each procedure, ask, "What do you think we are doing now?"


[1/4 of Ss] At the beginning of each procedure, give S full instructions:


a. "You'll get a point every time you tap your chin, like this." (Demonstrate.)


b. "From now on, you'll get a point for every twentieth response," or "... for a response every 30 seconds." "From now on, you'll get no more points, but the experiment will continue."


6. Records


a. Note responses reinforced during approximation; record time required, and number of reinforcements given.


b. Record number of responses by 15-second intervals. Accumulate.


c. Draw cumulative response curves.


d. Be sure your records and graphs clearly show all changes in procedure, and the points at which S makes statements about the procedure.


e. Compute mean rates of response for each part of the experiment.


f. Record all spontaneous comments of S that you can; note any and all aggressive behavior in extinction.
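
The arithmetic behind items (b), (c), and (e) is straightforward. A minimal sketch (the function names are mine, not the authors'):

```python
def cumulative_curve(interval_counts):
    """Turn per-15-second response counts (item b) into the points of a
    cumulative response curve (item c)."""
    total, curve = 0, []
    for count in interval_counts:
        total += count
        curve.append(total)
    return curve

def mean_rate_per_minute(interval_counts, interval_seconds=15):
    """Mean rate of response (item e), expressed in responses per minute."""
    total_seconds = len(interval_counts) * interval_seconds
    return sum(interval_counts) * 60 / total_seconds
```

A flat cumulative curve indicates extinction; a steep one, a high rate of response.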


General Notes


Duration and situation. As little as 15 minutes, but more typically a period of 40 to 50 minutes, can be allotted to condition an S, to collect data under regular and partial reinforcement schedules, to develop simple discriminations, and to trace through at least the earlier part of the extinction curve. The experiment should not be undertaken unless S has ample time available; otherwise Ss tend to remember pressing engagements elsewhere when placed on a reinforcement schedule. We have not tried, as yet, to press many Ss very much beyond an hour of experimentation.


The experiments can be done almost anywhere: in a laboratory room, in students' living quarters, or in offices. Background distractions, both visual and auditory, should be relatively constant. Spectators, whether they kibitz or not, disturb experimental results.


The E may sit opposite S, so that S can see him (this is necessary with some reinforcing stimuli), or E may sit slightly behind S. S should not be able to see E's record of the data. In any case, E must be able to observe the behavior he is trying to condition.


Subject and experimenter. Any cooperative person can be used as a subject. It does not seem to matter whether S is sophisticated about the facts of conditioning; many Ss who were successfully conditioned, and who gave typical data, had themselves only just served as Es. However, an occasional, slightly sophisticated S may try to figure out how he's supposed to behave and try to give good data. He will then emit responses in such number and variety that it is difficult for E to differentiate out the response in which he is interested.


People who have had some experience with the operant conditioning of rats or pigeons seem to become effective experimenters, learning these techniques faster than others. The E must be skilled in delivering reinforcements at the proper time, and in spotting the responses he wants to condition. With his first and second human S, an E tends to be a little clumsy or slow in reinforcing, and his results are indifferent. About a third of our students are not successful with the first S. Practice is necessary.


Apparatus. The indispensable equipment is that used by E to record: a watch with a sweep second hand, and paper and pencil. Beyond these, the apparatus man can have a field day with lights, bells, screens, recorders, and so on. This is unnecessary.


Instructions


Conditioning may occur when no instructions whatever are given, but it is less predictable. The instructions presented here give consistent success.


Subjects may be told that they are participating in a game, an experiment, or in the validation of a test of intelligence. All will work. Spectacular results may be achieved by describing the situation as a "test of intelligence," but this is not true for all Ss.


In general, the simpler the instructions the better. No mention should be made that S is expected to do anything, or to say anything. Experience suggests that if more explicit instruction is given, results are correspondingly poor. Elaborate instructions tangle S up in a lot of verbally initiated behavior that interferes with the conditioning process.


The instructions will be modified, of course, to fit the reinforcement. It seems to be important for S to have before him a record of the points he has earned. (This is not, of course, E's record of the data.) It seems to be better if he scores himself, whether by pressing a key that activates a counter, or by the method described here. Most Ss who do not have such a record either do not condition, or they quit working.


Reinforcing Stimuli


Any event of short duration whose incidence in time is under the control of E may be used as a reinforcing stimulus if S is instructed properly. The most convenient is the tap of a pencil or ruler on a table or chair arm, but E may say "point," "good," and so on. Lights, buzzers, counters all work. One student found that getting up and walking around the room and then sitting down was a very effective reinforcer for his instructed S. ("Make me walk around the room.")


The E may assign a value to the reinforcing stimulus in the instructions--e.g., for each 10 points S gets a cigarette, a nickel, or whatever. Members of a class may be told that if they earn enough points as Ss, they may omit writing a lab report.


Where no instructions are given, or where the instructions do not provide for an explicit response to a reinforcing stimulus (as in the Greenspoon experiment--i.e., when E wished to use a smile, or an "mmmm," with the intention of showing learning without awareness), many Ss will not become conditioned.


The most important features of the operation of reinforcement are (a) that the reinforcing stimulus have an abrupt onset, (b) that it be delivered as soon as possible after the response being conditioned has occurred, and (c) that it not be given unless the response has occurred. Delayed reinforcement slows up acquisition; it allows another response to occur before the reinforcement is given, and this response, rather than the chosen one, gets conditioned. The best interval at which to deliver a reinforcing stimulus seems to be the shortest one possible--the E's disjunctive reaction time.


When S has been conditioned and is responding at a high rate, he may show "conditioned recording"--i.e., he will record the point before E has given it to him. The E must watch for this.


When S can observe E, it is entirely possible that S's behavior is being reinforced, not by the chosen reinforcing stimulus, but by others of E's activities, such as intention movements of tapping the table, nods of the head, and recording the response. The effect of such extraneous reinforcers can be easily observed during extinction, when the designated reinforcing stimulus is withdrawn. The precautions to be taken here will depend upon the purpose of the experiment. The E should thus remain as quiet and expressionless as possible.


The Response


The E has great latitude in his choice of behavior to be conditioned. It may be verbal or motor; it may be a response with a measurable operant level before reinforcement; it may be a complex and infrequent response that S seldom, if ever, has performed. One qualification is that the response must be one that terminates relatively quickly, so that reinforcement can be given. (One E conditioned an S to bend his head to the left, reinforcing when the head was bent. The S held his head bent for longer and longer times, and so got fewer and fewer reinforcements as the procedure became effective. He became bored and stopped working.)


Motor behavior. The E may observe S for a few minutes before proposing to do an experiment on him and choose to condition some motor behavior S occasionally shows, such as turning his head to the right, smiling, or touching his nose with his hand.


The E will then first determine its operant level over a period of time before he starts to reinforce. Here, changes in rate of response as a function of the reinforcement variables demonstrate conditioning. Such behavior is easily conditioned without awareness.


The E may decide in advance on a piece of S's behavior he wishes to condition. In this case, he may choose something like picking up a pencil, straightening his necktie, and so on. If E chooses something as simple as this, he can usually afford to sit and wait for it to occur as an operant. If it does not, he may find it necessary to shape the behavior, as will be necessary if he chooses a relatively or highly unlikely piece of behavior, such as turning the pages of a magazine, slapping the ankle, twisting a button, looking at the ceiling, placing the back of the hand on the forehead, writing a particular word, or assuming a complex posture.


Many readers will question this use of the word response. It is being used in accordance with the definition made explicit in Skinner's The Behavior of Organisms:


Any recurrently identifiable part of behavior that yields orderly functional relationships with the manipulation of specified classes of environmental variables is a response.


So far, this concept has proven a useful one. We have not explored the outer limits of the concept, with respect either to the topography or to the consequences of the behavior--we have not sampled broadly enough to find parts of behavior, tentatively classifiable as responses, that didn't yield such functions when we tried to condition them.


The contingencies of reinforcement, established in advance by E, determine the specifying characteristics of a response. He may reinforce only one word, or one trivial movement. In this case, he gets just that word or movement back from S. If E reinforces every spoken sentence containing any word of a specifiable class (e.g., the name of an author), he gets back from S a long discussion of literary figures. Plotted cumulatively, instances of naming authors and titles in whole sentences behave as a response class. By restricting reinforcement to the naming of one author, the discussion is narrowed. This method may serve fruitfully in research on what some call response classes and others call categories of behavior.


Discussion


Operant conditioning as it was described in The Behavior of Organisms is concerned with the behavior that the layman calls voluntary. This characterization is still valid--the behavior during conditioning is not forced, as one might characterize the conditioned knee-jerk, or necessarily unconscious, as might be said of the conditioned GSR. Ss work because they want to. S's behavior is nonetheless lawful and orderly as a function of the manipulations of E, and his behavior is predictable by extrapolation from that of lower animals.


These assertions, like the procedure itself, involve no theoretical assumptions, presuppositions, or conclusions about what is going on inside S's head. It does not assert that all learning occurs according to this set of laws, or that this process of conditioning is typical of all human learning. It does not assert that S is no better (or worse) than a rat, or that his behavior is unintelligent, or that since, say, Ss get information from a reinforcing stimulus, so too do rats. The behavior is highly similar in the two cases--we leave it to others to make assertions to the effect that rats think like men, or that men think like rats.


The procedures can be characterized as bearing close relationship to a number of parlor games. Indeed, such conditioning might be considered by some as nothing more than a parlor game. This would not be the first time, however, that examples of rather basic psychological laws turned up in this context. Parlor games, like other recreational activities, are, to be sure, determined culturally, but it is doubtful that a parlor game could be found whose rules were in conflict with the general laws of behavior.


That the procedure is more than a parlor game is demonstrated by the fact that it provides a situation in which a number of the variables controlling voluntary behavior can be experimentally isolated and manipulated; that stable measures of a wide variety of behavior are yielded and, finally, that the procedure yields orderly data that may be treated in any one of a variety of theoretical systems.


Theoretical Discussion


The data lend themselves very well indeed to theoretical discussion in terms of perceptual reorganization, habit strength, expectancy, or knowledge of results, as well as to simple empirical description in the vocabulary of conditioning. Chacun à son goût ("to each his own").


Summary


A series of procedures is presented that enables an experimenter to reproduce, using the motor (and verbal) behavior of human subjects, functions that have previously been described in the behavior of rats and pigeons. Some remarks on awareness in the situation are made.


http://cogprints.soton.ac.uk/documents/disk0/00/00/06/04/cog00000604-00/biblio5.html


Comparison of Classical and Operant Conditioning


Classical and operant conditioning share many of the same basic principles and procedures. For example, Kimble (1961) has pointed out that the basic principles of acquisition, extinction, spontaneous recovery, and stimulus generalization are common to both types of learning. There are several differences, however, between classical and operant conditioning. Although a basic feature of operant conditioning is reinforcement, classical conditioning relies more on association between stimuli and responses. A second distinction is that much of operant conditioning is based on voluntary behavior, while classical conditioning often involves involuntary reflexive behavior. These distinctions are not as strong as they once were believed to be. For example, Neal Miller (1978) has demonstrated that involuntary responses, such as heart rate, can be modified through operant conditioning techniques. It now appears that classical conditioning does involve reinforcement, and many classical conditioning situations also involve operant behavior. For example, let's assume that Tina was conditioned to fear rats like Little Albert. She would first learn to associate the rat with the loud noise through classical conditioning. Then presentation of the rat would produce a fear reaction, and Tina would learn to escape from the aversive stimulus through operant conditioning (negative reinforcement). This is sometimes called the two-factor theory of avoidance conditioning (Mowrer & Lamoreaux, 1942).


Classical Conditioning                        Operant Conditioning

Acquisition                                   Acquisition

Extinction                                    Extinction

Spontaneous recovery                          Spontaneous recovery

Stimulus generalization                       Stimulus generalization

Association between stimuli and responses     Reinforcement

Based on involuntary reflexive behavior       Based on voluntary behavior


http://tip.psychology.org/skinner.html


Operant Conditioning


(B.F. Skinner)


Overview


The theory of B.F. Skinner is based upon the idea that learning is a function of change in overt behavior. Changes in behavior are the result of an individual's response to events (stimuli) that occur in the environment. A response produces a consequence such as defining a word, hitting a ball, or solving a math problem. When a particular Stimulus-Response (S-R) pattern is reinforced (rewarded), the individual is conditioned to respond. The distinctive characteristic of operant conditioning relative to previous forms of behaviorism (e.g., Thorndike, Hull) is that the organism can emit responses instead of only eliciting responses due to an external stimulus.


Reinforcement is the key element in Skinner's S-R theory. A reinforcer is anything that strengthens the desired response. It could be verbal praise, a good grade, or a feeling of increased accomplishment or satisfaction. The theory also covers negative reinforcers -- any stimulus that results in the increased frequency of a response when it is withdrawn (different from aversive stimuli -- punishment -- which result in reduced responses). A great deal of attention was given to schedules of reinforcement (e.g., interval versus ratio) and their effects on establishing and maintaining behavior.


One of the distinctive aspects of Skinner's theory is that it attempted to provide behavioral explanations for a broad range of cognitive phenomena. For example, Skinner explained drive (motivation) in terms of deprivation and reinforcement schedules. Skinner (1957) tried to account for verbal learning and language within the operant conditioning paradigm, although this effort was strongly rejected by linguists and psycholinguists. Skinner (1971) deals with the issue of free will and social control.


Scope/Application


Operant conditioning has been widely applied in clinical settings (i.e., behavior modification) as well as teaching (i.e., classroom management) and instructional development (e.g., programmed instruction). Parenthetically, it should be noted that Skinner rejected the idea of theories of learning (see Skinner, 1950).


Example


By way of example, consider the implications of reinforcement theory as applied to the development of programmed instruction (Markle, 1969; Skinner, 1968):


1. Practice should take the form of question (stimulus) - answer (response) frames which expose the student to the subject in gradual steps


2. Require that the learner make a response for every frame and receive immediate feedback


3. Try to arrange the difficulty of the questions so the response is always correct and hence a positive reinforcement


4. Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal praise, prizes and good grades.
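
The four implications above can be sketched as a tiny frame loop; the frame texts and function below are hypothetical illustrations of the idea, not taken from Markle or Skinner:

```python
def run_frames(frames, answer_fn):
    """Present small-step question frames one at a time (implication 1),
    require an active response to every frame (implication 2), and give
    immediate feedback after each (implications 2-4)."""
    feedback = []
    for question, answer in frames:
        response = answer_fn(question)            # learner's active response
        if response == answer:
            feedback.append("correct")            # positive reinforcement
        else:
            feedback.append(f"review: {answer}")  # immediate corrective feedback
    return feedback

# Hypothetical two-frame lesson on this document's own vocabulary.
frames = [
    ("Adding a pleasant stimulus to strengthen a response is ___ reinforcement.", "positive"),
    ("Removing an aversive stimulus to strengthen a response is ___ reinforcement.", "negative"),
]
```

Arranging the frames so the learner is almost always correct (implication 3) keeps the feedback reinforcing rather than punishing.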


Principles


1. Behavior that is positively reinforced will recur; intermittent reinforcement is particularly effective


2. Information should be presented in small amounts so that responses can be reinforced (shaping)


3. Reinforcements will generalize across similar stimuli (stimulus generalization), producing secondary conditioning


www.dushkin.com

