
Concepts and Principles

Lisa N. Britton, Matthew J. Cicoria, in Remote Fieldwork Supervision for BCBA® Trainees, 2019

Schedules of Reinforcement

Schedules of reinforcement are very challenging for many trainees to grasp. Your instruction in this area should include the following topics:

Continuous reinforcement

Intermittent schedules of reinforcement

Fixed ratio schedule

Variable ratio schedule

Fixed interval schedule

Variable interval schedule

Compound schedules

Concurrent schedule

Multiple schedule

Chained schedule

Mixed schedule

Tandem schedule

Alternative schedule

Conjunctive schedule (Cooper et al., 2007, pp. 305–320)

Rehearsal and Performance Feedback

Provide examples of various schedules and have the trainees determine which schedule is in effect for each example. Provide feedback on which answer is correct and why, and give each trainee individual feedback on his or her score. Continue providing instruction, assigning readings, and presenting further examples until the trainees meet the previously established criterion on this activity. Appendix E includes examples for you to use within your instruction.

Ethics Related to Schedules of Reinforcement

Emphasize with your trainees that we have an ethical obligation to thin schedules of reinforcement toward the natural reinforcers available in the environment. It is also critical that we refrain from using reinforcers that may be harmful to our clients, even when they are effective (Bailey & Burch, 2016, p. 135).

URL: https://www.sciencedirect.com/science/article/pii/B9780128159149000043

Skill Acquisition

Jonathan Tarbox, Courtney Tarbox, in Training Manual for Behavior Technicians Working with Individuals with Autism, 2017

5.15.1 Intermittent Reinforcement

As described earlier in this section, intermittent reinforcement is when only some (i.e., not all) occurrences of a behavior are reinforced. Most of our everyday behavior is on some kind of intermittent schedule of reinforcement, so we need to teach the learners with ASD with whom we work to adjust to and rely upon intermittent reinforcement. To use intermittent reinforcement for encouraging maintenance, first check the maintenance section of the skill acquisition plan or check any existing separate maintenance plan. Generally speaking, reinforcement should be thinned gradually: from continuous reinforcement during skill acquisition, to reinforcing every other correct response, to reinforcing every two to three correct responses on a variable and unpredictable schedule.
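The thinning progression just described can be sketched as a simple decision rule. The following Python sketch is illustrative only: the phase names, the every-other-response middle step, and the 1-in-2.5 maintenance probability are assumptions chosen to mirror the text, not part of any published protocol.

```python
import random

def should_reinforce(correct_response_count, phase):
    """Decide whether the nth correct response earns a reinforcer.

    phase: "acquisition" -> continuous reinforcement (every response)
           "thinning"    -> every other correct response
           "maintenance" -> unpredictable schedule averaging one
                            reinforcer per 2-3 correct responses
    """
    if phase == "acquisition":
        return True
    if phase == "thinning":
        # reinforce every other correct response
        return correct_response_count % 2 == 0
    # maintenance: reinforce with probability 1/2.5, i.e., on average
    # every 2-3 correct responses, at unpredictable points
    return random.random() < 1 / 2.5

# During thinning, responses 1..6 yield: no, yes, no, yes, no, yes
pattern = [should_reinforce(n, "thinning") for n in range(1, 7)]
```

A fuller implementation would track the count per learner and per target skill; this sketch only captures the schedule logic itself.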

URL: https://www.sciencedirect.com/science/article/pii/B9780128094082000052

Treatment Components

Jonathan Tarbox, Taira Lanagan Bermudez, in Treating Feeding Challenges in Autism, 2017

4.1.2 Reinforcement Schedules

The feeding intervention should clearly specify when and how much reinforcement should be provided, referred to as the “schedule of reinforcement.” Continuous reinforcement schedules provide reinforcement following every instance of the target behavior. For example, the client receives one bite of preferred food following each bite of nonpreferred food he accepts. Continuous reinforcement is also referred to as a Fixed Ratio 1 schedule of reinforcement. In contrast, intermittent schedules of reinforcement specify that only some responses will result in a reinforcer. Intermittent reinforcement can be delivered after the client eats a fixed number of bites (e.g., the client receives reinforcement after every 5 bites he accepts) or on a variable schedule, after a varying number of responses (e.g., the client receives reinforcement after approximately 5 bites, ranging from 3 to 7).
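The fixed versus variable distinction in the examples above can be made concrete with a short sketch. This is a hypothetical helper, assuming a fixed ratio means exactly `value` bites and a variable ratio means a number drawn uniformly from `value - spread` to `value + spread` (e.g., 3 to 7 around 5):

```python
import random

def bites_until_reinforcer(schedule, value, spread=0):
    """How many accepted bites are required before the next reinforcer
    (e.g., one bite of preferred food) is delivered."""
    if schedule == "FR":
        # fixed ratio: exactly `value` bites, every time (FR1 = continuous)
        return value
    if schedule == "VR":
        # variable ratio: roughly `value` bites, at an unpredictable point
        return random.randint(value - spread, value + spread)
    raise ValueError(f"unknown schedule: {schedule}")

fr1 = bites_until_reinforcer("FR", 1)            # continuous (FR1): 1
fr5 = bites_until_reinforcer("FR", 5)            # every 5th accepted bite
vr5 = bites_until_reinforcer("VR", 5, spread=2)  # somewhere in 3..7
```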

To maximize effectiveness at the beginning of the treatment, use a denser schedule of reinforcement, so that varied and flexible eating is richly reinforced. After the client’s eating has reliably improved, consider changing to intermittent reinforcement. Gradually decreasing the frequency of reinforcement is referred to as “schedule thinning,” described in Chapter 6, Common Treatment Packages and Chapter 8, Caregiver Training and Follow-Up.

URL: https://www.sciencedirect.com/science/article/pii/B9780128135631000045

Methods in Behavioral Pharmacology

Richard A. Meisch, Gregory A. Lemaire, in Techniques in the Behavioral and Neural Sciences, 1993

9.1 Dependent variables

The basic dependent variables in studies of drugs’ reinforcing effects are the numbers of responses and drug deliveries. These are usually expressed in terms of rate, that is, numbers per unit of time. There are two additional measures related to response rate. One is the time course of responding (i.e. the temporal location of responding during experimental sessions). The other is the pattern of responding. Different schedules of intermittent reinforcement generate different patterns of responding and the pattern of responding that is characteristic of a particular reinforcement schedule will be present if the behavior is well controlled by the schedule. A change in the pattern of responding within a session can indicate changes due to direct drug effects, such as alterations in motor behavior that follow ingestion of large amounts of pentobarbital.

Two dependent variables related to the number of drug deliveries are the time course of drug deliveries and the total amount of drug injected or consumed per session. The time course of drug deliveries can be very important, for the effects of a drug depend not only on the amount consumed but also on the rate and temporal distribution of consumption. A given drug amount can have radically different effects, depending on the time course of intake. Ingestion of a given amount of drug within a short period of time will produce very different effects from a low, steady rate of intake of the same amount of drug. Total drug consumption per experimental session is usually expressed as the total amount of drug ingested per unit of body weight. When this information is not provided, it is difficult to compare a study with other studies or to judge what the direct drug effects may have been.
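The two normalizations described above, responses per unit time and drug intake per unit body weight, amount to simple divisions. The session numbers in this sketch are illustrative, not taken from any study cited here.

```python
def response_rate(num_responses, session_minutes):
    """Basic dependent variable: responses per minute."""
    return num_responses / session_minutes

def intake_per_kg(total_drug_mg, body_weight_kg):
    """Total session intake normalized to body weight (mg/kg), the form
    in which total consumption per session is usually reported."""
    return total_drug_mg / body_weight_kg

# Hypothetical session: 600 lever presses over 120 min, with 12 mg of
# drug self-administered by a 0.3 kg rat
rate = response_rate(600, 120)   # 5.0 responses per minute
dose = intake_per_kg(12, 0.3)    # about 40.0 mg/kg
```

As the text notes, reporting only raw counts (600 presses, 12 mg) without session length and body weight makes cross-study comparison impossible; the normalized values are what allow it.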

The most commonly used measure in drug self-administration studies is rate of responding. The use of response rate reflects the importance of this measure in the area of operant conditioning generally (cf. Katz, 1989). However, absolute response rate can be a misleading indicator of reinforcing effects. If one uses an incorrect measure of reinforcing effects, then it is likely that one will misinterpret the results of operant conditioning studies, including drug self-administration studies. Unfortunately, this important point is not widely appreciated.

The problematic features of absolute response rates can be clarified by a discussion of three other important topics: (1) relative response rates; (2) reinforcing effects of different drug doses; and (3) the shapes of dose-response functions. The findings of two sets of studies from our laboratory bear on these issues. In the first set, interactions between schedule size and drug dose were studied. In the second set, effects of concurrent access to different drug doses were analyzed.

URL: https://www.sciencedirect.com/science/article/pii/B978044481444950016X

Children & Adolescents: Clinical Formulation & Treatment

C. Eugene Walker, in Comprehensive Clinical Psychology, 1998

5.22.2.8 Effectiveness of Treatments

Since there is a spontaneous remission rate of approximately 15% each year and since virtually all people are able to control urination by the teenage years, it is often difficult to know whether a particular treatment has been successful or not. There have been thousands of studies in this area, however, and the following conclusions are justified. First, there is no doubt that the most effective form of treatment is the pad-and-bell (Djurhuus, Norgaard, Hjalmas, & Wille, 1992; Forsythe & Butler, 1989; Scott, Barclay, & Houts, 1992). Application of the pad-and-bell generally results in cessation of bedwetting for between 75% and 90% of cases over an 8–12 week period. Unfortunately, relapse rates are substantial and can be as high as 40% (Doleys, 1977). Reapplication of the procedure upon relapse is generally successful and in less time than the original course of treatment. Occasionally a third or fourth application may be necessary. Two other approaches that have been suggested to deal with the relapse problem are the use of overlearning (Young & Morgan, 1972) and intermittent schedules of reinforcement (Finley, Rainwater, & Johnson, 1982).

The various medications that have been employed have had only modest success. Although medications result in some improvement in most children, they often do not produce total cessation of the wetting and relapse rates are exceedingly high when medications are withdrawn. This, coupled with the ongoing expense and possibly dangerous side effects of the medications, makes these not the treatment of first choice except for short term interventions (Maizels et al., 1993; Thompson & Rey, 1995).

Other than a few anecdotal and case reports, psychotherapy and family therapy have not been shown to have significant effectiveness for enuresis (DeLeon & Mandell, 1966). The initial reports on cognitive psychotherapy indicate that this approach may have a higher success rate than more traditional approaches to psychotherapy (Ronen et al., 1992). Hypnosis has been shown to be effective when properly employed (Olness, 1975). Retention control and sphincter exercises initially showed good results but later research failed to confirm earlier findings (Doleys, Schwartz, & Ciminero, 1981; Harris & Purohit, 1977). Nevertheless, there is still interest in this approach and the sound physiological rationale would suggest that, properly employed, this treatment might be effective. Such approaches as restricting fluids, support and encouragement, awakening schedules, and so forth, are modestly effective with relatively simple cases. However, they are not sufficient for more difficult cases. Numerous reports have made excessive claims for the effectiveness of diet, acupuncture, chiropractic manipulation and so forth. Additional and more sophisticated research is needed on these approaches.

Probably the most effective approaches available are the multiple component programs by Azrin and Foxx, and Houts and his colleagues (Walker, 1995). These have much to recommend them in terms of careful research and demonstrated effectiveness. Their main drawback is the elaborate procedures required by the protocols and the amount of training, supervision, and effort required for successful execution of these programs. Some researchers doubt that the increase in efficiency of these programs over the simple use of a pad-and-bell justifies the amount of effort required.

URL: https://www.sciencedirect.com/science/article/pii/B0080427073001310

Behavior Analysis

H.S. Roane, A.M. Betz, in Encyclopedia of Human Behavior (Second Edition), 2012

Basic Principles of Behavior

Reinforcement may be considered the most important principle of behavior and is the key to behavior change. Reinforcement occurs when there is a change in a stimulus event or condition that immediately follows a response, which increases the future frequency of that response under similar conditions. The changes in stimuli that function as reinforcers can be described as either presenting a new stimulus into the environment or removing an already present stimulus from the environment.

Positive reinforcement occurs when a behavior is immediately followed by the presentation of a stimulus which, as a result, increases the future frequency of that behavior. For example, a rat pushes a lever in an operant chamber and receives a pellet of food. If the frequency of lever pressing increases, one can conclude that the presentation of the food pellet positively reinforced the lever-pressing behavior. A similar effect might be observed for the child who throws a tantrum in the department store. If the parent ‘gives in’ to the child's behavior by purchasing a toy, the future occurrence of throwing a tantrum is strengthened such that the child may be more likely to throw a tantrum in the future to produce a desired outcome. (It should be noted that positive reinforcement is a technical term used to describe the effect of a contingency on the occurrence of a response. This is in contrast to the term ‘reward’ which is used to describe an item or activity presented to someone in an attempt to change behavior. That is, reinforcement is an effect demonstrated by an increase in behavior, whereas a reward is simply something presented to a person that may or may not affect their behavior).

When the frequency of a behavior increases because a stimulus in the environment has been removed, the process is referred to as negative reinforcement. Negative reinforcement is characterized by escape or avoidance contingencies in which the organism emits a response that either removes or avoids the presentation of an aversive stimulus. The concept of negative reinforcement can be demonstrated by revisiting the rat and operant chamber. For example, a low dose of electricity is sent through the floor of the operant chamber, delivering a mild shock to the rat; when the rat pushes the lever, the shock is terminated. If the frequency of lever presses in the presence of the shock increases, the lever-press response has been negatively reinforced. In a more applied example, loosening one's belt following a large meal often results in the temporary attenuation of the discomfort associated with eating too much. In this example, the response of loosening the belt is reinforced by the cessation of aversive stimulation (physical discomfort), which increases the future likelihood of this response in similar situations.

It is important to note that once a behavior has been established and strengthened with reinforcement, it is not necessary to reinforce that behavior on each occurrence. Many behaviors are maintained by intermittently reinforcing their occurrence. An intermittent schedule of reinforcement is a contingency of reinforcement in which some, but not all, occurrences of the behavior produce reinforcement. However, if reinforcement is withheld for all occurrences of a previously reinforced behavior, the frequency of that behavior will decrease to levels similar to those demonstrated prior to reinforcement or will cease altogether. This behavioral procedure is referred to as extinction.

Punishment is another principle of behavior that is defined based on its function. Punishment occurs when a change in a stimulus, event, or condition immediately follows a behavior and decreases the future frequency of that behavior. Like the process of reinforcement, punishment affects behavior through either the presentation or the removal of a stimulus. If an aversive stimulus is presented contingent on a particular behavior and that behavior decreases as a result, positive punishment has occurred. For example, when a child runs into the street unsupervised, his mother reprimands him; if the frequency of the child's running into the street then decreases, positive punishment has occurred. In contrast, negative punishment occurs when a stimulus (or access to forms of stimulation) is removed from the environment contingent upon a response, and the future frequency of that response decreases. An example of a negative punishment contingency is a situation in which a child throws a toy and the mother then takes the toy away so he cannot play with it. In this example a preferred stimulus, the toy, is removed following inappropriate behavior. Timeout is another example of a negative punishment contingency in that, while in timeout, the individual cannot access sources of reinforcement. If a punishment contingency is removed, the behavior will ultimately recover (increase) to levels near those seen prior to punishment. This process is called recovery from punishment.

Although punishment has been shown to be an effective procedure for decreasing behavior, it can be argued that there are potential side effects and problems that occur when implementing punishment procedures. First, punishment can produce emotional and aggressive reactions. This is seen especially with positive punishment procedures, in which an aversive stimulus is presented as a consequence for a response. Second, inappropriate escape and avoidance behaviors may arise when a behavior is being punished. For example, a child may begin to lie or hide behaviors to avoid contacting the punishment contingency. Third, punishment may involve undesirable modeling of the punishing behavior. Finally, the decrease in the undesirable behavior of the person being punished may negatively reinforce the behavior of the punisher. In other words, the person implementing the punishment contingency may implement punishment procedures more frequently in the future.

Critical variables for the use of both punishment and reinforcement are consistency and contiguity. A consequence should be implemented consistently to obtain the desired effect on behavior. If reinforcement is applied sporadically during acquisition, the response will strengthen more slowly (although, once a response is established, intermittent reinforcement makes it more, not less, resistant to extinction). Likewise, if punishment is applied on a variable basis, the behavior will not decrease as quickly. Contiguity also plays a role in the efficacy of reinforcement and punishment procedures. Generally speaking, delayed consequences are less effective at changing behavior than are more immediate consequences.

Some behavior analysts argue that, from a functional perspective, reinforcement and punishment are the only principles needed to explain the basic effects of behavioral consequences. However, a number of factors may influence a response within behavioral contingencies. One such factor is an antecedent stimulus that signals the availability of reinforcement. Such a stimulus is called a discriminative stimulus (SD). For example, a friend asking ‘How are you?’ acts as an SD for you to say, ‘I'm fine,’ which is then reinforced by social approval. One would not say, ‘I'm fine,’ in the absence of another person asking how you are feeling. Likewise, stopping at a traffic signal is an example of behavior under the influence of an SD. There is no physiological response governing this behavior; rather, our learning history dictates the relation between stopping or moving in the presence of red, yellow, or green lights. When a response comes under the control of some stimuli and not others, it is said to be under stimulus control (as in the case of stopping, rather than driving on, in the presence of a red light).

Another factor that may potentially influence a response within a behavioral contingency is the reinforcing value of the consequence. A motivating operation (MO) refers to an environmental variable that alters the reinforcing effectiveness of a stimulus, event, or condition and alters the current frequency of all behavior that has been reinforced by that stimulus event or condition. A common example of an MO is food deprivation. Food deprivation acts as an MO that increases the reinforcing effectiveness and value of food. Therefore, food deprivation evokes behavior that, in the past, has been reinforced with food. For example, the response of food ingestion (e.g., eating a sandwich) is more likely to occur when one has not eaten for several hours. It is unlikely that the sandwich will be highly reinforcing immediately after eating a large lunch.

Throughout the past 70 years, literally thousands of studies have supported the fundamental principles of behavior analysis through extensive empirical research carried out in both basic laboratory and applied settings. In the following sections, the focus of research and practice will be outlined for both EAB and ABA.

URL: https://www.sciencedirect.com/science/article/pii/B9780123750006000598

Obsessive-compulsive disorder from the Bayesian brain framework (Fradkin et al., 2020): Critical and supplementary comments

Peter Prudon, in Journal of Obsessive-Compulsive and Related Disorders, 2021

6.2 Sources of transition uncertainty

Because non-optimal learning histories can be very different, the resulting trait-uncertainty may also be very different, as the three case vignettes already showed. Sources of either heterogeneity or extent of trait-uncertainty may be found by investigating:

How stable were the contingencies of reinforcement experienced?

To what degree was reinforcement non-contingent?

What kinds of intermittent reinforcement schedules were experienced? Were they stretched gradually over time, or abruptly?

How positive were the “rewards” and how negative the “punishments” featuring in the contingencies of reinforcement? How gradually or abruptly had the “rewards” been attenuated or enhanced?

How representative were the contingencies experienced during one's youth for the circumstances met later in life: in terms of their nature, intermittence, and stability?

How long ago, or how recently, did a certain “regime” of reinforcement contingencies start, and how long did it last?

How much of what has been learned was based on direct experience with the contingencies and feedback from their natural consequences (i.e., contingency-shaped behavior), and how much was based on instruction, demonstration, social feedback, and knowledge transfer (i.e., rule-governed behavior)?10

If there was a one-sided emphasis on learning from instruction etc., at the expense of learning from direct experience, at what age did this emphasis start, and how long did it last?

This latter peculiarity of learning histories may help explain the dissociation between reflectively knowing how the world is constructed and what one should do (rule-governed) and nevertheless acting in a much more primitive, irrational way (contingency-shaped). Regarding this issue, the authors report an interesting finding of Vaghi et al. (2017):

“Participants completed a predictive-inference task designed to identify how action and confidence evolve in response to surprising changes in the environment. While OCD patients (like controls) correctly updated their confidence according to changes in the environment, their actions (unlike those of controls) mostly disregarded this knowledge. Therefore, OCD patients develop an accurate, internal model of the environment but fail to use it to guide behavior.” (Vaghi et al., 2017, p. 349)

Fradkin et al. (2020, p. 20) add: “… the authors report that patients acted as if the contingencies had to be learned anew on each trial. … [Such] behavior indicates a belief that … outcomes cannot be reliably predicted by past states … [However, this belief merely represents] a latent ‘Bayesian belief’ driving action, …[not a] “meta-cognitive self-reported confidence …”

(Vaghi et al., 2017): “… we were able to demonstrate a novel dissociation in which actions can underutilize (or fail to access) information about the environment that is fully available to the decision maker and accessible to reports of confidence.” (p. 354) “Furthermore, our work shows that the degree of uncoupling between confidence and action is correlated with the severity of the OCD symptomatology, showing that this uncoupling … might be at the core of the computational deficit that characterizes OCD.” (p. 353). [It helps to explain] “the ego-dystonic nature of obsessive-compulsive disorder (OCD), where compulsive actions are recognized as disproportionate …” (p. 349).

It would be interesting to learn the authors’ view of this dissociation between “rule-governed” predictions and “contingency-shaped” ones. Have these twin concepts already been assimilated within the Bayesian approach, and applied to pathological behavior?

URL: https://www.sciencedirect.com/science/article/pii/S2211364921000191

Mean girls and bad boys: Recent research on gender differences in conduct disorder

Olga V. Berkout, ... Alan M. Gross, in Aggression and Violent Behavior, 2011

11 Theoretical Implications

Despite methodological issues noted earlier, several conceptual implications concerning the development of CD across gender can be gleaned from the collective progress of literature in the past decade. Conduct disorder can be conceptualized from a behavior analytic perspective, which fits within the purview of evidence-based treatments for the disorder (e.g., Henggeler, Cunningham, Schoenwald, Borduin, & Rowland, 2009). It is possible, for example, to view variables related to contextual factors as antecedents and the consequences of negativistic responding as maintaining behaviors given reinforcement. Organismic variables related to individual differences can be viewed as a filter that establishes reinforcement salience and provides a foundation for environmental selection (Goldfried & Sprafkin, 1976; Thompson, 2007). Temperamental variables, such as decreased autonomic response to potential threat and greater presence of CU traits, may alter effectiveness of social reinforcers and punishers so that behavior is less responsive to social control than to other stimuli. Organismic variables interact with contextual variables to shape an antisocial behavioral repertoire. Use of harsh and inconsistent punishment by parents provides both modeling of negative behavior and an intermittent schedule of reinforcement for engagement in negative behavior (particularly resistant to extinction). When children enter the school environment, engagement in antisocial and socially deviant behavior leads to academic and social difficulties, making association with deviant peers and subsequent antisocial behavior the easiest way to obtain rewards from the environment (Patterson et al., 1989).

Several adjustments to this general theoretical model of CD are apparent on the basis of the gender differences reviewed. For example, gender influences both organismic and contextual variables within CD development. Girls with CD are more likely to present with comorbid internalizing disorders, suggesting the presence of a general tendency towards greater behavioral inhibition. Contrary to findings among males, autonomic under-arousal and decreased reactivity do not appear to be related to female social deviance and externalizing (Isen et al., 2010; Keenan et al., 1999; Loeber & Keenan, 1994; Marsman et al., 2008; Ortiz & Raine, 2004). Girls who develop CD appear to develop it in spite of, rather than because of, a greater tendency towards behavioral inhibition, suggesting that a greater constellation of risk factors may be needed for the disorder to develop. Other risk factors show a similar pattern; unlike boys, girls typically display both unhelpfulness towards others and general sensitivity to reinforcement in developing CD (Cote et al., 2010). Potentially, the combination among females of the organismic variables of greater sensitivity to punishment and ability to inhibit behavior may require more extensive shaping for an antisocial behavioral repertoire to develop.

Additionally, recent literature points toward contextual particulars in the etiology of CD. Parental modeling of negative social interactions and inappropriate provision of consequences appear to shape engagement in antisocial behavior among both males and females (Hipwell et al., 2008; Pajer et al., 2008). Greater use of harsh punishment (argued to be more acceptable with male children) has been proposed to partially influence the gender difference in CD prevalence, although the retrospective nature of the data limits conclusions (Meier et al., 2009). Disadvantaged environmental background, which may provide poorer availability of reinforcement and greater reinforcement of antisocial behavior, appears to be related to CD development among both males and females (Keenan, Hipwell, et al., 2010). Girls who experience early physical maturity appear to experience additional risk, potentially due to social reinforcement of age inappropriate behavior and greater likelihood of association with a deviant peer group (Burt et al., 2006; Graber et al., 1997). Similar environmental factors appear to be relevant to CD development, but boys' and girls' gender and behavior may evoke and elicit differential provision of responding from their social environment.

URL: https://www.sciencedirect.com/science/article/pii/S1359178911000784

Which of the following is true regarding schedules of reinforcement?

Which of the following is true regarding schedules of reinforcement? Partial reinforcement of target behavior leads to greater resistance to extinction than does continuous reinforcement.

What are the 4 schedules of reinforcement quizlet?

Fixed ratio: behavior is reinforced after a set number of responses.

Variable ratio: behavior is reinforced after an unpredictable number of responses.

Fixed interval: the first response after a fixed time period is reinforced.

Variable interval: the first response after a varying time interval is reinforced.
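The four schedules listed above can be folded into one decision rule. The sketch below is a hypothetical illustration: the function and its parameters are our own naming, with variable requirements drawn from the range `value - spread` to `value + spread`.

```python
import random

def reinforcer_due(schedule, responses=0, seconds_since_last=0,
                   value=1, spread=0):
    """Check whether reinforcement is due under one of the four basic
    schedules of reinforcement."""
    if schedule == "FR":
        # fixed ratio: after a set number of responses
        return responses >= value
    if schedule == "VR":
        # variable ratio: after an unpredictable number of responses
        return responses >= random.randint(value - spread, value + spread)
    if schedule == "FI":
        # fixed interval: first response after a fixed time period
        return responses >= 1 and seconds_since_last >= value
    if schedule == "VI":
        # variable interval: first response after a varying interval
        interval = random.uniform(value - spread, value + spread)
        return responses >= 1 and seconds_since_last >= interval
    raise ValueError(f"unknown schedule: {schedule}")
```

A real simulation would also reset the response count and timer, and redraw the variable requirement, after each delivered reinforcer; this sketch only answers the instantaneous question of whether reinforcement is due.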

What is a schedule of reinforcement quizlet?

Schedule of Reinforcement: A schedule of reinforcement refers to the rule governing the response requirements for reinforcement to be given. Schedule Effects: A schedule effect is the particular pattern and rate of performance produced by the different schedules of reinforcement.

Which of the following is an important schedule of reinforcement?

A variable ratio schedule is a schedule of reinforcement where a behavior is reinforced after a random number of responses. This kind of schedule results in high, steady rates of responding.