How Variable Interval Schedules Influence Behavior


In operant conditioning, a variable-interval schedule is a schedule of reinforcement in which a response is rewarded after an unpredictable amount of time has passed. It contrasts with a fixed-interval schedule, in which that amount of time is constant. A variable-interval schedule produces a moderate, steady rate of response.

At a Glance

A variable-interval schedule is just one way to deliver reinforcement when trying to teach or change a behavior. Because the reward arrives only after an unpredictable amount of time, people tend to respond at a moderate but steady rate.

One major advantage of a variable-interval schedule is that it produces behavior that is highly resistant to extinction. In other words, what is learned is more likely to stick.

How Does a Variable-Interval Schedule Work?

To understand how a variable-interval schedule works, let's start by taking a closer look at the term itself.

  • Schedule refers to the rate of reinforcement delivery, or how frequently the reinforcement is given.
  • Variable indicates that this timing is inconsistent and may vary from one trial to the next.
  • Finally, interval means that delivery is controlled by time.

So, a variable-interval schedule means that reinforcement is delivered at varying and unpredictable time intervals.

Imagine that you are training a pigeon to peck at a key to receive a food pellet. You put the bird on a variable-interval 30 (VI-30) schedule. This means that the pigeon will receive reinforcement an average of every 30 seconds.

It is important to note that this is an average, however. Sometimes the pigeon might be reinforced after 10 seconds; sometimes, it might have to wait 45 seconds. The key is that the timing is unpredictable.
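In laboratory software, a VI schedule like this is often programmed by drawing each wait time from an exponential distribution, which keeps the next reinforcement unpredictable while the long-run average stays at the target value. A minimal Python sketch of that idea, assuming an exponential distribution and using an illustrative function name (`vi_intervals`) rather than any particular lab package:

```python
import random

def vi_intervals(mean_seconds, n, seed=None):
    """Draw n wait times whose long-run average approximates mean_seconds.

    Each interval is sampled from an exponential distribution, so any
    single wait might be much shorter or longer than the mean (e.g.,
    10 s or 45 s on a VI-30 schedule), but the average converges to it.
    """
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_seconds) for _ in range(n)]

# Simulate 10,000 reinforcement intervals on a VI-30 schedule.
intervals = vi_intervals(mean_seconds=30, n=10_000, seed=42)
print(f"mean interval: {sum(intervals) / len(intervals):.1f} s")
```

Any individual interval is unpredictable, but over many trials the mean settles near 30 seconds, matching the VI-30 description above.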

Characteristics of a Variable-Interval Schedule

A variable-interval schedule has a few important characteristics that distinguish it from other reinforcement schedules:

  • Very resistant to extinction
  • The rate of response is moderate but steady
  • Very minimal pause after reinforcement is given

One possible downside is that response rates tend to be moderate rather than high. However, because responding is steady and the pause after reinforcement is brief, this can also be seen as a plus.

Examples of Variable-Interval Schedules

To understand more about how variable-interval schedules work, it can be helpful to look at a few different real-world examples:

Checking Your Email

Typically, you check your email at various times throughout the day rather than every time a single message is delivered. In most cases, you never know when a message will arrive.

Because of this, emails roll in sporadically at entirely unpredictable times. When you check and see that you have received a message, it acts as a reinforcer for checking your email.

Your Employer Checking Your Work

Does your boss drop by your office a few times throughout the day to check your progress? This is an example of a variable-interval schedule. These check-ins occur at unpredictable times, so you never know when they might happen.

Chances are good that you work at a fairly steady pace throughout the day since you are never quite sure when your boss will pop in, and you want to appear busy and productive when they do happen to stop by.

Immediately after one of these check-ins, you might briefly pause and take a short break before resuming your steady work pace.

Pop Quizzes

Your psychology instructor might give periodic pop quizzes to test your knowledge and make sure you are paying attention in class. While these quizzes occur with some frequency, you never really know precisely when one will be given.

One week you might end up taking two quizzes, but then go a full two weeks without one. Because you never know when you might receive a pop quiz, you will probably pay attention and stay focused on your studies to be prepared.

Schedules of Reinforcement in Operant Conditioning

Operant conditioning can either strengthen or weaken behaviors through reinforcement and punishment. This learning process involves forming an association between a behavior and the consequences of that action.

Psychologist B.F. Skinner is credited with introducing the concept of operant conditioning. He observed that reinforcement could be used to increase a behavior, and punishment could be used to weaken a behavior. He also noted that the rate at which a behavior was reinforced affected both the strength and the frequency of the response.

What This Means For You

If you are trying to change a behavior, using the right reinforcement schedule is important. A variable-interval one means you'll only give or receive a reward after a random period of time. When you use this schedule, you're more likely to plug along at a steady, moderate pace because you never know quite when you'll finally get rewarded.


By Kendra Cherry, MSEd
Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."