What do core independent peripherals (CIPs) bring to an application? They allow users to automate certain tasks that are usually executed by the central processing unit based on instructions written into the software. CIPs also reduce the amount of code a developer needs to write, while helping to create a more responsive, lower-power system. This video from Microchip goes in-depth into what CIPs are, how they're used, and how they benefit your project.
Hey, everybody, Marc McComb taking A Closer Look At core independent peripherals this time.
You may know that both PIC and AVR microcontroller architectures are loaded with peripherals. Many of these are what are called core independent peripherals or CIPs for short. These peripherals have been developed to automate certain tasks that were traditionally executed by the central processing unit or core based on instructions that the developer wrote in software.
These core independent peripherals have inherent advantages like reducing the amount of code that a developer needs to write while at the same time helping to create a more responsive and lower power system. What I want to do in this video is give you a general idea of what core independent peripherals bring to an application.
To do this, I'm going to use a very basic example in which I want to toggle an I/O port pin high and low, basically creating a square-wave output called a heartbeat signal. Now to accomplish this heartbeat signal, I've written some code that will toggle the output high to low or low to high each time through a loop, which I've shown here using this flow chart.
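The video doesn't show the source for this loop, but as a rough sketch, the heartbeat code on a classic AVR could be as simple as the following (the ATmega328P-class registers and the PB0 pin choice are my assumptions, not from the video):

```c
/* Hypothetical heartbeat loop -- not the code from the video.
 * Assumes an ATmega328P-class AVR built with avr-gcc; PB0 is an
 * arbitrary choice for the heartbeat pin. */
#include <avr/io.h>

int main(void)
{
    DDRB |= (1 << DDB0);            /* make PB0 an output                */

    for (;;) {
        PORTB ^= (1 << PORTB0);     /* toggle the pin each pass through  */
    }                               /* the loop -> square-wave heartbeat */
}
```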
Now let me add another task to this application. Say that any time a push button connected to another pin on the microcontroller is pressed, we want to drive an output pulse about two milliseconds wide on yet another pin, kind of like a one-shot or a monostable multivibrator.
Now if we had a microcontroller with only rudimentary peripherals on it, we would need to implement this additional task using interrupts, which could look like this. Here I'm using a basic timer peripheral and a general-purpose input/output port connected to the push button. Both of these peripherals can generate interrupts, a capability you can find on most modern microcontrollers.
The first interrupt I'm going to use is one that the I/O peripheral can generate when there's a voltage change on an associated pin. So when the push button is pressed, the pin voltage drops to zero volts, because I've got the push button input pulled high here, and an interrupt is triggered. When the core is notified that an I/O port interrupt has been triggered, it will go through either a software or, if the microcontroller is equipped for it, a hardware interrupt prioritization check.
And this makes sure that there isn't some other interrupt event happening simultaneously which has priority over this I/O port interrupt. If not, then the core calls the I/O port interrupt service routine. Within the service routine, the core is instructed to reset the timer, load it with a specific value so that it will overflow at exactly two milliseconds, and the timer begins to count.
The core is also instructed to drive our output pin high. And this generates the rising edge of our one-shot signal. When the port interrupt service routine is finished, the core can then go back to what it was doing before, in our case, toggling our heartbeat signal. So the core continues to toggle the heartbeat signal until the timer overflows at our configured two milliseconds and an interrupt is triggered. Just like with the I/O port interrupt, prioritization in hardware or software is carried out, and pending no other higher-priority interrupts, the timer interrupt service routine is called.
The core, under instruction from the code, clears and stops the timer and then drives our one-shot signal low. So there's really nothing overly complex about this code, but we need to remember that all of these steps, from the interrupt event occurring, to being recognized, to being prioritized, and finally to executing the associated interrupt service routine, all take time, time during which the core isn't able to toggle our heartbeat signal.
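Again, the video doesn't show the actual source, but a minimal sketch of this interrupt-driven approach might look like the following. I'm assuming an ATmega328P-class AVR running at 1 MHz, the button on the INT0 pin (PD2), the heartbeat on PB0, and the one-shot output on PB1; all of those choices are illustrative.

```c
/* Hypothetical interrupt-driven version -- not the code from the video.
 * Assumptions: ATmega328P-class AVR at 1 MHz, button on INT0/PD2 (pulled
 * high, pressed = low), heartbeat on PB0, one-shot output on PB1. */
#include <avr/io.h>
#include <avr/interrupt.h>

/* 1 MHz / 8 prescaler = 8 us per Timer0 tick; 250 ticks = 2 ms,
 * so preload the 8-bit counter with 256 - 250 = 6.               */
#define T0_PRELOAD_2MS  6

ISR(INT0_vect)                       /* push button pressed       */
{
    TCNT0  = T0_PRELOAD_2MS;         /* reset/load the timer      */
    TCCR0B = (1 << CS01);            /* start Timer0 with clk/8   */
    PORTB |= (1 << PORTB1);          /* one-shot rising edge      */
}

ISR(TIMER0_OVF_vect)                 /* roughly 2 ms later        */
{
    TCCR0B = 0;                      /* stop the timer            */
    PORTB &= ~(1 << PORTB1);         /* one-shot falling edge     */
}

int main(void)
{
    DDRB   |= (1 << DDB0) | (1 << DDB1);  /* heartbeat + one-shot outputs */
    PORTD  |= (1 << PORTD2);              /* pull the button pin high     */

    EICRA  = (1 << ISC01);                /* INT0 on falling edge         */
    EIMSK  = (1 << INT0);                 /* enable INT0                  */
    TIMSK0 = (1 << TOIE0);                /* enable Timer0 overflow IRQ   */
    sei();

    for (;;) {
        PORTB ^= (1 << PORTB0);           /* heartbeat, stalled whenever  */
    }                                     /* the core is in an ISR        */
}
```

Every step described above, recognizing the interrupt, prioritizing it, and running the service routine body, happens here at the core's expense before it can return to the toggle loop.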
So let's demonstrate this. I've actually written the code for this application and downloaded it to a microcontroller. I have an oscilloscope connected to the two output pins. This blue signal here is our heartbeat signal being generated by the core, toggling the pin each time through the loop. Take a look at what happens when I press the push button and trigger the port interrupt: you can see here that our heartbeat signal now has some missed pulses just before and after the one-shot pulse's rising edge, and again for a short period of time before and after the one-shot pulse's falling edge.
The push button is pressed right around here. And this delay before the first edge of the one-shot pulse is a result of a number of things, including interrupt latency, which is a term for how long it takes before the core actually detects that an interrupt has occurred. The bulk of this delay, though, is due to our interrupt prioritization and the actual execution of the code inside our interrupt service routines.
You can also see a delay after the one-shot pulse goes high before the core actually gets back to toggling our heartbeat signal. So we get a few more pulses here, and then the timer interrupts after two milliseconds. Again, after the small amount of time that it takes for the core to actually detect that the interrupt has occurred, it stops generating the heartbeat signal, goes through the prioritization, and then jumps to the timer interrupt service routine, where it eventually drives our one-shot pin low here.
So this is really just a basic application where we're toggling an I/O in software. But imagine for a second that instead of toggling a heartbeat signal this is maybe some sort of input signal that the core needs to detect. So maybe a user input, like a capacitive touch sensor or even something more urgent like a signal that detects overheating in the system or potential danger to the user.
Bottom line, the core may be off executing some interrupt and completely miss that signal, causing anything from mild annoyance for the user to some catastrophic effect on the system. Now we could minimize this by increasing our operating speed so that the interrupts get serviced more quickly by the core, but remember, this also increases our power consumption, right? And faster operation will only shorten this interruption in the heartbeat signal but won't get rid of it altogether.
We could also ask our application designer to spend more time bench-testing the application, writing different software routines or configuring the other system components to improve the situation. But time is money, right? Having to spend even more time writing code or trying different configurations of secondary systems connected to our microcontroller could therefore become a costly endeavor.
Okay, so let's take a look at an alternative solution here using core independent peripherals. Here is the ATTINY817, an AVR device which has, among other peripherals, a timer-based peripheral called the Timer/Counter Type B, or TCB for short, and another peripheral called the event system. The event system peripheral can use an event generated somewhere on the microcontroller to trigger some other action elsewhere on the same microcontroller.
So for example, if a voltage connected to one of our comparator's input pins exceeds a predetermined threshold, the comparator output could be configured to go high. Using the event system, we could then use this comparator output to trigger an analog to digital conversion of a voltage on yet another pin.
Or maybe some external signal change, say a push button connected to a port pin being pressed, triggers a counter to start counting. A counter like the 16-bit Timer/Counter Type B, or TCB, peripheral on the ATTINY817 just so happens to have a single-shot generation mode which outputs a pulse of a user-defined duration when it is triggered.
So what I'm going to do is use this to generate our one-shot output. The port I/O connected to the push button in our design is going to be routed through the event system to the Timer/Counter Type B, so that a change on that input pin will trigger this single-shot output. Implementing these capabilities in our application will look something like this.
So again, the microcontroller core is used to toggle that heartbeat signal on the I/O pin. However this time, instead of relying on interrupts in the core to drive our one-shot pulse, we're going to route the change event on our port pin connected to the push button through the event system and connect it to our Timer/Counter Type B so that it will trigger an output pulse for two milliseconds.
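The video builds this configuration with Microchip's graphical tools rather than hand-written register code, but a bare-register sketch of the same routing might look roughly like this. The register and bit names follow the avr-gcc device header conventions for the tinyAVR 1-series; the pin assignments (button on PA4, TCB0 waveform output on PA5, heartbeat on PB0), the default ~3.3 MHz main clock, and the exact event channel/user pairing are my assumptions, so verify them against the ATTINY817 datasheet.

```c
/* Hypothetical bare-register version of the CIP approach -- the video uses
 * Microchip's graphical configuration tools instead. Assumptions: ATTINY817
 * at the default ~3.3 MHz main clock, button on PA4 (pulled high and
 * inverted so a press appears as a rising event edge), TCB0 waveform output
 * on PA5, heartbeat on PB0. */
#include <avr/io.h>

int main(void)
{
    /* Pins: button input with pull-up, one-shot and heartbeat outputs.  */
    PORTA.DIRCLR   = PIN4_bm;
    PORTA.PIN4CTRL = PORT_PULLUPEN_bm | PORT_INVEN_bm;
    PORTA.DIRSET   = PIN5_bm;                      /* TCB0 WO pin         */
    PORTB.DIRSET   = PIN0_bm;                      /* heartbeat pin       */

    /* Event system: route the PA4 pin event (async channel 0) to TCB0.  */
    EVSYS.ASYNCCH0   = EVSYS_ASYNCCH0_PORTA_PIN4_gc;
    EVSYS.ASYNCUSER0 = EVSYS_ASYNCUSER0_ASYNCCH0_gc;   /* user 0 is TCB0  */

    /* TCB0 single-shot mode: one pulse of CCMP ticks per event trigger. */
    /* ~3.33 MHz peripheral clock * 2 ms is roughly 6667 ticks.          */
    TCB0.CCMP   = 6667;
    TCB0.CTRLB  = TCB_CNTMODE_SINGLE_gc | TCB_CCMPEN_bm;  /* drive WO pin */
    TCB0.EVCTRL = TCB_CAPTEI_bm;                   /* arm the event input */
    TCB0.CTRLA  = TCB_CLKSEL_CLKDIV1_gc | TCB_ENABLE_bm;

    /* The core only toggles the heartbeat; the one-shot needs no code   */
    /* and no interrupts at all.                                         */
    for (;;) {
        PORTB.OUTTGL = PIN0_bm;
    }
}
```

The key design point is that nothing in the loop, and no interrupt service routine, ever touches the one-shot: the event system and TCB handle it entirely in hardware, which is why the heartbeat stays clean on the scope.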
As you can see from this block diagram, the core is actually relieved of any tasks associated with detecting the push button press or executing the one-shot output, since this is all done in hardware using the peripheral capabilities. I've gone ahead and programmed the ATTINY817 with this application, so now let's take a look at our waveforms.
Again, here is our toggling I/O pin that is being controlled by the core. The difference this time, though, is that when I press the push button connected to the input, the two-millisecond output pulse being handled in hardware by the peripherals doesn't have any effect on the heartbeat signal, since the core is never interrupted. Handling tasks in hardware using peripherals instead of software means that the core can now do something else in parallel, such as execute the code for our heartbeat, or even go into a lower power mode.
Most importantly, we've not only improved the responsiveness of our system by getting rid of software and interrupts that tie up the core, but we've also been able to keep our operating speed lower to help with power consumption. One last thing I want to show you, remember that we wanted the output pulse to be exactly two milliseconds wide.
However, take a look at the software, interrupt-based version of this application. We configured the timer to generate an interrupt at exactly two milliseconds; however, with interrupt latencies and the time it takes to execute the code associated with the interrupt service routines and so on, you can see that our output pulse is actually 300 or so microseconds longer than we want it to be.
Take a look at our waveform using the ATTINY817 peripherals. As you can see, the output is two milliseconds wide, just as we want, since this task is handled in hardware without the overhead that software and interrupts inherently introduce. This hardware peripheral version here was actually developed rather quickly using some of Microchip Technology's graphical programming tools that present the peripherals at a very high level to ease application development with PIC and AVR microcontrollers.
I'll cover these tools and more in subsequent videos. In the meantime, for more information on these or other topics, please visit www.microchip.com/8bit. I'm Marc McComb, thanks for watching.