Here’s a very nice discussion of branch prediction from a Stack Overflow question. We graybeards and, really, just about everyone else like to pretend that all processors are still like the PDP-11: easy to understand, with semantics easy to assimilate.
Modern processors are nothing like that, of course. Instead of a linear pipeline of instructions executed sequentially, modern processors like to gamble. They look ahead in the instruction stream and precalculate results so they’re ready when they’re needed. The difficulty arises when there’s a branch. What should the look-ahead machinery do? Should it take the left fork or the right one?
It’s a difficult problem but there are, in fact, some effective procedures. They basically boil down to noticing how a given branch has usually behaved and betting on that. If the behavior is reasonably consistent, the results are good: most times the processor guesses right, and there’s no need to stop everything, throw away the speculative work, and back up the way there is when it guesses wrong.
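To see the effect for yourself, here’s a minimal sketch in C of the kind of experiment the Stack Overflow discussion revolves around; the array size, pass count, and threshold of 128 are arbitrary choices for illustration. The same loop runs measurably faster over sorted data because the branch becomes almost perfectly predictable. Build it with something like `gcc -O1`; at higher optimization levels the compiler may replace the branch with a conditional move and erase the effect.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 32768
#define PASSES 1000

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static long long sum_big(const int *data) {
    long long sum = 0;
    for (int i = 0; i < N; i++)
        if (data[i] >= 128)     /* the branch the predictor must guess */
            sum += data[i];
    return sum;
}

static double time_passes(const int *data) {
    clock_t start = clock();
    long long sum = 0;
    for (int p = 0; p < PASSES; p++)
        sum += sum_big(data);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("sum = %lld, ", sum); /* keep the compiler from discarding the work */
    return secs;
}

int main(void) {
    static int data[N];
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;  /* values 0..255, half above the threshold */

    printf("unsorted: ");
    printf("%.3f s\n", time_passes(data));

    /* After sorting, the branch is not-taken in one long run, then
       taken in one long run, so the predictor is almost always right. */
    qsort(data, N, sizeof data[0], cmp);

    printf("sorted:   ");
    printf("%.3f s\n", time_passes(data));
    return 0;
}
```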
If you’re writing in Python or a similar high-level language, there’s no reason to worry about this. But if you’re writing in C, or some other low-level language, this is something you sometimes have to take into account.
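For instance, GCC and Clang offer __builtin_expect, which lets you tell the compiler which way a branch usually goes so it can lay out the common path as straight-line code. Here’s a small sketch, with likely()/unlikely() macros in the style the Linux kernel uses; process() is a hypothetical routine made up for illustration.

```c
#include <stddef.h>
#include <stdio.h>

/* Wrappers in the Linux-kernel style around the GCC/Clang builtin. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical routine: errors are rare, so we mark that branch
   unlikely and the compiler keeps the common case on the fast path. */
int process(const char *buf, size_t len) {
    if (unlikely(buf == NULL || len == 0)) {
        fprintf(stderr, "bad input\n");
        return -1;
    }
    /* ... normal processing, the path the compiler optimizes for ... */
    return 0;
}

int main(void) {
    char data[] = "hello";
    return process(data, sizeof data - 1);
}
```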
Happily, most of us don’t have to worry about such things, but it’s worth knowing that the problem exists, if only to have it in the back of our minds as we write our code.