For instance, a bunch of clients all make a request to a server at the same time, briefly saturating it. If all the clients use the same timeout without jitter, they will all retry together the moment the timeout expires, saturating the server again and again. Jitter helps by "spreading" those clients out in time, "diluting" the load so the server can process the requests without being overwhelmed.
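Here's a minimal sketch of that idea in Python. The function name, base delay, and retry count are arbitrary choices for illustration; the point is that each client picks a different random wait, so clients that failed together don't all retry together:

```python
import random
import time

def call_with_jitter(request, base_delay=1.0, max_retries=5):
    """Retry a request, sleeping a randomized ("jittered") delay between attempts."""
    for attempt in range(max_retries):
        try:
            return request()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with full jitter: wait a random amount of time
            # between 0 and the current backoff ceiling, so retries spread out.
            ceiling = base_delay * (2 ** attempt)
            time.sleep(random.uniform(0, ceiling))
```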
The same basic idea is used in all sorts of networks where multiple stations share a medium and anyone is free to transmit at any time. When a "collision" is detected, each station waits a random timeout before sending again, in the hope that next time there won't be another collision.
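For a concrete example, this is how classic Ethernet's binary exponential backoff works: after the n-th collision, a station waits a random number of slot times drawn from 0 to 2^n - 1. The sketch below is illustrative; the 51.2 µs slot time is the one from 10 Mbit/s Ethernet, and 802.3 caps the exponent at 10:

```python
import random

def backoff_slots(collisions, slot_time=51.2e-6, max_exponent=10):
    """Binary exponential backoff: after the n-th collision, wait a random
    number of slot times chosen uniformly from 0 .. 2**n - 1 (exponent capped)."""
    k = min(collisions, max_exponent)
    return random.randint(0, 2 ** k - 1) * slot_time
```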