---
title: "In Search of Intuition: Queue Paradox"
description: "Why queue depth drifts to infinity when arrival and service rates are equal, explained through random walks and variance."
author: "Adam Fillion"
date: "2026-01-12"
categories: [distributed, queue, probability, statistics, simulation]
draft: false
---
Statistical paradoxes are one of my favorite things, because they expose the contrast between what our minds are built for and how reality operates.
Recently, while hosting a tech talk about load-shedding, I casually mentioned that when the arrival rate and service rate of a queue are equal ($\rho = 1$), as long as at least one of the two is a random process ($M/D/n$ or $D/M/n$) [[4]](#references), queue depth drifts to infinity over time.
```{ojs}
//| echo: false
viewof arrivalRate = Inputs.range([0.1, 2], {value: 1, step: 0.1, label: "Arrival Rate (λ)"})
viewof serviceRate = Inputs.range([0.1, 2], {value: 1, step: 0.1, label: "Service Rate (μ)"})
```
```{ojs}
//| echo: false
//| output: false
mutable queue = []
mutable serverBusy = false
mutable serverItem = null
mutable serviceTimer = 0
```
```{ojs}
//| echo: false
//| output: false
{
const interval = setInterval(() => {
if (Math.random() < arrivalRate / 10) {
mutable queue = [...mutable queue, {id: Date.now(), x: 0}];
}
}, 100);
invalidation.then(() => clearInterval(interval));
}
```
```{ojs}
//| echo: false
//| output: false
{
const interval = setInterval(() => {
// Start serving if idle and queue not empty
if (!mutable serverBusy && mutable queue.length > 0) {
mutable serverItem = mutable queue[0];
mutable queue = mutable queue.slice(1);
mutable serverBusy = true;
mutable serviceTimer = Math.ceil(10 / serviceRate);
}
// Deterministic service: count down timer
if (mutable serverBusy) {
mutable serviceTimer = mutable serviceTimer - 1;
if (mutable serviceTimer <= 0) {
mutable serverBusy = false;
mutable serverItem = null;
}
}
}, 100);
invalidation.then(() => clearInterval(interval));
}
```
```{ojs}
//| echo: false
html`<div style="font-size: 14px; margin-bottom: 10px;">
Queue depth: <strong>${queue.length}</strong> |
Server: <strong>${serverBusy ? "Busy" : "Idle"}</strong>
</div>`
```
```{ojs}
//| echo: false
{
const width = 600;
const height = 120;
const queueWidth = 340;
const baseItemSize = 20;
const gap = 2;
// Calculate item size: shrink if more than 15 items (cap at 80 for sizing)
const displayCount = Math.min(queue.length, 80);
const itemSize = displayCount <= 15
? baseItemSize
: Math.max(2, (queueWidth - gap * displayCount) / displayCount);
const svg = d3.create("svg")
.attr("viewBox", [0, 0, width, height])
.attr("width", width)
.attr("height", height);
// Queue container
svg.append("rect")
.attr("x", 50)
.attr("y", 30)
.attr("width", 350)
.attr("height", 50)
.attr("fill", "#f0f0f0")
.attr("stroke", "#333")
.attr("stroke-width", 2);
svg.append("text")
.attr("x", 225)
.attr("y", 100)
.attr("text-anchor", "middle")
.attr("font-size", 12)
.text("Queue");
// Server container
svg.append("rect")
.attr("x", 450)
.attr("y", 30)
.attr("width", 50)
.attr("height", 50)
.attr("fill", serverBusy ? "#ffcccc" : "#ccffcc")
.attr("stroke", "#333")
.attr("stroke-width", 2);
svg.append("text")
.attr("x", 475)
.attr("y", 100)
.attr("text-anchor", "middle")
.attr("font-size", 12)
.text("Server");
// Arrow
svg.append("path")
.attr("d", "M 405 55 L 445 55")
.attr("stroke", "#333")
.attr("stroke-width", 2)
.attr("marker-end", "url(#arrow)");
svg.append("defs").append("marker")
.attr("id", "arrow")
.attr("viewBox", "0 0 10 10")
.attr("refX", 9)
.attr("refY", 5)
.attr("markerWidth", 6)
.attr("markerHeight", 6)
.attr("orient", "auto")
.append("path")
.attr("d", "M 0 0 L 10 5 L 0 10 z")
.attr("fill", "#333");
// Arrival arrow
svg.append("path")
.attr("d", "M 10 55 L 45 55")
.attr("stroke", "#333")
.attr("stroke-width", 2)
.attr("marker-end", "url(#arrow)");
// Queue items (cap visual at 80)
const visibleQueue = queue.slice(0, 80);
svg.selectAll(".queue-item")
.data(visibleQueue)
.join("rect")
.attr("class", "queue-item")
.attr("x", (d, i) => 55 + i * (itemSize + gap))
.attr("y", 55 - itemSize / 2)
.attr("width", itemSize)
.attr("height", itemSize)
.attr("fill", "#4a90d9")
.attr("rx", Math.max(1, itemSize / 6));
// Server item
if (serverBusy) {
svg.append("rect")
.attr("x", 465)
.attr("y", 45)
.attr("width", baseItemSize)
.attr("height", baseItemSize)
.attr("fill", "#d94a4a")
.attr("rx", 3);
}
return svg.node();
}
```
When challenged on this, I struggled to come up with an eloquent, non-mathematical intuition.
Of course, you can't dispute the math. For an M/M/1 queue [[3]](#references), the expected queue length is given by:
$$L_q = \frac{\rho^2}{1 - \rho} \quad \text{where} \quad \rho = \frac{\lambda}{\mu}$$
As $\rho \to 1$, we have $L_q \to \infty$.
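Plugging numbers into the formula makes the blow-up concrete. A minimal standalone sketch (plain JavaScript, separate from the interactive cells in this post):

```javascript
// Expected M/M/1 queue length: L_q = ρ² / (1 - ρ), where ρ = λ/μ.
function expectedQueueLength(rho) {
  if (rho >= 1) return Infinity; // a steady state only exists for ρ < 1
  return (rho * rho) / (1 - rho);
}

for (const rho of [0.5, 0.9, 0.99, 0.999]) {
  console.log(`ρ = ${rho} → L_q ≈ ${expectedQueueLength(rho).toFixed(1)}`);
}
// ρ = 0.5 → L_q ≈ 0.5
// ρ = 0.9 → L_q ≈ 8.1
// ρ = 0.99 → L_q ≈ 98.0
// ρ = 0.999 → L_q ≈ 998.0
```

Each halving of the remaining headroom roughly doubles the expected queue length.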
You may find these formulae in undergraduate courses on probability, systems engineering, or even computer science, but they are deeply unsatisfying to grizzled engineers on a permanent sabbatical from academia.
To be clear (and a little more colloquial) about what the claim is: if you have a queue with a random arrival rate and a constant service rate (or vice versa), and the two rates are equal, the expected length of the queue grows with time.
The most common reasoning patterns I've seen are based on the following statements:
1. At the start of the simulation, queue depth is zero, and the server is idle.
2. When events arrive in intervals of service_time, the server has 100% utilization.
3. When events arrive further apart than the service_time interval, the server is idle.
4. When events arrive closer than the service_time interval, a queue forms.
**(Attempt 1)**: Clearly, if there is any amount of idleness while the average arrival rate equals the service rate, you will end up with some degree of queuing. But once a queue forms, the server will not be idle until the queue eventually drains to 0 again, so it's still unclear why the trend is to infinity, and not just to 1.
**(Attempt 2)**: Since there is an upward force (4.), and the downward force (3.) is clipped at 0 because queue depth cannot go negative, the queue must get bigger over time.
**Attempt 2** is almost right, but it incorrectly identifies the barrier at 0 as the force driving expected values higher, when it's simply a rectifier (like a diode in an electrical circuit) that clips negative values.
Rather, the **force** pushing values higher is **variance**, that is, the spreading (or diffusion) of **possible** queue depths over time [[8]](#references).
With the force correctly identified, we can look to other examples to convert this statistical artefact into intuition. Interestingly, our queue behaves like a 1-D random walk reflected at the origin [[5]](#references), so we can study $|D_n|$, the absolute distance from the origin after $n$ steps.
```{ojs}
//| echo: false
//| output: false
mutable particle1DPos = 0
```
```{ojs}
//| echo: false
viewof weight1DLeft = Inputs.range([0, 10], {value: 5, step: 0.1, label: "Left"})
```
```{ojs}
//| echo: false
viewof weight1DRight = Inputs.range([0, 10], {value: 5, step: 0.1, label: "Right"})
```
```{ojs}
//| echo: false
viewof particle1DSpeed = Inputs.range([1, 20], {value: 10, step: 1, label: "Speed"})
```
```{ojs}
//| echo: false
normalized1DProbs = {
const total = weight1DLeft + weight1DRight;
if (total === 0) return ({left: 0.5, right: 0.5});
return ({
left: weight1DLeft / total,
right: weight1DRight / total
});
}
```
```{ojs}
//| echo: false
//| output: false
{
const interval = setInterval(() => {
const r = Math.random();
const p = normalized1DProbs;
let dx = 0;
if (r < p.left) {
dx = -1;
} else {
dx = 1;
}
mutable particle1DPos = mutable particle1DPos + dx;
}, 100 / particle1DSpeed);
invalidation.then(() => clearInterval(interval));
}
```
```{ojs}
//| echo: false
particle1DVis = {
const width = 500;
const height = 60;
const maxDisplay = 100;
const displayPos = Math.max(-maxDisplay, Math.min(maxDisplay, particle1DPos));
const xScale = (width - 40) / (2 * maxDisplay);
const centerX = width / 2;
const svg = d3.create("svg")
.attr("viewBox", [0, 0, width, height])
.style("width", "100%")
.style("max-width", width + "px")
.style("background", "#fafafa")
.style("border", "1px solid #333");
// Number line
svg.append("line")
.attr("x1", 20)
.attr("y1", 30)
.attr("x2", width - 20)
.attr("y2", 30)
.attr("stroke", "#333")
.attr("stroke-width", 1);
// Origin tick
svg.append("line")
.attr("x1", centerX)
.attr("y1", 25)
.attr("x2", centerX)
.attr("y2", 35)
.attr("stroke", "#333")
.attr("stroke-width", 1);
svg.append("text")
.attr("x", centerX)
.attr("y", 48)
.attr("text-anchor", "middle")
.attr("font-size", 10)
.text("0");
// Left bound tick
svg.append("text")
.attr("x", 20)
.attr("y", 48)
.attr("text-anchor", "middle")
.attr("font-size", 10)
.text(-maxDisplay);
// Right bound tick
svg.append("text")
.attr("x", width - 20)
.attr("y", 48)
.attr("text-anchor", "middle")
.attr("font-size", 10)
.text(maxDisplay);
// Particle
const particleX = centerX + displayPos * xScale;
svg.append("circle")
.attr("cx", particleX)
.attr("cy", 30)
.attr("r", 6)
.attr("fill", "#4a90d9");
return svg.node();
}
```
```{ojs}
//| echo: false
html`<div style="display: flex; gap: 20px; align-items: flex-start;">
<div style="flex: 2;">
${particle1DVis}
<div style="text-align: center; margin-top: 5px; font-size: 14px;">
Position: <strong>${particle1DPos}</strong>
</div>
</div>
<div style="flex: 1;">
<div style="margin-bottom: 10px;"><strong>Direction Weights</strong></div>
${viewof weight1DLeft} <span style="font-size: 12px; color: #666;">(${(normalized1DProbs.left * 100).toFixed(0)}%)</span>
${viewof weight1DRight} <span style="font-size: 12px; color: #666;">(${(normalized1DProbs.right * 100).toFixed(0)}%)</span>
<div style="margin-top: 15px;">
${viewof particle1DSpeed}
</div>
</div>
</div>`
```
Let's look at some empirical data for both the 1-D random walk and the queueing system, running a simple simulation, sampling the depth, and repeating the trial 1,000 times to get a distribution of expected measurements.
```{ojs}
//| echo: false
function simulate1DWalk(steps) {
let pos = 0;
for (let i = 0; i < steps; i++) {
pos += Math.random() < 0.5 ? -1 : 1;
}
return Math.abs(pos);
}
function simulateQueue(steps) {
let depth = 0;
for (let i = 0; i < steps; i++) {
depth += Math.random() < 0.5 ? -1 : 1;
if (depth < 0) depth = 0;
}
return depth;
}
allSimData = {
const sampleSize = 1000;
const stepCounts = [10, 100, 1000, 10000];
const data1D = [], dataQueue = [];
for (const steps of stepCounts) {
for (let i = 0; i < sampleSize; i++) {
data1D.push({steps, value: simulate1DWalk(steps)});
dataQueue.push({steps, value: simulateQueue(steps)});
}
}
return {data1D, dataQueue};
}
```
```{ojs}
//| echo: false
viewof selectedSteps = Inputs.select([10, 100, 1000, 10000], {value: 100, label: "Steps (n)"})
```
```{ojs}
//| echo: false
filtered1D = allSimData.data1D.filter(d => d.steps === selectedSteps)
filteredQueue = allSimData.dataQueue.filter(d => d.steps === selectedSteps)
xDomain = [0, Math.max(50, Math.sqrt(selectedSteps) * 4)]
```
```{ojs}
//| echo: false
plot1D = Plot.plot({
width: 220,
height: 180,
marginLeft: 35,
marginBottom: 35,
x: { label: "Distance |X|", domain: xDomain },
y: { label: "Count", grid: true },
marks: [
Plot.rectY(filtered1D, Plot.binX({y: "count"}, {x: "value", fill: "#4a90d9", thresholds: 15})),
Plot.ruleY([0])
]
})
plotQueue = Plot.plot({
width: 220,
height: 180,
marginLeft: 35,
marginBottom: 35,
x: { label: "Queue depth", domain: xDomain },
y: { label: "Count", grid: true },
marks: [
Plot.rectY(filteredQueue, Plot.binX({y: "count"}, {x: "value", fill: "#d94a4a", thresholds: 15})),
Plot.ruleY([0])
]
})
```
```{ojs}
//| echo: false
html`<div style="display: flex; gap: 5px; justify-content: center;">
<div style="text-align: center;">
<strong style="font-size: 12px;">1D Walk</strong>
${plot1D}
</div>
<div style="text-align: center;">
<strong style="font-size: 12px;">Queue (barrier at 0)</strong>
${plotQueue}
</div>
</div>`
```
```{ojs}
//| echo: false
html`<div style="font-size: 13px; margin-top: 10px;">
<strong>Statistics for n=${selectedSteps} (1000 samples each), √n = ${Math.sqrt(selectedSteps).toFixed(2)}:</strong><br>
<table style="margin-top: 5px;">
<tr><td>1D Walk:</td><td>mean = <strong>${d3.mean(filtered1D.map(d => d.value)).toFixed(2)}</strong></td><td>ratio = ${(d3.mean(filtered1D.map(d => d.value)) / Math.sqrt(selectedSteps)).toFixed(2)}</td></tr>
<tr><td>Queue:</td><td>mean = <strong>${d3.mean(filteredQueue.map(d => d.value)).toFixed(2)}</strong></td><td>ratio = ${(d3.mean(filteredQueue.map(d => d.value)) / Math.sqrt(selectedSteps)).toFixed(2)}</td></tr>
</table>
</div>`
```
As we sample our measurements for increasing $n$ (a.k.a. the simulation length; in real life this would be "time"), we stumble upon a core statistical concept: the binomial distribution:
$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$
In our case, $p = 0.5$, $n$ is the number of steps in the simulation (essentially the same as time), and $k$ counts the steps taken in one particular direction, so the walk's position is $2k - n$.
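Since the position is $2k - n$ and $\mathrm{Var}(k) = np(1-p) = n/4$, the position's variance is $4 \cdot n/4 = n$, giving $\sigma = \sqrt{n}$. A quick empirical sanity check (plain JavaScript; the 400-step walk and 5,000 trials are arbitrary choices):

```javascript
// Standard deviation of a symmetric ±1 walk's endpoint should be ≈ √n.
function endpointStd(n, trials) {
  let sum = 0, sumSq = 0;
  for (let t = 0; t < trials; t++) {
    let pos = 0;
    for (let i = 0; i < n; i++) pos += Math.random() < 0.5 ? -1 : 1;
    sum += pos;
    sumSq += pos * pos;
  }
  const mean = sum / trials;
  return Math.sqrt(sumSq / trials - mean * mean); // sample std deviation
}

console.log(endpointStd(400, 5000)); // hovers around √400 = 20
```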
```{ojs}
//| echo: false
viewof binomialN = Inputs.range([10, 500], {value: 50, step: 10, label: "Number of steps (n)"})
```
```{ojs}
//| echo: false
binomialData = {
const n = binomialN;
const data = [];
// Position ranges from -n to +n in steps of 2
for (let k = 0; k <= n; k++) {
const position = 2 * k - n;
// Binomial probability: C(n,k) * 0.5^n
// Use log to avoid overflow
let logProb = -n * Math.log(2);
for (let i = 0; i < k; i++) {
logProb += Math.log(n - i) - Math.log(i + 1);
}
const prob = Math.exp(logProb);
data.push({position, probability: prob, k});
}
return data;
}
binomialStats = {
const n = binomialN;
// For symmetric random walk: mean = 0, std = sqrt(n)
const mean = 0;
const std = Math.sqrt(n);
// Expected absolute value E[|X|] ≈ sqrt(2n/π) for large n
const expectedAbs = Math.sqrt(2 * n / Math.PI);
return {mean, std, expectedAbs, n};
}
```
```{ojs}
//| echo: false
Plot.plot({
width: 600,
height: 300,
marginBottom: 50,
x: { label: "Position after n steps", domain: [-100, 100] },
y: { label: "Probability", grid: true, domain: [0, 0.3] },
marks: [
Plot.line(binomialData, {x: "position", y: "probability", stroke: "#4a90d9", strokeWidth: 2}),
Plot.ruleX([0], {stroke: "#333", strokeDasharray: "4,4"}),
Plot.ruleX([binomialStats.std], {stroke: "#e74c3c", strokeWidth: 2}),
Plot.ruleX([-binomialStats.std], {stroke: "#e74c3c", strokeWidth: 2}),
Plot.ruleY([0])
]
})
```
```{ojs}
//| echo: false
html`<div style="font-size: 13px; margin-top: 5px; background: #f8f8f8; padding: 10px; border-radius: 5px;">
<strong>n = ${binomialStats.n} steps:</strong><br>
Mean position: <strong>0</strong> (symmetric walk)<br>
Standard deviation: <strong>σ = √n = ${binomialStats.std.toFixed(2)}</strong> <span style="color: #e74c3c;">— red lines</span><br>
Expected |position|: <strong>E[|X|] ≈ √(2n/π) ≈ 0.8√n = ${binomialStats.expectedAbs.toFixed(2)}</strong>
</div>`
```
```{ojs}
//| echo: false
absDistData = {
const n = binomialN;
const data = [];
// Fold the distribution: combine probabilities for +k and -k
for (let k = 0; k <= n; k++) {
const position = 2 * k - n;
let logProb = -n * Math.log(2);
for (let i = 0; i < k; i++) {
logProb += Math.log(n - i) - Math.log(i + 1);
}
const prob = Math.exp(logProb);
const absPos = Math.abs(position);
// Find or create entry
const existing = data.find(d => d.absPosition === absPos);
if (existing) {
existing.probability += prob;
} else {
data.push({absPosition: absPos, probability: prob});
}
}
return data.sort((a, b) => a.absPosition - b.absPosition);
}
```
Crucially, as the distribution widens with increasing simulation length, its centre of mass (a.k.a. the average value of a sample) drifts to infinity. For the folded distribution, $E[|X|] \approx \sqrt{2n/\pi}$ [[6]](#references).
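That $\sqrt{2n/\pi}$ approximation is easy to sanity-check by simulation (plain JavaScript; the 2,000 trials are an arbitrary sample size):

```javascript
// Monte-Carlo estimate of E[|X_n|] for a symmetric ±1 walk, vs. √(2n/π).
function meanAbsDistance(n, trials) {
  let total = 0;
  for (let t = 0; t < trials; t++) {
    let pos = 0;
    for (let i = 0; i < n; i++) pos += Math.random() < 0.5 ? -1 : 1;
    total += Math.abs(pos);
  }
  return total / trials;
}

const n = 10000;
const empirical = meanAbsDistance(n, 2000);
const theoretical = Math.sqrt((2 * n) / Math.PI); // ≈ 79.79 for n = 10000
console.log(empirical.toFixed(2), "vs", theoretical.toFixed(2));
```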
Again, grasping for some intuition here: the distribution "widens" and "flattens", since its total area must always remain 1 (it's a probability distribution, after all).
```{ojs}
//| echo: false
viewof foldedBinomialN = Inputs.range([10, 10000], {value: 50, step: 10, label: "Number of steps (n)"})
```
```{ojs}
//| echo: false
// Folded binomial - models |random walk position|
foldedBinomialData = {
const n = foldedBinomialN;
const data = [];
// Calculate binomial probabilities and fold them
for (let k = 0; k <= n; k++) {
const position = 2 * k - n;
let logProb = -n * Math.log(2);
for (let i = 0; i < k; i++) {
logProb += Math.log(n - i) - Math.log(i + 1);
}
const prob = Math.exp(logProb);
const absPos = Math.abs(position);
const existing = data.find(d => d.absPosition === absPos);
if (existing) {
existing.probability += prob;
} else {
data.push({absPosition: absPos, probability: prob});
}
}
const sortedData = data.sort((a, b) => a.absPosition - b.absPosition);
const mean = Math.sqrt(2 * n / Math.PI); // E[|X|] ≈ √(2n/π)
const std = Math.sqrt(n);
// Calculate p50 (median)
let cumProb = 0;
let p50 = 0;
for (const d of sortedData) {
cumProb += d.probability;
if (cumProb >= 0.5) {
p50 = d.absPosition;
break;
}
}
return {data: sortedData, mean, std, n, p50};
}
```
```{ojs}
//| echo: false
html`<div>
<strong style="font-size: 12px;">|random walk position|</strong>
${Plot.plot({
width: 500,
height: 250,
x: { label: "|Position|", domain: [0, 100] },
y: { label: "Probability", domain: [0, 0.3] },
marks: [
Plot.line(foldedBinomialData.data, {x: "absPosition", y: "probability", stroke: "#2ecc71", strokeWidth: 2}),
Plot.areaY(foldedBinomialData.data, {x: "absPosition", y: "probability", fill: "#2ecc71", fillOpacity: 0.2}),
Plot.ruleX([foldedBinomialData.mean], {stroke: "#e74c3c", strokeWidth: 2, strokeDasharray: "4,4"}),
Plot.ruleX([foldedBinomialData.p50], {stroke: "#3498db", strokeWidth: 2, strokeDasharray: "2,2"}),
Plot.ruleY([0])
]
})}
<div style="font-size: 12px; text-align: center;">
Mean |X| = <strong>${foldedBinomialData.mean.toFixed(2)}</strong> <span style="color: #e74c3c;">— red dashed</span> |
P50 = <strong>${foldedBinomialData.p50}</strong> <span style="color: #3498db;">— blue dashed</span>
</div>
</div>`
```
We don't even need to keep this in probability-distribution form: by transforming the Y axis into expected contribution ($\text{value} \times \text{probability}$), we can visualise how much each value contributes to the global mean.
```{ojs}
//| echo: false
viewof contributionN = Inputs.range([10, 5000], {value: 50, step: 10, label: "Number of steps (n)"})
```
```{ojs}
//| echo: false
contributionData = {
const n = contributionN;
const data = [];
// Calculate binomial probabilities and fold them
for (let k = 0; k <= n; k++) {
const position = 2 * k - n;
let logProb = -n * Math.log(2);
for (let i = 0; i < k; i++) {
logProb += Math.log(n - i) - Math.log(i + 1);
}
const prob = Math.exp(logProb);
const absPos = Math.abs(position);
const existing = data.find(d => d.absPosition === absPos);
if (existing) {
existing.probability += prob;
} else {
data.push({absPosition: absPos, probability: prob});
}
}
// Add expected contribution: value × probability
const sortedData = data
.map(d => ({...d, contribution: d.absPosition * d.probability}))
.sort((a, b) => a.absPosition - b.absPosition);
// The mean E[|X|] is the total of all contributions (value × probability)
const mean = sortedData.reduce((sum, d) => sum + d.contribution, 0);
return {data: sortedData, mean, n};
}
```
```{ojs}
//| echo: false
html`<div>
<strong style="font-size: 12px;">Expected Contribution to Mean (value × probability)</strong>
${Plot.plot({
width: 500,
height: 250,
x: { label: "|Position|", domain: [0, 200] },
y: { label: "Contribution to E[|X|]", domain: [0, 5] },
marks: [
Plot.line(contributionData.data, {x: "absPosition", y: "contribution", stroke: "#9b59b6", strokeWidth: 2}),
Plot.areaY(contributionData.data, {x: "absPosition", y: "contribution", fill: "#9b59b6", fillOpacity: 0.2}),
Plot.ruleX([contributionData.mean], {stroke: "#e74c3c", strokeWidth: 2, strokeDasharray: "4,4"}),
Plot.ruleY([0])
]
})}
<div style="font-size: 12px; text-align: center;">
E[|X|] = <strong>${contributionData.mean.toFixed(2)}</strong> <span style="color: #e74c3c;">— red dashed</span> |
Area under curve = E[|X|]
</div>
</div>`
```
The area under this curve equals the expected value. The curve clearly spreads further and further out towards infinity, yet its peak stays roughly constant: each value becomes less probable, but the value itself grows, and the two effects cancel almost exactly. Therefore, the area must be increasing.
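A quick numeric check of that claim, computing both the mean and the peak of the contribution curve for two very different $n$ (plain JavaScript, reusing the same log-space binomial trick as the cells above):

```javascript
// For each |position|, contribution = |position| × P(|X| = |position|).
// The mean grows like √(2n/π), while the curve's peak stays ≈ constant.
function contributionStats(n) {
  let mean = 0, peak = 0;
  let logProb = -n * Math.log(2); // log P(k = 0) = log(0.5^n)
  for (let k = 0; k <= n; k++) {
    const prob = Math.exp(logProb);
    const absPos = Math.abs(2 * k - n);
    mean += absPos * prob; // E[|X|] = Σ |x|·P(X = x)
    // Folded probability doubles for absPos > 0 (both ±absPos map to it)
    const contribution = absPos * prob * (2 * k === n ? 1 : 2);
    if (contribution > peak) peak = contribution;
    logProb += Math.log(n - k) - Math.log(k + 1); // advance C(n,k) → C(n,k+1)
  }
  return {mean, peak};
}

const small = contributionStats(100);
const large = contributionStats(10000);
console.log(small.mean.toFixed(2), large.mean.toFixed(2)); // mean grows ~10×
console.log(small.peak.toFixed(2), large.peak.toFixed(2)); // peak ≈ constant
```

A 100× increase in $n$ multiplies the mean by about 10 ($\sqrt{100}$), while the peak barely moves.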
So all that is happening is that variance, the widening force, pushes the mean higher and higher as the simulation progresses [[1]](#references).
Maybe this is intuitive for you already, but for me - I needed a bit of hand-holding to get all the way there.
## References
1. Jiahao, Tom Z. "[Why does a random walk tend to drift away?](https://medium.com/p/3484413503e0)." *Medium*.
2. "[Servitization and Queueing Theory: Deriving M/M/1 Model](https://medium.com/data-science/servitization-and-queueing-theory-deriving-m-m-1-model-589175b23054)." *Towards Data Science*.
3. "[M/M/1 queue](https://en.wikipedia.org/wiki/M/M/1_queue)." *Wikipedia*.
4. "[Kendall's notation](https://en.wikipedia.org/wiki/Kendall%27s_notation)." *Wikipedia*.
5. "[Random walk](https://en.wikipedia.org/wiki/Random_walk)." *Wikipedia*.
6. "[Half-normal distribution](https://en.wikipedia.org/wiki/Half-normal_distribution)." *Wikipedia*.
7. Lawler, Gregory F. "[Simple Random Walk](https://www.math.uchicago.edu/~lawler/reu1)." *University of Chicago*.
8. "[Random Walk and Diffusion](https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Concepts_in_Biophysical_Chemistry_(Tokmakoff)/03:_Diffusion/11:_Brownian_Motion/11.01:_Random_Walk_and_Diffusion)." *Chemistry LibreTexts*.