How to track engineering bottlenecks across Jira and GitHub
A practical guide to finding work that's actually stuck. Aging tickets, stalled PRs, reviewer overload: the pain that hides between your tracker and your code.
- Engineering bottlenecks hide in the gap between your tracker and your code. Jira says "In Review". GitHub says "no reviewer yet."
- Four bottlenecks cover 90% of real pain: stale tickets, stalled PRs, commit-less tasks, and reviewer overload.
- Single-tool dashboards (Jira Automation, GitHub Insights) can each show a slice. None join the two, so you end up joining them mentally.
- Cross-tool synthesis, querying both systems and joining on the ticket or PR reference, is the only reliable way to see what's actually stuck.
Most engineering teams can't answer a simple question: "what's stuck right now?"
Not because nobody's tracking anything. Quite the opposite. We're tracking everything. Jira has tickets, sprint boards, and cycle times. GitHub has pull requests, reviews, and commit histories. Monday has roadmaps. Slack has pings from people saying "hey, still waiting on this." Every tool shows you a slice of reality. None of them shows the whole picture.
This is the cross-tool bottleneck problem, and it gets worse every quarter as teams add tools.
Why cross-tool bottlenecks are invisible
Imagine a ticket parked in "In Review" on your Jira board for eight days. Jira says it's assigned to a senior engineer and was moved to review on the 12th. Useful.
Now open the linked GitHub PR. It was opened eight days ago, the branch has one commit, and the PR has no reviewer assigned. It's sitting in the default branch-protection queue waiting for a human to click "approve."
From Jira alone, the ticket looks in progress. From GitHub alone, the PR looks fresh. The only way to see the truth is to join the two views on the PROD-412 ticket ID that happens to live in the PR title: this work is stuck, no one is reviewing it, and the engineer who wrote it has moved on to their next ticket.
Multiply that pattern by 40 open tickets across 15 repos and you have the classic "sprint looks healthy, velocity is mysteriously dropping" situation.
Four bottlenecks that cover 90% of real pain
You don't need to track 50 metrics. In our experience running TKTIDE across R&D teams, four bottleneck patterns explain most of what's actually hurting delivery.
1. Tickets parked in "In Review" for more than 5 days
The single highest-signal metric. A ticket stuck in review for a week almost always means one of three things:
- The PR has no assigned reviewer
- The assigned reviewer is overloaded
- The PR has merge conflicts no one wants to touch
Jira has a built-in aging report, but it won't tell you why something is aging. You need to cross-reference with GitHub to see the PR state:
jira: issue status = "In Review" AND updated < -5d
github: for each ticket → find PR → check reviewer + last activity
The join point is the ticket ID in the PR title or branch name. If your team's convention is PROD-412-fix-login-flow, the query is trivial. If there's no convention, enforce one. The ROI on that single process change is enormous.
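Once the convention exists, the join itself is a one-line regex plus a filter. A minimal sketch in Python; the `PROD-412`-style pattern, the dict field names, and both helper functions are illustrative, not a fixed API:

```python
import re
from typing import Optional

# Assumed convention: ticket IDs look like PROD-412 and appear in the
# PR title or the branch name. Widen the pattern to your project keys.
TICKET_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def ticket_id(pr_title: str, branch: str) -> Optional[str]:
    """Return the first ticket ID found in the PR title or branch name."""
    for text in (pr_title, branch):
        match = TICKET_RE.search(text)
        if match:
            return match.group(1)
    return None

def join_stuck(aging_tickets: set, open_prs: list) -> list:
    """PRs whose ticket is aging in review AND which have no reviewer.

    aging_tickets: ticket IDs from the Jira query (In Review, >5 days).
    open_prs: dicts with 'title', 'branch', 'requested_reviewers'.
    """
    return [
        pr for pr in open_prs
        if ticket_id(pr["title"], pr["branch"]) in aging_tickets
        and not pr["requested_reviewers"]
    ]
```

The output is exactly the list you care about: aging tickets whose PR is sitting unreviewed, rather than aging tickets in general.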
2. Open PRs with no reviewer after 72 hours
Unassigned PRs are the ticking time bomb of most engineering orgs. Unlike ticket-level metrics, this one lives entirely in GitHub, but teams rarely build alerting for it.
A clean query:
pulls: created > 72h ago AND state = "open" AND requested_reviewers = empty
If you're running GitHub Actions, you can wire a daily workflow to post unreviewed PRs to a Slack channel. In our experience across teams, just surfacing the queue tends to drop median PR age by 30 to 50% within a month. Nobody wants their PR in the public "ignored" list.
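The filter behind that query is small enough to sketch. This version works on the JSON that GitHub's list-pulls REST endpoint returns (`state`, `created_at`, `requested_reviewers`); the fetch itself is omitted, and the 72-hour default simply mirrors the threshold above:

```python
from datetime import datetime, timedelta, timezone

def unreviewed_prs(prs: list, max_age_hours: int = 72) -> list:
    """Open PRs older than max_age_hours with no requested reviewers.

    Each PR dict is expected to carry the fields the GitHub REST API
    returns: 'state', 'created_at' (ISO 8601, e.g. 2024-05-01T09:00:00Z),
    and 'requested_reviewers'.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [
        pr for pr in prs
        if pr["state"] == "open"
        and not pr["requested_reviewers"]
        # fromisoformat() before Python 3.11 doesn't accept a trailing Z
        and datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00")) < cutoff
    ]
```

Wire the result into whatever alerting you already have; the value is in the daily surfacing, not the code.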
3. Tickets marked "In Progress" with zero commits on their branch
A classic "ghost ticket": in progress according to the tracker, but no code has been written. This isn't about blame. It's about uncovering capacity conflicts that the team didn't realise were happening.
Usually the engineer is in the middle of another in-progress ticket (which does have commits) and the second one is parked. The tracker can't tell you this; only commit history can.
The fix is rarely "push harder". It's "reassign, defer, or un-assign." Making this visible is the first step.
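Detecting ghost tickets is a simple set difference once you have both views. A sketch, assuming you've already counted commits per ticket branch (for example with `git rev-list --count main..BRANCH` or the GitHub commits API):

```python
def ghost_tickets(in_progress: list, commits_by_ticket: dict) -> list:
    """Tickets marked In Progress whose branch has no commits yet.

    in_progress: ticket IDs from the Jira 'In Progress' query.
    commits_by_ticket: ticket ID -> commit count on its branch; tickets
    with no branch at all simply won't appear and count as zero.
    """
    return [t for t in in_progress if commits_by_ticket.get(t, 0) == 0]
```

Reviewing this list in standup once a week is usually enough to surface the capacity conflicts.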
4. Reviewer overload: one person approving more than 60% of PRs
This matters for three reasons: risk concentration, reviewer burnout, and knowledge silos. You'd be surprised how often a "fast-moving" team is actually just one staff engineer saying yes to everything.
github: group merged PRs by approver, last 30 days
threshold: > 60% of approvals from one person = overload
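The grouping above is a few lines of code once you've collected one approver login per merged PR over the window. A sketch; the 0.6 default matches the threshold above, and the input shape is an assumption, not a GitHub API format:

```python
from collections import Counter

def overloaded_reviewers(approvals: list, threshold: float = 0.6) -> list:
    """Reviewers whose share of approvals exceeds the threshold.

    approvals: one entry per merged PR, the approver's login, collected
    over the window you care about (e.g. the last 30 days).
    Returns (login, share) pairs, most-loaded first.
    """
    counts = Counter(approvals)
    total = len(approvals)
    return [
        (login, n / total)
        for login, n in counts.most_common()
        if n / total > threshold
    ]
```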
When you find this, don't fix it by scolding the overloaded person. Fix it by explicitly rotating reviewer assignments. GitHub Code Owners or round-robin tooling does this well.
The four ways teams actually track this today
Ranked roughly by how well they scale.
Approach 1: manual spreadsheets
Every team starts here. An engineering manager pastes a list of aging tickets into a Google Sheet each Monday. It works for small teams. It breaks instantly if you have more than two or three repos, or if the EM takes a week off.
Best for: teams under 10 engineers, or while you're still figuring out what to track. Breaks when: the EM isn't available, or the data needs to be fresh same-day.
Approach 2: per-tool dashboards (Jira Automation, GitHub Insights)
Jira Automation can flag aging tickets. GitHub's Insights tab shows PR stats per repo. Both are useful within their own walls. Neither joins them.
The core limitation: these dashboards don't know that Jira ticket PROD-412 and GitHub PR #2041 are the same piece of work. They each show you a partial view, and you mentally join them, which is exactly the manual labour you're trying to eliminate.
Best for: teams that genuinely live inside one tool for 95% of their work. Breaks when: you have more than one tool (which is nearly every company over 20 people).
Approach 3: custom ETL and BI stack
Some teams build internal pipelines: pull Jira data, pull GitHub data, dump both into Snowflake or BigQuery, build a Looker or Metabase dashboard on top. This works, and it's the most powerful option if you can staff it.
Realistic cost: one data engineer's time for 4 to 8 weeks to build, then maintenance forever. Every time Atlassian changes a Jira webhook, the pipeline breaks. Every new integration (Monday, Linear, Asana) is more ingestion code.
Best for: companies over 200 engineers with a dedicated data platform team. Breaks when: you don't have one of those, or priorities change mid-project.
Approach 4: agentic cross-tool synthesis
The newer approach, and what we're building at TKTIDE, is to deploy one AI agent per tool and have them talk to each other to answer cross-tool questions. Ask "what's stuck?" once, and the Jira agent pulls aging tickets, the GitHub agent pulls PR states, and a synthesis layer joins them by ticket ID and returns a single unified answer.
The key architectural difference vs. a BI pipeline: there's no migration, no central data warehouse, no ETL cron that breaks at 3am. Each agent reads from the live tool on demand. When Atlassian changes an API, only the Jira agent needs to know.
Best for: teams of 20 to 200 engineers who live in more than one tool and don't have a data platform team. Downside: newer category, fewer reference deployments (which is partly why we're writing this).
How to start tracking this tomorrow
Pick the approach that fits your team size:
- Under 20 engineers: write a 10-line GitHub Action that posts unreviewed PRs to Slack every morning. Cover bottleneck #2 only. That alone is worth doing.
- 20 to 200 engineers: use one of the cross-tool tools (TKTIDE, or an alternative). The ROI curve bends sharply around this team size because the manual joining work becomes a real tax.
- 200+ engineers with a platform team: build the BI pipeline. You'll own it forever, but you can customise it endlessly.
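For the under-20 case, the daily job reduces to: fetch open PRs, filter for unreviewed ones, post the list. The posting half might look like this, using Slack's incoming-webhook JSON format; the webhook URL would come from a repository secret, and the fetch-and-filter step is the query described under bottleneck #2:

```python
import json
import urllib.request

def format_message(pr_urls: list) -> str:
    """Build the Slack message body from a list of PR URLs."""
    return "PRs waiting for a reviewer:\n" + "\n".join(pr_urls)

def post_to_slack(webhook_url: str, pr_urls: list) -> None:
    """Post the unreviewed-PR list to a Slack incoming webhook.

    Posts nothing on an empty list, so quiet days stay quiet.
    """
    if not pr_urls:
        return
    body = json.dumps({"text": format_message(pr_urls)}).encode()
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

Run it from a scheduled GitHub Actions workflow each morning and you've covered bottleneck #2 end to end.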
Common mistakes we see
Measuring everything, acting on nothing. The four bottlenecks above are enough. Don't add "PR size distribution by day of week" to your dashboard until you've actually intervened on reviewer overload.
Focusing on velocity instead of flow. Velocity (story points per sprint) is a lagging metric that tells you nothing about why work is stuck. Flow metrics (time-in-state, WIP count, aging) tell you exactly where to intervene.
Framing the data as accountability. 90% of aging tickets are a process problem, not an engineer problem: missing reviewer, unclear definition of done, waiting on an external team. If people feel they'll be blamed for aging tickets, they'll quietly re-open closed ones or work on the side, and you'll lose your data.
Where this goes next
We keep writing about this because cross-tool visibility is the single biggest leverage point in engineering operations, and most teams under-invest in it. You can hire faster, estimate better, and run more ceremonies. None of that helps if you can't see that PROD-412 has been waiting on a reviewer for eight days.
Start with the four bottlenecks. Pick one tool-tier that matches your team size. Re-evaluate in 90 days.
Frequently asked
Can I track these bottlenecks without a third-party tool?
Yes, for simpler cases. GitHub Actions plus a Slack webhook can alert on PRs without reviewers or PRs older than N days. For anything that requires joining Jira and GitHub data, like aging tickets whose PR has no reviewer, you need either custom tooling or a cross-tool platform.
What's the difference between Jira's aging issues report and cross-tool bottleneck tracking?
Jira's report tells you which tickets are old. Cross-tool tracking tells you why: whether the bottleneck is a missing reviewer, reviewer overload, a stalled commit history, or something else. Knowing which is table stakes; knowing why is what lets you actually intervene.
Is GitHub Insights enough for a small engineering team?
If your entire workflow lives in GitHub (code, issues, projects) then yes. If you use GitHub for code and Jira or Linear or Monday for tickets, you'll still be mentally joining the two, which is exactly the manual work cross-tool tooling is supposed to eliminate.
How often should I review bottleneck metrics?
Daily for PR reviewer assignment and reviewer overload, weekly for ticket aging. Less frequent and you miss signals; more frequent and it becomes noise that you eventually stop reading.
TKTIDE connects Jira, GitHub, Monday, and 30+ other R&D tools via one AI agent per tool. Ask a question once, get a synthesized answer across all your systems. No migration, no new dashboard.