Capital One is a top-3 US credit-card issuer, and among financial firms its tech hiring volume trails only JPMorgan and BlackRock. Its SWE and Technology Development Program (TDP) tracks open roughly 800 New Grad / Intern roles per year. The OA process is one of the most standardized in the industry — a CodeSignal coding test plus a HireVue behavioral, and both must be cleared to count as an OA pass.
Below is a 2025-2026 cycle breakdown — question types, scoring, and recruiter-loop handoff — based on reports from 1point3acres, Reddit r/CapitalOne, and Glassdoor. This article focuses on the SWE / TDP track; for Data Science, see our DS-track OA writeup.
## Capital One SWE / TDP OA at a Glance
| Dimension | Detail |
|---|---|
| Platform | CodeSignal General Coding Framework |
| Duration | 75 minutes (some batches 70 min) |
| Questions | 4 (Level 1 → Level 4 progression) |
| Difficulty | String parsing + simulation + set ops + DP |
| Pass Bar | Level 1-3 fully AC + Level 4 ≥ 50% hidden tests |
| Next Step | HireVue behavioral video (5-7 questions, 2-3 min each) |
| Feedback | 7-21 days |
Key trait: Capital One doesn't ask LC-style hard graph / DP problems — it asks bank-business-flavored string / simulation / state-machine problems. Prompts are long (1-2 screens each), so reading and modeling alone can eat half your time.
## Real Question Set: Bank Transaction Logs (4-level progression)
This has been the most common prompt from fall 2025 through spring 2026.
### Level 1: Basic Log Parsing
Prompt: given a string array `logs`, where each entry looks like `"2026-04-15 10:23:01 USR_001 DEPOSIT 250.00"`, implement `top_k_balance(logs, k)`, which returns the k users with the highest current balance (ties broken alphabetically by user_id).
Operations are limited to DEPOSIT and WITHDRAW.
```python
from collections import defaultdict

def top_k_balance(logs, k):
    bal = defaultdict(float)
    for line in logs:
        parts = line.split()
        # parts: [date, time, user, op, amount]
        user = parts[2]
        op = parts[3]
        amt = float(parts[4])
        if op == "DEPOSIT":
            bal[user] += amt
        elif op == "WITHDRAW":
            bal[user] -= amt
    # sort by balance descending, then user_id ascending for ties
    ranked = sorted(bal.items(), key=lambda x: (-x[1], x[0]))
    return [u for u, _ in ranked[:k]]
```
Pitfall: float accumulation drifts, and hidden tests compare at 0.01 precision. From Level 1 on, convert amounts to integer cents with `round(amt * 100)` — bare `int()` truncates toward zero and can drop a cent — so all later levels stay integer-only.
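Both the drift and the truncation trap are easy to demonstrate; the `to_cents` helper below is an illustrative name, not part of the prompt:

```python
# Ten deposits of $0.10 do not sum to exactly $1.00 in binary floating point
total = sum([0.10] * 10)
print(total == 1.0)       # False

# Bare int() truncation can lose a cent: 0.29 is stored as 0.28999...,
# so 0.29 * 100 lands just below 29. round() snaps back to the intended value.
print(int(0.29 * 100))    # 28
print(round(0.29 * 100))  # 29

def to_cents(s):
    """Parse a decimal string like '250.00' into integer cents."""
    return round(float(s) * 100)

print(to_cents("250.00")) # 25000
```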
### Level 2: Frozen Account State
Adds FREEZE and UNFREEZE: a frozen account fails any WITHDRAW (still recorded but not deducted); DEPOSIT continues normally.
```python
from collections import defaultdict

def parse_logs_v2(logs):
    bal = defaultdict(int)          # balances in integer cents
    frozen = set()
    fail_count = defaultdict(int)
    for line in logs:
        parts = line.split()
        user, op = parts[2], parts[3]
        if op == "FREEZE":
            frozen.add(user)
        elif op == "UNFREEZE":
            frozen.discard(user)
        elif op == "DEPOSIT":
            bal[user] += round(float(parts[4]) * 100)
        elif op == "WITHDRAW":
            amt = round(float(parts[4]) * 100)
            if user in frozen or bal[user] < amt:
                # frozen account or insufficient funds: record the failure
                fail_count[user] += 1
            else:
                bal[user] -= amt
    return bal, fail_count
```
Scoring: `round(float(x) * 100)` in one line avoids cumulative float errors (prefer `round` over `int`, which truncates toward zero and can be off by a cent). Level 2 must also return `fail_count` — hidden tests assert things like "user X should have exactly 5 failed WITHDRAWs".
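A minimal end-to-end check of the Level 2 behavior — the parser is restated compactly so the snippet runs standalone, and the exact field layout of FREEZE/UNFREEZE lines (no amount field) is an assumption:

```python
from collections import defaultdict

def parse_logs_v2(logs):
    bal, frozen, fail_count = defaultdict(int), set(), defaultdict(int)
    for line in logs:
        _, _, user, op, *rest = line.split()
        if op == "FREEZE":
            frozen.add(user)
        elif op == "UNFREEZE":
            frozen.discard(user)
        elif op == "DEPOSIT":
            bal[user] += round(float(rest[0]) * 100)
        elif op == "WITHDRAW":
            amt = round(float(rest[0]) * 100)
            if user in frozen or bal[user] < amt:
                fail_count[user] += 1
            else:
                bal[user] -= amt
    return bal, fail_count

logs = [
    "2026-04-15 10:00:00 USR_001 DEPOSIT 100.00",
    "2026-04-15 10:01:00 USR_001 FREEZE",
    "2026-04-15 10:02:00 USR_001 WITHDRAW 50.00",   # fails: account frozen
    "2026-04-15 10:03:00 USR_001 UNFREEZE",
    "2026-04-15 10:04:00 USR_001 WITHDRAW 500.00",  # fails: insufficient funds
    "2026-04-15 10:05:00 USR_001 WITHDRAW 40.00",   # succeeds
]
bal, fails = parse_logs_v2(logs)
print(bal["USR_001"], fails["USR_001"])  # 6000 2
```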
### Level 3: Sliding Window Stats
New requirement: implement `transactions_in_last_k_minutes(logs, query_user, k)` — return the number of the query user's successful transactions in the k minutes up to and including their last operation.
Approach: build a (timestamp, op, success?) index per user during a single pass, then two-pointer / binary search for the query.
```python
from bisect import bisect_left
from collections import defaultdict
from datetime import datetime

def build_index(logs):
    index = defaultdict(list)   # user -> list of (epoch_seconds, success_bool)
    bal = defaultdict(int)      # balances in integer cents
    frozen = set()
    for line in logs:
        parts = line.split()
        ts = datetime.strptime(parts[0] + " " + parts[1], "%Y-%m-%d %H:%M:%S").timestamp()
        user, op = parts[2], parts[3]
        if op == "FREEZE":
            frozen.add(user)
            continue
        if op == "UNFREEZE":
            frozen.discard(user)
            continue
        amt = round(float(parts[4]) * 100)
        success = True
        if op == "WITHDRAW":
            if user in frozen or bal[user] < amt:
                success = False
            else:
                bal[user] -= amt
        else:  # DEPOSIT
            bal[user] += amt
        index[user].append((ts, success))
    return index

def transactions_in_last_k_minutes(index, user, k):
    if user not in index or not index[user]:
        return 0
    last_ts = index[user][-1][0]
    threshold = last_ts - k * 60
    # logs are chronological, so timestamps per user are sorted
    times = [t for t, _ in index[user]]
    pos = bisect_left(times, threshold)
    return sum(1 for t, s in index[user][pos:] if s)
```
Pitfall: timestamps are zero-padded strings (`YYYY-MM-DD HH:MM:SS`), so lexicographic comparison matches chronological order. But k-minute window arithmetic needs epoch seconds to handle hour and day rollovers.
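A two-line check of both halves of that pitfall (timestamps invented for illustration):

```python
from datetime import datetime

a = "2026-04-15 23:59:59"
b = "2026-04-16 00:00:01"
# zero-padded timestamps sort lexicographically in chronological order
assert a < b

# but window arithmetic needs real seconds, e.g. across a day rollover
fmt = "%Y-%m-%d %H:%M:%S"
delta = (datetime.strptime(b, fmt) - datetime.strptime(a, fmt)).total_seconds()
print(delta)  # 2.0
```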
### Level 4: User Clustering & Optimal Merging
The final level asks you to cluster users by behavioral similarity (operation time-window + transaction profile) and return the top k largest clusters. K-means, hash bucketing, or any lightweight clustering is acceptable.
75 minutes is rarely enough to ace Level 4. Pragmatic approach: simple two-dimensional hash bucketing on (transaction_count_bucket, time_span_bucket), then return the k largest buckets. That typically earns 50-70% of hidden tests — enough to cross the bar.
```python
from collections import defaultdict

def cluster_users(index, k):
    profile = {}
    for user, ops in index.items():
        succ = [op for op in ops if op[1]]   # successful transactions only
        if not succ:
            profile[user] = (0, 0)
            continue
        n = len(succ)
        if n <= 1:
            n_bucket = 0
        elif n <= 5:
            n_bucket = 1
        elif n <= 20:
            n_bucket = 2
        else:
            n_bucket = 3
        span = succ[-1][0] - succ[0][0]      # seconds from first to last success
        s_bucket = 0 if span < 3600 else 1 if span < 86400 else 2
        profile[user] = (n_bucket, s_bucket)
    buckets = defaultdict(list)
    for u, p in profile.items():
        buckets[p].append(u)
    # deterministic output: largest buckets first, members sorted
    top = sorted(buckets.values(), key=lambda x: -len(x))[:k]
    return [sorted(g) for g in top]
```
Scoring: Capital One's Level 4 doesn't require an optimal solution — a non-empty, deterministic, defensible bucketing earns 50%+ of the points. Don't burn 20 minutes on k-means only to time out.
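A toy run of the same count/span bucketing, re-sketched compactly so it executes standalone — the index values and user names are invented:

```python
from collections import defaultdict

def bucket_count(n):
    # same thresholds as cluster_users above
    return 0 if n <= 1 else 1 if n <= 5 else 2 if n <= 20 else 3

def bucket_span(span):
    return 0 if span < 3600 else 1 if span < 86400 else 2

# user -> list of (epoch_seconds, success) pairs, all successful here
index = {
    "alice": [(0, True), (60, True), (120, True)],   # 3 ops in 2 minutes
    "bob":   [(10, True), (50, True), (200, True)],  # 3 ops in ~3 minutes
    "carol": [(0, True)],                            # single op
}
buckets = defaultdict(list)
for user, ops in index.items():
    span = ops[-1][0] - ops[0][0]
    buckets[(bucket_count(len(ops)), bucket_span(span))].append(user)

# largest buckets first, members sorted for deterministic output
top_groups = [sorted(g) for g in sorted(buckets.values(), key=lambda g: -len(g))[:2]]
print(top_groups)  # [['alice', 'bob'], ['carol']]
```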
## Time Allocation
| Level | Recommended Time | Key Strategy |
|---|---|---|
| Level 1 | 8 min | Clean string parsing + integer cents |
| Level 2 | 12 min | Reuse Level 1, add a frozen set |
| Level 3 | 18 min | Reuse the index, binary-search the window |
| Level 4 | 25 min | Hash buckets > KMeans; settle for cross-the-bar |
| Buffer | 12 min | debug + float sanity check |
## HireVue Video Behavioral (Mandatory After OA)
Passing the coding OA does not mean overall OA pass — you must also clear HireVue. Email invite usually arrives 3-7 days after OA, with 5-7 questions, 2-3 minutes per response.
### High-Frequency Questions
| Type | Example | STAR Anchors |
|---|---|---|
| Learning | "Tell me about a time you learned something new under pressure." | Emphasize timeline + concrete skill gained |
| Conflict | "Describe a situation when your team disagreed." | Your mediation role, no blame |
| Customer view | "How do you balance speed vs. quality?" | Maps to Capital One's "deliver excellence" value |
| Failure | "Tell me about a project that didn't go as planned." | Concrete lesson learned |
| Why C1 | "Why Capital One specifically?" | Reference a specific product / tech-blog post |
### Recording Tips
- 2-3 minutes per question; mind the clock — overruns are auto-cut
- Look at the camera, not the screen; steady cadence + smile beats fancy content
- Don't re-record carelessly: HireVue typically allows only 1-2 retries per question
## Prep Strategy
| Priority | Topic | Recommended LC / Resources |
|---|---|---|
| ⭐⭐⭐ | String parsing / split / parse | LC 65, LC 224, LC 736 |
| ⭐⭐⭐ | Simulation + state machines | LC 348, LC 1396, LC 1352 |
| ⭐⭐ | Sliding window + binary search | LC 209, LC 1838, LC 1146 |
| ⭐⭐ | OOP design | LC 146, LC 460, LC 355 |
| ⭐ | HireVue behavioral | Capital One careers FAQ + Glassdoor BQ bank |
## FAQ
### Q1: Is Capital One OA harder or easier than FAANG?
Lower algorithmic difficulty — Levels 1-3 are roughly LC Easy to easy-Medium. But the OA is not easy to clear because: (1) Level 4 is hard; (2) prompts are long, eating reading + modeling time; (3) HireVue is part of the pass criterion — many candidates clear coding but fail HireVue.
### Q2: Are Capital One TDP and SWE the same OA?
The coding OA is identical, but HireVue banks differ slightly:
- TDP: more "why a rotational program" / "cross-team learning" themes
- SWE: more "code ownership" / "system-design intuition" themes
Bottom line: TDP accepts a broader pool (including non-CS majors); SWE leans toward CS / SE majors with a slightly higher bar.
### Q3: How long after OA until onsite?
After both OA + HireVue pass, 2-4 weeks to Power Day (Capital One's onsite name): 1 Behavioral + 2 Coding + 1 Case (business scenario). End-to-end OA → Offer averages 6-10 weeks.
### Q4: Can I preview HireVue questions?
No. Each question gives 30 seconds of think-time before the recording auto-starts. Recommendation: pre-write STAR stories for the 6 high-frequency types above so you can decide which one to use within those 30 seconds.
### Q5: Does Capital One care about GPA?
TDP gates at GPA 3.0+ (sometimes 3.2+); SWE has no hard cutoff, but resumes with GPA < 3.0 default to manual review. If your GPA is low, strongly consider networking through Capital One Recruiter Connect or campus events to get past the automated screen.
### Q6: What's the Capital One Power Day pass rate?
~30-40% — slightly higher than FAANG onsite (25-30%) because Capital One values culture fit over "can you smash the hardest problem". Tip: in onsite Coding, clear thinking + a working solution beats a one-line clever optimum.
## Contact
If you're prepping Capital One TDP / SWE / DS or similar fintech (JPMorgan, BlackRock, Discover), the coding OA is just the first filter — HireVue + Power Day behavioral rounds are the real screen. We've curated Top 30 Capital One HireVue questions + STAR templates + Power Day question bank — feel free to reach out.
Add WeChat Coding0201, get the Capital One OA + HireVue bank.
- Email: [email protected]
- Telegram: @OAVOProxy