Does the Woodpecker Method Work? Analyzing 120,000 Puzzle Attempts
We analyzed 7 weeks of Woodpecker Method training data to see whether cycle-based tactics repetition actually improves puzzle-solving performance.
TL;DR: Based on 120,513 puzzle attempts from 1,017 users, the Woodpecker Method appears effective: users showed ~10 percentage point accuracy gains and 21% faster solve times by cycle 2. Users who trained consistently over weeks improved even at fixed difficulty levels. These are observational results without a control group.

Key Takeaways
- Users who repeated puzzle sets showed approximately 10 percentage point accuracy gains and 21% faster solve times by cycle 2.
- Efficiency multipliers reached 1.65x by cycle 3 and 2.36x by cycle 4, though sample sizes decrease at higher cycles.
- Within the same difficulty level, users showed 4-15 percentage point accuracy improvements over multiple weeks of activity.
- Approximately 14% of users who started at beginner level progressed to harder difficulty levels.
- This is observational data without a control group. Results should be interpreted as descriptive patterns, not causal claims.
Introduction
Does the Woodpecker Method actually work? We analyzed seven weeks of our own data to find out.
This post presents what we observed when 1,017 users logged 120,513 puzzle solve attempts on our platform between November 2025 and January 2026. We looked at cycle-over-cycle improvement, longitudinal trends, difficulty progression, and what distinguishes highly engaged users from casual ones.
The short version: users who repeated puzzle sets got better at them. Users who stuck around for weeks showed improvement even at fixed difficulty levels. Whether this reflects the training method, user selection, or something else entirely is a question the data alone cannot answer.
What follows is a detailed breakdown of the numbers.
Dataset Overview
| Metric | Value |
|---|---|
| Total Users | 1,017 |
| Total Solve Attempts | 120,513 |
| Date Range | Nov 26, 2025 - Jan 14, 2026 |
| Duration | ~7 weeks |
| Overall Accuracy | 83.6% |
| Average Solve Time | 31.6 seconds |
| Median Solve Time | 15.0 seconds |
Our puzzle sets are organized by difficulty level (Beginner, Casual, Club, Tournament, Master) and by tactical theme (fork, pin, discovered attack, etc.). The Woodpecker Method underpins the training approach: users solve the same set of puzzles repeatedly in cycles, aiming to get faster and more accurate each time.
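For readers who want to follow the arithmetic behind the tables that follow, here is a minimal sketch of the kind of per-attempt record the analysis aggregates. The field names are illustrative, not our production schema.

```python
from dataclasses import dataclass

@dataclass
class SolveAttempt:
    """One puzzle attempt, as assumed by the sketches in this post (illustrative schema)."""
    user_id: str         # anonymized user identifier
    puzzle_set: str      # e.g. "club-pins"
    difficulty: str      # Beginner, Casual, Club, Tournament, or Master
    cycle: int           # 1 on the first pass through a set, 2 on the repeat, ...
    correct: bool        # whether the puzzle was solved correctly
    solve_time_s: float  # seconds spent on the puzzle

# Cycle-over-cycle comparisons group attempts by (user_id, puzzle_set, cycle);
# repeating the same set at a higher cycle number is what the Woodpecker Method prescribes.
```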
Woodpecker Method Cycle Results
Users who repeated the same puzzle set multiple times showed improvement in both accuracy and solve time across cycles.
| Cycle | Users | Accuracy | Avg Time | Efficiency Multiplier |
|---|---|---|---|---|
| 1 | 55 | 85.8% | 30.4s | 1.00x (baseline) |
| 2 | 54 | 95.8% | 24.0s | 1.41x |
| 3 | 10 | 97.7% | 21.0s | 1.65x |
| 4 | 4 | 98.7% | 14.8s | 2.36x |
Interpretation: Within this cohort of users who completed multiple cycles, accuracy increased by approximately 10 percentage points between cycles 1 and 2, while average solve time decreased by 21%. The efficiency multiplier, which combines accuracy and speed into a single metric, showed continued gains through cycle 4.
Important caveat: Sample sizes decrease substantially at higher cycles. Only 4 users in this dataset completed cycle 4. The apparent gains at cycles 3 and 4 may reflect survivorship bias (users who continue are likely those who are improving) rather than a universal effect.
The efficiency trajectory is consistent with the Woodpecker Method's expected progression (targeting 1.5x by cycles 3-4, 2.0x by cycles 5-6), though we cannot attribute this to the method specifically without a controlled comparison.
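For the curious, the multipliers in the table are reproduced almost exactly by treating efficiency as the accuracy ratio multiplied by the speed ratio relative to cycle 1. A minimal sketch under that reading:

```python
def efficiency_multiplier(base_accuracy: float, base_time: float,
                          accuracy: float, time: float) -> float:
    """Efficiency relative to cycle 1: accuracy ratio times speed-up ratio."""
    return (accuracy / base_accuracy) * (base_time / time)

# Reproducing the table above (cycle 1 baseline: 85.8% accuracy, 30.4s average time):
for cycle, acc, t in [(2, 0.958, 24.0), (3, 0.977, 21.0), (4, 0.987, 14.8)]:
    print(cycle, round(efficiency_multiplier(0.858, 30.4, acc, t), 2))
# -> 2 1.41 / 3 1.65 / 4 2.36
```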
Individual Improvement Stories
Examining individual users provides texture beyond aggregate statistics. The following cases illustrate observed improvement patterns. User identifiers are anonymized.
Case 1: Tournament-Level Player
One user working on tournament-level puzzles showed substantial improvement over 2 cycles:
| Metric | Cycle 1 | Cycle 2 | Change |
|---|---|---|---|
| Accuracy | 58.6% | 94.8% | +36.2 pp |
| Avg Time | 39.9s | 25.4s | -36% |
| Attempts | ~375 | ~375 | - |
This user completed approximately 750 total attempts. The magnitude of improvement is notable, though individual results vary considerably.
Case 2: Theme Mastery Across Motifs
Another user systematically worked through multiple tactical themes at the club level:
| Theme | Cycle 1 Accuracy | Later Accuracy | Time Change |
|---|---|---|---|
| Hanging Piece | 65.0% | 98.0% | -25% |
| Deflection | 67.0% | 98.0% | -11% |
| Sacrifice | 68.0% | 96.0% | -64% |
| Pin | 74.0% | 95.5% | -50% |
Across these themes, this user showed an average accuracy improvement of approximately 28 percentage points with consistent speed gains.
Case 3: Multi-Level Progression
A third user progressed through multiple difficulty levels while maintaining high accuracy:
| Level | Accuracy | Puzzles Completed |
|---|---|---|
| Beginner | 100% | 18 |
| Casual | 94.0% | 1,113 |
| Club | 86.6% | 761 |
This user completed 1,892 puzzles in total. The drop in accuracy at higher levels is expected; it simply tracks the increase in puzzle difficulty.
Long-Term Improvement Trends
Analyzing users who remained active across multiple weeks, controlling for difficulty level, reveals improvement trends over time.
Beginner Level
| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 516 | 90.0% | 21.2s |
| 2 | 47 | 93.7% | 16.8s |
| 3 | 25 | 94.5% | 19.6s |
Users active at the beginner level showed an accuracy improvement of approximately 4.5 percentage points over 3 weeks.
Club Level
| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 260 | 84.6% | 31.9s |
| 2 | 60 | 86.1% | 34.6s |
| 3 | 40 | 88.8% | 37.0s |
Club-level users showed approximately 4 percentage points of accuracy improvement over 3 weeks. The slight increase in solve time may reflect users attempting harder puzzles within the club tier.
Tournament Level
| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 125 | 56.7% | 59.3s |
| 2 | 29 | 57.6% | 56.2s |
| 3 | 19 | 72.0% | 48.9s |
Tournament-level users showed the largest observed improvement: approximately 15 percentage points in accuracy and 17% faster solve times by week 3.
Survivorship note: User counts decrease substantially from week 1 to week 3 at all levels. Users who remain active may differ systematically from those who stop. These improvements may partially reflect selection effects rather than universal training benefits.
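For reference, tables like the ones above can be produced by grouping attempts by difficulty tier and by week of activity. The sketch below assumes weeks are counted from each user's first attempt and uses the illustrative column names from earlier:

```python
import pandas as pd

def weekly_trends(attempts: pd.DataFrame) -> pd.DataFrame:
    """Accuracy and average solve time per (difficulty, week of activity).

    Expects columns: user_id, difficulty, timestamp, correct, solve_time_s
    (illustrative names). Week 1 is each user's first seven days of activity.
    """
    df = attempts.copy()
    first_seen = df.groupby("user_id")["timestamp"].transform("min")
    df["week"] = (df["timestamp"] - first_seen).dt.days // 7 + 1
    return (
        df.groupby(["difficulty", "week"])
          .agg(users=("user_id", "nunique"),
               accuracy=("correct", "mean"),
               avg_time_s=("solve_time_s", "mean"))
          .reset_index()
    )
```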
Difficulty Progression
Of users who started at the beginner level, a subset progressed to harder content:
| Level | Users Reaching | % of Beginner Starters | Success Rate |
|---|---|---|---|
| Beginner | 453 | 100% | 73.0% |
| Casual | 64 | 14.1% | 71.9% |
| Club | 52 | 11.5% | 66.6% |
| Tournament | 29 | 6.4% | 39.2% |
| Master | 13 | 2.9% | 15.4% |
Users who progress to harder levels maintain reasonable success rates, though accuracy naturally declines as difficulty increases. The 14% progression rate from beginner to casual suggests that a meaningful subset of users advance through the curriculum.
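One way to compute progression shares like these is to take the cohort of users whose first attempts were at the beginner level and ask what fraction of them ever recorded an attempt at each tier. A sketch under that assumption, again with illustrative column names:

```python
import pandas as pd

LEVELS = ["Beginner", "Casual", "Club", "Tournament", "Master"]

def progression_rates(attempts: pd.DataFrame) -> pd.Series:
    """Share of beginner-starters who ever attempted each difficulty level.

    Expects columns: user_id, difficulty, timestamp (illustrative names).
    """
    first_level = (attempts.sort_values("timestamp")
                           .groupby("user_id")["difficulty"]
                           .first())
    starters = first_level[first_level == "Beginner"].index
    reached = attempts[attempts["user_id"].isin(starters)]
    users_per_level = reached.groupby("difficulty")["user_id"].nunique()
    return users_per_level.reindex(LEVELS, fill_value=0) / len(starters)
```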
Power User Patterns
The top 10% of users by engagement (102 users) account for 71% of all puzzle attempts.
| Cohort | Users | Attempts | Accuracy | Avg Time | Active Days |
|---|---|---|---|---|---|
| Top 10% | 102 | 85,791 | 85.3% | 30.6s | 13.1 |
| All Others | 915 | 34,722 | 69.1% | 40.2s | 2.0 |
Highly engaged users show:
- 16 percentage points higher accuracy
- 24% faster solve times
- 6.5x more active days
Correlation vs. causation: These differences could reflect that engaged users improve because of their practice, that users who are naturally better at tactics tend to practice more, or some combination. The data cannot distinguish between these explanations.
Power users also show improvement over time:
- First day: 84.1% accuracy
- Later days: 86.9% accuracy (+2.8 pp)
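The cohort comparison above is a straightforward split on volume. The sketch below uses total attempt count as a stand-in for engagement (an assumption; any engagement score would slot in the same way):

```python
import pandas as pd

def engagement_cohorts(attempts: pd.DataFrame) -> pd.DataFrame:
    """Compare the top 10% of users by attempt count against everyone else.

    Expects columns: user_id, correct, solve_time_s, timestamp (illustrative
    names). Accuracy and solve time are pooled over attempts; active days are
    averaged per user within each cohort.
    """
    df = attempts.copy()
    attempts_per_user = df.groupby("user_id")["correct"].size()
    top_users = attempts_per_user[
        attempts_per_user >= attempts_per_user.quantile(0.90)].index
    df["cohort"] = df["user_id"].isin(top_users).map(
        {True: "Top 10%", False: "All others"})

    # Distinct active days per user, then averaged within each cohort.
    active_days = (df.assign(day=df["timestamp"].dt.normalize())
                     .groupby(["cohort", "user_id"])["day"].nunique()
                     .groupby("cohort").mean()
                     .rename("avg_active_days"))

    summary = df.groupby("cohort").agg(
        users=("user_id", "nunique"),
        attempts=("correct", "size"),
        accuracy=("correct", "mean"),
        avg_time_s=("solve_time_s", "mean"),
    )
    return summary.join(active_days)
```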
Methodology and Limitations
This analysis has several important limitations that affect how results should be interpreted:
Observational Design
This is observational data from production usage. There is no control group of users who trained with a different method or did not train at all. We cannot determine whether the observed improvements result from the training method specifically, from any form of practice, from user selection effects, or from other factors.
Survivorship Bias
Users who appear in later cycles or later weeks are those who continued using the platform. These users may differ systematically from those who stopped. Apparent improvements at later cycles may partly reflect that only improving users continue, rather than that all users who continue would improve.
Self-Selected Population
Users who sign up for a chess tactics training platform are not a random sample of chess players. They may be more motivated, have more time for practice, or differ in other ways from the general chess-playing population.
Sample Size Variation
Sample sizes decrease substantially at higher cycles and later weeks. Results for cycle 4 (n=4) or week 3 at tournament level (n=19) should be interpreted with caution.
No External Validation
We do not have data on whether improvements on the platform transfer to over-the-board play, online rated games, or other measures of chess skill.
Descriptive Only
All findings should be interpreted as descriptive patterns within this specific dataset, not as causal claims or predictions about future users.
Interpretation
Within the constraints noted above, the data is consistent with several observations about cycle-based training:
Pattern repetition appears associated with improvement. Users who repeated puzzle sets showed higher accuracy and faster solve times on subsequent cycles. This is consistent with the Woodpecker Method's hypothesis that repeated exposure builds pattern recognition, though it does not prove causation.
Improvement persists over time. Users who remained active showed gains over weeks, even when holding difficulty constant. This suggests the effect is not purely from seeing the exact same puzzles, since users typically work through different puzzles within a difficulty tier.
Engagement correlates with performance. Highly active users show better metrics than casual users. Whether practice causes improvement, natural ability drives both practice and performance, or both factors interact is unclear from this data.
The efficiency metric tracks expected benchmarks. The observed efficiency multipliers (1.41x at cycle 2, 1.65x at cycle 3, 2.36x at cycle 4) are consistent with the training progression targets derived from the Woodpecker Method literature.
Practical Implications
For users designing their own training:
- Multiple cycles may be beneficial. The data suggests that repeating puzzle sets is associated with improvement. Users who only complete one cycle of a set may be leaving gains unrealized.
- Consistency appears to matter. Users active over multiple weeks showed continued improvement. Sporadic practice may be less effective than regular sessions.
- Difficulty progression is achievable. A meaningful percentage of users progress from easier to harder content while maintaining reasonable success rates.
For training design:
- Track efficiency, not just accuracy. The combined metric captures both speed and correctness, which together indicate pattern recognition.
- Expect diminishing returns at extremes. Accuracy above 95% leaves little room for measured improvement. Speed becomes the primary growth vector at high accuracy levels.
- Account for survivorship. Aggregate metrics may overstate typical user improvement if non-improving users leave the platform.
Conclusion
This analysis examined 120,513 puzzle solve attempts from 1,017 users over 7 weeks. Within this dataset, we observed patterns consistent with improvement through cycle-based training: accuracy gains of approximately 10 percentage points by cycle 2, solve time reductions of 21%, and continued improvement over weeks of activity.
These findings describe what occurred in our data. They do not establish that the Woodpecker Method causes improvement, that these results would replicate in other populations, or that observed gains transfer to actual chess performance.
What the data does suggest is that users who engage consistently with cycle-based puzzle training tend to show measurable improvement on the metrics we track. Whether this reflects the training method, user motivation, selection effects, or other factors cannot be determined from observational data alone.
For users considering this approach to training, the data provides some evidence that the method is at least not failing. Users who repeat puzzle sets do tend to perform better on subsequent attempts. Whether this constitutes meaningful skill development is a question the data cannot fully answer.
Get Started with Disco Chess
- Step 1: Create your free account. Sign up in seconds with Google or email.
- Step 2: Pick a puzzle set. Choose from beginner to advanced collections.
- Step 3: Start your first cycle. Solve puzzles and track your progress automatically.