Does the Woodpecker Method Work? Analyzing 120,000 Puzzle Attempts

We analyzed 7 weeks of Woodpecker Method training data to see if cycle-based tactics repetition actually improves chess performance.

TL;DR: Based on 120,513 puzzle attempts from 1,017 users, the Woodpecker Method appears effective: users showed ~10 percentage point accuracy gains and 21% faster solve times by cycle 2. Users who trained consistently over weeks improved even at fixed difficulty levels. These are observational results without a control group.


Key Takeaways

  • Users who repeated puzzle sets showed approximately 10 percentage point accuracy gains and 21% faster solve times by cycle 2.
  • Efficiency multipliers reached 1.65x by cycle 3 and 2.36x by cycle 4, though sample sizes decrease at higher cycles.
  • Within the same difficulty level, users showed 4-15 percentage point accuracy improvements over multiple weeks of activity.
  • Approximately 14% of users who started at beginner level progressed to harder difficulty levels.
  • This is observational data without a control group. Results should be interpreted as descriptive patterns, not causal claims.

Introduction

Does the Woodpecker Method actually work? We analyzed seven weeks of our own data to find out.

This post presents what we observed when 1,017 users solved 120,513 puzzles on our platform between November 2025 and January 2026. We looked at cycle-over-cycle improvement, longitudinal trends, difficulty progression, and what distinguishes highly engaged users from casual ones.

The short version: users who repeated puzzle sets got better at them. Users who stuck around for weeks showed improvement even at fixed difficulty levels. Whether this reflects the training method, user selection, or something else entirely is a question the data alone cannot answer.

What follows is a detailed breakdown of the numbers.


Dataset Overview

| Metric | Value |
|---|---|
| Total Users | 1,017 |
| Total Solve Attempts | 120,513 |
| Date Range | Nov 26, 2025 - Jan 14, 2026 |
| Duration | ~7 weeks |
| Overall Accuracy | 83.6% |
| Average Solve Time | 31.6 seconds |
| Median Solve Time | 15.0 seconds |

Our puzzle sets are organized by difficulty level (Beginner, Casual, Club, Tournament, Master) and by tactical theme (fork, pin, discovered attack, etc.). The Woodpecker Method underpins the training approach: users solve the same set of puzzles repeatedly in cycles, aiming to get faster and more accurate each time.


Woodpecker Method Cycle Results

Users who repeated the same puzzle set multiple times showed improvement in both accuracy and solve time across cycles.

| Cycle | Users | Accuracy | Avg Time | Efficiency Multiplier |
|---|---|---|---|---|
| 1 | 55 | 85.8% | 30.4s | 1.00x (baseline) |
| 2 | 54 | 95.8% | 24.0s | 1.41x |
| 3 | 10 | 97.7% | 21.0s | 1.65x |
| 4 | 4 | 98.7% | 14.8s | 2.36x |

Interpretation: Within this cohort of users who completed multiple cycles, accuracy increased by approximately 10 percentage points between cycles 1 and 2, while average solve time decreased by 21%. The efficiency multiplier, which combines accuracy and speed into a single metric, showed continued gains through cycle 4.
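The post does not spell out how the efficiency multiplier is computed. One definition that reproduces the table's values is the accuracy ratio multiplied by the speed ratio, each relative to cycle 1. A short Python sketch (the formula is our inference, not a documented specification):

```python
def efficiency_multiplier(base_acc, base_time, acc, time):
    """Efficiency relative to the baseline cycle: accuracy ratio times
    speed ratio. Assumed definition; it reproduces the reported values."""
    return (acc / base_acc) * (base_time / time)

# Cycle figures from the table above: (accuracy %, avg time s)
base_acc, base_time = 85.8, 30.4  # cycle 1 baseline
cycles = {2: (95.8, 24.0), 3: (97.7, 21.0), 4: (98.7, 14.8)}

multipliers = {c: round(efficiency_multiplier(base_acc, base_time, a, t), 2)
               for c, (a, t) in cycles.items()}
# multipliers == {2: 1.41, 3: 1.65, 4: 2.36}, matching the table
```

Under this definition, gains in either accuracy or speed raise the multiplier, which is why it keeps climbing even once accuracy approaches its ceiling.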

Important caveat: Sample sizes decrease substantially at higher cycles. Only 4 users in this dataset completed cycle 4. The apparent gains at cycles 3 and 4 may reflect survivorship bias (users who continue are likely those who are improving) rather than a universal effect.

The efficiency trajectory is consistent with the Woodpecker Method's expected progression (targeting 1.5x by cycles 3-4, 2.0x by cycles 5-6), though we cannot attribute this to the method specifically without a controlled comparison.


Individual Improvement Stories

Examining individual users provides texture beyond aggregate statistics. The following cases illustrate observed improvement patterns. User identifiers are anonymized.

Case 1: Tournament-Level Player

One user working on tournament-level puzzles showed substantial improvement over 2 cycles:

| Metric | Cycle 1 | Cycle 2 | Change |
|---|---|---|---|
| Accuracy | 58.6% | 94.8% | +36.2 pp |
| Avg Time | 39.9s | 25.4s | -36% |
| Attempts | ~375 | ~375 | - |

This user completed approximately 750 total attempts. The magnitude of improvement is notable, though individual results vary considerably.

Case 2: Theme Mastery Across Motifs

Another user systematically worked through multiple tactical themes at the club level:

| Theme | Cycle 1 Accuracy | Later Accuracy | Time Change |
|---|---|---|---|
| Hanging Piece | 65.0% | 98.0% | -25% |
| Deflection | 67.0% | 98.0% | -11% |
| Sacrifice | 68.0% | 96.0% | -64% |
| Pin | 74.0% | 95.5% | -50% |

Across these themes, this user showed an average accuracy improvement of approximately 28 percentage points with consistent speed gains.
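The ~28 percentage point figure follows directly from the table; a quick check in Python using the listed accuracies:

```python
# Per-theme accuracy gains (percentage points), from the table above
gains = {
    "Hanging Piece": 98.0 - 65.0,   # +33.0 pp
    "Deflection":    98.0 - 67.0,   # +31.0 pp
    "Sacrifice":     96.0 - 68.0,   # +28.0 pp
    "Pin":           95.5 - 74.0,   # +21.5 pp
}
avg_gain = sum(gains.values()) / len(gains)  # 28.375 pp, i.e. ~28
```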

Case 3: Multi-Level Progression

A third user progressed through multiple difficulty levels while maintaining high accuracy:

| Level | Accuracy | Puzzles Completed |
|---|---|---|
| Beginner | 100% | 18 |
| Casual | 94.0% | 1,113 |
| Club | 86.6% | 761 |

This user completed 1,892 total puzzles. The accuracy decline at higher difficulty levels is expected, as harder puzzles are designed to be more challenging.


Long-Term Improvement Trends

Analyzing users who remained active across multiple weeks, controlling for difficulty level, reveals improvement trends over time.
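As a sketch of how such a week-by-week breakdown can be produced, here is a minimal pure-Python version. The record fields (`user`, `week`, `level`, `correct`, `seconds`) and the sample data are illustrative, not our production schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical attempt records: (user, week, level, correct, seconds)
attempts = [
    ("u1", 1, "club", True, 28.0),
    ("u2", 1, "club", False, 40.0),
    ("u2", 1, "club", True, 33.0),
    ("u1", 2, "club", True, 25.5),
]

def weekly_stats(attempts, level):
    """Per-week user count, accuracy, and mean solve time at one level."""
    by_week = defaultdict(list)
    for user, week, lvl, correct, secs in attempts:
        if lvl == level:
            by_week[week].append((user, correct, secs))
    return {
        week: {
            "users": len({u for u, _, _ in rows}),
            "accuracy": mean(c for _, c, _ in rows),
            "avg_time": mean(s for _, _, s in rows),
        }
        for week, rows in sorted(by_week.items())
    }
```

Grouping by calendar week while filtering to one difficulty level is what "controlling for difficulty" means here: each weekly row only compares puzzles of comparable hardness.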

Beginner Level

| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 516 | 90.0% | 21.2s |
| 2 | 47 | 93.7% | 16.8s |
| 3 | 25 | 94.5% | 19.6s |

Users active at the beginner level showed an accuracy improvement of approximately 4.5 percentage points over 3 weeks.

Club Level

| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 260 | 84.6% | 31.9s |
| 2 | 60 | 86.1% | 34.6s |
| 3 | 40 | 88.8% | 37.0s |

Club-level users showed approximately 4 percentage points of accuracy improvement over 3 weeks. The slight increase in solve time may reflect users attempting harder puzzles within the club tier.

Tournament Level

| Week | Users | Accuracy | Avg Time |
|---|---|---|---|
| 1 | 125 | 56.7% | 59.3s |
| 2 | 29 | 57.6% | 56.2s |
| 3 | 19 | 72.0% | 48.9s |

Tournament-level users showed the largest observed improvement: approximately 15 percentage points in accuracy and 17% faster solve times by week 3.

Survivorship note: User counts decrease substantially from week 1 to week 3 at all levels. Users who remain active may differ systematically from those who stop. These improvements may partially reflect selection effects rather than universal training benefits.


Difficulty Progression

Of users who started at the beginner level, a subset progressed to harder content:

| Level | Users Reaching | % of Beginner Starters | Success Rate |
|---|---|---|---|
| Beginner | 453 | 100% | 73.0% |
| Casual | 64 | 14.1% | 71.9% |
| Club | 52 | 11.5% | 66.6% |
| Tournament | 29 | 6.4% | 39.2% |
| Master | 13 | 2.9% | 15.4% |

Users who progress to harder levels maintain reasonable success rates, though accuracy naturally declines as difficulty increases. The 14% progression rate from beginner to casual suggests that a meaningful subset of users advance through the curriculum.
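The percentage column is just each level's user count divided by the 453 beginner starters; a quick Python check using the table's figures:

```python
# Users (of those who started at Beginner) reaching each level, from the table
reached = {"Beginner": 453, "Casual": 64, "Club": 52,
           "Tournament": 29, "Master": 13}

def progression_rates(reached, start="Beginner"):
    """Share of starting users who reached each level, in percent."""
    base = reached[start]
    return {level: round(100 * n / base, 1) for level, n in reached.items()}

rates = progression_rates(reached)
# rates == {"Beginner": 100.0, "Casual": 14.1, "Club": 11.5,
#           "Tournament": 6.4, "Master": 2.9}
```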


Power User Patterns

The top 10% of users by engagement (102 users) account for 71% of all puzzle attempts.

| Cohort | Users | Attempts | Accuracy | Avg Time | Active Days |
|---|---|---|---|---|---|
| Top 10% | 102 | 85,791 | 85.3% | 30.6s | 13.1 |
| All Others | 915 | 34,722 | 69.1% | 40.2s | 2.0 |

Highly engaged users show:

  • 16 percentage points higher accuracy
  • 24% faster solve times
  • 6.5x more active days
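The three bullet figures follow directly from the cohort table; a quick Python check with the table's values:

```python
# Cohort figures copied from the table above
top  = {"accuracy": 85.3, "avg_time": 30.6, "active_days": 13.1}
rest = {"accuracy": 69.1, "avg_time": 40.2, "active_days": 2.0}

acc_gap_pp = top["accuracy"] - rest["accuracy"]        # ~16 pp higher accuracy
speed_gain = 1 - top["avg_time"] / rest["avg_time"]    # ~24% faster
days_ratio = top["active_days"] / rest["active_days"]  # ~6.5x more active days
```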

Correlation vs. causation: These differences could reflect that engaged users improve because of their practice, that users who are naturally better at tactics tend to practice more, or some combination. The data cannot distinguish between these explanations.

Power users also show improvement over time:

  • First day: 84.1% accuracy
  • Later days: 86.9% accuracy (+2.8 pp)

Methodology and Limitations

This analysis has several important limitations that affect how results should be interpreted:

Observational Design

This is observational data from production usage. There is no control group of users who trained with a different method or did not train at all. We cannot determine whether the observed improvements result from the training method specifically, from any form of practice, from user selection effects, or from other factors.

Survivorship Bias

Users who appear in later cycles or later weeks are those who continued using the platform. These users may differ systematically from those who stopped. Apparent improvements at later cycles may partly reflect that only improving users continue, rather than that all users who continue would improve.

Self-Selected Population

Users who sign up for a chess tactics training platform are not a random sample of chess players. They may be more motivated, have more time for practice, or differ in other ways from the general chess-playing population.

Sample Size Variation

Sample sizes decrease substantially at higher cycles and later weeks. Results for cycle 4 (n=4) or week 3 at tournament level (n=19) should be interpreted with caution.

No External Validation

We do not have data on whether improvements on the platform transfer to over-the-board play, online rated games, or other measures of chess skill.

Descriptive Only

All findings should be interpreted as descriptive patterns within this specific dataset, not as causal claims or predictions about future users.


Interpretation

Within the constraints noted above, the data is consistent with several observations about cycle-based training:

Pattern repetition appears associated with improvement. Users who repeated puzzle sets showed higher accuracy and faster solve times on subsequent cycles. This is consistent with the Woodpecker Method's hypothesis that repeated exposure builds pattern recognition, though it does not prove causation.

Improvement persists over time. Users who remained active showed gains over weeks, even when holding difficulty constant. This suggests the effect is not purely from seeing the exact same puzzles, since users typically work through different puzzles within a difficulty tier.

Engagement correlates with performance. Highly active users show better metrics than casual users. Whether practice causes improvement, natural ability drives both practice and performance, or both factors interact is unclear from this data.

The efficiency metric tracks expected benchmarks. The observed efficiency multipliers (1.41x at cycle 2, 1.65x at cycle 3, 2.36x at cycle 4) are consistent with the training progression targets derived from the Woodpecker Method literature.


Practical Implications

For users designing their own training:

  • Multiple cycles may be beneficial. The data suggests that repeating puzzle sets is associated with improvement. Users who only complete one cycle of a set may be leaving gains unrealized.

  • Consistency appears to matter. Users active over multiple weeks showed continued improvement. Sporadic practice may be less effective than regular sessions.

  • Difficulty progression is achievable. A meaningful percentage of users progress from easier to harder content while maintaining reasonable success rates.

For training design:

  • Track efficiency, not just accuracy. The combined metric captures both speed and correctness, which together indicate pattern recognition.

  • Expect diminishing returns at extremes. Accuracy above 95% leaves little room for measured improvement. Speed becomes the primary growth vector at high accuracy levels.

  • Account for survivorship. Aggregate metrics may overstate typical user improvement if non-improving users leave the platform.


Conclusion

This analysis examined 120,513 puzzle solve attempts from 1,017 users over 7 weeks. Within this dataset, we observed patterns consistent with improvement through cycle-based training: accuracy gains of approximately 10 percentage points by cycle 2, solve time reductions of 21%, and continued improvement over weeks of activity.

These findings describe what occurred in our data. They do not establish that the Woodpecker Method causes improvement, that these results would replicate in other populations, or that observed gains transfer to actual chess performance.

What the data does suggest is that users who engage consistently with cycle-based puzzle training tend to show measurable improvement on the metrics we track. Whether this reflects the training method, user motivation, selection effects, or other factors cannot be determined from observational data alone.

For users considering this approach to training, the data provides some evidence that the method is at least not failing. Users who repeat puzzle sets do tend to perform better on subsequent attempts. Whether this constitutes meaningful skill development is a question the data cannot fully answer.

Get Started with Disco Chess

  1. Create your free account: sign up in seconds with Google or email.
  2. Pick a puzzle set: choose from beginner to advanced collections.
  3. Start your first cycle: solve puzzles and track your progress automatically.

Frequently Asked Questions

Does the Woodpecker Method work?

Based on our data, yes. Users who repeated puzzle sets showed approximately 10 percentage point accuracy gains and 21% faster solve times by cycle 2. However, this is observational data without a control group, so we cannot definitively prove causation.

What is the Woodpecker Method?

The Woodpecker Method is a chess training technique where you solve the same set of puzzles multiple times in cycles, aiming to get faster and more accurate with each repetition. It was popularized by GM Axel Smith and GM Hans Tikkanen.

Do these results apply to all chess players?

These results describe patterns within our specific user population during a specific time period. Users who sign up for a tactics training platform may differ from the general chess-playing population in motivation, prior experience, or other factors.

Why do fewer users appear in later cycles?

Not all users complete multiple cycles. Some may stop using the platform, switch to different puzzle sets, or simply not have had enough time to reach later cycles during our observation period. This survivorship pattern is common in longitudinal data.

What is the efficiency multiplier?

The efficiency multiplier combines accuracy and speed into a single metric. A 1.5x multiplier means a user is solving puzzles roughly 1.5 times as efficiently as their baseline cycle. Higher is better, but individual variation exists.

Ready to try the Woodpecker Method?

Start drilling tactics with Disco Chess. Automatic tracking, progress analytics, and a fun, gamified experience. Free to start.

Join our community