Update (March 19, 2026): A second cumulative dataset release is now available in the separate cunningham-chain-data repo as v2026-03-19-snapshot. It contains 1,551,489 first-kind roots (CC10-CC18), including 2 CC18s. The data/ and analysis/ folders in this code repo remain the original published snapshot and are not regenerated for this release. This repo also now includes an experimental CPU variant based on John Armitage's bit-vector L2 filter plus additional optimizations.

Update (March 14–17, 2026): Just days after publishing this repository, continued runs found two first-kind CC18s, starting with 106103983461039119546815109 and 214325014495971624590189129. A later prior-art review showed that first-kind CC18 had already been documented in John Armitage's 2021 Oxford thesis via the smallest known example, so these are not the first known CC18s. They do, however, remain the largest first-kind CC18 entries listed on the current public Cunningham tables. The campaign has now finished because my available GPU compute time ran out.

The text below is the original public release README from March 11, kept largely intact as the baseline snapshot. The March 11 dataset and release counts are unchanged here and do not include the later CC18 findings.
A visualization-driven project that grew into a high-throughput GPU/CPU pipeline for searching long first-kind Cunningham chains. It is more about HPC optimization and AI-assisted iteration than about the math itself.
As of March 2026, this project has contributed:
The campaign ultimately did reach its CC18 target, but not the more ambitious CC19 goal; it remained compute-limited throughout.
For future searches, this code is more efficient at higher target depths (for example CC19-oriented runs) than in the CC18-oriented configuration used here: stricter filtering reduces survivor pressure on the downstream confirmation path.
The campaign dataset lives in a separate repo: cunningham-chain-data. Summary statistics and analysis CSVs are in data/ here; the full raw dataset (~30 MB, 929K CC10+ roots, including 44 CC16s and one CC17) is available as a release download.
| What | Where |
|---|---|
| GPU filter engine (CUDA) | src/cuda/ |
| CPU search engine (GMP) | src/cpu/ |
| GP/PARI library + MCP server | gp/ |
| Interactive visualizations | visualizations/ |
| Analysis notes | analysis/notes/ |
| Analysis scripts | analysis/scripts/ |
| Campaign dataset | cunningham-chain-data (separate repo) |
| Analysis CSVs | data/ |
| Failed experiments (17 approaches) | experiments/failed/ |
| Repo map | docs/REPO_MAP.md |
| Search pipeline overview | docs/SEARCH_PIPELINE.md |
High-level staged pipeline:
src/cuda/cc18_filter_cuda_CpC_v15.cu has reached roughly 96-98B candidates/sec on an RTX 5090 in some CC19-style runs, but it still needs more validation before replacing the public baseline. The CPU baseline remains cc_gmp_v33_03.c. See docs/SEARCH_PIPELINE.md for details.
Standalone HTML/JS tools that shaped the search design. Try them live on GitHub Pages:
# GPU engine (requires CUDA toolkit + GMP)
# Adjust -arch=sm_XX for your GPU / CUDA toolchain
# Example: Ada may use sm_89; some setups may still require sm_86
nvcc -O3 -arch=sm_89 src/cuda/cc18_filter_cuda_CpC_v13.cu -o cc18_filter -lgmp -lpthread
# CPU engine (requires GMP)
gcc -O3 -march=native -flto src/cpu/cc_gmp_v33_03.c -o cc_search -lgmp -lpthread -lm
# Run tests
./cc_search --test
gp -q
\r gp/cc_lib_v10.gp
Self-tests (37) run automatically on load.
See gp/HOWTO_cc_lib_v10.md for usage.
Nenad Micic, Belgium — LinkedIn
See LICENSE.