Top 10 Claude Skills for Researchers in 2026: A Data-Driven Ranking

Introduction

Author: Daniel
Slug: top-claude-skills-researchers-2026
Meta Description: We ranked the top Claude Skills for researchers and analysts by GitHub stars, maintenance, and real-world fit. Q2 2026 data from 24 audited repos.
Primary Keyword: claude skills for researchers
Secondary Keywords: best claude skills for research, ai research skills, claude analyst skills, claude scientific skills
Reading Time: 11 minutes
Cluster Role: Supporting (parent: claude skills)
Cover Image: ![Top Claude Skills for


Research workflows were not the obvious early use case for Claude Skills. Most early adopters were software engineers automating their pipelines. By Q2 2026, that has shifted: scientific-skill bundles routinely cross five-figure star counts, paper-finder and citation tools are quietly embedded in literature reviews, and competitive-intelligence skills are showing up in market-research stacks at consultancies and analyst firms.

For researchers and analysts, the question is which skills materially shorten the gap between "I have a question" and "I have a defensible answer."

This article ranks the ten most useful Claude Skills for research and analysis work, scored against public GitHub adoption data pulled on April 30, 2026.

📌 Key Takeaways
  1. Specialized research skills outperform kitchen-sink bundles for research workflows; sharpness matters more than breadth.
  2. Document-format skills (pdf, xlsx, docx) are dramatically underutilized by researchers and offer some of the highest hours-saved-per-skill in the ecosystem.
  3. Search with citations is a recently solved problem and worth standardizing on.
  4. For protected-source data gathering, pair the available skills with a stealth-browser backend.
  5. Watch K-Dense-AI/claude-scientific-skills for ongoing growth — it is the de facto standard for quantitative research in Claude.


How We Ranked These Skills

Three weighted signals:

1. GitHub stars (40%) — adoption proxy.
2. Maintenance activity (30%) — recent commits, release cadence.
3. Research-specific fit (30%) — directness of use for literature review, data gathering, analysis, or synthesis versus general productivity.

We excluded skills without public GitHub presence. Research output stands on auditable sources, and the same standard applies to the tools producing it.

The Top 10

1. K-Dense-AI/claude-scientific-skills — 19,711 ⭐

A comprehensive collection of ready-to-use scientific skills covering specialized libraries (NumPy, SciPy, scikit-learn) and scientific databases.

  • Repository: K-Dense-AI/claude-scientific-skills
  • Best for: Quantitative researchers and data scientists running analyses inside Claude rather than between Claude and a separate notebook environment.
  • Why it ranks here: It is the most-adopted research-specific skill bundle on GitHub by a wide margin. Maintenance is steady. Coverage spans most quantitative use cases.
  • Caveat: Bias toward the Python scientific stack. R-first or Julia-first researchers will find less direct value.

2. eugeniughelbur/obsidian-second-brain — 438 ⭐

A Claude Code skill that turns an Obsidian vault into an "AI-first second brain" — 31 commands for research-oriented note management, vault-first retrieval, and scheduled agents that maintain the knowledge graph.

  • Repository: eugeniughelbur/obsidian-second-brain
  • Best for: Researchers who already operate in Obsidian and want Claude integrated as a first-class collaborator on the vault.
  • Why it ranks here: Personal knowledge management is one of the highest-leverage research workflows, and this skill is the most-adopted PKM-aware skill in the ecosystem.
  • Caveat: Obsidian-specific. Tinderbox, Notion, or Logseq users will need a different tool.

3. bchao1/paper-finder — 207 ⭐

A Claude skill for finding ML research papers — single-purpose, sharp.

  • Repository: bchao1/paper-finder
  • Best for: Anyone running ML literature reviews. The narrow scope is the feature.
  • Why it ranks here: It does one thing well, and that thing is a frequent unmet need in research workflows.
  • Caveat: ML-focused. Adjacent disciplines (HCI, systems, theory) get less direct coverage.

4. liangdabiao/amazon-sorftime-research-MCP-skill — 215 ⭐

An Amazon-focused research toolkit covering full-dimension listing analysis, category analysis, keyword analysis, review sentiment analysis, and market research. Built on Sorftime MCP.

  • Repository: liangdabiao/amazon-sorftime-research-MCP-skill
  • Best for: Market researchers and ecommerce analysts running competitive intelligence on Amazon.
  • Why it ranks here: It is the most adopted market-research skill aimed at a specific commercial vertical, and Amazon is the most-researched commercial surface in the world.
  • Caveat: Tightly coupled to Sorftime as data source.

5. PleasePrompto/google-ai-mode-skill — 149 ⭐

A Claude Code skill for Google AI Mode search with citations and a persistent browser profile.

  • Repository: PleasePrompto/google-ai-mode-skill
  • Best for: Researchers who do their first-pass exploration through Google search and want structured output with citations rather than a chat thread.
  • Why it ranks here: First-pass search is the most common entry point for research work, and citation-aware output is what separates "I asked an AI" from "I have a defensible source list."
  • Caveat: Depends on Google AI Mode being stable and available.

6. anthropics/skills — pdf — 125,856 ⭐

Anthropic's official PDF skill — extract text and tables, create PDFs, merge/split documents, handle forms.

  • Repository: anthropics/skills — pdf
  • Best for: Any researcher who reads PDFs (so: every researcher). The skill turns "extract this paper's data tables into a structured format" into a single instruction.
  • Why it ranks here: PDF handling is unglamorous infrastructure that researchers spend disproportionate time on. The official skill solves the common cases out of the box.
  • Caveat: PDF parsing is hard. Edge cases (rotated pages, mathematical typesetting, image-only scans) still require manual intervention.

7. anthropics/skills — xlsx — 125,856 ⭐

Anthropic's official Excel skill — create, edit, and analyze spreadsheets with formula support, formatting, and charting.

  • Repository: anthropics/skills — xlsx
  • Best for: Quantitative researchers who deliver in Excel, which is most analysts in industry.
  • Why it ranks here: Spreadsheet-native output is what business research actually ships, and the official skill handles it well.
  • Caveat: Pivot tables and macros remain partial; complex VBA workflows still need a human.

8. easonc13/abstract-searcher — low ⭐ count

A skill that adds abstracts to .bib file entries by searching academic databases (arXiv, Semantic Scholar, CrossRef) with browser fallback.

  • Best for: Academic researchers maintaining BibTeX libraries who routinely import citations without abstracts.
  • Why it ranks here: Citation hygiene is a high-frequency, low-glamour task that AI handles unusually well.
  • Caveat: Confined to BibTeX workflows. Mendeley or Zotero users will need a wrapper.
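The skill's own implementation isn't shown here; as a sketch of its final step, this is one way to splice a fetched abstract into a BibTeX entry using only the standard library. The entry, the abstract text, and the `add_abstract` helper are all illustrative, not part of the skill.

```python
import re

def add_abstract(bib_entry: str, abstract: str) -> str:
    """Insert an abstract field before the closing brace of a BibTeX entry.

    Assumes the entry ends with its closing '}' and skips entries
    that already carry an abstract field.
    """
    if re.search(r"^\s*abstract\s*=", bib_entry, re.MULTILINE):
        return bib_entry  # already has one; leave it alone
    field = f"  abstract = {{{abstract}}},\n"
    # Splice the new field in just before the final closing brace.
    head, _, _ = bib_entry.rpartition("}")
    return head.rstrip().rstrip(",") + ",\n" + field + "}\n"

entry = """@article{vaswani2017attention,
  title = {Attention Is All You Need},
  year = {2017}
}"""

# In the real skill this text would come from arXiv, Semantic Scholar,
# or CrossRef, with a browser fallback.
fetched = "The dominant sequence transduction models..."
print(add_abstract(entry, fetched))
```

The retrieval half (querying the databases) is what the skill automates; the splice above is the mechanical part.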

9. anthropics/skills — docx — 125,856 ⭐

Anthropic's official Word skill — create, edit, analyze documents with tracked-changes, comments, and formatting preservation.

  • Repository: anthropics/skills — docx
  • Best for: Researchers delivering long-form reports, particularly in industry or consulting where Word is the lingua franca.
  • Why it ranks here: Long-form deliverable production is where Claude Skills earn their keep for industry researchers.
  • Caveat: Tracked-changes handling occasionally drops formatting in deeply nested edits. Spot-check before sending to a stakeholder.

10. anthropics/skills — webapp-testing — 125,856 ⭐

The official Playwright skill, included here because researchers increasingly use it to drive structured data extraction from public web sources.

  • Repository: anthropics/skills — webapp-testing
  • Best for: Researchers gathering structured data from non-API public sources (regulatory filings, government data portals, civic platforms).
  • Why it ranks here: It's the most-adopted skill that lets a researcher say "extract every row of this table, monthly" without writing a scraper.
  • Caveat: Vanilla Playwright. Public-facing data portals usually don't fight back; commercial sites with anti-bot defenses do.
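The skill drives a real browser through Playwright; the extraction step it ends with ("every row of this table") reduces, for a simple page, to something like this stdlib sketch over HTML the browser has already rendered. The sample table is invented.

```python
from html.parser import HTMLParser

class TableRows(HTMLParser):
    """Collect every <tr> of an HTML table as a list of cell strings."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

html = """<table>
  <tr><th>Filing</th><th>Date</th></tr>
  <tr><td>10-K</td><td>2026-02-12</td></tr>
  <tr><td>10-Q</td><td>2026-04-30</td></tr>
</table>"""

parser = TableRows()
parser.feed(html)
print(parser.rows)
```

The skill's value is pairing this kind of extraction with a browser that handles rendering and navigation, so the researcher only states what to pull and how often.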

At-a-Glance Comparison

| Rank | Skill | Stars | Primary Use Case | Output Type | Maintenance |
|------|-------|-------|------------------|-------------|-------------|
| 1 | K-Dense-AI/claude-scientific-skills | 19,711 | Quantitative analysis | Code + data | Active |
| 2 | eugeniughelbur/obsidian-second-brain | 438 | PKM + retrieval | Notes | Active |
| 3 | bchao1/paper-finder | 207 | ML literature search | Citations | Active |
| 4 | liangdabiao/amazon-sorftime-research-MCP-skill | 215 | Market research | Reports | Active |
| 5 | PleasePrompto/google-ai-mode-skill | 149 | Search w/ citations | Cited answers | Active |
| 6 | pdf (official) | 125,856 | PDF extract + create | Documents | Active |
| 7 | xlsx (official) | 125,856 | Spreadsheet analysis | Excel | Active |
| 8 | easonc13/abstract-searcher | low | BibTeX hygiene | BibTeX entries | Active |
| 9 | docx (official) | 125,856 | Long-form reports | Word | Active |
| 10 | webapp-testing (official) | 125,856 | Web data extraction | Structured data | Active |

Three observations.

First, the top-three by relevance for research workflows skew toward narrow, specialized skills (scientific-skills, paper-finder, obsidian-second-brain) rather than the kitchen-sink bundles that dominate other categories. Research benefits from sharp tools.

Second, the document-format skills (pdf, xlsx, docx) are dramatically underweighted by stars relative to how much research time they save. Most researchers don't realize they exist.

Third, web-data extraction sits at the edge of the research toolkit. The skills that exist work against unprotected public data; the moment a researcher needs to gather data from a commercial platform with anti-bot defenses, the skill layer falls short.

BrowserAct

Stop getting blocked. Start getting data.

  • ✓ Stealth browser fingerprints — bypass Cloudflare, DataDome, PerimeterX
  • ✓ Automatic CAPTCHA solving — reCAPTCHA, hCaptcha, Turnstile
  • ✓ Residential proxies from 195+ countries
  • ✓ 5,000+ pre-built Skills on ClawHub

The Gap: Gathering Data From Protected Sources

Researchers regularly run into the same wall as scraping teams. A market researcher tracking competitor pricing, a media analyst monitoring social platforms, a policy researcher pulling data from a paywalled commercial source — none of the top-10 skills covers them.

The official webapp-testing skill works against open-data portals; the moment you point it at a site that defends itself, you get blocked. Researchers handle this today by either accepting smaller datasets or by standing up custom scraping infrastructure outside the Claude Skills layer.

BrowserAct is one such infrastructure layer — purpose-built for AI agents that need to gather data from protected sources, with anti-fingerprinting, residential proxies, and automatic CAPTCHA bypass. For research workflows that involve commercial-platform data (Amazon Product API skill) or social-platform monitoring (Reddit Posts & Comments Scraper template), pairing an orchestration skill with this kind of backend is the typical pattern. It currently exposes a REST API and templates rather than a Claude Skill, which is why it's in the gap analysis.

Who Should Install What

For a quantitative research analyst:

1. K-Dense-AI/claude-scientific-skills as the analysis core.
2. xlsx (official) for delivery.
3. pdf (official) for source ingestion.
4. PleasePrompto/google-ai-mode-skill for first-pass search.

For an academic researcher:

1. bchao1/paper-finder + easonc13/abstract-searcher for literature.
2. eugeniughelbur/obsidian-second-brain for synthesis.
3. pdf (official) for paper ingestion.
4. docx (official) for long-form drafting.

For a market or competitive-intelligence researcher:

1. liangdabiao/amazon-sorftime-research-MCP-skill for Amazon-specific work.
2. PleasePrompto/google-ai-mode-skill for first-pass exploration.
3. xlsx (official) + docx (official) for delivery.
4. A stealth backend for protected-source data gathering.

Conclusion

Research is the category where Claude Skills compress the most hours per researcher per month, and where the underlying skill ecosystem is more mature than its star counts suggest. The official document-format skills alone justify standardizing on Claude for any team that produces written deliverables. The remaining gap — gathering data from protected commercial or social sources — is the same gap every other category hits, and the same fix applies.

If your research work hits the wall at protected sources, BrowserAct is the stealth-browser layer purpose-built for AI-driven data gathering at exactly that step.



Automate Any Website with BrowserAct Skills

Pre-built automation patterns for the sites your agent needs most. Install in one click.

🛒
Amazon Product API
Search products, track prices, extract reviews.
📍
Google Maps Scraper
Extract business listings, reviews, contact info.
💬
Reddit Analysis
Monitor mentions, track sentiment, extract posts.
📺
YouTube Data
Channel stats, video metadata, comments at scale.
Browse 5,000+ Skills on ClawHub →


Frequently Asked Questions

Can Claude Skills replace a research assistant?

They compress routine work — citation hygiene, source ingestion, draft delivery — but human judgment on framing and synthesis still matters.

Will paper-finder cover non-ML disciplines?

Not directly. The skill is ML-focused. Adjacent disciplines need different sources or a custom skill.

How do I cite output produced through Claude Skills?

Cite the underlying source the skill retrieved. The skill is a tool, not a source.

Why isn't BrowserAct on this list?

BrowserAct ships as a REST API and templates today, not as a Claude Skill. It appears in the gap analysis as the stealth layer researchers pair with the skills above for protected-source gathering.

Which skill saves the most time first?

For most researchers, the official pdf skill — source ingestion is the highest-frequency low-glamour task in research.

Are the document-format skills safe to use on confidential research?

They run on the Claude API. Treat them with the same data-handling care as any LLM-mediated workflow; check your enterprise terms.

Can these skills run unattended on a schedule?

Yes via the Claude API plus a scheduler. Search and synthesis skills work especially well on weekly cadence.
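A minimal sketch of the scheduling half, using the stdlib `sched` module. `run_weekly_review` is a hypothetical placeholder for the actual Claude API call; real deployments more often use cron or systemd timers invoking a script on a weekly interval.

```python
import sched
import time

def run_weekly_review() -> str:
    # Placeholder: a real job would call the Claude API with the
    # relevant search/synthesis skill enabled and persist the result.
    return "weekly literature review complete"

scheduler = sched.scheduler(time.monotonic, time.sleep)

# Delay of 0 for demonstration; a production job would use a real
# interval, or delegate scheduling to cron entirely.
results = []
scheduler.enter(0, priority=1, action=lambda: results.append(run_weekly_review()))
scheduler.run()
print(results[0])
```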

Stop writing automation & scrapers

Install the CLI. Run your first Skill in 30 seconds. Scale when you're ready.

Start free
free · no credit card