Daily News Curator

Curates personalized news from RSS feeds, newsletters, and web sources based on your interests and industry

Skill · Cliptics · daily digests · v1.0.0 · MIT

Curates a personalized news briefing from RSS feeds, email newsletters, Hacker News, Reddit, and web sources tailored to your specific interests and industry. Unlike generic news apps, this skill learns your preferences, filters noise, and presents a focused reading list with AI-generated summaries so you stay informed in 5 minutes instead of 50.

Supported Platforms & Integrations

| Platform | Setup Method | Auth Type | Notes |
|---|---|---|---|
| RSS/Atom Feeds | Direct URL subscription | None | Any valid RSS or Atom feed URL |
| Hacker News | HN Algolia API | None (public) | Top stories, Show HN, Ask HN filterable |
| Reddit | Reddit JSON API | Optional OAuth | Subreddit filtering, score thresholds |
| Twitter/X Lists | X API v2 | OAuth 2.0 | Curate from specific lists or accounts |

When to Use This Skill

  • Use this when you follow 10+ news sources and want one consolidated feed
  • Use this when you want AI-summarized articles instead of reading full pieces
  • Use this when you need industry-specific news filtered from general noise
  • Consider alternatives when you need real-time breaking news alerts (this is a digest)

Quick Start

```yaml
# Minimal configuration - news-curator.yml
skill: daily-news-curator
interests:
  primary: ["artificial-intelligence", "web-development", "startups"]
  secondary: ["cybersecurity", "climate-tech"]
  excluded: ["crypto", "celebrity-news"]
sources:
  rss:
    - https://techcrunch.com/feed/
    - https://feeds.arstechnica.com/arstechnica/technology-lab
    - https://www.theverge.com/rss/index.xml
  hackernews:
    enabled: true
    min_score: 100
  reddit:
```

```shell
claude /daily-news-curator
```

Expected Output

NEWS BRIEFING - March 15, 2026
Curated from 23 sources | 142 articles scanned | 12 selected

TOP STORIES (3):

1. "Anthropic Releases Claude Agent SDK for Enterprise" - TechCrunch
   Summary: New SDK allows enterprises to build autonomous agents with
   built-in safety constraints and audit logging. Supports Python and
   TypeScript. Available today on npm and PyPI.
   Relevance: AI + web development | Read time: 6 min
   Link: https://techcrunch.com/2026/03/15/...

2. "React 20 Introduces Server-First Architecture" - Verge
   Summary: Major React update rethinks component model with server
...

Advanced Configuration

Platform-Specific Setup

RSS Feeds with Custom Parsing

```yaml
sources:
  rss:
    - url: https://techcrunch.com/feed/
      category: "tech"
      max_age_hours: 24
    - url: https://feeds.bloomberg.com/markets/news.rss
      category: "finance"
      keywords_filter: ["tech", "AI", "startup"]
```
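A minimal sketch of how a `keywords_filter` like the one above could be applied while parsing a feed, using only the Python standard library. The fetch step is omitted (the feed XML is passed in as a string), and the helper name is hypothetical; matching here is naive substring matching, not the skill's actual implementation:

```python
import xml.etree.ElementTree as ET

def filter_feed_items(feed_xml: str, keywords: list[str]) -> list[dict]:
    """Return RSS <item> entries whose title or description
    mentions at least one keyword (case-insensitive substring match)."""
    root = ET.fromstring(feed_xml)
    wanted = [k.lower() for k in keywords]
    selected = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        text = f"{title} {desc}".lower()
        if any(k in text for k in wanted):
            selected.append({"title": title, "description": desc})
    return selected

rss = """<rss version="2.0"><channel>
  <item><title>New AI startup raises round</title><description>funding</description></item>
  <item><title>Weekend recipe roundup</title><description>cooking</description></item>
</channel></rss>"""

matched = filter_feed_items(rss, ["tech", "AI", "startup"])
```

Only the first item survives the filter; the second mentions none of the configured keywords.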

Full Options Reference

| Parameter | Type | Default | Description |
|---|---|---|---|
| interests.primary | array | [] | Main topics to prioritize (weighted 3x) |
| interests.secondary | array | [] | Supporting topics (weighted 1x) |
| interests.excluded | array | [] | Topics to always filter out |
| max_articles | number | 15 | Maximum articles in final digest |
| summary_length | string | "medium" | Summary detail: short, medium, long |
| sources.rss | array | [] | RSS feed URLs or objects with config |

Core Concepts

| Concept | Purpose | How It Works |
|---|---|---|
| Interest Scoring | Ranks articles by personal relevance | Matches article content against your interest keywords with TF-IDF weighting |
| Cross-Source Dedup | Prevents duplicate stories | Compares article titles and content using cosine similarity, keeps highest-scored version |
| Source Diversity | Prevents single-source dominance | Caps any single source at 30% of total articles to ensure breadth |
| Temporal Decay | Prioritizes fresh content | Articles lose relevance score over time; a 12-hour-old story scores 50% of a 1-hour-old story |

Architecture

RSS Feeds ────────> Feed Parser ──────┐
Hacker News API ──> HN Fetcher ───────┤
Reddit API ───────> Reddit Fetcher ───┼──> Deduplicator ──> Interest Scorer ──> Ranker ──> Summarizer ──> Output
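The Ranker stage's 30% source-diversity cap could be sketched as follows. The function name and article shape are hypothetical; it assumes articles arrive already sorted by relevance score:

```python
def apply_diversity_cap(ranked: list[dict], max_total: int,
                        cap_ratio: float = 0.30) -> list[dict]:
    """Take articles in score order, skipping any source that
    already fills cap_ratio of the digest."""
    per_source_cap = max(1, int(max_total * cap_ratio))
    counts: dict[str, int] = {}
    digest = []
    for art in ranked:
        src = art["source"]
        if counts.get(src, 0) >= per_source_cap:
            continue  # this source has used up its share
        counts[src] = counts.get(src, 0) + 1
        digest.append(art)
        if len(digest) == max_total:
            break
    return digest

# Two sources with 10 articles each, in score order
ranked = [{"source": "hn", "title": f"hn-{i}"} for i in range(10)] + \
         [{"source": "tc", "title": f"tc-{i}"} for i in range(10)]
digest = apply_diversity_cap(ranked, max_total=10)
```

With a digest of 10 and a 30% cap, each source contributes at most 3 articles, so the digest here ends up with 6 entries, 3 from each source.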

Workflow Examples

Scenario 1: Developer staying current with AI

Input: 5 RSS feeds, HN top stories, 3 ML subreddits, primary interest in AI tooling
Processing: Scans 142 articles, deduplicates 23 cross-posted stories, scores the remaining 119 against the AI tooling interest, and selects the top 15 with the source diversity cap.
Output: 15-article digest with 3 top stories fully summarized and 12 brief headlines with relevance tags.

Scenario 2: Startup founder tracking competitors

Input: RSS feeds from competitor blogs, industry news, VC funding feeds, tracked company names
Processing: Highlights any mention of tracked companies, flags funding announcements, and groups competitive intelligence separately from general news.
Output: Digest with a dedicated "Competitor Watch" section showing 2 competitor blog posts and 1 funding announcement.

Best Practices

  1. Start with 5-8 sources and expand gradually -- Too many sources on day one creates noise. Add one new source per week and evaluate whether it improves your digest quality.

  2. Use excluded interests aggressively -- It is easier to exclude noise than to perfectly define signal. If crypto news keeps appearing in your tech feed, add it to interests.excluded.

  3. Set appropriate score thresholds -- An HN score of 100 captures major stories but filters low-engagement posts. Reddit varies by subreddit size: use 50 for smaller subreddits.
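As an illustration of practice 3, score thresholds sit in the per-source config. The hackernews fields below match the Quick Start example; the per-subreddit Reddit fields are assumptions about how such a config might look, not confirmed option names:

```yaml
sources:
  hackernews:
    enabled: true
    min_score: 100          # captures major stories, filters low engagement
  reddit:
    subreddits:
      - name: MachineLearning
        min_score: 50       # smaller subreddit, lower threshold
```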

Common Issues

  1. RSS feed returning empty results -- Some sites require a User-Agent header. The skill uses a standard browser UA by default, but some feeds may be geo-restricted.

  2. Too many articles from one source -- The source diversity cap defaults to 30%. If one source still dominates, reduce its max_items in the per-source config.
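For feeds that reject the default client (issue 1), sending a browser-style User-Agent is straightforward with the Python standard library. This is a sketch: the UA string is illustrative, and the actual network call is left commented out:

```python
import urllib.request

def build_feed_request(url: str) -> urllib.request.Request:
    """Prepare a feed request with a browser-style User-Agent,
    which some feed servers require before returning content."""
    return urllib.request.Request(
        url,
        headers={"User-Agent": "Mozilla/5.0 (compatible; news-curator/1.0)"},
    )

req = build_feed_request("https://techcrunch.com/feed/")
# body = urllib.request.urlopen(req).read()  # actual fetch; needs network access
```

Note that geo-restricted feeds will still fail regardless of the header; in that case the feed itself, not the client, is the problem.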

Privacy & Data Handling

All news fetching and summarization happens locally. Article content is fetched directly from source URLs and processed on your machine. No article text or reading patterns are sent to external analytics services. Feed subscription lists are stored in ~/.claude/config/news-curator/ and cached articles expire after 48 hours in ~/.claude/cache/news-curator/.
