🔧 AI for Real-Time News Summarization · April 15, 2026 · ✅ Tests passing

Real-Time News Summarizer

A CLI tool that aggregates trending news articles from online sources and generates AI-powered summaries in real time. It is aimed at developers who need concise insights from large volumes of news without processing each article by hand.

What It Does

  • Fetch news articles from multiple RSS feed sources.
  • Filter articles by category.
  • Generate AI-powered summaries of articles using Hugging Face's Transformers library.
  • Save summaries to a file or display them in the console.
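The fetch step in the tool relies on feedparser (see the source below), but the core idea can be sketched with the standard library alone. This is a minimal illustration of what "fetch news articles from an RSS feed" means, not the tool's actual implementation; it ignores namespaces, Atom feeds, and malformed markup, which feedparser handles:

```python
import xml.etree.ElementTree as ET

def parse_rss_items(xml_text):
    """Extract title/link/description records from a raw RSS 2.0 document.

    Stdlib sketch of the parsing feedparser performs for the tool.
    """
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "description": item.findtext("description", default=""),
        })
    return items

# Inline sample feed so the sketch runs without network access.
sample = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>Story A</title><link>https://example.com/a</link>
        <description>First story.</description></item>
  <item><title>Story B</title><link>https://example.com/b</link>
        <description>Second story.</description></item>
</channel></rss>"""

for entry in parse_rss_items(sample):
    print(entry["title"], entry["link"])
```

feedparser exposes the same information as `feed.entries`, with far more tolerance for real-world feeds, which is why the tool uses it instead.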

Installation

1. Clone this repository:

git clone https://github.com/yourusername/real_time_news_summarizer.git
cd real_time_news_summarizer

2. Install the required dependencies:

pip install -r requirements.txt

Usage

Run the tool using the following command:

python real_time_news_summarizer.py --source <RSS_FEED_URL> [--category <CATEGORY>] [--length <SUMMARY_LENGTH>] [--output <OUTPUT_FILE>]

Arguments

  • --source: One or more RSS feed URLs to fetch news from (required).
  • --category: Filter articles by category (optional).
  • --length: Maximum summary length in tokens (default: 100).
  • --output: File path to save the summaries (optional).

Example

Fetch and summarize news articles from two RSS feeds, filter by the category "technology", and save the summaries to a file:

python real_time_news_summarizer.py --source https://example.com/rss1.xml https://example.com/rss2.xml --category technology --length 50 --output summaries.txt
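Whether written to `summaries.txt` or printed to the console, each article becomes a Title/Link/Summary block separated by blank lines. A small sketch of that formatting, using the same field names as the source below (the sample records here are invented):

```python
def format_summaries(summaries):
    """Join article records into the Title/Link/Summary blocks the tool emits."""
    return "\n\n".join(
        f"Title: {s['title']}\nLink: {s['link']}\nSummary: {s['summary']}"
        for s in summaries
    )

# Invented sample data standing in for summarized articles.
records = [
    {"title": "Chip launch", "link": "https://example.com/1",
     "summary": "A new chip ships."},
    {"title": "AI update", "link": "https://example.com/2",
     "summary": "A model improves."},
]
print(format_summaries(records))
```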

Source Code

import argparse
import feedparser
from transformers import pipeline
from bs4 import BeautifulSoup
import sys

def fetch_rss_feed(url):
    try:
        feed = feedparser.parse(url)
        # feedparser sets bozo for any parse issue, even when entries were
        # still recovered; only treat it as fatal when nothing came back
        if feed.bozo and not feed.entries:
            raise ValueError(f"Failed to parse feed: {feed.bozo_exception}")
        return feed.entries
    except Exception as e:
        print(f"Error fetching RSS feed: {e}", file=sys.stderr)
        return []

def summarize_text(text, summarizer, max_length):
    try:
        # truncation=True keeps long articles within the model's input limit
        summary = summarizer(text, max_length=max_length, min_length=10, do_sample=False, truncation=True)
        return summary[0]['summary_text']
    except Exception as e:
        print(f"Error summarizing text: {e}", file=sys.stderr)
        return ""

def extract_text_from_entry(entry):
    try:
        if 'summary' in entry:
            return BeautifulSoup(entry['summary'], 'html.parser').get_text()
        elif 'content' in entry and entry['content']:
            return BeautifulSoup(entry['content'][0]['value'], 'html.parser').get_text()
        elif 'description' in entry:
            return BeautifulSoup(entry['description'], 'html.parser').get_text()
        else:
            return ""
    except Exception as e:
        print(f"Error extracting text from entry: {e}", file=sys.stderr)
        return ""

def main():
    parser = argparse.ArgumentParser(description="Real-Time News Summarizer")
    parser.add_argument('--category', type=str, help="Filter by category (optional)")
    parser.add_argument('--length', type=int, default=100, help="Maximum summary length in tokens (default: 100)")
    parser.add_argument('--source', type=str, nargs='+', required=True, help="RSS feed URLs to fetch news from")
    parser.add_argument('--output', type=str, help="File path to save the summaries (optional)")

    args = parser.parse_args()

    # loads the default Hugging Face summarization model (downloaded on first use)
    summarizer = pipeline("summarization")
    summaries = []

    for url in args.source:
        entries = fetch_rss_feed(url)
        for entry in entries:
            # check the category against every tag on the entry, not just the first
            if args.category:
                tags = entry.get('tags', [])
                if not any(args.category.lower() in t.get('term', '').lower() for t in tags):
                    continue

            text = extract_text_from_entry(entry)
            if text:
                summary = summarize_text(text, summarizer, args.length)
                summaries.append({
                    'title': entry.get('title', 'No Title'),
                    'link': entry.get('link', 'No Link'),
                    'summary': summary
                })

    output = "\n\n".join([f"Title: {s['title']}\nLink: {s['link']}\nSummary: {s['summary']}" for s in summaries])

    if args.output:
        try:
            with open(args.output, 'w', encoding='utf-8') as f:
                f.write(output)
            print(f"Summaries saved to {args.output}")
        except Exception as e:
            print(f"Error writing to file: {e}", file=sys.stderr)
    else:
        print(output)

if __name__ == "__main__":
    main()
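One subtlety in the filtering step above: RSS entries can carry several tags, so a robust category check must scan all of them rather than only the first. The check can be isolated as a small standalone helper (a hypothetical refactoring for illustration, not a function in the tool):

```python
def matches_category(entry, category):
    """Return True if any tag on the entry contains the category, case-insensitively.

    `entry` mirrors a feedparser entry: tags live under entry["tags"] as
    dicts with a "term" key. An empty/None category disables filtering.
    """
    if not category:
        return True  # no filter requested
    tags = entry.get("tags", [])
    return any(category.lower() in t.get("term", "").lower() for t in tags)

# Invented entry with two tags; only the second matches "technology".
entry = {"title": "GPU news", "tags": [{"term": "Hardware"}, {"term": "Technology"}]}
print(matches_category(entry, "technology"))  # True: second tag matches
print(matches_category(entry, "sports"))     # False: no tag matches
```

Scanning every tag with `any(...)` avoids silently dropping articles whose matching tag happens not to be listed first.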


Details

Tool Name
real_time_news_summarizer
Category
AI for Real-Time News Summarization
Generated
April 15, 2026
Tests
Passing ✅
Fix Loops
3

Quick Install

Clone just this tool:

git clone --depth 1 --filter=blob:none --sparse \
  https://github.com/ptulin/autoaiforge.git
cd autoaiforge
git sparse-checkout set generated_tools/2026-04-15/real_time_news_summarizer
cd generated_tools/2026-04-15/real_time_news_summarizer
pip install -r requirements.txt 2>/dev/null || true
python real_time_news_summarizer.py --source <RSS_FEED_URL>
Real-Time News Summarizer · AI Tools by AutoAIForge