AI news · March 5, 2026 · Tests passing
AI Trend Analyzer
A CLI tool that analyzes recent AI news articles to extract trending topics, common keywords, and sentiment. It assists developers in identifying hot topics and sentiment shifts in the AI landscape.
What It Does
- Extracts keywords from articles using spaCy.
- Analyzes sentiment using NLTK's SentimentIntensityAnalyzer.
- Generates a word cloud from the extracted keywords.
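The keyword-extraction step boils down to tokenizing each article, dropping stop words, and counting what remains. Here is a minimal sketch of that counting logic, using a plain regex tokenizer and a tiny hand-rolled stop-word list in place of spaCy's lemmatizer, so the output has the same shape as the tool's `extract_keywords` without needing the `en_core_web_sm` model:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; spaCy's built-in list is far larger.
STOP_WORDS = {"is", "a", "the", "of", "and", "to", "in"}

def count_keywords(articles):
    """Count non-stop-word tokens across all articles (lowercased)."""
    counts = Counter()
    for article in articles:
        tokens = re.findall(r"[a-zA-Z]+", article.lower())
        counts.update(t for t in tokens if t not in STOP_WORDS)
    return counts

counts = count_keywords([
    "AI is transforming the world.",
    "Machine learning is a subset of AI.",
])
print(counts.most_common(3))  # "ai" appears in both articles, count 2
```

Unlike this sketch, the real tool lemmatizes tokens, so "transforming" and "transformed" would collapse into one key.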
Installation

1. Clone the repository:

   ```shell
   git clone https://github.com/your-repo/ai-trend-analyzer.git
   cd ai-trend-analyzer
   ```

2. Install the required dependencies:

   ```shell
   pip install -r requirements.txt
   ```

3. Download the spaCy language model:

   ```shell
   python -m spacy download en_core_web_sm
   ```

Usage
Input JSON file (articles.json):

```json
[
  "AI is transforming the world.",
  "Machine learning is a subset of AI."
]
```

Run the tool:

```shell
python ai_trend_analyzer.py --input articles.json --output wordcloud.png
```

Source Code
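The tool requires the input file to be a JSON array of strings and rejects anything else. A small standard-library sketch of that load-and-validate step (the temp-file round trip here just stands in for a real `articles.json`):

```python
import json
import os
import tempfile

def load_articles(path):
    """Load a JSON file and verify it is an array of strings."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if not isinstance(data, list) or not all(isinstance(a, str) for a in data):
        raise ValueError("Input file must contain a JSON array of strings.")
    return data

# Write a sample file matching the Usage example, then load it back.
tmp = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump(["AI is transforming the world.",
           "Machine learning is a subset of AI."], tmp)
tmp.close()
articles = load_articles(tmp.name)
os.unlink(tmp.name)
print(len(articles))  # 2
```

Validating up front keeps the later spaCy and NLTK stages from failing mid-run on malformed input.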
```python
import argparse
import json
import os
from collections import Counter

import matplotlib.pyplot as plt
import spacy
from nltk import download
from nltk.sentiment import SentimentIntensityAnalyzer
from wordcloud import WordCloud


def load_input(input_path):
    """Load input from a JSON file."""
    if os.path.exists(input_path):
        with open(input_path, "r", encoding="utf-8") as file:
            return json.load(file)
    raise FileNotFoundError(f"Input file {input_path} not found.")


def extract_keywords(articles):
    """Extract keyword frequencies from articles using spaCy."""
    nlp = spacy.load("en_core_web_sm")
    all_keywords = []
    for article in articles:
        doc = nlp(article)
        keywords = [token.lemma_.lower() for token in doc
                    if token.is_alpha and not token.is_stop]
        all_keywords.extend(keywords)
    return Counter(all_keywords)


def analyze_sentiment(articles):
    """Analyze average sentiment of articles using NLTK's VADER."""
    download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()
    sentiments = [sia.polarity_scores(article) for article in articles]
    return {
        "positive": sum(s["pos"] for s in sentiments) / len(sentiments),
        "neutral": sum(s["neu"] for s in sentiments) / len(sentiments),
        "negative": sum(s["neg"] for s in sentiments) / len(sentiments),
    }


def generate_wordcloud(keywords, output_path):
    """Generate a word cloud image from keyword frequencies."""
    wordcloud = WordCloud(width=800, height=400,
                          background_color="white").generate_from_frequencies(keywords)
    plt.figure(figsize=(10, 5))
    plt.imshow(wordcloud, interpolation="bilinear")
    plt.axis("off")
    plt.savefig(output_path)
    plt.close()


def main():
    parser = argparse.ArgumentParser(description="AI Trend Analyzer")
    parser.add_argument("--input", required=True,
                        help="Path to input JSON file containing articles.")
    parser.add_argument("--output", required=True,
                        help="Path to save the output word cloud image.")
    args = parser.parse_args()
    try:
        data = load_input(args.input)
        # Reject empty arrays too: averaging sentiment over zero articles
        # would divide by zero.
        if (not isinstance(data, list) or not data
                or not all(isinstance(article, str) for article in data)):
            raise ValueError("Input file must contain a non-empty JSON array of strings.")
        keywords = extract_keywords(data)
        sentiment = analyze_sentiment(data)
        print("Top Keywords:")
        for word, count in keywords.most_common(10):
            print(f"{word}: {count}")
        print("\nSentiment Analysis:")
        for key, value in sentiment.items():
            print(f"{key.capitalize()}: {value:.2f}")
        generate_wordcloud(keywords, args.output)
        print(f"Word cloud saved to {args.output}")
    except Exception as e:
        print(f"Error: {e}")


if __name__ == "__main__":
    main()
```
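`analyze_sentiment` simply averages VADER's per-article `pos`/`neu`/`neg` scores. A sketch of just the averaging arithmetic, with made-up scores standing in for real `polarity_scores()` output:

```python
# Illustrative scores only; real values come from
# SentimentIntensityAnalyzer.polarity_scores().
sentiments = [
    {"pos": 0.4, "neu": 0.6, "neg": 0.0},
    {"pos": 0.2, "neu": 0.7, "neg": 0.1},
]

avg_sentiment = {
    "positive": sum(s["pos"] for s in sentiments) / len(sentiments),
    "neutral": sum(s["neu"] for s in sentiments) / len(sentiments),
    "negative": sum(s["neg"] for s in sentiments) / len(sentiments),
}
print(avg_sentiment)  # roughly: positive 0.30, neutral 0.65, negative 0.05
```

Note that an empty article list would make these divisions raise `ZeroDivisionError`, which is why the input should be validated as non-empty before this step.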
Details

- Tool Name: ai_trend_analyzer
- Category: AI news
- Generated: March 5, 2026
- Tests: Passing ✓
- Fix Loops: 2
Quick Install
Clone just this tool:
```shell
git clone --depth 1 --filter=blob:none --sparse \
  https://github.com/ptulin/autoaiforge.git
cd autoaiforge
git sparse-checkout set generated_tools/2026-03-05/ai_trend_analyzer
cd generated_tools/2026-03-05/ai_trend_analyzer
pip install -r requirements.txt 2>/dev/null || true
python ai_trend_analyzer.py
```