Practical Exercise: Wikipedia Research Agent in n8n
19 Jan
In this lesson, we will build a Wikipedia research agent in n8n: a workflow that:
- Takes a research topic as input
- Fetches relevant information from Wikipedia
- Uses an LLM to summarize the information
- Returns a concise, well-structured summary
Step-by-Step Implementation
Step 1: HTTP Request to Wikipedia API
Nodes needed:
- Webhook node (to trigger workflow)
- HTTP Request node
Configuration:
- Method: GET
- URL: https://en.wikipedia.org/api/rest_v1/page/summary/{topic}
- Replace {topic} with dynamic data from the webhook, for example the n8n expression {{ $json.body.topic }}
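Before wiring the URL into the HTTP Request node, it helps to see how the topic must be encoded. The REST API expects the page title URL-encoded, with spaces written as underscores. A minimal helper (the function name is ours, not part of n8n) sketches this:

```javascript
// Build the Wikipedia REST summary URL for a topic.
// Spaces become underscores; the result is then URL-encoded.
function wikipediaSummaryUrl(topic) {
  const slug = encodeURIComponent(topic.trim().replace(/\s+/g, "_"));
  return `https://en.wikipedia.org/api/rest_v1/page/summary/${slug}`;
}

// Example: "Alan Turing" →
// https://en.wikipedia.org/api/rest_v1/page/summary/Alan_Turing
```

In the workflow itself, the same transformation can be done inline with an n8n expression, but keeping it explicit makes the encoding rule easy to test.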
Step 2: Data Processing
Nodes needed:
- Set node (to extract relevant fields)
Fields to extract:
- title
- extract (the summary text)
- pageid
Also handle errors (page not found).
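If you prefer a Code node over a Set node, the extraction step above can be sketched as a small function. The field names (title, extract, pageid) come from the REST summary response; the "page not found" check here simply treats a missing extract as an error, which is one reasonable convention, not the only one:

```javascript
// Pull the fields we need out of the Wikipedia summary response.
// A missing extract is treated as "page not found".
function extractSummaryFields(response) {
  if (!response || !response.extract) {
    return { error: "Page not found" };
  }
  return {
    title: response.title,
    summary: response.extract,
    pageid: response.pageid,
  };
}
```

In an n8n Code node you would apply this to the incoming item's JSON and return the result as the node's output item.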
Step 3: AI Processing with Free LLM
Nodes needed:
- AI Agent or Chat Model node (using Google Gemini)
Prompt engineering:
"Summarize the following Wikipedia content in 3 bullet points:
Title: {title}
Content: {extract}
Focus on key facts and importance."
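The prompt above has two placeholders, so it is worth showing how they get filled in. A small builder function (our own helper, not an n8n or Gemini API) assembles the final prompt string from the extracted fields:

```javascript
// Fill the {title} and {extract} placeholders of the summarization prompt.
function buildPrompt(title, extract) {
  return [
    "Summarize the following Wikipedia content in 3 bullet points:",
    `Title: ${title}`,
    `Content: ${extract}`,
    "Focus on key facts and importance.",
  ].join("\n");
}
```

In the AI Agent node you would typically do this with expressions instead, e.g. referencing the Set node's output fields directly in the prompt text.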
Step 4: Output Formatting
Nodes needed:
- Code node or HTML node
Create clean output:
- Format as Markdown
- Include the source link
- Add a timestamp
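A Code node version of this formatting step might look like the sketch below. It builds the source link from the pageid using Wikipedia's curid URL form, and stamps the output with the current time; the function name and output shape are our choices:

```javascript
// Format the final result as Markdown with a source link and timestamp.
function formatResult(title, summary, pageid) {
  const source = `https://en.wikipedia.org/?curid=${pageid}`;
  const timestamp = new Date().toISOString();
  return `## ${title}\n\n${summary}\n\nSource: ${source}\nGenerated: ${timestamp}`;
}
```

The returned string can be sent straight back through the Webhook Response node, or passed on to an Email or Slack node.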
Complete Workflow Structure
Start (Webhook/Manual Trigger)
↓
HTTP Request → Wikipedia API
↓
Error Handler (if page not found)
↓
Set Node (extract title & content)
↓
AI Agent Node (Gemini, free tier)
↓
Format Output
↓
Return Result (Webhook Response/Email/Slack)
If you liked the tutorial, spread the word and share the link and our website, Studyopedia, with others.