Social Media Automation - n8n + AI Content Pipeline Case Study by CueBytes
Country

United States

Platform

Web (n8n Automation)

Type

AI Content Generation & Publishing

Engine

n8n + GPT-4o-mini + LinkedIn API

Status

Live & Active

Overview

Automated Open Source Content Discovery, AI Writing & LinkedIn Publishing

CueBytes needed a hands-off system to consistently publish high-quality LinkedIn content about trending open source projects. The goal: build thought leadership in the developer community without spending hours every day researching projects, writing posts, and manually publishing.

We built a 4-phase automated pipeline running on n8n that discovers trending GitHub repositories daily, generates polished LinkedIn posts using GPT-4o-mini, and publishes them on a set schedule — all without human intervention.

The pipeline has been running in production 24/7, publishing one curated open source project to LinkedIn every day at 8 AM UTC.

Tech Stack

n8n Workflow Automation · GitHub API · OpenAI GPT-4o-mini · LinkedIn API (OAuth2) · PostgreSQL · Discord Webhooks · Docker · Node.js

The Problem

What Manual Content Publishing Looked Like

Manually browsing GitHub Trending every day to find interesting repositories

Reading through READMEs and documentation to understand each project

Writing a LinkedIn post from scratch with the right tone and structure

Logging into LinkedIn, formatting the post, and publishing at the optimal time

No tracking of what's been posted before, creating a risk of duplicate content

No way to maintain a consistent posting cadence during busy weeks

Architecture

4 Interconnected n8n Workflows

Each phase runs on its own schedule with staggered timing. One project flows through the entire pipeline per day — from discovery to published LinkedIn post.

Phase 1: GitHub Scraper (daily, 2:00 AM UTC)
  Search API → Filter + Dedup → README Fetch → PostgreSQL (status: pending)

Phase 2: GPT-4o-mini Content Gen (daily, 6:00 AM UTC)
  Reads pending project → generates post → PostgreSQL (status: queued)

Phase 3: LinkedIn Publish (daily, 8:00 AM UTC)
  Reads queued post → publishes → status: published

Error Handler → Discord (triggered by any workflow failure)

Scheduling Strategy

Time (UTC)   Workflow   Action
2:00 AM      Phase 1    Scrape GitHub, insert new projects
6:00 AM      Phase 2    Generate LinkedIn content for 1 pending project
8:00 AM      Phase 3    Publish 1 queued post to LinkedIn
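The stagger between phases can be expressed as three standard cron expressions, one per n8n Schedule Trigger (a sketch; the constant names are illustrative, not the workflow's actual node names):

```javascript
// The three phase schedules as cron expressions, all in UTC.
const schedules = {
  phase1_scraper:  "0 2 * * *", // 2:00 AM: scrape GitHub trending
  phase2_generate: "0 6 * * *", // 6:00 AM: generate LinkedIn content
  phase3_publish:  "0 8 * * *", // 8:00 AM: publish queued post
};
```

The 4-hour and 2-hour gaps give each phase ample time to finish before the next one reads its output from PostgreSQL.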
01 · GitHub Trending Scraper

Trigger: Daily at 2:00 AM UTC

  • GitHub API search for repos with 100+ stars, sorted by popularity
  • Extract project name, URL, stars, forks, language, and description
  • Filter by minimum stars and recency (updated within 365 days)
  • Deduplicate against existing projects in PostgreSQL
  • Fetch full README content via GitHub API for AI context
  • Insert new projects with status=pending
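The discovery query above maps onto GitHub's repository search endpoint. A minimal sketch of how the search URL could be assembled (the function name, page size, and exact qualifiers are illustrative assumptions, not the production workflow's values):

```javascript
// Build a GitHub search URL for popular, recently updated repositories.
// Mirrors the Phase 1 filters: 100+ stars, updated within 365 days.
function buildTrendingSearchUrl({ minStars = 100, maxAgeDays = 365 } = {}) {
  const cutoff = new Date(Date.now() - maxAgeDays * 24 * 60 * 60 * 1000)
    .toISOString()
    .slice(0, 10); // YYYY-MM-DD, as the `pushed:` qualifier expects
  const q = `stars:>=${minStars} pushed:>${cutoff}`;
  const params = new URLSearchParams({
    q,
    sort: "stars", // "sorted by popularity"
    order: "desc",
    per_page: "25", // page size is an assumption
  });
  return `https://api.github.com/search/repositories?${params}`;
}
```

In n8n this would live in an HTTP Request node; deduplication then happens against the `projects` table before any README is fetched.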
02 · AI Content Generation

Trigger: Daily at 6:00 AM UTC

  • Select highest-starred pending project from PostgreSQL
  • Decode base64 README and build structured prompt
  • GPT-4o-mini generates LinkedIn post with developer-audience tone
  • Content escaped for SQL safety and stored on project record
  • Status transitions from pending to queued
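The decode-and-prompt step can be sketched as follows. GitHub's contents API returns README bodies base64-encoded; the field names, prompt wording, and 4,000-character truncation here are illustrative assumptions:

```javascript
// Decode the stored base64 README and assemble the GPT-4o-mini prompt.
function buildLinkedInPrompt(project) {
  const readme = Buffer.from(project.readme_content, "base64").toString("utf8");
  return [
    "You write LinkedIn posts for a developer audience.",
    `Project: ${project.project_name} (${project.stars_count} stars, ${project.language})`,
    `URL: ${project.github_url}`,
    "Write a concise, engaging post introducing this project.",
    "",
    "README:",
    readme.slice(0, 4000), // keep the prompt within a modest token budget
  ].join("\n");
}

// The model's reply is escaped before the SQL update that stores it
// on the project record (single quotes doubled for PostgreSQL).
function escapeForSql(text) {
  return text.replace(/'/g, "''");
}
```

Parameterized queries would make `escapeForSql` unnecessary; manual escaping is shown because the source describes SQL-safe escaping as an explicit step.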
03 · LinkedIn Publishing

Trigger: Daily at 8:00 AM UTC

  • Select oldest queued project (FIFO order)
  • Publish via n8n LinkedIn node with OAuth2 credential
  • Extract LinkedIn post URN from API response
  • Update project status to published with post ID
  • Log to posting_history table for audit trail
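The FIFO pick in the first step is a single SQL query in production; the equivalent logic, sketched in JavaScript (a `created_at` timestamp column is assumed, as it is not shown in the schema summary):

```javascript
// Select the oldest project in the `queued` state (FIFO order).
function pickNextQueued(projects) {
  const queued = projects
    .filter((p) => p.status === "queued")
    .sort((a, b) => new Date(a.created_at) - new Date(b.created_at));
  return queued[0] ?? null; // null when nothing is ready to publish
}

// The corresponding SQL (assumed shape of the production query):
const nextQueuedSql = `
  SELECT * FROM projects
  WHERE status = 'queued'
  ORDER BY created_at ASC
  LIMIT 1;
`;
```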
04 · Error Handler

Trigger: On any workflow failure

  • Parse error details: workflow name, node, message, stack trace
  • Classify severity: critical, high, medium, low
  • Log to workflow_errors or error_logs table
  • Send formatted Discord webhook alert
  • Mark failed projects with status=error to prevent retries
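The classify-and-alert steps can be sketched like this. The keyword rules and payload fields are illustrative assumptions; only the severity tiers and the critical-only @everyone behavior come from the source:

```javascript
// Map an error message to one of the four severity tiers.
function classifySeverity(errorMessage) {
  const msg = errorMessage.toLowerCase();
  if (/auth|credential|token/.test(msg)) return "critical"; // publishing blocked
  if (/rate limit|timeout/.test(msg)) return "high";
  if (/not found|empty/.test(msg)) return "medium";
  return "low";
}

// Build the Discord webhook payload for a workflow failure.
function buildDiscordAlert({ workflow, node, message }) {
  const severity = classifySeverity(message);
  return {
    // @everyone is reserved for critical failures, per the alerting design
    content: severity === "critical" ? "@everyone" : "",
    embeds: [
      {
        title: `Workflow failed: ${workflow}`,
        description: `Node: ${node}\nSeverity: ${severity}\n${message}`,
      },
    ],
  };
}
```

Posting the payload is then a single HTTP POST to the Discord webhook URL.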

What We Built

8 Core Capabilities

GitHub Trending Discovery

Automated GitHub API search for repositories with 100+ stars, sorted by popularity. Fetches full README content for deep AI context. Runs nightly at 2 AM UTC.

AI Content Generation

GPT-4o-mini generates polished LinkedIn posts from project data and README content. Carefully tuned system prompt ensures developer-audience tone and SQL-safe formatting.

LinkedIn Auto-Publishing

Native n8n LinkedIn node with OAuth2 credential publishes one curated post daily at 8 AM UTC. Post URN tracked for analytics.

Full Status Lifecycle

Every project flows through pending, queued, published, or error states. PostgreSQL tracks the entire journey with timestamps at each transition.

Dual-Layer Error Handling

Workflow-level and project-level error handlers with severity classification. Critical failures trigger @everyone Discord alerts. Bad projects are isolated so the pipeline continues.

Complete Audit Trail

Posting history table records every published post with platform, post ID, content, and success status. Full traceability from discovery to publication.

Staggered Schedule Design

Three phases run at 2 AM, 6 AM, and 8 AM with 2-4 hour gaps. Ensures each step completes before the next begins. One project per day for quality control.

Competitor Content Scraping

Companion workflows scrape RSS feeds from developer content sites. Human-in-the-loop approval gate ensures quality before publishing competitor-sourced ideas.

Data Flow

Project Status Lifecycle

pending → queued → published (or error)

projects (PostgreSQL):
  project_name, github_url, owner, description, stars_count, language,
  readme_content, linkedin_content, status, linkedin_post_id, published_date

posting_history (PostgreSQL):
  project_id, platform, post_id, post_content, success

workflow_errors (PostgreSQL):
  workflow_name, node_name, error_message, severity, execution_id

Outcome

What's Running in Production

Fully automated daily LinkedIn publishing - zero manual intervention

GitHub trending discovery running every night at 2 AM

AI-generated posts tailored for developer audience

One high-quality post published daily at 8 AM UTC

Complete status tracking: pending, queued, published, error

Full posting history audit trail with LinkedIn post URNs

Dual-layer error handling with Discord alerting

Companion competitor scraping with human-approval gate

Need an automated content pipeline for your brand?