We started a project on n8n. Three days in, we rebuilt it as a Claude skill. Cost per search dropped significantly. Iteration time dropped from hours to minutes.
This is not an anti-n8n post. n8n is a good tool. But the project taught us something specific about where the line is — and that line shifted in 2026.
The project: an HR candidate sourcing tool. The goal: replace an expensive, limited LinkedIn subscription with a custom workflow that could search more flexibly, pull from multiple sources, and feed results into the client’s existing process.
The original spec called for n8n. Standard choice for automation workflows. We started building.
A “simple” sourcing workflow in n8n required five separate services: a webhook to receive the job description, a Claude API call, a LinkedIn API service, a formatting service, and the output step. Five services, five failure points. Every time one of them had a problem (a rate limit, an authentication error, a service outage) the whole workflow stopped, threw an error, and required manual intervention: find the issue, update the automation, test from scratch.
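The compounding effect is easy to put numbers on. Assuming, purely for illustration, that each service is up 99% of the time and fails independently, a five-link chain is noticeably less reliable than any single link:

```python
def chain_uptime(per_service_uptime: float, n_services: int) -> float:
    """Probability that a linear chain of n independent services all succeed."""
    return per_service_uptime ** n_services

# 0.99 per service is an assumed figure for illustration, not a measured one.
five_service_chain = chain_uptime(0.99, 5)
single_service = chain_uptime(0.99, 1)
print(f"{five_service_chain:.3f} vs {single_service:.3f}")  # ~0.951 vs 0.990
```

Under these assumptions roughly one run in twenty fails somewhere in the chain. The exact figure matters less than the direction: every added hop multiplies the failure surface.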
That’s not the n8n team’s fault. That’s just what this category of problem looks like when you use n8n for it.
After the third debugging session in a row, we switched approaches. The entire n8n workflow translated into a Claude skill in a couple of prompts. Not days — prompts.
The architecture change:
Before (n8n): Job description → n8n webhook → Claude API call → LinkedIn API service → formatting service → output
After (Claude skill): Job description → Claude skill (runs everything internally, on subscription)
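The shape of the change can be sketched with plain functions. Everything below is an illustrative stand-in (hypothetical names, stubbed services), not the actual project code:

```python
# Before: four separate hops, each its own service and its own failure point.
def receive_webhook(job: str) -> dict:
    return {"job": job}                       # n8n webhook (stub)

def call_claude_api(payload: dict) -> str:
    return f"query for {payload['job']}"      # per-token Claude API call (stub)

def search_linkedin(query: str) -> list[str]:
    return [f"candidate matching {query}"]    # LinkedIn API service (stub)

def format_results(candidates: list[str]) -> str:
    return "\n".join(candidates)              # formatting service (stub)

def n8n_pipeline(job: str) -> str:
    return format_results(search_linkedin(call_claude_api(receive_webhook(job))))

# After: one entry point; search and formatting happen inside a single
# Claude session running on subscription.
def claude_skill(job: str) -> str:
    return f"candidate matching query for {job}"  # same result, one hop (stub)

assert n8n_pipeline("Senior Data Engineer") == claude_skill("Senior Data Engineer")
```

Same input, same output; the difference is how many independent services sit between the two.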
The differences that mattered in practice:
Cost. The Claude API at per-token pricing is significantly more expensive than Claude running on a Max subscription. By moving the logic into a skill running on subscription, the per-search cost dropped substantially. That matters when you’re running dozens of searches per session.
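The arithmetic behind that is straightforward. Every number below is an assumption for illustration (hypothetical per-token prices, a hypothetical flat subscription fee, guessed token counts), not the client’s actual figures:

```python
# Back-of-envelope cost comparison. All constants here are assumptions.
API_INPUT_PER_MTOK = 3.00        # assumed $/million input tokens
API_OUTPUT_PER_MTOK = 15.00      # assumed $/million output tokens
SUBSCRIPTION_PER_MONTH = 100.00  # assumed flat monthly subscription fee

def api_cost_per_search(input_tokens: int, output_tokens: int) -> float:
    """Per-token pricing: you pay for every search individually."""
    return (input_tokens / 1e6) * API_INPUT_PER_MTOK \
         + (output_tokens / 1e6) * API_OUTPUT_PER_MTOK

def subscription_cost_per_search(searches_per_month: int) -> float:
    """Flat subscription: the marginal cost falls as usage grows."""
    return SUBSCRIPTION_PER_MONTH / searches_per_month

# A sourcing search with large context: say 50k input and 5k output tokens.
print(f"API: ${api_cost_per_search(50_000, 5_000):.3f}/search")
print(f"Subscription at 1,000 searches/mo: "
      f"${subscription_cost_per_search(1_000):.3f}/search")
```

The crossover point depends entirely on volume, which is why “dozens of searches per session” is the detail that decides it.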
Iteration speed. When something needed to change in n8n, that meant opening the canvas, finding the right node, making the change, and testing the whole flow again. With the Claude skill: “Change the search to also look for candidates who changed roles in the last 6 months” — done, mid-session, in 30 seconds. The client can make changes themselves just by talking.
Reliability. One service instead of five. When Claude is up, the skill works.
This isn’t an “n8n is dead” take. It genuinely isn’t.
n8n is still the right tool when a workflow has to run unattended: on a schedule, in the background, triggered by events rather than by a person sitting in a live session. For those use cases, n8n is still excellent. We still run it.
The practical threshold is changing. In 2024, n8n was the obvious choice for most AI automation workflows because Claude skills either didn’t exist or were too limited. In 2026, Claude’s context window, tool use, and subscription model make it genuinely competitive for a category of workflows that used to default to n8n.
The deciding question isn’t “which tool is better?” It’s: does this workflow need to run in the background on a schedule? If yes — n8n. If it runs when a human is working — consider a Claude skill first.
For our HR project, the answer was clear. The client runs searches during working hours, wants to iterate mid-session, and cares about cost per search. Claude skill won on every dimension.
We build custom AI automation for businesses — workflows that run in production, not just demos. If you have a process that’s eating your team’s time, let’s talk.
Related: What is an AI-native web studio · Geotechnical survey automation case study
ilf.studio — AI workflow automation and web development, Gdansk, Poland. 13 countries. 150+ projects.