Disclaimer first
The architecture
A modern AI-assisted trading system has four layers:
- Signal generation: Pine Script strategies running on TradingView charts
- Signal validation: Claude evaluates whether the signal makes sense given current market context
- Risk gating: position sizing, drawdown limits, correlation checks
- Execution: broker API or DEX router
The pattern is "signal-then-validate-then-execute." Claude is not generating signals from scratch; it's reviewing signals from a deterministic strategy and adding judgment. This division of labor matters: deterministic strategies are auditable; Claude provides context-awareness.
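The four layers can be wired together as a thin pipeline in which any layer can veto the trade. This is an illustrative sketch, not code from any particular library; the names (Signal, Verdict, run_pipeline) are my own, and the validate/risk/execute callables stand in for the real implementations described below.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    ticker: str
    action: str        # "BUY" or "SELL"
    price: float
    strategy_id: str

@dataclass
class Verdict:
    decision: str      # "proceed" | "hold" | "abort"
    rationale: str

def run_pipeline(signal: Signal,
                 validate: Callable[[Signal], Verdict],
                 risk_ok: Callable[[Signal], bool],
                 execute: Callable[[Signal], str]) -> str:
    """Signal -> validate -> risk gate -> execute. Any layer can veto."""
    verdict = validate(signal)
    if verdict.decision != "proceed":
        return f"skipped: {verdict.rationale}"
    if not risk_ok(signal):
        return "blocked by risk gate"
    return execute(signal)
```

The key property is that the deterministic strategy, the Claude check, and the risk gate are separate callables, so each can be tested and replaced independently.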
Writing Pine Script with Claude Code
Pine Script is TradingView's proprietary scripting language. It's quirky: typed-but-not-really, with weird semantics around bars and lookback. Claude Code handles it surprisingly well. I've found these prompts especially productive:
- "Write a Pine Script v5 strategy that enters long when [condition] and exits when [condition]. Include alertcondition() calls so I can wire up webhook alerts."
- "Review this Pine Script for lookahead bias. Identify any places where I'm using request.security() incorrectly."
- "Refactor this strategy to use multiple timeframes: entries on 4h, trailing stops on 15m."
The combination of Claude's Pine knowledge and TradingView's instant visual feedback means iteration cycles measured in minutes, not hours.
TradingView webhook alerts
The bridge from TradingView to your code is webhook alerts. When a strategy condition fires, TradingView POSTs a JSON payload to a URL you specify. Your backend receives it and decides what to do.
```
// TradingView alert message format (you author this in the alert UI):
{
  "ticker": "{{ticker}}",
  "action": "BUY",
  "price": {{close}},
  "timestamp": "{{time}}",
  "strategy_id": "trend_follow_v3"
}
```
Two gotchas:
- TradingView webhook IPs are documented but can change. Whitelist them on your backend.
- TradingView retries failed webhooks โ your endpoint must be idempotent.
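Because TradingView retries failed deliveries, the receiving endpoint needs to deduplicate. A minimal sketch of that logic, framework-agnostic (the function would sit behind whatever HTTP server you use): it keys each alert on strategy_id + ticker + timestamp and ignores repeats. The in-memory set is an assumption for illustration; a real deployment would use a persistent store with a TTL so restarts don't re-process retried alerts.

```python
import json

_seen_keys: set[str] = set()

def handle_webhook(raw_body: str) -> str:
    """Parse a TradingView alert payload and drop duplicate deliveries.

    TradingView retries on failure, so each alert is keyed on
    (strategy_id, ticker, timestamp) and repeats are ignored.
    """
    payload = json.loads(raw_body)
    dedupe_key = f'{payload["strategy_id"]}:{payload["ticker"]}:{payload["timestamp"]}'
    if dedupe_key in _seen_keys:
        return "duplicate-ignored"
    _seen_keys.add(dedupe_key)
    # hand off to the validation / risk / execution layers here
    return "accepted"
```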
Claude as a strategy validator
This is the interesting layer. When a signal fires, before executing, we ask Claude:
- "Given current news/sentiment for $TICKER, does this signal make sense?"
- "What macro events are scheduled in the next 24 hours that could invalidate this?"
- "Look at the recent volatility regime: is this strategy's edge likely to hold here?"
Claude returns a structured response (proceed / hold / abort) with a written rationale. The bot follows the deterministic strategy 95% of the time and overrides via Claude in the 5% of cases where context matters (Fed announcements, earnings, unusual volatility).
For Claude to do this well, it needs current context. Plug in a market-data API (Polygon, Alpaca) and a news feed (Benzinga, NewsAPI), pass relevant snippets in the prompt, and Claude becomes meaningfully better than a pure-deterministic system.
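The two pieces around the actual API call are prompt assembly and defensive parsing of the verdict. A hedged sketch of both (the Claude call itself is omitted; function names and the JSON reply format are my own conventions, not an Anthropic API contract). Note the fail-safe: anything malformed maps to "hold", so a parsing bug can never cause a trade.

```python
import json

ALLOWED_DECISIONS = {"proceed", "hold", "abort"}

def build_validation_prompt(signal: dict, news_snippets: list[str]) -> str:
    """Assemble the context Claude sees. Snippet sourcing (Polygon,
    Benzinga, etc.) happens upstream; here we only format it."""
    news = "\n".join(f"- {s}" for s in news_snippets)
    return (
        f"Signal: {signal['action']} {signal['ticker']} at {signal['price']} "
        f"(strategy {signal['strategy_id']}).\n"
        f"Recent headlines:\n{news}\n\n"
        'Reply with JSON only: {"decision": "proceed|hold|abort", "rationale": "..."}'
    )

def parse_verdict(model_text: str) -> tuple[str, str]:
    """Defensive parse of the model reply; anything malformed maps to hold."""
    try:
        obj = json.loads(model_text)
    except json.JSONDecodeError:
        return "hold", "unparseable model output"
    decision = obj.get("decision", "hold")
    if decision not in ALLOWED_DECISIONS:
        decision = "hold"
    return decision, obj.get("rationale", "")
```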
The execution layer
Options for placing the actual order:
- Alpaca: clean REST API for US equities and crypto. Good for prototyping.
- Interactive Brokers: institutional-grade, supports global markets, but a complex API.
- Binance / Coinbase API: for crypto.
- On-chain DEX (1inch, Jupiter): for DeFi.
Whichever you pick, isolate the execution layer behind your own abstraction. You will switch brokers eventually. Build a place_order(symbol, side, qty, type) interface and swap implementations underneath.
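A minimal version of that abstraction, assuming Python on the backend. The Protocol defines the interface the rest of the bot depends on; each broker gets its own adapter class underneath. (I've renamed the `type` parameter to `order_type` to avoid shadowing Python's builtin; the PaperBroker here is a stand-in for an Alpaca or IBKR adapter.)

```python
from typing import Protocol

class Broker(Protocol):
    def place_order(self, symbol: str, side: str, qty: float, order_type: str) -> str:
        """Submit an order; return a broker-assigned order id."""
        ...

class PaperBroker:
    """In-memory stand-in that records orders; swap for a real adapter later."""
    def __init__(self) -> None:
        self.orders: list[tuple[str, str, float, str]] = []

    def place_order(self, symbol: str, side: str, qty: float, order_type: str) -> str:
        self.orders.append((symbol, side, qty, order_type))
        return f"paper-{len(self.orders)}"
```

Switching brokers then means writing one new adapter class; the signal, validation, and risk layers never change.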
Risk management is the actual product
Strategies don't blow up accounts. Position sizing and risk management failures blow up accounts. Your bot should refuse to trade if:
- Account drawdown today exceeds X%
- Position size would exceed Y% of NAV
- Correlation with existing positions exceeds Z
- Recent strategy performance has degraded (rolling Sharpe below threshold)
- Volatility regime change detected (VIX above N, BTC realized vol above M)
This is uninteresting code that prevents catastrophic outcomes. Spend more time here than on signal generation.
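The checks above reduce to a single gate function the pipeline calls before every order. A sketch, with every threshold a placeholder to tune for your own account and strategy, not a recommendation; returning the list of tripped limits makes the refusal auditable.

```python
def should_trade(daily_drawdown_pct: float,
                 position_pct_nav: float,
                 max_pairwise_corr: float,
                 rolling_sharpe: float,
                 vix: float,
                 *,
                 max_drawdown: float = 3.0,
                 max_position: float = 5.0,
                 max_corr: float = 0.7,
                 min_sharpe: float = 0.5,
                 max_vix: float = 30.0) -> tuple[bool, list[str]]:
    """Return (ok, reasons_blocked). Refuse to trade if any limit trips."""
    reasons = []
    if daily_drawdown_pct > max_drawdown:
        reasons.append("daily drawdown limit")
    if position_pct_nav > max_position:
        reasons.append("position size limit")
    if max_pairwise_corr > max_corr:
        reasons.append("correlation limit")
    if rolling_sharpe < min_sharpe:
        reasons.append("strategy performance degraded")
    if vix > max_vix:
        reasons.append("volatility regime")
    return (not reasons, reasons)
```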
Backtesting reality
TradingView's built-in backtester is a starting point, not the truth. It has lookahead-bias problems, doesn't model slippage well, and assumes you can always get your fill at the bar close. Real performance will be worse, sometimes much worse.
- Use bar-replay mode to verify your strategy bar-by-bar.
- Model slippage: at least 5–10 bps for liquid US equities, much more for thin crypto.
- Walk-forward test: fit on data older than 6 months, test on the most recent 6 months.
- Out-of-sample is the only sample. A strategy that's amazing in-sample and mediocre out-of-sample is curve-fit and will fail live.
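The walk-forward procedure is easy to get subtly wrong (overlapping windows, test data leaking into training). A sketch of the split logic in terms of bar indices; the window sizes are arguments, and the 6-month fit/test split above corresponds to choosing train_size and test_size in bars accordingly.

```python
def walk_forward_splits(n_bars: int, train_size: int, test_size: int):
    """Yield (train_range, test_range) index pairs that roll forward in time.

    Each test window immediately follows its train window; windows advance
    by test_size, so no test bar is ever seen during its own fit. No shuffling.
    """
    start = 0
    while start + train_size + test_size <= n_bars:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size
```

Fit on each train range, evaluate on the following test range, and judge the strategy only on the concatenated out-of-sample results.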
Mistakes I've watched people make
- Backtested on bull market data, deployed in bear market, surprised.
- Over-leveraged because backtest showed great Sharpe; first 2-sigma move wipes them out.
- Didn't include commissions, especially for high-frequency strategies where commissions destroy the edge.
- Used request.security() with default settings, picked up lookahead bias, looked profitable on the chart but lost live.
- Ran without circuit breakers; a bug triggered 1,000 orders in a minute.
- Trusted Claude's signal-validation override too much; Claude is a useful but imperfect filter, not an oracle.
djEnterprises consults on trading bot architecture (particularly the AI-augmented variants) for clients with serious capital and serious risk tolerance. Book a call if that's you. For everyone else: paper-trade for at least three months before you commit a dollar.
- TradingView: Pine Script v5 documentation
- TradingView: webhook alert syntax
- Alpaca: Alpaca API documentation
- Anthropic: Claude API documentation
- Robert Carver: Systematic Trading (book), for risk management frameworks