This guide is for builders who have a Claude Max subscription, a real project, and a nagging suspicion they're using maybe 30% of what Claude can actually do. If that's you, this post answers every common question I get about getting more out of Claude — with the specific situations behind each answer.
Honest framing: Claude has shipped fast. The product surface in May 2026 (Chat, Code, Cowork, Plans, Console, Artifacts, MCP connectors, skills, agents, hooks, the 1M context Opus model, configurable thinking effort, Fast mode) is genuinely a lot. Most users land in a workflow that works "well enough" and never explore further. That's leaving real productivity on the table.
The Claude product family in 2026
Anthropic ships several products under the Claude brand. They look related but solve different problems:
- Claude.ai (web) — the consumer/professional chat interface. Conversations, file uploads, Artifacts, Projects, MCP connectors, image generation, document analysis. This is where most people start.
- Claude Desktop (macOS / Windows) — the native app version of Claude.ai with the same core feature set plus better integration with your local machine.
- Claude Code (CLI) — the terminal-based developer agent. Runs in your shell, edits files on disk, executes commands, can spin up subagents, supports MCP, slash commands, skills, hooks. This is what's writing your iOS app with you.
- Claude Cowork — collaborative agentic workspace inside Claude.ai for design mockups, multi-file artifacts, and "build me an HTML/CSS prototype" kinds of tasks. The mockup-and-iterate environment.
- Claude Console — console.anthropic.com. The developer dashboard. API keys, usage metering, billing, organization management, rate-limit settings, evaluations, fine-tuning (where available), Workbench for raw prompt testing.
- Claude API — programmatic access. What your Railway backend calls when your RDR2 Companion app sends a question.
- Claude in Chrome / mobile apps — companion clients for the same conversational experience on the go.
About "Claude Design": there isn't currently a standalone product called Claude Design. The design work people refer to ("Claude designed this mockup") happens inside Cowork, using its Artifacts/preview surface to generate visual HTML/CSS prototypes that you can iterate on conversationally. If you're seeing the word "Design" in your UI somewhere I'm not, screenshot it for me — the lineup evolves.
Plans: Free, Pro, Max-$100, Max-$200
The plans are a usage ladder, not a feature ladder. Almost every feature is available on every paid tier; what changes is how much you can use them before you hit a cap.
- Free — light Claude.ai access, Sonnet by default, daily message cap, no Code.
- Pro ($20/mo) — ~5x Free's usage, access to Opus + Sonnet, Claude Code on a limited budget, Projects, basic connectors. Good for "Claude is my thinking partner" use cases. Tight for serious building.
- Max ($100/mo) — roughly 5x Pro's quota. Real headroom for Claude Code work. Most solo builders shipping real apps live here.
- Max ($200/mo) — roughly 20x Pro's quota. The "I'm running Code all day every day" tier. Multiple parallel agents, long context windows, heavy tool use.
- Team / Enterprise — pooled usage, SSO, admin controls, business terms. Different shape entirely.
Should you upgrade from Max-$100 to Max-$200? The honest test: do you hit your usage cap before your workday ends? If you regularly hit the 5-hour-window throttle while in the middle of a Claude Code session, $200 buys back your day. If you only hit it occasionally during heavy refactors, $100 is fine. For a builder shipping multiple iOS apps using Claude Code as the primary IDE assistant, $200 usually pays back in saved time within the first week of any given month.
The cleanest way to decide: check /usage in Claude Code at the end of each day for a week. If you're regularly past 80% of your window allocation, the bigger plan is the right call.
The models: Opus 4.7, Opus 4.7 1M, Sonnet 4.6, Haiku 4.5, Opus 4.6 Legacy
The model picker in your desktop app (Cmd-1 through Cmd-5) is the single most under-used optimization. The model you pick changes both the quality and the cost of every turn.
Opus 4.7 — the flagship
Highest capability. Best at: architectural decisions, debugging across many files, long reasoning chains, ambiguous problem framing, code review of subtle bugs. Most expensive per token. Use when: the problem is genuinely hard and you'd rather burn tokens than re-do the work.
Opus 4.7 1M — same model, 1M-token context
Same intelligence as standard Opus 4.7 but with a 1 million-token context window (roughly 750,000 words of conversation history + files you've shared). Useful when: you're deep into a session that's been going for hours and a normal context window would force a compact; you're working with a large codebase and want Claude to hold many files in memory at once; you're doing research that requires referencing 50+ documents. Use sparingly — 1M context is dramatically more expensive per call than standard context. Reserve for genuinely long sessions.
Sonnet 4.6 — the workhorse
Roughly 80% of Opus's quality on most tasks, 2-3x faster, ~5x cheaper. Best at: most coding work, content generation, document analysis, focused tasks where the framing is clear. Use as default for routine work. Switch to Opus when you hit something where Sonnet is visibly struggling.
Haiku 4.5 — fast and cheap
The smallest, fastest, cheapest current model. Best at: high-volume mechanical work, simple transformations, fast iteration, batched operations. Surprisingly competent given the price. Use when: you're running an automation that calls Claude in a loop, you need many small responses fast, or you're prototyping flows where quality is secondary to volume.
Opus 4.6 Legacy
The previous Opus, kept available because some workflows have been tuned around its specific behavior. Use only if you have a prompt or evaluation set that performs better on 4.6 than 4.7. For new work, ignore.
Practical model strategy
- Default to Sonnet 4.6 for most tasks. It's good enough for 80% of work and costs much less.
- Promote to Opus 4.7 when you're debugging something hard, designing architecture, or doing work that's expensive to redo.
- Promote to Opus 4.7 1M only when you actually need the context. A short conversation on the 1M model wastes money for no benefit.
- Drop to Haiku 4.5 for batch / automation / fast-iteration loops.
You can switch models mid-conversation with /model in Claude Code. In Claude.ai the model picker is one click. Use it.
Thinking effort: Low / Medium / High / Extra high / Max
Thinking effort controls how much "internal reasoning" Claude does before responding. Higher effort = more invisible chain-of-thought before the visible answer = better answers on hard problems, slower responses, more tokens spent.
- Low — near-instant. Use for greetings, simple recall, quick lookups, one-liner code edits.
- Medium — the default sweet spot for most chat work, content drafting, and "explain this code."
- High — (your current default per the screenshot) good for any non-trivial coding work, multi-step problems, refactors, code review.
- Extra high — complex architectural decisions, multi-file debugging, problems where you've already tried the obvious solutions.
- Max — the hardest problems where you genuinely want Claude to think for 30-60+ seconds before responding. Use rarely — expensive and slow. Most users never need it.
Practical rule: default to Medium for chat / writing / simple code, High for serious coding work, Extra high only when High is visibly failing. Max is reserved for "I need this right and I'll wait."
Fast mode
Fast mode tells Claude to skip the explicit reasoning chain and respond directly. The visible behavior: noticeably quicker turn-around with slightly lower depth on complex tasks. Useful when you're in a rapid back-and-forth flow (renaming variables, batch text edits, quick yes/no decisions) where waiting on chain-of-thought slows the rhythm. Bad fit when the problem actually benefits from reasoning. Toggle on for fast iteration, off for hard problems.
Chat vs Claude Code vs Cowork — when to use which
This is the single most common confusion. Each tool has a sweet spot:
Use Claude.ai / Claude Desktop (Chat) when:
- You're thinking, not building. Brainstorming features, debating architecture, drafting copy, asking research questions, summarizing PDFs.
- You want to use Artifacts for a single-file deliverable (a one-pager doc, a React component, a chart, a quick HTML preview).
- You want image generation, document Q&A, or other consumer-style features.
- You're using Projects to keep a long-running topic isolated with its own knowledge base.
- You want to iterate on writing — this blog post lives there.
Use Claude Code when:
- You want Claude to edit your real source code on disk — multi-file changes, refactors, rename-across-files.
- You want Claude to run your shell commands — `git`, `npm`, `xcodebuild`, `curl`, `railway up`.
- You want Claude to interact with external tools via MCP — App Store Connect, GitHub, Railway, YouTube, Slack.
- You want subagents running in parallel (research while you build, build while you test).
- You're doing any sustained development work longer than a 5-minute Q&A.
Use Cowork when:
- You want visual mockups of an app idea before you commit to building it — "show me three landing-page variations."
- You're doing multi-file design work that's still exploratory — not yet a real codebase, more than a single Artifact.
- You want a shared workspace the agent can iterate inside without touching your local disk.
- You're showing stakeholders a clickable prototype before a full build.
For your existing iOS work (RDR2 Companion, GTA V Companion): Claude Code in the project folder is your daily driver. Cowork comes in when you want to design something new visually. Chat comes in for the strategy and writing work.
Transitioning a Cowork mockup into Claude Code
This is your specific question. Here's the workflow that works:
- Lock the mockup. Iterate in Cowork until the HTML/CSS/visual feel is what you want.
- Export the artifact. Download the HTML/CSS files or copy the rendered code out of Cowork.
- Save it to your project repo under a path like `mockups/feature-name/`. Per your `CLAUDE.md`, this is exactly the convention you've already set up.
- Open Claude Code in the project and ask: "Look at `mockups/feature-name/` and translate this design into native SwiftUI. The mockup is HTML/CSS — treat the visual styling as the spec. Match colors, spacing, typography. Use SwiftUI patterns from `CLAUDE.md` conventions."
- Iterate via screenshots. After Claude implements, run the simulator (Claude Code can do this), screenshot the result, drop the screenshot into the chat, and ask Claude to compare against the mockup. Iterate.
The friction in this workflow is usually step 4 — Claude needs the mockup as a clear reference. The trick is putting the mockup files inside the project directory so Claude Code can read them as part of its working set. Pasting screenshots also works, but a saved HTML mockup is the cleanest spec.
Slash commands — the ones that actually matter
Claude Code ships with dozens of slash commands. Most you'll never use. Here are the ones that materially change your workflow:
- `/help` — lists every available command in your current install.
- `/init` — bootstraps a `CLAUDE.md` file by analyzing the current project. Run this once per new project, then edit.
- `/model` — switch models mid-session. Hop to Opus for a hard part, back to Sonnet for routine.
- `/context` — shows how full your current context window is. Watch this approach the limit during long sessions.
- `/usage` — current usage against your plan's window. This is the closest thing to a "usage bar" you asked about — run it any time to see where you stand.
- `/cost` — cost of the current session if you're on API billing rather than a plan.
- `/compact` — tells Claude to summarize the conversation so far and continue with the summary as context. Use this when `/context` shows you're getting close to the limit but you don't want to lose continuity.
- `/clear` — full reset of the conversation. Use when you're starting a genuinely new task and the prior context is no longer relevant.
- `/resume` — resume a previous session from history.
- `/agents` — manage the agent definitions available to your project (the subagents that can be spawned for parallel work).
- `/skills` — manage skills (reusable instruction packs Claude loads for specific tasks).
- `/mcp` — list and manage your MCP servers. Add or remove the GitHub, Apple Notes, Railway, etc. integrations.
- `/hooks` — manage lifecycle hooks (run-this-before-edit, run-this-after-commit, etc.).
- `/add-dir` — explicitly include another directory in Claude Code's working context. Useful when your project depends on a sibling repo.
- `/review` — trigger a structured code review of the current changes.
- `/pr-comments` — pull GitHub PR comments for the current branch.
- `/doctor` — diagnose your Claude Code installation when something's broken.
- `/ide` — manage IDE integration (VS Code, JetBrains, Xcode connector).
- `/config` — configuration settings.
- `/login`, `/logout` — account switching.
- `/export` — export the current conversation to a file.
- `/version` — current Claude Code version.
- `/quit` / `/exit` — end the session.
The five commands you'll use weekly: /usage, /context, /compact, /clear, /model. Memorize these.
CLAUDE.md (and AGENTS.md) — why yours sometimes fails
You've already invested in a serious CLAUDE.md for the RDR2 project. When it works, Claude opens a session and instantly knows: you're Daniel, UCF Computer Engineering 2000, returning to dev after 25 years, building on Railway, shipping to App Store, prefers SwiftUI. When it doesn't, you wonder what you're paying for.
The most common failure modes and their fixes:
- The file is too long. Claude reads it but the signal-to-noise ratio is bad. Fix: put the most operationally important rules in the top 30 lines — "don't commit without confirmation," "never put API keys in Swift files," "always run /mockups check before UI work." Everything else can be lower.
- The file is in the wrong place. Claude Code reads `CLAUDE.md` from the project root. If you've started Claude from a subdirectory, you may be missing the project-level file. Fix: always launch Claude from the project root (or use `/add-dir` to expand the context).
- Conflicting instructions across files. A `CLAUDE.md` in your home directory + a project `CLAUDE.md` + an `AGENTS.md` all giving overlapping rules will confuse Claude. Fix: put global stuff (your communication style, model preferences) in `~/CLAUDE.md` and project-specific stuff in the project's `CLAUDE.md`. Don't repeat.
- Stale content. Your `CLAUDE.md` still references decisions from six months ago that no longer apply. Fix: end every meaningful session with "Claude, update CLAUDE.md to reflect what we did this session."
- The "important" claims aren't actually important. Marking every line as IMPORTANT trains Claude to ignore the marker. Fix: reserve IMPORTANT for the 3-5 rules that, if violated, would cause real harm (data loss, leaked keys, broken builds).
- It says what but not why. "Always use NavigationStack." Why? When? Fix: include the reasoning so Claude can generalize when your rule doesn't directly apply.
AGENTS.md is a related convention some teams use to define subagent behavior — what each agent type does, when to invoke it. If you're using Claude Code's agent system, an AGENTS.md alongside your CLAUDE.md is helpful. For solo work, the default agents are usually sufficient.
MCP connectors — which to enable, when too many hurts
MCP (Model Context Protocol) is the open standard Anthropic created for connecting Claude to external tools and data. Each MCP server adds a set of tools Claude can invoke: read your GitHub repos, query your Notion workspace, control Chrome, manage Apple Notes, deploy to Railway, etc.
Your concern is correct: too many connectors enabled at once does hurt performance. Each connector advertises its tools to Claude, and those tool definitions cost context-window tokens before the conversation even starts. A bloated tool list also makes Claude pick the wrong tool more often.
The right approach is per-project scoping:
- Always-on globally — only the few connectors you literally use every session. For most builders: GitHub, filesystem (built-in), maybe Apple Notes if that's your second brain.
- Per-project — enable connectors that match the work. RDR2 project gets App Store Connect API + Railway. The website project gets Vercel + Plausible.
- Just-in-time — some connectors (Figma, YouTube, niche APIs) only get enabled when you're doing that specific work, then disabled again.
Use /mcp to inspect what's currently loaded. If the tool count is north of ~50, you're probably over-equipped. Trim.
Daily-use MCPs for an iOS builder:
- GitHub MCP — manage issues, PRs, fetch comments without leaving the chat.
- Filesystem (built-in) — reading/writing project files.
- App Store Connect MCP (or a custom one wrapping the API) — check build status, app metadata.
- Railway MCP — deploy, check logs.
- iOS Simulator / Xcode bridge — for the simulator screenshots and builds.
- Plausible / analytics MCP — if you want to ask "how did traffic do this week" without context-switching.
That's 5-6 servers. Resist the temptation to install every interesting MCP you see — each one taxes the context window whether you use it that session or not.
Tokens, limits, and "why am I throttled on Max?"
This is the most confusing part of the product for new builders, so let's slow down.
What is a token?
Tokens are the unit Claude (and every modern LLM) measures input and output in. Roughly: 1 token = ~4 characters of English text = ~0.75 words. A typical conversation message of 100 words is ~133 tokens. A 1,000-line code file is roughly 10,000-15,000 tokens. The phrase "How does this work?" is about 5 tokens.
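The 4-characters-per-token rule of thumb is easy to wire into your own tooling. A minimal sketch in Python; this is a heuristic, not Claude's actual tokenizer, so treat the numbers as estimates only:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters of English per token.
    The real tokenizer differs, so use this only for ballpark budgeting."""
    return max(1, round(len(text) / 4))

message = " ".join(["word"] * 100)   # a ~100-word message
print(estimate_tokens(message))      # ~125 tokens by this heuristic
```

Handy for a quick sanity check before pasting something large into a session.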
How limits work on Max
Your plan gives you a 5-hour rolling window of usage. Inside that window, you have a quota. Once you hit it, you wait until the oldest hour rolls off before you can continue at full speed. The window resets continuously, not at a fixed time.
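The rolling window is easier to reason about as code. A toy Python model follows; the quota number is invented and the real server-side metering is more nuanced, but the trimming logic captures why usage "comes back" continuously rather than at a fixed reset time:

```python
from collections import deque

class RollingWindowQuota:
    """Toy model of a 5-hour rolling usage window.
    The token limit here is illustrative, not a real plan number."""

    def __init__(self, limit_tokens: int, window_s: int = 5 * 3600):
        self.limit = limit_tokens
        self.window_s = window_s
        self.events = deque()  # (timestamp, tokens) pairs, oldest first

    def _trim(self, now: float) -> None:
        # Drop usage older than the window; this is the "rolling" part.
        while self.events and now - self.events[0][0] >= self.window_s:
            self.events.popleft()

    def record(self, tokens: int, now: float) -> None:
        self._trim(now)
        self.events.append((now, tokens))

    def remaining(self, now: float) -> int:
        self._trim(now)
        return self.limit - sum(t for _, t in self.events)

q = RollingWindowQuota(limit_tokens=1_000_000)
q.record(400_000, now=0)             # heavy morning session
q.record(300_000, now=2 * 3600)      # afternoon work
print(q.remaining(now=3 * 3600))     # 300_000: both sessions still in-window
print(q.remaining(now=5.5 * 3600))   # 700_000: the morning usage rolled off
```

The takeaway: waiting doesn't reset everything at once; your oldest usage expires first.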
Within that window, two things consume quota fastest:
- Long conversations. Every turn sends the full conversation history to the model. A 4-hour conversation with many tool calls has been re-sending tens of thousands of tokens every turn. The longer it runs, the more each new turn costs.
- Large file reads. When Claude Code reads a 5,000-line file, that's ~50,000 tokens of input on that one turn. Reading many large files in a session adds up fast.
Why you hit limits on Max even with "plenty left"
You probably saw a number on the billing page (monthly total) that looked fine, but you hit the 5-hour window cap. The monthly view doesn't show that.
/usage in Claude Code shows your window quota specifically — that's the relevant number for "am I about to hit a wall."
Mitigations that work
- Use `/compact` proactively. When `/context` hits ~60%, run `/compact` to summarize the conversation. This cuts the per-turn token cost dramatically.
- Use `/clear` at task boundaries. Switching from "fix this bug" to "design the new feature" is a context boundary. Don't carry the bug-fix conversation into the design conversation.
- Don't paste huge files when you can reference them. Telling Claude "look at `Services/ClaudeService.swift` lines 80-120" is much cheaper than pasting the file.
- Drop to Sonnet for routine work. Sonnet uses far less of your quota per turn than Opus. Promote only when needed.
- Lean on prompt caching. If you continue a conversation within ~5 minutes, the prompt cache stays warm and you pay much less. Long pauses between turns are expensive.
- Spawn agents for parallel work. A subagent in a fresh context is cheaper than continuing the main thread with even more history.
When to start a new session
Start a new Claude Code session when:
- `/context` shows 80%+ full and you don't want to `/compact` (compacting can lose important nuance).
- You're switching tasks at a clean boundary — finished feature A, starting feature B.
- The conversation has drifted away from useful context (lots of failed attempts, dead-end debugging, exploratory commands that aren't relevant going forward).
- You want to use a different agent set, MCP set, or model strategy and the current session was configured for the previous mode.
- Claude has gotten "stuck" — repeatedly making the same mistake despite correction. A fresh session often unsticks because it doesn't carry the failed-attempt context.
Don't start a new session when:
- Mid-debug. You'll lose the investigative trail. Use `/compact` instead.
- You haven't updated `CLAUDE.md` with anything important from this session that future Claude needs to know. Doing this should be the last step before starting fresh.
- You're partway through a feature where the unwritten "we tried X and Y didn't work" context matters.
Visual usage indicator: The cleanest answer to "is there a usage bar in the desktop app" is no — the desktop app doesn't currently surface a persistent bar. /usage in Claude Code is the closest you get. For Claude.ai (Chat), the usage detail is on console.anthropic.com under Usage. Anthropic adds UI improvements regularly though, so this may change.
Why Claude Code "forgets" mid-session that it can do things
Your specific symptom: sometimes Claude knows it can drive Xcode and App Store Connect directly; other times in the same chat it just gives you instructions to do it manually. This is genuinely confusing and usually has one of three root causes:
- The relevant MCP server disconnected or wasn't loaded for this session. Run `/mcp` — if the Xcode bridge or App Store Connect MCP isn't listed, Claude isn't being lazy; it literally doesn't have the tool. Re-enable it.
- The conversation was compacted and the tool-availability context got trimmed. After a `/compact`, sometimes Claude doesn't re-acknowledge available tools clearly. Fix: ask "what tools do you currently have available?" Claude will list them and effectively re-prime itself on what it can do.
- You're hitting a context boundary where Claude defaults to "instructions to user" mode. If the conversation has been mostly Q&A for a while, Claude can drift into instruction-giver mode rather than tool-using mode. Fix: be explicit — "use the App Store Connect MCP to check the build status yourself rather than telling me how to do it."
The general principle: when Claude offers instructions instead of action, ask "can you do this directly?" That single phrase reliably gets Claude back into agent mode if the tools are available.
iOS development with Claude — the toolchain
You asked specifically about Claude Code + Xcode vs alternatives. Honest comparison:
Do iOS apps require Xcode?
For App Store distribution: yes, effectively. The build, signing, and submission flow all run through Xcode (or its underlying command-line tools xcodebuild, xcrun, altool). Cross-platform frameworks (Flutter, React Native, .NET MAUI) all eventually compile through the Xcode toolchain to produce an App Store-compatible binary. There's no realistic path that bypasses Xcode for production iOS distribution.
Claude Code + Xcode
What you're using today. Claude Code edits Swift files directly, runs xcodebuild, drives the iOS Simulator, takes screenshots for verification, and can interact with App Store Connect via API for build status. This is currently the most agentic iOS development workflow available. The combination of native Swift comprehension + tool use + simulator verification is hard to beat.
ChatGPT Codex
OpenAI's coding agent. Can edit code and run commands. Reasonable Swift support. Less Apple-toolchain-aware than Claude Code in my experience — particularly weaker at App Store Connect, TestFlight, and the signing dance. Strongest as a second opinion or for specific tasks where its model has particular strength.
SuperGrok / Grok in xAI's tools
Improving but not yet at parity for iOS development. Better suited to general Q&A and real-time research.
Gemini
Google's models can write Swift, but the agentic iOS workflow story is weaker than Claude Code's. Better for non-iOS work or as a cross-checker.
Cursor
An AI-first VS Code fork. Excellent for web/full-stack work where the IDE-level integration matters. Less compelling for iOS specifically because Xcode is the canonical iOS IDE and Cursor doesn't replace it. Some builders use Claude Code in terminal for iOS and Cursor for web in parallel.
Replit (or "Replic" as you wrote)
Cloud development environment with strong AI integration. Best for web apps and rapid prototyping. Limited utility for iOS App Store distribution because of the Xcode dependency.
Visual Studio on Mac
You mentioned you installed it. Honestly: for iOS work, Visual Studio on Mac isn't the right tool. Microsoft has wound down VS for Mac. You'd want Visual Studio Code (different product, lighter, cross-platform), but even VS Code is just a text editor for iOS — you still need Xcode underneath. Stick with your current Claude Code + Xcode workflow.
Firebase / Google Antigravity for iOS?
Firebase is a backend-as-a-service (auth, database, push notifications, analytics). It's perfectly usable as the backend for an iOS app, but it doesn't replace Xcode — it's a backend, not an IDE. Antigravity is Google's agentic IDE; like Cursor, it doesn't replace Xcode for iOS. Neither is "an iOS development tool" in the way you're thinking.
The bottom line for your situation
Claude Code + Xcode is the right toolchain for your iOS work today. You're not missing out by not adopting Cursor / Replit / Antigravity / Visual Studio. The marginal gain from a second tool is small and the cost of switching context is real. Stay focused on Claude until you hit a specific limitation Claude can't address.
Making your apps look premium — the design overhaul question
Your RDR2 and GTA V Companion apps "function well but lack polish." This is solvable. The workflow that produces premium-feeling iOS apps with Claude:
- Define the aesthetic explicitly in `CLAUDE.md`. "Western / RDR2: warm desaturated palette — bone, oxblood, sage. Period serif headings. Generous whitespace. No bright accent colors. Avoid 2020-era gradient-blob design."
- Build a reference deck. Screenshots from genuinely premium apps you admire (not just other companion apps). Save them in `references/design/` alongside the mockups folder.
- Use Cowork to generate visual mockups of redesigned screens before touching SwiftUI. Iterate the visuals quickly when you don't have to compile.
- Then ask Claude Code: "Take `mockups/redesign-home-screen.html` and rewrite `Views/HomeView.swift` to match. Use the existing color tokens. Keep all existing functionality — only the visual layer changes."
- Verify in the simulator by taking screenshots and asking Claude to compare against the mockup. Iterate.
- One screen at a time, not the whole app at once. A premium-feeling app is the result of many small choices — per-screen iteration produces those choices.
- Steal shamelessly from your design references. The specific shadows, the specific corner radii, the specific font weights of the apps you admire are not their differentiation. Their product is. Adopt their visual vocabulary freely.
This is genuinely the workflow that produces premium-looking iOS apps with AI assistance in 2026. Cowork for visual exploration, Code for the implementation, simulator screenshots for verification.
Cursor, Manus, Replit and the wider AI tool landscape
Brief opinions:
- Cursor — legitimately excellent for web/full-stack work in VS Code. If you also build web apps, worth a dedicated post (it's on the queue). Doesn't beat Claude Code for iOS.
- Manus — agentic AI for general computer tasks. Interesting and improving but still rough for sustained development work. Worth watching.
- Replit — the right tool when you want a fully cloud-hosted IDE for web apps. Their AI is good. Not relevant for iOS.
- Aider — open-source CLI similar to Claude Code, multi-LLM. Solid if you want vendor flexibility but loses MCP, Skills, and Anthropic-specific features.
- Devin / Replit Agent / OpenDevin — full-autonomy agents. Demos are impressive; real-world reliability for sustained dev work is below Claude Code's bar today.
Your instinct is right: staying focused on one tool until you've maxed out its capability beats jumping between five tools that each give you 80% of what Claude already does. Revisit alternatives when you hit a specific Claude limitation, not before.
How APIs work + the most useful ones
An API (Application Programming Interface) is a contract between two pieces of software: "send a request in this format to this URL, and I'll send a response back in this format." That's it. The API doesn't care if you're a person typing curl commands or another piece of software making the request.
Concrete example: when your RDR2 Companion app on a user's phone wants to ask Claude a question, it sends an HTTPS request to your Railway backend; your backend sends an HTTPS request to api.anthropic.com with the user's question + your API key; Anthropic's servers run the inference and send the response back; your backend forwards it to the user's phone. Two API calls, two contracts.
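To make the middle hop concrete, here's a Python sketch of the request your backend assembles before sending it to Anthropic. The endpoint and headers follow the documented Messages API shape; the model id is a placeholder (check the current model list in the Console) and the key shown is obviously fake:

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

def build_claude_request(question: str, api_key: str) -> dict:
    """Assemble the HTTPS request the backend sends to Anthropic.
    The model id below is a placeholder, not a guaranteed-current id."""
    return {
        "url": API_URL,
        "headers": {
            "x-api-key": api_key,  # lives on the backend, never in the app
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": "claude-sonnet-latest",  # placeholder id
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": question}],
        }),
    }

req = build_claude_request("Where do I find the legendary bear?", "sk-ant-fake")
print(json.loads(req["body"])["messages"][0]["role"])  # -> user
```

Your backend then POSTs `body` to `url` with those headers and forwards the JSON response to the phone. The contract is exactly this shape, which is what makes it an API.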
APIs you'll encounter as a builder:
- Anthropic API — what Claude API calls look like under the hood.
- OpenAI API — same shape, different vendor.
- App Store Connect API — manage app metadata, builds, TestFlight, reviews programmatically.
- GitHub API — fetch repos, issues, PRs, comments.
- Railway API — deploy, manage services, fetch logs.
- Vercel API — same for Vercel deployments.
- YouTube Data API — channels, videos, comments.
- Stripe API — payments.
- Twilio API — SMS, voice.
- Apple Push Notification service (APNs) — send push notifications to iOS devices.
Most APIs require an API key (a long secret string) that authenticates your requests. Never put API keys in client-side code — including your iOS app's Swift source. They live on your backend (Railway, in your case) where users can't extract them.
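On the backend, the standard pattern is to read the key from the server's environment at request time rather than from any source file. A minimal Python sketch; the variable name `CLAUDE_API_KEY` is my invention, so use whatever your Railway service actually defines:

```python
import os

def get_api_key() -> str:
    """Read the secret from the server environment.
    CLAUDE_API_KEY is an illustrative name; the key never
    appears in source control or in the shipped app binary."""
    key = os.environ.get("CLAUDE_API_KEY")
    if not key:
        raise RuntimeError("CLAUDE_API_KEY not configured on the backend")
    return key

os.environ["CLAUDE_API_KEY"] = "sk-ant-demo"  # stand-in for Railway's env config
print(get_api_key()[:6])  # -> sk-ant
```

Railway (like most hosts) has a dedicated environment-variables panel for exactly this, so rotating a leaked key is a config change, not a code change.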
When to use code from GitHub in your own development
The honest test before pulling someone's GitHub project into your app:
- License — check the LICENSE file. MIT, Apache 2.0, BSD: generally fine for commercial use. GPL: viral — using it can require open-sourcing your whole app. No license at all: legally murky, treat as off-limits.
- Maintenance — last commit date, open vs closed issues ratio, response time on issues. A repo with no activity in 2 years is technical debt waiting to happen.
- Stars + forks — rough popularity signal, not a quality guarantee.
- Dependencies — does this library pull in 30 transitive dependencies? Each one is supply-chain risk.
- Audit the code yourself — for any library that handles user data, auth, or payments, read the source before integrating. Claude can do this audit in minutes.
- Active CVEs — check Snyk, GitHub's Security tab, or just search "[library] CVE 2026."
- Maintainer reputation — is the maintainer a known good actor or a single anonymous account?
For iOS specifically, Swift Package Manager is the cleanest dependency path. Add packages via Xcode's Swift Packages UI, pin to a tagged version (never a branch), audit transitive dependencies on first add.
Telling Claude you're learning
Two-step setup that works for every Claude product:
- Add to your global `CLAUDE.md` (the one in `~/CLAUDE.md`, not project-specific):

```markdown
## About me
I'm Daniel, a Computer Engineer (UCF 2000) returning to development after a
25-year gap. I have strong engineering fundamentals but the specific stacks
I'm using now — Swift, SwiftUI, Claude API, Railway, MCP, modern git
workflows — are largely new to me.

## How I want to work
- Explain WHY you're doing something, not just what.
- Flag better approaches I should consider before proceeding.
- Teach as you go — don't talk down, but don't assume current knowledge of new stacks.
- When a tool would be a better fit, say so explicitly.
```

- For one-off conversations, open with: "I'm still learning [topic]. Explain your reasoning as you work and flag any time you make a non-obvious choice." This sets the mode for the conversation.
Both Claude Code and Claude.ai will adapt within a couple of turns. The CLAUDE.md approach is durable across sessions; the one-off framing only lasts for the current conversation.
Daily playbook
What "using Claude at maximum efficiency" actually looks like as a working pattern:
- Start of day: open Claude Code in the project root. Run `/usage` to see your starting state.
- Pick the right model. Sonnet by default; Opus when you know the work is hard; 1M only for deep sessions.
- State the goal explicitly. "Today I want to ship the v1.3 freemium update. The remaining work is X, Y, Z." Claude works better with a stated goal than incremental tasks.
- Use the right product for the task. Cowork for visual exploration; Code for build; Chat for thinking.
- Watch `/context` at the 60% mark. Compact or branch to a new session before you hit the wall.
- End-of-session: update `CLAUDE.md` with any decisions made, conventions established, or known limitations.
- End-of-week: review what slowed you down. If a particular tool kept failing, fix the tool. If a particular pattern kept working, codify it.
This guide is the foundation. Several questions in here have full posts of their own coming: GitHub fundamentals, backend choice deep-dives (Railway, AWS, Azure, GCP), AI provider economics for solo builders, and the AI-tools landscape. If something here didn't fully land for you, tell me which section and I'll go deeper.
- Anthropic — Claude documentation
- Anthropic — Claude Code documentation
- Model Context Protocol — MCP specification
- Anthropic — Console (usage, billing, API keys)