# LabForge: Building an AI-Powered iOS App in ~40 Hours
A solo tinkerer's journey from idea to App Store submission — integrating Claude AI, Firebase, and RevenueCat into a production SwiftUI app.
By Nate R.
---
## The Problem
Homelabs are infinitely customizable. That's what makes them appealing to enthusiasts — and overwhelming to newcomers.
Someone interested in self-hosting faces an immediate wall of decisions: Raspberry Pi or repurposed Dell OptiPlex? Docker or bare metal? What's a realistic budget for a media server? The answers live scattered across Reddit threads, YouTube videos, and forum posts from 2019 that may or may not still apply.
I wanted to build something that could synthesize all of that tribal knowledge into personalized, actionable recommendations. Not a static guide, but a tool that adapts to your budget, experience level, and goals.
LabForge is that tool: an iOS app that uses Claude AI to generate custom homelab configurations — complete with hardware lists, rough cost estimates, and step-by-step setup instructions.
---
## Project Scope
- Development Time: ~40 hours across 8 weeks (evenings and weekends, while working full-time)
- Timeline: September – November 2025
- Platform: iOS 16+ (SwiftUI)
- Backend: Node.js proxy on Linode
- AI Model: Claude Haiku via Anthropic API
- Status: Final polish before App Store submission
The time constraint forced discipline. I worked in focused blocks, prioritizing features that delivered the most value with the least complexity.
---
## Architecture Overview
The system has three main components: a SwiftUI iOS app, a Node.js backend proxy, and Claude AI for generation.
### Why a Backend Proxy?
The most important architectural decision was routing all AI requests through a backend proxy rather than calling Anthropic's API directly from the app.
Security — The API key never touches the client. Even with certificate pinning and obfuscation, shipping secrets in an iOS bundle is a liability. The backend holds the key; the client holds nothing sensitive.
Rate Limiting — Free users get 1 generation per day; Pro users get 10. This logic must be server-authoritative. If the client controlled rate limits, anyone with a proxy tool could bypass them. The server tracks usage in Firestore and returns a 429 when limits are exceeded.
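The server-authoritative check is simple in shape. Here is a minimal sketch, with an in-memory Map standing in for the Firestore usage collection; the function and field names are illustrative, not LabForge's actual code:

```javascript
// Server-authoritative daily rate limit. An in-memory Map stands in for
// Firestore here; keys are "userId:YYYY-MM-DD", values are request counts.
const usage = new Map();

function checkRateLimit(userId, isPro, now = new Date()) {
  const day = now.toISOString().slice(0, 10);
  const key = `${userId}:${day}`;
  const count = usage.get(key) ?? 0;
  const limit = isPro ? 10 : 1;

  if (count >= limit) {
    return { allowed: false, status: 429 }; // Too Many Requests
  }
  usage.set(key, count + 1);
  return { allowed: true, status: 200, remaining: limit - (count + 1) };
}
```

Because the count lives server-side and resets with the calendar day, a patched client or intercepted request can't grant itself extra generations.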
Flexibility — I can update the Claude prompt, adjust rate limits, or swap models without shipping an app update. The proxy runs on a $5/month Linode VM managed by PM2.
---
## Key Technical Decisions
### 1. Structured Output for Reliable Parsing
The biggest technical challenge was getting Claude to produce consistent, parseable output. Early attempts returned free-form text that broke the UI in unpredictable ways.
The solution: an ultra-specific JSON schema baked into the system prompt. Every field is defined with types, constraints, and examples — including hardware components with names, prices, categories, and Amazon search queries for affiliate link generation.
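The shape of the approach looks roughly like this — embed an explicit schema in the system prompt, then validate the model's reply before it ever reaches the UI. The field names below are hypothetical, not LabForge's actual schema:

```javascript
// Illustrative sketch: a schema baked into the system prompt, plus a
// defensive parse step. Field names are hypothetical.
const SYSTEM_PROMPT = `Respond with ONLY valid JSON matching this schema:
{
  "title": "string",
  "totalCostUSD": "number",
  "components": [
    { "name": "string", "category": "string",
      "priceUSD": "number", "amazonQuery": "string" }
  ]
}`;

function parseConfig(raw) {
  const data = JSON.parse(raw); // throws on malformed JSON
  if (typeof data.title !== 'string' ||
      typeof data.totalCostUSD !== 'number' ||
      !Array.isArray(data.components)) {
    throw new Error('Response does not match schema');
  }
  return data;
}
```

Failing fast on a schema mismatch means a bad generation produces a retryable error, not a broken screen.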
When working with LLMs in production, the prompt is the product. I spent more time refining the system prompt than writing Swift code.
### 2. Hybrid Services Pattern
I wanted Firebase for production but needed the app to function during development without network calls. The solution was a hybrid services pattern that abstracts the storage layer behind feature flags.
In debug builds, I can flip a flag and test the entire flow offline. In production, everything routes through Firebase. This pattern covered authentication, favorites storage, and user data — any service with both local and cloud implementations.
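The pattern itself is language-agnostic: one interface, two implementations, selected by a flag. A sketch of the idea (in JavaScript for brevity, with illustrative names — the real app does this in Swift behind debug-build flags):

```javascript
// Hybrid services sketch: the caller depends only on the interface;
// a feature flag decides which implementation it gets.
const USE_CLOUD = false; // in the real app, a debug-build feature flag

class LocalFavoritesService {
  constructor() { this.items = new Set(); }
  async save(id) { this.items.add(id); }
  async all() { return [...this.items]; }
}

class CloudFavoritesService {
  // In production this would wrap Firestore calls; stubbed here.
  async save(id) { throw new Error('network disabled in this sketch'); }
  async all() { return []; }
}

function makeFavoritesService() {
  return USE_CLOUD ? new CloudFavoritesService() : new LocalFavoritesService();
}
```

Because call sites only see the interface, switching between offline and cloud modes never touches feature code.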
### 3. Request Signing
Every API request is signed with HMAC-SHA256 to prevent replay attacks and unauthorized access.
The iOS app retrieves a signing secret from Firebase Remote Config, generates a timestamp, and creates an HMAC signature of the timestamp and request body. The backend verifies the timestamp is recent (within 5 minutes), recalculates the HMAC, and only processes valid requests.
The signing secret lives in Remote Config — not the app bundle. If compromised, I can rotate it instantly without an App Store update.
My CySA+ background shaped this approach. Thinking about request validation, replay prevention, and secret rotation from the start meant I didn't have to bolt security on after the fact.
---
## Development Timeline
### Weeks 1–2: Wireframe and UI Skeleton
Started with a basic wireframe and jumped straight into SwiftUI. The visual feedback loop of "change code, see result" accelerated learning dramatically. The initial UI was rough but functional: a form with dropdowns for use case, budget, and experience level, plus a results view.
### Weeks 3–4: Backend and AI Integration
The most technically dense phase. Set up a Linode VM with Ubuntu 24.04, Node.js/Express with PM2, Caddy for automatic HTTPS, Firebase Admin SDK for token verification, and the Anthropic SDK for Claude calls. The first successful end-to-end generation — tapping "Generate" on my phone and seeing a formatted homelab config appear — was the most satisfying moment of the project.
### Weeks 5–6: Authentication and Subscriptions
Firebase Auth with Sign in with Apple (required for App Store) and email/password. RevenueCat for subscription management. Free users get 1 generation per day; Pro subscribers ($4.99/month or $29.99/year) get 10 per day plus custom parameters. This phase had the most third-party integration complexity: App Store Connect products, RevenueCat entitlements, syncing subscription state to Firestore, and handling edge cases like restore purchases and expiration.
### Weeks 7–8: Polish and App Store Prep
Privacy Manifest (required by Apple as of 2024), Firestore security rules hardening, error message UX, App Store screenshots and metadata, and legal pages for Privacy Policy and Terms of Service.
---
## What I'd Do Differently
Start with the data model, not the UI. I refactored the core LabConfiguration struct three times as I discovered edge cases in Claude's output. Defining the schema first would have saved hours.
Write LLM-focused documentation early. Halfway through the project, I created a CURSORRULES.md file documenting code patterns, architecture decisions, and conventions. This dramatically improved AI-assisted code suggestions. I should have started it on day one.
Test subscription flows on a real device sooner. Sandbox testing in the simulator works, but the real App Store sandbox on a physical device behaves differently. I found bugs that only appeared on-device.
---
## Tech Stack
- iOS: SwiftUI, Combine, async/await
- Backend: Node.js, Express, PM2
- Database: Firebase Firestore
- Auth: Firebase Auth, Sign in with Apple
- Subscriptions: RevenueCat
- AI: Claude Haiku (Anthropic API)
- Hosting: Linode ($5/mo Nanode)
- TLS: Caddy (automatic HTTPS)
- Analytics: Firebase Analytics, Crashlytics
- Dev Tools: Cursor IDE with custom rules
---
## By the Numbers
- ~40 hours of development
- ~14,000 lines of Swift
- ~500 lines of backend Node.js
- 12 Markdown documentation files (2,500+ lines)
- ~$5/month infrastructure cost (Linode + Firebase free tier)
The documentation-to-code ratio might seem high, but it paid dividends. The CURSORRULES.md file alone improved AI-assisted development speed by roughly 2–3x in the final weeks.
---
## Key Takeaways
1. AI integration is mostly prompt engineering. The code for calling Claude is trivial. The hard part is crafting a system prompt that produces reliable, structured output across edge cases.
2. Server-side rate limiting is non-negotiable for AI apps. API calls cost money. If your rate limiting can be bypassed by editing a plist or intercepting a network call, it will be.
3. Hybrid local/cloud patterns ease development. Developing offline without mocking every Firebase call saved significant time.
4. Document for your future self and your AI tools. A CURSORRULES.md that describes your architecture pays for itself within days when using AI-assisted development.
5. Ship single-platform first. Constraining to iOS reduced complexity enough that I could actually finish. Cross-platform can come in v2.
---
## What's Next
LabForge is in final polish before App Store submission. Post-launch priorities include user feedback integration, prompt refinement based on real market availability, export features (shareable links, PDF configs), and eventually a companion web app.
Website: labforge.ai
App Store: Coming soon
Contact: support@labforge.ai
# LabForge: Building an AI-Powered iOS App in ~40 Hours
A solo tinkerer's journey from idea to App Store submission — integrating Claude AI, Firebase, and RevenueCat into a production SwiftUI app.
By Nate R.
---
## The Problem
Homelabs are infinitely customizable. That's what makes them appealing to enthusiasts — and overwhelming to newcomers.
Someone interested in self-hosting faces an immediate wall of decisions: Raspberry Pi or repurposed Dell OptiPlex? Docker or bare metal? What's a realistic budget for a media server? The answers live scattered across Reddit threads, YouTube videos, and forum posts from 2019 that may or may not still apply.
I wanted to build something that could synthesize all of that tribal knowledge into personalized, actionable recommendations. Not a static guide, but a tool that adapts to your budget, experience level, and goals.
LabForge is that tool: an iOS app that uses Claude AI to generate custom homelab configurations — complete with hardware lists, rough cost estimates, and step-by-step setup instructions.
---
## Project Scope
- Development Time: ~40 hours across 8 weeks (evenings and weekends, while working full-time)
- Timeline: September – November 2025
- Platform: iOS 16+ (SwiftUI)
- Backend: Node.js proxy on Linode
- AI Model: Claude Haiku via Anthropic API
- Status: Final polish before App Store submission
The time constraint forced discipline. I worked in focused blocks, prioritizing features that delivered the most value with the least complexity.
---
## Architecture Overview
The system has three main components: a SwiftUI iOS app, a Node.js backend proxy, and Claude AI for generation.
### Why a Backend Proxy?
The most important architectural decision was routing all AI requests through a backend proxy rather than calling Anthropic's API directly from the app.
Security — The API key never touches the client. Even with certificate pinning and obfuscation, shipping secrets in an iOS bundle is a liability. The backend holds the key; the client holds nothing sensitive.
Rate Limiting — Free users get 1 generation per day; Pro users get 10. This logic must be server-authoritative. If the client controlled rate limits, anyone with a proxy tool could bypass them. The server tracks usage in Firestore and returns a 429 when limits are exceeded.
Flexibility — I can update the Claude prompt, adjust rate limits, or swap models without shipping an app update. The proxy runs on a $5/month Linode VM managed by PM2.
---
## Key Technical Decisions
### 1. Structured Output for Reliable Parsing
The biggest technical challenge was getting Claude to produce consistent, parseable output. Early attempts returned free-form text that broke the UI in unpredictable ways.
The solution: an ultra-specific JSON schema baked into the system prompt. Every field is defined with types, constraints, and examples — including hardware components with names, prices, categories, and Amazon search queries for affiliate link generation.
When working with LLMs in production, the prompt is the product. I spent more time refining the system prompt than writing Swift code.
### 2. Hybrid Services Pattern
I wanted Firebase for production but needed the app to function during development without network calls. The solution was a hybrid services pattern that abstracts the storage layer behind feature flags.
In debug builds, I can flip a flag and test the entire flow offline. In production, everything routes through Firebase. This pattern covered authentication, favorites storage, and user data — any service with both local and cloud implementations.
### 3. Request Signing
Every API request is signed with HMAC-SHA256 to prevent replay attacks and unauthorized access.
The iOS app retrieves a signing secret from Firebase Remote Config, generates a timestamp, and creates an HMAC signature of the timestamp and request body. The backend verifies the timestamp is recent (within 5 minutes), recalculates the HMAC, and only processes valid requests.
The signing secret lives in Remote Config — not the app bundle. If compromised, I can rotate it instantly without an App Store update.
My CySA+ background shaped this approach. Thinking about request validation, replay prevention, and secret rotation from the start meant I didn't have to bolt security on after the fact.
---
## Development Timeline
### Weeks 1–2: Wireframe and UI Skeleton
Started with a basic wireframe and jumped straight into SwiftUI. The visual feedback loop of "change code, see result" accelerated learning dramatically. The initial UI was rough but functional: a form with dropdowns for use case, budget, and experience level, plus a results view.
### Weeks 3–4: Backend and AI Integration
The most technically dense phase. Set up a Linode VM with Ubuntu 24.04, Node.js/Express with PM2, Caddy for automatic HTTPS, Firebase Admin SDK for token verification, and the Anthropic SDK for Claude calls. The first successful end-to-end generation — tapping "Generate" on my phone and seeing a formatted homelab config appear — was the most satisfying moment of the project.
### Weeks 5–6: Authentication and Subscriptions
Firebase Auth with Sign in with Apple (required for App Store) and email/password. RevenueCat for subscription management. Free users get 1 generation per day; Pro subscribers ($4.99/month or $29.99/year) get 10 per day plus custom parameters. This phase had the most third-party integration complexity: App Store Connect products, RevenueCat entitlements, syncing subscription state to Firestore, and handling edge cases like restore purchases and expiration.
### Weeks 7–8: Polish and App Store Prep
Privacy Manifest (required by Apple as of 2024), Firestore security rules hardening, error message UX, App Store screenshots and metadata, and legal pages for Privacy Policy and Terms of Service.
---
## What I'd Do Differently
Start with the data model, not the UI. I refactored the core LabConfiguration struct three times as I discovered edge cases in Claude's output. Defining the schema first would have saved hours.
Write LLM-focused documentation early. Halfway through the project, I created a CURSORRULES.md file documenting code patterns, architecture decisions, and conventions. This dramatically improved AI-assisted code suggestions. I should have started it on day one.
Test subscription flows on a real device sooner. Sandbox testing in the simulator works, but the real App Store sandbox on a physical device behaves differently. I found bugs that only appeared on-device.
---
## Tech Stack
- iOS: SwiftUI, Combine, async/await
- Backend: Node.js, Express, PM2
- Database: Firebase Firestore
- Auth: Firebase Auth, Sign in with Apple
- Subscriptions: RevenueCat
- AI: Claude Haiku (Anthropic API)
- Hosting: Linode ($5/mo Nanode)
- SSL: Caddy
- Analytics: Firebase Analytics, Crashlytics
- Dev Tools: Cursor IDE with custom rules
---
## By the Numbers
- ~40 hours of development
- ~14,000 lines of Swift
- ~500 lines of backend Node.js
- 12 Markdown documentation files (2,500+ lines)
- ~$5/month infrastructure cost (Linode + Firebase free tier)
The documentation-to-code ratio might seem high, but it paid dividends. The CURSORRULES.md file alone improved AI-assisted development speed by roughly 2–3x in the final weeks.
---
## Key Takeaways
1. AI integration is mostly prompt engineering. The code for calling Claude is trivial. The hard part is crafting a system prompt that produces reliable, structured output across edge cases.
2. Server-side rate limiting is non-negotiable for AI apps. API calls cost money. If your rate limiting can be bypassed by editing a plist or intercepting a network call, it will be.
3. Hybrid local/cloud patterns ease development. Developing offline without mocking every Firebase call saved significant time.
4. Document for your future self and your AI tools. A CURSORRULES.md that describes your architecture pays for itself within days when using AI-assisted development.
5. Ship single-platform first. Constraining to iOS reduced complexity enough that I could actually finish. Cross-platform can come in v2.
---
## What's Next
LabForge is in final polish before App Store submission. Post-launch priorities include user feedback integration, prompt refinement based on real market availability, export features (shareable links, PDF configs), and eventually a companion web app.
Website: labforge.ai
App Store: Coming soon
Contact: support@labforge.ai
# LabForge: Building an AI-Powered iOS App in ~40 Hours
A solo tinkerer's journey from idea to App Store submission — integrating Claude AI, Firebase, and RevenueCat into a production SwiftUI app.
By Nate R.
---
## The Problem
Homelabs are infinitely customizable. That's what makes them appealing to enthusiasts — and overwhelming to newcomers.
Someone interested in self-hosting faces an immediate wall of decisions: Raspberry Pi or repurposed Dell OptiPlex? Docker or bare metal? What's a realistic budget for a media server? The answers live scattered across Reddit threads, YouTube videos, and forum posts from 2019 that may or may not still apply.
I wanted to build something that could synthesize all of that tribal knowledge into personalized, actionable recommendations. Not a static guide, but a tool that adapts to your budget, experience level, and goals.
LabForge is that tool: an iOS app that uses Claude AI to generate custom homelab configurations — complete with hardware lists, rough cost estimates, and step-by-step setup instructions.
---
## Project Scope
- Development Time: ~40 hours across 8 weeks (evenings and weekends, while working full-time)
- Timeline: September – November 2025
- Platform: iOS 16+ (SwiftUI)
- Backend: Node.js proxy on Linode
- AI Model: Claude Haiku via Anthropic API
- Status: Final polish before App Store submission
The time constraint forced discipline. I worked in focused blocks, prioritizing features that delivered the most value with the least complexity.
---
## Architecture Overview
The system has three main components: a SwiftUI iOS app, a Node.js backend proxy, and Claude AI for generation.
### Why a Backend Proxy?
The most important architectural decision was routing all AI requests through a backend proxy rather than calling Anthropic's API directly from the app.
Security — The API key never touches the client. Even with certificate pinning and obfuscation, shipping secrets in an iOS bundle is a liability. The backend holds the key; the client holds nothing sensitive.
Rate Limiting — Free users get 1 generation per day; Pro users get 10. This logic must be server-authoritative. If the client controlled rate limits, anyone with a proxy tool could bypass them. The server tracks usage in Firestore and returns a 429 when limits are exceeded.
Flexibility — I can update the Claude prompt, adjust rate limits, or swap models without shipping an app update. The proxy runs on a $5/month Linode VM managed by PM2.
---
## Key Technical Decisions
### 1. Structured Output for Reliable Parsing
The biggest technical challenge was getting Claude to produce consistent, parseable output. Early attempts returned free-form text that broke the UI in unpredictable ways.
The solution: an ultra-specific JSON schema baked into the system prompt. Every field is defined with types, constraints, and examples — including hardware components with names, prices, categories, and Amazon search queries for affiliate link generation.
When working with LLMs in production, the prompt is the product. I spent more time refining the system prompt than writing Swift code.
### 2. Hybrid Services Pattern
I wanted Firebase for production but needed the app to function during development without network calls. The solution was a hybrid services pattern that abstracts the storage layer behind feature flags.
In debug builds, I can flip a flag and test the entire flow offline. In production, everything routes through Firebase. This pattern covered authentication, favorites storage, and user data — any service with both local and cloud implementations.
### 3. Request Signing
Every API request is signed with HMAC-SHA256 to prevent replay attacks and unauthorized access.
The iOS app retrieves a signing secret from Firebase Remote Config, generates a timestamp, and creates an HMAC signature of the timestamp and request body. The backend verifies the timestamp is recent (within 5 minutes), recalculates the HMAC, and only processes valid requests.
The signing secret lives in Remote Config — not the app bundle. If compromised, I can rotate it instantly without an App Store update.
My CySA+ background shaped this approach. Thinking about request validation, replay prevention, and secret rotation from the start meant I didn't have to bolt security on after the fact.
---
## Development Timeline
### Weeks 1–2: Wireframe and UI Skeleton
Started with a basic wireframe and jumped straight into SwiftUI. The visual feedback loop of "change code, see result" accelerated learning dramatically. The initial UI was rough but functional: a form with dropdowns for use case, budget, and experience level, plus a results view.
### Weeks 3–4: Backend and AI Integration
The most technically dense phase. Set up a Linode VM with Ubuntu 24.04, Node.js/Express with PM2, Caddy for automatic HTTPS, Firebase Admin SDK for token verification, and the Anthropic SDK for Claude calls. The first successful end-to-end generation — tapping "Generate" on my phone and seeing a formatted homelab config appear — was the most satisfying moment of the project.
### Weeks 5–6: Authentication and Subscriptions
Firebase Auth with Sign in with Apple (required for App Store) and email/password. RevenueCat for subscription management. Free users get 1 generation per day; Pro subscribers ($4.99/month or $29.99/year) get 10 per day plus custom parameters. This phase had the most third-party integration complexity: App Store Connect products, RevenueCat entitlements, syncing subscription state to Firestore, and handling edge cases like restore purchases and expiration.
### Weeks 7–8: Polish and App Store Prep
Privacy Manifest (required by Apple as of 2024), Firestore security rules hardening, error message UX, App Store screenshots and metadata, and legal pages for Privacy Policy and Terms of Service.
---
## What I'd Do Differently
Start with the data model, not the UI. I refactored the core LabConfiguration struct three times as I discovered edge cases in Claude's output. Defining the schema first would have saved hours.
Write LLM-focused documentation early. Halfway through the project, I created a CURSORRULES.md file documenting code patterns, architecture decisions, and conventions. This dramatically improved AI-assisted code suggestions. I should have started it on day one.
Test subscription flows on a real device sooner. Sandbox testing in the simulator works, but the real App Store sandbox on a physical device behaves differently. I found bugs that only appeared on-device.
---
## Tech Stack
- iOS: SwiftUI, Combine, async/await
- Backend: Node.js, Express, PM2
- Database: Firebase Firestore
- Auth: Firebase Auth, Sign in with Apple
- Subscriptions: RevenueCat
- AI: Claude Haiku (Anthropic API)
- Hosting: Linode ($5/mo Nanode)
- SSL: Caddy
- Analytics: Firebase Analytics, Crashlytics
- Dev Tools: Cursor IDE with custom rules
---
## By the Numbers
- ~40 hours of development
- ~14,000 lines of Swift
- ~500 lines of backend Node.js
- 12 Markdown documentation files (2,500+ lines)
- ~$5/month infrastructure cost (Linode + Firebase free tier)
The documentation-to-code ratio might seem high, but it paid dividends. The CURSORRULES.md file alone improved AI-assisted development speed by roughly 2–3x in the final weeks.
---
## Key Takeaways
1. AI integration is mostly prompt engineering. The code for calling Claude is trivial. The hard part is crafting a system prompt that produces reliable, structured output across edge cases.
2. Server-side rate limiting is non-negotiable for AI apps. API calls cost money. If your rate limiting can be bypassed by editing a plist or intercepting a network call, it will be.
3. Hybrid local/cloud patterns ease development. Developing offline without mocking every Firebase call saved significant time.
4. Document for your future self and your AI tools. A CURSORRULES.md that describes your architecture pays for itself within days when using AI-assisted development.
5. Ship single-platform first. Constraining to iOS reduced complexity enough that I could actually finish. Cross-platform can come in v2.
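To make takeaway 1 concrete, here is the general shape of a schema-constrained system prompt. This is an illustration of the technique, not the app's actual prompt:

```javascript
// An illustrative schema-constrained system prompt. Pinning field names, types,
// and an output-only-JSON rule is what makes the response reliably parseable.
const SYSTEM_PROMPT = `
You are a homelab planning assistant.
Respond with ONLY a JSON object, no prose, matching exactly this schema:
{
  "name": string,               // short title for the build
  "totalCostUSD": number,       // rough total, in US dollars
  "components": [
    {
      "name": string,           // exact product name
      "priceUSD": number,
      "category": "compute" | "storage" | "networking" | "accessory",
      "searchQuery": string     // Amazon search phrase for this item
    }
  ],
  "setupSteps": [string]        // ordered, beginner-readable instructions
}
If a field is unknown, estimate it; never omit required fields.
`.trim();
```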
---
## What's Next
LabForge is in final polish before App Store submission. Post-launch priorities include user feedback integration, prompt refinement based on real market availability, export features (shareable links, PDF configs), and eventually a companion web app.
Website: labforge.ai
App Store: Coming soon
Contact: support@labforge.ai