G'day:
Sigh. I watched a video about Lovable the other day, which in and of itself was entirely adequate, and provided some interesting information in the way that one paragraph of text could also provide (such is the way with YT technical videos sometimes). It's this one: Master Lovable AI in 20 Minutes (NEW 2.0 UPDATE).
One of the tactics the author detailed was this: despite Lovable being an AI-driven app generator thingey (you know: "vibe coding" etc), the standard approach seems to be to use someone else's AI tooling to do most of the planning / back-and-forth work, then get - eg - ChatGPT to summarise the plan as a "prompt", which one then gives to the Lovable AI. It churns away and generates a prototype-quality app that follows most of the guidance, but still grafts on a sixth finger (sic) here and there because, well: AI. At the time I was bemused as to why one would use another AI tool to do the planning, instead of using Lovable itself.
So I asked the question in the video comments:
@darrelwilson I was kinda perplexed as to why it was necessary to use one AI to draft guidance for another AI? I realise this ecosystem is fast moving, but looking today Lovable is using Opus 4.5, which should be on a "cognitive" par with ChatGPT's LLM... so what's the difference between prompting ChatGPT as an intermediary over just prompting Lovable in the first place? Is this mostly to game token-usage on Lovable? Or is it to keep Lovable's memory-context tight & not cluttered with intermediary discussions with ChatGPT whilst drafting the "ideal" prompt? Keeping zeitgeisty, I asked Claude Desktop's Opus 4.5 why one might do this, and its reaction was bemusement, as it seems like an unnecessary step..?
Someone replied:
@adamcameron1 I mean you should really read the prompt GPT provides. If you put a generic prompt into Lovable directly youll have less control & say into what it builds. Its going to extroplate if you aren't specific.
That didn't quite land with me, so I followed-up:
Hi @shenshaw5345, I appreciate the response, cheers!
Reasoning about this, I'd expect one would not charge in and go "build an app! Oh... no, hang on... not like that. Do it this way... damn, still not right... oh maaaan, this is never gonna work"; one would do precisely what was done in this example via ChatGPT first, and "discuss" with Lovable what needs building before any code is written. Once one reckons everyone is on the same page with the plan, get Lovable to spit out the plan, review (poss rinse and repeat), and then go "OK build that". Not all chat prompts need to result in code being written, right?
NB: I'm only evaluating the possibilities of Lovable as yet - this is part of the evaluation - so I am, admittedly, speculating (based on ChatGPT, GitHub Copilot and claude.ai all working this way). Then again, I also know that one has to be direct with AI IDE integrations and prompt it "we are currently just planning, do not write any code until we agree on a plan and I tell you to crack on with it". I'm presuming Lovable works the same, given it's the same underlying AI models...
THAT SAID, I just checked Lovable's pricing, and it's *ludicrous*... 100 credits for $25 for a MONTH (as an example; there are better price points than that, obvs). One could burn most of those in a planning session for one app component! Compare that to Claude.ai, which measures its tokens by the million for similar amounts of $$$. Obvs not directly analogous, but honestly, Lovable seems to have missed a trick here: it's important to have the chat as part of the context for the work an agent does. A lot of that is being lost to ChatGPT's context in this case. And a summary of a chat from one AI to another is not the same as the nuance of the actual chat.
This, however, explains why one ought not plan using Lovable. They've hamstrung their offering with their pricing. Duly noted.
Thanks again!
And someone else came back to me:
@adamcameron1 you might not have an idea fully flushed out in your mind. You braindump into chatgpt and it will organize a precise starter prompt for lovable to start. Then u iterate off that. In the AI world, you want to have to starting point as close as you can get it to the finish line to make things easier. 5 or 10 edits is ok. Try doing 50-100 edits and then you start to run into context window issues….. at least that how I assume lovable works.. just started lovable a couple days ago, but i have a good understanding on how prompting affects LLMs
This respondent didn't quite get what I was meaning, so I followed up. And this leads to the reason I am reposting this here:
Hi @youcanfindrace. It might be that. But given I've been working almost exclusively via various AI tooling for the bulk of this year, and for about 50% of my work time last year, I'm not so sure I'm misunderstanding how it all comes together. But it's possible, sure.
What I rather more think is that I wasn't as clear as I could have been previously, and my point didn't land with you. Let me change a few things around, and try to be clearer.
Let's pretend the guidance given wasn't "use ChatGPT to build a prompt", but instead "use Claude.ai to build a prompt". You can hopefully see how "using ChatGPT" and "using Claude" are - for most intents - analogous processes: different tools, same job. Like using a DeWalt drill or a Ryobi drill to drill a hole: different sorts of drills, but they're still drills. So we use Claude to build a lovely fully-realised prompt, which we then paste into Lovable's UI... which then uses the same AI as Claude to analyse the prompt and act on it: Lovable uses Anthropic's LLMs under the hood (so... same as Claude... same thing... different wrapper, by a different vendor). But even if it wasn't a wrapper around the same tool, it's still an analogous tool (Lovable=>DeWalt; Claude=>Ryobi) in and of itself. "Use a chat with ChatGPT to build a prompt", "use a chat with Claude to build a prompt" and "use a chat with Lovable to build a prompt" are analogous exercises. EXCEPT... Lovable simply doesn't facilitate this, because its pricing for usage tokens is orders of magnitude higher than other AI vendors', making it prohibitive to use its AI for... actual AI stuff.
"Building a prompt" is only a tool to transfer the summary of an AI's reasoning between tooling that is otherwise disconnected. For various reasons I sometimes need to have a similar chat with Claude and ChatGPT (or Gemini), and the way to "get the other AI up to speed", is to ask the initial AI "can you please summarise this so I can give it to Gemini and see what they think". One only builds a prompt when doing this context-interchange. However these summaries always lose context and nuance from the underlying chat (think Cliff's Notes vs the actual novel the notes are summarising). In application building, that context/nuance is really important.
Given Lovable uses Anthropic's LLMs, the process in a well-realised application would be to use the Chat interface in Lovable to build the instructions for the Agent interface (still Lovable, different part of the same tool) to then go do the code generation - though even that's more ceremony than necessary. Ideally you'd just have the natural back-and-forth in one interface without artificial handoffs between different AI tools.
However I have done further research into this, and have identified the reason for the way it is (well: others have, I'm just parroting it here). It's simply a bad business model on Lovable's part. They're paying approximately retail rates to purchase usage tokens from Anthropic (so: the same as if you or I had a plan with them to use Claude.ai), and instead of doing something reasonable like tacking a margin onto that cost, they present their "credits" as some sort of priceless artifact that can only be used sparingly, and you should be thankful to have any at all; whereas if one was using Claude etc, they're basically given away. A chat in Claude is allocated 190k tokens. 190000. As an example: I've been chatting back and forth for a business day now, and have used 35k in the current chat. And what happens when I use up the 190k? It tells me to start another chat. I'm on a $50/month plan, I think, and I've never run into a wall with token usage. And these are the same tokens that Lovable is using under the hood. There's not a one-to-one measure between what Anthropic terms a token and what Lovable terms a credit, but the disparity is orders of magnitude. Lovable have priced themselves out of the market for their users to be able to use the tool the way it ought to be used.
That's why the ChatGPT-as-intermediary workflow exists.
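As a back-of-envelope illustration of that "orders of magnitude" claim, here's a quick sketch using only the figures quoted above (100 Lovable credits for $25/month; a ~$50/month Claude plan where each chat is allocated ~190k tokens). Credits and tokens aren't a one-to-one measure, as noted, and it deliberately makes the absurdly pessimistic assumption of only one Claude chat per month, so this is indicative arithmetic, not authoritative pricing:

```python
# Figures as quoted in the post above; not official pricing.
lovable_credits = 100          # credits per month on the $25 price point
lovable_cost = 25.0            # USD/month
claude_tokens_per_chat = 190_000  # tokens allocated per Claude.ai chat
claude_cost = 50.0             # USD/month plan

cost_per_lovable_credit = lovable_cost / lovable_credits      # $0.25 per credit
# Pessimistic worst case: pretend you only ever use ONE chat all month.
cost_per_claude_token = claude_cost / claude_tokens_per_chat

# Even then, a Lovable credit costs ~950x what a Claude token does.
ratio = cost_per_lovable_credit / cost_per_claude_token
print(f"${cost_per_lovable_credit:.2f}/credit vs "
      f"${cost_per_claude_token:.6f}/token -> {ratio:.0f}x")
```

A credit presumably buys more than one token's worth of work, but even granting Lovable a few hundred tokens per credit, the per-unit gap stays wide, which is the whole point.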
And the comment was promptly deleted. I dunno if it was by YT's bots, or by the bloke who did the video: there was no explanation. I changed a few things (in case I was being a meany without noticing), and it was... deleted again. I finally posted a kinda "shrug" response, in the hope that the person I was trying to reply to would see that I did value their feedback:
@youcanfindrace Not entirely sure why my reply to you keeps getting deleted: there was nothing code-of-conduct-tripping about it. Poss cos I said something that was not completely in agreement about how wonderful Lovable is, based on research I did into its pricing. But I answered my own question in the process anyhow, so... job done. Soz I couldn't post it here. [eyeroll].
At time of writing, that reply is still there.
Anyhoo, I'm not gonna take the time to redo that research (largely googling, reading reddit / stackoverflow, and asking Claude and Gemini) and write it up again only to have it deleted. So I'm reproducing it here, where I own it.
And that's that.
Righto.
--
Adam