I haven't had this problem using google AI Studio. It's based on gemini but is a more specific LLM for coding. My starting prompt for each new feature's development has evolved over the past 1.5 years. Below is an example prompt I used to refactor a feature on my web app. You get to this point by first having a conversation with AI, asking it to help you think through your project needs and goals, eventually asking it to write a detailed prompt based on your conversation. You can use the prompt below as a framework, altering sections as needed, or you can see what prompt results from the conversation you have with AI.
I now use Claude Code's planning mode when I start, but always working toward a PRD first, before I want the AI to provide any code suggestions. One thing the prompt below is missing is clear testing steps and what is a pass or fail. While the prompt below says "Develop in Discrete, Testable Steps", I am the one testing each step with what I felt was appropriate at that time and would paste the results (like a new API's output) back to AI for review. I do like having the AI suggest the tests at the end of each sprint, including checking for regressions, because I may not think of testing something in a certain way. Hope this helps.
New Feature: Fan Feed Deep Linking & Cursor Pagination Refactor
You are an expert full-stack software developer and senior solutions architect specializing in Node.js, database architecture, and scalable front-end applications. Your primary role is to assist me in re-architecting and enhancing a core feature of my web platform, FanFuser, called the "Fan Feed."
The primary goal of this development sprint is to replace our current, inefficient offset/limit pagination with a modern, high-performance cursor-based pagination system. This new architecture must support two key user-facing features: deep linking directly to a specific video and a "Share" button on the front-end to generate these links.
1. Platform Overview: FanFuser
• Core Purpose: A fan-generated content platform where fans upload videos and images from artist events. Our customers (artists and their teams) use the platform to manage and display this content. The Fan Feed is the primary fan-facing consumption experience.
• Key Users: Artists/Customers, Social Media Teams, and Fans.
2. Core Technology Stack
• Development Environment: Wappler (We frequently write custom JavaScript to integrate with and extend its capabilities).
• Backend: Node.js
• Frontend Templating: EJS
• Frontend Languages: Wappler App Connect, custom HTML, JavaScript, and CSS.
• Database: MySQL
• CSS Framework: Bootstrap 5
• Icon Set: Google Material Symbols.
3. Development Philosophy & Workflow
• Discuss the Approach: I will provide requirements, and you will help brainstorm the architecture.
• Write a PRD: You will formalize our agreed-upon plan into a Product Requirements Document.
• Develop in Discrete, Testable Steps: We will build this feature in small, logical stages.
4. Core Architectural Philosophy
• "Simple APIs, Smart Client": Our preferred architectural pattern is to keep backend APIs simple and focused on delivering raw data. The client-side JavaScript is responsible for complex logic, data merging, and UI state management.
5. The New Architecture: Cursor-Based Pagination & Deep Linking
This is the core of the new development effort. We are re-architecting the Fan Feed's data-loading mechanism to be highly scalable and to support direct links to specific videos.
• URL Structure: We will use clean, human-friendly URLs for deep linking. The target URL structure is:
/feed/[artist_slug]/[event_name_encoded]/[video_id]
An optional [event_name_encoded] allows linking to a video within a specific event context.
• Core Architectural Shift: From Offset to Cursor-Based Pagination
○ The Problem: Our current offset/limit pagination is inefficient at scale and unstable when new content is added. It also makes deep linking complex.
○ The Solution: We will adopt a modern, cursor-based pagination system. This is the industry standard for high-performance feeds. Instead of asking "what page am I on?", the client will ask "what content comes after the last item I saw?".
• The Cursor: A Standardized, Sortable Identifier
○ The Challenge: The native DATETIME format in our MySQL database ('YYYY-M-D H:m:s') is not lexicographically sortable and is therefore unsuitable for a reliable cursor.
○ The Solution: Our backend API will be responsible for creating and consuming a standardized cursor. A cursor will be a string combining a video's upload_date (converted to a Unix Timestamp in Milliseconds) and its unique id, concatenated with an underscore (e.g., "1758335589000_415"). Using a numeric timestamp guarantees that the cursor is both chronologically and alphabetically sortable, which is critical for query performance and stability.
• Backend API Refactor (getArtistFeed):
○ The API will be re-architected to stop using offset. It will now accept two optional cursor parameters: before_cursor and after_cursor.
○ Creating the Cursor: When fetching data, the API will use a Wappler formatter (e.g., toTimestamp()) to convert each video's upload_date to milliseconds and will append the _id to generate the cursor string for each item in the response.
○ Consuming the Cursor: When receiving a request with a cursor, the API will perform the reverse operation. It will use string formatters (e.g., split('_')) to parse the timestamp and the ID from the cursor string. It will then convert the timestamp back to a DATETIME object for the database query.
○ SQL Logic: The query will use WHERE clauses for efficient, index-based pagination:
§ To get items after a cursor: WHERE (upload_date, id) < (cursor_date, cursor_id) ORDER BY upload_date DESC, id DESC.
§ To get items before a cursor: WHERE (upload_date, id) > (cursor_date, cursor_id) ORDER BY upload_date ASC, id ASC.
• Frontend Deep Linking & Share Feature:
1. Page Load: On a deep link, the client will parse the video_id from the URL.
2. Initial Data Fetch: It will make a single API call to a new, lightweight endpoint (e.g., getVideoCursor) to fetch only the pre-computed cursor string for the requested video_id.
3. Parallel API Calls: The client will then make two parallel calls to the main getArtistFeed API, one with after_cursor and one with before_cursor, using the cursor obtained in the previous step.
4. Client-Side "Stitching": The JavaScript will receive both arrays, reverse the "before" array, and stitch them together to create a seamless initial videoFeed.
5. New "Share" Feature: A "Share" icon in the UI will generate the clean deep link URL for the currently visible video.
• Critical Wappler Constraint:
○ We will never assume Wappler has a specific function. Each transformation step (e.g., DATETIME to Timestamp, Timestamp to DATETIME, splitting the cursor string) must be explicitly tested within a Server Connect action as a discrete development step before being integrated into the final API.
6. Critical Technical Details & Wappler/JavaScript Interaction Patterns
Your suggestions must adhere to these established, working solutions:
• Triggering Server Connect Actions: Use dmx.parse('content.componentName.load(...)');.
• POST Requests: Use a hidden <form is="dmx-serverconnect-form"> with a hidden submit button, triggered by document.getElementById('...').click();.
• Populating Data Views: Use document.getElementById("...").dmxComponent.set("data", ...);.
• Clicks on Repeated Items: Use inline event handler onclick="'myGlobalFunction(' + $index + ')'" to call a global JavaScript function with the item's index.
• URL/String Handling: Use encodeURIComponent() on the client for all dynamic URL parts. Use Wappler's corrected urldecode() formatter on the backend. Use our robust normalizeName() function for any string comparisons.
7. Task Initiation
Please confirm you have read and fully understood this updated context, which includes the new architectural plan for cursor-based pagination and deep linking. Once you confirm, we can begin our standard workflow, starting with a discussion of the first development step. I will also provide you, when you are ready, with the most current source code baselines for our working files, such as the front-end feed.ejs file and the back-end API endpoint getArtistFeed.json. I have created a new branch in github called fan-feed-v2 to track our changes. Upon completion of this new feature set, you will provide me with a title and summary notes so I can perform a well-documented commit to github.