Launch Debrief Analysis (Lessons Learned)

Prompt

We just completed the launch of [PRODUCT NAME]. Now, I want to conduct a launch debrief to capture lessons learned. Our launch results: [briefly list outcomes, e.g., sales numbers, user sign-ups, any major issues or successes].

Analyze the launch by detailing:

  • What went well: Key successes and effective strategies during the launch.
  • What didn’t go well: Challenges, mistakes, or things that didn’t work as expected.
  • Lessons learned: Important takeaways from both successes and failures.
  • Recommendations: How we should adjust our approach for future launches or next phases.

Provide the analysis in a structured way so we can easily review and discuss it.

How to Use

  1. Usage Instructions: Before using this prompt, gather any relevant data or anecdotes from your product launch. Replace [PRODUCT NAME] with the product and include specific outcomes or observations (for example, “we reached 80% of our sign-up target, but had server downtime on day one” or “users loved the core feature, but we got complaints about onboarding difficulty”). This context helps the model give a more accurate debrief. You can use this prompt with any LLM, such as GPT-4 or Claude. For a thorough analysis, more detail is better: don’t hesitate to list multiple points in the prompt about what you observed during the launch.
  2. Desired Response Format: The model’s answer should be organized into clear sections or bullet points for each category (What Went Well, What Didn’t Go Well, Lessons Learned, Recommendations). For instance, it might output a few bullets under each heading, or number them for clarity. Ask for a structured list or use of subheadings so that each part of the debrief (successes, failures, lessons, recommendations) is distinctly identified. This will make the debrief easy to scan and share with your team.
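If you run many debriefs, the placeholder substitution described above can be done programmatically before the prompt is sent to an LLM API. The sketch below is a minimal example of this, assuming a hypothetical product name and results; the template text is condensed from the prompt at the top of this page.

```python
# Fill in the launch-debrief prompt template before sending it to an LLM.
# The template wording is condensed from the prompt above; the product name
# and results passed in below are hypothetical examples.

TEMPLATE = (
    "We just completed the launch of {product}. Now, I want to conduct a "
    "launch debrief to capture lessons learned. Our launch results: {results}\n\n"
    "Analyze the launch by detailing:\n"
    "- What went well\n"
    "- What didn't go well\n"
    "- Lessons learned\n"
    "- Recommendations\n\n"
    "Provide the analysis in a structured way so we can easily review "
    "and discuss it."
)

def build_debrief_prompt(product: str, results: str) -> str:
    """Substitute the product name and observed outcomes into the template."""
    return TEMPLATE.format(product=product, results=results)

# Example usage with hypothetical launch data:
prompt = build_debrief_prompt(
    product="AcmeApp 2.0",
    results=(
        "we reached 80% of our sign-up target, "
        "but had server downtime on day one."
    ),
)
print(prompt)
```

The resulting string can then be passed as the user message to whichever chat-completion API you use; the more specific the `results` text, the more grounded the model's debrief will be.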