Summary of 6 Popular Prompting Techniques for LLMs
Hello folks! First things first, thanks to Sam Altman for rejoining OpenAI. I am currently working on a non-profit project that uses the ChatGPT API heavily, and I was considering moving to other LLM providers after last week's drama.
If you are a beginner, I am sure you are now experimenting with LLM workflows and have discovered that prompt engineering is where the real gold is, as opposed to fine-tuning models, curating data sources, etc. Prompt engineering is getting very advanced, and it can decide whether your user experience works or fails.
I decided to summarize the top 6 methods that I am using when developing LLM-based applications. Just for context, I am generating long-form content in the healthcare domain, and my data sources include a wide array of web URLs, docs, PDFs, PPTs, public comments, user questions, and more.
What other prompt engineering methods are you using in your LLM applications? Care to comment?