Different prompt frameworks for different situations.
Bruce Clark’s Post
-
The quality of the results depends on the quality and detail provided in your prompts. This is a useful tool to improve your prompts and eliminate the need to continually retype them. Helpful hint: if the chatbot freezes, ask it to continue where it left off to complete the task.
ChatGPT expands its 'custom instructions' feature to free users | TechCrunch
https://techcrunch.com
-
You get a prompt! You get a prompt! Words like "prompt design" and "prompt engineering" are flooding my LinkedIn feed right now. Everyone wants to learn how to talk to Generative AI and is investing thousands to get there. But what if I told you that we have an entire database of powerful, guided prompts already created for you? This free database has 500+ prompts for any of your marketing use cases. Our goal is to empower you, providing the guidance to take charge of your outputs and communicate confidently with Generative AI. You're welcome! 😉
500+ Useful Jasper Prompts
proof.notion.site
-
Trusted Technologist, CTO & Speaker. Helping organizations succeed through Platform Engineering, DevSecOps, GenAI, and hyper automation
This is going to be super useful, especially for repetitive scenarios: same prompt, different parameters. https://lnkd.in/em-7ewF4
Custom instructions for ChatGPT
openai.com
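A minimal sketch of the "same prompt, different parameters" idea: a fixed skeleton reused with per-call values. The template text and function name here are illustrative assumptions, not ChatGPT's actual custom-instructions format.

```python
# Illustrative only: one fixed prompt skeleton reused with different
# parameters, in the spirit of ChatGPT custom instructions.

PROMPT_TEMPLATE = (
    "You are a {role}. Answer in a {tone} tone, "
    "in at most {max_words} words.\n\nQuestion: {question}"
)

def build_prompt(question: str, role: str = "senior engineer",
                 tone: str = "concise", max_words: int = 150) -> str:
    """Fill the fixed skeleton with per-call parameters."""
    return PROMPT_TEMPLATE.format(
        role=role, tone=tone, max_words=max_words, question=question
    )

print(build_prompt("How do I profile a slow SQL query?", tone="friendly"))
```

The skeleton stays put; only the parameters change per call, which is what saves the retyping.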
-
Rag with Memory is a project that leverages the Llama 2 7b chat assistant to perform RAG (Retrieval-Augmented Generation) on uploaded documents. Additionally, it operates in a chat-based setting with short-term memory, summarizing the previous K conversation turns into a standalone summary that later turns build on. https://lnkd.in/e_8vbM7j
Q&A with RAG | 🦜️🔗 LangChain
python.langchain.com
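A sketch of that short-term memory idea: keep the last K turns verbatim and fold older turns into a standalone running summary. The real project summarizes with the Llama 2 model itself; `summarize()` below is a trivial stand-in, and all names here are assumptions.

```python
# Assumed shape of a rolling-summary chat memory; summarize() is a
# placeholder for what would really be an LLM summarization call.

def summarize(previous_summary: str, turn: tuple) -> str:
    """Stand-in for an LLM summarization call."""
    user, assistant = turn
    combined = f"{previous_summary} User asked: {user}; assistant said: {assistant}"
    return combined.strip()[-300:]  # keep the summary bounded

class ChatMemory:
    def __init__(self, k: int = 3):
        self.k = k           # number of turns kept verbatim
        self.turns = []      # recent (user, assistant) pairs
        self.summary = ""    # compacted older history

    def add_turn(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))
        if len(self.turns) > self.k:
            # oldest turn falls out of the window: fold it into the summary
            self.summary = summarize(self.summary, self.turns.pop(0))

    def context(self) -> str:
        parts = []
        if self.summary:
            parts.append(f"Summary of earlier conversation: {self.summary}")
        parts += [f"User: {u}\nAssistant: {a}" for u, a in self.turns]
        return "\n".join(parts)
```

In a real pipeline, the `context()` string would be prepended to the RAG prompt before each model call, so the model sees both retrieved documents and the compacted history.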
-
Recap: Common Prompt Patterns https://lnkd.in/gZQkP6EV
Recap: Common Prompt Patterns
medium.com
-
"When Should You Fine-Tune LLMs?" - This is a common question these days, as many are trying to improve the output of off-the-shelf LLMs for specific tasks. If you've fine-tuned LLMs or trained domain-specific LLMs from scratch, I'd love to learn about your experience! #generativeai
When Should You Fine-Tune LLMs?
towardsdatascience.com
-
While AI-generated bug reports hold undeniable promise, they are not a magic bullet. https://lnkd.in/gG5yd5MB
AI-Powered Bug Reports: Pros and Cons
https://globaldataops.com
-
Many people are experimenting with chatbots on their data using retrieval augmented generation (RAG). At first it looks promising. Then after some testing on difficult questions, you realize it's not all it could or should be. It takes work. Don't start from scratch. Leverage what LlamaIndex has done in this area and see if it helps! https://lnkd.in/eQG6ypzE
Building Performant RAG Applications for Production
gpt-index.readthedocs.io
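To make the pattern concrete, here is a deliberately naive retrieval step (word-overlap scoring over hypothetical documents). Production frameworks like LlamaIndex add chunking, embeddings, and reranking on top of this basic shape, which is exactly the gap the post points at.

```python
# Toy RAG retrieval: score documents against the question, then stuff
# the best hit into the prompt. The documents and scoring are
# illustrative assumptions, not how LlamaIndex actually retrieves.

DOCS = [
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Fine-tuning adapts model weights to a specific task or domain.",
    "Prompt engineering shapes model behavior without changing weights.",
]

def retrieve(question, docs, top_k=1):
    """Rank docs by shared words with the question (naive on purpose)."""
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("What is retrieval-augmented generation?", DOCS))
```

Word overlap fails quickly on paraphrased or difficult questions, which is why the "it takes work" caveat in the post holds and why embedding-based retrieval is the usual next step.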
-
1) Write clear instructions
2) Provide reference text
3) Split complex tasks
4) Give the model time
5) Use external tools
6) Test changes systematically
https://lnkd.in/gd5a7eUJ
6 Strategies For Better Results From ChatGPT, According To OpenAI
forbes.com
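A sketch of what a prompt following the first five strategies might look like; strategy 6 (systematic testing) lives in the workflow rather than in the prompt text. The wording and helper name are illustrative assumptions, not OpenAI's examples.

```python
# Illustrative prompt builder applying strategies 1-5 from the list above.

def make_prompt(reference_text: str, question: str) -> str:
    return "\n".join([
        "Instructions: answer the question using only the reference text.",  # 1 clear instructions
        f"Reference text (delimited by ###):\n###\n{reference_text}\n###",   # 2 reference text
        "First list the relevant facts, then give the final answer.",        # 3 split the task
        "Work through the facts step by step before answering.",             # 4 give the model time
        "If a calculation is needed, write it out explicitly.",              # 5 lean on external checks
        f"Question: {question}",
    ])

print(make_prompt("The cache TTL is 300 seconds.", "How long are entries cached?"))
```

Strategy 6 then means keeping a small set of test questions and rerunning them whenever you change this template, rather than eyeballing one output.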