Behind the Scenes of KAYA Global's Approach to Prompt Engineering

In this week's article, we dive into how prompt engineering is used at KAYA. We explore personal experiences from our members, share a business use case, and outline our methodology for efficiently producing quality prompts.

From Fun to Function

In KAYA’s early days, our team dabbled with generative models like GPT-2, using them primarily as tools for creative experimentation and for generating text for various applications. It was initially about exploring the capabilities of the models and taking note of how we could use the technology. As we crafted narratives, answered operational questions, and produced code snippets, we quickly realized that we were essentially using the models as an artificial assistant.

What is really profound is the acceleration from those early models to where we are now: models that can draft entire documents, develop creative pieces, and assist with more complex software development tasks.

Use Cases

Personal: KAYA Engineering Intern Vinaya Ramamorthy Venkatasubramanian has developed a knack for crafting prompts that coax precise and informative responses out of LLMs. She enjoys experimenting with different styles and lines of questioning to draw out information. Whether it's finding inspiration to clear a mental roadblock in her studies or planning a 10-day vacation to Thailand (complete with beach and restaurant recommendations), there's nothing she can't do. Her friends have even started to refer to her as the ‘Prompt Whisperer’.

Business: Business Development Manager Andrew Christensen was attached to the scrum team tasked with developing KAYA’s AI assistant, Bernie. He comes from a non-technical background (a bachelor’s degree in Finance from the University of San Diego) and was admittedly intimidated about joining a team of engineers. However, since LLMs are designed to generate conversational responses to users, he was surprised to find that his writing abilities added significant value alongside his exceedingly patient teammates.

The result was a comprehensive Software Requirements Specification (SRS) for Bernie, which included a feature blueprint complete with priorities, resource needs, and timelines.

Guiding Principles

Prompt engineering is the art of crafting precise and strategic inputs to AI models, tailoring their responses and behaviors to meet specific goals and applications. Although the objective is simple, we are aware that this technology carries significant complexities. In our pursuit of harnessing the power of language models from OpenAI, we hold several key considerations close to heart, ensuring that our approach to prompting remains both ethically sound and highly effective. These considerations serve as guiding principles that shape every step of our prompting process:

  • Bias Mitigation and Ethical Standards: We recognize the significance of mitigating bias and upholding ethical standards in AI interactions. Our prompts are meticulously designed to minimize potential biases in the responses generated by the model. We are committed to providing users with fair and impartial information while adhering to ethical AI practices.
  • Leveraging Domain Knowledge: To maximize the relevance and accuracy of our prompts, we tap into domain-specific knowledge. This deep understanding of various subject areas allows us to craft prompts that are not only well-informed but also highly effective in eliciting meaningful responses from the model.
  • User-Centered Design: At the core of our prompting approach is a commitment to user-centered design. We prioritize the needs and expectations of our users, ensuring that prompts are user-friendly and align with their goals and preferences. By putting users at the forefront, we create interactions that are intuitive and valuable.
  • Transparency: We believe in maintaining transparency throughout the prompting process. This means that our prompts are designed to be clear and comprehensible to both users and the language model. Transparency fosters trust and helps users understand how AI is being used to provide them with information and solutions.
  • Compliance and Regulation: We are steadfast in our commitment to compliance with all relevant regulations and industry standards. Our prompts are created with a keen awareness of legal and ethical frameworks, ensuring that our interactions with language models align with the highest standards of compliance and accountability.

These considerations form the ethical and operational foundation of our approach to prompting. They guide us in crafting prompts that not only deliver exceptional results but also adhere to the highest standards of fairness, transparency, and user-centricity.

KAYA's Prompting Method

Unlocking the full potential of language models requires a strategic approach to crafting prompts. At KAYA, we understand that the quality of prompts plays a pivotal role in the accuracy and relevance of responses generated by these models. To achieve this, we follow a meticulous 3-step method that has proven to be the cornerstone of our success:

1. Classify the Prompting Task – First, we frame what output we are looking for. Are we asking for code to fix a frontend issue? Are we pulling specific points from a database? A combination? Settling what we are looking for up front reduces time spent in the iterative space. We categorize our prompting tasks using the list below (a sketch of how these categories can map to prompt templates follows the list):

  • Summarization – Prompting the LLM for a summary of inputted information.
  • Information Extraction – Asking the LLM to pull specific information from a body of text or a database.
  • Question Answering / Searches – Asking the LLM for indexed knowledge.
  • Text Classification – Categorizing text, for example as positive or negative on a scale.
  • Conversation – Creating a human-like conversation between user and LLM.
  • Code Generation – Prompting to receive a snippet of code in a specific language.
  • Reasoning – Guiding the prompt with logic.
  • Recommendations – Prompting an LLM to give advice on a task.
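
As a rough illustration of this step, the sketch below encodes one reusable prompt template per task category in Python. The category names and template wording here are illustrative assumptions on our part, not a fixed KAYA standard:

```python
# Illustrative sketch: one reusable prompt template per task category.
# The wording of each template is a hypothetical example.

PROMPT_TEMPLATES = {
    "summarization": "Summarize the following text in {n} sentences:\n{text}",
    "information_extraction": (
        "From the text below, extract {fields} and return them as a bulleted list:\n{text}"
    ),
    "question_answering": "Answer the following question concisely:\n{question}",
    "text_classification": (
        "Classify the sentiment of this text as positive, negative, or neutral:\n{text}"
    ),
    "conversation": "You are a friendly assistant named {name}. Respond conversationally to:\n{message}",
    "code_generation": "Write a {language} function that {behavior}. Return only the code.",
    "reasoning": "Solve the following problem step by step, showing your logic:\n{problem}",
    "recommendations": "Given these constraints: {constraints}, recommend an approach and justify it.",
}

def build_prompt(task: str, **slots) -> str:
    """Look up the template for a task category and fill in its slots."""
    return PROMPT_TEMPLATES[task].format(**slots)

# Example: an Information Extraction prompt.
print(build_prompt(
    "information_extraction",
    fields="names and dates",
    text="The kickoff meeting with the Bernie scrum team is on Friday.",
))
```

Classifying the task first means the template, rather than the prompter, carries the boilerplate, which is exactly what cuts down time in the iterative space.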

2. Apply Prompting Tactics – Second, it is helpful to give the model information on the viewpoint we are coming from, or any conditions that need to be met. We’ve found that being structured both helps the prompter stay organized and helps flag important information to the LLM. These tactics stack naturally, as the sketch after this list shows:

  • Optimize precision – Adding language to narrow down answers.
  • Enrich keywords – Adding language to bring context to keywords.
  • Format changes – Prompting for a specific format or data source.
  • Inject context – Prompting from a point of view.
  • Boolean operators – Using “and” to return results that satisfy two requirements at once.
  • Negation – Excluding unwanted information in the prompt.
  • Iterative refinement – Refining the prompt to produce the desired results.
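
Here is a minimal Python sketch of several of these tactics layered onto one base prompt, echoing the Thailand trip from earlier. The wording of each refinement is an illustrative example, not a prescribed formula:

```python
# Illustrative sketch: layering prompting tactics onto one base prompt.
# Each step is a plain string operation.

base = "List restaurants in Bangkok."

# Optimize precision: narrow the answer space.
precise = base.replace("restaurants", "five highly rated street-food restaurants")

# Inject context: state the point of view up front.
contextual = "You are a travel planner preparing a 10-day Thailand itinerary. " + precise

# Boolean operators: require two conditions at once.
conditioned = contextual + " Each must be vegetarian-friendly AND open late."

# Negation: exclude unwanted information.
negated = conditioned + " Do not include chain restaurants."

# Format changes: request a specific output structure.
final_prompt = negated + " Return the answer as a numbered list with one sentence per entry."

print(final_prompt)
```

Structuring the prompt this way keeps each condition visible to both the prompter and the model, which is the organizational benefit described above.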

3. Iteration to Perfection – This final step is where we can expect to spend most of our time. If we have followed the process correctly, however, we can expect the first pass to return most of the information we need, roughly half-formatted. That means a task that would have taken 12-13 prompts to accomplish can be reduced to about 5. A code sketch of this loop follows the list below.

  • Process prompts – Figure out how the prompt can be improved. Is it lacking information, or does the information need to be presented differently? Ask yourself what you want changed, and then prompt with the appropriate tasks.
  • Evaluate results – Evaluate the prompt’s output within the LLM.
  • Re-engineer prompts – Build your next prompt to include what you wanted changed.
  • Validate output – This depends on what you are using the prompt for: now is the time to run your generated code to see what it produces, or to copy the generated table into your report. The big question here is ‘does this accomplish the task outside of the LLM?’ If no, go back to processing prompts. If yes, then congratulations, you are finished!
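
A minimal sketch of this loop, assuming the openai Python SDK (v1+); the meets_requirements and describe_gap helpers are hypothetical stand-ins for your own validation logic:

```python
# Minimal sketch of the iterate-to-perfection loop.
# meets_requirements and describe_gap are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_llm(prompt: str) -> str:
    """Evaluate the prompt within the LLM."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def meets_requirements(output: str) -> bool:
    """Validate outside the LLM: run the generated code, check the table, etc."""
    return "|" in output  # stand-in check: did we get a table back?

def describe_gap(output: str) -> str:
    """State, in plain language, what the next prompt must change."""
    return "Present the same information as a table with a header row."

def iterate_to_perfection(prompt: str, max_rounds: int = 5) -> str:
    """Process -> evaluate -> re-engineer -> validate, within a round budget."""
    for _ in range(max_rounds):
        output = ask_llm(prompt)               # evaluate results
        if meets_requirements(output):         # validate output
            return output                      # the task is accomplished outside the LLM
        prompt += "\n" + describe_gap(output)  # re-engineer the next prompt
    raise RuntimeError("output did not validate within the round budget")
```

The round budget of 5 mirrors the prompt count above; the key design point is that validation happens outside the LLM, in your own code.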

