Prompt Engineering & Building LLM Applications

  1. Prompts are the instructions provided to LLMs to elicit specific responses. Prompt engineering involves crafting prompts to obtain desired answers from the LLM.
  2. Prompt engineering is like programming in spoken language, which is a unique approach compared to traditional programming languages.
  3. Considerations:
    • Composition Patterns: Breaking down complex tasks or questions into simpler components.
    • Recognizing Patterns: Identifying common patterns in prompts and responses.
    • Abstraction: Treating a set of problems as a single concept.
    • Algorithms: Providing step-by-step instructions for tasks.
  4. Grounding Prompts:
    • Context: Background information w/ necessary context for the LLM.
    • Role: Role or persona the LLM should assume.
    • Tone: Define the desired tone of the response, such as formal, humorous, warm or irreverant. Our personal favorite is “Appropriately innappropriate.”
    • Action: Specify the task or action requested from the LLM.
    • Output Format: Define how you want the response to be formatted (e.g., consise and data-driven).
    • ~Prompt Engineering~
    • Iterative Process involves refining prompts to achieve the desired results. 
    • *May require multiple attempts and adjustments.
  1. Mitigating Bias 
  2. Prompt engineering can help mitigate bias by providing specific instructions to LLMs to ensure responses are fair and inclusive.
  3. Injection Concerns
  4. Similar to security concerns like SQL injection, prompt injection is a potential issue that needs to be addressed at the client, app, and LLM level. Anomaly detection and review of prompts can help mitigate. 
  5. We’re Curious: do any of you use a global security mandate at each prompt?