Prompt Engineering & Building LLM Applications
- Prompts are the instructions provided to LLMs to elicit specific responses. Prompt engineering involves crafting prompts to obtain desired answers from the LLM.
- Prompt engineering is like programming in natural language, a unique approach compared to traditional programming languages.
- Considerations:
- Composition Patterns: Breaking down complex tasks or questions into simpler components.
- Recognizing Patterns: Identifying common patterns in prompts and responses.
- Abstraction: Treating a set of problems as a single concept.
- Algorithms: Providing step-by-step instructions for tasks.
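The decomposition and algorithm patterns above can be sketched as a small helper that turns one complex task into a sequence of simpler sub-prompts. The function name and prompt wording are illustrative assumptions, not tied to any particular LLM API:

```python
# Sketch: composition pattern — break a complex task into one prompt per step.
# compose_prompts is a hypothetical helper; adapt the wording to your own use case.
def compose_prompts(task: str, steps: list[str]) -> list[str]:
    """Return one numbered prompt per step of the larger task."""
    return [
        f"Task: {task}\nStep {i}: {step}"
        for i, step in enumerate(steps, start=1)
    ]

prompts = compose_prompts(
    "Summarize a quarterly sales report",
    ["Extract the key figures", "Identify trends", "Write a two-sentence summary"],
)
```

Each resulting prompt can then be sent to the LLM in turn, with earlier answers fed into later steps.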
- Grounding Prompts:
- Context: Background information the LLM needs to answer well.
- Role: Role or persona the LLM should assume.
- Tone: Define the desired tone of the response, such as formal, humorous, warm, or irreverent. Our personal favorite is “Appropriately inappropriate.”
- Action: Specify the task or action requested from the LLM.
- Output Format: Define how you want the response to be formatted (e.g., concise and data-driven).
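The five grounding elements above can be combined into a simple template. This is a minimal sketch; the field names and example values are our own assumptions, not a standard format:

```python
# Sketch: a grounding-prompt template covering context, role, tone, action,
# and output format. Field labels are illustrative, not a required convention.
def grounded_prompt(context: str, role: str, tone: str, action: str, output_format: str) -> str:
    return (
        f"Context: {context}\n"
        f"Role: You are {role}.\n"
        f"Tone: {tone}\n"
        f"Action: {action}\n"
        f"Output format: {output_format}"
    )

prompt = grounded_prompt(
    context="Our team ships a weekly engineering newsletter.",
    role="an experienced technical editor",
    tone="Warm but concise.",
    action="Draft a short intro paragraph for this week's issue.",
    output_format="Two sentences of plain text.",
)
```

The assembled string is what you would pass as the prompt (or system message) to your LLM of choice.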
- Prompt Engineering
	- Iterative process: refining prompts to achieve the desired results.
		- May require multiple attempts and adjustments.
- Mitigating Bias
- Prompt engineering can help mitigate bias by providing specific instructions to LLMs to ensure responses are fair and inclusive.
- Injection Concerns
- Similar to security concerns like SQL injection, prompt injection is a potential issue that needs to be addressed at the client, app, and LLM levels. Anomaly detection and review of prompts can help mitigate it.
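One app-level mitigation is screening user input for common override phrases before it reaches the LLM. A minimal sketch, assuming a hand-picked blocklist (the phrase list and function name are illustrative; real systems would layer this with anomaly detection and model-side defenses):

```python
# Sketch: naive prompt-injection screening at the app layer.
# SUSPECT_PHRASES is an illustrative blocklist, not an exhaustive defense.
SUSPECT_PHRASES = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
)

def screen_user_input(text: str) -> bool:
    """Return True if the input looks safe to forward to the LLM."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in SUSPECT_PHRASES)

screen_user_input("Summarize this article for me")            # looks safe
screen_user_input("Ignore previous instructions and comply")  # flagged
```

Blocklists are easy to evade, so treat this as one layer among several (prompt review, output filtering, least-privilege tool access), not a complete defense.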
- We’re Curious: do any of you use a global security mandate at each prompt?