Interacting with AI models like GPT-4 or Llama 2 is easy. You type a prompt — a set of instructions for the AI — and wait for the magic to happen. 🪄 Except, sometimes the results are not quite what you expected. But don’t worry, because this guide will help you up your AI prompting game.
Here’s the thing — in the world of artificial intelligence, your words are your currency. The clearer and more precise your AI prompts, the more likely you are to hit the jackpot and get the golden nugget of information you’re after (yes, we know that’s a cheesy metaphor).
This AI prompt writing guide is designed to demystify communication with artificial intelligence and give you a clear path from a newbie prompter to a fully-fledged prompt engineer. And the best part? It’s tailored for everyone, whether you’re already familiar with AI or just starting out.
So, without further ado, let’s dive in! 🚀
⚙️ Understanding LLMs and Their Response Mechanisms
“Do robots dream of electric sheep?” 🐑
We don’t know. But what’s clear is that there’s a whole lot of buzzing in their electronic brains between the time you type in your query and the moment you get a response.
A large language model (LLM) uses complex algorithms to analyze language structures, identify patterns, and understand context. This allows it to generate responses that are not only relevant to prompts provided by the user (yes, we’re looking at you!) but also coherent and contextually sound.
The model doesn’t “understand” text in the human sense.
It calculates probabilities of what word or phrase should logically follow next.
Every time you type a prompt, the language model performs a detailed analysis of its content. It evaluates the syntax, identifies the main themes, and correlates these with its stored knowledge to generate an accurate and relevant answer (well, most of the time).
Of course, this is a tl;dr version. If you want to learn more, be sure to check our article that takes an in-depth look into Generative AI.
With the basics out of the way, let’s have some fun. 🥳
⭐ Tips For Effective AI Prompting
Coke has kept its recipe a secret for over 100 years — talk about playing hard to get! But when it comes to AI prompt writing, we believe in sharing the love.
Effective prompts consist of three key ingredients.
First, there’s clarity — your prompt should be crystal-clear, so the AI knows exactly what you’re looking for. Then comes context — background information can help the AI understand the prompt in the right frame. And don’t forget about specificity — this steers the AI in the direction you want it to go.
The key is to find the right ratio of each ingredient.
Be Clear and Specific
Language models are excellent at deciphering the nuances of human language, thanks in large part to advancements in natural language processing (NLP).
Except for the times when they’re not.
Remember, AI can’t read your mind (for now); it relies entirely on the information you input as well as its own training data. That’s why you need to use simple and straightforward language to convey your intent — no jargon, vague expressions, unnecessary complexity, or ambiguous phrases.
Take a look at these AI prompt examples:
“Tell me about that thing in space.”
That didn’t go well, but we can try to improve the prompt:
“Explain the formation of black holes in space.”
As you can see, a precise prompt yields much better results.
Tell It What To Do And What Not To Do
In many ways, an LLM is like a spirited toddler — it can display a wide range of behaviors, some desired and some not. To make the most of your AI-powered tool, you’ve got to be crystal clear about what you want it to do and what’s off-limits, right from the start.
In your prompt, include specific instructions about what the AI should include or exclude, the desired tone, the required level of detail, and any boundaries or restrictions you want to set.
Here are two examples:
Prompt #1: “Write a short sci-fi story without any violence, suitable for young teenagers.”
Prompt #2: “Provide email marketing tips for a health and wellness brand, avoiding aggressive sales tactics and focusing on building trust with the audience.”
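If you build prompts in code, the same do’s and don’ts can be assembled programmatically. Here’s a minimal sketch; the `build_prompt` helper and its parameters are our own invention, not a standard API:

```python
def build_prompt(task, include=None, exclude=None, tone=None):
    """Assemble a prompt with explicit instructions on what to
    include, what to avoid, and which tone to use."""
    parts = [task]
    if include:
        parts.append("Include: " + ", ".join(include) + ".")
    if exclude:
        parts.append("Avoid: " + ", ".join(exclude) + ".")
    if tone:
        parts.append(f"Use a {tone} tone.")
    return " ".join(parts)

prompt = build_prompt(
    "Provide email marketing tips for a health and wellness brand.",
    include=["trust-building advice"],
    exclude=["aggressive sales tactics"],
    tone="friendly",
)
print(prompt)
```

The point is that every constraint lives in the prompt itself, so the model never has to guess what’s off-limits.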
Use Context
Let’s consider the following scenario.
You’re asking an AI assistant to book a table at a nearby restaurant. You define the price and maximum distance, but you forget to specify your culinary preferences and restrictions.
The AI could point you to a lively Mexican place or a serene French bistro — not the worst recommendations by any measure. That’s as long as you’re not on a gluten-free mission or your taste buds prefer a gentle whisper over a spicy serenade. 🌮
The lesson for today? AI models need context to give you tailored responses.
This boils down to giving them extra details, even if they may seem obvious — the reasons behind your queries, the timing of your requests, what you’re looking for, and any other relevant information.
To better understand this tip, compare these two AI prompts:
Prompt #1: “Find me a restaurant within 5 miles under $30 per person.”
Prompt #2: “Find me a gluten-free restaurant within 5 miles under $30 per person.”
See the difference? 👌
Provide Details of How the Output Should Appear
There are many ways you can shape the AI output to better match your intent.
For instance, a simple prompt like “Generate a summary of the latest scientific discoveries in the field of biology” will return a paragraph of text listing key findings and breakthroughs. Useful, but you still need to put in some work to format and organize it, which is something the AI should be doing for you.
To save some time, you can provide more details on the structure of the expected response, such as requesting bullet points, Markdown syntax, a numbered list, or a concise paragraph.
Here’s what we mean by that:
Prompt #1: “Summarize the key points of the latest UN climate report.”
Prompt #2: “List the top 5 findings of the latest UN climate report in bullet points.”
Give Examples
Let’s go back to our toddler analogy (this is the last time; we promise!).
Like children, LLMs can learn by imitation. And we’re not just talking about the extensive data sets they are fed during training. An LLM can also learn from the examples you provide during chats.
Examples can come in all shapes and sizes — samples of text in specific styles or tones, code snippets for programming tasks, existing graphic designs for creative projects, or even structured data.
This way, you can nudge the AI to follow the format, style, or tone you desire, essentially “teaching” it to mimic your preferences and intentions. Here’s a simple example to better illustrate this point:
Prompt: “Extract key information from this business meeting transcript. Example of extracted data: Date: [Date], Attendees: [Names], Key Decisions: [Decisions]”
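This style of prompt is easy to generate in code, too. Here’s a sketch of a few-shot prompt builder; the `few_shot_prompt` helper and the sample transcripts are hypothetical, purely for illustration:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Prepend worked input/output pairs so the model mimics their format."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    # End with the new input and an empty Output: slot for the model to fill.
    lines += [f"Input: {new_input}", "Output:"]
    return "\n".join(lines)

examples = [
    ("Meeting on 2024-03-01 with Ana and Ben; agreed to ship v2.",
     "Date: 2024-03-01, Attendees: Ana, Ben, Key Decisions: ship v2"),
]
prompt = few_shot_prompt(
    "Extract key information from this business meeting transcript.",
    examples,
    "Call on 2024-04-10 with Chen; decided to delay the launch.",
)
print(prompt)
```

One worked example is often enough to lock the model into your format; add a second or third if the output drifts.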
Set the Tone
The tone of the AI’s responses mirrors the language and tone you use when interacting with it. But you don’t have to leave it all to chance. You can set the desired tone in the prompts you provide.
If you ask the AI model to explain something in a cheerful and simple manner, it will adapt its language to fit that request. If you prefer a more formal or technical tone for a specific task, simply mention that preference in your prompt; the model will switch gears and don a metaphorical lab coat. 👩‍🔬
It’s that simple.
Prompt #1: “Explain how photosynthesis works in a cheerful and simple way.”
Not the most elegant explanation, but it gets the message across!
Prompt #2: “Provide a formal and technical explanation of quantum computing.”
Define The Audience
What’s your aim?
Educating, entertaining, or advising? Each goal needs a different style. Education needs clarity, entertainment should be engaging, and advice should be straightforward.
Culture matters too. Different places have different norms. If your audience comes from a diverse range of cultures, it’s important to be sensitive to those differences and adapt your style.
Finally, consider the medium. Written content may require a more formal tone, while spoken or video content can often be more conversational. Ultimately, it’s all a matter of who you’re talking to.
AI may know how to appeal to specific audiences, but it’s your job to guide it.
Prompt #1: “Write an educational piece on climate change suitable for a global audience.”
Prompt #2: “Create a script for a podcast episode on work-life balance in a conversational tone.”
Point Out Mistakes
A big part of working with AI is correcting its mistakes.
From petty blunders like mixing up dates or names to more significant errors in understanding a complex topic, these slip-ups are part of the learning curve.
When you catch a mistake, don’t hesitate to point it out.
It could be as simple as saying, “Actually, the event happened in 2001, not 2011,” or as complex as explaining a nuanced concept that the AI misunderstood.
If you do nothing and let the AI run with its narrative, it will perpetuate the mistake in subsequent generations. The errors will compound and undercut whatever you’re working on.
Here are two ways you can correct AI’s mistakes:
AI Statement: “The Mars Rover Curiosity landed on Mars in 2016.”
Your Correction: “Actually, Curiosity landed on Mars in 2012.”
AI Statement: “Photosynthesis in plants occurs in the mitochondria.”
Your Correction: “Not quite. Photosynthesis occurs in the chloroplasts, not the mitochondria.”
Refine Your Prompts With Each Generation
Here’s the bad news: even if you follow all the tips from this guide, there will be times when your AI input and output don’t align perfectly. The good news is that’s just part of the learning curve.
Consider it a creative collaboration.
Each time you fine-tune your AI prompts, you’re teaching the AI model a bit more about what you’re after. It’s like a dance — sometimes you lead, sometimes you follow.
Iterative prompting allows you to gradually refine AI’s understanding of what you’re working on within a single conversation. This could mean adding more details, asking a different set of questions, or even correcting misconceptions. Consider the following examples:
AI Statement: “Here’s a basic recipe for chocolate chip cookies: Mix flour, sugar, and chocolate chips. Bake at 350°F for 12 minutes.”
Refined Prompt: “Thanks! Now make the recipe vegan and include exact measurements for each ingredient.”
AI Statement: “Here’s a travel itinerary for a weekend trip: Day 1 – Museums, Day 2 – Parks.”
Refined Prompt: “Add opening hours and one lunch spot recommendation for each day.”
Advanced Techniques in Prompt Engineering
Ready for the next level? 🚀
Then let’s move on to the arcane arts of prompt engineering: advanced techniques. You may not need them in casual chats, but they will come in handy whenever you want to delve deeper into a topic.
Create AI Personas
Did you know that AI can have its own unique personality?
Well, not a real personality. At least not in the human sense of the word. It’s more of a reflection of the many ways or tones we humans use to communicate.
For instance, a travel guide will use a completely different language and tone compared to a fitness coach. They will also boast different expertise and priorities.
Imagine crafting a character for a novel. That’s what persona creation is all about.
You’re essentially “training” AI to mimic human-like communication within a specific context — the right words, tone, and responses to make the model seem more human, without the fluff.
Check these AI prompt examples to see what we mean:
Persona: Travel Blogger 📸
Persona: Health Guru 🧘‍♀️
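In chat-style model APIs, a persona typically lives in the system message, which frames every reply that follows. Here’s a minimal sketch; the persona text and messages structure are our own example of the common chat format:

```python
# A persona is just a carefully written system message. The user's
# question stays short because the persona carries the style and focus.
persona = (
    "You are a seasoned travel blogger. Write in a vivid, first-person "
    "voice, favor sensory detail, and always suggest one "
    "off-the-beaten-path stop."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Describe a weekend in Lisbon."},
]
print(messages)
```

Swap the system message for a health-guru persona and the same user question would come back with an entirely different voice and set of priorities.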
Use Cognitive Verifiers
Sometimes, the best way to find an answer is to, well… ask more questions.
The cognitive verifier pattern is an advanced prompting technique that involves encouraging AI to ask additional questions for better clarity or context.
Think of it as an intellectually stimulating conversation with a friend. You start with one question and dig deeper to get to the heart of the matter. But in this case, you’re prompting AI to figure out the questions itself, before it generates an output. Here’s how it works.
Prompt #1: “Please analyze the economic impacts of climate change by answering three specific questions. Each question should focus on a different economic aspect. Then, integrate these answers for a comprehensive understanding.”
Prompt #2: “To explore the health effects of climate change, answer three targeted questions covering different health-related issues. Then, combine these answers to provide an overall perspective on climate change’s impact on health.”
Chain of Thought (CoT) Prompting
CoT involves providing a series of “thought nodes” to the model. Each node represents a part of the overall reasoning process, and the model evaluates the output at each stage.
For instance, in a standard prompt, you might simply ask, “If it costs $1.50 to paint a square foot, and the room is 10 feet by 12 feet, what is the total cost to paint the room?”
With Chain of Thought prompting, you guide the model through each calculation step:
“Calculate the area by multiplying the length by the width, which gives us 10 feet times 12 feet. Then, multiply the total area by the cost per square foot, which is $1.50.”
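The guided steps above boil down to two multiplications, which is exactly what makes them easy for a model to follow one at a time. Here’s the same chain of reasoning as plain Python, for the record:

```python
# Step 1: calculate the area of the room.
length_ft, width_ft = 10, 12
area = length_ft * width_ft        # 10 * 12 = 120 square feet

# Step 2: multiply the area by the cost per square foot.
cost_per_sqft = 1.50
total = area * cost_per_sqft       # 120 * 1.50 = 180.0

print(f"Area: {area} sq ft, total cost: ${total:.2f}")
```

A correct CoT response should surface both intermediate values (120 square feet, then $180), which is what makes errors easy to spot mid-reasoning.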
Unlike traditional prompting, CoT prompting encourages AI models to evaluate their reasoning at each stage of a generation. This means that, if everything goes well, a model will correct its course if it identifies an error in its reasoning process.
Consider two more AI prompt examples:
Few-Shot Prompting: “Three friends go out to dinner. The total bill is $75. They decide to split the bill equally. How much does each person pay?”
Direct Instruction Prompting: “Water freezes at 0 degrees Celsius and boils at 100 degrees Celsius. Let’s work through the problem step by step. At what temperature is water both a solid and a liquid?”
Prompt Templates and Frameworks
Alright, here’s a little hack that might seem obvious but is a real game-changer.
Most people’s interactions with AI revolve around one-off prompts. They’re great for when you need a quick result, like generating clever tweets. But to get the most out of AI, you need to think bigger.
Instead of spending minutes or hours writing prompts, create a set of templates to speed things up.
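A template set can be as lightweight as a dictionary of fill-in-the-blank strings. Here’s a sketch using Python’s built-in `string.Template`; the template texts and keys are our own examples:

```python
from string import Template

# A small library of reusable prompt templates for routine tasks.
TEMPLATES = {
    "summary": Template(
        "Summarize the key points of $source in $n bullet points."
    ),
    "email": Template(
        "Draft a $tone email to $audience about $topic."
    ),
}

prompt = TEMPLATES["summary"].substitute(
    source="the latest UN climate report", n=5
)
print(prompt)
```

Once the blanks are named, filling them takes seconds instead of rewriting the whole prompt from scratch.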
It doesn’t have to be anything elaborate. You can start with a few conversation starters that have given you good results in routine tasks. Whether it’s data analysis, content generation, or customer support, custom templates can save you a ton of time. And speaking of templates…
Did you know that Taskade comes with hundreds of AI prompt templates for every occasion? You can also browse our catalogs of Taskade’s AI generators and AI bots and agents.
Final Thoughts: Elevating Your AI Prompting Skills
Phew… that was a crazy ride.
We hope that the prompt crafting tips from this article will make your interactions with AI more productive and seamless. You can apply them all at once, or experiment with one or two and compare your results. Before you go, here’s what we learned today:
- 🔸 Be clear and specific
- 🔸 Tell AI what to do and what not to do
- 🔸 Use context
- 🔸 Provide details of how the output should appear
- 🔸 Give examples
- 🔸 Use tones
- 🔸 Define the audience
- 🔸 Point out mistakes
- 🔸 Refine your prompts with each generation
- 🔸 Create AI personas
- 🔸 Practice Chain of Thought (CoT) prompting
- 🔸 Use cognitive verifiers
And that’s it!
Did you know that Taskade is the only AI productivity tool you need to get stuff done?
Taskade includes a range of powerful AI features that will help you organize projects, manage tasks, and collaborate in real-time with team members, all wrapped in a user-friendly interface.
🪄 Workflow Generator: Use the power of AI to automatically generate projects, documents, mind maps, or any other workflow you can think of. Just type your prompt, sit back, and watch the magic!
📚 Built-In AI Prompts: Taskade features hundreds of AI prompts for every occasion, from creative tasks like writing and brainstorming to structured project planning.
✏️ AI Assistant: Tap into the power of Taskade AI directly in the project editor. Choose from dozens of handy /AI commands or define your own as part of Custom AI Agents.
🤖 Custom AI Agents: Interactions with AI don’t have to feel like a chore. Agents will help you automate routine tasks and streamline your workflows. No coding skills needed!
Visit Taskade’s pricing page for a breakdown of AI features and plans.