Prompt Engineering Is the Skill Nobody Teaches (But Everyone Needs)
Most people interact with AI like a search engine — one vague question, one disappointing answer. The problem isn't the tool. It's the input. Here's why learning to prompt well is the highest-leverage skill you can develop right now.
The first time a well-crafted prompt genuinely saved me hours, I was preparing a literature review for a grant application in the second half of 2024. I’d been manually reading through dozens of papers, extracting methodological details, and building a comparison table by hand. On a whim, I fed the AI a detailed description of my research question, the specific methodological features I needed to compare, the format I wanted the output in, and three example papers as reference. What came back wasn’t perfect — but it was roughly 65% of the way there in about two minutes. Cleaning it up took another twenty.
Compare that to my first attempt months earlier: “Help me with my literature review.” The response was a generic, surface-level summary that could have come from a first-year textbook. I closed the tab and went back to doing it manually, convinced the whole thing was overhyped.
Same tool. Completely different results. The only thing that changed was what I gave it. And now, with more capable models and AI coding assistants, I can get 90% of the way there — not just 65%. The prompting principles are the same; the ceiling keeps rising.
The Problem: We’re All Prompting Badly
Most people interact with AI the way they interact with Google — type a few words, hit enter, hope for the best. “Summarise this paper.” “Help me with my data.” “Write a literature review.”
These aren’t prompts. They’re wishes. And when the output is vague, generic, or wrong, people conclude that AI isn’t useful for serious work. I hear this constantly from colleagues: “I tried ChatGPT, it wasn’t very good.” When I ask what they typed, it’s almost always a single sentence with no context.
The tool isn’t broken. The input is.
This matters because AI assistants are fundamentally different from search engines. A search engine retrieves existing documents. An AI model generates a response based on everything you give it. The more you give it, the better the output. Give it nothing, and you get nothing useful back.
Why Academics Need This Especially
Research tasks are nuanced. You’re not asking for the capital of France — you’re asking for a comparison of mixed-effects modelling approaches for nested longitudinal data with missing observations. That’s a prompt that requires domain-specific context, and no model will get it right if you don’t provide that context.
Think about what you actually need AI to help with:
- Literature review — not “find me papers” but “compare the methodology across these six studies and identify gaps in how they measure X”
- Data analysis — not “analyse my data” but “I have a repeated-measures design with three conditions, 45 participants, and 12% missing data — suggest an appropriate analysis and write the Python code” (see the code sketch below)
- Grant writing — not “write a grant” but “here’s my specific aims draft and the funding body’s assessment criteria — identify where my argument is weakest and suggest how to strengthen it”
- Course material — not “make slides about statistics” but “I’m teaching a second-year research methods course to psychology students with no coding experience — create a 15-minute activity that introduces p-values using a coin-flipping simulation”
Every one of these requires context the model doesn’t have. Your job is to provide it. The model’s job is to do something useful with it.
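To make the data-analysis example above concrete, here is a minimal sketch of the kind of code such a prompt can come back with. It is illustrative only: the file name experiment_data.csv and the column names participant, condition, and score are assumptions, and the mixed-effects model is one reasonable option among several, not a prescription.

```python
# Sketch of the kind of output a well-specified data-analysis prompt might produce.
# Assumes long-format data: one row per observation, with columns
# "participant", "condition", and "score" (all names illustrative).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")

# With ~12% missing observations, drop rows with a missing score; the mixed
# model copes with the resulting unbalanced design, though multiple imputation
# is another option worth asking the assistant about.
df = df.dropna(subset=["score", "condition"])

# Mixed-effects model: fixed effect of condition, random intercept per
# participant to account for the repeated-measures structure.
model = smf.mixedlm("score ~ C(condition)", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

The specific model matters less than the fact that the prompt contained enough detail (design, sample size, missingness) for the assistant to propose something defensible rather than defaulting to the simplest analysis it could guess at.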
The Three Things That Actually Matter
After a year of using AI assistants for almost everything in my workflow, I’ve landed on three things that separate effective prompting from the “help me with my research” approach.
1. Context
Tell the model who you are, what you’re working on, and who the output is for. This isn’t filler — it fundamentally changes the response.
In my honours course — Practical AI for Behavioural Science — I teach students to start every prompt with context. Not because it’s a rule, but because they immediately see the difference. A prompt that begins “You are helping a fourth-year psychology student analyse qualitative interview data using thematic analysis” produces dramatically better output than one that begins “Analyse this data.”
Context includes your role, the project, the domain, the audience, the purpose. The more specific you are, the more specific the output. And context isn’t just what you write in the prompt — it’s also the files, documents, and materials you feed alongside it. The prompt sets the frame; the materials fill it in.
2. Materials
The model can only work with what you give it. If you’re asking it to review a paper, give it the paper. If you’re asking it to write analysis code, describe the dataset — columns, types, sample size, any quirks. If you’re asking it to draft a rubric, give it the learning objectives, the assessment task description, and an example of what a good submission looks like.
This is the real “ghost in the machine”: you, your intelligence, and your understanding of the problem and what it takes to solve it. In short, the quality of the output is directly proportional to the quality of the input materials. This is where most people fall short: they ask the model to produce something from nothing, then wonder why the result is thin.
Your data description. Your paper excerpts. Your rubric. Your marking criteria. Your existing notes. Your old presentations or lecture slides. Your previous analysis code or results sections. Relevant web pages, or the methods and analysis sections of published papers. Hand it everything relevant and let the model work with real material instead of inventing from its training data.
3. Planning and Iteration
Don’t start by asking for the final product. Start by building a plan — with the AI.
This is the mental shift that matters most. People type a prompt, read the response, and either accept it or reject the whole thing. That’s not how this works. The best results come from working in stages: plan first, then generate, then refine. Describe what you want to achieve. Ask the AI to help you structure an approach. Review the plan, adjust it, and then ask for the output. The thinking happens before the writing — not after.
And when the output arrives, it’s a draft, not the answer. You read it, identify what’s wrong or missing, and refine: “Good, but make the tone more formal.” “The third point is wrong — here’s what I actually mean.” “Now rewrite this for a non-technical audience.” “What are the assumptions underlying that analysis method?” “Are there alternative approaches I should consider?” Each round of feedback narrows the output toward what you actually need. Three or four rounds of refinement typically get you to something genuinely useful. One-shot prompting almost never does.
This plan-first approach applies to everything — from writing a grant application to building a data analysis pipeline to preparing a lecture. I’ll go deeper into this in a separate post — Planning Is Everything — coming soon.
The Unlock Most People Miss
Here’s the thing that changed my prompting more than anything else: ask the model to help you prompt better.
Try this: before you ask your actual question, say:
I want to ask you to help me with [task]. Before I do, what information would you need from me to do this really well? Ask me questions.
The model will come back with a list of clarifying questions — the exact context and materials it needs to produce a good response. Answer those questions, and your prompt essentially writes itself. Modern reasoning models and the agentic systems built on top of foundation LLMs (large language models) are increasingly designed to ask clarifying questions by default. But even with models that don’t do this automatically, you can trigger it with the prompt above.
This is meta-prompting: using the AI to improve your interaction with the AI. It sounds circular, but it’s extraordinarily practical. The model knows what it needs. It just can’t ask unless you invite it to.
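For those who script their AI interactions rather than working in a chat window, the same two-step pattern is straightforward to reproduce with any chat-style API. The sketch below is an assumption-laden illustration, not part of the workflow described above: it uses the OpenAI Python SDK, an illustrative model name, and a made-up task; any comparable API would work the same way.

```python
# Two-step meta-prompting sketch (illustrative; any chat-style LLM API works).
from openai import OpenAI

client = OpenAI()   # assumes an API key is configured in the environment
MODEL = "gpt-4o"    # illustrative model name

task = "Draft a marking rubric for a second-year research methods report."

# Step 1: before asking for the output, ask the model what it needs to know.
history = [{
    "role": "user",
    "content": (
        f"I want to ask you to help me with this task: {task} "
        "Before I do, what information would you need from me to do this "
        "really well? Ask me questions."
    ),
}]
reply = client.chat.completions.create(model=MODEL, messages=history)
questions = reply.choices[0].message.content
print(questions)

# Step 2: answer the clarifying questions, keep them in the conversation
# history, and only then ask for the actual output.
answers = input("Your answers to the questions above:\n")
history += [
    {"role": "assistant", "content": questions},
    {"role": "user", "content": answers + "\n\nNow please complete the original task."},
]
draft = client.chat.completions.create(model=MODEL, messages=history)
print(draft.choices[0].message.content)
```

Keeping the questions and your answers in the message history is the point: the second request arrives with exactly the context the model said it needed.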
I use this approach constantly, and I teach my students to do the same. It’s the fastest way to go from “I don’t know what to type” to a well-structured prompt that produces useful output.
Students Need This Too
This isn’t just an academic productivity issue. Students arrive at university with ChatGPT on their phones and no framework for using it well. They paste in assignment questions and submit whatever comes back — then they get mediocre grades and either blame the AI or, worse, conclude that their own thinking doesn’t matter.
I teach my students prompting as a core skill alongside the domain content. Students learn to provide context, feed in materials, iterate on outputs, and — crucially — verify everything the model produces. The goal isn’t to make them dependent on AI. It’s to make them effective users of it, the same way we teach them to use statistical software or academic databases.
The students who learn to prompt well don’t just get better AI output. They get better at articulating what they actually need, which makes them better researchers and clearer thinkers. Prompting well forces you to be precise about your question — and that precision has value far beyond the AI interaction.
Where to Start
If you want to move beyond vague prompts and start getting genuinely useful output from AI assistants, I’ve put together a practical guide with ready-to-use prompt templates for academic workflows — literature review, data analysis, grant writing, course material, and more. It also covers meta-prompting techniques and common mistakes to avoid.
Read the full guide: Prompt Engineering for Academics
The guide is the reference you come back to. This post is the argument for why you should bother. The short version: prompting well isn’t a nice-to-have. It’s the skill that determines whether AI tools are a genuine productivity multiplier or an expensive autocomplete.
I’ll continue updating the guide as things change — and they change fast. The models, the capabilities, and the best practices are all evolving at a pace that makes anything written today partially outdated within months. That’s the reality of working with this technology right now. The guide will stay current; the principles in this post will stay relevant longer.
Prompting is thinking. The better you get at articulating what you need — context, materials, constraints — the better the AI performs. But more importantly, the better you perform. The discipline of writing a good prompt is the discipline of clear thinking, and that’s a skill that transfers everywhere.
— Michael Richardson
Professor, School of Psychological Sciences
Faculty of Medicine, Health and Human Sciences
Macquarie University
AI Disclosure: This article was written with the assistance of AI tools, including Claude. The ideas, opinions, experiences, and workflow described are entirely my own — the AI helped with drafting, editing, and structuring the text. I use AI tools extensively and openly in my research, teaching, and writing, and I encourage others to do the same. Using AI well is a skill worth developing, not something to hide or be ashamed of.
It’s also worth acknowledging that the AI models used here — and all current LLMs — were trained on vast quantities of text written by others, largely without explicit consent. The ideas and language of countless researchers, educators, and writers are embedded in every output these models produce. Their collective intellectual labour makes tools like this possible, and that contribution deserves recognition even when it can’t be individually attributed.