xKiwiLabs

Welcome to xKiwiLabs

Why I started this site, how AI tools transformed my academic workflow, and why I think every researcher and student can do the same — regardless of their technical background.


This site has been a long time coming.

For the past couple of years, colleagues have been asking me the same questions: “How did you do that so fast?” “What tool did you use for that?” “How are you managing to keep up with all of this?” And every time I explain — the tools, the workflow, the approach — the reaction follows a familiar pattern. First, genuine interest. Then, as soon as I mention code, or the terminal, or VS Code, the shutters come down: “That sounds too complicated.” “I don’t code.” “I wouldn’t know where to start.”

I get it. But I also know it’s not true. Every one of those colleagues is more than capable of working this way. So are their students. The barrier isn’t ability — it’s exposure. Nobody showed them how to start.

That’s what xKiwiLabs is for.

What follows is a bit of my background and how it motivates this site. If you want to skip the mini autobiography, jump straight to Why This Site Exists and What’s Coming — those sections are far more relevant to what you’ll actually find here.

How I Got Here

I’ve been coding since I wrote my first game on my family’s Commodore VIC-20 in the 1980s. From there it was BASIC, then C and OpenGL and Visual Basic through the 90s, C++ and C# and MATLAB through the 2000s, then Python and a wide range of web languages and frameworks from the 2010s onward. Today I use C++ for production applications, C# for game and VR development in Unity, Python for data analysis and AI tooling, and TypeScript with React and Next.js for web development — but the point isn’t the languages. The point is that coding has been part of how I think and work for as long as I can remember.

That technical background has been central to my academic career — and I’ve always known it. During my Master’s work in the late 1990s, my supervisor Dean Owen encouraged me to expand my coding skills so I could build my own VR applications — to be self-sufficient and not too reliant on others. The environments I built weren’t commercial-quality products — far from it — but they were good enough for research, and building them myself meant I understood every detail of the experimental setup.

From that early VR work through to my PhD in the early 2000s, I was building basic VR and graphics applications for research experiments, along with analysis tools and packages for my own and colleagues’ research. I continued this following my PhD — building custom research applications, real-time sensing systems, computational models, and data analysis pipelines for myself and for others across multiple institutions. I wrote the software that collected the data, analysed it, and often generated the figures for the papers. Coding wasn’t a side skill — it was the engine that made my research possible.

But here’s the thing: for most of my career, that was unusual in my field. Psychology and cognitive science aren’t traditionally coding-heavy disciplines. Most of my colleagues used SPSS, Excel, and PowerPoint. I was the one people came to when they needed something automated or custom-built. And that was fine — I enjoyed it, and it gave me an edge.

AI Didn’t Start in 2023

My use of machine learning and AI tools in research goes back well over a decade. Long before the current wave of generative AI, I was using ML for pattern recognition in behavioural data, classification of movement dynamics, pose detection and human movement analysis, and computational modelling of human coordination. NLP tools — even early ones, predating the transformer revolution — were part of how I approached text analysis in research.

In 2019, I started working more closely with Mark Dras from Computing at Macquarie and became much more aware of how the modern NLP landscape was changing. BERT had arrived the year before, transformer-based models were rapidly improving, and it was clear these tools could do things with language that previous approaches couldn’t. I started integrating them into research workflows — text classification, semantic analysis, automated coding of qualitative data. Each new model generation opened up tasks that had previously been impractical.

Then, from 2023 onwards, everything accelerated. ChatGPT, Claude, Gemini, Copilot — large language models became genuinely useful as general-purpose assistants. Not just for research tasks, but for everything: writing, teaching, administration, tool-building, data analysis, project management. I started using them for more and more of my daily work, and the productivity gains compounded.

By 2025, the integration was near-total — and now in 2026, it’s total. As I’ve written about in detail in another post, there is now barely a task or hour in my working day where I’m not using an AI tool, an AI agent, or building my own. Writing papers, analysing data, preparing lectures, reviewing grants, building custom tools, managing projects — AI assistants are woven into all of it. Not as a gimmick or an experiment, but as fundamental infrastructure for how I work.

The Coding Advantage — and Why It’s No Longer a Barrier

I won’t pretend that being a lifelong coder hasn’t helped. It absolutely has. Knowing how to code meant I could automate repetitive tasks, build custom tools, process data at scale, and integrate systems together — long before AI made any of that easier. That head start compounded over decades.

But here’s what’s changed: you don’t need that head start anymore.

The current generation of AI coding assistants — GitHub Copilot, Claude, ChatGPT — means that someone with zero programming experience can describe what they want in plain English and get working code back. I’ve seen this firsthand with my students — undergraduates, honours students, Master’s and PhD students — most of whom had never written a line of code before. Some take to it with a practical acceptance, others genuinely enjoy it, but all of them gain enough proficiency to be effective. With AI coding assistants, they’re building HTML presentations, writing data analysis scripts, doing advanced ML and AI work, and using AI tools productively within weeks, not months.

The barrier to entry has collapsed. The tools that gave me a decades-long advantage are now accessible to anyone willing to spend a few hours learning the basics. And you can start for nothing. VS Code — the code editor I use for almost everything — is free. GitHub Copilot — an AI assistant that lives inside it — is free for academics and students. Most foundation models — ChatGPT, Claude, Gemini — have free tiers that are often all you need, at least to start with. And if you’re concerned about privacy, you can run models locally on your own machine using Ollama or LM Studio, both also free.

Full disclosure: if you go far enough down this road, you’ll probably end up paying for at least one tool. I use Claude Code almost exclusively as my coding assistant, with free tiers for other models and Gen-AI tasks — OpenAI, Gemini, image generation, and so on. But the entry point is genuinely free, no subscriptions required, and you can get a long way before you ever need to spend a cent.

What used to require years of programming experience now requires curiosity and a willingness to learn. That’s a fundamental shift, and it’s the reason I think every academic and student should be paying attention.

Why This Site Exists

When colleagues ask me how I work the way I do, I used to answer one conversation at a time. That doesn’t scale — and honestly, the answers are too long for a hallway chat. What I really need is a place to point people to. A place where I can share what I’ve learned, document the tools and workflows that actually work, and update things as the landscape evolves (which it does, constantly).

That’s xKiwiLabs. It started back in the 2000s as my studio identity for side projects — building research applications and tools for colleagues. I’ve revamped it as a platform for sharing the AI-assisted workflows, tools, and guides that I think can genuinely help academics and students work better.

Everything here is free, open-source, and built in the open. The blog posts are my perspective — opinionated, practical, based on what I actually use. The guides are the reference material: step-by-step instructions, prompt templates, tool recommendations. I’ll keep both updated as things change, because they will.

What’s Coming

I have a growing list of topics I want to cover — from specific tool setups to broader arguments about how AI is changing academic work. Some of the posts will be practical how-to guides. Others will be more opinionated takes on why certain approaches matter. A few will be about the things I’m building and the problems I’m trying to solve.

If you’re a researcher, academic, or student who’s curious about how AI tools can fit into your work — but you’ve been put off by the technical barrier or haven’t known where to start — this site is for you. You don’t need to be a coder. You don’t need to be technical. You just need to be willing to try something new.

I’ve been building out the current content over the past several weeks so there’s something here when the site goes live. There’ll be a lot more coming soon. But if you’re keen to get started on transforming your workflows — whether as a researcher, teacher, or student — these two posts are the best place to begin.

Welcome. Let’s get started.


Michael Richardson
Professor, School of Psychological Sciences
Faculty of Medicine, Health and Human Sciences
Macquarie University


AI Disclosure: This article was written with the assistance of AI tools, including Claude. The ideas, opinions, experiences, and workflow described are entirely my own — the AI helped with drafting, editing, and structuring the text. I use AI tools extensively and openly in my research, teaching, and writing, and I encourage others to do the same. Using AI well is a skill worth developing, not something to hide or be ashamed of.

It’s also worth acknowledging that the AI models used here — and all current LLMs — were trained on vast quantities of text written by others, largely without explicit consent. The ideas and language of countless researchers, educators, and writers are embedded in every output these models produce. Their collective intellectual labour makes tools like this possible, and that contribution deserves recognition even when it can’t be individually attributed.
