AI tools for designers who don’t code (yet)
A no-pressure guide for designers to open- and closed-source tools and communities
The AI space is exhaustingly noisy. Everyone’s either building a startup, pivoting their entire career, or issuing fatalistic statements about the future of entire industries in the wake of these all-powerful AI tools: Graphic design is dead, UX is dead, case studies are dead, Figma is dead, and so on.
But as we all know, only the Sith deal in absolutes, so this post is more about showing what’s actually out there — and maybe taking a bit of the pressure and some of the anxiety out of the conversation.
Because the truth is: There’s a lot of interesting stuff out there that’s genuinely worth digging into. There’s also a lot of scary stuff that’ll send shivers down your spine, but that’s worth knowing about, too.
So relax. Nobody’s dead.
For now, let’s just be curious together and see what we’ve got, shall we?
Just a quick heads-up:
This list is for designers of all stripes, beginners included, with a slight bias toward product designers. So if you’re about to roll your eyes at the mention of Perplexity because it feels “so basic” and “everyone should know about it by now,” please let people explore at their own pace. Gatekeeping doesn’t make anyone look smarter; it just makes these tools less accessible. Also, this is by no means an exhaustive list — just a semi-random selection of tools and communities that left an impression on me.
Essentials
Custom GPTs
While you’re likely already familiar with ChatGPT, I’d recommend checking out custom GPTs, too. These tailored versions of ChatGPT extend its capabilities, allowing it to do even more, adapted to your specific needs.
For instance:
- Write for Me is great for content creators, generating high-quality text quickly for blog posts, articles, and social media content.
- Scholar GPT is ideal for research, helping you find academic papers, summarize them, and provide citations.
- Consensus is perfect for analyzing data or gathering insights across different viewpoints to help with decision-making.
Custom GPTs make ChatGPT more specialized, so you can match it to your goals — whether that’s writing, research, data analysis, coding, etc. If you haven’t checked them out yet, give them a try! You’ll find them in the GPT Store.
Perplexity

Perplexity is a free, AI-powered search engine that blends ChatGPT-style conversational answers with real-time internet access — but with a much stronger focus on sources. It gives you direct, citation-backed answers that are great for researching a topic, summarizing content, or just getting quick, reliable facts without falling into a Reddit rabbit hole.

You can access it via the website, browser extension, or mobile apps (iOS and Android), and it’s honestly one of the best tools out there for anyone who wants answers instead of sifting through SEO-optimized fluff and tons of ad content that DOESN’T EVEN REMOTELY MATCH MY SEARCH QUERY BECAUSE THE ALGORITHM THINKS IT KNOWS ME BETTER THAN I DO!!
And speaking of SEO — this kind of tool shifts the whole dynamic. Instead of clicking through 10 links on Google and parsing outdated blog posts, you’re getting a cleaner, contextual answer immediately. That’s great for users, but you can see why publishers, content marketers, and the entire ad-driven search economy are tense. (If this becomes the default way people “Google,” the ripple effects are going to be huge.)
Careful, though: While Perplexity usually gives you a decent line-up of sources, I’ve had cases where the links provided weren’t relevant to the subject at all. Some were downright broken. So while it’s super helpful, it’s by no means perfect.
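If you ever want to pull those citation-backed answers into a script or prototype of your own, Perplexity also has a developer API that speaks the same chat format as OpenAI’s. Here’s a minimal sketch, assuming you’ve grabbed an API key from their settings (model names change, so double-check their docs):

```python
# Minimal sketch: asking Perplexity a question via its API.
# Assumes an API key from perplexity.ai; "sonar" is their lightweight
# online model at the time of writing -- check the docs for current names.
import requests

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "sonar",
        "messages": [
            {"role": "user", "content": "What are designers using AI image tools for?"}
        ],
    },
)
data = response.json()
print(data["choices"][0]["message"]["content"])  # the answer itself
print(data.get("citations", []))                 # the source URLs it leaned on
```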
Image & Video
CivitAI
A community-driven platform where people upload and remix Stable Diffusion models. You can browse styles, download checkpoints, and see what others are building — like GitHub meets DeviantArt. But the best part is the Create feature, where you can try different models, formats, and tweaks to create your own stuff. All for free. You can buy additional credits to generate more images, but you don’t have to in order to enjoy it and dig around. I personally use it to browse and try models before I dump them into my ComfyUI workflow, but more on that below.

Leonardo.ai
Leonardo.ai is beginner-friendly, but still gives you plenty of control over image and video generation. I especially like the Flow State generation, which generates little animations based on an image, perfect for animated backgrounds. Whether the naming is appropriate, or whether da Vinci is turning in his grave as we speak, is a different matter. In my experience, it’s more of a playground. It’s fun to mess around with (especially now that the new Veo 3 model is out), but getting something out of it that’s actually usable in my design workflow has been rather hard.

RunwayML
This is where it gets really fun. Runway lets you generate videos from images, animate scenes, remove objects, and more. It feels like After Effects without the suffering (albeit with much less control). Seeing an image come to life can feel like a serious breakthrough. It can also be the most cursed stuff you’ve seen in a long while. Something I’ve been doing quite often at work lately: I create an image of something (say, a graphics card) for our website, then upload it to Runway, say “rotate pls,” and have something quite decent a few seconds later.

Pika Labs
Text-to-video generation that feels like the early days of Midjourney, but in motion. Hit-or-miss — but honestly, the misses are sometimes more interesting.

Sora (by OpenAI)
This one you’ve probably already seen by now. Sora is OpenAI’s text-to-video model that turns written prompts into short, detailed video clips — up to 20 seconds long, in resolutions up to 1080p, and in pretty much any aspect ratio (widescreen, vertical, square, etc.). It launched quietly in late 2024, and it’s currently available to ChatGPT Plus and Pro users.
Type in a prompt, and get back a realistic video. You can also start from images or existing clips, then remix them using built-in tools like Storyboard, Remix, Blend, and Loop — which seems to be pretty good for pitching a concept or just seeing how far the model can go.
It’s powerful, but not perfect. Sora still struggles with things like real-world physics and spatial logic (though I find it pretty insane that we live in an age where expectations are high enough for this to count as a flaw). But as a glimpse into where video generation is headed? It’s kind of a big deal.
But there’s also a part that’s hard to ignore: it’s way too easy to generate videos of real people — celebrities, influencers, your coworker’s face — with surprisingly few guardrails, no real consent layer, no real friction. The creative potential is massive, sure. But so are the doors this opens for misinformation, deepfakes, and manipulation at scale.
It’s an impressive leap for AI-driven content creation. But also kind of terrifying.
ComfyUI
If you’ve read some of my last posts, you know: this is the “give me the knobs and dials” tool. A node-based, open-source interface for Stable Diffusion. It takes more setup, but you’ll learn a ton by messing with it. Here’s a cool playlist to get into it.
Also, if you don’t feel like waiting minutes for an output, consider renting a GPU from a marketplace like CloudRift.
Here’s a tutorial on how to do that.
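And once you have it running, a nice bonus: everything you build in the graph can also be triggered from the outside through ComfyUI’s little HTTP API. Here’s a minimal sketch, assuming the server is running locally on the default port and you’ve exported your graph with “Save (API Format)”:

```python
# Minimal sketch: queueing a workflow on a locally running ComfyUI server.
# Assumes the default address (127.0.0.1:8188) and a graph exported via
# ComfyUI's "Save (API Format)" option as workflow_api.json.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

# You can tweak node inputs before queueing; node IDs depend on your graph,
# so "6" below is just a hypothetical placeholder for a prompt-text node.
# workflow["6"]["inputs"]["text"] = "isometric workspace, neon lighting"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
print(urllib.request.urlopen(request).read())  # returns a prompt_id you can poll
```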

Audio & Video
ElevenLabs
Text-to-speech that actually sounds human. It’s scary-good. Great for narration, concept videos, or voiceovers when you don’t feel like hearing your own voice for the tenth take.
But as an audiobook enthusiast, it gives me a weird feeling. Is this what audiobooks will be like in five years? It’s just too perfect: the complete absence of breathing, pronunciation errors, and other human flaws makes it feel sterile to me. Am I the only one?
Anyway, do check it out here. For now, I’m somewhat relieved that the voices still struggle to capture real emotional depth and tension — so it’s not going to convincingly read me Lovecraft anytime soon (a mercy, really; the idea of a soulless silicon voice reciting those eldritch cadences would shred what’s left of my fragile sanity).
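For the curious: ElevenLabs also exposes its text-to-speech through a simple REST API, so you can script narration instead of clicking through the web UI. A minimal sketch, assuming an API key and a voice ID copied from your voice library (the model name may have changed by the time you read this):

```python
# Minimal sketch: generating narration with the ElevenLabs TTS API.
# Assumes an API key and a voice ID from your ElevenLabs voice library;
# the model_id is current at the time of writing -- check their docs.
import requests

VOICE_ID = "YOUR_VOICE_ID"  # placeholder: copy one from your voice library
response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": "YOUR_API_KEY"},
    json={
        "text": "This is a test narration for a concept video.",
        "model_id": "eleven_multilingual_v2",
    },
)
with open("narration.mp3", "wb") as f:
    f.write(response.content)  # the endpoint returns raw audio bytes
```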
Descript
Perfect for podcasts and voiceovers. Edit audio like it’s a Word doc, remove filler words with one click, and clone your voice if you forgot to record something. It’s pretty wild.
For (Product) Design People
UX Pilot
The elevator pitch: drop in a prompt (“social app for sharing local coffee spots”) and UX Pilot turns it into an editable wireframe in one click, already nested inside Figma. Because it generates native Figma layers, you can rearrange, restyle, or tear it apart just like any file you built by hand. It also offers one-click design reviews to help you improve the user experience.
But the truth is, most of us aren’t always designing the next coffee-sharing social network. So how does that pitch translate to an ordinary workday?
I’m mixed on this. Like most “instant UI” tools, the out-of-the-box results are meh at best; they’re fine for quick mock-ups of generic screens — settings pages, login flows, basic dashboards — but nothing you couldn’t grab from a community template. If speed is your absolute priority, this might work for you. UX Pilot also lets you hook up your own design system, so its output aligns more closely with your existing designs.
Then again, if you already have a design system that fulfills its requirements, you’re in a good position to do some rapid prototyping in Figma alone.

To its credit, UX Pilot does let you iterate: feed the frame new prompts and it revises copy, layout, and structure, which sets it apart from many one-and-done generators. The design-review feedback is decent (if generic), but it still can’t match an experienced designer’s intuition — much less real, data-driven decisions.
Uizard
Uizard’s cool trick that caught my attention right away is definitely its Screenshot Scanner: drop in a screen grab of any app or website and it spits back an editable mock-up you can tweak in the browser. Handy when a PM (or, in my case, the CEO) says, “Just make it sorta like this,” and you need a starting point fast.
Add its Autodesigner text-prompt feature and, on paper, you’ve got a full “idea → UI in seconds” pipeline.
The reality (as always, actually) is that output quality is hit-or-miss. Reviews call the editor “clunky,” the layouts generic, and anything beyond a simple landing page more work to fix than to build from scratch. It’s great for rough wireframes, quick stakeholder demos, or turning reference screenshots into something you can actually edit. But if you expect pixel-perfect, production-ready UI, be ready to wrestle with alignment, spacing, and that “unexpected error” popup that loves to eat unsaved changes.

Figma’s Jambot
Being the only designer on my team, I usually have a hard time getting everyone to collaborate in FigJam, which is why I haven’t really touched it much. But then I discovered Jambot: an AI-powered widget inside FigJam that lets you generate sticky notes and summaries, brainstorm ideas, or rephrase content directly on your canvas. It’s designed to support early ideation and collaborative sessions without needing to leave the whiteboard. I still think the best ideas arise from connecting at least two creative human minds, but it’ll do in a pinch.
For Designers: This Is Part of the Job Now
If you work in product, UX, visual design, or anything adjacent — this kind of exploration counts.
One of the most important things I’ve learned while exploring AI is that you don’t figure it out before you use it. You figure it out by using it.
With this list and my thoughts on these tools, I hope I’ve helped you:
- Figure out what’s there and what’s possible (and what’s still clunky)
- Imagine new workflows — like: instead of searching for stock photos, you prompt ChatGPT and let it generate visuals, then animate them in Runway, etc. (there’s a quick sketch of that after this list)
- Think about interfaces, trust, use cases, and new user behaviour
- Build technical empathy for how this stuff actually works
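To make that stock-photo workflow a bit more concrete, here’s what the first step can look like once you’re comfortable opening a terminal. A minimal sketch using OpenAI’s image API (the prompt is just an example, and model names change, so check the current docs):

```python
# Minimal sketch: generating a "stock photo" instead of searching for one.
# Assumes the openai Python package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; model names change, so check the docs.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="Flat-lay photo of a designer's desk in soft morning light",
    size="1024x1024",
)
print(result.data[0].url)  # download this, then animate it in Runway, etc.
```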
It also gives you ammo for the inevitable question from a founder, PM, or hiring manager:
“Galileo.ai designs the screens, UX Pilot spits out the wireframes — why do we still need a designer?”
Short answer: to tell the difference between a usable interface and AI-generated sludge.
Long answer: the job hasn’t changed — you’re still the one balancing user pain points with business goals and making sure the pixels (or the prompt output) actually serve a purpose.
Closing Thoughts: Try Stuff. Break Stuff. Rinse. Repeat.
You don’t need to rebrand yourself as an “AI designer.” But you do need to stay curious. This is how you stay sharp when the tools keep shifting under your feet. Don’t expect to just “get it.”
This isn’t 2005, when learning three tools could carry your career for a decade. There’s no status quo anymore — and that’s kind of the point. No one knows what they’re doing, at least not consistently. And especially not in a field that’s changing this fast.
But the people making the most progress, and having the most fun, are the ones who are just trying stuff.
You don’t need a 10-year plan. You just need to start somewhere. Open a tab, do stuff, break something.