AI Revolution

Democratizing Design

Dylan Field and David George

This conversation is part of our AI Revolution series, which features some of the most impactful builders in the field of AI discussing and debating where we are, where we’re going, and the big open questions in AI. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.

Will AI take all the design jobs? Dylan Field, founder and CEO of Figma, looks at the relationship between designers, developers, and AI, in conversation with a16z’s David George.

  • [00:36] Will AI replace designers?
  • [04:03] Jambot demo
  • [07:02] Human vs. AI creativity
  • [13:37] Applying AI to design
  • [14:31] Startups vs. incumbents

Will AI replace designers?

David: To start, fiery question. Is AI actually going to take the job of the designer in the future?

Dylan: I don’t think so. Implicit in the question is either “there will be fewer things to design” or “AI is going to do all the design work.” You’re on one of those paths. The first one: will there be fewer things to design? If you look at every technological or platform shift so far, it’s resulted in more things to design. You have the printing press, and then you have to figure out what you put on a page. Even more recently, you have mobile. You would think, “Fewer pixels, fewer designers,” right? But no. That’s when we saw the biggest explosion of designers.

If you’d asked me this at the beginning of the year, I might have said, “We’ll all have these chat boxes, people will be asking questions in them, and that’s going to be our interface for everything.” Look at OpenAI. They’re on a hiring and acquisition spree right now, trying to get product people and designers so they can make great consumer products. It turns out design matters.

The second path, “Will AI be doing the design?”, is pretty interesting. So far, we’re not there. Right now, we’re at a place where AI might be doing the first draft. Getting from first draft to final product, it turns out, is kind of hard and usually takes a team. If you could get AI to start suggesting interface elements to people, and do that in a way that actually makes sense, I think that could unlock a whole new era of design: contextual designs, designs that respond to the user’s intent at that moment. That’d be a fascinating era for all designers to be working in, but I don’t think it replaces the need for human designers.

David: You started to go into this in the second part, and I totally agree with you on the first part. This is a shift of abundance, just like the other technology shifts we’ve had in the past. Maybe go deeper on how AI is actually going to change the work itself that the designer does.

Dylan: I think AI will make it so that all of us are able to do more design work in the first place. It will lower the floor for who’s able to participate in the design process, but also raise the ceiling of what you can actually do.

As a designer, it will help you move across the ladder of abstraction that we’re all inherently working on, everything from high-level prompting or ideation to pixel work to “How exactly should this motion curve look?” Right now, if you’re trying to be the ultimate craftsperson, it takes a long time to learn all the necessary skills. But, again, if you’re able to get to that first draft with the help of AI, perhaps it becomes easier to iterate, to jump around the solution space and see more of it. Then you can dive in and figure out what you want to iterate on further. On that topic, we’ve actually been exploring how AI should manifest in FigJam. I have a demo and can show off some stuff.

David: Yes, yes, yes.

Jambot demo

Dylan: If you haven’t seen FigJam, this is our whiteboarding and brainstorming product. The idea is that you do ideation in FigJam and design in Figma Design, and now we also have something called Dev Mode, where you’re able to go from design to code and production.

Here in this FigJam we’ve been doing this brainstorm: “What topic should I talk about at a16z?” We’ve got some sticky notes: “Figma’s vision for AI and future roadmap,” “How is Gen AI different for design versus text,” etc.

One thing we’re introducing today is this new thing called Jambot. Jambot makes it so you can create a diagram and wire something from a sticky or a section, which contains elements on your screen, to a prompt agent. For example, we’ve got this section here of the brainstorm, and I’ve wired it up to a custom prompt: “The input contains the topics I’m going to cover in a presentation at the conference. Please suggest a list of ideas for a fun and novel title for this presentation.”

I’ll press run and it’ll give me a bunch of different titles. I can now select “haiku” and now it’ll give me a haiku for each of these titles.

Nothing we haven’t seen before from ChatGPT. Obviously, we’re running this off the backend of these systems. What I think is neat is that you can then start to explore your prompting in a more non-linear way. You’re also able to see the history of what you’ve done as a graph on the screen. For example, maybe I want to say, “What are the startups to watch working on AI?” Based on that prompt, it will suggest a list of options. If I select “companies,” it’ll give me a list of companies.

Perhaps I want to rabbit-hole on something. I can press “rabbit hole.” It selects the default, the first sticky, which is OpenAI. I’ll press that.

Now it’s going to give me some different options for things to look into. I could keep going there, or perhaps, based on Figma’s huge vision for AI and future roadmap, I could try to ideate there. Based on that, it will give me a bunch of stickies. Here it says different ways we could implement AI features in the Figma roadmap. The first one is to develop an AI-powered design assistant.

Perhaps now we’re getting to the point where we want to explore how we launch this. If I press “rewrite,” I can rewrite this as a tweet in the style of @zoink (that’s my Twitter username), and maybe make it shorter with fewer emojis.

What’s been really fun about this is it gives you a way to explore these topics. To me, this feels different than your typical chat session. It came out of a hackathon, actually.
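[Editor’s note: for readers who want a concrete mental model of the demo, here is a minimal sketch of the prompt-graph idea in TypeScript. All names here (PromptNode, PromptGraph, callModel) are hypothetical stand-ins for illustration, not Figma’s actual implementation, and the model call is stubbed out.]

```typescript
type NodeId = string;

// Each node on the canvas holds a prompt plus edges from upstream nodes,
// so one output can fan out into several follow-up prompts.
interface PromptNode {
  id: NodeId;
  prompt: string;
  inputs: NodeId[]; // upstream nodes whose outputs feed this prompt
  output?: string;  // filled in after the node is run
}

// Stand-in for a real LLM request; returns a canned string here.
async function callModel(prompt: string): Promise<string> {
  return `model output for: ${prompt}`;
}

class PromptGraph {
  private nodes = new Map<NodeId, PromptNode>();

  add(node: PromptNode): void {
    this.nodes.set(node.id, node);
  }

  // Run one node: gather upstream outputs as context, then prompt the model.
  async run(id: NodeId): Promise<string> {
    const node = this.nodes.get(id);
    if (!node) throw new Error(`unknown node: ${id}`);
    const context = node.inputs
      .map((i) => this.nodes.get(i)?.output ?? "")
      .join("\n");
    node.output = await callModel(`${context}\n\n${node.prompt}`);
    return node.output;
  }
}

// Usage mirroring the demo: a brainstorm section feeds a title prompt,
// whose output then fans out into a separate haiku follow-up.
async function demo(): Promise<void> {
  const graph = new PromptGraph();
  graph.add({
    id: "brainstorm",
    prompt: "",
    inputs: [],
    output: "Figma's vision for AI; how Gen AI differs for design vs. text",
  });
  graph.add({
    id: "titles",
    prompt: "Suggest fun, novel titles for this presentation.",
    inputs: ["brainstorm"],
  });
  graph.add({
    id: "haiku",
    prompt: "Write a haiku for each title.",
    inputs: ["titles"],
  });
  await graph.run("titles");
  console.log(await graph.run("haiku"));
}

demo();
```

The design point the demo makes is the branching: unlike a linear chat session, any node’s output can fan out into multiple follow-ups, and the whole history stays visible as a graph.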

Human vs. AI creativity

David: Going back to the conversation before the demo, I absolutely love what you said about raising the ceiling, and especially lowering the floor. When we originally made our investment in Figma, my partner Peter wrote in our thesis, “We are moving into the decade of design, where design, not just code, is at the center of product development and successful organizations. The interface no longer reflects the code, but the code reflects the design.” Talk a little bit more about the relationship between designers, developers, and AI. And then, a loaded topic: how would you differentiate the creativity of a machine or an AI versus that of a human?

Dylan: I think that’s just a sub-question of “What’s the modern-day Turing test?” This question comes up everywhere now. We’re seeing from these systems that it’s easy to convince a human that you’re human. It’s hard to actually make good things. I could have GPT-4 create a business plan and come pitch it to you. That doesn’t mean you’re going to invest. When you have two competing businesses side by side, one run by an AI and the other run by a human, and you invest in the AI business, then I’m worried. We’re not there yet.

David: That’s a good test.

Dylan: For Figma, again, I hope we’ll make getting to that first draft much easier. If we’re able to lower the floor and make it so more people in the organization can contribute to the design process, I suspect we’ll get better results and more people will be able to explore the option space of what the company can do.

I also think that we’re all biased here. Our vision for Figma is to make design accessible to all. That’s been our vision for a long time now. Before that, it was to eliminate the gap between imagination and reality. Both of those fit pretty well with this new world. We’ve actually seen that through design systems: by creating kits and systems of parts that anyone in the organization can use and bring into a design, we’ve unlocked a lot for designers and for everyone else to contribute to the design process. It’s made results better for everybody. Now most companies at scale have design systems teams, so more people are able to contribute to that process. I think this will be welcomed by designers.

David: Talk about the actual elements of design itself and the tools that you provide. One, collaboration, and two, the ease with which you can create. With AI, you massively open up the opportunity for creation. Talk about the actual tasks and some of the upleveling that you envision now that we have AI.

Dylan: What’s interesting is that even before this moment in time, we’d already been seeing roles collapse. The line between designers and developers, for example, is getting a lot blurrier than it was in the past. The best designers are starting to think much more about code; the best developers are thinking much more about design. Beyond designers and developers, a product person, for example, used to work on a spec, but now they’re going much more into mockups. It’s not because they’re trying to take the job of a designer. It’s because they’re trying to communicate their ideas more effectively.

This will eventually allow anyone in the organization to go from idea to design, and possibly to production as well, much faster. I think you’ll still need to hone each of those steps. You’ll need someone to really think through, “What ideas are we going to explore? How are we going to explore them?” You’ll want to tweak the designs and finesse them properly to go from first draft to final product. And on the code side, we’re not at full self-driving for code yet. Perhaps one day we will get there, but I think we’re still a long way out. Right now I can’t even really get it to generate a README for an open-source project.

Applying AI to design

David: Earlier, you mentioned the printing press and how it revolutionized the way we communicate and distribute content. I think AI is doing something similar, or maybe even more powerful. The question I have is about the format. Most of the LLMs we’re accessing today are just text-based or image-generation-based. How is applying generative AI to design actually different from applying it to text or images?

Dylan: On a technical level, if you look at the actual internal structure of a Figma document, it’s much more similar to an abstract syntax tree than to an image. Because of that, one outcome might be that we find more success with models that are similar to Copilot than with a diffusion model, for example, when you’re trying to figure out how you actually create designs using AI systems.
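[Editor’s note: a toy illustration of the AST analogy. The node shapes below are hypothetical, chosen for this sketch; they are not Figma’s real file format.]

```typescript
// A design document as a typed node hierarchy, not a grid of pixels.
// These node shapes are invented for illustration only.
type DesignNode =
  | { type: "frame"; name: string; children: DesignNode[] }
  | { type: "text"; characters: string; fontSize: number }
  | { type: "rectangle"; width: number; height: number; fill: string };

// A small screen expressed as a tree: structure and properties, like an AST.
const loginScreen: DesignNode = {
  type: "frame",
  name: "Login",
  children: [
    { type: "text", characters: "Welcome back", fontSize: 24 },
    { type: "rectangle", width: 320, height: 48, fill: "#0066FF" },
  ],
};

console.log(JSON.stringify(loginScreen, null, 2));
```

A sequence model can predict the next node or property in a tree like this, Copilot-style, whereas a diffusion model denoises pixel arrays, which is why the two approaches fit the problem differently.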

That said, some of the foundational models are really good. If you ask GPT-4, “I want to have this dog sitter app. Can you give me the basic XML structure for that?” It’ll give you that. It’ll give you some very simplistic layout that maybe is going to spark something for you.

The other day I was creating a birthday card for my friend Ari. He’s really into retro computer stuff, so I asked GPT-4, “What would be a good structure for a program called Ari Online?” just to see what it would say. It gave me a pretty good structure of jokes. I selected a few of them and made a birthday card out of it in HyperCard, using one of the emulators available, because I was trying to be really retro. I’m getting a little off-topic, but the point is that you can get some basic structure from these foundational models too; they just don’t take you all the way there. A lot of our job right now, as we think about text-to-design in Figma, is figuring out, “How do we leverage all the different tools available in order to make good progress?”

David: What do you think are the big breakthroughs that need to happen to actually take this to the next level? And where do you think that gets you from a product standpoint? I’m talking technical: arms and legs, memory, etc.

Dylan: The way we’re approaching it from Figma’s perspective is not putting all of our eggs in one basket. The demo is a good example of that. We have done a lot to try to make sure that everyone in the company knows how important it is to take this technology and use it across our entire product. As we’ve done that, there have been a bunch of areas I wouldn’t have predicted where it seems like AI might be very useful. Now, the next question will be, “Where does the cost make sense, and where is the usability predictable?” That is, where is it usually giving you something good, versus only once in a while?

Startups vs. incumbents

David: I want to shift gears just a little bit and move towards the business and market-structure side of things. We’re all very excited about AI. We think it’s the biggest thing since the microchip. We’re very confident in that. Where we potentially have very low confidence is who the winners will be and where in the value chain value will accrue on the business side. On the question of startups versus incumbents, I think there’s a case to be made that a company like Figma is perfectly positioned to take advantage of this, because you actually have some distribution and customer relationships, but you’re also a young company. You ship fast, you move fast, you can disrupt yourself. I’m curious about your take on who is best positioned given that dynamic, and then whether you have any view of where in the value chain value will accrue.

Dylan: On the question of startups versus incumbents, it’s hard to believe that foundational models won’t get commoditized, but I’m definitely willing to be proven wrong there. I understand the arguments for why incumbents may benefit in a disproportionate way. But in every platform shift that’s happened, people have claimed that, and then it hasn’t been the case. I think if you’re a startup, this is a pretty good time to pick the area that you think could really benefit from the technology, go after it, and try to find ways to innovate there. I wouldn’t bet against startups in a general sense. It’s so early. Most of what I see coming right now is still at the foundational or base-model layer and, if it’s not that, it’s infrastructure or dev tools. It’s not yet, “How do we use this all the way up the stack?” I think that’s coming. Enterprise is coming. There’s a lot of stuff that will show up in all these areas, but it’s going to take some time.

David: One concept on the enterprise side that’s interesting to think about: people always talk about systems of record, and that’s the powerful moat of a software company. Over the last handful of years, systems of record have gotten pretty good at becoming systems of prediction. But there’s still a bunch of workflow, and windows that people click around in. The promise of AI, and I’m curious for your take on this, is that we’ll actually create systems of action: software that takes the action on your behalf. That completely transforms what interface you need, and with the foundational models, it probably changes the way your data is organized underneath and accessed, too.

Dylan: I still believe that there will probably be a human in the loop for quite a while. I do like the idea of a system of action and how you actually help people get through the tasks that are really obvious so that they can focus on higher-level work. I suspect that if we’re able to do that effectively, then the way work will happen will be fundamentally transformed, which is pretty exciting.

David: On this topic, and we danced around it before: when you originally built the product, one of your biggest unlocks was building in the browser. That enabled collaboration, which was entirely new in the design space. Famously, it took you many years to actually ship your product because the build was so architecturally complex. Now you can access an API and immediately get the model, and the value, to the end user. Do you think this time is different and the use cases and the winners will be determined much faster? Or do you think we’re still steps away, whether those steps are technological, business, or good-idea unlocks?

Dylan: People talk to me and say, “We’re trying to build our startup like Figma. We’re going to take years to develop it, and then we’ll go talk to our first user.” I’m like, “First of all, that’s not what we did. And second of all, don’t do that.” I would encourage anyone who’s starting a company or trying to ship a software product to get it out to market as fast as you can. At the same time, if you look at the arc of machine learning and what might be coming, there might be a lull between where we’re at now with LLMs and where folks say we’re going with AGI.

David: In closing, most of the audience here is a room full of builders. If you were starting another company or giving advice to someone who’s thinking about starting another company, where would you go build given this big shift? What do you want to see get built?

Dylan: Well, that’s a different question than what would I build. I’ll just answer, “What do I want to see?” because I think it’s the one I like the most. When it comes to science, the applications of all this technology that’s happening right now are still completely underexplored. Whether it’s using deep learning to get approximations of systems faster or figuring out how we can just accelerate human progress in general, I get really excited about what could happen there.

David: Then what would you build?

Dylan: Figma!

David: Yes! Exactly. Exactly. It’s clear that AI has massive potential for you, for Figma, for the design space, and for the individuals who are actually building designs. Thank you so much for being here with us. We’re excited to see what you guys ship next.

Dylan: Thank you.