When Dynamo first started making the rounds in AEC, it felt like magic. Yes, I’m a Civil guy, but I would still sit through many Dynamo classes, presentations, and demos early on at Autodesk University and other events just to see the power of Dynamo, hoping that one day it would be available for Civil as well to some degree. I always figured that those who truly knew Dynamo had early insight into just how much power came with it to automate tasks and make design teams’ lives easier.
Once Dynamo for Civil 3D came around, we could suddenly automate so much! We could automate our corridor modeling workflows, perform clash detection on complex gravity and pressure utility networks, and convert a recently received survey from one standard to another, all before lunch. These three tasks alone would previously have taken one to two weeks to complete! It was a new superpower, and the barrier to entry felt low enough that non-developers like me could quickly learn it, but still high enough that not everyone would bother to explore the potential.
It’s been a long journey to get to where we are today, which is why I have this lingering thought running through my head lately: “What happens to Dynamo, Grasshopper, GenerativeComponents, and other low code/no code tools like them, if in the very near term all we’ll need to do is put a prompt into an AI agent built directly into the software?” This question gets a lot more real when you look at what’s happening with MCP and agentic AI.
The Promise of MCP and AI in Design Tools
If you haven’t been paying attention to Model Context Protocol (MCP) integrations, now’s the time to start. MCP is essentially a standardized way for AI models to securely read and understand the “context” of your project, allowing them to connect to your data sources and design authoring tools like Revit, Civil 3D, and OpenRoads Designer to see exactly what you see. We’re already seeing early moves from major vendors in this direction, some further along than others, and the trajectory is pretty clear. While MCP provides the context, emerging Agentic AI will allow these models to understand our design intent and take prompted actions inside these applications using specific tool-calling APIs.
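To make the tool-calling idea concrete, here’s a minimal sketch of the pattern MCP-style integrations use: the host publishes a tool with a JSON schema, the model emits a structured call, and the host executes it. The tool name, parameters, and handler below are hypothetical illustrations, not a real Civil 3D or Revit API.

```python
import json

# Hypothetical MCP-style tool definition: the AI model sees this schema
# and decides when to call the tool with structured arguments.
CREATE_CORRIDOR_TOOL = {
    "name": "create_corridor",
    "description": "Create a corridor from a named alignment and assembly.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "alignment": {"type": "string"},
            "assembly": {"type": "string"},
        },
        "required": ["alignment", "assembly"],
    },
}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch a structured tool call from the model to application logic."""
    if name == "create_corridor":
        # A real integration would drive the authoring tool's API here;
        # this sketch just echoes a structured result back to the model.
        corridor_id = f"{arguments['alignment']}-{arguments['assembly']}"
        return {"status": "ok", "corridor": corridor_id}
    return {"status": "error", "message": f"unknown tool: {name}"}

# The model emits a call like this as JSON; the host parses and executes it.
call = json.loads(
    '{"name": "create_corridor", "arguments": {"alignment": "A", "assembly": "B"}}'
)
result = handle_tool_call(call["name"], call["arguments"])
print(result["corridor"])  # A-B
```

The key point is that the “context” flows both ways: the model reads what the application exposes, and the application only ever executes well-defined, auditable tool calls, never free-form code.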
Imagine telling your AI agent something like “Create a corridor for alignment “A” using assembly “B”, and flag any locations where the daylight intersects the right-of-way.” In the very near term, the AI agent will not only be able to generate the Dynamo graph for us, but also execute it. This bridges the gap between the speed of a natural language prompt and the design precision required in AEC. The AI agent handles the complex “wiring,” while the professionals retain a visible, auditable script. Previously, that script would have needed to be developed manually by a Dynamo expert.
That’s not science fiction. It’s the reality of where things are heading!
So are Low Code Applications Becoming Obsolete?
Not quite. In general, AI is only as good as the information we feed into it and have it learn from. From that standpoint, there’s still a very big human element involved and a need for human intervention to verify the quality of our designs. The instinct to declare these tools obsolete misses something very important about why they existed in the first place.
Dynamo, Grasshopper, and their peers didn’t fully solve automation problems. They did, however, change how designers and modelers think about their workflows. They introduced parametric, logic-driven, and data-driven thinking into disciplines that had largely operated on manual, intuition-based processes for decades. To me, that shows it was a cultural shift paired with a tooling shift.
So, maybe the question to be asking isn’t “Will AI replace Dynamo?” but “What does Dynamo represent moving forward, and can AI replicate that?”
As many of us know, Dynamo represents explicitness. When you build a script or graph, every decision is visible and transparent. There’s an accountability component baked into this workflow because you can see the logic, audit it, hand it off to a colleague, version-control it, deploy it, and scale it. An AI agent that “just does it,” on the other hand, is essentially a black box by comparison. In AEC, where decisions carry liability, regulatory review, and safety implications, black boxes make a lot of people justifiably very nervous.
The Low-Code Middle Layer Isn’t Going Away…It’s Shifting
My view is that low-code tools like Dynamo won’t exactly disappear, but they will evolve into being relied on as the verification and governance layer that sits between AI-generated actions and production deliverables.
If you think about it, an AI agent might generate a proposed drainage network based on your graded surface and any additional design criteria you feed it. But before that network or deliverable gets stamped and submitted, someone still needs to validate that the logic was correct and produced accurate results. Furthermore, while AI can “just do it,” a firm requires every project to adhere to specific digital delivery standards. Dynamo graphs will serve as the guardrails, ensuring AI-generated geometry follows the firm’s established standards rather than the AI’s best guess. Because the AI agent processing the prompt is like a black box, that validation is where applications like Dynamo still hold a lot of power.
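The guardrail idea above can be sketched in a few lines, the kind of logic a Dynamo graph (or the Python node inside one) encodes. The 0.5% minimum slope and the pipe data below are assumptions for illustration, not any firm’s actual standard:

```python
# Illustrative guardrail check: audit AI-generated gravity pipes against
# a firm standard before anything gets stamped and submitted.
MIN_SLOPE = 0.005  # assumed firm standard: 0.5% minimum slope

def slope(start_invert: float, end_invert: float, length: float) -> float:
    """Fall divided by run; positive means draining toward the end invert."""
    return (start_invert - end_invert) / length

def flag_substandard_pipes(pipes: list[dict]) -> list[str]:
    """Return the names of pipes that violate the minimum-slope standard."""
    flagged = []
    for pipe in pipes:
        if slope(pipe["start"], pipe["end"], pipe["length"]) < MIN_SLOPE:
            flagged.append(pipe["name"])
    return flagged

# Hypothetical AI-generated network to audit.
network = [
    {"name": "P1", "start": 100.00, "end": 99.40, "length": 100.0},  # 0.6%: passes
    {"name": "P2", "start": 99.40, "end": 99.20, "length": 100.0},   # 0.2%: flagged
]
print(flag_substandard_pipes(network))  # ['P2']
```

The check itself is trivial; the value is that it’s explicit, versionable, and auditable, which is exactly what a black-box agent’s output needs sitting next to it.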
I might even go as far as to say that AI might actually make Dynamo and those that truly know it more valuable in some ways, not less. If AI agents are generating geometry and data at scale, we, in turn, need to be building robust checking processes to support automated QA/QC.
And who will be responsible for building these checks? While AI agents will lower the barrier, allowing almost anyone to generate a script via natural language, the responsibility of ensuring that script meets engineering requirements will remain with the experts. If an architect or engineer can put a prompt into an AI agent to design a model, they can just as easily prompt that same agent to create a script and then run it. Very likely, AI agents will be embedded in these low code applications as well, but the accountability component will remain very present, along with the need for a true Dynamo expert to verify and validate the results.
The Real Disruption Is to the Middle Skill Tier
The group most at risk isn’t the Dynamo power users or the full developers. The risk resides with the middle tier of people who learned just enough Dynamo to be dangerous. These are typically the ones who copy and modify existing graphs without deeply understanding the logic, and who serve as the “automation person” on their team without that deeper foundation.
If a project manager can prompt an AI agent to do what that person does, the business case for keeping someone in that lane gets thin fast. We’ve seen this pattern before across other industries too, where automation doesn’t just eliminate the bottom of the skill ladder, it eliminates the rungs that existed solely because the technology used to require them.
That said, these professionals are uniquely positioned to evolve into something more like an AI workflow integrator. While a PM can type a simple prompt, the middle skill tier professional understands how to structure complex “agentic” workflows, framing multiple AI outputs and stitching them together into a cohesive project delivery. They already possess that baseline understanding of computational design, how data flows, how lists are structured, and how AEC geometry connects throughout the design process. That foundation gives them a real opportunity to pivot and thrive in this evolving environment.
These are also the professionals who can move from just using automation to truly understanding it. The ones who understand why it works, not just that it works. For engineers/architects, it’ll be those that can look at an AI-generated result and immediately spot the flaw. For designers, it’ll be those who can articulate their intent clearly enough in a prompt for AI to act on it, and then critically evaluate and validate what comes back. From a QA/QC standpoint, if robust automated checking processes are in place, a designer can facilitate cross-disciplinary verification of designs generated from prompts, which starts to flip some of the roles and responsibilities of what we’ve historically done on its head.
What This Actually Means for the AEC Industry
As with anything in AEC, the professionals and organizations that can be agile and adaptable, and treat this as more than a tooling question, are going to succeed. Ultimately, this really isn’t about tooling. It’s about workflow, talent, and having the foresight to understand what expertise your organization will need in a field that’s increasingly mediated by intelligent systems.
If your competitive advantage today is “we have people who know Dynamo,” you may be left behind very quickly. If your competitive advantage is “we have people who think systematically about design logic, can articulate constraints and intent precisely, and can build and validate automated workflows at scale”, that’s something AI augments rather than replaces.
The tools will change. They always do. What doesn’t change is the need for people who actually understand the problem they’re trying to solve. That’s where the focus and attention should be and will essentially be your secret sauce.
What do you think? Are you already seeing AI start to encroach on your automation workflows? Or do you think the hype is outpacing reality? Drop a comment! I’d love to hear where your firm is landing on this!