Learn Better, Build Smarter: How AI Can Help You Overcome a Lack of Experience
How I Went from “What Even Is Terraform?” to Shipping Prod-like Infra - and How You Can Learn Anything New, Too
TL;DR
In a world where AI is replacing junior-level work and entry-level jobs are drying up, learning better is more important than ever.
This article breaks down how I went from zero experience with Terraform + AWS ECS to shipping production-ready infrastructure — in just 4 days — by using AI not as a crutch, but as a learning accelerator.
I also share a repeatable framework (LEAP) you can use to learn faster and build smarter.
What’s Inside
Why most people are using AI wrong — and what to do instead
How I figured out what “good” Terraform actually looks like
My learning path through Terraform + ECS
How I prompted AI to build out the infrastructure
What went wrong, what I learned, and how AI helped me spot blind spots
LEAP — a repeatable way to learn fast while building real things
Turns out vibe coding ≠ learning
A few weeks ago, I had to set up the infrastructure for a backend service on AWS Elastic Container Service with Terraform for the first time.
For those of you who don’t know: AWS ECS with Fargate is a service for running Dockerized applications without managing servers (EC2 instances). Terraform, on the other hand, is Infrastructure-as-Code: a way to define your infrastructure in code instead of setting it up manually in a web console.
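To make that concrete, here’s roughly what Infrastructure-as-Code looks like in practice (a minimal sketch; the bucket name is illustrative):

```hcl
# Instead of clicking through the AWS console, you declare a resource in
# code and let Terraform create and track it for you.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # illustrative name

  tags = {
    environment = "staging"
  }
}
```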
I had no experience with Terraform, no experience with AWS ECS, and no real exposure to infrastructure-as-code. I wasn’t even sure what “good Terraform” looked like. I just knew that I needed to deploy a service on AWS ECS, and that Terraform was the way to do it.
How I got to the point where I realized Terraform and ECS were the right tools is a story in itself. That process involved a lot of system design thinking, tradeoffs, and decision-making. I’ll write about that separately, because it deserves its own deep dive.
So there I was: brand new to both the tool and the ecosystem, trying to figure out how to build something that worked, and ideally, something that wasn’t terrible. Naturally, I did what any sane person would do in this situation: I turned to ChatGPT.
In all honesty, it was very helpful. I was able to get moving faster than I could have on my own. But something quickly became obvious.
AI was guiding me rather than me guiding AI.
AI can give you solutions. It can even explain why one works well and how it compares with other solutions. But it does all of this based on the context you provide. It acts as a mirror.
If you don’t tell it which tradeoffs matter to you, what matters in the long term, or what your definition of ‘good’ looks like, it will create its own definition of ‘good’. And in most cases, especially in engineering, that can lead to a lot of problems down the road.
This is why senior engineers are getting the most value out of AI. They know how to use it. They understand what a high-quality solution should look like. They can give AI direction, critique its output, and spot when something is off.
Junior engineers can’t do that yet. Not because they’re not smart, but because they haven’t built the internal reference points or the instincts. They haven’t spent years running into problems, debugging edge cases, and learning through failure. They haven’t had enough exposure to understand what tends to work and what doesn’t.
And that’s a problem. Because if you don’t develop those instincts and that experience, you’ll never grow into someone who can use AI well. You’ll be stuck relying on it for answers you don’t know how to validate.
So the real challenge is this: how do you build those instincts faster? How do you level up your ability to learn, not just faster, but better?
How do you figure out what “good” looks like?
What people get wrong
Let’s be clear. Most junior engineers using AI to accelerate their learning are not doing it well.
Here’s what usually happens.
You hit a blocker, you ask ChatGPT, it gives you some code. You copy it, paste it into your editor, and if it runs, you move on. Maybe you learned a new syntax trick. Maybe you didn’t. What you almost definitely didn’t do was build real understanding.
You didn’t question the approach. You didn’t explore why it worked or whether it was the best option. You didn’t learn the underlying model. You didn’t build intuition.
AI makes it easy to skip the struggle. That’s the danger.
And that’s why a lot of engineers end up mistaking output for progress. Moving fast doesn’t mean learning fast, and getting something to work doesn’t necessarily mean you understand it well enough.
The real opportunity is to use AI not as a shortcut, but as a learning accelerator. And that only happens when you stay in control of the learning process.
This is how I approached that challenge - and how you might, too.
Terraform - What Does “Good” Even Mean?
When I started working on the infrastructure for this service, I had never written a line of Terraform before. In fact, I had never worked with any infrastructure-as-code tool. So before I even touched anything, there were two big questions: How does Terraform work, and what does good Terraform code actually look like?
I started with the obvious first step. I opened ChatGPT and just had a conversation. I asked it the basics: how is Terraform structured, what’s the syntax like, what are the core building blocks? I looked up some basic commands just to get a feel for how it’s supposed to be used.
In hindsight, one thing I could’ve done better was to ask ChatGPT to help me build out a very simple application using Terraform. A basic real-world example would have helped ground the syntax in something functional. But even without that, just exploring through back-and-forth helped me get a working familiarity.
Tip: Building out a small application or trying out a small tutorial can help with developing familiarity with something new.
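For a sense of what those building blocks look like together, here’s a minimal sketch of a single Terraform file (the provider, region, and resource names are all illustrative):

```hcl
# The core building blocks in one place: provider config, an input
# variable, a resource, and an output.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}

variable "region" {
  type    = string
  default = "us-east-1" # illustrative default
}

resource "aws_ecr_repository" "app" {
  name = "demo-app"
}

output "repository_url" {
  value = aws_ecr_repository.app.repository_url
}
```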
At that point though, I had to pause and think. If I’m going to write this for a production service that I will be responsible for, what actually matters to me?
That’s when I started defining my own quality bar. I asked myself: what’s actually important here? I came up with three things:
Readability
Someone else is likely going to maintain this in the future. They need to be able to look at it and understand what’s going on without reverse-engineering the logic.
Reusability
This infrastructure wasn’t going to be a one-off. I knew the product roadmap and I knew that more services would follow. The way I structured this would potentially save a lot of effort in the future.
Security
This was the foundation layer of the product’s architecture. I needed it to be secure.
With these priorities set, I went back to ChatGPT. I asked it: how do I write readable Terraform? How do I make it modular and reusable? What security best practices should I be aware of? For every question, I asked for documentation links and references. When it gave me options, I asked for pros and cons, and I asked it to be critical and blunt in its evaluations.
Tip: Ask ChatGPT to be critical and blunt. Its reward function is designed to make you happy, even if that means giving you a half-right answer. Ask it to be critical of its own answers and your inputs.
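One pattern that came out of that research, sketched below: wrap each concern in a reusable module and instantiate it once per environment. The module path and variables here are hypothetical, just to show the shape.

```hcl
# environments/staging/main.tf (hypothetical layout). The same module is
# instantiated again in environments/production with different values.
module "ecs_service" {
  source = "../../modules/ecs_service" # hypothetical shared module

  project_name  = "xyz"
  environment   = "staging"
  desired_count = 2
}
```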
By default, I don’t really trust AI so I cross-checked most things with Google and Reddit. Even when the answer sounded right, I wanted to see what people in the wild had said.
Reddit is especially useful because the answers tend to be opinionated. The signal-to-noise ratio is better than you’d expect. Experienced engineers show up with strong opinions, and the community usually calls out bad takes. It’s a surprisingly reliable bullshit filter.
Tip: Don’t trust AI by default. Validate answers with a quick Reddit or Google search to avoid being misled.
By the end of this process, I hadn’t mastered Terraform. But I had a clear, working idea of how good Terraform should look. I knew how I wanted to structure modules. I knew which practices I wanted to avoid. I had a mental model for evaluating whether a piece of code was going to be maintainable, extensible, and secure.
For example, one lesson I picked up was to be careful with Terraform outputs. At first, I didn’t think twice about passing in environment variables like database credentials. But then I realized that if you output those values, Terraform prints them in its plan and apply logs. A much safer pattern is to write those values directly to a secure store like AWS Parameter Store during provisioning and never expose them in outputs.
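Here’s a minimal sketch of that pattern (the parameter path is hypothetical):

```hcl
# Generate the DB password and write it straight to SSM Parameter Store;
# nothing sensitive ever appears as a Terraform output.
resource "random_password" "db" {
  length  = 32
  special = true
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/xyz/staging/DB_PASSWORD" # hypothetical path
  type  = "SecureString"
  value = random_password.db.result
}

# Anti-pattern: an output like this would expose the secret in plan/apply logs.
# output "db_password" {
#   value = random_password.db.result
# }
```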
I now had enough confidence to move forward. I was ready to actually start deploying something - which meant figuring out how ECS works in the first place.
ECS - What tutorials don’t teach you
Now that I had a rough sense of how I wanted to write Terraform, I hit the next wall: what was I even trying to write Terraform for?
I had decided on using AWS ECS (Elastic Container Service) for deployment (how I got to this decision is something for another time), but I didn’t really know how ECS worked. I didn’t understand the moving parts, the terminology, or the configurations I’d need to make.
So I had to do something I usually hate doing. I followed tutorials.
I prefer reading over watching, so I looked specifically for detailed articles on ECS + Fargate. I went through three in total. I followed one of them step-by-step, and skimmed the other two just to get a broader sense of how people were using it.
Tip: Don’t rely on a single tutorial to understand a new system. Skim several. Most tutorials are one-dimensional. They simplify things for the sake of explanation, which often leaves critical gaps.
At the end of the tutorial, I had a basic application running on ECS. But did I really understand how ECS worked? Barely.
Instead of trying to memorize everything, I used the tutorials to scan for important keywords, configurations, and concepts. Clusters, task definitions, task execution roles and services, to name a few. Whenever I came across something unfamiliar, I’d do a quick dive using a mix of ChatGPT and Google.
Tip: Keyword extraction is one of the best ways to study any new system. Treat unfamiliar terms as entry points.
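To make those keywords concrete, here’s a minimal sketch of how the pieces relate (every name and value is illustrative, and the IAM role and networking inputs are passed in as variables): a cluster hosts services, a service keeps a desired number of tasks running, and a task definition describes the container those tasks run.

```hcl
variable "task_execution_role_arn" {
  type = string # IAM role that lets ECS pull images and write logs
}

variable "private_subnet_ids" {
  type = list(string)
}

variable "service_sg_id" {
  type = string
}

resource "aws_ecs_cluster" "main" {
  name = "demo-cluster"
}

resource "aws_ecs_task_definition" "app" {
  family                   = "demo-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256 # 0.25 vCPU
  memory                   = 512 # 0.5 GB
  execution_role_arn       = var.task_execution_role_arn

  container_definitions = jsonencode([{
    name         = "app"
    image        = "nginx:latest" # placeholder image
    essential    = true
    portMappings = [{ containerPort = 80 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "demo-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [var.service_sg_id]
  }
}
```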
It wasn’t about becoming an expert overnight. It was about building enough context to ask better questions and make better decisions later. I didn’t need to know everything - I just needed to know enough to recognize what I didn’t know.
All throughout this process, I kept my end goal in mind: I needed to write Terraform code that could spin up infrastructure using AWS ECS, and I wanted to do it in a way that would be production-ready.
The process wasn’t very clean. In fact, it was messy and occasionally frustrating. But it gave me what I needed: a high-level map of ECS and its components, and a much stronger foundation for the next step - actually writing the infrastructure-as-code with Terraform.
Infra Design With AI – Collaborate, Don’t Delegate
Once I had a decent grasp on Terraform and a working understanding of ECS, it was time to actually start building. But I’ve found that prompting AI to generate code and hoping for the best rarely works out. I wanted to use AI more intentionally.
So I turned to Cursor (configured with Claude 3.7, my go-to coding agent these days), and instead of saying “write Terraform for my infrastructure,” I gave it a structured prompt, with the idea that if I wanted good output, I had to give it good input.
The prompt
Here’s what the prompt looked like:
# Overview
<Obfuscated context about the service I am writing, what I want to write it in and where I want it deployed>
## Guidelines for Writing Terraform Code
### Note: I have intentionally removed some details for brevity.
### High-Level Principles
- **Reusability:** All components must be written as reusable modules.
- **Environment Parity:** There will be two environments (staging and production) with identical architecture. Only resourcing and access rules will differ. Shared resources should be in a separate folder.
- **Security:**
- No hardcoded secrets.
- Sensitive values should be passed in as Terraform variables or stored in AWS Parameter Store.
- Avoid logging or outputting secrets.
- **Readability:** Code should be clean and well-commented, especially where intent might be unclear.
- **Tagging:** Every resource should include tags for:
- `project_name`
- `environment`
- `AppRegistry_tag`
- **Naming Convention:** All resources must have environment-specific names (e.g., `xyz-staging`).
### Backend Configuration
- Use AWS S3 as the Terraform backend.
- Use the following DynamoDB table for state locking: xyz.
---
## Implementation Process
- Implement module by module.
- After each module, stop and prompt me to test and review it.
- Do not proceed to the next module without my confirmation.
---
## Communication Rules
- Do not make assumptions.
- If you are unsure about anything, ask for clarification.
- Ask me follow-up questions before writing code if anything is unclear.
---
## Infrastructure Specs
### Shared Modules
- **AppRegistry:** AWS Service Catalog entry for better observability into resources and costs. Shared across staging and production.
### Environment-Specific Modules
- **Database (RDS):**
- PostgreSQL v17
- Use default encryption key
- Credentials stored in AWS Parameter Store
- Use private subnets
- Enable backups
- **ECR:** Shared Docker image store. Keep no more than 10 images at a time.
- **ECS + Fargate:**
- Use Fargate launch type
- Enable container insights
- Task definition:
- Linux x86
- 0.25 vCPU, 0.5GB RAM
- Logging to CloudWatch
- Env variables from AWS Parameter Store: `DB_HOST`, `DB_PASSWORD`, `DB_USER`
- Plain Env variables: `PORT`, `ENVIRONMENT`
- Service:
- 2 tasks
- Rolling updates
- Networking:
- Only allow ECS service to access RDS security groups.
- Use the following private subnets for the service (I provided the exact subnet ids)
- Load Balancer:
- Listener config - listen on HTTPS (port 443) and forward all traffic to HTTP (port 80) in the target group
- SSL certs (I provided the exact ID of an existing certificate)
- Target groups and their health checks
- **Bastion Host:**
- Launch template with AMI (I specified the exact Amazon linux AMI to use)
- SSH access config
- Instance role with SSM access
- PSQL installed via user script
With this prompt, I wasn’t just describing what I wanted. I was embedding rules, expectations, and a review loop. And it worked pretty well.
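As one concrete example, the backend configuration the prompt asks for would come out looking roughly like this (bucket and table names are placeholders, since the real ones are obfuscated above):

```hcl
# Remote state in S3 with DynamoDB-based state locking, per the spec.
terraform {
  backend "s3" {
    bucket         = "xyz-terraform-state" # placeholder
    key            = "staging/terraform.tfstate"
    region         = "us-east-1"           # placeholder
    dynamodb_table = "xyz"                 # the locking table from the spec
    encrypt        = true
  }
}
```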
Challenges encountered
However, even with all that detail, there were still two kinds of problems I ran into.
Simple Mistakes by AI: Some errors only became apparent during testing, underscoring the necessity of human oversight.
Tip: Always test AI output extensively. Reviewing AI output will be a major part of software engineering in the future. Humans-in-the-loop will always be required in some shape or form.
Unanticipated Questions from AI: (this wasn’t really a problem) Cursor began asking insightful questions I hadn’t considered, prompted by my instruction to seek absolute clarity. For example, while implementing the bastion host module, it asked questions such as:
Do you want the bastion host to always be running or on-demand?
Should it access both staging and production databases or just one?
What level of PostgreSQL tooling do you want installed?
Should SSH access be restricted by IP range?
Do we want to provide DB credential details to the bastion host?
These questions highlighted gaps in my understanding, helped refine the system, and made me realize the things I didn’t know that I didn’t know.
And this was a huge unlock. The real value is when the AI starts asking you questions to make you better. That’s when it becomes a true partner and something you can learn from.
Tip: Encourage AI to critically analyze your prompts and specs. That’s how you find some of the things you didn’t know that you didn’t know.
By the end of this phase, I had working Terraform code, written with my values in mind, and validated by a process that uncovered things I would’ve otherwise missed.
Perfection achieved?
This wasn’t perfect. I made decisions that I might revisit later. I hit issues during testing. And I’m almost certain I will run into problems later on.
But despite all that, I deployed a backend service on AWS ECS, provisioned via Terraform - using tools I had zero experience with just a few days earlier.
If I hadn’t used AI, that process would’ve easily taken a couple of weeks. I would’ve spent far more time digging through documentation, debugging issues, and asking senior engineers for help. With AI in the loop, I was able to get there in just over four days, and with a decent understanding of what I was doing.
The real takeaway here isn’t speed. It’s quality. This process didn’t just help me move faster; it helped me learn better, while still delivering. In any organization, that’s extremely valuable.
That said, this was a specific project. The real question is: can this approach be turned into something repeatable? Can you deliberately learn and build like this on any technical challenge?
I think so.
LEAP: Learn better while delivering fast
After going through all of this, I thought: why not try to turn it into something repeatable?
So I came up with this little framework. It’s called LEAP. Yeah, I know - it’s cringey. It’s the best ChatGPT and I could come up with.
But honestly, I think it’s solid. It’s simple. It captures the real steps I followed. And more importantly, it’s useful. You can apply it to almost any problem where you need to learn something fast and deliver something meaningful.
L – Lock the Goal
Before you open ChatGPT or hit up any docs, make sure you know what you're actually trying to do.
What are you trying to build? What does done look like? What are your constraints? Write it down.
Being clear about the outcome keeps you from wandering aimlessly through tutorials and half-baked prompts.
E – Establish the Fundamentals
Ask AI to walk you through the basics. Find 2–3 decent tutorials and skim through them. Pull out keywords, concepts, configurations - anything that shows up more than once.
Use Google and ChatGPT together. Ask dumb questions and follow rabbit holes. Your goal isn’t mastery, it’s to build just enough context to ask better questions later.
A – Articulate What ‘Good’ Looks Like
You have to decide what matters for what you’re building. What’s your quality bar?
For me, it was:
Readability — someone else will have to maintain this
Reusability — this setup would be cloned across services
Security — this was foundational infrastructure
Yours might be different. But if you don’t know what “good” looks like for you, you’ll blindly accept whatever AI gives you, and that’s a problem.
P – Probe and Refine
Don’t just copy what AI gives you. Ask it to critique itself, ask for alternatives and test everything.
Even better: make AI ask you questions. Make it challenge your specs and clarify gaps. In the process, you’ll end up finding things you would have missed.
LEAP is all about learning fast while delivering value - and becoming a sharper, more effective engineer in the process.
Get better, faster
What surprised me most was how much ground I could cover in just a few days, without cutting corners on quality. This doesn’t mean I’ve mastered Terraform or ECS - far from it. But I’ve built a solid foundation.
This process won’t replace years of production experience, but it can help you accelerate toward it. It can help you become the kind of engineer who learns deliberately and builds with clarity.
If this resonated with you, stick around. I’m trying to figure out how to become a better engineer using AI.
If you’ve got feedback, or you think I could’ve done something better, I’d genuinely love to hear it. Let’s get better, together.