I will say I've had a lot of success with AI and boilerplate HCL.
I try to avoid modules out of the gate until I know the shape of a system and the lifecycles of things, and I've been pleasantly surprised by how well AI agents get AWS things correct out of the gate with HCL.
This should supercharge that workflow, since it should be able to pull the provider docs/code for the specific version in use from the lockfile.
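For context, the pinned versions already live in `.terraform.lock.hcl`, so a tool can read them without hitting the registry. A minimal sketch of that extraction (the provider address, version, and lockfile contents here are made up for illustration; real lockfiles also carry a `hashes` list):

```shell
# Write a minimal example lockfile (real ones also include a "hashes" list)
cat > .terraform.lock.hcl <<'EOF'
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.40.0"
  constraints = "~> 5.0"
}
EOF

# Extract provider address + pinned version, the pair a doc-lookup tool needs
awk -F'"' '/^provider/ {p=$2} /version/ {print p, $2}' .terraform.lock.hcl
# -> registry.terraform.io/hashicorp/aws 5.40.0
```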
te_chris 6 hours ago [-]
Me too. Having not done it for a couple of years, I got a full private GKE VPC system, with the live config etc. (incl. Argo CD), deployed and managed by TF, all set up in like 3 or 4 days. I know it's meant to be hours… but real life.
What I enjoyed about using Cursor was that when shit went wrong it could generate the gcloud CLI commands to interrogate the system, add the results to the agent feed, then continue.
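That interrogation loop looks something like the following (cluster and network names are placeholders, not from the thread):

```shell
# Hypothetical debugging commands an agent might generate; names are placeholders
gcloud container clusters describe my-cluster --region europe-west1
gcloud compute networks subnets list --network my-vpc
gcloud logging read 'resource.type="k8s_cluster"' --limit 20
```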
Lucasoato 6 hours ago [-]
Finding the right command every time is the real time saver.
Ok, it's probably something a developer should know how to do, but who remembers every single command for each cloud provider's CLI?
Querying the resources' actual state makes these AI infra tools so powerful; I found them useful even when I had to manage Hetzner-based Terraform projects.
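For Terraform itself, that state interrogation is a few built-in commands (the `hcloud_server.web` address is a hypothetical Hetzner resource, not from the thread):

```shell
terraform state list                    # every resource tracked in state
terraform state show hcloud_server.web  # attributes of one resource (hypothetical address)
terraform plan -refresh-only            # diff real infrastructure against recorded state
```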
te_chris 5 hours ago [-]
100%. The real unlock/augmentation is not having to remember everything to type.
bythreads 7 hours ago [-]
Maybe I don't understand this too well, but isn't this basically a wrapper for github.com/mark3labs/mcp-go/server?
throwup238 12 hours ago [-]
Does anyone know of an MCP server like this that can work with Terragrunt?
d_watt 12 hours ago [-]
I'd think this would work, as the four tools listed are about retrieving information to give agents more context on the correct providers and modules. Given that Terragrunt works with Terraform directly, I'd think it would help there as well; just add rules/prompts that are explicit about the generated code using the Terragrunt file structure and Terragrunt commands.
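As a sketch of what "explicit about the Terragrunt file structure" might mean in such a rules file, the generated unit would be a `terragrunt.hcl` along these lines (paths and inputs are hypothetical):

```hcl
# live/prod/vpc/terragrunt.hcl -- hypothetical unit in a Terragrunt layout
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "../../../modules/vpc"
}

inputs = {
  cidr_block = "10.0.0.0/16"
}
```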
notpushkin 9 hours ago [-]
And it's MPL-licensed, so you're free to use it with OpenTofu as well (even if competing with HashiCorp).
But as mdaniel notes in a sibling thread, this doesn’t seem to do much at this point.
RainyDayTmrw 8 hours ago [-]
I dunno about this. Infra-as-code has always been a major source of danger. Now we want to put AI on it?
teej 7 hours ago [-]
There's zero danger writing Terraform. The danger is running `apply`.
benterix 6 hours ago [-]
What's the point of writing TF if you never mean to apply it?
teej 4 hours ago [-]
You apply with a human in the loop.
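One common way to enforce that split: let the agent plan freely, but gate `apply` on a saved plan file a human has reviewed (a standard Terraform workflow, not something specific to this MCP server):

```shell
terraform plan -out=tfplan   # safe for the agent to run
terraform show tfplan        # human reviews the exact changes
terraform apply tfplan       # applies only the reviewed plan, after sign-off
```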
ivolimmen 8 hours ago [-]
Initially thought the MCP acronym would stand for "Master control program". Was disappointed.
fakedang 8 hours ago [-]
Funny anecdote: I asked Claude 3.7 to explain MCP to me and it went on blabbering about Master Control Programs.
curtisszmania 9 hours ago [-]
[dead]
tecleandor 12 hours ago [-]
Oh, just what I needed to raise my RUMs and send my Hashicorp bill through the roof!
benterix 6 hours ago [-]
> Oh, just what I needed to raise my RUMs and send my Hashicorp bill through the roof!
Out of curiosity, what are you paying them for? Most orgs that use TF don't.
The back side of that coin is that it similarly just(?) seems to be a fancy way of feeding the Terraform provider docs to the LLM, which was already available via `tofu providers schema -json` without all this HTTP business. IMHO, the fields in the provider binary that don't have populated "description" fields are a bug.
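For reference, a sketch of that local alternative, piping the schema through `jq` (assumes an initialized working directory; the `jq` filter is illustrative):

```shell
# List resource types straight from the provider schema -- no HTTP, no MCP
# server (requires a prior `tofu init` so the provider binary is present)
tofu providers schema -json \
  | jq '.provider_schemas | to_entries[0].value.resource_schemas | keys'
```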