I will admit I've been extremely skeptical of Artificial Intelligence, but this week I found that it was a really useful tool in one specific application.
I've been developing some REST API scripts for a client of mine since 2018. One of them takes a stroll through his product catalog (hosted on BigCommerce), looking for any items that have disagreements between the website and his accounting system. This script has been taking over an hour to run, because for each product it has to make a second call to BigCommerce for additional product information. In a single-threaded situation, all of those latencies add up.
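(To put rough, purely illustrative numbers on it: with, say, 5,000 products and an extra ~0.8 s round trip each, the second calls alone cost 5000 × 0.8 s = 4,000 s, or roughly 67 minutes.)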
I was already using Parallel::ForkManager for one of the scripts, so I thought I'd use it again for a new version of this script. Unfortunately, the new script didn't work correctly. My client has access to the repositories where I store the code, so he grabbed the code and fed it into Claude Opus 4.5. This AI produced some really helpful pointers, and I incorporated five of the six improvements.
Really, it provided a second set of eyes on my code. Colour me impressed.
ChatGPT has actually been pretty helpful with XS coding. I've been giving it bits of Perl to translate to XS, asking it what the causes of particular compiler errors could be, why particular lines were causing segfaults, etc. The code it produces is far from flawless, but it has definitely helped.
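For a flavour of the kind of translation involved -- this is my own minimal sketch, not ChatGPT output, and the My::Fast module name is invented -- a trivial Perl sub like sub add { return $_[0] + $_[1] } comes out in XS roughly as:

```
/* Minimal illustrative XS (module name My::Fast is made up).  */
/* Perl original:  sub add { return $_[0] + $_[1] }            */
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

MODULE = My::Fast    PACKAGE = My::Fast

IV
add(a, b)
        IV a
        IV b
    CODE:
        RETVAL = a + b;   /* native integer (IV) arithmetic */
    OUTPUT:
        RETVAL
```

The argument handling is where the help matters most: xsubpp generates the SV-to-IV conversions from the declarations above, and that boilerplate is exactly where my compiler errors and segfaults tend to come from.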
> This AI produced some really helpful pointers, and I incorporated five of the six improvements.
You didn't tell us if the improvements made the script "work correctly".
Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery
You're quite right -- yes, it fixed some of my errors. In using P::FM, in the child process I had a loop where I was evaluating child SKUs, and initially had next (to go on to the next kid) when I discovered I didn't need to deal with this child SKU. When reviewing this code, I changed that to a $pfm->finish call, exiting the child process after the first child SKU. The AI caught this, and reminded me I should be using next there -- just like I'd had originally.
I'd also collected some anomalous information in the child process, but not passed it back with the finish method. The AI caught that as well, and suggested code (commented, even) to fix that omission.
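For anyone following along, here's a minimal sketch of the corrected pattern -- not the actual client code; load_products() and sku_matches() are invented stand-ins:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

my @products = load_products();      # invented stand-in for the catalog fetch
my @all_anomalies;

my $pfm = Parallel::ForkManager->new(8);   # up to 8 concurrent children

# Runs in the parent whenever a child exits; $data is whatever the
# child passed as the second argument to finish().
$pfm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $signal, $core, $data) = @_;
    push @all_anomalies, @{ $data // [] };
});

PRODUCT: for my $product (@products) {
    $pfm->start and next PRODUCT;    # parent: fork a child, keep looping

    # --- child process from here down ---
    my @anomalies;
    for my $child_sku (@{ $product->{child_skus} }) {
        # Skip this SKU with next, NOT $pfm->finish: finish() would
        # exit the whole child after the first SKU (my original bug).
        next if sku_matches($child_sku);   # invented stand-in check
        push @anomalies, $child_sku;
    }

    # Pass anything collected back to the parent via finish().
    $pfm->finish(0, \@anomalies);
}
$pfm->wait_all_children;
```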
AI is bad for a lot of things, but as a second set of eyes, it's quite useful.
I suppose you got good suggestions because it was trained on many examples, not because it was intelligently deducing the right way from the documentation ("if A then B").
That's rather Artificial Remembering.
LLMs regularly fail when there isn't enough training data to remember from.
Like when we note that a word might be misspelled in English because we've seen it many times. (You often can't deduce the orthography in English; you need to have seen it before.)
For instance, AFAIK no LLM ever learned to do simple calculations like division by training alone. At best, newer LLMs recognize a calculation and are wired to delegate it to an "agent" (here, a calculator) which knows how to divide. (Please someone correct me if I'm wrong.)
So yes, some kind of pseudo-intelligent programming might happen one day, if LLMs are wired to use better agents to improve their code.
But just imagining the energy consumption° needed for trial-and-error coding makes me pale.
And yes, remembering is part of intelligence, like recently when I recognized that @{%$hashref} looks wrong. We use a lot of "caching" in our mental processes.
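To spell out why it looks wrong (my own illustration, with a made-up $hashref): @{ ... } expects an array reference, but %$hashref is a hash dereference, so the two don't compose:

```perl
use strict;
use warnings;

my $hashref = { a => 1, b => 2 };

my @pairs  = %$hashref;          # fine: flatten the hash to a key/value list
my @values = values %$hashref;   # fine: just the values

# my @oops = @{ %$hashref };     # wrong: @{ } wants an array reference,
                                 # but %$hashref isn't one; under strict
                                 # refs this dies at runtime
```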
Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery
°) On a tangent: the "Make the President Richer Again" movement just bought a stake in a nuclear firm, because they are betting on higher electricity needs in the US. Probably fuelled by government contracts.
I agree 100%. The energy costs for AI/LLMs are gigantic. *If* it delivered everything it promised, this energy cost might be worthwhile -- but given what I've seen of what AI/LLMs can do, I'd rather have a human do this work.