Re: OK, OK. AI is a handy tool.

by LanX on 2025-12-19 09:16:57
> Unfortunately, the new script didn't work correctly. ...

> This AI produced some really helpful pointers, and I incorporated five of the six improvements.

You didn't tell us if the improvements made the script "work correctly".

Cheers Rolf
(addicted to the Perl Programming Language :)
see Wikisyntax for the Monastery

Replies

  • You're quite right -- yes, it fixed some of my errors. Using P::FM (Parallel::ForkManager), in the child process I had a loop evaluating child SKUs, and initially had a 'next' (to go on to the next kid) for when I discovered I didn't need to deal with a given child SKU. When reviewing this code, I changed that to a '$pfm->finish' call, which exits the child process after the first child SKU. The AI caught this, and reminded me I should be using 'next' there -- just like I'd had originally.

    I'd also collected some anomalous information in the child process, but hadn't passed it back via the 'finish' method. The AI caught that as well, and suggested code (commented, even) to fix the omission.

    AI is bad for a lot of things, but as a second set of eyes, it's quite useful.
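    For what it's worth, here is a minimal sketch of the pattern described above. The SKU names, data shapes, and worker count are invented for illustration; only the next-vs-finish distinction and the finish($exit, \%data) / run_on_finish handoff come from the discussion.

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# Invented example data -- one parent SKU with several child SKUs.
my %children_of = ( 'PARENT-1' => [ 'KID-A', 'KID-B', 'SKIP-ME', 'KID-C' ] );

my $pfm = Parallel::ForkManager->new(4);

# The parent collects whatever reference each child passed to finish().
my %anomalies;
$pfm->run_on_finish( sub {
    my ( $pid, $exit, $ident, $signal, $core, $data ) = @_;
    %anomalies = ( %anomalies, %$data ) if $data;
} );

for my $parent_sku ( keys %children_of ) {
    $pfm->start and next;    # parent continues the loop; child falls through

    my %found;
    for my $kid ( @{ $children_of{$parent_sku} } ) {
        # 'next' skips just this one child SKU; a $pfm->finish here would
        # terminate the whole child process after the first SKU.
        next if $kid =~ /^SKIP/;
        $found{$kid} = 'checked';
    }

    # Hand the collected data back to the parent via finish().
    $pfm->finish( 0, \%found );
}
$pfm->wait_all_children;
```

    The distinction matters because finish() exits the child process: with finish() in place of next, everything after the first evaluated SKU would silently never run.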

    Alex / talexb / Toronto

    As of June 2025, Groklaw is back! This site was a really valuable resource in the now ancient fight between SCO and Linux. As it turned out, SCO was all hat and no cattle. Thanks to PJ for all her work, we owe her so much. RIP -- 2003 to 2013.

    • I still cringe when LLMs are called AI.

      I suppose you got good suggestions because the model was trained on many examples, not because it intelligently deduced the right approach from the documentation ("if A then B").

      That's rather "Artificial Remembering".

      LLMs regularly fail when there isn't enough training data to draw on.

      It's like when we notice that a word might be misspelled in English because we've seen it many times. (You often can't deduce English orthography; you need to have seen the word before.)

      For instance, AFAIK no LLM has ever learned to do simple calculations like division by training alone. At best, newer LLMs recognize a calculation and are wired to delegate it to an "agent" (here, a calculator) which knows how to divide. (Please, someone correct me if I'm wrong.)

      So yes, some kind of pseudo-intelligent programming might happen one day, if LLMs are wired to use better agents to improve their code.
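      The delegation described here -- spot a calculation in free text and hand it to a deterministic tool instead of "remembering" an answer -- can be caricatured in a few lines of Perl. This is purely illustrative: real tool-calling stacks use structured function calls, not a regex, and every name below is invented.

```perl
use strict;
use warnings;

# Toy router: if the prompt contains a division, delegate it to the
# deterministic "calculator agent"; otherwise admit defeat.
sub answer {
    my ($prompt) = @_;
    if ( $prompt =~ m{(\d+(?:\.\d+)?)\s*/\s*(\d+(?:\.\d+)?)} ) {
        my ( $a, $b ) = ( $1, $2 );
        return 'undefined' if $b == 0;
        return $a / $b;    # the calculator answers, not the language model
    }
    return 'not in my training data';
}

print answer('What is 355 / 113?'), "\n";    # approximately pi
```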

      But just imagining the energy consumption° needed for trial-and-error coding makes me pale.

      And yes, remembering is part of intelligence -- like recently, when I recognized at a glance that @{%$hashref} looks wrong. We use a lot of "caching" in our mental processes.
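      For the curious, why @{%$hashref} looks wrong: %$hashref dereferences to the hash itself, so wrapping it in @{ } evaluates the hash in scalar context (a key count on modern Perls) and then tries to use that scalar as an array reference, which strict refs rejects at runtime. A small sketch of the legal forms (variable names invented):

```perl
use strict;
use warnings;

my $hashref = { a => 1, b => 2 };

# Correct: %$hashref dereferences to the hash itself.
my @keys = sort keys %$hashref;    # ('a', 'b')

# Correct: to pull an array out of a hashref, dereference an array value.
my $h2   = { list => [ 10, 20, 30 ] };
my @list = @{ $h2->{list} };       # (10, 20, 30)

# Wrong: @{ %$hashref } uses the hash's scalar value as an array
# reference -- a runtime error under strict refs, caught here by eval.
my $err = do { local $@; eval { my @bad = @{ %$hashref } }; $@ };
print $err;    # Can't use string ... as an ARRAY ref while "strict refs" in use
```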

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      see Wikisyntax for the Monastery

      °) On a tangent: the "Make the President Richer Again" movement just bought stakes in a nuclear firm, because they are betting on higher electricity needs in the US. Probably fueled by government contracts.

        • > But just imagining the energy consumption° needed for trial-and-error coding makes me pale.

        I agree 100%. The energy costs for AI/LLMs are gigantic. *If* they delivered everything promised, that energy cost might be worthwhile -- but given what I've seen of what AI/LLMs can do, I'd rather have a human do this work.

        Alex / talexb / Toronto

        As of June 2025, Groklaw is back! This site was a really valuable resource in the now ancient fight between SCO and Linux. As it turned out, SCO was all hat and no cattle. Thanks to PJ for all her work, we owe her so much. RIP -- 2003 to 2013.