I’ve seen a few articles claiming that, far from hating AI, the quiet rank-and-file programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also taking shots at Ed Zitron, which makes sense given its angle).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • jasory@programming.dev · 1 day ago

I’m not a software dev but rather a mathematical researcher. I see zero use for it in my own work or in designing any advanced or critical systems. LLM coding is like relying on Stack Overflow: if you want to solve a novel or sophisticated problem, relying on them is the wrong approach.

  • nomad@infosec.pub · 2 days ago

I use LLMs from both ends. They help me plan and think through complex code architecture, and they help me do the little stuff I do too infrequently to remember. Putting it all together is usually all me.

  • southernbrewer@lemmy.world · 2 days ago

I’m enjoying it, mostly. It’s definitely great at some tasks and terrible at others. You get a feel for what those are after a while:

1. Throwaway projects - proofs of concept, one-off static websites, that kind of thing: absolutely ideal. Weeks of dev become hours, and you barely need to bother reviewing it if it works.

    2. Research (find a tool for doing XYZ) where you barely know the right search terms: ideal. The research mode on claude.ai is especially amazing at this.

    3. Anything where the language is unfamiliar. AI bootstraps past most of the learning curve. Doesn’t help you learn much, but sometimes you don’t care about learning the codebase layout and you just need to fix something.

4. Any medium-sized project with a detailed up-front description.

    What it’s not good for:

    1. Debugging in a complex system
    2. Tiny projects (one line change), faster to do it yourself
    3. Large projects (500+ line change) - the diff becomes unreviewable fairly quickly and can’t be trusted (much worse than the same problem with a human where you can at least trust the intent)

  • MXX53@programming.dev · 2 days ago

    I like using it. Mostly for quick ideation, and also for getting rid of some of the tedious shit I do.

Sometimes it suggests a module or library I have never heard of; then I go and look it up to make sure it is real, not malicious, and well documented.

I also like using my self-hosted AI to document my code base in a README following a template I provide. It does a pretty good job, usually 60-80% accurate and close to the form I like. I just edit the rest and correct mistakes. Saves me a ton of time.

    I think the best way to use AI is to use it like a tool. Don’t have it write code for you, but use it to enhance your own ability.

  • iegod@lemmy.zip · 2 days ago

    I use it to vet ideas, concepts, approaches, and paradigms. It’s amazing for rubber ducking. I don’t use it for wholesale code gen though.

    And as a documentation companion it’s pretty rad. Not always right but generally gets things in the correct direction.

  • Ledivin@lemmy.world · 3 days ago

It’s an absolute game-changer, IMO - the research phase of any task is reduced to effectively nothing, and I get massive amounts of work done when I walk away from my desk, because I plan for and keep lists of longer tasks to accomplish during those times.

    You need to review every line of code it writes, but that’s no different than it ever was when working with junior devs 🤷‍♂️ but now I get the code in minutes instead of weeks and the agents actually react to my comments.

We’re using this with a massive monorepo containing hundreds of thousands of lines of code, and in tiny tool repos that serve exactly one purpose. If our code quality checks and standards weren’t as strict as they have been for the past decade, I think it wouldn’t work well with the monorepo.

    The important part is that my company is paying for it - I have no clue what these tools cost. I am definitely more productive, there is absolutely no debate there IMO. Is the extra productivity worth the extra cost? I have literally no idea.

  • Electricd@lemmybefree.net · 3 days ago

    I do and it’s great for small tasks. Wouldn’t trust it on an existing code base or more than a hundred lines of code.

I always review what it does and often cherry-pick stuff.

The only thing I vibe code is small websites / front ends, because fuck HTML, CSS, and JS.

  • communism@lemmy.ml · 3 days ago

    I wouldn’t know about professionally as I don’t work in the industry, but anecdotally a lot of young people I see use LLMs for everything. Meanwhile in the FOSS community online I see very little of AI/LLMs. I think it’s a cultural thing that will vary depending on what circle of people you’re looking at.

  • iglou@programming.dev · 3 days ago

    I’m not against AI use in software development… But you need to understand what the tools you use actually do.

    An LLM is not a dev. It doesn’t have the capability to think on a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about.

    An LLM is a predictive tool. So use it as a predictive tool.

    • Boilerplate code? It can do that, yeah. I don’t like to use it that way, but it can do that.
    • Implementing a new feature? Maybe, if you’re lucky, it has been trained on enough data that it can put something together. But you need to consider its output completely untrustworthy, and therefore it will require so much reviewing that it’s just better to write it yourself in the first place.
    • Implementing something that solves a problem not solved before? Just don’t. Use your own brain, for fuck’s sake. That’s what you have been trained on.

    The one use of AI, at the moment, that I actually like and actually improves my workflow is JetBrains’ full line completion AI. It very often accurately predicts what I want to write when it’s boilerplate-ish, and shuts up when I write something original.

  • djmikeale@feddit.dk · 3 days ago

    Not total bullshit, but it’s not great for all use cases:

For coding tasks, the output looks good on the surface, but I often end up changing stuff, meaning it would have been faster to do it myself.

For coding I know little about (currently writing some GitHub Actions), it’s great at explaining alternatives and their pros and cons, giving me a rudimentary understanding of stuff.

I’ve also used it to transcribe tutorial screencasts, and then afterwards had a secondary LLM use the transcription to generate documentation (included in the prompt: "when relevant, generate examples, use markdown tables, generate plantuml, etc.").

  • I don’t see how it could be more efficient to have AI generate something that you then have to review and make sure actually works, over just writing the code yourself - unless you don’t know enough to code it yourself and just accept the AI-generated code as-is without further review.

    • Zexks@lemmy.world · 2 days ago

      You can type at 300 words per minute with zero mistakes? You’re able to do that on systems you’ve never worked on before, in languages you’ve never seen? #Doubt

    • Quibblekrust@thelemmy.club · 3 days ago

      I don’t see how it could be more efficient to have [a junior developer write] something that you then have to review and make sure actually works over just writing the code yourself…

      • iglou@programming.dev · 3 days ago
        1. A junior dev won’t be a junior dev their whole career; code reviews also educate them.
        2. You can’t trust the quality of a junior’s work, but you can trust that they are able to understand the project and their role in it. LLMs are by definition unable to think and understand - just pretty good at pretending they are. Which leads to the third point:
        3. When you “vibe code”, you don’t “just” have to review the produced code; you also have to constantly tell the LLM what you want it to do, and fight with it when it fucks up.
        • Pup Biru@aussie.zone · 3 days ago

          if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

          LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete…

          type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure (see the sketch at the end of this comment)

          let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

          fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

          let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

          there’s loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: just repetitive crap you probably hate to start with, because you could explain it to a non-programmer and they could carry out the tasks
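
          to make the function-header idea concrete, here’s a minimal sketch (all names hypothetical, in python) of a header that’s descriptive enough for a model to fill in, plus the sort of body you’d expect back - still something to review, not to trust:

            from collections import defaultdict

            def group_orders_by_customer(orders: list[dict]) -> dict[str, list[dict]]:
                """Group a list of order dicts by their 'customer_id' key.

                Orders missing a 'customer_id' land under the key 'unknown'.
                """
                # the well-trodden plumbing a completion model reliably
                # produces from a descriptive header alone
                grouped: dict[str, list[dict]] = defaultdict(list)
                for order in orders:
                    grouped[order.get("customer_id", "unknown")].append(order)
                return dict(grouped)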

          • iglou@programming.dev · 2 days ago

            if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

            I never said that, and even a single review will make a junior dev better right off the bat

            LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete

            I agree, but then you say…

            type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure

            …which contradicts that. Implementing a function isn’t for a “fancy autocomplete”, it’s for a brain to do. Unless all you do is reinvent the wheel - then yeah, it can generate a decent wheel for you.

            let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

            Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.
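
            A minimal sketch of that failure mode (names are hypothetical): the generated test encodes the bug as the expected behaviour, so it passes exactly when it should fail.

              def days_in_february(year: int) -> int:
                  # buggy implementation: ignores the 100/400-year exceptions
                  return 29 if year % 4 == 0 else 28

              def test_days_in_february():
                  # a test generated *from* the buggy code can mirror the bug:
                  # 1900 was not a leap year, so this expectation is wrong,
                  # yet the suite stays green - which is exactly the problem
                  assert days_in_february(1900) == 29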

            fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

            I’m not sure what you mean by that.

            let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

            I agree with that, naming or even documenting is a good way to use an LLM. With supervision of course, but an imprecise name or documentation is not critical.

            • Pup Biru@aussie.zone · 2 days ago

              Implementing a function isn’t for a “fancy autocomplete”, it’s for a brain to do. Unless all you do is reinventing the wheel, then yeah, it can generate a decent wheel for you.

              pretty much every line of code we write in modern software isn’t unique… we use so many orders of magnitude more lines of other people’s code than our own, we’re really just plumbing pipes together

              most functions we write that aren’t business logic specific to the problem domain of our software (and even sometimes then) have been written before… the novel part isn’t in the function body: the low level instructions… the novel part is how those instructions are structured… that may as well be pseudocode, and that pseudocode may as well take the form of function headers

              Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.

              write tests, tests fail, write code, tests slowly start to pass until you’re done… this is how we’ve always done TDD because it ensures the tests fail when they should. this is a good idea with or without LLMs because humans fuck up unit tests all the time
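
              as a minimal sketch of that red-green loop (hypothetical function, python): the test exists before the implementation, so you see it fail for the right reason before the code makes it pass

                import re

                def test_slugify():
                    # written (or generated) first: it fails with a NameError
                    # until slugify exists, proving it exercises real code
                    assert slugify("Hello, World!") == "hello-world"
                    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

                def slugify(text: str) -> str:
                    # implementation added second, until the test passes
                    text = text.strip().lower()
                    text = re.sub(r"[^a-z0-9]+", "-", text)
                    return text.strip("-")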

              I’m not sure what you mean by that.

              for example, you have an external API of some kind with an enum expressed via JSON as a string and you want to implement that API including a proper Enum object… an LLM can more easily generate that code than i can, and the longer the list of values the more cumbersome the task gets

              especially effective for generating API wrappers because they basically amount to function some_name -> api client -> call /api/someName

              this is basically a data transformation problem: translate from some structure to a well-defined chunk of code that matches the semantics of your language of choice

              this is annoying for a human, and an LLM can smash out a whole type safe library in seconds based on little more than plain english docs
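
              a minimal sketch of that shape (the API and all names are made up): a string enum mirroring the wire format plus one thin wrapper per endpoint - pure plumbing an LLM can churn out from plain-english docs

                import json
                import urllib.request
                from enum import Enum

                API_BASE = "https://api.example.com"  # placeholder endpoint

                class OrderStatus(Enum):
                    # values mirror the strings the hypothetical API sends
                    PENDING = "pending"
                    SHIPPED = "shipped"
                    CANCELLED = "cancelled"

                def get_order_status(order_id: str) -> OrderStatus:
                    # function some_name -> api client -> call /api/someName
                    url = f"{API_BASE}/api/orderStatus?id={order_id}"
                    with urllib.request.urlopen(url) as resp:
                        payload = json.load(resp)
                    return OrderStatus(payload["status"])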

              it might not be 100% right, but the price for failure is an error that you’ll see and can fix before the code hits production

              and of course it’s better to generate all this using swagger specs, but they’re not always available and tend not to follow language conventions quite so well

              for a concrete example, i wanted to interact with blackmagic pocket cinema cameras via bluetooth in swift on ios: something they don’t provide an SDK for… they do, however document their bluetooth protocols

              https://documents.blackmagicdesign.com/UserManuals/BlackmagicPocketCinemaCameraManual.pdf?_v=1742540411000

              (page 157 if you’re interested)

              it’s incredibly cumbersome, and basically involves packing binary data into a packet that represents a different protocol called SDI… this would have been horrible to try and work out on my own, but with the general idea of how the protocol worked, i structured the functions, wrote some test cases using the examples they provided, handed chatgpt the pdf and used it to help me with the bitbanging nonsense, translating their commands and positionally placed binaries into actual function calls

              could i have done it? sure, but why would i? chatgpt did in 10 seconds what probably would have taken me at least a few hours of copying data from 7 pages of tables in a pdf - a task i don’t enjoy doing, in a language i don’t know very well

            • Quibblekrust@thelemmy.club · 2 days ago

              fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

              I’m not sure what you mean by that.

              Not speaking for them, but I use LLMs for this. You have lines of repetitive code, and you realize you need to swap the order of things within each line. You could brute force it, or you could write a regex search/replace. Instead, you tell the LLM to do it and it saves a lot of time.

              Swapping the order of things is just one example. It can change capitalization, insert values, or generate endless amounts of mock data.
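
              For concreteness, here’s the regex version of that swap (the data shape is invented) - the brute-force alternative the LLM saves you from writing:

                import re

                lines = [
                    'user_id, "Alice"',
                    'order_id, "Bob"',
                ]

                # swap the two comma-separated fields on each line:
                # the identifier and the quoted name trade places
                swapped = [re.sub(r'^(\w+), ("[^"]*")$', r'\2, \1', ln) for ln in lines]
                print(swapped)  # ['"Alice", user_id', '"Bob", order_id']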

          • leftzero@lemmy.dbzer0.com · 3 days ago

            They’ll never be able to learn, though.

            An LLM is merely a statistical model of its training material. Very well indexed, but extremely lossy, compression.

            It will always be outdated. It can never become familiar with your codebase and coding practices. And it’ll always be extremely unreliable, because it’s just a text generator without any semblance of comprehension about what the texts it generates actually mean.

            All it’ll ever be able to do is reproduce the standards as they were when its training model was captured.

            If we are to compare it to a junior developer, it’d be someone who suffered a traumatic brain injury just after leaving college, which prevents them from ever learning anything new, makes them unaware that they can’t learn, and incapable of realising when they don’t know something, makes them unable to reason or comprehend what they are saying, and causes them to suffer from verbal diarrhoea and excessive sycophancy.

            Now, such a tragically brain damaged individual might look like the ideal worker to the average CEO, but I definitely wouldn’t want them anywhere near my code.

  • fubarx@lemmy.world · 3 days ago

I use it mainly to tweak things I can’t be bothered to dig into, like Jekyll or WordPress templates. A few times I let it run and do a major refactor of some async back-end code. It botched the whole thing. Fortunately, it was easy to rewind everything from the remote git repo.

    Last week I started a brand new project, thought I’d have it write the boilerplate starter code. Described in detail what I was looking for. It sat there for ten minutes saying ‘Thinking’ and nothing happened. Killed it and created it myself. This was with Cursor using Claude. I’ve noticed it’s gotten worse lately, maybe because of the increased costs.

  • asm@programming.dev · 3 days ago

I’m somewhat new to the field (~1.5 years), so my opinion doesn’t hold too much weight.

But in the embedded field I’ve found AI not to be as helpful as it seems to be for many others. The one BIG thing it has helped me with: I can give it a data sheet and it’ll spit out all the register fields that I need, or help me quickly find information that I’m too lazy to Ctrl-F, which saves a couple of minutes.

It has not proven its worth when it comes to the firmware itself. I’ve tried to get it to instantiate some peripheral instances and they never ended up working, no matter how I phrased the prompt or what context I’ve given it.

  • sobchak@programming.dev · 3 days ago

In the grand scheme of things, I think AI code generators make people less efficient. Some studies have come out that indicate this. I’ve tried to use various AI tools, as I do like the fields of AI/ML in general, but they would end up hampering my work in various ways.