As in, were you originally optimistic about AI, but are now pessimistic, or were once pessimistic but are now optimistic? Did you originally see it as a huge boon to your work, but now find it a hindrance, or were you once dismissive of it, but now find it indispensable?
Just curious who has changed their mind/outlook and what precipitated the change.
I did a PhD in program synthesis (programming languages techniques), and one of the tricks there was to efficiently prune the space of programs. With LLMs it is much more likely that you start with an almost correct guess, so the burden now shifts to lighter verification methods.
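A minimal sketch of that "guess then verify" shift: instead of enumerating and pruning a program space, you take a candidate (here a hardcoded string standing in for an LLM completion; `solve` is a hypothetical entry-point name) and run it against input/output examples.

```python
def verify(candidate_src, tests):
    """Compile a candidate program and check it against (input, output) examples."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # "compile" the guessed program
        fn = namespace["solve"]         # assumed entry point by convention
    except Exception:
        return False                    # ill-formed guess fails verification
    return all(fn(x) == y for x, y in tests)

# An almost-correct guess, as an LLM might produce one:
candidate = "def solve(xs):\n    return sorted(xs)"
tests = [([3, 1, 2], [1, 2, 3]), ([], [])]
print(verify(candidate, tests))  # True: the guess passes every example
```

The verifier here is just example checking; in practice it could be a type checker, a test suite, or a lightweight proof obligation, but the shape is the same: cheap generation, then filtering.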
I still do not believe in the AGI hype, but I am genuinely excited. Computing has always been humans writing precise algorithms and getting correct answers. The current generation of LLMs is the opposite: you can be imprecise, but the answers can be wrong. We have to figure out what interesting systems we can build with that.