Every now and then I come across an article or discussion that just feels plain and mundane. All the words make sense, yet at the same time they feel almost predictable. Despite how well articulated these ideas are - be it in carefully formatted slide decks or confidently delivered prose - they fail to amaze. Ever since November of 2022, the ability to articulate words cohesively (I’m purposefully not using the word coherently) has become table stakes. In a society where, frankly, most work is evaluated on completion and length, LLMs have led to a rapid advancement in productivity. Yet I think we should make a clarification here: the productivity gain is in automating repetitive and redundant tasks. It does not apply to all tasks - in fact, using GPT for sophisticated reasoning is almost guaranteed to produce mediocre results.
Let’s admit it: a big chunk of the work we do every day, someone else can do. The tedious, repetitive, standard-operating-procedure tasks don’t require drastic innovation; they just need criteria to be evaluated against and human hours, lots of them. This is work that AI could automate. However, an issue I’ve been seeing recently is people using AI as a catch-all for tasks that should involve a level of reasoning and, for lack of a better word, taste. Product managers go asking LLMs for user pain points, product features, and even feedback on products. The thing to note here is that an LLM is probabilistic - it’s trained on generalization - and when you build for all, you build for none. This is why I caution myself and take a step back each time an LLM produces a lengthy blob of text in which I don’t see obvious issues on the first read: do I have enough knowledge in the field to have good taste?
This, to me, is a fascinating topic. Although I have some ideas about how best to use LLMs, I now ask myself to read more before formulating a response. At worst, this means building up enough of a text corpus to make probabilistic predictions of my own. At best, I’d be able to reason about and build on some good ideas and push the field a bit further. Here are some of the papers and books I plan to read in the coming weeks:
- Magic Ink: Information Software and the Graphical Interface by Bret Victor
- Man-Computer Symbiosis by J.C.R. Licklider
- Augmenting Human Intellect: A Conceptual Framework by D.C. Engelbart
6.25 Something I’ve just recently started to acknowledge (I’ve heard this repeated many times, but it sort of just sunk in today) is that we are fast approaching a period where AI moves beyond the traditional “copilot” and human-in-the-loop dynamic toward fully autonomous teams capable of aligning on goals and executing multi-step tasks over a long horizon. This would mean a dramatic shift in our workforce, where a significant portion of workers would be non-human entities. The industrial revolution and the rise of the internet led to specialization in the work we do: for internet products, we have software engineers (research, QA, ML), PMs, designers, and marketers, but I see these lines becoming increasingly blurred as we progress.