Software Ecologist, Architect, Modeler | Optimizer of Teams and Individuals | Domain-Driven Design and Systems Transformation
I'm certain that I'm not an expert at using various LLMs, but I'm sure the LLMs are not experts at interpreting prompts for code generation. It's impressive that certain LLMs said to specialize in code generation can roughly understand what I've asked them to do, but the outcomes are nowhere near useful. BTW, it turns out that the Dunning-Kruger study itself proved to exhibit what Dunning and Kruger described as the effect. Now what? https://lnkd.in/gj5kYUkH
Reinventing Finance 1% at a Time 💸 | Leading & Scaling FinTech Unicorn 🦄 | The only newsletter you need for Finance🤝Tech at 🔔linas.substack.com🔔 | Financial Technology | Artificial Intelligence | Banking | AI
This is brilliant! We're now witnessing the Dunning-Kruger effect in action at a scale never seen before 😅 Always remember: AI won't take your job, but people who are "experts" in AI will 😉 Crazy times.
The man you quoted is the best example of what he described.
I'm still trying to become an expert in just regular organic intelligence. The more I learn, the more I realize I don't know.
a techie | backend & distributed systems | WebRTC
I recently came up with an interesting metaphor. Of course, it's an entirely subjective one. We have a relational database, and to query it we use SQL to define search criteria for our data. The RDBMS interprets our SQL query, applies it to the dataset, and returns the records matching the request. An LLM contains a huge dataset of words, and to query it we use natural language (e.g., English) to define search criteria for the words. The LLM interprets our prompt, applies it to the dataset, and returns a sequence of words matching the prompt. So, essentially, we are not talking to LLMs, and they do not think or reason out a response. Instead, we are defining queries against a huge dataset of words, to locate and narrow down a subset of those words. These prompts just happen to resemble English, and mostly only when they are simple. Bigger prompts turn into complex structured queries that need to follow specific rules and are no longer English.
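The parallel above can be sketched in Python: a SQL query and a prompt both act as search criteria over a dataset of words. The SQL half uses the standard-library `sqlite3` module; the `llm_complete` function is a hypothetical stand-in for a model call, not a real API.

```python
import sqlite3

# Build a tiny in-memory relational database of words.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (word TEXT, topic TEXT)")
con.executemany(
    "INSERT INTO words VALUES (?, ?)",
    [("gradient", "ml"), ("index", "db"), ("token", "ml"), ("join", "db")],
)

# SQL: a structured query narrows the dataset to the matching records.
rows = [w for (w,) in con.execute("SELECT word FROM words WHERE topic = 'ml'")]
print(rows)  # ['gradient', 'token']

# LLM (hypothetical stub): the prompt plays the same narrowing role,
# selecting a sequence of words out of the model's learned dataset.
def llm_complete(prompt: str) -> str:
    ...  # would call a model; the prompt is effectively the query
```

In this framing, "prompt engineering" is closer to query tuning than to conversation: you are refining search criteria until the returned subset of words matches what you wanted.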