Vaughn Vernon’s Post


Software Ecologist, Architect, Modeler | Optimizer of Teams and Individuals | Domain-Driven Design and Systems Transformation

I'm certain that I'm not an expert at using various LLMs, but I'm sure the LLMs are not experts at interpreting prompts for code generation either. It's impressive that certain LLMs said to specialize in code generation can roughly understand what I've asked them to do, but the outcomes are nowhere near useful. BTW, it turns out that Dunning-Kruger proved to exhibit what Dunning and Kruger described as the effect. Now what? https://lnkd.in/gj5kYUkH


Reinventing Finance 1% at a Time 💸 | Leading & Scaling FinTech Unicorn 🦄 | The only newsletter you need for Finance🤝Tech at 🔔linas.substack.com🔔 | Financial Technology | Artificial Intelligence | Banking | AI

This is brilliant! We're now witnessing the Dunning-Kruger effect in action, and at a scale never seen before 😅 Always remember: AI won't take your job, but people who are "experts" in AI will 😉 Crazy times. P.S. For more great stuff, check out 🔔linas.substack.com🔔, the only newsletter you need for all things where Finance meets Technology. For founders, builders, and leaders.

Anton Kosyakin

a techie | backend & distributed systems | WebRTC

2w

I recently came up with an interesting metaphor. Of course, it is an absolutely subjective one. We have a relational database, and to query it, we use SQL to define search criteria for our data. The RDBMS interprets our SQL query, applies it to the dataset, and returns the records matching the request. An LLM contains a huge dataset of words, and to query it, we use natural language (e.g., English) to define search criteria for those words. The LLM interprets our prompt, applies it to the dataset, and returns a sequence of words matching the prompt. So, essentially, we are not talking to LLMs, and they do not think or reason out a response. Instead, we are defining queries against a huge dataset of words to locate and narrow down a subset of them. These prompts just happen to resemble English, and mostly only when they are simple. Bigger prompts turn into complex structured queries that need to follow specific rules and are not English anymore.
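The metaphor above can be sketched in a few lines of Python: a SQL query declaratively narrows a table down to the rows matching its criteria, while a prompt "narrows" a space of word sequences down to a likely continuation. The tiny bigram model below is a deliberately crude stand-in for an LLM, used only to illustrate the analogy; the table, corpus, and function names are all invented for this sketch.

```python
import sqlite3
from collections import Counter

# SQL side of the metaphor: a declarative query over structured rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (word TEXT, freq INTEGER)")
conn.executemany("INSERT INTO words VALUES (?, ?)",
                 [("cat", 3), ("dog", 5), ("fish", 1)])
rows = conn.execute(
    "SELECT word FROM words WHERE freq > 2 ORDER BY freq DESC").fetchall()
print([r[0] for r in rows])  # the RDBMS returns records matching the criteria

# "LLM" side of the metaphor: a toy bigram model (NOT a real LLM) in which
# the prompt acts as a query that narrows the word space to a continuation.
corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def continue_prompt(prompt: str) -> str:
    """Return the word that most often followed the prompt's last word."""
    last = prompt.split()[-1]
    candidates = {nxt: n for (w, nxt), n in bigrams.items() if w == last}
    return max(candidates, key=candidates.get)

print(continue_prompt("the dog sat on"))  # the prompt "selects" a continuation
```

The point of the sketch is the structural parallel, not the mechanics: in both halves, the input is a criterion applied to a dataset, and the output is whatever that dataset yields for it.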

Maximilian Kusterer

Data Scientist | Data Engineer

2w

The man you quoted is the best example of what he described

Jake Bruun

Solutions Architect at Axian

2w

I'm still trying to become an expert in just regular organic intelligence. The more I learn, the more I realize I don't know.
