James Kaplan’s Post

Is anyone _not_ talking about the Goldman Sachs report alleging that we are all looking at a bubble in #GenAI and that the potential value may never justify the proposed trillion dollars in project investment? I would suggest the report is precisely half-right!

1. They are right to be skeptical of the value of many use cases proposed to date, but (like almost everyone else) they don't understand the real potential value of GenAI. Chatbots are nice but only so consequential. Writing emails for you is a party trick. The breakthrough value will come from converting unstructured text to structured data that companies can manipulate and analyze. Lawyers reviewing ISDAs and MBAs reviewing contract terms are expensive! More importantly, think about the insight you could derive once it becomes economical to look for patterns across, for example, thousands of vendor or customer contracts. Or insurance claims. Or electronic health records. (A rough sketch of what that extraction step can look like follows this post.)

2. They are half right on cost. The cost of the technology will get cheaper (no reason to expect Moore's Law won't apply to GPUs), and academics are churning out reams of papers on how to train and deploy LLMs in more compute-efficient ways. The cost of electricity worries me, though: US electrical generation capacity has been flat for two decades (decreasing power intensity per dollar of GDP and GDP growth have cancelled each other out), and it is not clear to anyone that we have the regulatory model to expand generation capacity quickly.

3. I wouldn't be surprised if the cost and complexity of the GPU capacity buildout mean that this will continue to happen off-prem, rather than being repatriated by enterprises. Sequoia Capital is maybe a bit less bearish than GS, in part because the GPUs in the pipeline should offer a lot more power for only a bit more money. https://lnkd.in/eJeGysQY
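To make point 1 concrete, here is a minimal sketch of what "converting unstructured text to structured data" can look like in practice. It is an illustration only, not anything from the post or the GS report: it assumes the OpenAI Python client is installed and configured, and the model name and contract fields are placeholders.

```python
# Minimal sketch of the "unstructured text -> structured data" idea from point 1.
# Assumptions (not from the post): the OpenAI Python client is installed, an API
# key is set in the environment, and the model name and fields are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Extract the following fields from the contract text and return JSON only: "
    "counterparty, effective_date, termination_clause_summary, governing_law."
)

def extract_contract_terms(contract_text: str) -> dict:
    """Ask the model to turn free-text contract language into structured fields."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": contract_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = "This Agreement is made between Acme Corp and Beta LLC, effective 1 Jan 2024..."
    print(extract_contract_terms(sample))
```

Once thousands of contracts are reduced to rows like this, looking for patterns (grouping by governing law, flagging unusual termination terms) becomes ordinary data work rather than manual legal review.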

Goldman Sachs Research Newsletter

goldmansachs.com

This reminds me of the early 1980s and the discussion about bus latency. It took nearly 40 years to get around to doing anything about that issue. Today, people talk about GPUs, which began as add-on cards in computers and still suffer latency between the CPU and GPU. Not an efficient design. The obvious answer is to combine the CPU and GPU, which companies like Graphcore have done. The cost-of-electricity problem can be addressed with 2D materials; see what Paragraf and Black Semiconductor are doing. There is a massive reduction in electricity cost if you use a graphene-like material, and Paragraf has developed a high-throughput method of depositing pure graphene. I just hope it doesn't take another 40 years to become mainstream.

Alexander Chukovski

Building Niche Job Boards (Web3 & AI) | HR Tech Consultant | Expertise in Job Sites SEO, Google Jobs, NLP & AI Solutions, Job Scraping | HR Tech Blogger

3w

If your company can sell a few slides on how trash bins are better than leaving trash on the street for $4MM, you can also sell hallucinating LLMs to the enterprise. I am being extreme here, but talking about using LLMs for contracts and legal use cases shows a fairly low level of understanding of this technology.

subhojit banerjee

Principal DataEngineer, Streaming, LLMOPS, MLOPS, AWS Certified Architect, Azure data engineer

3w

The sad part is that decision makers bet their org's future on reading these reports, when they should be listening for the signal from the Discord channels, the competition arenas, arXiv, and the niche Reddit groups where researchers hang out. New technology is bound to be tumultuous, with extreme volatility in the initial period while it finds its footing. As always, the truth is in the middle, and leadership today doesn't have the wherewithal to understand both sides of the argument and come to an informed decision. When was the last time you saw your manager reading the latest applied LLM paper?

S. Mike Dierken

Founder First Principle Ventures

3w

This part (structured data extraction) is where I'm seeing value with the clients I help. However, it may be that other, non-LLM techniques are just as effective once the impact is validated with low-effort prototypes. A more fundamental change is helping companies move to conversational customer features that generate all that unstructured data, which in turn drives satisfaction, more customers, and more conversations. "The breakthrough value will come from converting unstructured text to structured data that companies can manipulate and analyze."
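For contrast with the LLM sketch above, here is a hedged illustration of the kind of non-LLM technique this comment alludes to: a rule-based extractor that can be good enough when documents follow a predictable layout. The field names and patterns are hypothetical, not drawn from the thread.

```python
# Hypothetical rule-based extractor: when contracts follow a predictable layout,
# plain regular expressions can pull out comparable fields with no model at all.
import re

PATTERNS = {
    "parties": re.compile(r"between\s+(.+?),", re.IGNORECASE),
    "effective_date": re.compile(r"effective\s+(?:as of\s+)?([\w\s,]+\d{4})", re.IGNORECASE),
    "governing_law": re.compile(r"governed by the laws of\s+([\w\s]+?)[\.,]", re.IGNORECASE),
}

def extract_with_rules(contract_text: str) -> dict:
    """Return whichever fields the patterns manage to match; None otherwise."""
    results = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(contract_text)
        results[field] = match.group(1).strip() if match else None
    return results

if __name__ == "__main__":
    sample = ("This Agreement, effective as of January 1, 2024, is made between "
              "Acme Corp and Beta LLC, and shall be governed by the laws of England.")
    print(extract_with_rules(sample))
```

A low-effort prototype like this is often the baseline against which the LLM approach has to justify its cost.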

Titouan Parcollet

Research Scientist at the Samsung AI Centre Cambridge. Affiliated Lecturer at the University of Cambridge, UK. Associate Professor on leave from Avignon Université, FR.

3w

Generative models make things up and they always will; it's in the name. GS is right. Anything that needs a certain degree of robustness, like law, cannot be replaced by generative models alone. You will still need a lawyer to review the output of your model, and most models will fail on anything that isn't a generic contract. The problem is that a generic contract could also be produced pretty quickly with a template and a lawyer. GenAI is indeed a big-ass bubble inflated by the "wow" effect of new interactions between humans and machines. It is definitely a great push in what AI techniques are capable of, but it definitely isn't worth the current investment.

"They breakthrough value will come from converting unstructured text to structured data that companies can manipulate and analyze." The key word is will. It hasn't happened yet and the post doesn't provide any evidence that it will happen. Goldman Sachs is doing what others of us have been doing for years; searching for a successful use case and not yet finding it.

Bijan O.

AWS Strategic Accounts, Sales

3w

There is a profound assumption in point 1 that frankly has never materialized: you're assuming the conversion from unstructured to structured data is something AI can do. No. AI will not be best placed to design, define, and execute an enterprise data governance strategy across existing data stores, emails, documents, manuals, messages, etc. Very few firms, if ANY, today have a defined, observed, and clean corporate lexicon across all data and communications. Yes, one can buy data ontologies, but if buying them alone actually solved anything, we'd hear far more about them.

Then, of course, there is the mapping to external data sources (try that with an LLM, with total opacity of data lineage, etc.). And finally, after all this, how many firms do you think actually have the data history and volume (cleaned and mapped) necessary for even domain-specific modelling to the extent that it can be a corporate asset? As if that's not enough, you've skipped the compliance issues of data access, sovereignty, privacy, security, lineage, etc. that a centralized and universally (within the enterprise) modeled data set runs up, hard, against. It's always about the data, even when it's not about the data.

Darius Burschka

Professor CIT (TUM), Member Scientific Board - Munich Institute of Robotics and Machine Intelligence

3w

The second point is the typical answer from the ML community when something is not working correctly: there is always someone already working on it, but nobody can say who, or how it is supposed to be fixed. The way it is conveyed also assumes that this person is far above the intelligence level of anyone in the current conversation, a kind of high priest. Shouldn't every such statement at least give a valid path for how it is supposed to be done?

Stan Cole

Networking | Next-gen Correspondent Banking | Multi-currency Common Platform for Real-Time Cross-Border Payments with Instant Settlement Finality in CeBM | Super Centralized Liquidity |

2w

Nothing wrong with GS raising a red flag 🚩 about GenAI valuations getting into bubble territory. It's prudent to do so when a new technology's cult-like following results in a massive reallocation of capital into the "next big thing," inflating valuations beyond reason, largely on faith. Look at similar recent examples, e.g. the 5G hype, or even the cannabis hype. Wasn't each of these purported to be an umpteen-trillion-dollar opportunity?

One major obstacle to the large-scale rollout of GenAI in real-world operations is its lack of reliability and dependability. The impact of further scaling on issues such as hallucination remains unclear. Until such fundamental problems (or features?) are resolved, or at least a clear path to their resolution is established, it will be challenging to identify and develop killer applications and successful use cases.
