New York, New York, United States
Contact Info
204 followers
204 connections
Experience & Education
- Citadel
  ******** ********
- ****** *****, ***
  ******** ********
- **** ***** *******, ***
  ******** *********
- ********* **********
  ***
Licenses & Certifications
Other similar profiles
- Benjamin Spar · Norwood, MA
- Ariel Rakovitsky · New York City Metropolitan Area
- Alex Feldman · Brooklyn, NY
- Sooho Park · San Francisco Bay Area
- Jonathan Bettencourt · New York, NY
- Dan Berger · New York, NY
- Doug Musser · Ellicott City, MD
- Jeffrey Liguori · Greater Chicago Area
- Ian Simonson · United States
- Xinyue Chen · United States
- Patrick Moran · Greater Chicago Area
- Casey Worthington · Greater Chicago Area
- Xi Jin · New York City Metropolitan Area
- Arpad Asztalos · Greater Boston
- Xin Pan · New York City Metropolitan Area
- Hiren M. · New York, NY
- Vivek Kotecha (Kubernetes Engineer @ VMware | Sustainability Enthusiast) · Austin, Texas Metropolitan Area
- Trishla Shah · New York City Metropolitan Area
- Sandeep Chaudhary · New York, NY
- Elizabeth Hong · Sunnyvale, CA
Explore more posts
-
Michael Ashley Schulman, CFA
"I believe I heard a collective sigh of relief across Wall Street as many investors and analysts were nervous about NVIDIA and the entire #tech sector ahead of this announcement," said Michael Schulman, chief investment officer at Running Point Capital.
📰 We were quoted by Reuters this afternoon, following #Nvidia's release of first-quarter #earnings and #future projections, in an article by Caroline Valetkevitch and Arsheeya Bajwa.
🦅 SUMMARY: Nvidia's #AI ambitions continue to soar, but restrictions on sales to #China and "in-house" chip competition from its own customers hang over its future.
📈 After four consecutive quarters of earnings that blew past Wall Street analyst estimates, Nvidia just flexed its muscles again, delivering solid Q1 sales and Q2 projections that outpaced expectations. From fiscal 2020 to 2024, Nvidia grew revenues nearly 6x to $61B and earnings nearly 9x to roughly $13 per share. That frantic growth does not have to be maintained, but investors still want to see market-leading growth from this $2.3 trillion behemoth.
🪓 Their announced ten-for-one stock split and increased dividend should also be a huge positive rally catalyst for the stock in after-hours trading and over the next several days.
🐂 #AI_chip_demand_begets_more_AI_chip_demand: Management's second-quarter revenue projections bested estimates by an incredible $1.2 billion. This bullish performance indicates longevity of demand for chips that can process large language models (#LLMs) and #generativeAI. The forecast indicates that we are still early in the cycle, that AI chip demand begets more AI chip demand, and that their chips powering data centers, chatbots, crypto miners, AI, and other cutting-edge tools are a hot commodity; the main risk would be if, for some reason, #TSMC were unable to meet Nvidia's production demand.
🏏 The generative AI and LLM rally that has propelled the tech sector may continue a while longer, especially since we are in the early innings of utilization across industries and corporations across the globe. It is interesting that cloud customers have fallen to less than 50% of revenues, which may indicate that #chip demand is broadening across customers.
⚖ Nvidia's recent GTC 2024 conference showcased their ambition to become a one-stop shop for building AI infrastructure. Their new #Blackwell platform and expanding software offerings strengthen their position, and improved switch performance could keep their networking business humming. However, recent US restrictions on sales of certain chips to China, while not a major short-term blow, dampen their long-term prospects in this critical market, and "in-house" chip competition from current customers like Alphabet Inc. and Apple hangs over its future. https://lnkd.in/gEBvwf57
-
Herbert Blank
"Midcap Tech Stocks and ETFs: What Lies Beneath" - TalkMarkets article using ValuEngine Inc. data, stock and ETF reports, along with data pulled from ETFdb.com Inc, a VettaFi company. Please read the article here: https://lnkd.in/ez4rhPQY As impressive as the rise of NVDA has been, opportunities in the tech sector clearly stretch beyond this one stock and its peers in the "Magnificent Seven." With an eye toward a potential broadening of the tech-sector opportunity set, this blog focuses on a few selected Technology ETFs, three of which target a broad segment of ETFs, most of which are not in QQQ, followed by a quick look at a few technology stocks most highly rated by ValuEngine. Featured ETFs include: First Trust Technology AlphaDEX (now called StrataQuant) Fund (FXL); Invesco US S&P SmallCap Information Technology ETF (PSCT); Invesco Nasdaq Next Gen 100 ETF (QQQJ). Featured stocks include: ServiceNow (NOW); Pinterest (PINS); DoorDash (DASH); Snap Inc. (SNAP); and Western Digital. Another company that we rank a strong Buy in this group, but which didn't make the market-cap cut for the article, is Toast (TOST). Overall, our predictive model and valuation model are both bearish on technology in general, but they've unearthed a few hidden gems. Investors interested in staying in the sector while paring positions in Apple, Nvidia, and Tesla may find some of these stocks worth investigating. However, such investors should fasten their seatbelts, as all of these stocks are very volatile and many experts predict increased market volatility for the next 6 months. Thanks for assistance to Rajesh Jain, Paul Henneman, and Trish Twining.
Shouts to: Patricia Baronowski-Schneider; Michael Cronan; Jeff West; Jerilyn Klein; Dorothy Hinchcliff; Peter Wright; Wayne Nef; Mary Ann Bartels; Deborah Fuhr, CFA fellow; Lauren Davis, M.A.; Gayathiri Sri Rangan; Fernando Domecq; Anchal Tandon; Gareth Parker; Elle Worrell; Anil Ghelani, CFA; James Pacetti; Elisabeth Kashner, CFA; Cinthia Murphy; Lara Crigger; Todd Rosenbluth; Sam Stovall; James Eagle; James Picerno
-
Noelle Acheson
**ETF approval comETH?** (Ok, that title was really bad, sorry, I couldn't resist.) It’s time for an ETH spot ETF verdict. This week, the SEC has to make a decision on a couple of the ETH spot ETF proposals in its in-tray, after exhausting all the possible delays. Up for final consideration are proposals from VanEck and ARK/21 Shares, with the Hashdex deadline following next week. These are very likely to be denied. There has not been much back-and-forth between the issuers and their regulator, as you would expect were the SEC actually considering approval. And the current uncertainty over whether or not ETH is a security effectively puts a lid on the current set of proposals as its classification affects under which rule they are submitted. What’s more, we could see the SEC this week issue a blanket refusal for all outstanding proposals, rather than drip them into the market one by one. This is largely already baked into the ETH price, however, as there are clearly no obvious grounds for approval. The asset has been outperforming BTC over the past few days, but it feels more like a function of market risk sentiment than ETF speculation (chart below, ratio of BTC/ETH drops when ETH is outperforming). Will there be lawsuits after the denial, as there were with the BTC spot ETF proposals? I don’t think so. Grayscale have already done their bit for the industry in filing (and winning!) a lawsuit against the securities regulator last year – by withdrawing their proposal for an ETH futures ETF, they have signalled they are stepping back for now. What’s more, there is little incentive for BlackRock or any of the other traditional investment managers to take up the baton as the ETH spot ETF is unlikely to be a significant product for them, certainly not worth antagonizing their regulator over. 
A BlackRock executive recently said that they are seeing little demand for an ETH spot product, and the absence of staking return distributions would make it less profitable than holding ETH directly. Of course, that doesn’t mean that the denial, when it comes, won’t push the ETH price down – I’ve often talked about how crypto markets are not efficient. But macro sentiment is likely to continue to be a key driver in coming weeks, with even crypto trader eyes on upcoming economic data. (I wrote about this and more in today's Crypto is Macro Now newsletter - https://lnkd.in/dfDQ-kVk)
-
Matthew Yeseta
Seeking a full-time onsite/hybrid AI Architect / Engineering Manager for a leading RAG and LLM generative AI role, to relocate and lead RLHF and prompt engineering for zero-shot, one-shot, and few-shot learning. Architect, manage, and lead teams on fine-tuning LLMs with PEFT, RLHF, and prompt tuning of the context window; work on reward models using LoRA; and work on agents that instruct the LLM over the context window, leading generative LLM RAG performance for scalability. Architect and manage generative AI LLM LangChain use cases, and work with Hugging Face for encoders/weights. Architect, manage, and contribute to building RAG (Retrieval-Augmented Generation) to retrieve external library data tailored to specific model domains in prompt analysis. Additional talents that I offer include people-first partnerships, delivering on AI architecture accountability, and ensuring a focus on diversity and inclusion while delivering business value. Manage stakeholders' data projects to improve business revenue. On the people side, I am a people-loving manager who is humble and brings critical, creative, innovative AI thinking to the team. Architect the AI LangChain generative AI use cases that need LLM text. Contribute as engineering lead on improved production pipeline supply-chain signal processing, and be the engineering architect for large language model (LLM) text analysis. Research how teams can be more productive using LangSmith for LLMs. Assist in curating a sandbox to develop a bidirectional encoder transformer (BERT) with masked language modeling (MLM) and next-sentence prediction. Lead on OpenAI GPT for optimized performance to build chat what-if analysis for the business.
Manage and lead scaling the team on development with LLM chains and prediction models, and build predicted messages that pull from Hugging Face Hub objects, using prompt engineering with prompt templates and human message responses.
-
Henry Diep
Today, I found a fascinating way to think about "tokens" in LLMs. It came from a conversation between Jensen Huang and Patrick Collison at a recent Stripe event.
When asked about the future of compute capacity in five years, Jensen smartly dodged the question a bit, but then gave an excellent explanation and analogy for why tokens are the new force that will power the next decades of humanity. He explained that we're now producing something unprecedented (and at scale): floating-point numbers that possess "value", which we now call tokens. These tokens are valuable because they encapsulate "intelligence". People are now taking these tokens and transforming them into English, French, images, videos, chemicals, proteins, robotic movements, etc. And many are working hard to expand the range of concepts and ideas we can create with these tokens.
He then went on to make a compelling comparison between tokens and electricity. In the past industrial revolution, we successfully found a way to convert "atoms" into "electrons" (by boiling water to power electricity turbines). And now, we've discovered a way to convert "electrons" into "tokens" (by using energy to power data centers that train and run LLMs). When electricity was first introduced, few people understood its value. Today, paying for kilowatts is routine. The same will happen with tokens. Right now, only early adopters and builders are paying for tokens. Soon, everyone will be paying for tokens on a daily basis to supercharge productivity and power new products and services. Many new industries will be born from, and built on top of, tokens.
When I first heard this way of thinking, I got goosebumps. Perhaps it's because Jensen is a skilled storyteller who sells what he builds, but I have to admit this framing gave me a brief surge of pride. I'm proud of humanity's collective effort and creativity, which have taken us from living under rocks to building machines that can "think."
That's nothing short of a miracle.
-
M R moiq capital
Alexandr Wang from Scale AI : "Hiring on merit will be a permanent policy". It’s a big deal whenever we invite someone to join our #mission, and those decisions have never been swayed by orthodoxy or #virtue signaling or whatever the current thing is. I think of our guiding principle as MEI: #merit, #excellence, and #intelligence. That means we #hire only the best person for the #job, we seek out and demand #excellence, and we unapologetically prefer people who are very smart. 👏 👏 👏 🍾 🥂 👏 👏 👏 We treat everyone as an individual. We do not unfairly stereotype, #tokenize, or otherwise treat anyone as a member of a demographic group rather than as an individual. We believe that people should be judged by the content of their #character — and, as colleagues, be additionally judged by their talent, skills, and work ethic. There is a mistaken belief that #meritocracy somehow conflicts with #diversity. I strongly disagree. No group has a monopoly on excellence. A hiring process based on merit will naturally yield a variety of backgrounds, perspectives, and ideas. Achieving this requires casting a wide net for talent and then objectively selecting the best, without bias in any direction. We will not pick winners and losers based on someone being the “right” or “wrong” race, gender, and so on. It should be needless to say, and yet it needs saying: doing so would be racist and sexist, not to mention illegal. Upholding meritocracy is good for #business and is the right thing to do. This approach not only results in the strongest possible #team, but also ensures we’re treating our colleagues with fairness and respect. As a result, everyone who joins Scale can be confident that they were chosen for their outstanding talent, not any other reasons. MEI has gotten us to where we are today. 
And it's the same thing that'll get us where we're going, as we embark on our next chapter focusing on data abundance, frontier data, and reliable measurement to accelerate the development and adoption of AI models." We never wrote these principles down in our own words, but I can confidently say they are the principles for hiring at moiq capital!
-
Sugato Ray
🎉 PyTorch just made available the alpha release of the ExecuTorch library. 🦋 What it does: 🎈 Helps deploy Large Language Models (LLMs) on edge devices (including mobile) 👉 Since mobile devices are highly constrained in compute, memory, and power, bringing LLMs to such devices is a considerable challenge. To pack these models appropriately, the PyTorch team heavily leveraged quantization and other techniques. 🎁 What it offers/supports: 👉 supports 4-bit post-training quantization using GPTQ 👉 production-tested: ExecuTorch is used by Meta for hand tracking on Meta Quest 3 and for a variety of models on Ray-Ban Meta smart glasses 👉 runs Llama 2 7B efficiently on iPhone 15 Pro, iPhone 15 Pro Max, Samsung Galaxy S22, S23, and S24 phones, and other edge devices 👉 early support for Llama 3 8B 👉 broad device support on CPU via dynamic shape support and new dtypes in XNNPACK 🚀 Blog post: https://lnkd.in/gTFxdDhP 👉 GitHub: https://lnkd.in/gN-Yjj4A 👉 Docs: https://lnkd.in/gVD-2Tzt #LLMs #edgedevice #deployment #ml #pylib #executorch #pytorch
-
Oliver Loutsenko
Many of us know the saying "history often rhymes, but never repeats". In my view, that couldn't be more true of the chart below. It shows the ratio of the SOX (Philadelphia Semiconductor Index) to the SPX (S&P 500). Directly preceding the early-2000s tech bust, we had a similar run in #semiconductor stocks with cyclical fundamentals that greatly outpaced US large-caps in the S&P 500. Cisco was a notable member of the index at the time. Just look how badly semis crashed and burned, even relative to the S&P 500, during a stock market crash in which market participants quickly capitulated on speculative tech. I started with "history often rhymes, but never repeats" because I don't think we're going to see an identical outcome in this cycle. The equity market environment is very similar - i.e. valuations don't matter, the #AI craze is this cycle's early-2000s internet craze, extreme market-weight concentration is held in mega-caps, etc. - with key distinctions as well, such as monopolistic tech #stocks with multi-trillion-dollar market caps. But the real concern could be that the macro backdrop overall is far more dangerous than heading into the early 2000s. I think almost anyone would agree that's not even a close call. Whether it's economic disappointment, earnings disappointment, or any general economic or asset-market event that leads investors in US equities to capitulate, valuations will begin to urgently matter. I suspect that will inevitably lead market participants to meaningfully sell off excessively overhyped semis, with expectations of fundamentals like #earnings and revenues drastically disconnected from where their actuals will land. If we do see a similar outcome with a severe drawdown in speculative tech, my sense is the difference from the early 2000s could be the duration of the bear market. As many know, tech stocks quite literally crashed in the early 2000s.
With today's multi-trillion-dollar monopolistic tech firms, the upcoming #valuation rebalance in the US equity market could take significantly longer and perhaps not feel as much like the "crash" of the early 2000s. Time will tell. OVOM Research, TradingView #Research #Economy #Markets #Macro #Equities
-
Henry Booth
📈 April Performance Update for Hedge Funds
As we closed out April, several leading hedge funds demonstrated their adeptness at navigating a turbulent market landscape. Here's how some of the top players are standing:
- Citadel is off to a strong start this year, already up nearly 8%.
- Schonfeld impresses with a solid performance of nearly 7% returns.
- Walleye has achieved a commendable 6.6%.
- Point72 follows with a 6% increase.
- Millennium is up by 5%.
- Verition and Balyasny are both navigating the complexities well, each posting 4% gains.
- ExodusPoint rounds out the list with a 2% increase, still positive in a challenging environment.
Performance across the first four months of the year suggests many will be looking for mid-double-digit returns come year-end. With rates at 5%, investors will want to see an additional 5-6% on top of that in order to consider it a good return for taking the extra risk. 💡 What strategies do you think will dominate as we progress through the year? #HedgeFunds #MarketTrends #InvestmentStrategy #QuantitativeTrading #FinancialMarkets https://lnkd.in/dx4hR3zZ
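The arithmetic behind "mid double digit returns" can be sketched by geometrically extrapolating the four-month figures quoted above to a full year. A minimal sketch; the fund numbers are simply the post's figures, and the clean compounding is an assumption (real P&L won't compound this smoothly):

```python
def annualized(ytd_return, months=4):
    # Geometrically extrapolate a partial-year return to a 12-month horizon.
    return (1 + ytd_return) ** (12 / months) - 1

# Year-to-date returns as quoted in the post (approximate).
funds = {"Citadel": 0.08, "Schonfeld": 0.07, "Walleye": 0.066, "Point72": 0.06,
         "Millennium": 0.05, "Verition": 0.04, "Balyasny": 0.04, "ExodusPoint": 0.02}
run_rate = {name: round(annualized(r), 3) for name, r in funds.items()}

# Hurdle logic from the post: 5% risk-free rate plus a 5-6% premium
# implies investors look for roughly 10-11% before the extra risk pays.
hurdle_low, hurdle_high = 0.05 + 0.05, 0.05 + 0.06
```

On this extrapolation, an 8% four-month return annualizes to about 26%, and a 4% return to about 12.5%, which is where the "mid double digit" expectation comes from.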
-
Andrea Bugin
Counterfactuals may open the door to a more rigorous approach. "This essay was stimulated by a request [..] to reconsider the process of portfolio manager evaluation. [..] The prompt was 'find a way to think about portfolio manager performance that differs from what we currently do.' That workstream was essentially the following: (1) compute the Sharpe ratios for each manager, (2) do some 𝑎𝑑 𝘩𝑜𝑐 inspection of 'bad trades' to learn about how they could have been made less problematic. I think this approach is common to a lot of funds, and I strongly believe that a more rigorous approach is necessary and that comparison of experienced performance to counterfactuals may open the door to such methods." - Graham Giller (2024), page 164 #assetmanagement #portfoliomanagement #finance #investing See also: https://lnkd.in/d9XnsgX2 https://lnkd.in/dHpaty-T https://lnkd.in/dGFij5bH
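Step (1) of the workstream the quote describes, computing a Sharpe ratio per manager, can be written in a few lines. A minimal sketch: the return series and the 252-day annualization convention are assumptions for illustration, not anything from the essay:

```python
from statistics import mean, stdev

def sharpe_ratio(returns, rf=0.0, periods_per_year=252):
    # Annualized Sharpe ratio: mean excess return over its sample
    # standard deviation, scaled by sqrt(periods per year).
    excess = [r - rf for r in returns]
    return mean(excess) / stdev(excess) * periods_per_year ** 0.5

# Hypothetical daily returns for one manager.
daily = [0.010, -0.005, 0.007, 0.002, -0.001]
s = sharpe_ratio(daily)
```

The counterfactual critique is that this single number compares a manager only to cash; comparing realized P&L to what alternative decisions would have earned requires reconstructing those paths explicitly.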
-
Gautier Marti
DSPy is a framework for algorithmically optimizing LM prompts and weights. It has been on my radar for a while, but I did not take the time to experiment with it until last weekend (nothing fancy, just adapting a RAG tutorial). Has anyone built a more sophisticated pipeline with this framework? I would be curious to hear about your use cases and how much it improved on your baseline. Let me know in the comments or by DM. #llm #dspy #nlp #ml #rag
-
Sugato Ray
🎉 KAN: Kolmogorov-Arnold Networks ⚡️TL;DR: While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. 👉 Paper discussion by first author Ziming Liu: https://lnkd.in/gYwKMHfw pip install pykan 👉 Abstract: Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs. ⚙️ Code: https://lnkd.in/ghpeJjyU 🦋 Docs: https://lnkd.in/gTBdBfYh 📄 Paper: https://lnkd.in/gcwniZuS 💡 Note: Check out the "Author's Note" section in the GitHub README. https://lnkd.in/g7F3HPr7 Also see alternate implementations of KAN on GitHub: 👉 efficient-kan: https://lnkd.in/gjGEEkpX 👉 fourier-kan: https://lnkd.in/gbjh5x5i #KAN #paper #code #docs #python #research #physics #pinn #ml
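The core idea (learnable univariate functions on edges, summed at nodes, instead of scalar weights plus fixed activations) can be illustrated with a toy forward pass. This is a pure-Python sketch using piecewise-linear tables as stand-ins for the paper's B-splines; it is not the pykan API, and the grids and value tables are illustrative assumptions:

```python
import bisect

def pw_linear(grid, values, x):
    # Evaluate a univariate piecewise-linear function: the "learnable
    # activation on an edge". Real KANs parametrize this as a spline.
    if x <= grid[0]:
        return values[0]
    if x >= grid[-1]:
        return values[-1]
    k = bisect.bisect_right(grid, x) - 1
    t = (x - grid[k]) / (grid[k + 1] - grid[k])
    return (1 - t) * values[k] + t * values[k + 1]

def kan_layer(xs, edge_tables, grid):
    # out[j] = sum_i phi_ij(x[i]): every "weight" is a function, not a scalar.
    n_out = len(edge_tables[0])
    return [sum(pw_linear(grid, edge_tables[i][j], xs[i]) for i in range(len(xs)))
            for j in range(n_out)]

grid = [-1.0, 0.0, 1.0]
# Two inputs, one output. The value table [-1, 0, 1] makes phi the identity
# on this grid; [1, 0, 1] makes phi(x) = |x|.
identity = [[-1.0, 0.0, 1.0]]
absval = [[1.0, 0.0, 1.0]]
```

Training would adjust the `values` tables (spline coefficients in the real model) by gradient descent, which is what makes the activations themselves learnable.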
-
Quant Insider
Hudson River Trading (HRT), a multi-asset-class quant trading firm, is hiring for the role of Quant Researcher (MFT) at a $175,000-$300,000 annual base salary plus an undisclosed bonus (check the job link at the bottom of the post). Here is an interview question asked in their quant interviews:
Romeo and Juliet have a date at a given time, and each will arrive at the meeting place with a delay between 0 and 1 hour, with all pairs of delays being equally likely. The first to arrive will wait for 15 minutes and will leave if the other has not yet arrived. What is the probability that they will meet?
Solution: Let the point (x, y) mean that Romeo arrives at time x and Juliet arrives at time y. The sample space is the unit square, since 0 ≤ x, y ≤ 1. Let A be the event that Romeo and Juliet meet; A is the subset of points (x, y) satisfying |x − y| ≤ 0.25 (15 minutes) and lying inside the unit square. If x ≥ y then x ≤ y + 0.25, i.e. y ≥ x − 0.25; if y ≥ x then y ≤ x + 0.25. Thus A consists of all points in the unit square in the band between the lines y = x − 0.25 and y = x + 0.25. Graphing this set and calculating its area (divided by the area of the unit square, which is 1) yields the answer 7/16.
For more such questions along with their solutions, check out Quant Insider Stack. Here's the link to the Quant Researcher job application: https://lnkd.in/gpc5sC6s Kickstart your quant interview prep with Quant Insider. Check out Quant Insider Stack: https://lnkd.in/gcfdUEfg A bundle of Interview Byte, the Quantopia Library, and the Quant Insider Project Handbook, with bonus resources. 'Interview Byte' contains 1000+ interview questions (https://lnkd.in/gkqcrrKf). The Quantopia Library is a goldmine for building your domain knowledge and technical skills (https://lnkd.in/geThBB4d). The Quant Insider Project Handbook has 10 industry-oriented projects based on challenges and competitions conducted by top HFTs and hedge funds (https://lnkd.in/gWBEn78U). Quant Insider Career Catalyst is your guide to interview prep tips, a preparation roadmap, and job application strategies (https://lnkd.in/gVhA4tNG). Check out our course on Machine Learning for Finance: https://lnkd.in/eyXnPRwz Use coupon code "EARLYBIRD20" for 20% off the MLFin course. For quant finance memes, follow us on Instagram: Quant Insider (https://lnkd.in/gfjc4hBu)
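The geometric answer above can be checked numerically. A quick Monte Carlo sketch (sampling the two arrival delays uniformly on the unit square) agrees with the 7/16 closed form; the sample size and seed are arbitrary choices:

```python
import random

def meet_probability(n_trials=200_000, wait=0.25, seed=7):
    # Estimate P(|x - y| <= wait) for x, y ~ Uniform(0, 1):
    # the fraction of delay pairs for which the two actually meet.
    rng = random.Random(seed)
    hits = sum(abs(rng.random() - rng.random()) <= wait for _ in range(n_trials))
    return hits / n_trials

# Closed form: the band |x - y| <= 0.25 leaves out two right triangles
# of side 0.75, so its area is 1 - (1 - 0.25)**2 = 7/16 = 0.4375.
closed_form = 1 - (1 - 0.25) ** 2
```

With 200,000 samples the standard error is about 0.001, so the estimate lands well within 0.01 of 7/16.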
-
sandro sabene
The issue with CPU/GPU/AI performance isn't just about reducing latencies, but effectively utilizing that time to prevent processor stalls. Current takes time to pass through conductors, inevitably leading to latencies. Therefore, the focus should be on keeping processors active during latency periods (RAM, cache, etc.) rather than solely reducing them. https://lnkd.in/ewx5Zxec #amd #intel #nvidia #qualcomm #microsoft #google #apple #arm #zen #pcore #ecore #chip #cpu #gpu #wafer #tsmc #tesla #rdna #cuda #core #software #hardware #gfx #shader #rasterization #AI #IA #power #energy #samsung #perfection #tesla #teslamotor #elonmusk #x86 #x64 #cache #ram #shaders #raytracing #rt #rasterization #engine #mediatek #dlss #fsr #raytracing #xess #overclock #undervolt #overvolt #overclocking #undervolting #computeunit #driver #partner #partnership #developer #assembly #c++ #coding #semiconductor #imgtec #PowerVR #Catapult #sdk #HBM #amiga #playstation #ps3 #ps4 #ps5 #xbox #sony #riscv #food #rawfood #vegan #vegans #corn #rice #potato #fruits #animal #oil #pollution #land #soil #gas #co2 #free #freedom #freefood #bike #ride #cycling
-
Justin Woddis
"The most important people in capital markets are the worst paid", someone said yesterday. And while it is a provocative statement, it highlights an important structural flaw in the markets. Ops is the Achilles heel, yet it gets the least investment. IR portals pop up, AI revolutionizes research, quant models have been around forever. If we were redesigning the markets from scratch today, they wouldn't look anything like they do now, and nowhere is that more apparent than in ops, where we have essentially automated the process of old gents with briefcases of bearer bonds crossing Wall Street to deliver securities. Ops is a factory. We need to build it so that:
a) It is a functioning conveyor belt. We've come reasonably far on this - short of redesigning how the markets work (why do we need internal books, custodians, depositories and transfer agents to record the same event and create opportunities for breaks?), there is only so much we can optimize.
b) The real value is in figuring out what to do when things fall off the conveyor belt - exception management. Here we need to build out collaboration tools across firms, and eventually cross-firm machine-to-machine query capability. 70% of the effort in ops is in dealing with people outside the firm, and we rely on the modern equivalent of a carrier pigeon (email) to do it. Firms like AccessFintech and Jaid are solving for this, but there is a long way to go.
c) There is a scarcity of analytics. This is because most ops systems are built for the practitioners and not for their managers - recon platforms, matching systems, etc., focus on solving the specific exception and not on trends. Which brokers most often fail settlement? Which data vendors get their corporate actions wrong the most? Do we spend more time on matching failures or recon failures? What is our total late-settle cost for the year? Funds score their brokers on best execution; less often do they score their custodians on their ops. (I once posited what an ops scorecard might look like to a large fund complex, and they said "our portfolio managers would give their right arms to get this sort of data".) Having this information bubble up in real time to the head of ops and the COO would be invaluable.
d) This also means there needs to be horizontal integration. It's all very well knowing how many matching breaks you get for each broker every day. It's much more useful knowing how many of those matching breaks translate into fails a day later. We need to look across the ops spectrum, not into functional silos.
e) We should move from re- to pro-activity. Firms like Smartsettle give some indication of which trades are likely to fail. We should be asking which brokers are most likely to fail based on trends, using the data we have from the past, and using machine learning to provide causes and solutions for issues.
To paraphrase: ops is like drains. You don't think about them till they go wrong, and then they stink.
-
Asif Razzaq
OpenPipe Introduces a New Family of 'Mixture of Agents' (MoA) Models Optimized for Generating Synthetic Training Data: Outperform GPT-4 at 1/25th the Cost. Quick read: https://lnkd.in/gfxdiDT5 OpenPipe's MoA models have excelled in rigorous benchmarking tests, achieving notable scores on LMSYS's Arena Hard Auto and AlpacaEval 2.0. The MoA model scored 84.8 on Arena Hard Auto and 68.4 on AlpacaEval 2.0, indicating its superior performance in generating high-quality synthetic data. These benchmarks are critical, as they represent challenging user queries that test the robustness and adaptability of AI models. The MoA model has been benchmarked against various GPT-4 variants in real-world scenarios. Results showed that OpenPipe's MoA model was preferred over GPT-4 in 59.5% of the tasks evaluated by Claude 3 Opus. This is a significant achievement, highlighting the model's effectiveness and practical applicability in the diverse tasks encountered by OpenPipe's customers. OpenPipe
Bob Elliott
Enjoyed talking with Tania Chen about how the CNY is under pressure from *capital flows* that see a negative forward return. Until depreciation expectations fade, capital flight will persist. The way to change that is not intervention, but raising the forward expected return through a lower price. https://lnkd.in/edcH-n6A
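The mechanism is simple arithmetic: if holders expect a fixed future value for the currency, a lower spot price mechanically raises the expected forward return. The numbers below are purely illustrative, not actual CNY levels:

```python
# Illustrative only: expected forward return on holding a currency,
# given an (assumed, made-up) expected future price.

def expected_forward_return(spot, expected_future):
    return expected_future / spot - 1.0

expected_future = 0.135  # assumed expected USD value of 1 CNY a year out

# At a high spot price the expected return is negative -> capital wants out.
r_high_spot = expected_forward_return(0.140, expected_future)

# After the price falls, the same expectation implies a positive return,
# which is what ultimately stops the flight -- intervention does not change it.
r_low_spot = expected_forward_return(0.130, expected_future)
```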
Benjamin Tarzwell
Meta just dropped Llama 3 (70B) last week. Here's what you need to know.

Trained on 15 trillion tokens of data. 😯

Llama 3 is leading the space in capabilities for math and reasoning, scoring an 85 on the MMLU benchmark (Massive Multitask Language Understanding), compared to Claude 3's impressive 86.9 score. With 25 trillion fewer training tokens, Llama 3 definitely has the potential to exceed Claude 3.

The most exciting development comes in the form of Meta's approach to AI: 𝗢𝗽𝗲𝗻 𝗦𝗼𝘂𝗿𝗰𝗲.

Llama 3 is being released to developers, along with a "Getting Started" guide and new trust/safety features:
- Llama Guard 2: a beefier version of their Llama Guard content classification system.
- Code Shield: a screener for potentially malicious or improper code outputs.
- CyberSec Eval 2: a system for protection against prompt injections and code interpreter abuse.

💡 Benefits of open sourcing Llama 3: Llama 3 is comparable to GPT-4 in terms of capabilities. Tech hardware companies like Nvidia and AMD are supporting the model, as are cloud services like AWS and Hugging Face. With the model publicly available to whoever wants to build with it, ✅ we'll quickly have many GPT-4-class models entering the market. The competition is on!

I'm really excited to announce Hank's and my collaboration with Christoph Rosenboom. Keep an eye out next Monday! #llama3 #AI #LargeLanguageModel
Austin Hughes
I see generative AI products changing application software in 2 key phases:

Phase 1) AI will see strong adoption because it's able to replace human labor at 95%+ fidelity/quality. This phase will look a lot like digital workers taking over repetitive, mundane jobs. We're already starting to see this, as LLMs thoughtfully woven together can handle complex operations jobs.

Phase 2) AI will power a "learning engine" on top of your business, which will learn best practices and share learnings to improve your current state. This AI-based system will learn from itself over time and share recommendations based on industry best practices.

Importantly, there's no network effect in Phase 1, but there will be in Phase 2. The company that sets itself up to collect the most usable data should be able to produce the best product experience.

What do you think?
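A toy sketch of the Phase 2 idea: pool usage events across accounts and surface the most commonly successful workflow as a best-practice recommendation. The event shape and workflow names here are invented for illustration; the point is that each account's data improves the recommendation every other account receives, which is where the network effect comes from.

```python
from collections import Counter

# Invented, anonymized usage events pooled across customer accounts.
events = [
    {"account": "acme",   "workflow": "auto-triage",   "succeeded": True},
    {"account": "acme",   "workflow": "manual-review", "succeeded": False},
    {"account": "globex", "workflow": "auto-triage",   "succeeded": True},
]

def recommend_best_practice(events):
    """Return the workflow with the most successful runs across all accounts."""
    wins = Counter(e["workflow"] for e in events if e["succeeded"])
    workflow, _count = wins.most_common(1)[0]
    return workflow

best = recommend_best_practice(events)
# More accounts -> more events -> better recommendations for everyone.
```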
Ivana Delevska
Is AMD still an AI play? With the stock down 35%+ from its peak, investors are starting to doubt it. Among the pushbacks are valuation and results. Here are my thoughts:

1) Valuation: AMD is trading near the high end of its valuation range (despite the recent pullback), leading investors to conclude that the stock is expensive. Why does the valuation make sense?
- Earnings are at a trough: despite the recovery in Data Center, AMD’s other businesses, such as Gaming and Embedded chips, are still facing cyclical headwinds.
- GPU demand is not yet reflected in earnings, as the company just entered the market. In its first full year of GPU production (’24), AMD is expected to generate $4bn in GPU revenues (up from guidance of $3.5bn and $2.5bn previously). Applying LTM or even NTM multiples does not capture the long-term earnings power from Data Center GPUs.

The stock is tricky to value. An interesting framework is a sum-of-the-parts, splitting out the legacy business and the incremental GPU opportunity:
- The legacy business is worth roughly where the stock is trading today: mid-cycle EBITDA ($11bn) x mid-cycle multiple (22x) would imply a share price of about $150/share.
- The GPU opportunity could add $95-190/share of upside (based on 5%-10% penetration of the market), assuming similar margins and multiple.

Why is everyone running for the hills?

2) Results: Investors weren’t impressed with the company’s earnings report.
- Data Center (GPUs) and Client (PCs) were strong, but the GPU base is still too small to make a difference. Embedded and Gaming (both good businesses) are going through a downturn.
- The company raised the GPU guide to $4bn, but some were hoping for more (~$5bn+). While 2Q24 is supply-constrained, management noted that the rest of the year is not, implying upside as commitments roll in. But it takes time to get spec-ed in.

While it is too early to know the ultimate size of the market and AMD’s ability to compete, the company has been an innovator in the space and has differentiated technology (chiplet design). AMD CEO Lisa Su noted that the company expects to close the performance gap vs. NVIDIA on model training with its new product introductions. Stay tuned for Computex, June 4-7!

Loved sharing my thoughts with Oliver Renick from Schwab Network.

*Not investment advice. Do your own research. #amd #semiconductors #artificialintelligence #technology
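The sum-of-the-parts arithmetic above can be checked back-of-the-envelope. The EBITDA, multiple, and per-share GPU upside range come from the post; the share count (~1.61bn AMD shares outstanding) is my assumption, needed to convert enterprise value into a per-share figure:

```python
SHARES_OUT = 1.61e9  # assumed share count, not from the post

def legacy_value_per_share(mid_cycle_ebitda, mid_cycle_multiple,
                           shares=SHARES_OUT):
    """Per-share value of the legacy business at mid-cycle economics."""
    return mid_cycle_ebitda * mid_cycle_multiple / shares

# $11bn mid-cycle EBITDA x 22x multiple -> roughly $150/share,
# which is the post's "legacy business" leg of the sum-of-the-parts.
legacy = legacy_value_per_share(11e9, 22)

# The post then layers the GPU opportunity directly on top:
# $95-190/share of upside at 5%-10% market penetration.
low_case, high_case = legacy + 95, legacy + 190
```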