Thank you very much. I feel like I didn't go to class the day "dual-use design" was explained, because it's the first time I've seen it in the context of SoC design 😄

Dual-use design just means the components Apple designs can be used for multiple applications. It’s really not specific to the Ultra series.
The bigger point of what Apple might start doing with the M5 series is the use of SoIC packaging technology. Currently Apple uses monolithic SoC dies, meaning a variety of system components are crammed into a single square of silicon (CPU, GPU, NPU, memory controller, Secure Enclave, display controllers, USB/Thunderbolt controllers). For smaller chips like the base M this works fine, but as you scale up, the SoC dies get quite large. The Max is about as large as it’s practical to make a single die, which is why the Ultra is basically two Max chips stitched together. But this isn’t ideal, because you’re wasting silicon on extra controllers you might not need (you don’t need two Secure Enclaves, for example).
SoIC means Apple can fabricate all of those SoC components separately and merge them into a smaller, more efficient package that performs better than a monolithic die. It also means Apple can customize its chips more easily rather than having to tape out an entirely new die. This is where the “dual-use design” comes in. Apple can take CPU, GPU, and NPU tile designs from its consumer chips and create custom packages that are ideal for AI servers (for example, lots of CPU and NPU cores but skimping on the GPU). Because everything is modular, there’s more flexibility when it comes to custom use cases.
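To make the mix-and-match idea concrete, here's a minimal sketch of how a tile-based package could be described. All tile names, die areas, and process nodes below are invented for illustration and don't reflect Apple's actual tile lineup:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    """One separately fabricated chiplet bonded into a package."""
    name: str
    area_mm2: float   # die area of this tile
    node: str         # process node the tile is fabbed on

def package_area(tiles: list) -> float:
    """Total silicon area of a package assembled from individual tiles."""
    return sum(t.area_mm2 for t in tiles)

# All names, areas and nodes are made up purely for illustration.
cpu    = Tile("CPU cluster",                  60.0, "N3")
gpu    = Tile("GPU cluster",                  90.0, "N3")
npu    = Tile("Neural Engine",                25.0, "N3")
io     = Tile("USB/Thunderbolt + display IO", 30.0, "N6")
secure = Tile("Secure Enclave",                5.0, "N6")

# A consumer-style package: one of everything.
consumer = [cpu, gpu, npu, io, secure]

# A hypothetical AI-server package: extra CPU/NPU tiles, no big GPU,
# and still only one Secure Enclave and one IO tile.
server = [cpu, cpu, npu, npu, npu, io, secure]

print(f"consumer package: {package_area(consumer):.0f} mm^2")
print(f"server package:   {package_area(server):.0f} mm^2")
```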
I'm gonna need to comb through my YouTube history to find the video; an AI guy ran benchmarks with an M1 Pro, M3 Pro/Max and an Nvidia card. The scenario is essentially anywhere a GPU would be used, because the main strength over CPUs is their higher memory speed and bandwidth. That's why Nvidia does the same with their current-gen gaming GPUs: reducing/holding back memory bus width and VRAM size.

In what benchmark or scenario does an M1 Pro beat an M3 Pro? And I'm on your side regarding my dislike of how they handicapped the M3 Pro in hopes of selling more Max chips!
The Max chips (M1, M2, and the 12 P-core M3) have 400 GB/s. The 10 P-core M3 Max is reduced to 300 GB/s. The M2 Pro was 200, the M3 Pro has 150. For the base chips, the M1 had 66.67, the M2 and M3 100, and the M4 has 120 GB/s.

Hi Cervisia - are there stats on this? I would like to understand the memory bandwidth issue better because I need to upgrade my MBP from my current Intel chip to an Apple M-series chip. I don't want an AI chip. But I also don't want a chip that has been intentionally degraded in terms of memory bandwidth just so that Apple can reduce production costs. (I don't love Apple so much these days. Feels like they are cheap, penny-pinching Ebenezer Scrooges. I miss the Apple that made it their business to constantly surprise and delight me.) I want to know the inflection point that I should look for to help me know when to upgrade. If the M2 had the best memory bandwidth, what was that throughput? I may wait for the M4 Pro / M4 Max / M5 / M5 Pro / MX to match or exceed that M2 Pro data point.
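For what it's worth, those round bandwidth figures fall straight out of memory bus width times LPDDR data rate. A quick sketch; the bus widths and data rates below are my own assumptions from public spec listings rather than anything stated in this thread:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mts: int) -> float:
    """Peak bandwidth in GB/s = bytes per transfer * transfers per second."""
    return bus_width_bits / 8 * data_rate_mts / 1000

# Bus widths and LPDDR data rates are assumptions, not from the post;
# they reproduce the round numbers quoted above.
configs = {
    "M2 / M3 (128-bit LPDDR5-6400)":  (128, 6400),   # ~100 GB/s
    "M4      (128-bit LPDDR5X-7500)": (128, 7500),   #  120 GB/s
    "M3 Pro  (192-bit LPDDR5-6400)":  (192, 6400),   # ~150 GB/s
    "M2 Pro  (256-bit LPDDR5-6400)":  (256, 6400),   # ~200 GB/s
    "M3 Max  (512-bit LPDDR5-6400)":  (512, 6400),   # ~400 GB/s
}

for name, (width, rate) in configs.items():
    print(f"{name}: {peak_bandwidth_gbs(width, rate):.1f} GB/s")
```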
So will base-model M5 Macs finally come with adequate amounts of RAM, or will the $1299 M5 iMac still have less memory than a $799 Android phone?
Apple touted the high bandwidth as something amazing, a great selling point, then seemed to realise that 99% of people didn't really utilise it. A video I saw ages ago showed someone trying to utilise it all on a Max and a Pro and failing. https://www.iot-now.com/2024/02/07/... (linking to its memory bandwidth comparison in Table 1). Here the research seems to suggest that to utilise all of the TOPS that Apple provide you probably don't need more bandwidth than a regular M4 provides (if I'm reading it right). If Apple trebled the TOPS numbers then M2 Pro levels of bandwidth would be handy.

I'm gonna need to comb through my YouTube history to find the video; an AI guy ran benchmarks with an M1 Pro, M3 Pro/Max and an Nvidia card. The scenario is essentially anywhere a GPU would be used, because the main strength over CPUs is their higher memory speed and bandwidth. That's why Nvidia does the same with their current-gen gaming GPUs: reducing/holding back memory bus width and VRAM size.
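One place where bandwidth genuinely is the ceiling is local LLM inference: each generated token has to stream roughly the whole set of (quantised) weights through memory once, so tokens per second is bounded by bandwidth divided by model size. A back-of-envelope sketch; the model size here is an illustrative assumption, not something from the linked article:

```python
def tokens_per_second_bound(bandwidth_gbs: float, model_size_gb: float) -> float:
    """Crude upper bound: each generated token streams all weights from RAM once."""
    return bandwidth_gbs / model_size_gb

# Illustrative assumption: an ~8B-parameter model quantised to ~4 bits
# occupies roughly 4.5 GB.
model_gb = 4.5

for label, bw in [("M4 (120 GB/s)", 120),
                  ("M3 Pro (150 GB/s)", 150),
                  ("M2 Pro (200 GB/s)", 200),
                  ("Max (400 GB/s)", 400)]:
    print(f"{label}: ~{tokens_per_second_bound(bw, model_gb):.0f} tokens/s ceiling")
```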
I’m sure they’d be delighted if everyone maxed out every computer.

But why would Apple want you to buy a Studio if they’re making so much profit off selling you memory and SSD? Are they making even more profit with Studios? I’d be interested in seeing what the numbers are.
No, wait until a year later… lucky No. 7!!!

Well all of this is interesting, but I’m not upgrading until they get in the M6 family so I still have a ways until this stuff will truly peak my interest.
It will be interesting to compare how Apple is doing with ARM processing in its cloud servers versus Google, which seems to be losing its green advantage due to excessive AI usage. This advanced SoIC packaging technology for its M5 chips in the next year will likely further illustrate which company has better technology for tackling AI's energy requirements in cloud servers.

Currently, Apple's AI cloud servers are believed to be running on multiple connected M2 Ultra chips, which were originally designed solely for desktop Macs. Whenever the M5 is adopted, its advanced dual-use design is believed to be a sign of Apple future-proofing its plan to vertically integrate its supply chain for AI functionality across computers, cloud servers, and software.
As Google has rushed to incorporate artificial intelligence into its core products — with sometimes less-than-stellar results — a problem has been brewing behind the scenes: the systems needed to power its AI tools have vastly increased the company’s greenhouse gas emissions.
AI systems need lots of computers to make them work. The data centers needed to run them, essentially warehouses full of powerful computing equipment, suck up tons of energy to process data and manage the heat all of those computers produce.
The end result has been that Google’s greenhouse gas emissions have soared 48% since 2019, according to the tech giant’s annual environment report. The tech giant blamed that growth mainly on “increased data center energy consumption and supply chain emissions.”
Now, Google is calling its goal to reach net-zero emissions by 2030 “extremely ambitious,” and said the pledge is likely to be affected by “the uncertainty around the future environmental impact of AI, which is complex and difficult to predict.” In other words: a sustainability push by the company — which once included the slogan “don’t be evil” in its code of conduct — has gotten more complicated thanks to AI.
Thanks so much for this!

The Max chips (M1, M2, and the 12 P-core M3) have 400 GB/s. The 10 P-core M3 Max is reduced to 300 GB/s. The M2 Pro was 200, the M3 Pro has 150. For the base chips, the M1 had 66.67, the M2 and M3 100, and the M4 has 120 GB/s.
If you look at these numbers, the M4's 120 GB/s is probably what Apple considers the necessary minimum for Apple Intelligence. The M3 Pro barely has more.
What you need depends on what you do. I dug into this because I do astrophysical simulations over very long timescales: I repeat the same simple calculations on a huge set of numbers a lot of times, which means most of the time the CPU is just waiting for RAM. For me, RAM speed is king. For you, it could be completely irrelevant. For most people it's not really relevant, which is why they get away with pushing people who need it to more expensive products (or a PC).
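A crude way to see what "the CPU is just waiting for RAM" looks like: a loop that does almost no arithmetic per byte is limited by memory bandwidth rather than by compute. A minimal NumPy sketch (the array size is arbitrary and this is nowhere near a proper benchmark):

```python
import time
import numpy as np

n = 100_000_000               # ~0.8 GB per float64 array; shrink if RAM is tight
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a + b                     # trivial arithmetic: one add per pair of elements
elapsed = time.perf_counter() - start

# Three arrays of 8-byte floats move through memory: read a, read b, write c.
bytes_moved = 3 * n * 8
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```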
Anyone who thinks "AI does the creativity for you" is part of a serious problem.

I think Apple are at a crossroads here. On the one hand, they love to appeal to ‘creators’, but on the other, AI does the creativity for you. Be interesting to see how they pitch that in the future.
And I am just blown away at the RAM & SSD upgrades we can get today for only one THOUSAND dollars. In the past we routinely paid more than that for RAM & SSD upgrades measured in MB.

I'm still just blown away they are charging one THOUSAND dollars just for your RAM & SSD upgrades there
(At retail prices, that spec goes from $1699 to $2699, just for the RAM/SSD upgrades)
Isn’t this what Intel is doing with their “tile” design? It also lets them fabricate the less important portions of the chip on cheaper nodes like 6nm.

It's a bit more sophisticated than that. Right now an SoC has all of its components on a monolithic die: the CPU and GPU cores, the NPU, memory controllers, built-in USB/Thunderbolt controllers, Secure Enclave, etc. What SoIC allows is for all of those SoC components to be split out and fabricated separately, but then integrated back with each other with resulting performance that's actually better than a monolithic SoC die.
So instead of an Ultra being two Max chips stitched together, it could be a large CPU tile joined to a large GPU tile with smaller controller-component tiles added on. All of the M series chips would be built this way, not just the Ultra. This would have the side effect of increasing yields by making smaller and less complex chiplet dies, and it would allow Apple to focus limited bleeding-edge process node capacity on important sections like the CPU and GPU tiles while having controller tiles manufactured on older but still efficient process nodes.
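The yield side effect can be made concrete with the usual simple model, where the probability a die is defect-free drops off exponentially with its area. The defect density and die areas below are purely illustrative numbers, not TSMC data:

```python
import math

def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson yield model: P(die has no defect) = exp(-D0 * A)."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)  # /100 converts mm^2 to cm^2

d0 = 0.1  # illustrative defect density in defects per cm^2, not a real figure

monolithic = 500.0                    # one big Max-class die (made-up area)
tiles = [150.0, 200.0, 80.0, 70.0]    # the same logic split into four smaller tiles

print(f"monolithic {monolithic:.0f} mm^2 die yield: {die_yield(monolithic, d0):.1%}")

# Each tile is tested and binned on its own, so a single defect scraps only
# one small tile instead of the whole 500 mm^2 of silicon.
for area in tiles:
    print(f"  {area:.0f} mm^2 tile yield: {die_yield(area, d0):.1%}")
```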
I think so. Not a threat to overall performance, but to performance per dollar.

Okay, so does this challenge Nvidia's H100 and Blackwell chips? Asking for my AAPL shares.
Microsoft, Amazon, Google, Tesla: everyone has been developing their own server chips for a while now. Apple has been the biggest holdout of the big guns so far, especially for AI.

Is Apple getting back into the server market?
My AI says “pique“ ¯\_(ツ)_/¯

Well all of this is interesting, but I’m not upgrading until they get in the M6 family so I still have a ways until this stuff will truly peak my interest.
What's interesting is that teardowns of the M4 iPad Pros show they actually have 12GB of RAM, not the 8GB advertised (the RAM size isn't advertised prominently anyway), and the 1TB+ iPads offer 16GB of RAM, not 12GB.
You know it won’t. At this point we’ll be lucky to start at 12GB. But I hope it does start at 16GB, because my intention is to upgrade to the M5 iPad Pro 11” and I don’t want to have to buy a lot of storage to get a decent memory spec. Not that I do a lot on my iPad because iPadOS, but on my M1, which has 16GB, it’s nice to have because apps aren’t always closing out in the background. I would be fine with getting 16GB again, and I’m tired of the large 12.9” size, so whatever is cheap would be great. I’m tired of trying to use my iPad as a computer and just want a great display and sufficient memory.

Also hoping by this time 16GB integrated RAM will be the standard RAM size for the Mac.
I saw some CEO guy say in an interview that in 5 years AI will do 90% of your job so you can chill. Dude, when an AI can do 51% of my job they've already fired my ass. I will be chilling alright.

Perhaps the M5 will reveal Apple’s plans for the foreseeable future. Data centers and consumer products that do all your work for you. I’d prefer to do my own work but I’m not in charge.