32 at least.

Here's hoping the M5 will be based on TSMC's 2nm process with its gate-all-around transistor structure.
Also hoping that by then 16GB of integrated RAM will be the standard size for the Mac.
Yes, I'm not doubting that Apple would be first in line for N2. I'm just not sure a) whether N2 will really be ready for mass production next year, and b) if it is, whether Apple will use it for the M chips or only for the iPhone (with M chips following in 2026).

Apple is a direct development partner to TSMC. They literally are the only reason N3 (by way of N3B) was feasible to even produce after everyone else pulled out. Apple's capital investment in TSMC's lines makes it almost a certainty that they'll be first in line for each new node, because the nodes themselves aren't economically viable without massive upfront capital investment.
I’m surprised they don’t.

In 2007, Apple Computer, Inc. became Apple, Inc.
Yet today, Apple is more of a “computer” company than they ever were before the name change. Today they design their own CPUs/GPUs, OS, programming languages, developer tools, and now cloud AI servers.
Two big things they do not do themselves are R&D for new chip fabrication technology and manufacturing/assembly. They seem pretty steadfast in their avoidance of manufacturing, but I wonder if they might get into fab process R&D. It might be appealing to them to own some special fab sauce that nobody else has.
It's going to pain Apple to spend the extra ~$20 required per Mac to double the base RAM and storage on every device... the fact they will no longer be able to get away with pathetic amounts of both is perhaps the greatest thing about AI on Macs 😅
Okay, so does this challenge Nvidia's H100 and Blackwell chips?
Has anybody confirmed whether these iPads actually show 12GB, or if they've been knobbled to 8GB?

Supposedly 6GB modules are currently cheaper than 4GB modules, so it could be cheaper for Apple to put 12 in the iPad and call it 8 (giving them the option to drop it back down to 8 if the pricing changes, hence why they're not advertising it as 12).
If I recall correctly, the 1TB+ iPads offer 16GB of RAM, not 12GB. And if they're planning to drop the 8GB Mac RAM tier and offer 12GB and 24GB Macs, I don't really see a niche for a 16GB SoC. So the "temporary freebie thanks to price fluctuations" sounds more plausible.

Or it could be that Apple is building M4 chips with 12GB by default and that 12GB will be the new starting tier for base Macs. While not the jump to 16GB most users are hoping for, it would at least get us off of 8.
Yeah, I think even the cheapest of the cheapest machines should be shipping with 16GB minimum today. The Pros should start at either the odd 24GB size or, preferably, 32GB. Even 12GB on the base models is just OK, but not really enough of an increase: if the AI stuff eats 4GB, we're still effectively left with 8GB for everything else, which is exactly where we are now, and that isn't enough. I've been on the "8GB to start is dumb" train for the last three years or so. I didn't even like it on the M1 for longevity's sake, and now it's just absurd.
Apple can probably get away with that on the iPad - where the RAM size isn't advertised prominently and they don't offer BTO RAM upgrades separate from storage bumps. I think it would go down like a lead balloon if they tried it on a Mac and then offered a $200-for-8GB upgrade.
If I recall correctly, the 1TB+ iPads offer 16GB of RAM, not 12GB. And if they're planning to drop the 8GB Mac RAM tier and offer 12GB and 24GB Macs, I don't really see a niche for a 16GB SoC. So the "temporary freebie thanks to price fluctuations" sounds more plausible.
Still, it's high time that the Macs went to 16GB minimum (except maybe for the cheapest MBA, which - when M4 comes out - will probably be the current 8GB M3 model anyway) especially since the new Copilot+ PCs (with ARM-based SoC processors) start at 16GB.
It appears to me that AAPL's strategy is to avoid owning capital assets and hiring people in its supply chain operations where it can. This gives it flexibility and mitigates risk by keeping hundreds of thousands of employees in jurisdictions across the globe, and hundreds of billions of dollars of assets that can become obsolete, off the books.
Not really. My M2 Pro with 16GB of RAM is running a local copy of Meta's open-source Llama 3 8B at a very decent speed of 28 tokens per second, and that is while also running the usual Mac desktop, Chrome browser, and some other stuff.

More like: just like the video streaming revolution in the 2000s showed up just how incapable desktop processors were, the AI revolution is doing the same, and the whole industry is caught with its pants down, Apple included.
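For context, the kind of measurement behind a "28 tokens per second" figure looks roughly like this. This is a minimal sketch assuming llama-cpp-python (which uses Metal on Apple Silicon) and an already-downloaded quantized GGUF of Llama 3 8B; the file name and settings below are placeholders, not a recommendation.

```python
# Rough sketch: measure local Llama 3 8B generation speed on a Mac.
# Assumes llama-cpp-python is installed (pip install llama-cpp-python)
# and a quantized GGUF of Llama 3 8B has already been downloaded;
# the file name below is an example path, not a specific model build.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU (Metal)
    verbose=False,
)

prompt = "Explain unified memory on Apple Silicon in two sentences."
start = time.perf_counter()
out = llm(prompt, max_tokens=200)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(out["choices"][0]["text"])
print(f"{generated / elapsed:.1f} tokens/sec")
```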
I think Apple are at a crossroads here. On the one hand, they love to appeal to ‘creators’ but on the other AI does the creativity for you. Be interesting to see how they pitch that in the future.
Light moves about one foot per nanosecond, give or take a little. Electronic pulses in wire move a bit slower than light in a vacuum. The problem is not the delay but getting the same pulse to every place it is needed at the same time, so the parts stay in sync. It is this "clock skew" that they are trying to fight.

So, because the chips are stacked on top of each other rather than on a long roadway to the edge of the chip die, they can communicate quicker? I know when I was working on the drum machine there was a clock signal line (wire) that went all the way around the edge, a huge, long distance, and it actually did cause a problem! The board's performance was flaky. And it seems silly to even think that, because electrical signals travel at the speed of light, right? But… Distance is distance. Trust me on this one. 🍸😹🙀😹
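To put rough numbers on the "distance is distance" point, here's a back-of-envelope calculation; the die size, on-chip signal speed, and clock rate are illustrative assumptions, not figures for any particular Apple chip.

```python
# Back-of-envelope: how far a signal travels in one clock period vs. the die size.
# All numbers are illustrative assumptions.
c = 3.0e8                 # speed of light in vacuum, m/s (~1 ft/ns)
signal_speed = 0.5 * c    # on-chip signals are often roughly half of c or slower
clock_hz = 4.0e9          # assume a ~4 GHz clock
die_span_m = 0.020        # assume ~20 mm across the die

period_s = 1.0 / clock_hz
reach_per_cycle_m = signal_speed * period_s
crossing_time_s = die_span_m / signal_speed

print(f"Clock period: {period_s * 1e12:.0f} ps")
print(f"Distance covered per cycle: {reach_per_cycle_m * 1000:.1f} mm")
print(f"Time to cross the die: {crossing_time_s * 1e12:.0f} ps "
      f"(~{crossing_time_s / period_s:.1f} clock cycles)")
```

With these assumptions, just crossing the die eats about half a clock cycle, which is why keeping every corner of the chip in sync (and why shortening paths by stacking dies) matters.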
I could be wrong, but I don't think we'll see Apple release any kind of AI that would steal work from legitimate creatives.
I'd like to see more articles discussing what Apple uses in their server farms. It's something that isn't discussed enough.

Currently, Apple's AI cloud servers are believed to be running on multiple connected M2 Ultra chips, which were originally designed solely for desktop Macs. Whenever the M5 is adopted, its advanced dual-use design is believed to be a sign of Apple future-proofing its plan to vertically integrate its supply chain for AI functionality across computers, cloud servers, and software.
Right now the AI is used for stuff that is mostly only entertaining. But the intent is to use it for all the normal stuff you do with a computer, like: "Siri, I want to reply to Susan's email about the 34th Street job. Tell her, 'Let's go with her option #2.'"

If they use Gen AI the way they're doing so far I think it's fine - it's mainly for fun things. Creating Emoji I thought was a genius idea, to be fair.
I'm not sure they'd ever have the very best image generating AI models either.
But why would Apple want you to buy a Studio if they're making so much profit off selling you memory and SSD? Are they making even more profit with Studios? I'd be interested in seeing what the numbers are.

For that price one might as well buy the Studio. Which, of course, is the whole point of Apple's memory and SSD upgrade pricing scheme.
Your M2 Pro is better equipped to run AI tasks than an M4, or even the M3 Pro, because it has higher memory bandwidth. Remember when people were making up excuses for cutting back on specs, saying it doesn't matter for anything average users would do? Well, not true anymore. Reducing memory bandwidth - with the 10-core M3 Max too - was probably a deliberate step to make the cheaper chips worse at AI tasks. Even the M1 Pro beats the M3 Pro because of the bandwidth, despite the M3 Pro having much better GPU cores. Don't upgrade.

So I have to disagree and say Apple did quite well with cost vs performance on AI tasks. The M4 is likely much better than my M2 for this kind of work. I'd upgrade, except that it is my limited brainpower holding back the development of a robotics project; it is not yet the M2 hardware that is the bottleneck.
In what benchmark or scenario does an M1 Pro beat an M3 Pro? And I'm on your side regarding my dislike of how they handicapped the M3 Pro in hopes of selling more Max chips!
Your M2 Pro is better equipped to run AI tasks than an M4, or even the M3 Pro, because it has higher memory bandwidth. Remember when people were making up excuses for cutting back on specs, saying it doesn't matter for anything average users would do? Well, not true anymore. Reducing memory bandwidth - with the 10-core M3 Max too - was probably a deliberate step to make the cheaper chips worse at AI tasks. Even the M1 Pro beats the M3 Pro because of the bandwidth, despite the M3 Pro having much better GPU cores. Don't upgrade.
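A rough way to see the bandwidth argument: generating each new token streams essentially all of the model's weights through memory once, so memory bandwidth divided by the model's in-memory size gives an approximate ceiling on tokens per second. The sketch below uses the commonly quoted bandwidth specs for these chips and assumes a roughly 4-bit-quantized 8B-parameter model, so treat the output as an estimate, not a benchmark.

```python
# Rough upper bound on tokens/sec for local LLM generation:
# every new token reads (approximately) all model weights once, so
# tokens/sec <= memory bandwidth / model size in memory.
# Bandwidth numbers are the commonly quoted specs; model size assumes
# an ~8B-parameter model at about 4 bits per weight plus some overhead.
model_bytes = 8e9 * 0.5 + 1e9   # ~4 GB of weights + ~1 GB overhead (assumption)

bandwidth_gbps = {
    "M1 Pro": 200,
    "M2 Pro": 200,
    "M3 Pro": 150,
    "M4 (base)": 120,
}

for chip, bw in bandwidth_gbps.items():
    ceiling = bw * 1e9 / model_bytes
    print(f"{chip}: roughly {ceiling:.0f} tokens/sec ceiling")
```

On those assumptions the M1 Pro's ceiling does come out higher than the M3 Pro's, and the 28 tokens/sec reported above for an M2 Pro sits comfortably under its estimated ceiling.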
Have you tried on-device AI (LLMs)? It's actually quite powerful.

I honestly don't believe in on-device AI, at least not in the next 5-6 years. Sure, there will be some simple tasks that can be processed locally, and the hybrid model will eventually, slowly, move more processing locally, but a hybrid model where most of the AI processing is done on the server side is probably here for the foreseeable future.
Hardware advancements have been slow over the last 5-8 years, but we might be, out of necessity, on the verge of seeing some really big advancements for consumers in the next 2-4 years.
It’s great to see Apple is working on custom server chips too.
Right now the AI is used for stuff that is mostly only entertaining. But the intent is to use it for all the normal stuff you do with a computer, like: "Siri, I want to reply to Susan's email about the 34th Street job. Tell her, 'Let's go with her option #2.'"
The AI then figures out which email this is, reads it to find option #2, and then composes a reply and puts it on screen with both "OK" and "Edit" buttons. Not entertaining, just basic business. This is what Apple is working on now.
If AI is done right, it will not seem like AI; it will work under the hood and do what you want. One of the goals I'd work toward would be a robot you can ask, "Is my UPS package here yet? Bring it inside when you can." The robot's AI would look for delivery emails or simply watch the outdoor security camera (Ring?), and when the package is there, it opens the door, brings it inside, and shuts the door. That's not 100% brilliant, but still way more advanced than anything we have now. The ability to do simple-minded tasks like this would be revolutionary: folding laundry, placing dishes in the dishwasher, taking out the garbage, taking groceries out of the bag and putting them away, cleaning countertops, and so on. No great feat of intellect, but still very useful. I figure people would pay the price of a car ($35K?) for this. IMO, the "AI" should disappear into the product.
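Purely to make the email example above concrete, here's a hypothetical sketch of the plumbing it implies; every function and field name is invented for illustration, and nothing here reflects Apple's actual APIs or how Apple Intelligence is implemented.

```python
# Hypothetical sketch of the "reply to Susan's email" flow described above.
# All names and data shapes are invented; draft_reply stands in for whatever
# on-device model call would actually write the text.

def draft_reply(context: str, instruction: str) -> str:
    # Stand-in for an on-device LLM: given the email body and the user's
    # instruction, return a drafted reply.
    return "Hi Susan,\n\nLet's go with your option #2.\n"

def handle_request(mailbox: list[dict], contact: str, topic: str, instruction: str) -> dict:
    # 1. Figure out which email the user means (most recent match from that contact).
    message = max(
        (m for m in mailbox if m["from"] == contact and topic.lower() in m["subject"].lower()),
        key=lambda m: m["date"],
    )
    # 2-3. Read it and compose a reply that carries out the instruction.
    draft = draft_reply(context=message["body"], instruction=instruction)
    # 4. Put the draft on screen with OK / Edit rather than sending silently.
    return {
        "to": contact,
        "subject": "Re: " + message["subject"],
        "draft": draft,
        "actions": ["OK", "Edit"],
    }
```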
Define "market"...

Is Apple getting back into the server market?