Here’s where I have to talk about myself personally a little bit, which is something I don’t like doing.
I’m not a computer scientist or engineer. I’ve never worked in the tech industry, be it in software or hardware. However, I do have a few years of experience with both in the defense industry, contributing to the development of, and later supervising, the transition from second-generation solid-state electronics to the current system of modular, multi-modal, integrated computing systems on airborne weapons platforms. I can’t talk specifics, but I witnessed computing evolve over the past 30 years: from single-task, power-hungry, hot-running processors bussed together in a small network so that those large, ruggedized computing platforms could output their information onto a single display, to “multi-mission” computing platforms that eliminated the bus through miniaturization, greater efficiency, improved computing power and the use of non-volatile, solid-state memory. I watched cockpits convert from dials to glass, monochrome CRTs become full-color LCDs, and multiple systems weighing over half a ton combined be replaced by a single housing containing multiple systems integrated onto individual daughter cards.
I was also a hobbyist who rode the second wave of the internet, when the World Wide Web became a thing and the internet experience became a little less text and a little more graphical. I remember buying the first discrete 3D graphics card and connecting it inline, externally, with a VGA cable. I began with an Intel i486DX, moved on to a Pentium Pro, then a Pentium II on a processor card with inline L2 cache. I then watched that cache get integrated onto the die for the Pentium III, with skyrocketing prices that led to the first rise of AMD as a serious contender with the Athlon, and to Intel countering with the Celeron, foreshadowing the coming age of cheap computing. I had 3dfx Voodoo and Nvidia GeForce graphics cards, first for PCI and then for the newly adopted AGP bus, only to see AGP die when it was replaced by PCI-e. That wasn’t new to me, as I had watched ISA slots go from “mandatory” to dead in the decade prior.
I also witnessed the supremacy of x86, which eventually took over from the RISC PowerPC processors in Apple computers after a 15+ year run, and then the transition to x64, i.e. 64-bit addressing, so CISC processors could address more than 4GB of RAM. Speaking of RAM, I watched the ubiquitous standards evolve from EDO DRAM to SDRAM, through the introduction of Rambus DRAM and DDR SDRAM, with the former dying out in the early 2000s while the latter is now on its 5th generation, and on to GDDR and other high-speed graphics RAM. I’ll come back to this in a sec.
Don’t even get me started on serial, SCSI and parallel ports; I’m just glad Apple’s enthusiasm for USB pushed the industry to adopt it and make it ubiquitous. Related to that, the PATA ribbon cables for hard disks were supplanted by SATA, which is itself being made scarce by the consumer-grade SSDs that appeared just over a decade ago and now run mostly on PCI-e, having outgrown SATA’s 6Gbit/s limit.
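For a rough sense of why that 6Gbit/s ceiling matters, here’s a back-of-the-envelope sketch in Python. It uses only the published line rates and encoding overheads (8b/10b for SATA III, 128b/130b for PCIe 3.0); real drives land somewhat below these theoretical numbers, and the variable names are just mine.

    # Theoretical interface ceilings only; actual SSD throughput is lower.
    SATA3_LINE_RATE_GBPS = 6.0      # SATA III line rate, gigabits per second
    SATA3_EFFICIENCY = 8 / 10       # 8b/10b encoding: 8 data bits per 10 line bits
    sata3_mb_s = SATA3_LINE_RATE_GBPS * SATA3_EFFICIENCY * 1000 / 8
    print(f"SATA III ceiling: ~{sata3_mb_s:.0f} MB/s")        # ~600 MB/s

    PCIE3_LANE_RATE_GTPS = 8.0      # PCIe 3.0 transfer rate per lane, GT/s
    PCIE3_EFFICIENCY = 128 / 130    # 128b/130b encoding overhead
    LANES = 4                       # a typical NVMe SSD link width
    pcie3_x4_mb_s = PCIE3_LANE_RATE_GTPS * PCIE3_EFFICIENCY * LANES * 1000 / 8
    print(f"PCIe 3.0 x4 ceiling: ~{pcie3_x4_mb_s:.0f} MB/s")  # ~3938 MB/s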
I still remember the first time I saw an SD card, back then known widely as “Secure Digital.” It shook my world to see a whopping 8 MEGABYTES on a tiny card. My first MP3 player was a Casio WMP-1 that looked like a G-Shock wristwatch but could store and play up to 16MB of music. When it eventually died, I disassembled it to discover it used a non-removable SD card internally to store the music files.
All of this is to say that I’ve been around and witnessed these evolutions in computing hardware first-hand. Not just that, but I was a hobbyist who owned much of it, including every bit I’ve mentioned so far. I was in my middle teens when I first became truly interested in owning, servicing and upgrading my own PC, and I was well into my 20s by the time I had enough money and access, while stationed in Japan, to afford most of what I wanted to do in the realm of desktop computing. While I may not be as involved, or indulgent, as I once was, my knowledge has managed to keep pace with the speed of the industry. So, back to the Motorola/IBM PowerPC and Apple…
My personal experience with Apple began in 2003 with the purchase of my first iPod. I was so enamored with it that in 2004 I bought iPod minis for all of my family members. It wasn’t until 2008 that I finally purchased my first Macintosh, a unibody MacBook Pro, after a terrible experience trying to get a new Dell laptop with a defective hinge and display serviced under warranty. Having considered buying a Power Mac in 2001, I began watching Apple more closely in 2005, when news broke of the impending transition to Intel’s x86 processors. I remember Rosetta before its resurrection as “Rosetta 2.” I remember the heartaches and headaches caused not only by the transition to OS X, but also by round two of it, the move away from PowerPC. Translation code, emulation software and virtual machines back then weren’t much different from what they are today, but translating software from a streamlined RISC architecture to a much less efficient CISC processor was doomed to be difficult, and it was. Very little worked well, and achieving even a satisfactory pace required a lot of computing horsepower. Mind you, I’m talking “satisfactory” within context: computing speeds back then would make today’s users cry, even before anything was slowed further by an emulator or a virtual machine.
That’s why the Apple M1 processor is such a miracle. It does exactly what Apple says it does. It does what Apple promised during the PowerPC transition 15 years ago but failed to deliver. What you hear coming from a millennial’s face on YouTube is not hyperbole; the M1 clearly has processing power to spare, since it’s able to translate/emulate non-optimized code at a speed where users don’t even realize there’s an extra computing layer in between. If a millennial is shocked at the performance of the M1 when compared side by side with an Intel-equipped MacBook, they can’t even imagine what I, and others my age or older, are experiencing.
Our minds are completely blown.
We witnessed this sort of wholesale transition first-hand before, from the very same company. We cursed Rosetta. That was a hard shot of reality after being massaged with marketing hype and promises, followed by a near-total failure to deliver. It also came on the heels of the painful transition from the classic Mac OS 9 to the Unix-based OS X, where little was offered and even that didn’t work well.
Once we get past the fact that we have software running in emulation at a pace faster than the same software running on native hardware, we are confronted with the fact that it’s doing so cooler and more efficiently. The M1 runs harder, for longer, on much less energy, and our imaginations are running wild at the prospect of just how much more performance we’ll get once our entire workflows are coded to run natively. Even faster(?!?) and more efficiently, possibly gaining as much as 50% more battery life once Rosetta 2 is out of the picture? We can’t even fathom it. Hell, most of us can’t even fathom what Apple has already delivered.
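If you’re curious whether a given process really is passing through that extra layer, macOS on Apple Silicon exposes it through the sysctl.proc_translated flag. Here’s a minimal sketch in Python (assuming you run it on an M1 Mac; the helper name is my own):

    import subprocess

    def running_under_rosetta() -> bool:
        """Return True if the current process is being translated by Rosetta 2.

        'sysctl -n sysctl.proc_translated' prints 1 for a translated (x86_64)
        process and 0 for a native arm64 one; the key doesn't exist on Intel
        Macs, which we treat as not translated.
        """
        try:
            out = subprocess.run(
                ["sysctl", "-n", "sysctl.proc_translated"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
        except subprocess.CalledProcessError:
            return False  # key missing, e.g. on an Intel Mac
        return out == "1"

    if __name__ == "__main__":
        print("Translated by Rosetta 2" if running_under_rosetta() else "Running natively")

Run it under a native arm64 Python and it reports “Running natively”; run it under an x86_64 (Intel) build of Python on the same machine and it reports “Translated by Rosetta 2.”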
Add to this the context of Microsoft’s repeated failures to bring an ARM-based PC to market. Ask anyone and they’ll tell you the Surface Pro X is not a solution. The M1 proves to Microsoft that it can be done, so dedicated Windows users just have to wait until Microsoft is ready to make a full, unrestrained commitment to the RISC platform by shedding the constraints of decades-old interoperability requirements. For some reason, they’re unable to truly tap into the power ARM processors have available, whether it’s due to running a virtual machine, an inefficient emulator or simply not influencing the hardware design in their favor. I honestly don’t know where their failings lie when it comes to ARM desktop computing. All I do know is that Apple has disproven years of Microsoft trying to trivialize what ARM, and RISC computing in general, is capable of.
I’d like to try and sum it up this way: I’m a heavy user of Adobe Lightroom and the Creative Cloud suite in general. I’ve also been trying to transition to Capture One because of its greater speed and performance in comparison. I use both to post-process RAW images from a 24MP Fujifilm X-H1, whose RAF image files average about 25 megabytes straight out of camera. I also process photos from my 50MP Fujifilm GFX 50S, whose RAF files hover around 140 megabytes. The M1 MacBook Air with 8GB of RAM, the entry-level spec, is able to run Lightroom Classic faster than either my 2016 2.6GHz i7 MacBook Pro 15″ with 16GB of RAM, PCI-e SSD and discrete GPU, or my 2015 4GHz i7 iMac 27″ with 64GB of RAM, PCI-e SSD and discrete GPU.
Granted, neither of my machines is current or top-spec, but the M1 MacBook Air I used is an ENTRY-LEVEL machine, with no active cooling and, in the case of the iMac comparison, 1/8th the RAM.
Oh, and the Creative Cloud suite is NOT certified by Adobe as being compatible with the M1. Natively coded versions are currently in beta, and I haven’t used them.
I feared this transition because of my experience with the last one, mostly as an outsider, during the PowerPC to Intel migration. I assumed there would be very little software support at the start, poor performance under Rosetta 2 and slow uptake by third-party software houses. The last thing I ever expected was for Adobe Creative Cloud to run FASTER in EMULATION/TRANSLATION, especially when I saw that Adobe was not certifying it for use with Rosetta 2 on the M1. I figured I’d have to wait for native Adobe software that would arrive crippled and buggy and never achieve feature parity with Lightroom Classic. Because of that, I was planning to buy one more MacBook Pro 16″ and iMac 27″ with Intel processors to ensure I wouldn’t have to buy an M1 Macintosh for at least 4-5 years, essentially waiting out the time needed for software development. Instead, I’m able to run what I need on the M1. Not only that, but I’m able to run it faster than on my current Intel Macs. I’m completely beside myself.
Brain transplants have never gone this smoothly; prior experience tells me they aren’t supposed to. Apple has managed to detonate the status quo, and the M1 is revolutionizing the PC space. To be completely honest, I’m still in a state of shock and amazement, and that’s my point: people far younger than I am are all over YouTube, and the wider internet, expressing absolute amazement that Apple has managed to pull this off without a single show-stopping hitch. Those of us who actually lived through the previous attempts at the same thing, at an age old enough to remember them accurately (in our 20s and 30s rather than as tweens, early teens or younger), are even more stunned.
My point is that if you’re under 30 years old, you probably don’t remember just how badly the Mac OS X transition in 2001, and the PowerPC to Intel transition in 2006, went for most “power users” back then, especially if your own computing was limited to a browser and a word processor. As I mentioned earlier, I didn’t transition to Macintosh until 2008, because Apple-using friends of mine had advised me to hold off on my own switch from Windows until then.
So, what’s next? Despite the M1 already packing multiple cores, an 8-core CPU plus a 7- or 8-core GPU, onto a single “system on a chip” (SoC), the next step I foresee is the quilting of multiple M1 processors together into a single chip. In parallel, I also see the development of “compute cards,” essentially daughter boards carrying a complete M1 SoC and RAM, compatible with a motherboard-like bus board containing multiple slots to accommodate, say, 6-10 of these daughter boards: fully scalable workstation computing. That motherboard would have all of the provisions needed to accept discrete GPUs, plus slots for GDDR SDRAM, Apple’s Afterburner card, dedicated PCI-e slots for solid-state disks, maybe a SATA bus for bulk storage, Thunderbolt and USB 4, and the usual complement of standard chips and ports for audio, communication, networking, HDMI in/out, etc. I see this becoming the next iMac and Mac Pro, with the iMac either using 2-4 woven M1 SoCs on a single mainboard, or integrating 2-4 compute cards in the daughter board arrangement with a single shared, discrete GPU.
For a Mac Pro, I see a standard mainboard configuration that accepts 2-8 compute daughter boards, designed for modularity and configured to order. Imagine how powerful a Mac Pro would be with just 4 complete M1 SoCs running in parallel, supplemented by a pair of discrete GPUs in SLI, boosted by an Afterburner card, and fed by a bank of user-configurable GDDR SDRAM. It would be an absolute beast, with headroom for even more performance. Plus, it would consume less than half the power and require far less space and energy for cooling, giving it much more thermal headroom.
The trashcan Mac Pro could conceivably make a return with far more computing power but without the fan. Imagine that.
We would essentially have access to the latest in supercomputing hardware, scaled down for the home. The most powerful supercomputer is currently Fujitsu’s Fugaku in Kobe, Japan, powered by 8,266,752 individual cores: 48 compute cores plus 4 assistant cores per node, across 158,976 nodes. In other words, each of its 158,976 CPUs has 52 cores, and every one of them is an ARM CPU, chosen because the architecture delivers its performance at far lower power consumption. In fact, ARM is steadily working its way into supercomputing in general. Yes, really.
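That headline core count is just the per-node arithmetic carried across the whole machine; here’s a quick sanity check in Python of the figures as quoted above:

    # Sanity-checking the quoted Fugaku numbers: 48 compute + 4 assistant
    # cores per A64FX node, times 158,976 nodes.
    compute_cores_per_node = 48
    assistant_cores_per_node = 4
    nodes = 158_976

    total_cores = (compute_cores_per_node + assistant_cores_per_node) * nodes
    print(f"{total_cores:,}")  # 8,266,752, matching the figure above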
So, yeah, the possibility of having a scaled-down supercomputer at home is not hyperbole. Of course, this is all just conjecture, but every bit of it is a realistic way for Apple to scale the M1 up to meet many different computing needs.
Not a fan of Apple? Give it some time for certain manufacturers (cough, Microsoft, cough) to comprehend, then copy, what Apple has managed to do with the M1 and, more importantly, MacOS and Rosetta 2. In 5-10 years, Microsoft and others will finally introduce a version of the Surface Pro X that manages to meet the expectations established by the Apple M1 here in 2020.
The Apple M1 is the start of a whole new ballgame. Our expectations of computing have been changed forever, and the general consumer will no longer tolerate the excessive power consumption and long wait times inherent to current PC technology. RISC processing has finally reached its potential.