ELI SCHMIDT

Nearly a year after its release, I’ve seen virtually no marketing for the PlayStation Portal. Yet, it's selling like hotcakes. I had to find out what I was missing out on. Is this a product of the Sony propaganda machine, or something worth buying? After a month with PlayStation’s newest handheld, I’ve seen how it impresses, and where it disappoints.

But first, let’s talk about the PSP, the PlayStation Portable. In 2005, Sony released its first handheld console, and since then it’s become a classic. It was the first portable device that promised console-quality 3D games on the go, and it was celebrated for its library (and how easy it was to hack) even when it failed to live up to that promise. Nearly two decades later, and 12 years after its successor, the PlayStation Vita, Sony has re-entered the handheld race. Just not the way you might think.


Sony released the PlayStation Portal into an era where the dream of taking your PC and console games on the go is fully realised. Devices like the Steam Deck and ROG Ally do that very thing, and they do it quite well. It would make sense for Sony to release a competitor, one where you can play PlayStation exclusives like Final Fantasy VII Rebirth and God of War: Ragnarök anywhere you are. But Sony didn’t do that... They made this instead.

At SGD295.90, the PlayStation Portal is a great value for the tech, but its use case is remarkably limited. I wanted to love it, and as a piece of hardware I do, but I fear my streaming issues aren’t isolated incidents. If you want to pony up for a Portal, I recommend you do it only if you have a vast PS5 library and scorching-fast home internet.

Hardware: An Almost Perfect First Stab

The Portal is a dedicated remote play device that takes the form factor of a PS5 DualSense controller. Imagine cutting a DualSense in half and splicing a screen between each half. That’s exactly what this is. Using PlayStation’s remote play feature, you can stream any game you are playing on your PS5 directly to the Portal, as long as you are on the same Wi-Fi connection. That caveat is a big deal.

As a piece of hardware, the PlayStation Portal impressed me. The 8-inch touchscreen is roomy (not too big) and supports gameplay in 1080p at up to 60 frames per second. It’s a great controller in the first place, and now there’s a pretty damn good screen in the middle.

Most of the impressive (and gimmicky) features of the DualSense carry over to the Portal—including its advanced haptic feedback, adaptive triggers, built-in microphone, and overall ergonomics. The two things it’s lacking are a speaker and a touchpad. The lack of a speaker is mostly no biggie—even though I tested the one game that actually makes use of the controller speaker, GOTY 2024 contender Astro Bot.


The real fumble with this device is that the touchpad is replaced by an unreliable touchscreen interface. Tap the screen and two transparent squares will pop up to represent the left and right sides of the touchpad. In theory, these work.

In practice, they don’t. The Wired reviewer noted this made Alan Wake 2 unplayable. I didn’t even try to stream a game that was graphically intense over my internet. But in my time delving into Sony’s library of PS1 and PS2 titles, I found that the touchpad is often used as the start button in these emulated classics. On the Portal, this doesn’t work. When playing Ape Escape (which I was inspired to finally play thanks to Astro Bot), I was unable to switch gadgets because the start menu was inaccessible. In later levels, this makes things unplayable.

I had other hardware nitpicks (the Portal doesn’t support Bluetooth headphones), but on the whole, that’s not where my main concerns about the Portal’s usefulness lie. In fairness, the next part isn’t even Sony’s fault. The PlayStation Portal is a letdown… because of my internet.

Streaming: Expectations Meet Reality

Bandwidth is the linchpin of the PlayStation Portal. How much of it you have determines your experience with the device. Me? I have good enough internet for working and gaming online with no trouble, but I don’t have a connection that I would call fast, nor would I consider it all that stable. This was the Achilles’ heel of my time with the Portal.

In my month with the Portal, I’ve tested good-looking PS5 games including Ghost of Tsushima, Spider-Man: Miles Morales, Demon’s Souls, and Astro Bot. I also spent time with PS4 games The Last Guardian and God of War. In almost every case, the opening minutes of streaming were a disaster. Often, I would switch a game I was playing from the console to the Portal and be greeted with pixelated, laggy gameplay. Typically, this would get worse until the game paused altogether, booting me out and forcing me to reconnect. Only after reconnecting did some games perform well.

Even when the streaming worked smoothly, it was inconsistent from game to game. Higher-intensity titles didn’t stream as easily as less graphically demanding games, and I had more luck getting the PS4 games to run smoothly after the initial hiccups. Ironically, the games that streamed best were remastered versions of PSP and Vita games like Final Fantasy VII spin-off Crisis Core and the PS4 version of Gravity Rush. High-speed titles like Insomniac’s Spider-Man games or shooter rogue-lite Returnal never quite felt right on the Portal. Online shooters would certainly be a no-go on my Wi-Fi. Sorry, Helldivers.

Another streaming flaw I encountered almost instantly was the inability to stream “streamed” content to the Portal. In language that doesn’t use the form of “stream” three times in a phrase, that means no Netflix, no YouTube, etc. It also means that if you have access to the PlayStation Plus library of games that are only available via cloud streaming, they won’t run on the Portal. A bit of a disappointing oversight.

Overall, there were some games I could accept taking a graphical hit (and the occasional hitch) on, and many others I would rather just play on my TV.

Final Verdict: A Good Value for a Niche Audience

The use case for the PlayStation Portal is niche, for sure. If you have one TV in your home that’s often used by others, it’s an appealing offer, especially at the same price as a pair of Sony’s gaming earbuds. Chances are, if you’re already paying for fibre internet, the price isn’t a big deal.

Still, playing the Portal feels limited and tethered. Not being able to leave the good Wi-Fi zone of your house means it isn’t competing at all with what Nintendo and Valve have put out there. I also found that seemingly small quibbles like the lack of a touchpad or Bluetooth support were more detrimental than they sound. All the small things. True care, truth brings.

That said, these are the types of setbacks you’d expect from a first-generation device. Even if the Portal were flawless, though, it still wouldn’t solve the nation’s inadequate bandwidth infrastructure. Without improvements on that front, another PlayStation Portal would be a sequel that wouldn’t make much sense. For now, the current model’s effectiveness depends on your access to broadband.

Originally published on Esquire US

I've been wearing smartwatches and fitness trackers around my wrist for years, but I’d never worn an Oura Ring before. I hadn’t considered myself much of a “ring guy,” but I admittedly wasn’t a watch guy before I started wearing an Apple Watch. Now I can’t go a day without it. After wearing my Oura Ring Generation 3 for more than two months, I can almost say the same thing about the tiny sleep monitor that currently lives on my finger at all times.

As a sleep tracker, the Oura Ring 3 is remarkable. As a fitness tracker, it’s not bad, but it could be better. As a piece of wearable tech, it’s comfortable to wear constantly and consistently, even in bed.

(OURA RING)

The Oura Ring vs. a smartwatch

Let’s get straight to it. Does the Oura Ring do enough to replace a smartwatch? No. I think it serves an entirely separate function. To answer the trickier question of “Is an Oura Ring right for you?” it depends on what you’re looking for in a wearable health tracker.

If you want extensive amounts of data about your sleep and daily health tracking, as well as an accurate step counter, yes, it is. Want all that in a package that doesn’t look techy whatsoever? An even better reason to choose one. If you want a completely smart device that will show you texts, calls, and reminders and, most important, tell you what time it is, buy a watch.

Setting up an Oura Ring

My Oura Ring journey began like any other—with a sizing kit. After you choose your make and model, Oura will send you a box of ring sizers ranging from sizes 6 to 13. They recommend you wear the smart ring on your index finger, but—due to a Little League–related accident in my youth—I’ve found it most comfortable to wear on the middle finger of my non-dominant (left) hand. Indecision frequently haunts me, so I was initially worried that my chosen size (11) would be too tight or too loose, but after weeks of everyday wear, I can safely say I don’t think about it too much anymore.

Once your device (it feels strange to call something this small a “device”) arrives, it’s time to download the app. The setup process is pretty easy and the onboarding is gradual. Certain data, like stress levels, resilience, long-term trends, and reports tabs, are inaccessible on day one. To start, I primarily relied on the ring for sleep and restfulness data. In this way, the Oura Ring puts its best foot forward.

First impressions: It’s stylish and discreet

Oura offers several style and finish options for your ring. You can opt for Heritage, the original design with a raised plateau segment, or the fully rounded Horizon. Each has a selection of metal finishes to choose from. In terms of tech, the rings are all identical. No plus or pro offerings, just one ring to rule them all. Each Gen 3 has three sensors on the inside of the ring that use biometrics to track daily functions, including heart rate and blood-oxygen levels.

About a month into my time with the ring, I went on a family vacation and multiple people asked me if my Horizon Oura Ring was a wedding ring or an engagement band. That’s how slick it is. It’s that normal looking. The fact that it’s so high-tech and looks like any other SGD450 ring made it easy to incorporate Oura into my daily routine.

Charging the Oura Ring

Since you are supposed to wear it all the time yet it’s also an electronic device, one of my first questions was “When will I charge my Oura Ring?” The answer: during showers. The ring itself is waterproof up to 330 feet; that means swimming is no problem, and the same goes for doing the dishes, washing your hands, etc. This is meant to monitor you at all times, remember? That makes it a great choice for swimmers who want to track their workouts.

Every morning, I wake up to see how I slept and to confirm the previous day’s activities, then I slip my ring off to shower and back on before I start my day. It ends up fading into the background of my busy life. Sometimes I’ll check the app to see my daily stress levels, but generally I only think about my Oura Ring in the morning and at night.

(OURA RING)

Tracking sleep and getting in tune with myself

While heart rate and blood-oxygen sensing are the newest features of the Oura Ring (only available on the Gen 3), sleep tracking is the most impressive feature, and it has only improved with each iteration. This is where form factor and function fully align to accomplish something a smartwatch has yet to do: provide accurate, seamless data about my sleep health.

At first, I felt the insights were a bit obvious. But I soon realised that I trusted the data, since it reflected how I was actually feeling in the form of a score. Now I wake up each day, ready for my scores to tell me how I slept, not the other way around. Even knowing simple information, like when exactly I fell asleep and precisely how many sleep minutes I get per night, feels like a breakthrough in understanding my body. And that’s just scratching the surface.

The main thing I gravitate toward is the scores. Each morning, once the ring determines I’m fully awake, I will get scores from 0 (typically above 50 if I slept at all) to 100 that rate both my sleep and my readiness for the day. I cannot emphasise how much I love these stupid numbers. Seeing a high readiness score can reinforce a feeling that I’m going to have a good day, while a lower sleep score is an excellent validation of why I feel like shit. In fact, this is where the sleep tab of the app truly comes into play. Broken-down stats on REM and deep-sleep time, or my overall sleep efficiency, allow me to quickly compare each night’s sleep with my norm.

Eventually, the app will start providing a Resilience rating. Mine currently reads “solid,” but with proper self-care, I can raise that to “exceptional” over time. This aspect is actually quite vague and difficult to engage with, but another Oura Ring wearer I spoke to called it her favourite feature. To each their own.

(OURA RING/Courtesy of Bryn Gelbart)

Fitness, health tracking, and data overload

The health data is impressively accurate for a device like this, but it’s not perfect, especially the further your health is from the baseline of what’s expected. An example: I was born with a congenital heart condition, a bicuspid aortic valve, so I have a very strange-sounding heartbeat. My heart also has to pump twice as much as most people’s to produce the same blood flow. The point is, I already have a reason to be suspicious of how accurately the Oura Ring can monitor my heart health, confirmed by its rating of my “cardiovascular age” as thirteen years older than I am. Do I have the heart of a man in his early forties? Maybe, but I’m sure plenty of forty-year-olds have stronger hearts than I do.

The issue is, if I were unaware of my condition, this would be concerning. And everything that the Oura app can recommend is general, lowest-common-denominator health advice. Eating fruit and working out won’t actually do anything substantial for my cardiovascular readings. This is all to say, you are probably never going to get life-saving data from this thing. The most it can do is help you get better sleep and exercise more, which can admittedly feel life-changing.

In terms of general health tracking, like daytime stress and heart-rate data, the Oura Ring and app are very comprehensive. It’s easy to get lost in the sauce, and every week I swear either I’m gaining access to new features or the app is being updated. The amount of information here can be a little overwhelming.

When tracking my activities and exercise, the Oura Ring 3 has advantages and downsides compared with the smartwatches I’ve used. As a pedometer, it’s more accurate at tracking my steps and daily calorie burn than my smartwatch. It also provides way more data than I’ve ever gotten from my Apple Watch, but it’s worth mentioning that I don’t subscribe to Apple Fitness+. For this review, I received Oura’s subscription to test out all of the ring’s features, but there will be more on how that works later. Just know that for now, I was very impressed by the amount of fitness data provided. But when it comes to workout tracking and recognition, the ring lags.

(OURA RING/Courtesy of Bryn Gelbart)

This is one of my favourite features of the Apple Watch. When I start an elliptical workout or a bike ride or even a long walk, it will accurately identify it 95 percent of the time and ask me if I want to record the workout. As a result, I always have digital records of all my workouts on my phone, fully automated. Its tech wasn’t always this accurate, but Apple has invested a lot of time and money into it. I can’t say the same for Oura, unfortunately.

For starters, having to open the app to retroactively confirm and log my workouts is one more step than I’m used to taking. Beyond that, I found the functionality often lacking. Once, my Oura Ring correctly identified a forty-minute elliptical workout. More commonly, though, it will misidentify it as (maybe?) a walk, as it does most non-running workouts. Most days, I have to confirm four or five walks in my exercise log, meaning the ring doesn’t know the difference between a trek to the subway and a short hike.

The hidden cost of an Oura Ring

Up front, an Oura Ring will cost you from approximately SGD450 before tax, depending on which style and finish you choose. The newer Horizon models will generally run you slightly more than the OG Heritage design, and fancier finishes like Brushed Titanium, Gold, and Rose Gold will add to the price tag. While that’s not nothing, I live in a city where a cup of coffee rarely costs less than five bucks. Four or five hundred dollars for something you will use every day is reasonable compared with, well, the state of everything else.

What really irks me is the subscription model that’s tacked on to that. After an included free month of fully featured access, Oura begins charging SGD9 per month for access to in-depth sleep insights, heart-rate monitoring, body-temperature readings, blood-oxygen readings—pretty much everything you would use it for.

It isn’t so much the cost that frustrates me (it’s fairly affordable compared with direct alternatives like Apple Fitness+) but rather the dread of that payment hanging over my head every month until I want to stop using the device—all to use its basic functions. What baffles me is how fundamentally useless the Oura Ring 3 is without a subscription. It just feels like another company trying to bleed its users dry when we’ve just invested hundreds of dollars in a product. You’ll have barely unlocked access to all the features after one month of use, making the free month feel like even more of a “lite” version of the true experience than is advertised. A free year would’ve at least been a compromise.

So, a final verdict

I really, really like the Oura Ring Horizon Gen 3. I like how it looks and how much of a conversation starter it has proved to be. Most of all, I like how it’s confirmed something I’ve always known but never had the data to prove: I get a pretty healthy amount of sleep. My bedtime is way more consistent than I expected. Even a small insight like this has started to change how I think about my sleep and, by extension, my mood and energy levels.

Even as an Apple Watch user of several years, I’ve found a way to slot the Oura Ring into my life and let it teach me something new about myself. That’s something I can’t say about most products I try. If I ever take this thing off, it’s either because I’m taking a shower or my subscription has finally lapsed.



Why trust Esquire?

At Esquire, we’ve been testing and reviewing the latest and greatest products for decades. We do hands-on testing with every gadget and piece of gear we review. From portable monitors to phone cameras, we’ve tested the best products—and some not-so-great ones for good measure.

To review this Oura Ring, I tried it out for many weeks before even sitting down to start writing. Plus, I spoke with other Esquire staff members about their past and current experiences with the product to get the fullest picture possible.

Originally published on Esquire US

Just when I thought we'd hit capacity on mid-tier consumer headphones, Sonos made its long-awaited entrance. We've already got classic brands like Apple, Beats, Bose, Marshall, and Sony. We've got luxury plays from Bowers & Wilkins, Bang & Olufsen, and most recently Dyson. Consumer headphones are a multibillion-dollar industry (Statista values it at SGD24 billion globally), so there's a lot of money to be made off our active-noise-cancelling obsession, and there have been a lot of shitty attempts to enter the market.

So, did Sonos do it right? Do the brand-new Sonos Ace headphones move me in any way? Surprisingly, yes. After a couple months of testing, I think these are some of the best headphones available. At SGD699, they're good for music listening and travel, but they're best in class for at-home TV watching.


First, what makes them stand out?

(SONOS)

One thing: Sonos Audio Swap. Everything else that's great about these headphones—active noise cancellation, spatial audio, lossless streaming—other headphones do just as well. Audio Swap establishes these as TV-watching headphones, a category where they face little to no competition.

When you have a Sonos soundbar, Audio Swap uses the HDMI connection to pull hi-fi sound from the TV and share it with the headphones via Bluetooth. (Currently, this is only available with the Sonos Arc, but the brand is promising compatibility with lesser soundbars as soon as possible.) For flat living, it's great. My girlfriend and I are both guilty of holding unpredictable late-night movie-watching hours long after the other has gone to sleep.

Normally, there are two options. 1) Movie watcher tells sleeper to wear earplugs and get over it. 2) Movie watcher respectfully turns the sound down so low that the dialogue is impossible to hear. Sonos Audio Swap is the fix we've both craved. The Dolby Atmos spatial audio makes it feel as if you're listening on a proper surround-sound system, but it's all within your own head.

Full transparency, though: This is not a new concept. You can already stream TV audio to a pair of spatial audio-equipped headphones with Apple TV 4K and a pair of AirPods Max. The difference is that the Sonos home-entertainment ecosystem takes it up a notch.

See, since Sonos is already deep into home audio, the Ace has been built into that infrastructure. The most obvious example is in the TrueCinema technology. At the time of this writing, the software is still being worked on for a consumer rollout, but I got a little taste at an exclusive Sonos media event. TrueCinema will use the room-mapping capabilities of the Sonos soundbar to determine what your movie-watching experience sounds like in various positions around the room.

Then it shares that information with the headphones, so when you're sitting on your sofa, the audio sounds exactly the same as when it's coming from your soundbar. And if you walk around the room, the spatial audio centre doesn't move with you, so you get a different listening experience. Sonos is trying to replicate what it sounds like to watch TV without headphones while wearing headphones. An ambitious goal that I think will pay off big.

Okay, shut up about watching TV; are they good day-to-day headphones?

Ace headphones and Arc soundbar.
(SONOS)

Yes, they're amazing for travel, music, podcast or audiobook listening, and everything else. But pretty much all the headphones in this price range are. When you're comparing any of these models, you have to dig deep to find differences.

As for me, I split the category into two (a bit arbitrary) subcategories: music headphones and podcast/audiobook headphones. Bose and Sony are podcast/audiobook headphones, because they have the best active noise cancellation. So is Bowers & Wilkins, because its bright house sound is good for dialogue. All-rounders like Bang & Olufsen, Apple, and the Sonos Ace are music headphones. (Beats are in their own bass-heavy category.)

The best compliment I can give the Sonos Ace is that they're the best competitor to the AirPods Max, which I love. The sound is full, from bottom to top. On the low end, you get deep bass and those rich low-mids that make you feel the music. In the middle, it's true to life. On the high end, you get crisp treble and vocals that cut through the rest of it. As expected, Sonos hit all the notes it needed to.

And how do they stack up to the AirPods Max in terms of usability? About the same. They connect quickly, and the Sonos app lets you play with EQ settings. They look good in either white or black. The headband is sturdy, with stainless-steel interior components, and smooth when adjusting. The case is fine. To be nitpicky, I think the recycled plastic feels a bit cheap. But the case itself is sturdy, slim, and great for travel.

Speaking of travel, that's where I think these would overtake the AirPods Max for me. They're ever so slightly lighter but feel just as substantial. The case is hard and about the size of a book, so it's easy to slip into a crowded carry-on without worrying about damaging the headphones. But the biggest win is that Sonos includes a USB-C-to-3.5mm cable in the case. That means no dongles or stupid pre-travel purchases. From day one, you're good to go with in-flight entertainment.

All right, final verdict. Who should buy the Sonos Ace?

If you've already got a Sonos home audio system, or have grand ambitions to get a Sonos home audio system, buy a pair. If you're a frequent flyer who's always wanted a pair of headphones with a better travel case and an included 3.5mm adaptor, buy a pair.

The music performance is great, but it's not miles better than the other options out there. What I can say for a fact is that Sonos Ace headphones are the best home entertainment headphones on the market. If you can drop the money on both these and the Arc soundbar, there's not a better home audio setup available. If you're not interested in sitting at home watching TV through your headphones, maybe play the field.

PRO: Easily the best headphones for watching TV and movies

CON: For music, podcasts, or audiobooks it's not clear-cut—on par with AirPods, in my opinion

Originally published on Esquire US

It was an audacious move when Dyson decided to plunge into the deep end of audio. Dyson is allowed to experiment, but with the Dyson Zone, it was trying to be a lot of things. For one, it was a pair of headphones, but it was also an air purifier? It's as though the brand wasn't confident in its foray into the audio space and still clung to the signature fans that put it on the map in the first place. Those two disparate functions—audio fidelity and air purifying—found a shaky common ground in the Zone, but not only was the design ridiculous (Bane, anyone?), it was heavy and, in some cases, the air-purifying sensors weren't as accurate as they should be. But the noise cancellation and audio fidelity showed promise, which brings us to the brand's first audio-only headphones: the Dyson OnTrac.

Drawing from 30 years' worth of aeroacoustics R&D, what Dyson has going for it is its own custom Active Noise Cancellation (ANC) algorithm. The ear cushions on the headphone cups create a seal on the ears, and each cup is outfitted with eight microphones that sample external sounds 384,000 times per second and reduce noise by up to 40dB. Armed with 40mm, 16-ohm neodymium speaker drivers and advanced audio signal processing, you get a clear delivery. You get your highs and lows with a wide frequency range—a resonant 6 Hertz to a crisp 21,000 Hertz. Another feature is the tilting of the speaker housing at 13 degrees towards the ear for a more direct audio response.

You get a battery life of up to 55 hours. For weight distribution, instead of being housed in the cups, two high-capacity lithium-ion battery cells are positioned at the 10 and 2 o'clock positions of the headband. The ergonomics of the headphones are great. We wore them for about two hours and felt no tension on the neck or the temples. High-grade foam cushions and multi-pivot gimbal arms relieve ear pressure, while the soft micro-suede ear cushions and optimised clamp force ensure a consistent and comfortable fit.

Design and Customisation

One thing that sets this apart from all the other headphones is that the Dyson OnTrac allows for customisation of the ear cushions and the outer caps. Usually, that sort of feature is disabled to maintain the drivers' integrity, but Dyson is confident enough that even when you swap out the modular cushions and caps, the Dyson OnTrac will perform as well as it should.

The Dyson OnTrac comes in four base colourways—aluminium (finished via computer numerical control machining), copper, nickel, and a ceramic cinnabar variant with a ceramic-like painted finish. Then you have customisable caps and cushions in different hues, which give over 2,000 colour combos. The caps are made of high-grade aluminium and come in either anodised or ceramic finishes.

The Dyson OnTrac headphones retail for SGD699 and will be available in September 2024 at all Dyson outlets and online.

(DAN WINTERS)

When it comes to prognostication, look to the dreamers, who imagine the things that will be made possible. For Apple, it's the sort of long view that benefits from the company's long development cycle for its products. It is this sort of gestation period that allowed the Apple Vision Pro (AVP) to come to fruition.

In 2007, a patent was granted to Apple for an "HMD (head-mounted display)", which means development for the Apple Vision Pro ran for 16 years. That's the problem with dreamers: it takes a while before everybody, including the technology, catches up. But while most of the tech needed to be invented, some things, like the look of the device, didn't veer too far from a concept sketch.

"When we started this project, almost none of these tech existed," said Mike Rockwell, VP of Apple’s Vision Products Group. "We had to invent almost everything to make it happen."

Alan Dye, Apple's Vice President of Human Interface Design, said that the AVP was, by far, the most ambitious Apple product they had to design. "I can't believe that any other company would be able to make something like this, as it requires all disciplines across the studio to come together to create one singular product experience. It's kinda unprecedented."

Richard Howarth, Vice President of Industrial Design, concurs. "One of the reasons that it was very ambitious was that it hadn't been done. Nothing with this sort of resolution and computing power had ever been done.

"We didn't even know if it was possible."


The prototype was huge. Powered by a roomful of computers. Thick cables ran between them. Although a behemoth, the prototype represented proof that it was possible.

Yes, during development, there were VR headsets that were released to the public. But Dye and Howarth weren't interested in creating a VR headset; they wanted a way to bridge people. Dye explains that whenever someone dons a VR headset, they are isolated from other people around them. "We wanted [the Apple Vision Pro] to foster connection, both by bringing people, from across the world, right into your space or by remaining connected with people around you."

The intent to connect framed how the product was designed. Like EyeSight, where your eyes—or a simulacrum of your eyes—appear on the front of the Apple Vision Pro if you're addressing someone or if you're using an app (an animation plays letting others know that you can't see them). Essentially, it's a visual aid for others to know whether you're available or not.

Mixing AR and VR, the Apple Vision Pro would pioneer "spatial computing", where digital information is integrated into the user's physical environment. The only way that could work was with Apple's proprietary M2 chip powering the device alongside an R1 spatial co-processor. Another way for the AVP to process the workload is "foveated rendering", where it only renders what your eyes are looking at.

The micro-OLED display that puts out 23 million pixels didn't hurt either. There are also 12 cameras for precise inside-out tracking (it tracks your eyes, your hand gestures, and anyone who comes within your ambit).

Dye and Howarth didn't want to use external controllers, opting instead for hand gestures and voice commands to get around. But the hardware is only as good as the software. That's where visionOS comes in.

visionOS lets you create your own Persona, an almost-realistic avatar of yourself, and allows for the aforementioned EyeSight. It's still kinda janky (eye tracking is left wanting if I'm selecting something at the edge of my periphery).

But still, visionOS won the prestigious D&AD Black Pencil award for Digital Design and a Silver Cannes Lion for Digital Craft. The judges saw potential, and there's still visionOS 2 on the horizon, an update that allows a functioning Magic Keyboard to appear in a virtual environment and lets you customise the icons' positions on the home screen.

One of the features that I look forward to is creating spatial photos from images in the Photos app library. Using advanced machine learning, visionOS turns a 2D image into a spatial photo that comes to life on the AVP.


(DAN WINTERS)

It was only a few years ago that Google Glass was slammed by the public for being too intrusive. While cultural norms have shifted to the point where the public is lax about their privacy, Apple remains adamant about privacy and security.

"It's important to know that Vision Pro has a privacy-first design at its core. We took great care in privacy and security for it," Rockwell says. "We don't give camera access to the developers directly. When your eyes highlight a UI element, the developers won't know where your eye position is. They are only informed if you tap on something. Another thing is that if you're capturing a spatial video or spatial photo, it alerts others on the front display that you're recording."

The Apple Vision Pro retails for SGD5,299 and there will be many who would baulk at that price.

"We built an incredible product that we believe has enormous value," Rockwell explains. "This is not a toy. It's a very powerful tool. It's intended to be something that can give you computing capabilities, the ability to use this in a way where there's nothing else out there that can do it. We reached into the future to pull back a bunch of technology to make it happen.

"We want to ensure that this has fundamental and intrinsic value and we believe that at the price, it is of good value."

Perhaps access to the future is worth the ticket? The Apple Vision Pro is emblematic of the promise of the imminent. Of the convenience of speaking with your loved ones or the experience of traipsing through lands unseen.

Like the eyes of the oracle, the device brims with potential and, given time, that future will only become more realised.

The Apple Vision Pro is out now.

The luxury fashion house is breaking new ground by venturing into the gaming and virtual world—Fortnite. How? By letting players experience the Versace Mercury sneaker in-game for a limited time.

The collaboration adds a fun twist by inviting players to Murder Mystery (a Fortnite Creative map) to explore an archaeological dig site. The first to discover the Versace Mercury sneaker wins; the digital treasure comes with a special in-game perk and puts that player in the spotlight for that round. To promote the sneaker, Twitch streamer Agent 00 is offering his viewers a chance to win a real pair during his stream.

(VERSACE X FORTNITE)

That's not all. The collaboration also extends to Snapchat, offering a Snap AR experience and a Bitmoji digital collection. Platforms like Fortnite offer a new medium for brands to convey their story and visual identity in gamified and social ways. This approach gives players relevant rewards that enhance their experience, whether through gameplay or cosmetic items for self-expression. With this collab, Versace continues its commitment to virtual worlds and the value of digital fashion items.

In the real world, the Versace Mercury collection is made from high-quality calf leather and features a single sole, embodying both futuristic design and versatility. Sci-fi inspired, these sneakers have a complex structure of 86 precisely crafted components. The upper and lining of each pair alone consist of 30 pieces, all seamlessly cut and stitched together.

Fortnite is free-to-play and is available on most platforms, including PlayStation 4, PlayStation 5, Nintendo Switch, and Xbox Series X/S.

Samsung held its biannual unveiling of devices yesterday in Paris. It was, after all, a marketing strategy: a slew of Samsung devices announced at the locale of this year's Olympics. With pomp and circumstance comes the expectation of something new from the South Korean tech giant. Here's what was announced at this year's Unpacked event.

Galaxy AI

The AI game heats up even further as Samsung reiterates its commitment to integrating Galaxy AI across its product ecosystem. Samsung was the first major phone brand to announce its own AI in Galaxy AI, and while that thunder was stolen by Apple announcing its proprietary Apple Intelligence, Samsung reminds us that it already has a working Galaxy AI and that more of its products will have it.

One of the more impressive Galaxy AI additions is the Sketch to Image feature, where your rudimentary doodle can be generated into fully fleshed-out images in different styles. Apple teased something similar with its Image Wand, but at this point, it's all about the speed at which you can showcase AI, so this round goes to Samsung.

Galaxy Z Fold6 and Z Flip6

Samsung's signature phones return: the Galaxy Z Fold6 and the Galaxy Z Flip6. Touted to be "the slimmest and lightest Z series" yet, the phones are also blessed with enhanced Armor Aluminum and Corning Gorilla Glass Victus 2 for more durability. In addition to being reliable, every element of the Z series is also powerful: both the Z Fold6 and Z Flip6 are equipped with the Snapdragon 8 Gen 3 Mobile Platform, the most advanced Snapdragon mobile processor yet.

The Galaxy Z Fold6 has a sleeker design and a Dynamic AMOLED 2X screen, which gives it unparalleled brightness. Its chipset and a 1.6x larger vapour chamber also make for an upgraded gaming experience. Meanwhile, the Galaxy Z Flip6 has a new 50MP wide camera, a 12MP ultra-wide sensor, and a larger battery.

Galaxy Watch Ultra

Let's address the elephant dominating the room: yes, obvious comparisons will be made with the Apple Watch Ultra. From the orange band to the orange "action button" to the shape of the dial, I guess imitation is a form of flattery? But looks aside, the Galaxy Watch Ultra seems to hold its own with its pricing and health-measurement specs.

Galaxy Ring

Other than the smartwatch, there's this smart ring, which is meant to be worn round the clock. It's less intrusive than a smartwatch, which makes for easier health tracking. Imbued with three sensors—accelerometer, photoplethysmography, and skin temperature—the Galaxy Ring can monitor and collate various health metrics. It comes in several sizes.

Relive the unpacking here

Welp, those were our key takeaways from this year's Unpacked. As we go through the devices, we'll let you know in depth what to expect from each of them.

Let's start with science fiction and how we imagine it—the time travel, the phasers, the lightsabers. It's what makes the future so alluring: that the things we imagine are made real. Of course, there are always the pesky constraints of real-world physics that keep such wonders shackled in the realm of the mind. But sometimes a little stubbornness goes a long way. Such is the case of Apple and its entry into the mixed reality game: the Vision Pro.

From your View-Masters (remember those?) to the Oculus Rift, we have been creating "headsets that immerse you into another reality". (To set the record straight, we're not talking about augmented reality, which is digital content overlaid on the real world, but mixed reality, which integrates digital objects into the user's environment.)

Apple may not have pioneered mixed reality, but it sure is gonna leave its competitors in its wake with "spatial computing".

We tried the Apple Vision Pro (or the AVP, which shares its initialism with Aliens Versus Predator) and the visuals are, for lack of a better word, magical. It's magical that you can look at an icon, double-tap your fingertips, and the programme opens. It's magical that you don't get the bends from being in an immersive video. And it is so magical that you can open up multiple windows and... work became fun? It felt like that Johnny Mnemonic scene.

One of the ways that the AVP is able to process the workload is a sneaky thing called "foveated rendering". Because it tracks your eyes, it only renders what you're looking at: stare at a window and it comes into focus. Look at another window and that becomes sharp. If you think about it, that's how our eyes work anyway.

The hardware of this thing is incredible. Made of magnesium and carbon fibre, it has twelve cameras—from hand tracking to spatial tracking—positioned throughout the headset. There's an M2 processor and an R1 spatial co-processor to deliver a smooth performance. The eye tracking is a cinch and there's no lag in the video passthrough.

On the corners of the goggles are a digital crown that adjusts the volume and the immersion, and a button that you can depress to take photos and videos. There are speakers fixed to the arms of the Vision Pro, but if the volume goes past a certain level, everybody around you is privy to what you're hearing.

The AVP's Persona feature is kinda weird. Think of a Persona as your avatar. Your Persona will reflect your facial expressions (stick out your tongue; gesticulate with your hands), and it sits at the fringes of the Uncanny Valley. You can FaceTime or enter an online meeting with other people's Personas; when they appear, the hairs on your arm will rise a little. But after a while, you get used to it. And then their Personas kinda look like ghosts in your living room. Except they are presenting a PowerPoint.

If you're wondering, why not use a Memoji? The only reason I can think of is that in a business meeting there has to be a level of professionalism, so a unicorn or a poop Memoji may not fly. Then again, it would be nice to have options. Perhaps in the next visionOS upgrade.

By the way, there's an announcement that there will be a visionOS 2, where you can create spatial photos from your 2D images, use new gesture controls, and enjoy an enhanced Persona—accurate skin tones, clothing colour options. Who knows, maybe there will be an inclusion of Memojis?

Is the writer opening up an app or is he dead?

The Downsides

The price is steep. Like, SGD5,299 steep. But that's to justify the years of R&D and the components. You hold the AVP in your hands and it feels nice. And I suspect that months later, people wouldn't blink at the price tag. I remember when mobile phones retailed at four digits and my uncle self thought, welp, I'm not paying that much for a compact supercomputer. A year or two later, that sort of pricing for a mobile phone became normalised.

To fit in all that goodness that makes the AVP work its magic, it has some weight to it. To be fair, it weighs about 649g. That's equivalent to a medium-sized chinchilla or a bag of Cadbury Triple Pack Mixed Eggs. Not that heavy, right? But when you're wearing the AVP outfitted with a Solo Knit Band on your face, after a while, you're gonna feel it. And because of my terrible posture, my neck compensates for the weight and I hunch even further.

As a remedy, you can swap out the Solo Knit Band for the Dual Loop Band, which gives better weight distribution. Or, if you're a stubborn cock like me and you find it leceh to change to a Dual Loop Band, you can wear it lying down.

If you're worried about the tension in your neck, don't worry; you'll know it's time to put down the AVP when it runs out of battery after two hours of general use.

I kid.

Verdict

It's not perfect but this is a game changer. The AVP has shown what is possible and also poses what else can be done. We don't think that Apple is done with the Vision Pro; there's a roadmap, and it's gonna take a few generations of the AVP before it gets to that stage where you can't ignore it any longer. Like the first-gen iPod or the first-gen iPhone, the AVP has raised the bar and the other brands are gonna have to play catch-up.

It's a promise of a future, one that is bright with potential and all it took was an Apple Vision Pro for that glimpse.

The Apple Vision Pro is out now.

It's hard to think of Dyson as anything but a vacuum company. It's true that it was founder James Dyson's reinvention of the vacuum turbine that propelled the still-family-owned business into the spotlight, but the brand has been diversifying into other areas like hair dryers, lamps and air purifiers. It even dipped its toes into EVs for a period before abandoning the project altogether. The company sees a market in household equipment, which makes this next product kinda a no-brainer but also has us scratching our heads. Y'all, meet the WashG1.

This is marketed as a "wet cleaner"... which my mother, in her infinite wisdom, calls an "atas mop". But this isn't Dyson's first foray into mopping. There was the V12s Detect Submarine, a dry vacuum that could mop as well.

The conventional thinking is that a wet cleaner operates by suctioning up wet debris. But that usually clogs up the moving parts, and trapped debris can emit a bad odour. So, fixing a turbine in the WashG1 was a no-go. How does a brand known for its turbine innovation reinvent the wet cleaner? Simple. Instead of air suction, the machine uses water pressure.

Water delivery is determined by a pulse-modulated hydration pump that adjusts the amount of water. There's a one-litre clean-water tank, while another tank collects the filthy water. A separation feature divides debris and dirty water at the source, enabling hygienic, no-touch disposal. You can use plain water for the clean-up or add a little floor-cleaning liquid to it. Alas, the WashG1 only works on hard flooring. Carpets? Forget about them. With three modes of cleaning, users can also opt for a no-water mode.

Close-up of the two rollers that pick up dirt, which is then separated.

The cleaner head has two motorised counter-rotating microfibre rollers that absorb the dirt. With each rotation, dirt is extracted, water wets the roller, and the roller presses against a plate that squeezes out the dirty water. A secondary roller with nylon bristles picks up bigger debris and hair, which are collected into a tray that sits in the cleaner head.

In the end, the WashG1 does the job. Quite remarkably, I must add.

A charging stand lets you rest the WashG1 in its dock, where it cleans itself. The time it takes to clean itself? About two minutes. But if you're anything like my mom, you can clean the WashG1 yourself: detach the rollers from the cleaner head and wash them. The water tanks can also be removed for cleaning.

Downsides to the WashG1? Well, we mentioned that it is only effective on hard flooring. And the rollers won't last forever. Exactly how often they need replacing depends on how much washing you do, but for a daily clean, Dyson puts it down to a minimum of six months.

Bottom line: will the WashG1 replace the mop? It depends. It's pretty good with the clean-up but the price might put some people off (SGD999).

Housework isn't usually sexy, but the WashG1 makes the process a hell of a lot easier.

The Dyson WashG1 will be available online and at all Dyson stores and distributors in July.

In the age of AI, it can feel as if this technology’s march into our lives is inevitable. From taking our jobs to writing our poetry, AI is suddenly everywhere we don’t want it to be.

But it doesn’t have to be this way. Just ask Madhumita Murgia, the AI editor at The Financial Times and the author of the barn-burning new book Code Dependent: Living in the Shadow of AI. Unlike most reporting about AI, which focuses on Silicon Valley power players or the technology itself, Murgia trains her lens on ordinary people encountering AI in their daily lives.

This “global precariat” of working people is often irrevocably harmed by these dust-ups; as Murgia writes, the implementation and governance of algorithms has become “a human rights issue.” She tells Esquire, “Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.”

Murgia takes readers around the globe in a series of immersive reported vignettes, each one trained on AI’s damaging effects on the self, from “your livelihood” to “your freedom.” In Amsterdam, she highlights a predictive policing program that stigmatises children as likely criminals; in Kenya, she spotlights data workers lifted out of brutal poverty but still vulnerable to corporate exploitation; in Pittsburgh, she interviews UberEats couriers fighting back against the black-box algorithms that cheat them out of already meagre wages.

Yet there are also bright spots, particularly a chapter set in rural Indian villages, where under-resourced doctors use AI-assisted apps as diagnostic aids in their fight against tuberculosis. Despite the prevalent sense of impending doom, there’s still time to reconfigure our relationship to this technology, Murgia insists. “This is how we should all see AI,” she tells Esquire, “as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us.”

Murgia spoke with Esquire by Zoom from her home in London about data labour, the future of technology regulation, and how to keep AI from reading bedtime stories to our children.


ESQUIRE: What is data colonialism, and how do we see it manifest through the lens of AI?

MADHUMITA MURGIA: Two academics, Nick Couldry and Ulises A. Mejias, came up with this term to draw parallels between modern colonialism and older forms of colonialism, like the British colonisation of India and other parts of the world. The resource extraction during that period harmed the lives of those who were colonised, much like how corporations today, particularly tech companies, are performing a similar kind of resource extraction. In this case, rather than oil or cotton, the resource is data.

In reporting this book, I saw how big Silicon Valley firms go to various parts of the world I visited, like India, Argentina, Kenya, and Bulgaria, and use the people there as data points to build systems that become trillion-dollar companies. But the people never see the full benefits of those AI systems to which they’ve given their data. Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.

You write that data workers “are as precarious as factory workers; their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.” What would it take to make their labour more apparent, and what would change if the reality of how AI works was more widely understood?

For me, the first surprise was how invisible these workers really are. When I talk to people, they’re shocked to learn that there are factories of real humans who tag data. Most assume that AI teaches itself somehow. So even just increasing understanding of their existence means that people start thinking, There’s somebody on the other end of this. Beyond that, the way the AI supply chain is set up, we only see the engineers building the final product. We think of them as the creators of the technology, so automatically, all the value is placed there.

Of course, these are brilliant computer scientists, so you can see why they’re paid millions of dollars for their work. But because the workers on the other end of the supply chain are so invisible, we underplay what they’re worth, and that shows up in the wages. Yes, these are workers in developing countries, and this is a standard outsourcing model. But when you look at the huge disparity in their living wage of $2.50 an hour going into the technology inside a Tesla car, and then you see what a Tesla car costs or what Elon Musk is worth or what that company is making, the disparity is huge. There’s just no way these workers benefit from being a part of this business.

If you hear technologists talking about it, they say we all get brought along for the ride—that productivity rises, bottom lines rise, money is flushed into our economy, and all of our lives get better. But what we’re seeing in practice is those who are most in need of these jobs are not seeing the huge upside that AI companies are starting to see, and so we’re failing them in that promise. We have to decide as a society: What is fair pay for somebody who’s part of this pipeline? What labour rights should they have? These workers don’t really have a voice. They’re so precarious economically. And so we need to have an active discussion. If there are going to be more AI systems, there’s going to be more data labour, so now is the time for us to figure out how they can see the upside of this revolution we’re all shouting from the rooftops about.

One of our readers asks: What are your thoughts on publishers like The New York Times suing OpenAI for copyright infringement? Do you think they’ll succeed in protecting journalists from seeing their work scraped and/or plagiarised?

This hits hard for me, because I’m both the person reporting on it and the person that it impacts. We’ve seen how previous waves of technological growth, particularly the social media wave, have undermined the press and the publishing industry. There’s been a huge disintermediation of the news through social media platforms and tech platforms; these are now the pipes through which people get information, and we rely on them to do it for us. We’ve come to a similar inflection point where you can see how these companies can scrape the data we’ve all created and generate something that looks a lot like what we do with far less labour, time, and expertise.

It could easily undermine what creative people spend their lives doing. So I think it’s really important that the most respected and venerable institutions take a stand for why human creativity matters. Ultimately, I don’t know what the consequences will be. Maybe it’s a financial deal where we’re compensated for what we’ve produced, rather than it being scraped for free. There are a range of solutions. But for me, it’s important that those who have a voice stand up for creative people in a world where it's easy to automate these tasks to the standard of “good enough.”

Another reader asks: What AI regulations do you foresee governments enacting? Will ethical considerations be addressed primarily through legislation, or will they rely on nonlegal frameworks like ethical codes?

Especially over the last five years, there have been dozens and dozens of codes of conduct, all self-regulating. It’s exactly like what we saw with social media. There has been no Internet regulation, so companies come up with their own terms of service and codes of conduct. I think this time around, with the AI shift, there’s a lot more awareness and participation from regulators and governments.

There’s no way around it; there will be regulation because regulation is required. Even the companies agree with this, because you can’t define what’s ethical when you’re a corporation, particularly a profit-driven corporation. If these things are going to impact people’s health, people’s jobs, people’s mortgages, and whether somebody ends up in jail or gets bail, you need regulation involved. We’ll need lines drawn in the sand, and that will come via the law.

In the book, you note how governments have become dependent on these private tech companies for certain services. What would it look like to change course there, and if we don’t, where does that road lead?

It goes back to that question of colonialism. I spoke to Cori Crider, who used to be a lawyer for Guantanamo Bay prisoners and is now fighting algorithms. She sees them as equally consequential, which is really interesting. She told me about reading a book about the East India Company and the Anglo-Iranian Oil Corporation, which played a role in the Iranian coup in the ’50s, and how companies become state-like and the state becomes reliant on them. Now, decades later, the infrastructure of how government runs is all done on cloud services.

There are four or five major cloud providers, so when you want to roll out something quickly at scale, you need these infrastructure companies. It’s amazing that we don’t have the expertise or even the infrastructure owned publicly; these are all privately owned. It’s not new, right? You do have procurement from the private sector, but it’s so much more deeply embedded when it comes to cloud services and AI, because there are so few players who have the knowledge and the expertise that governments don’t. In many cases, these companies are richer and have more users than many countries. The balance of who has the power is really shifting.

When you say there are so few players, do you see any sort of antitrust agitation here?

In the U.S., the FTC is looking at this from an antitrust perspective. They’re exploring this exact question: “If you can’t build AI services without having a cloud infrastructure, then are you in an unfair position of power? If you’re not Microsoft, Google, Amazon, or a handful of others, and you need them to build algorithms, is that fair? Should they be allowed to invest and acquire these companies and sequester that?” That’s an open question here in the UK as well. The CMA, which is our antitrust body, is investigating the relationships between Microsoft, OpenAI, and startups like Mistral, which have received investment from Microsoft.

I think there will be an explosion of innovation, because that's what Silicon Valley does best. What you're seeing is a lot of people building on top of these structures and platforms, so there will be more businesses and more competition in that layer. But it's unclear to me how you would ever compete on building a foundational model like a GPT-4 or a Gemini without the huge investment and access to infrastructure and data that these three or four companies have. So I think there will be innovation, but I'm not sure it will be at that layer.

In the final chapter of the book, you turn to science fiction as a lens on this issue. In this moment where the ability to make a living as an artist is threatened by this technology, I thought it was inspired to turn to a great artist like Ted Chiang. How can sci-fi and speculative fiction help us understand this moment?

You know, it’s funny, because I started writing this book well before ChatGPT came out. In fact, I submitted my manuscript two months after ChatGPT came out. When it did come out, I was trying to understand, “What do I want to say about this now that will still ring true in a year from now when this book comes out?” For me, sci-fi felt like the most tangible way to actually explore that question when everything else seemed to be changing. Science fiction has always been a way for us to imagine these futures, to explore ideas, and to take those ideas through to a conclusion that others fear to see.

I love Ted Chiang’s work, so I sat down to ask him about this. Loads of technologists in Silicon Valley will tell you they were inspired by sci-fi stories to build some of the things that we writers see as dystopian, but technologists interpret them as something really cool. We may think they’re missing the point of the stories, but for them, it’s a different perspective. They see it through this optimistic lens, which is something you need to be an entrepreneur and build stuff like the metaverse.

Sci-fi can both inspire and scare, but I think more than anything, we are now suffering from a lack of imagination about what technology could do in shaping humans and our relationships. That’s because most of what we’re hearing is coming from tech companies. They’re putting the products in our hands, so theirs are the visions that we receive and that we are being shaped by. That’s fine; that’s one perspective. But there are so many other perspectives I want to hear, whether that’s educators or public servants or prosecutors. AI has entered those areas already, but I want to hear their visions of what they think it could do in their world. We’re very limited on those perspectives at the moment, so that’s where science fiction comes in. It expands our imagination of the possibilities of this thing, both the good and the bad, and figuring out what we want out of it.

I loved what Chiang had to say about how this technology exposes "how much bullshit we are required to generate and deal with in our daily lives." When I think about AI, I often think that these companies have gotten it backwards. As a viral tweet so aptly put it: "I want AI to do my laundry and dishes so I can do my art and writing, not for AI to do my art and writing so I can do my laundry and dishes." That's a common sentiment—a lot of us would like to see AI take over the bullshit in our lives, but instead it's threatening our joys. How have we gotten to this point where the push is for AI to do what we love and what makes us human instead of what we'd actually like to outsource?

I think about this all the time. When it started off, automation was just supposed to help us do the difficult things that we couldn’t. Way back at the beginning of factory automation, the idea was “We’ll make your job safer, and you can spend more time on the things that you love.” Even with generative AI, it was supposed to be about productivity and email writing. But we’ve slid into this world where it’s undermining the things that, as you say, make us human. The things that make our lives worth living and our jobs worth doing. It’s something I try to push back on; when I hear this assumption that AI is good, I have to ask, “But why? What should it be used for?” Why aren’t we talking about AI doing our taxes—something that we struggle with and don’t want to spend our time doing?

This is why we need other voices and other imaginings. I don’t want AI to tell bedtime stories to my children. I don’t want AI to read all audiobooks, because I love to hear my favourite author read her own memoir. I think that’s why that became a meme and spoke to so many people. We’ve all been gaslighted into believing that AI should be used to write poetry. It’s part of a shift we’ll all experience together from saying, “It’s amazing how we’ve invented something that can write and make music” to “Okay, but what do we actually need it for?” Let’s not accept its march into these spaces where we don’t want it. That’s what my book is about: about having a voice and finding a way to be heard.

I’m reminded of the chapter about a doctor using AI as a diagnostic aid. It could never replace her, but it’s a great example of how this technology can support a talented professional.

She’s such a good personification of how we can preserve the best of our humanity but be open to how AI might help us with what we care about; in her case, that’s her patients. But crucially, her patients want to see her. That’s why I write about her previous job, where people were dying and she didn’t have the equipment to help them. She had to accept that there were limitations to what she could do as a doctor, but she could perform the human side of medicine, which people need and appreciate. This is how we should all see AI: as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us. She was an amazing voice to help me understand that.

With the daily torrent of frightening news about the looming threat of AI, it’s easy to feel hopeless. What gives you hope?

I structured my book to start with the individual and end with wider society. Along the way, I discovered amazing examples of people coming together to fight back, to question, to break down the opacity in automation and AI systems. That’s what gives me hope: that we are all still engaging with this, that we’re bringing to it our humanness, our empathy, our rage. That we’re able to collectivise and find a way through it. The strikes in Hollywood were a bright spot, and there’s been so much change in the unionisation of gig workers across the world, from Africa to Latin America to Asia. It gives me hope that we can find a path and we’re not just going to sleepwalk into this. Even though I write about the concentration of power and influence that these companies have, I think there’s so much power in human collectivism and what we can achieve.

Also, I believe that the technology can do good, particularly in health care and science; that’s an area where we can really break through the barriers of what we can do as people and find out more about the world. But we need to use it for that and not to replace us in doing what we love. My ultimate hopefulness is that humans will figure out a way through this somehow. I’ve seen examples of that and brought those stories to light in my book. They do exist, and we can do this.

Originally published on Esquire US

We return to the intersection of "Style" and "Tech", where the Fendi x Devialet Mania mash-up resides. The Italian fashion house teams with the French audio maestros for a portable speaker that turns heads. It's a Devialet Mania—a high-fidelity speaker boasting 360° stereo sound—wrapped in Fendi's iconic monogram.

Earlier in the year, the Fendi x Devialet Mania edition made its first appearance at the Fendi Autumn/Winter 2024 menswear runway show in Milan. At first glance, it looked like a male model sauntering down the runway with a rotund carrier; it turned out to be a Devialet Mania covered in Fendi's two-tone monogram in tobacco and brown, with a sand handle and gold details (which, we are told, are actual gold).

Originally launched in 2022, the Devialet Mania utilises proprietary acoustic mapping technology and Active Stereo Calibration (ASC) to adjust its sound to suit any room. This means, as a listener, you'll get the optimal delivery of pitch-perfect treble and bone-rattling bass. Each edition comes complete with an add-on wireless charging dock, the Devialet Mania Station. With a staggering 30–20,000 hertz audio range, IPX4 splash resistance and Devialet's first built-in battery offering up to 10 hours of wireless listening, the Fendi motif elevates this piece of tech into a piece of art.

The Fendi x Devialet Mania edition retails for SGD4,100 and is available online and at Devialet outlets.

It's that time of the year when Apple kickstarts its Worldwide Developer Conference (WWDC) 2024. Esquire Singapore was at Apple Park where it all went down. Although Tim Cook opened the keynote and revealed a few of the things the company has been working on, it was ultimately Senior VP of Software Engineering Craig Federighi's show. Through his amiable style and parkour (you'll understand if you watch the keynote video), he announced updates across the operating systems (iOS 18, iPadOS 18, macOS Sequoia, watchOS 11 and visionOS 2); what's on the Apple TV+ slate; the Vision Pro coming to Singapore; and the reveal of Apple Intelligence... or AI ("give-the-marketing-team-a-raise"). Here are the biggest takeaways from WWDC.

Apple Intelligence

After keeping mum on AI, Apple loudly announced its proprietary take: Apple Intelligence. It works across all of Apple's devices, and we saw a demonstration of its use in Writing Tools. You can now see summaries of your emails, and have an email's tone rewritten to reflect your intent. Apple Intelligence can also generate transcript summaries of live phone calls or recordings.

If you tire of 😉 (winking face), 🫃("Uh-oh, I seem to have cirrhosis of the liver.") or 💦🍆 (wash your vegetables), you can generate customised emojis with Genmoji. Simply describe what you want to see as an emoji and Apple Intelligence will create it.

A step up from Genmoji is Image Playground. Again, type in any descriptor and the style (currently only animation, illustration and sketch options are available) and the image will be produced. You can do the same with images from your Photos library or from your Contact list. We were also shown how Apple Intelligence can flesh out rudimentary sketches or ideas through Image Wand. With a finger or Apple Pencil, circle a sketch and after analysing it, Image Wand will produce a complementary visual.

With Apple Intelligence, Siri finally gets the limelight it deserves. Siri can carry out specific tasks with an awareness of your personal context. This means it's able to go through your apps and create a personalised approach. For example, if you ask Siri how to get to a destination, it will trawl through your travel history and the weather forecast to formulate the best personalised route for you. Which, for me, is a long languid bus ride because I have no money for cabs and I hate playing the game of "Should I Give Up This Seat For This Person?"

Siri also has a richer language understanding, so if you have made a verbal faux pas and you backtrack, Siri will know what you mean. Does this mean that Siri will understand Singlish? Welp, Apple says that US English will roll out first, followed by other languages. Hope springs eternal, I guess.

And if you’re skittish about speaking out loud to Siri about—oh for example—whether you need to give up your seat to someone who may or may not take offence to said seat offer, you can type it to Siri instead, you coward (my words).

The rumours leading up to WWDC24 about Apple's collaboration with OpenAI came true: ChatGPT is integrated into Siri and Writing Tools. If Siri is stymied by your request, it will tap into ChatGPT's expertise. You will be asked whether your info can be shared with ChatGPT, and you can control when it is used. It's also free to use without the need to create an account. Some people aren't too keen on the Apple Intelligence and ChatGPT union.

Given the outcry about user data being sneakily used to aid machine learning, Apple doubled down on its stance on user privacy, ensuring that even though Apple Intelligence is privy to your personal information, it doesn't collect it. While many of the large language and diffusion models run on the device, there are certain instances where a request needs to be handled in the cloud. That's where Private Cloud Compute comes in: a cloud-based model running on special servers built on Apple Silicon, where your data is never stored and is used only to handle your AI request. This is what Apple proudly termed a "new standard for privacy".

Apple TV+

Ever wondered who the hell is on screen and found yourself scrolling through IMDb? Now there's InSight, an Apple TV+ feature that shows which actor is playing which character when they appear in a scene. There's even a handy bit of info about the music playing in the scene. InSight is only available for Apple TV+ original programming.

We even got a preview of what's coming to Apple TV+. A slight squeal may or may not have issued from us over the sight of Severance and Silo in the montage.

macOS

Called Sequoia, it adds iPhone Mirroring through Continuity, letting you access your iPhone from your Mac. We saw a demo where one could open the iPhone's Duolingo app and actually go through a lesson. The best part is that while this is happening, the iPhone stays locked, so no one other than you has access to it.

iPadOS 18

The Calculator app finally arrives on iPad, with an added feature: using your Apple Pencil, you can write out an equation with Math Notes in the Calculator app. Once you write the "=" sign, it immediately calculates. If you change any of the numbers, the tally automatically adjusts.

There's also Smart Script, a feature that refines your handwritten notes. Scratch out a word and it's automatically erased, just like that.

visionOS 2

Finally, a special announcement from WWDC: Apple's Vision Pro gets an operating system update. Using machine learning, it takes your 2D photos and adds depth to them, giving these new spatial photos more life. There are expanded intuitive gestures to use with your Vision Pro, and an ultrawide virtual display to work on.

Oh, and the Vision Pro will be available in Singapore from 28 June.

For more information on WWDC 2024, check out the Apple website.
