It was an audacious move when Dyson decided to plunge into the deep end of audio. Dyson is allowed to experiment, but with the Dyson Zone, it was trying to be a lot of things at once. It was a pair of headphones, but it was also an air purifier. It's as though the brand wasn't confident in its foray into the audio space and still clung to the signature fans that put it on the map in the first place. Those two disparate functions—audio fidelity and air purification—found a shaky common ground in the Zone, but not only was the design ridiculous (Bane, anyone?), it was heavy and, in some cases, the air-purifying sensors weren't as accurate as they should be. But the noise cancellation and audio fidelity showed promise, which brings us to the brand's first audio-only headphones: the Dyson OnTrac.

Drawing from 30 years' worth of aeroacoustics R&D, what Dyson has going for it is its own custom Active Noise Cancellation (ANC) algorithm. The ear cushions on the headphone cups create a seal over the ears, and each cup is outfitted with eight microphones that monitor external sound 384,000 times per second, reducing noise by up to 40dB. Armed with 40mm, 16-ohm neodymium speaker drivers and advanced audio signal processing, the OnTrac delivers clear audio across a wide frequency range, from a resonant 6 Hertz to a crisp 21,000 Hertz. Another feature is the speaker housing, which is tilted 13 degrees towards the ear for a more direct audio response.
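If you're curious how noise cancellation actually works, the core idea is simple: measure the incoming noise and play back its phase-inverted mirror image so the two cancel out at the ear. Here's a minimal toy sketch of that principle in Python (ours, not Dyson's algorithm, with illustrative numbers throughout):

```python
import numpy as np

def anti_noise(mic_samples: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Return the phase-inverted signal that, summed with the
    incoming noise at the ear, ideally cancels it out."""
    return -gain * mic_samples

# Toy example: a 100 Hz hum sampled 384,000 times per second,
# the rate at which the OnTrac's mics monitor external sound.
sample_rate = 384_000
t = np.arange(0, 0.01, 1 / sample_rate)
noise = np.sin(2 * np.pi * 100 * t)

residual = noise + anti_noise(noise)
print(np.max(np.abs(residual)))  # ~0.0 in theory; real ANC never gets this close
```

Real ANC is far harder than this: the algorithm has to adapt to ever-changing noise within microseconds, which is why that 384,000-samples-per-second figure matters.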

You get a battery life of up to 55 hours. For better weight distribution, the two high-capacity lithium-ion battery cells are positioned at the 10 and 2 o'clock points of the headband instead of in the cups. The ergonomics of the headphones are great: we wore them for about two hours without any tension in the neck or the temples. High-grade foam cushions and multi-pivot gimbal arms relieve ear pressure, while the soft micro-suede ear cushions and optimised clamp force ensure a consistent and comfortable fit.

Design and Customisation

One thing that sets the Dyson OnTrac apart from other headphones is that it allows you to customise the ear cushions and the outer caps. Usually, that sort of modularity is avoided to maintain the drivers' integrity, but Dyson is confident that even when you swap out the modular cushions and caps, the OnTrac will perform as well as it should.

The Dyson OnTrac comes in four base colourways—aluminium (finished via computer numerical control machining); copper; nickel and a cinnabar variant with a ceramic-like painted finish. Then you have customisable caps and cushions in different hues, which give over 2,000 colour combinations. The caps are made of high-grade aluminium and come in either anodised or ceramic finishes.

The Dyson OnTrac headphones retail for SGD699 and will be available from September 2024 at all Dyson outlets and online.


When it comes to prognostication, look to the dreamers who imagine the things that will be made possible. For Apple, it's the sort of long view that benefits from the company's lengthy development cycle for its products. It is this sort of gestation period that allowed the Apple Vision Pro (AVP) to come to fruition.

In 2007, Apple was granted a patent for an "HMD (head-mounted display)", which means development of the Apple Vision Pro ran for 16 years. That's the problem with dreamers: it takes a while before everybody, including the technology, catches up. But while most of the tech needed to be invented, some things, like the look of the device, didn't veer too far from a concept sketch.

"When we started this project, almost none of these tech existed," said Mike Rockwell, VP of Apple’s Vision Products Group. "We had to invent almost everything to make it happen."

Alan Dye, Apple’s Vice President of Human Interface Design, said that the AVP was, by far, the most ambitious Apple product they had ever had to design. "I can't believe that any other company would be able to make something like this, as it requires all disciplines across the studio working together to create one singular product experience. It's kinda unprecedented."

Richard Howarth, Vice President of Industrial Design, concurs. "One of the reasons that it was very ambitious was that it hadn't been done. Nothing with this sort of resolution and computing power had ever been done.

"We didn't even know if it was possible."


The prototype was huge, powered by a roomful of computers with thick cables running between them. Although a behemoth, the prototype represented proof that it was possible.

Yes, during development, there were VR headsets released to the public. But Dye and Howarth weren't interested in creating a VR headset; they wanted a way to bridge people. Dye explains that whenever someone dons a VR headset, they are isolated from other people around them. "We wanted [the Apple Vision Pro] to foster connection, both by bringing people from across the world right into your space or by remaining connected with people around you."

That intent to connect framed how the product was designed. Take EyeSight, where your eyes—or a simulacrum of your eyes—appear on the front of the Apple Vision Pro if you're addressing someone; if you're using an app, an animation plays letting others know that you can't see them. Essentially, it's a visual aid for others to know whether you're available or not.

Mixing AR and VR, the Apple Vision Pro would pioneer "spatial computing": the integration of digital information into the user's physical environment. The only way that could work was with Apple's proprietary M2 chip powering the device, alongside an R1 spatial co-processor. Another way the AVP copes with the workload is "foveated rendering", where it only renders what your eyes are looking at.

The micro-OLED display, which puts out 23 million pixels, didn't hurt either. There are also 12 cameras for precise inside-out tracking (they track your eyes, your hand gestures and anyone who comes within your ambit).

Dye and Howarth didn't want to use external controllers and opted for hand gestures and voice commands to get around. The hardware is only as good as the software. That's where visionOS comes in.

visionOS lets you create your own Persona, an almost-realistic avatar of yourself, and allows for the aforementioned EyeSight. It's still kinda janky (eye tracking is left wanting when I'm selecting something at the edge of my periphery).

Still, visionOS won the prestigious D&AD Black Pencil award for Digital Design and a Silver Cannes Lion for Digital Craft. The judges saw potential, and there's still visionOS 2 on the horizon, an update that allows a functioning Magic Keyboard to appear in a virtual environment and lets you customise the position of icons on the home screen.

One of the features I look forward to is creating spatial photos from the images in the Photos app library. Using advanced machine learning, visionOS turns a 2D image into a spatial photo that comes to life on the AVP.



It was only a few years ago that Google Glass was slammed by the public for being too intrusive. While cultural norms have shifted to the point where the public is lax about privacy, Apple remains adamant about privacy and security.

"It's important to know that Vision Pro has a privacy-first design at its core. We took great care in privacy and security for it," Rockwell says. "We don't give camera access to the developers directly. When your eyes highlight a UI element, the developers won't know where your eye position is. They are only informed if you tap on something. Another thing is that if you're capturing a spatial video or spatial photo, it alerts others on the front display that you're recording."

The Apple Vision Pro retails for SGD5,299, and there will be many who baulk at that price.

"We built an incredible product that we believe has enormous value," Rockwell explains. "This is not a toy. It's a very powerful tool. It's intended to be something that can give you computing capabilities, the ability to use this in a way where there's nothing else out there that can do it. We reached into the future to pull back a bunch of technology to make it happen.

"We want to ensure that this has fundamental and intrinsic value and we believe that at the price, it is of good value."

Perhaps access to the future is worth the ticket? The Apple Vision Pro is emblematic of the promise of the imminent. Of the convenience of speaking with your loved ones or the experience of traipsing through lands unseen.

Like the eyes of the oracle, the device brims with potential and, given time, that future will be more fully realised.

The Apple Vision Pro is out now.

The luxury fashion house Versace is breaking new ground by venturing into the gaming and virtual world of Fortnite. How? By letting players experience the Versace Mercury sneaker in-game for a limited time.

The collaboration adds a fun twist by inviting players into Murder Mystery (a Fortnite Creative map) to explore an archaeological dig site. The first to discover the Versace Mercury sneaker wins; finding the digital treasure comes with a special in-game perk and puts that player in the spotlight for the round. To promote the sneaker, Twitch streamer Agent 00 is offering his viewers a chance to win a real pair during his stream.


That's not all. Versace has also teamed up with Snapchat, offering a Snap AR experience and a Bitmoji digital collection. Platforms like Fortnite offer a new medium for brands to convey their story and visual identity in gamified and social ways. This approach gives players relevant rewards that enhance their experience, whether through gameplay or cosmetic items for self-expression. With this collab, Versace continues its commitment to virtual worlds and to the value of digital fashion items.

In the real world, the Versace Mercury collection is made from high-quality calf leather and features a single sole, embodying both futuristic design and versatility. Sci-fi inspired, these sneakers have a complex structure of 86 precisely crafted components. The upper and lining of each pair alone consist of 30 pieces, all seamlessly cut and stitched together.

Fortnite is free-to-play and is available on most platforms including PlayStation 4; PlayStation 5; Nintendo Switch and Xbox Series X/S.

Samsung held its biannual unveiling of devices yesterday in Paris. It was, after all, a marketing strategy: a slew of Samsung devices announced at the locale of this year's Olympics. With pomp and circumstance comes the expectation of something new from the South Korean tech giant. Here is what was announced at this year's Unpacked event.

Galaxy AI

The AI game heats up even further as Samsung reiterates its commitment to integrating Galaxy AI into its product ecosystem. Samsung was the first major phone brand to announce its use of Galaxy AI, and while that thunder was stolen by Apple announcing its own proprietary Apple Intelligence, Samsung reminds us that it already has a working Galaxy AI and that more of its products will carry it.

One of the more impressive Galaxy AI additions is the Sketch to Image feature, which can turn your rudimentary doodle into fully fleshed-out images in different styles. Apple showed off similar capabilities with its Image Wand, but at this point it's all about speed in showcasing AI, so this round goes to Samsung.

Galaxy Z Fold6 and Z Flip6

Samsung's signature foldables return: the Galaxy Z Fold6 and the Galaxy Z Flip6. Touted as "the slimmest and lightest Z series" yet, they are also blessed with enhanced Armor Aluminum and Corning Gorilla Glass Victus 2 for added durability. In addition to being reliable, the Z series is also powerful: both the Z Fold6 and Z Flip6 are equipped with the Snapdragon 8 Gen 3 Mobile Platform, the most advanced Snapdragon mobile processor yet.

The Galaxy Z Fold6 has a sleeker design and a Dynamic AMOLED 2X screen, which gives it unparalleled brightness. Its chipset and a 1.6x larger vapour chamber make for an upgraded gaming experience. Meanwhile, the Galaxy Z Flip6 has a new 50MP wide camera, a 12MP ultra-wide sensor and a larger battery.

Galaxy Watch Ultra

Let's address the elephant dominating the room: yes, obvious comparisons will be made with the Apple Watch Ultra. From the orange band to the orange "action button" to the shape of the dial, I guess imitation is a form of flattery? But looks aside, the Galaxy Watch Ultra seems to hold its own with its pricing and health-measurement specs.

Galaxy Ring

Besides the smartwatch, there's this smart ring, which is meant to be worn throughout the day. It's less intrusive than a smartwatch, which makes for easier health tracking. Imbued with three sensors—accelerometer, photoplethysmography and skin temperature—the Galaxy Ring can monitor and collate various health metrics. It comes in several sizes.

Relive the unpacking here

Welp, those were our key takeaways from this year's Unpacked. As we spend time with the devices, we'll let you know in depth what to expect from each of them.

Let's start with science fiction and how we imagine it—the time travelling; phasers; lightsabers. It's what makes the future so alluring: that the things we imagine are made real. Of course, there are always the pesky constraints of real-world physics that keep such wonders shackled in the realm of the mind. But sometimes a little stubbornness goes a long way. Such is the case of Apple and its entry into the mixed reality game: the Vision Pro.

From the View-Master (remember those?) to the Oculus Rift, we have been creating "headsets that immerse you into another reality". (To set the record straight, we're not talking about augmented reality, which is digital content overlaid on the real world, but mixed reality, which integrates digital objects into the user's environment.)

Apple may not have pioneered mixed reality but it sure is gonna leave its competitors in its wake with "spatial computing".

We tried the Apple Vision Pro (or the AVP, which shares an initialism with Aliens Versus Predator) and the visuals are, for lack of a better word, magical. It's magical that you can look at an icon, tap your fingertips together and open up the programme. It's magical that you don't get the bends from being in an immersive video. And it is so magical that you can open up multiple windows and... work became fun? It felt like that Johnny Mnemonic scene.

One of the ways the AVP is able to process the workload is a sneaky thing called "foveated rendering". Because it tracks your eyes, it only renders what you are looking at: stare at a window and it comes into focus. Look at another window and that becomes sharp. If you think about it, that's how our eyes work anyway.
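For the technically inclined, here's a rough sketch of that idea in Python (the windows, radius and two quality levels are made up for illustration; a real renderer blends quality smoothly rather than flipping between two settings):

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

class Window:
    def __init__(self, name, centre):
        self.name, self.centre = name, centre

    def render(self, resolution):
        return f"{self.name} @ {resolution} res"

def render_frame(windows, gaze_point, fovea_radius=200):
    # Full quality only where the eye is fixating; a cheap pass elsewhere.
    return [
        w.render("full") if distance(w.centre, gaze_point) <= fovea_radius
        else w.render("low")
        for w in windows
    ]

windows = [Window("Mail", (100, 100)), Window("Safari", (800, 400))]
print(render_frame(windows, gaze_point=(120, 90)))
# ['Mail @ full res', 'Safari @ low res'] -- shift your gaze and it flips
```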

The hardware of this thing is incredible. The frame is made of magnesium and carbon fibre, and twelve cameras—for everything from hand tracking to spatial tracking—are positioned throughout the headset. There's an M2 processor and an R1 spatial co-processor to deliver smooth performance. Eye tracking is a cinch and there's no lag in the video passthrough.

On the corners of the goggles are a digital crown, which adjusts the volume and the level of immersion, and a button that you depress to take photos and videos. There are speakers fixed to the arms of the Vision Pro, but if the volume goes past a certain level, everybody around you is privy to what you're hearing.

The AVP's Persona feature is kinda weird. Think of a Persona as your avatar. Your Persona reflects your facial expressions (sticking out your tongue; gesticulating with your hands), and it sits on the fringes of the Uncanny Valley. You can FaceTime or enter an online meeting with other people's Personas; when they appear, the hairs on your arm will rise a little. But after a while, you get used to it. And then their Personas kinda look like ghosts in your living room. Except they are presenting a PowerPoint.

If you're wondering why you can't use a Memoji instead, the only reason I can think of is that a business meeting demands a level of professionalism, so a unicorn or a poop Memoji may not fly. Then again, it would be nice to have options. Perhaps in the next visionOS upgrade.

By the way, there's an announcement that there will be a visionOS 2, where you can create spatial photos from your 2D images, with new gesture controls and an enhanced Persona (accurate skin tone, clothing colour options). Who knows, maybe Memojis will make it in?

Is the writer opening up an app or is he dead?

The Downsides

The price is steep. Like SGD5,299 steep. But that's to account for the years of R&D and the components. You hold the AVP in your hands and it feels nice. And I suspect that months from now, people won't blink at the price tag. I remember when mobile phones retailed at four digits and my uncle self thought, welp, I'm not paying that much for a compact supercomputer. A year or two later, that sort of pricing for a mobile phone became normalised.

To fit in all that goodness that makes the AVP work its magic, it will have some weight to it. To be fair, it weighs about 649g. That's equivalent to a medium-sized chinchilla or a bag of Cadbury Triple Pack Mixed Eggs. Not that heavy, right? But when you're wearing the AVP outfitted with a Solo Knit Band, after a while, you're gonna feel it in your face. And because of my terrible posture, my neck compensates for the weight and I hunch even further.

As a remedy, you can swap out the Solo Knit Band for the Dual Loop Band, which gives better weight distribution. Or, if you're a stubborn cock like me and you find it leceh to change to a Dual Loop Band, you can wear it lying down.

If you're worried about the tension in your neck, don't worry; you'll know it's time to put down the AVP when it runs out of battery at two hours of general use.

I kid.

Verdict

It's not perfect but this is a game changer. The AVP shows what is possible with the tech of today, yet also poses the question of what else can be done. We don't think that Apple is done with the Vision Pro; there's a roadmap, and it's gonna take a few generations of the AVP before it gets to the stage where you can't ignore it any longer. Like the first-gen iPod or the first-gen iPhone, the AVP has raised the bar and the other brands are gonna have to play catch-up.

It's a promise of a future, one that is bright with potential and all it took was an Apple Vision Pro for that glimpse.

The Apple Vision Pro is out now.

It's hard to think of Dyson as anything but a vacuum company. It's true that it was founder James Dyson's reinvention of the vacuum turbine that propelled the still-family-owned business into the spotlight, but the brand has been diversifying into other areas like hair dryers, lamps and air purifiers. It even dipped its toes into EVs for a period before abandoning the project altogether. The company sees a market in household equipment, which makes this next product kinda a no-brainer but also has us scratching our heads. Y'all, meet the WashG1.

This is marketed as a "wet cleaner"... which my mother, in her infinite wisdom, calls an "atas mop". But this isn't Dyson's first foray into mopping: there was the V12s Detect Submarine, a dry vacuum that can mop as well.

The conventional thinking is that a wet cleaner operates by suctioning up wet debris. But that usually clogs up the moving parts, and trapped debris can emit a bad odour. So fixing a turbine in the WashG1 was a no-go. How does a brand known for its turbine innovation reinvent the wet cleaner? Simple. Instead of air suction, the machine uses water pressure.

Water delivery is governed by a pulse-modulated hydration pump that adjusts the amount of water dispensed. The clean-water tank holds one litre, while a separate tank collects the filthy water. A separation feature divides debris and dirty water at the source, enabling hygienic, no-touch disposal. You can use plain water for the clean-up or add a little floor-cleaning liquid. Alas, the WashG1 only works on hard flooring. Carpets? Forget about them. There are three cleaning modes, and users can also opt for a no-water mode.
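Pulse modulation, in miniature, works like this: the pump switches on and off rapidly, and the fraction of time it spends "on" sets the average flow. A toy sketch in Python, with made-up numbers rather than anything from Dyson's spec sheet:

```python
def water_flow(duty_cycle: float, max_flow_ml_s: float = 10.0) -> float:
    """The fraction of time the pump is 'on' (the duty cycle)
    determines the average water flow it delivers."""
    assert 0.0 <= duty_cycle <= 1.0
    return duty_cycle * max_flow_ml_s

# Hypothetical duty cycles for the WashG1's cleaning modes.
for mode, duty in [("max", 1.0), ("medium", 0.5), ("low", 0.25), ("no-water", 0.0)]:
    print(f"{mode}: {water_flow(duty):.1f} ml/s")
```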

Close-up of the two rollers that pick up dirt, which is then separated.

The cleaner head has two motorised, counter-rotating microfibre rollers that absorb the dirt. With each rotation, dirt is extracted, water wets the roller, and the roller presses against a plate that squeezes out the dirty water. A secondary roller with nylon bristles picks up bigger debris and hair, which are collected in a tray that sits in the cleaner head.

In the end, the WashG1 does the job. Quite remarkably, I must add.

A charging stand lets you rest the WashG1 in its dock, where it cleans itself. The time it takes? About two minutes. But if you're anything like my mom, you can clean the WashG1 yourself: detach the rollers from the cleaner head and wash them. The water tanks can be removed for cleaning as well.

Downsides to the WashG1? Well, we mentioned that it is only effective on hard flooring. And the rollers won't last forever. Exactly how often they need replacing depends on how much washing you do, but for a daily clean, Dyson estimates a minimum of six months.

Bottom line: will the WashG1 replace the mop? It depends. It's pretty good with the clean-up, but the price (SGD999) might put some people off.

Housework isn't usually sexy, but the WashG1 makes the process a hell of a lot easier.

The Dyson WashG1 will be available online and at all Dyson stores and distributors in July.

In the age of AI, it can feel as if this technology’s march into our lives is inevitable. From taking our jobs to writing our poetry, AI is suddenly everywhere we don’t want it to be.

But it doesn’t have to be this way. Just ask Madhumita Murgia, the AI editor at The Financial Times and the author of the barn-burning new book Code Dependent: Living in the Shadow of AI. Unlike most reporting about AI, which focuses on Silicon Valley power players or the technology itself, Murgia trains her lens on ordinary people encountering AI in their daily lives.

This “global precariat” of working people is often irrevocably harmed by these dust-ups; as Murgia writes, the implementation and governance of algorithms has become “a human rights issue.” She tells Esquire, “Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.”

Murgia takes readers around the globe in a series of immersive reported vignettes, each one trained on AI’s damaging effects on the self, from “your livelihood” to “your freedom.” In Amsterdam, she highlights a predictive policing program that stigmatises children as likely criminals; in Kenya, she spotlights data workers lifted out of brutal poverty but still vulnerable to corporate exploitation; in Pittsburgh, she interviews UberEats couriers fighting back against the black-box algorithms that cheat them out of already meagre wages.

Yet there are also bright spots, particularly a chapter set in rural Indian villages, where under-resourced doctors use AI-assisted apps as diagnostic aids in their fight against tuberculosis. Despite the prevalent sense of impending doom, there’s still time to reconfigure our relationship to this technology, Murgia insists. “This is how we should all see AI,” she tells Esquire, “as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us.”

Murgia spoke with Esquire by Zoom from her home in London about data labour, the future of technology regulation, and how to keep AI from reading bedtime stories to our children.


ESQUIRE: What is data colonialism, and how do we see it manifest through the lens of AI?

MADHUMITA MURGIA: Two academics, Nick Couldry and Ulises A. Mejias, came up with this term to draw parallels between modern colonialism and older forms of colonialism, like the British colonisation of India and other parts of the world. The resource extraction during that period harmed the lives of those who were colonised, much like how corporations today, particularly tech companies, are performing a similar kind of resource extraction. In this case, rather than oil or cotton, the resource is data.

In reporting this book, I saw how big Silicon Valley firms go to various parts of the world I visited, like India, Argentina, Kenya, and Bulgaria, and use the people there as data points to build systems that become trillion-dollar companies. But the people never see the full benefits of those AI systems to which they’ve given their data. Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.

You write that data workers “are as precarious as factory workers; their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.” What would it take to make their labour more apparent, and what would change if the reality of how AI works was more widely understood?

For me, the first surprise was how invisible these workers really are. When I talk to people, they’re shocked to learn that there are factories of real humans who tag data. Most assume that AI teaches itself somehow. So even just increasing understanding of their existence means that people start thinking, There’s somebody on the other end of this. Beyond that, the way the AI supply chain is set up, we only see the engineers building the final product. We think of them as the creators of the technology, so automatically, all the value is placed there.

Of course, these are brilliant computer scientists, so you can see why they’re paid millions of dollars for their work. But because the workers on the other end of the supply chain are so invisible, we underplay what they’re worth, and that shows up in the wages. Yes, these are workers in developing countries, and this is a standard outsourcing model. But when you look at the huge disparity in their living wage of $2.50 an hour going into the technology inside a Tesla car, and then you see what a Tesla car costs or what Elon Musk is worth or what that company is making, the disparity is huge. There’s just no way these workers benefit from being a part of this business.

If you hear technologists talking about it, they say we all get brought along for the ride—that productivity rises, bottom lines rise, money is flushed into our economy, and all of our lives get better. But what we’re seeing in practice is those who are most in need of these jobs are not seeing the huge upside that AI companies are starting to see, and so we’re failing them in that promise. We have to decide as a society: What is fair pay for somebody who’s part of this pipeline? What labour rights should they have? These workers don’t really have a voice. They’re so precarious economically. And so we need to have an active discussion. If there are going to be more AI systems, there’s going to be more data labour, so now is the time for us to figure out how they can see the upside of this revolution we’re all shouting from the rooftops about.

One of our readers asks: What are your thoughts on publishers like The New York Times suing OpenAI for copyright infringement? Do you think they’ll succeed in protecting journalists from seeing their work scraped and/or plagiarised?

This hits hard for me, because I’m both the person reporting on it and the person that it impacts. We’ve seen how previous waves of technological growth, particularly the social media wave, have undermined the press and the publishing industry. There’s been a huge disintermediation of the news through social media platforms and tech platforms; these are now the pipes through which people get information, and we rely on them to do it for us. We’ve come to a similar inflection point where you can see how these companies can scrape the data we’ve all created and generate something that looks a lot like what we do with far less labour, time, and expertise.

It could easily undermine what creative people spend their lives doing. So I think it’s really important that the most respected and venerable institutions take a stand for why human creativity matters. Ultimately, I don’t know what the consequences will be. Maybe it’s a financial deal where we’re compensated for what we’ve produced, rather than it being scraped for free. There are a range of solutions. But for me, it’s important that those who have a voice stand up for creative people in a world where it's easy to automate these tasks to the standard of “good enough.”

Another reader asks: What AI regulations do you foresee governments enacting? Will ethical considerations be addressed primarily through legislation, or will they rely on nonlegal frameworks like ethical codes?

Especially over the last five years, there have been dozens and dozens of codes of conduct, all self-regulating. It’s exactly like what we saw with social media. There has been no Internet regulation, so companies come up with their own terms of service and codes of conduct. I think this time around, with the AI shift, there’s a lot more awareness and participation from regulators and governments.

There’s no way around it; there will be regulation because regulation is required. Even the companies agree with this, because you can’t define what’s ethical when you’re a corporation, particularly a profit-driven corporation. If these things are going to impact people’s health, people’s jobs, people’s mortgages, and whether somebody ends up in jail or gets bail, you need regulation involved. We’ll need lines drawn in the sand, and that will come via the law.

In the book, you note how governments have become dependent on these private tech companies for certain services. What would it look like to change course there, and if we don’t, where does that road lead?

It goes back to that question of colonialism. I spoke to Cori Crider, who used to be a lawyer for Guantanamo Bay prisoners and is now fighting algorithms. She sees them as equally consequential, which is really interesting. She told me about reading a book about the East India Company and the Anglo Iranian Oil Corporation, which played a role in the Iranian coup in the ’70s, and how companies become state-like and the state becomes reliant on them. Now, decades later, the infrastructure of how government runs is all done on cloud services.

There are four or five major cloud providers, so when you want to roll out something quickly at scale, you need these infrastructure companies. It’s amazing that we don’t have the expertise or even the infrastructure owned publicly; these are all privately owned. It’s not new, right? You do have procurement from the private sector, but it’s so much more deeply embedded when it comes to cloud services and AI, because there are so few players who have the knowledge and the expertise that governments don’t. In many cases, these companies are richer and have more users than many countries. The balance of who has the power is really shifting.

When you say there are so few players, do you see any sort of antitrust agitation here?

In the U.S., the FTC is looking at this from an antitrust perspective. They’re exploring this exact question: “If you can’t build AI services without having a cloud infrastructure, then are you in an unfair position of power? If you’re not Microsoft, Google, Amazon, or a handful of others, and you need them to build algorithms, is that fair? Should they be allowed to invest and acquire these companies and sequester that?” That’s an open question here in the UK as well. The CMA, which is our antitrust body, is investigating the relationships between Microsoft, OpenAI, and startups like Mistral, which have received investment from Microsoft.

I think there will be an explosion of innovation, because that’s what Silicon Valley does best. What you’re seeing is a lot of people building on top of these structures and platforms, so there will be more businesses and more competition in that layer. But it’s unclear to me how you would ever compete on building a foundational model like a GPT-4 or a Gemini without the huge investment access to infrastructure and data that these three or four companies have. So I think there will be innovation, but I’m not sure it will be at that layer.

In the final chapter of the book, you turn to science fiction as a lens on this issue. In this moment where the ability to make a living as an artist is threatened by this technology, I thought it was inspired to turn to a great artist like Ted Chiang. How can sci-fi and speculative fiction help us understand this moment?

You know, it’s funny, because I started writing this book well before ChatGPT came out. In fact, I submitted my manuscript two months after ChatGPT came out. When it did come out, I was trying to understand, “What do I want to say about this now that will still ring true in a year from now when this book comes out?” For me, sci-fi felt like the most tangible way to actually explore that question when everything else seemed to be changing. Science fiction has always been a way for us to imagine these futures, to explore ideas, and to take those ideas through to a conclusion that others fear to see.

I love Ted Chiang’s work, so I sat down to ask him about this. Loads of technologists in Silicon Valley will tell you they were inspired by sci-fi stories to build some of the things that we writers see as dystopian, but technologists interpret them as something really cool. We may think they’re missing the point of the stories, but for them, it’s a different perspective. They see it through this optimistic lens, which is something you need to be an entrepreneur and build stuff like the metaverse.

Sci-fi can both inspire and scare, but I think more than anything, we are now suffering from a lack of imagination about what technology could do in shaping humans and our relationships. That’s because most of what we’re hearing is coming from tech companies. They’re putting the products in our hands, so theirs are the visions that we receive and that we are being shaped by. That’s fine; that’s one perspective. But there are so many other perspectives I want to hear, whether that’s educators or public servants or prosecutors. AI has entered those areas already, but I want to hear their visions of what they think it could do in their world. We’re very limited on those perspectives at the moment, so that’s where science fiction comes in. It expands our imagination of the possibilities of this thing, both the good and the bad, and figuring out what we want out of it.

I loved what Chiang had to say about how this technology exposes “how much bullshit we are required to generate and deal with in our daily lives.” When I think about AI, I often think that these companies have gotten it backwards. As a viral tweet so aptly put it: “I want AI to do my laundry and dishes so I can do my art and writing, not for AI to do my art and writing so I can do my laundry and dishes.” That’s a common sentiment—a lot of us would like to see AI take over the bullshit in our lives, but instead it’s threatening our joys. How have we gotten to this point where the push is for AI to do what we love and what makes us human instead of what we’d actually like to outsource?

I think about this all the time. When it started off, automation was just supposed to help us do the difficult things that we couldn’t. Way back at the beginning of factory automation, the idea was “We’ll make your job safer, and you can spend more time on the things that you love.” Even with generative AI, it was supposed to be about productivity and email writing. But we’ve slid into this world where it’s undermining the things that, as you say, make us human. The things that make our lives worth living and our jobs worth doing. It’s something I try to push back on; when I hear this assumption that AI is good, I have to ask, “But why? What should it be used for?” Why aren’t we talking about AI doing our taxes—something that we struggle with and don’t want to spend our time doing?

This is why we need other voices and other imaginings. I don’t want AI to tell bedtime stories to my children. I don’t want AI to read all audiobooks, because I love to hear my favourite author read her own memoir. I think that’s why that became a meme and spoke to so many people. We’ve all been gaslighted into believing that AI should be used to write poetry. It’s part of a shift we’ll all experience together from saying, “It’s amazing how we’ve invented something that can write and make music” to “Okay, but what do we actually need it for?” Let’s not accept its march into these spaces where we don’t want it. That’s what my book is about: about having a voice and finding a way to be heard.

I’m reminded of the chapter about a doctor using AI as a diagnostic aid. It could never replace her, but it’s a great example of how this technology can support a talented professional.

She’s such a good personification of how we can preserve the best of our humanity but be open to how AI might help us with what we care about; in her case, that’s her patients. But crucially, her patients want to see her. That’s why I write about her previous job, where people were dying and she didn’t have the equipment to help them. She had to accept that there were limitations to what she could do as a doctor, but she could perform the human side of medicine, which people need and appreciate. This is how we should all see AI: as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us. She was an amazing voice to help me understand that.

With the daily torrent of frightening news about the looming threat of AI, it’s easy to feel hopeless. What gives you hope?

I structured my book to start with the individual and end with wider society. Along the way, I discovered amazing examples of people coming together to fight back, to question, to break down the opacity in automation and AI systems. That’s what gives me hope: that we are all still engaging with this, that we’re bringing to it our humanness, our empathy, our rage. That we’re able to collectivise and find a way through it. The strikes in Hollywood were a bright spot, and there’s been so much change in the unionisation of gig workers across the world, from Africa to Latin America to Asia. It gives me hope that we can find a path and we’re not just going to sleepwalk into this. Even though I write about the concentration of power and influence that these companies have, I think there’s so much power in human collectivism and what we can achieve.

Also, I believe that the technology can do good, particularly in health care and science; that’s an area where we can really break through the barriers of what we can do as people and find out more about the world. But we need to use it for that and not to replace us in doing what we love. My ultimate hopefulness is that humans will figure out a way through this somehow. I’ve seen examples of that and brought those stories to light in my book. They do exist, and we can do this.

Originally published on Esquire US

We return to the intersection of "Style" and "Tech", where the Fendi x Devialet Mania mash-up resides. The Italian fashion house teams with the French audio maestros for a portable speaker that turns heads. It's a Devialet Mania—a high-fidelity speaker boasting 360° stereo sound—wrapped in Fendi's iconic monogram.

Earlier in the year, the Fendi x Devialet Mania edition made its first appearance at the Fendi Autumn/Winter 2024 menswear runway show in Milan. At first, it looked like a male model sauntering with a rotund carrier; it turned out to be a Devialet Mania covered in Fendi's two-tone monogram in tobacco and brown, with a sand handle and gold details (which, we are told, are actual gold).

Originally launched in 2022, the Devialet Mania utilises proprietary acoustic mapping technology and Active Stereo Calibration (ASC) to adjust its sound to suit any room. This means that, as a listener, you'll get the optimal delivery of pitch-perfect treble and bone-rattling bass. Each edition comes complete with an add-on wireless charging dock, the Devialet Mania Station. And with a 30–20,000 hertz audio range, IPX4 splash resistance and Devialet’s first built-in battery offering up to 10 hours of wireless bliss, the Fendi motif elevates this piece of tech into a piece of art.

The Fendi x Devialet Mania edition retails for SGD4,100 and is available online and at Devialet outlets.

It's that time of the year when Apple kickstarts its Worldwide Developer Conference (WWDC) 2024. Esquire Singapore was at Apple Park where it all went down. Although Tim Cook opened the keynote and revealed a little of what the company was working on, it was ultimately Senior VP of Software Engineering Craig Federighi's show. Through his amiable style and parkour (you'll understand if you watch the keynote video), it was announced that there would be updates across the operating systems (iOS 18; iPadOS 18; macOS Sequoia; watchOS 11; visionOS 2); what's on the Apple TV+ slate; the Vision Pro coming to Singapore; and the reveal of Apple Intelligence... or AI (“give-the-marketing-team-a-raise”). Here are the biggest takeaways from WWDC.

Apple Intelligence

After keeping mum on AI, Apple loudly announced its proprietary AI: Apple Intelligence. It works across all of Apple's devices, and we saw a demonstration of its use in Writing Tools. Now you can see summaries of your e-mails or books, and it can rewrite an e-mail's tone to reflect your intent. Apple Intelligence can also generate transcript summaries of live phone conversations or recordings.

If you tire of 😉 (winking face), 🫃("Uh-oh, I seem to have cirrhosis of the liver.") or 💦🍆 (wash your vegetables), you can generate customised emojis with Genmoji. Simply describe what you want to see as an emoji and Apple Intelligence will create it.

A step up from Genmoji is Image Playground. Again, type in any descriptor and the style (currently only animation, illustration and sketch options are available) and the image will be produced. You can do the same with images from your Photos library or from your Contact list. We were also shown how Apple Intelligence can flesh out rudimentary sketches or ideas through Image Wand. With a finger or Apple Pencil, circle a sketch and after analysing it, Image Wand will produce a complementary visual.

With Apple Intelligence, Siri finally gets the limelight it deserves. Siri can carry out specific tasks with an awareness of your personal context. This means it’s able to go through your apps and create a personalised approach. For example, if you ask Siri how to get to a destination, it will trawl through your travel history and the weather forecast to formulate the best, personalised route for you. Which, for me, is a long, languid bus ride because I have no money for cabs and I hate playing the game of “Should I Give Up This Seat For This Person?”

Siri also has a richer language understanding, so if you have made a verbal faux pas and you backtrack, Siri will know what you mean. Does this mean that Siri will understand Singlish? Welp, Apple says that US English will roll out first, followed by other languages. Hope springs eternal, I guess.

And if you’re skittish about speaking out loud to Siri about—oh for example—whether you need to give up your seat to someone who may or may not take offence to said seat offer, you can type it to Siri instead, you coward (my words).

Rumours leading up to WWDC24 about Apple’s collaboration with OpenAI came true: ChatGPT is integrated into Siri and Writing Tools. If Siri is stymied by your request, it will tap into ChatGPT’s expertise. You will be asked whether your info can be shared with ChatGPT, and you can control when it is used. It’s also free to use without the need to create an account. Some people aren't too keen on the Apple Intelligence and ChatGPT union.

Given the outcry about user data being sneakily used to aid machine learning, Apple doubled down on its stance on user privacy, ensuring that even though Apple Intelligence is privy to your personal information, it doesn’t collect it. While many of the large language and diffusion models run on the device, there are certain instances where a request needs to be handled in the cloud. That's where Private Cloud Compute comes in. Running on special servers built on Apple Silicon, it never stores your data and uses it only to handle your AI request. This is what Apple proudly termed a “new standard for privacy”.

Apple TV+

Ever wondered who the hell is on screen and found yourself scrolling through IMDb? Now there's InSight, an Apple TV+ feature that shows who is playing whom when their characters appear in a scene. There's even a handy bit of info on the music playing in the scene. InSight is only available for Apple TV+ original programming.

We even got a preview of what's coming to Apple TV+. A slight squeal may or may not have issued from us over the sight of Severance and Silo in the montage.

macOS

Called Sequoia, it comes with a Continuity feature that allows for iPhone mirroring: you can connect to your iPhone from your Mac. We saw a demo where one could access the iPhone's Duolingo app and actually go through a lesson. The best part is that while this is happening, the iPhone stays locked so that no one other than you has access to it.

iPadOS 18

The iPad finally gets the Calculator app, with an added feature: using your Apple Pencil, you can utilise Math Notes to write out an equation. Once you write the "=" sign, it immediately calculates. If you change any of the numbers, the tally automatically adjusts.
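Under the hood, that behaviour amounts to re-evaluating the expression whenever it changes. A toy sketch of the recalculate-on-edit idea in Python (our own minimal arithmetic evaluator, nothing to do with Apple's handwriting-recognition pipeline):

```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def evaluate(expression: str) -> float:
    """Safely evaluate plain arithmetic, and nothing else."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("only plain arithmetic is allowed")
    return walk(ast.parse(expression, mode="eval").body)

equation = "12 * 4 + 30"
print(f"{equation} = {evaluate(equation)}")  # writing '=' triggers the tally

equation = "12 * 5 + 30"                     # change a number...
print(f"{equation} = {evaluate(equation)}")  # ...and the result adjusts
```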

There's also a Smart Script feature that refines your handwritten notes. You can scratch out words and they're automatically erased, just like that.

visionOS 2

Finally, this special announcement from WWDC: Apple's Vision Pro gets an operating system update. Using machine learning, it takes your 2D photos and adds depth, giving them new life as spatial photos. There are expanded intuitive gestures to use with your Vision Pro and an ultrawide virtual display to work on.

Oh, and the Vision Pro will be available in Singapore on 28 June.

For more information on WWDC 2024, check out the Apple website.

Assassin's Creed is Ubisoft's long-running tentpole series. It has ranged from the Holy Land during the Crusades to the far-reaching terrains of Ancient Greece, and now the latest chapter will be set in feudal Japan. We have always thought that shinobi would be a natural fit in a series about assassins, but given the glut of Assassin's Creed titles, can this latest instalment reinvigorate the franchise?

Assassin's Creed: Shadows was first known as Assassin's Creed: Codename Red when it was leaked in 2022. (It was leaked alongside another game-in-development, Assassin's Creed: Codename Hexe, about the witch trials in the Holy Roman Empire.) Shadows was further leaked via store listings, while a marketing push was made via an ARG that led fans to the number "1579", the year the first Black samurai, Yasuke, is believed to have arrived in Japan.

The Trailer

You'll get to see Yasuke in the trailer, alongside Naoe, as the two of them embark on a quest against the backdrop of civil wars and social upheavals during the Sengoku period. It appears that you can switch between Naoe and Yasuke with different play styles—stealthily as a shinobi or more combat-based as a samurai, respectively. Players get to explore an open-world feudal Japan, where according to Ubisoft's creative director, Jonathan Dumont, Shadows will be "a little bit more to the size of Assassin's Creed Origins".

Other reported features for Shadows include a light metre, where you can snuff out light sources so that you can hide in the shadows; a settlement system with customisable buildings, dojos, shrines, an armoury and more; and seasonal changes that will impact the environment you're in.

The trailer looks promising. And given the sudden interest in historic Japan, it's high time that we have a Japan-centric chapter to the Assassin's Creed franchise.

Assassin's Creed: Shadows is expected to be released on 15 November, 2024 and is available for Microsoft Windows, PlayStation 5 and Xbox Series X/S. Pre-orders are now open.


Before Apple announces something in its burgeoning pipeline, you usually know what to expect. Because there wasn't an update to the iPad line last year, the smart money was on an iPad announcement this year. And what an announcement it was.

Last week, we reported on-site about a revamp to the iPad line-up. A 13-inch option joins the iPad Air family, with both 11- and 13-inch models powered by the M2 chip, alongside an improved Apple Pencil: the Apple Pencil Pro. Of course, there was the reveal of the iPad Pro, available in either 11- or 13-inch. The iPad Pro comes with an Ultra Retina XDR display with state-of-the-art tandem OLED tech. "Tandem" in the sense that two OLED panels are stacked one on top of the other to hit that 1,600-nit peak for HDR.

The previous iPad Pro model suffered from blooming (aka the "halo effect", where light from isolated bright objects on a screen bleeds into darker surrounding areas), but on this latest iPad Pro, we saw perfect blacks and very exacting per-pixel illumination.

It's How Thin?!

Which brings us to the miracle of the iPad Pro's thinness. It holds the honour of being not only the thinnest in the iPad Pro line but the thinnest in Apple's entire catalogue. The previous record holder was the iPod Nano at 5.4mm; the iPad Pro 11-inch measures 5.3mm while the 13-inch is a mind-boggling 5.1mm. With that sort of measurement, it's hard to wrap your head around the idea of tandem OLED panels inside.

What's surprising is the chipset used in the iPad Pro. The previous iPad Pro model was outfitted with an M2 chip, but for this year's model, Apple introduced the M4 chip. Bear in mind that Apple's latest chipset was the M3 for the MacBook Air, so very few expected the brand would skip the M3 and use upgraded Apple silicon for its iPad Pro line-up. For an iPad Pro this thin, there needs to be a chipset able to handle the performance.

Siao, hor. Look at how thin it is. (APPLE)

Thus, the M4, with the promise of better CPU and GPU performance. The M4 chip is supposed to make things more "efficient". There's a new display engine, dynamic caching (caching improves response time and reduces system load) and hardware-accelerated ray tracing (light simulation in games). A couple of online games we tried performed swimmingly. According to Apple, when compared to the M2 chip, the M4 delivers the same performance using only half the power.

(We were unable to push the M4's potential at the point of writing, but we'll update this in future.)

Dock the iPad Pro with the upgraded Magic Keyboard (added function keys, larger trackpad) and voilà, a MacBook. It's a simplified descriptor, but the iPad Pro as it is, as a tablet, is overkill. With a workflow, it holds its own. It's almost like my MacBook: I type my e-mails on it; draft out stories... hell, I'm writing this article on the iPad Pro.

A Reworked Model

The front-facing camera has moved to the—hallelujah—middle of the horizontal bezel. Muy useful now for that pantless work meeting (my house, my rules). But because of the relocation of the camera, everything else had to shift. Remember the Apple Pencil Pro? To dock it, you place the stylus on the horizontal side, but because of the new front-facing camera position, the magnetic interface had to shift along the bezel, which means the hardware of the Apple Pencil Pro had to adapt to the new docking system. Thus, the new Apple Pencil Pro only works with this year's iPad Pro and iPad Air models; it's not backwards compatible with previous iPads.

Give and take, I guess.

But the Apple Pencil Pro sure is something. It has more capabilities, like the squeeze function, where depressing the sides brings up more options on the screen. There's added haptic feedback, which lends more tactility to using the stylus. Also, there's the barrel-roll effect.

Uh, not that. More like this.


A slight roll of the stylus lets the nib perform those calligraphic flourishes or shading. There are other nuanced touches, such as the stylus' shadow appearing on the screen (this isn't projected by an external light source), and hovering the Apple Pencil Pro shows a preview of where the pencil will contact the display. Finally, if you misplace the Apple Pencil Pro, you can locate it with the Find My app.

The iPad Pro is available in two colourways—silver and space black. The 11-inch version starts at SGD1,499 and the 13-inch device starts at SGD1,999.

At Battersea Power Station—the iconic structure on the cover of Pink Floyd's 10th album and, now, office space for Apple—journos and KOLs gathered for a product announcement at 3pm BST (10pm SGT) today. Given the absence of any new iPad releases last year, all bets were on new iPads being disclosed at the "Let Loose" event. At the keynote, a slew of releases were unveiled, like the new 13-inch iPad Air and the Apple Pencil Pro. But one of the more knock-me-down-with-a-feather announcements was the inclusion of the M4 chip—a leapfrog from the M2 chip in the iPad Pros (2022). Here is a run-down of what went down.

iPad Air

A new member of the iPad Air family is the 13-incher. Both models are powered by the M2 chip, which grants a faster CPU, GPU and Neural Engine. Along with a front-facing Ultra Wide 12MP camera, faster Wi-Fi and 5G capabilities, the iPad Air has a Liquid Retina display with an anti-reflective screen coating and True Tone tech, and it works not only with the Apple Pencil but also the Apple Pencil Pro (we'll get to that later).

The 13-inch, however, gives the display proper real estate, allowing for 30 per cent more space in the Freeform app. There's even an improvement in sound quality, with double the bass, which is a boon for your cat videos (that's still a thing, right?).

iPad Pro

The iPad Pro gets that glow-up my insecure 14-year-old self wished for (said glow-up only arrived when I was 18, thanks to MY WINNING PERSONALITY 👍). It comes in two sizes—11- and 13-inch—and has the Ultra Retina XDR display with state-of-the-art tandem OLED tech. (As far as my limited understanding goes, to get that 1,600-nit peak for HDR, Apple stacks two OLED screens. Y'know, like a sandwich. A very hard-to-digest sandwich. I am writing this close to dinner time.)

And the iPad Pros are thin. Not just the thinnest in the iPad Pro line but also the thinnest in Apple's catalogue. The 11-inch model measures 5.3mm thin while the 13-inch model is a mind-boggling 5.1mm thin (the iPod Nano measures 5.4mm. #rip #illalwaysrememberyouipod). How can something that's bigger be thinner? Is it witchcraft? Nay, I suspect that with a larger surface area, the motherboard can be spread out. But I could be wrong. Again, I'm writing this close to dinner time. Available in two colourways—silver and space black—both models are enclosed in 100 per cent recycled aluminium cases. And because of the redesign of the 11- and 13-inch iPad Pro models, there are revised Magic Keyboards to go with them.

M4 Chip

Now, this is the best bit: while the previous iPad Pro was outfitted with an M2 chip, for the latest iPad Pro, Apple introduced the M4 chip. Bear in mind that Apple's latest chipset was the M3 for the MacBook Air. Very few expected Apple would eschew the M3 and showcase upgraded Apple silicon for the iPad Pro line-up, but there you go. The M4 promises "stunning precision [in] colour and brightness. A powerful GPU with hardware-accelerated ray tracing renders game-changing graphics. And the Neural Engine in M4 makes iPad Pro an absolute powerhouse for AI."

Apple Pencil Pro

We know all about the Apple Pencil's features, but the Pro version has more capabilities. Now you can squeeze the pencil's body for more options; there's haptic feedback; and a barrel-roll effect with the pencil's nib allows for different strokes. There are nuanced touches like seeing a shadow of the pencil on the screen (this isn't projected by an external light source), and hovering the Apple Pencil shows you a preview of where the pencil will contact the display. Finally, if you misplace it, you can locate it via the Find My app.
