In the age of AI, it can feel as if this technology’s march into our lives is inevitable. From taking our jobs to writing our poetry, AI is suddenly everywhere we don’t want it to be.
But it doesn’t have to be this way. Just ask Madhumita Murgia, the AI editor at The Financial Times and the author of the barn-burning new book Code Dependent: Living in the Shadow of AI. Unlike most reporting about AI, which focuses on Silicon Valley power players or the technology itself, Murgia trains her lens on ordinary people encountering AI in their daily lives.
This “global precariat” of working people is often irrevocably harmed by these dust-ups; as Murgia writes, the implementation and governance of algorithms has become “a human rights issue.” She tells Esquire, “Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.”
Murgia takes readers around the globe in a series of immersive reported vignettes, each one trained on AI’s damaging effects on the self, from “your livelihood” to “your freedom.” In Amsterdam, she highlights a predictive policing program that stigmatises children as likely criminals; in Kenya, she spotlights data workers lifted out of brutal poverty but still vulnerable to corporate exploitation; in Pittsburgh, she interviews UberEats couriers fighting back against the black-box algorithms that cheat them out of already meagre wages.
Yet there are also bright spots, particularly a chapter set in rural Indian villages, where under-resourced doctors use AI-assisted apps as diagnostic aids in their fight against tuberculosis. Despite the prevalent sense of impending doom, there’s still time to reconfigure our relationship to this technology, Murgia insists. “This is how we should all see AI,” she tells Esquire, “as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us.”
Murgia spoke with Esquire by Zoom from her home in London about data labour, the future of technology regulation, and how to keep AI from reading bedtime stories to our children.
ESQUIRE: What is data colonialism, and how do we see it manifest through the lens of AI?
MADHUMITA MURGIA: Two academics, Nick Couldry and Ulises A. Mejias, came up with this term to draw parallels between modern colonialism and older forms of colonialism, like the British colonisation of India and other parts of the world. The resource extraction during that period harmed the lives of those who were colonised, much like how corporations today, particularly tech companies, are performing a similar kind of resource extraction. In this case, rather than oil or cotton, the resource is data.
In reporting this book, I saw how big Silicon Valley firms go to various parts of the world I visited, like India, Argentina, Kenya, and Bulgaria, and use the people there as data points to build systems that become trillion-dollar companies. But the people never see the full benefits of those AI systems to which they’ve given their data. Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.
You write that data workers “are as precarious as factory workers; their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.” What would it take to make their labour more apparent, and what would change if the reality of how AI works was more widely understood?
For me, the first surprise was how invisible these workers really are. When I talk to people, they’re shocked to learn that there are factories of real humans who tag data. Most assume that AI teaches itself somehow. So even just increasing understanding of their existence means that people start thinking, There’s somebody on the other end of this. Beyond that, the way the AI supply chain is set up, we only see the engineers building the final product. We think of them as the creators of the technology, so automatically, all the value is placed there.
Of course, these are brilliant computer scientists, so you can see why they’re paid millions of dollars for their work. But because the workers on the other end of the supply chain are so invisible, we underplay what they’re worth, and that shows up in the wages. Yes, these are workers in developing countries, and this is a standard outsourcing model. But when you look at their living wage of $2.50 an hour going into the technology inside a Tesla car, and then you see what a Tesla car costs or what Elon Musk is worth or what that company is making, the disparity is huge. There’s just no way these workers benefit from being a part of this business.
If you hear technologists talking about it, they say we all get brought along for the ride—that productivity rises, bottom lines rise, money is flushed into our economy, and all of our lives get better. But what we’re seeing in practice is those who are most in need of these jobs are not seeing the huge upside that AI companies are starting to see, and so we’re failing them in that promise. We have to decide as a society: What is fair pay for somebody who’s part of this pipeline? What labour rights should they have? These workers don’t really have a voice. They’re so precarious economically. And so we need to have an active discussion. If there are going to be more AI systems, there’s going to be more data labour, so now is the time for us to figure out how they can see the upside of this revolution we’re all shouting from the rooftops about.
One of our readers asks: What are your thoughts on publishers like The New York Times suing OpenAI for copyright infringement? Do you think they’ll succeed in protecting journalists from seeing their work scraped and/or plagiarised?
This hits hard for me, because I’m both the person reporting on it and the person that it impacts. We’ve seen how previous waves of technological growth, particularly the social media wave, have undermined the press and the publishing industry. There’s been a huge disintermediation of the news through social media platforms and tech platforms; these are now the pipes through which people get information, and we rely on them to do it for us. We’ve come to a similar inflection point where you can see how these companies can scrape the data we’ve all created and generate something that looks a lot like what we do with far less labour, time, and expertise.
It could easily undermine what creative people spend their lives doing. So I think it’s really important that the most respected and venerable institutions take a stand for why human creativity matters. Ultimately, I don’t know what the consequences will be. Maybe it’s a financial deal where we’re compensated for what we’ve produced, rather than it being scraped for free. There are a range of solutions. But for me, it’s important that those who have a voice stand up for creative people in a world where it's easy to automate these tasks to the standard of “good enough.”
Another reader asks: What AI regulations do you foresee governments enacting? Will ethical considerations be addressed primarily through legislation, or will they rely on nonlegal frameworks like ethical codes?
Especially over the last five years, there have been dozens and dozens of codes of conduct, all self-regulating. It’s exactly like what we saw with social media. There has been no Internet regulation, so companies come up with their own terms of service and codes of conduct. I think this time around, with the AI shift, there’s a lot more awareness and participation from regulators and governments.
There’s no way around it; there will be regulation because regulation is required. Even the companies agree with this, because you can’t define what’s ethical when you’re a corporation, particularly a profit-driven corporation. If these things are going to impact people’s health, people’s jobs, people’s mortgages, and whether somebody ends up in jail or gets bail, you need regulation involved. We’ll need lines drawn in the sand, and that will come via the law.
In the book, you note how governments have become dependent on these private tech companies for certain services. What would it look like to change course there, and if we don’t, where does that road lead?
It goes back to that question of colonialism. I spoke to Cori Crider, who used to be a lawyer for Guantanamo Bay prisoners and is now fighting algorithms. She sees them as equally consequential, which is really interesting. She told me about reading a book about the East India Company and the Anglo-Iranian Oil Company, which played a role in the 1953 Iranian coup, and how companies become state-like and the state becomes reliant on them. Now, decades later, the infrastructure of how government runs is all done on cloud services.
There are four or five major cloud providers, so when you want to roll out something quickly at scale, you need these infrastructure companies. It’s amazing that we don’t have the expertise or even the infrastructure owned publicly; these are all privately owned. It’s not new, right? You do have procurement from the private sector, but it’s so much more deeply embedded when it comes to cloud services and AI, because there are so few players who have the knowledge and the expertise that governments don’t. In many cases, these companies are richer and have more users than many countries. The balance of who has the power is really shifting.
When you say there are so few players, do you see any sort of antitrust agitation here?
In the U.S., the FTC is looking at this from an antitrust perspective. They’re exploring this exact question: “If you can’t build AI services without having a cloud infrastructure, then are you in an unfair position of power? If you’re not Microsoft, Google, Amazon, or a handful of others, and you need them to build algorithms, is that fair? Should they be allowed to invest and acquire these companies and sequester that?” That’s an open question here in the UK as well. The CMA, which is our antitrust body, is investigating the relationships between Microsoft, OpenAI, and startups like Mistral, which have received investment from Microsoft.
I think there will be an explosion of innovation, because that’s what Silicon Valley does best. What you’re seeing is a lot of people building on top of these structures and platforms, so there will be more businesses and more competition in that layer. But it’s unclear to me how you would ever compete on building a foundational model like a GPT-4 or a Gemini without the huge investment access to infrastructure and data that these three or four companies have. So I think there will be innovation, but I’m not sure it will be at that layer.
In the final chapter of the book, you turn to science fiction as a lens on this issue. In this moment where the ability to make a living as an artist is threatened by this technology, I thought it was inspired to turn to a great artist like Ted Chiang. How can sci-fi and speculative fiction help us understand this moment?
You know, it’s funny, because I started writing this book well before ChatGPT came out. In fact, I submitted my manuscript two months after ChatGPT came out. When it did come out, I was trying to understand, “What do I want to say about this now that will still ring true in a year from now when this book comes out?” For me, sci-fi felt like the most tangible way to actually explore that question when everything else seemed to be changing. Science fiction has always been a way for us to imagine these futures, to explore ideas, and to take those ideas through to a conclusion that others fear to see.
I love Ted Chiang’s work, so I sat down to ask him about this. Loads of technologists in Silicon Valley will tell you they were inspired by sci-fi stories to build some of the things that we writers see as dystopian, but technologists interpret them as something really cool. We may think they’re missing the point of the stories, but for them, it’s a different perspective. They see it through this optimistic lens, which is something you need to be an entrepreneur and build stuff like the metaverse.
Sci-fi can both inspire and scare, but I think more than anything, we are now suffering from a lack of imagination about what technology could do in shaping humans and our relationships. That’s because most of what we’re hearing is coming from tech companies. They’re putting the products in our hands, so theirs are the visions that we receive and that we are being shaped by. That’s fine; that’s one perspective. But there are so many other perspectives I want to hear, whether that’s educators or public servants or prosecutors. AI has entered those areas already, but I want to hear their visions of what they think it could do in their world. We’re very limited on those perspectives at the moment, so that’s where science fiction comes in. It expands our imagination of the possibilities of this thing, both the good and the bad, and figuring out what we want out of it.
I loved what Chiang had to say about how this technology exposes “how much bullshit we are required to generate and deal with in our daily lives.” When I think about AI, I often think that these companies have gotten it backwards. As a viral tweet so aptly put it: “I want AI to do my laundry and dishes so I can do my art and writing, not for AI to do my art and writing so I can do my laundry and dishes.” That’s a common sentiment—a lot of us would like to see AI take over the bullshit in our lives, but instead it’s threatening our joys. How have we gotten to this point where the push is for AI to do what we love and what makes us human instead of what we’d actually like to outsource?
I think about this all the time. When it started off, automation was just supposed to help us do the difficult things that we couldn’t. Way back at the beginning of factory automation, the idea was “We’ll make your job safer, and you can spend more time on the things that you love.” Even with generative AI, it was supposed to be about productivity and email writing. But we’ve slid into this world where it’s undermining the things that, as you say, make us human. The things that make our lives worth living and our jobs worth doing. It’s something I try to push back on; when I hear this assumption that AI is good, I have to ask, “But why? What should it be used for?” Why aren’t we talking about AI doing our taxes—something that we struggle with and don’t want to spend our time doing?
This is why we need other voices and other imaginings. I don’t want AI to tell bedtime stories to my children. I don’t want AI to read all audiobooks, because I love to hear my favourite author read her own memoir. I think that’s why that became a meme and spoke to so many people. We’ve all been gaslighted into believing that AI should be used to write poetry. It’s part of a shift we’ll all experience together from saying, “It’s amazing how we’ve invented something that can write and make music” to “Okay, but what do we actually need it for?” Let’s not accept its march into these spaces where we don’t want it. That’s what my book is about: about having a voice and finding a way to be heard.
I’m reminded of the chapter about a doctor using AI as a diagnostic aid. It could never replace her, but it’s a great example of how this technology can support a talented professional.
She’s such a good personification of how we can preserve the best of our humanity but be open to how AI might help us with what we care about; in her case, that’s her patients. But crucially, her patients want to see her. That’s why I write about her previous job, where people were dying and she didn’t have the equipment to help them. She had to accept that there were limitations to what she could do as a doctor, but she could perform the human side of medicine, which people need and appreciate. This is how we should all see AI: as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us. She was an amazing voice to help me understand that.
With the daily torrent of frightening news about the looming threat of AI, it’s easy to feel hopeless. What gives you hope?
I structured my book to start with the individual and end with wider society. Along the way, I discovered amazing examples of people coming together to fight back, to question, to break down the opacity in automation and AI systems. That’s what gives me hope: that we are all still engaging with this, that we’re bringing to it our humanness, our empathy, our rage. That we’re able to collectivise and find a way through it. The strikes in Hollywood were a bright spot, and there’s been so much change in the unionisation of gig workers across the world, from Africa to Latin America to Asia. It gives me hope that we can find a path and we’re not just going to sleepwalk into this. Even though I write about the concentration of power and influence that these companies have, I think there’s so much power in human collectivism and what we can achieve.
Also, I believe that the technology can do good, particularly in health care and science; that’s an area where we can really break through the barriers of what we can do as people and find out more about the world. But we need to use it for that and not to replace us in doing what we love. My ultimate hopefulness is that humans will figure out a way through this somehow. I’ve seen examples of that and brought those stories to light in my book. They do exist, and we can do this.
Originally published on Esquire US
We return to the intersection of "Style" and "Tech", where the Fendi x Devialet Mania mash-up resides. The Italian fashion house teams with the French audio maestros for a portable speaker that turns heads. It's a Devialet Mania—a high-fidelity speaker boasting 360° stereo sound—wrapped in Fendi's iconic monogram.
Earlier in the year, the Fendi x Devialet Mania edition made its first appearance at the Fendi Autumn/Winter 2024 menswear runway show in Milan. At first, it looked as though a male model was sauntering along with a rotund carrier; it turned out to be a Devialet Mania covered in Fendi's two-tone monogram in tobacco and brown, with a sand handle and gold details (which, we are told, are actual gold).
Originally launched in 2022, the Devialet Mania uses its own proprietary acoustic mapping technology and Active Stereo Calibration (ASC) to adjust its sound to suit any room. This means, as a listener, you'll get the optimal delivery of pitch-perfect treble and bone-rattling bass. Each edition comes complete with an add-on wireless charging dock, the Devialet Mania Station. And with a staggering 30–20,000 hertz audio range, IPX4 splash resistance and Devialet's first built-in battery offering up to 10 hours of wireless bliss, the Fendi motif elevates this piece of tech into a piece of art.
The Fendi x Devialet Mania edition retails for SGD4,100 and is available online and at Devialet outlets.
It's that time of the year when Apple kickstarts its Worldwide Developers Conference (WWDC) 2024. Esquire Singapore was at Apple Park where it all went down. Although Tim Cook opened the keynote and revealed a little of what the company has been working on, it was ultimately Senior VP of Software Engineering Craig Federighi's show. Through his amiable style and parkour (you'll understand if you watch the keynote video), he announced updates to the operating systems (iOS 18, iPadOS 18, macOS Sequoia, watchOS 11 and visionOS 2); what's on the Apple TV+ slate; the Vision Pro coming to Singapore; and the reveal of Apple Intelligence... or AI (“give-the-marketing-team-a-raise”). Here are the biggest takeaways from WWDC.
After keeping mum on AI, Apple loudly announced its proprietary AI: Apple Intelligence. It works across all of Apple's devices, and we saw a demonstration of its use in Writing Tools. Now you can see summaries of your e-mails or books, and have an e-mail's tone rewritten to reflect your intent. Apple Intelligence can also generate transcript summaries of live phone conversations or recordings.
If you tire of 😉 (winking face), 🫃("Uh-oh, I seem to have cirrhosis of the liver.") or 💦🍆 (wash your vegetables), you can generate customised emojis with Genmoji. Simply describe what you want to see as an emoji and Apple Intelligence will create it.
A step up from Genmoji is Image Playground. Again, type in any descriptor and choose a style (currently only animation, illustration and sketch options are available) and the image will be produced. You can do the same with images from your Photos library or your Contacts list. We were also shown how Apple Intelligence can flesh out rudimentary sketches or ideas through Image Wand. With a finger or Apple Pencil, circle a sketch and, after analysing it, Image Wand will produce a complementary visual.
With Apple Intelligence, Siri finally gets the limelight it deserves. Siri can carry out specific tasks with an awareness of your personal context, meaning it can go through your apps and create a personalised approach. For example, if you ask Siri how to get to a destination, it will trawl through your travel history and the weather forecast to formulate the best, most personalised route for you. Which, for me, is a long languid bus ride because I have no money for cabs and I hate playing the game of “Should I Give Up This Seat For This Person?”
Siri also has a richer language understanding, so if you have made a verbal faux pas and you backtrack, Siri will know what you mean. Does this mean that Siri will understand Singlish? Welp, Apple says that US English will roll out first, followed by other languages. Hope springs eternal, I guess.
And if you’re skittish about speaking out loud to Siri about—oh for example—whether you need to give up your seat to someone who may or may not take offence to said seat offer, you can type it to Siri instead, you coward (my words).
Rumours leading up to WWDC24 about Apple's collaboration with OpenAI came true: it was announced that ChatGPT is integrated into Siri and Writing Tools. If Siri is stymied by your request, it will tap into ChatGPT's expertise. You will be asked whether your info can be shared with ChatGPT, and you can control when it is used. It's also free to use without the need to create an account. Some people aren't too keen on the Apple Intelligence and ChatGPT union.
Given the outcry about user data being sneakily used to aid machine learning, Apple doubled down on its stance on user privacy, ensuring that even though Apple Intelligence is privy to your personal information, it doesn't collect it. While many of the large language and diffusion models run on the device, there are certain instances where a request needs to be handled in the cloud. That's where Private Cloud Compute comes in: a cloud-based model on special servers using Apple silicon, where your data is never stored and is used only to handle your AI request. This is what Apple proudly termed a “new standard for privacy”.
Ever wondered who the hell is on screen, then scrolled through IMDb to find out? Now there's InSight, an Apple TV+ feature that shows who is playing whom when their characters appear in a scene. There's even a handy bit of info about the music playing in the scene. InSight is only available for Apple TV+ original programming.
We even got a preview of what's coming to Apple TV+. A slight squeal may or may not have issued from us over the sight of Severance and Silo in the montage.
The new macOS, called Sequoia, comes with a Continuity feature that allows for iPhone mirroring: you can connect to your iPhone from your Mac. We saw a demo where one could access the iPhone's Duolingo app and actually go through a lesson. The best part is that while this is happening, the iPhone remains locked, so that no one else, other than you, can access it.
The iPad now gets the Calculator app, with an added feature: using your Apple Pencil, you can write out an equation with Math Notes in the Calculator app. Once you write out the "=" sign, it immediately calculates, and if you change any of the numbers, the tally automatically adjusts.
There's also a Smart Script feature that refines your handwritten notes. Scratch out certain words and they're automatically erased, just like that.
Finally, a special announcement from WWDC: Apple's Vision Pro gets an operating system update. Using machine learning, it takes your 2D photos and adds depth, giving these new spatial photos more life. There are also expanded intuitive gestures to use with your Vision Pro and an ultrawide virtual display to operate on.
Oh, and the Vision Pro will be available in Singapore from 28 June.
For more information on WWDC 2024, check out the Apple website.
Assassin's Creed is Ubisoft's long-running tentpole series. It has travelled from the Holy Land during the Crusades to the far-reaching terrains of Ancient Greece, and now the latest chapter will be set in feudal Japan. We have always thought that shinobi would be a natural fit in a series about assassins, but given the glut of Assassin's Creed titles, can this latest instalment reinvigorate the franchise?
Assassin's Creed: Shadows was first known as Assassin's Creed: Codename Red when it was leaked in 2022. (It was leaked alongside another game-in-development, Assassin's Creed: Codename Hexe, about the witch trials in the Holy Roman Empire.) Shadows was further leaked through store listings, while a marketing push was made via an ARG that led fans to the number "1579", the year the first Black samurai, Yasuke, is believed to have arrived in Japan.
You'll get to see Yasuke in the trailer, alongside Naoe, as the two of them embark on a quest against the backdrop of civil wars and social upheavals during the Sengoku period. It appears that you can switch between Naoe and Yasuke with different play styles—stealthily as a shinobi or more combat-based as a samurai, respectively. Players get to explore an open-world feudal Japan, where according to Ubisoft's creative director, Jonathan Dumont, Shadows will be "a little bit more to the size of Assassin's Creed Origins".
Other reported features for Shadows include a light meter, which lets you snuff out light sources so that you can hide in the shadows; a settlement system with customisable buildings, dojos, shrines, an armoury and more; and seasonal changes that will impact the environment you're in.
The trailer looks promising. And given the sudden interest in historical Japan, it's high time the Assassin's Creed franchise got a Japan-centric chapter.
Assassin's Creed: Shadows is expected to be released on 15 November, 2024 and is available for Microsoft Windows, PlayStation 5 and Xbox Series X/S. Pre-orders are now open.
Before Apple announces anything in its burgeoning pipeline, you usually know what to expect. Because there wasn't an update to the iPad line last year, the smart money was on an iPad announcement this year. And what an announcement it was.
Last week, we reported on-site about a revamp to the iPad line-up. A 13-inch option joins the iPad Air family, with both 11- and 13-inch models powered by the M2 chip, along with an improved Apple Pencil, the Apple Pencil Pro. Of course, there was the reveal of the iPad Pro, available in either 11- or 13-inch. The iPad Pro comes with an Ultra Retina XDR display with state-of-the-art tandem OLED tech. "Tandem" in the sense that two OLED panels are stacked on top of each other, which is how it hits that 1,600-nit peak for HDR.
The previous iPad Pro model suffered from blooming (aka "the halo effect", where light from isolated bright objects on a screen bleeds into darker surrounding areas) but for this latest iPad Pro, we saw perfect blacks and very exacting per-pixel illumination.
Which brings us to the miracle of the iPad Pro's thinness. It holds the honour of being not only the thinnest in the iPad Pro line but the thinnest device in Apple's entire catalogue. The previous record holder was the iPod Nano at 5.4mm; the iPad Pro 11-inch measures 5.3mm, while the 13-inch is a mind-boggling 5.1mm. With measurements like that, it's hard to wrap your head around the idea of tandem OLED panels inside.
What's surprising is the chipset used in the iPad Pro. The previous iPad Pro model was outfitted with an M2 chip, but for this year's model, Apple introduced the M4 chip. Bear in mind that Apple's latest chipset was the M3 in the MacBook Air, so very few expected the brand to skip the M3 and debut upgraded Apple silicon in its iPad Pro line-up. But for an iPad Pro to be that thin, it needs a chipset that can handle the performance.
Thus, the M4, with the promise of better CPU and GPU performance. The M4 chip is supposed to make things more "efficient": there's a new display engine, dynamic caching (caching improves response time and reduces system load) and hardware-accelerated ray tracing (light simulation in games). A couple of online games we tried performed swimmingly. According to Apple, when compared to the M2 chip, the M4 delivers the same performance using only half the power.
(We were unable to push the M4's potential at the time of writing, but we'll update this in future.)
Dock the iPad Pro with the upgraded Magic Keyboard (added function keys, larger trackpad) and voilà: a MacBook. It's a simplified descriptor, but the iPad Pro as a mere tablet is overkill. For workflow, it holds its own. It's almost like my MacBook: I type my e-mails on it, draft out stories... hell, I'm writing this article on the iPad Pro.
The front-facing camera has now moved to the—hallelujah—middle of the horizontal bezel. Muy useful now for that pantless work meeting (my house, my rules). But because of the camera's relocation, everything else had to shift. Remember the Apple Pencil Pro? To dock it, you place the stylus along the horizontal edge, but the new camera position meant the magnetic interface had to move along the bezel, which in turn meant the Apple Pencil Pro's hardware had to adapt to the new docking system. Thus, your new Apple Pencil Pro only works with this year's iPad Pro and iPad Air models; it's not backwards compatible with previous iPads.
Give and take, I guess.
But the Apple Pencil Pro sure is something. It has more capabilities, like the squeeze function: depressing the sides brings up more options on the screen. There's added haptic feedback, which makes using the stylus more tactile. And there's the barrel roll effect.
A slight roll of the stylus lets the nib perform those calligraphic flourishes or shading. There are other nuanced touches, such as the stylus' shadow appearing on the screen (this isn't projected by an external light source), and hovering the Apple Pencil Pro shows a preview of where the nib will contact the display. Finally, if you misplace the Apple Pencil Pro, you can locate it with the Find My app.
The iPad Pro is available in two colourways—silver and space black. The 11-inch version starts at SGD1,499 and the 13-inch device starts at SGD1,999.
At Battersea Power Station—the iconic structure on the cover of Pink Floyd's 10th album and, now, office space for Apple—journos and KOLs gathered for a product announcement at 3pm BST (10pm SGT) today. Given the absence of any new iPad releases last year, all bets were on the disclosure of new iPads at the "Let Loose" event. At the keynote, a slew of releases were unveiled, like the new 13-inch iPad Air and the Apple Pencil Pro. But one of the more knock-me-down-with-a-feather pieces of news was the inclusion of the M4 chip—a leapfrog from the M2 chip in the iPad Pros (2022). Here is a run-down of what went down.
A new member of the iPad Air family is the 13-incher. Both models are powered by the M2 chip, which grants a faster CPU, GPU and Neural Engine. Alongside a front-facing Ultra Wide 12MP camera, faster Wi-Fi and 5G capabilities, the iPad Air has a Liquid Retina display with an anti-reflective screen coating and True Tone tech, and works not only with the Apple Pencil but also the new Apple Pencil Pro (we'll get to that later).
The 13-inch model, however, gives proper real estate to its display, allowing for 30 per cent more space in the Freeform app. There's even an improvement in sound quality, with double the bass—a boon for your cat videos (that's still a thing, right?).
The iPad Pro gets that glow-up that my insecure 14-year-old self wished for (said glow-up only arrived when I was 18, thanks to MY WINNING PERSONALITY 👍). It comes in two sizes—11- and 13-inch—and has the Ultra Retina XDR display with state-of-the-art tandem OLED tech. (From my limited understanding, to hit that 1,600-nit peak for HDR, Apple stacks two OLED screens. Y'know, like a sandwich. A very hard-to-digest sandwich. I am writing this close to dinner time.)
And the iPad Pros are thin. Not just the thinnest in the iPad Pro line but also the thinnest in Apple's catalogue. Your 11-inch model measures 5.3mm thin while the 13-inch model is a mind-boggling 5.1mm thin (the iPod Nano measures 5.4mm thin. #rip #illalwaysrememberyouipod). How can something that's bigger be thinner? Is it witchcraft? Nay, I suspect that with a larger surface area, the internals can be spread out. But I could be wrong. Again, I'm writing this close to dinner time. Available in two colourways—silver and space black—both models are enclosed in 100 per cent recycled aluminium cases. And because of the redesign of the 11- and 13-inch iPad Pro models, there are revised Magic Keyboards to go with them.
Now, this is the best bit: while the previous iPad Pro is outfitted with an M2 chip, for the latest iPad Pro, Apple introduced the M4 chip. Bear in mind that Apple's latest chipset was the M3 in the MacBook Air. Very few expected Apple to eschew the M3 and showcase an upgraded Apple silicon for the iPad Pro line-up, but there you go. The M4 promises "stunning precision [in] colour and brightness. A powerful GPU with hardware-accelerated ray tracing renders game-changing graphics. And the Neural Engine in M4 makes iPad Pro an absolute powerhouse for AI."
We know all about the Apple Pencil's features, but the Pro version has more capabilities. Now you can squeeze the pencil's body for more options, and there's haptic feedback plus a barrel roll effect with the pencil's nib that allows for different strokes. There are nuanced touches like seeing a shadow of the pencil on the screen (this isn't projected by an external light source), and hovering the Apple Pencil will show you a preview of where it will contact the display. Finally, if you misplace it, you can locate it via the Find My app.
Oppo has always been the dark horse. While the Android scene is fixated on Samsung, Oppo has slowly but surely manoeuvred itself farther ahead in the smartphone race. Remember the Oppo Find N3 Flip that was released last year? With an improved flexion hinge and souped-up camera system, it demonstrated what a good foldable phone ought to be. In the same vein, the Oppo Reno11 series showcased what a dependable midrange phone could be.
Released in China last year, the Reno11 series made its first SEA stop in Singapore this January. With marketing fiercely touting the Reno11 Pro smartphone’s Ultra-Clear Portrait Camera System, Oppo banks on attracting a new segment of smartphone photographers.
The Reno11 Pro’s glass back was inspired by nature and comes in two colourways: Pearl White and Rock Grey. We were handed the former to test, and at first glance, it reminded us of expensive marble. The effect is courtesy of a 3D etching process that creates millions of reflective micro-surfaces, giving it a shimmer that moves under the light. Curiously, the Pearl White smartphone measures 7.66mm thick, while the Rock Grey is 7.59mm. However, unless you’re some sort of hypersensory mutant, the difference is negligible.
Set in the curved glass back panel is a raised Sunshine Ring camera system. This contains the main 50MP camera with an f/1.8 aperture and OIS, a 32MP telephoto with an f/2.0 aperture and an 8MP ultra-wide (112-degree field of view) with an f/2.2 aperture. Front-wise, you have a 32MP camera with an f/2.4 aperture.
The 32MP telephoto lens is something else. It shares an IMX709 sensor with the front camera and can capture portrait photos even under low-light conditions. The Reno11 Pro includes a next-gen computational photography algorithm called the HyperTone Imaging Engine. Originally featured in the Find X6 Pro and Find N3, it arrives in the Reno11 Pro in an improved version—one that combines multiple uncompressed images in the RAW domain. It also applies AI-powered de-noising and de-mosaicing to give your images clarity, dynamic range and colour richness.
On top of the photography expertise, there is one more thing that stuck out: ColorOS 14.
There is something inherently exciting about the updated ColorOS 14, which serves as a launchpad for the brand in 2024. With an updated user interface, Aqua Dynamics, better network connectivity in challenging environments, Oppo’s Trinity Engine and more, ColorOS 14 lets the company lean away from stock Android and into its own identity.
The smartphone race has also always been about whether the hardware can catch up to the software. The Find N2 Flip was the first device with ColorOS 14, but its hardware limited what the cover screen could do when closed. With the Reno11, the series presents what it can do with ColorOS 14, and what the operating system could possibly do in the future. And that is a future we will eagerly stick around for.
The Oppo Reno11 Pro is now out in stores.
Tech nerds, how are we feeling about 2024? Are y'all freaking out about all the new things and doohickeys that are getting released at CES? Sorry if that sounded like I'm talking down on CES; it wasn't meant to. However, I'm just a lowly tech editor who is a little bit sick of everything that everyone seems to be freaking out about. We're at a point where there's so much tech that most of the things we hype up are, honestly... not that great.
We've got tech in our hands, tech over our eyes, tech in our homes, tech on our kitchen counters, and tech in our bedrooms. Even our paper notebooks are tech-enabled. Hell, we're using tech to wake us up instead of the sun. So how is any one person meant to keep up with all the tech that actually matters? Right here.
I've kept an eye on the early-year releases, and I've kept tabs on what is actually moving the needle for me. Is the new tactile iPhone keyboard from Clicks moving the needle for me? Not really. Is the Apple Vision Pro moving the needle for me? Yes, absolutely. What I'm trying to say is that this isn't a list of little releases. This is where Esquire dot com is keeping track of the biggest, most groundbreaking tech of 2024—everything you should buy or keep an eye on in the future. It's still early doors, so there are a lot of preorders and plenty of speculation. But, as the year rolls on, we'll keep this list updated with all the best new tech of 2024.
We got a sneak peek at Apple's biggest innovation in a long time last year. Officially launching on 2 February, this seems to be Apple's next big bet. The focus is less on making a toy and more on making a new type of personal computer. The powers that be in Cupertino obviously see this as a desktop and laptop replacement. We'll see how well they deliver.
Ever looked at your TV and wished that you could see through it? Me either. But once I saw LG's new entertainment play, I was... slightly more convinced. Move it around (it's wireless) and place it in direct sunlight or in front of a mirror (no glare). It's a weird bet, but I can definitely see it growing on me. LG's transparent OLED TV is scheduled to hit the market in 2024.
Per usual, there's going to be a new iPhone. Whether or not it'll be a big jump from the 15 Pro remains to be seen. The iPhone 15 Pro had a lot of initial fanfare (from myself included), but it's stumbled out of the gate a little bit. The biggest innovation has been the titanium build. We'll see where Apple goes this year.
For my money, Samsung is the top Apple competitor, with a much deeper catalog than the Googles or Motorolas of the world and a great suite of foldables. No disrespect to those two, but Samsung does so much it makes for a lovely little ecosystem. As there is every year, there's going to be some sort of upgrade to Samsung's flagship smartphone. Will it be enough to leapfrog Apple? Not in America. But, it could be a big year for Samsung.
Since being the first big company to do the whole VR thing, Meta has sent out a bunch of flops. The Meta Quest 2 was just a novelty gaming device. The newer Quest 3 and Quest Pro aren't anything to write home about either. But, Meta has confirmed plans for a new, more affordable VR headset in 2024. We'll see if it actually catches on this time around.
Another big rumor in Apple world is that there might be a foldable iPad on the horizon. If it happens, it would be the company's first foray into the foldable market and surely a dress rehearsal for a foldable iPhone. Still, it's a massive if. Don't hold out for this one.
At CES 2024, Samsung gave us an update on one of its best weird little projects. Ballie, an R2-D2-type personal assistant, was introduced at CES 2020. This time around, Samsung made the little guy bigger and gave him a projector.
Wait, so what is this thing?
Sorry. "Alexa on wheels," is how I would describe Ballie. He'll be able to follow you around, tell you the weather, answer phone calls, and project onto whatever flat surface you can find. Don't hold out hope though. This is more of a speculative project from Samsung. I wouldn't expect to see it on the market in 2024.
Originally published on Esquire US
Considering that Hedi Slimane is constantly inspired by music and uses it as a way of crafting the narrative of each collection—his runway shows for Celine often involve commissioned music pieces—Celine-branded audio accessories ought to be a given. It has been almost six years since he assumed the position of the luxury brand's creative, artistic and image director, and we're finally getting just that.
The first Celine wireless headphones made their debut on the brand's Summer 2024 womenswear runway. To the tune of a specially commissioned extended version of "Too Much Love" by LCD Soundsystem, the all-black headphones were seen around the necks of a number of models—styled as an accessory to complete a look more than anything. But thankfully, they're capable of more than making one look a tad cool.
Celine has partnered up with Master & Dynamic for its first foray into the audio space. If you're already familiar with Master & Dynamic, you'd know that the audio brand is renowned for its craftsmanship, rich audio quality and signature design. Celine's variation is an aesthetic update of the MH40 model, identifiable by its lightweight anodised aluminium body. Both the headband and removable ear pads are crafted from supple lambskin, with the capabilities of the MH40—Bluetooth 5.2 connectivity, noise isolation, and up to 30 hours of battery life—ensuring that the audio experience is as luxe as it gets.
While its runway debut only showcased the all-black iteration, the Celine headphones come in three colourways: the aforementioned all-black, black and silver, and tan and silver. The black-and-silver iteration features "Celine" right on the exterior of each speaker; the all-black as well as the tan-and-silver colourways are decorated with the Celine Triomphe motif at the same spots. The partnership goes as far as adding more subtle details such as "Celine Paris" laser-engraved on the included charging cables, and "Designed and developed in Paris" marked on the right headphone.
The retail price? Well, it is a collaborative effort branded with the signatures of a luxury fashion house, so SGD1,350 isn't exactly out of left field. At the very least, it does more than, say, a white shirt by Celine that retails for around the same price.
The Celine wireless headphones will be available in boutiques and online soon.
In a world first, Tiger Beer roped in Izzy Du for a puffer jacket... that can keep you cool in the tropics. On paper, the idea reads like a madman's manifesto but the result... well, take a look at it. It stands out, no doubt. A voluminous, bright traffic-cone-orange marshmallow. And to wear it in Singapore? Think of the whiplash from all of the turned heads.
So, what's the secret? How can you stay cool in an item like that? Two words: cooling system. There's a series of tubes in the puffer jacket, and it's cooled by Tiger Beer (I mean, you could also use water, but this is a Tiger Beer thing). The contents of ice-cold Tiger Beer cans are pumped throughout the jacket, which lowers the temperature. [A correction: according to the presser, it's a "beer-powered cooling system" and it works by "using the cold beer to chill water which is then pumped around the wearer’s body via a network of tubes. These tubes make contact with key points where the arteries are closest to the skin, cooling the body down by up to 5° Celsius in the sun".] Putting its money where its mouth is, Tiger will have the outfit worn during this year's ZoukOut.
We talked to the TIGER Summer Puffer designer, Izzy Du, about what goes into creating this and whether it can stand the test of gyrating with other sweaty bodies during ZoukOut.
How long did it take to conceptualise and create the outfit, especially when you’re working with Whatever Co.?
We were in the development and testing phase for a couple of months. The excellent workflow with Whatever Co. helped the process along smoothly. Two weeks for toiling, fitting, and pattern development, and one week to stitch and finish the final prototype.
You've always had some sort of sustainability effort with your clothing; in what manner does this collab reflect that?
The TIGER Summer Puffer fabrics are made from excess manufacturer stock and are recycled, PVC-free nylon. This reduces the manufacturing of new textiles and additional chemical processing, and diverts would-be waste from landfills. This collaboration, for me, is also a trial; a first step towards imbuing garments with additional functions to increase their shelf life. I believe that by embedding garments with wearable technology, we can eliminate the built-in obsolescence within the nature of clothing and slow down the industry’s fast-paced cycle.
What are the other considerations when you’re marrying a third-party’s technicality with your design? Were there any obstacles?
As communication is key for a cohesive process, our discourse was smooth and natural. One obstacle was the placement of the beer pouch. Initially, we preferred it in the front; however, there would have been a weight imbalance due to the front opening. After some testing, we moved it to the back, and created an additional pouch on the front to keep a spare beer at hand.
Your creations have veered towards the voluminous; can this outfit really measure up against the crowded shores of ZoukOut?
Absolutely, my designs often gravitate towards shape and volume; and while keeping that in mind, the TIGER Summer Puffer was much more. When conceiving this statement jacket, my goal was to ensure it would stand out and make a statement no matter where you choose to wear it, even amid the vibrant energy of ZoukOut.
The TIGER Summer Puffer merges fashion and function, it represents a harmonious blend of style and innovation. Crafted to be both chic and practical, it's the ideal choice for navigating the tropical heat of ZoukOut. The striking orange hue, drawn from the vibrant spirit of Tiger Beer, exudes confidence, while the groundbreaking beer-powered cooling system elevates it to another level. This ingenious system harnesses the cold beer to chill water, which is then delicately circulated throughout the wearer's body, providing a refreshing temperature reduction of up to 5° Celsius in the sun.
Without a doubt, it can hold its own amidst the bustling shores of ZoukOut. It offers festival goers a comfortable and fashion-forward way to revel in the event's vibrant atmosphere.
What’s next for Izzy Du?
It’s been super fun collaborating with Tiger Beer for the TIGER Summer Puffer. Right now, I am focused on building out the IZZY DU as a brand with accessible, wearable garments that people can enjoy while sprinkling in my conceptual work along the way. The brand’s first Spring/Summer collection comes out January 2024. And the next IZZY DU pop-up store will be at Printemps in Paris opening this November 16th. Come by if you’re in Paris!
I don't quite know how to feel about the new Ray-Ban Meta Smart Glasses, especially when they run on AI. We get it: it's the whole hands-free, first-person POV experience ("Hey Meta, share this photo I took with just my literal face"). The convenience is clearly geared towards content creation, livestreaming and all that jazz. Allowing users to preview social media comments in real time, even audibly, the ambitious eyewear also doubles as a pair of headphones and takes phone calls. Perhaps Meta thinks we aren't glued enough to our phones as it is.
In partnership with EssilorLuxottica, the first generation—called "Ray-Ban Stories" because why bother hiding what they're really for—was launched in September 2021. They came in three styles (wayfarer, round and meteor), one colour (the very exciting black in shiny or matte) and two Transitions lens options (the just as exciting grey and brown).
The second iteration, now streamlined and lighter, boasts up to 150 frame and lens design combinations. More importantly, first-hand reviews are actually calling them comfortable. Water resistance clocks in at an IPX4 rating, should you consider skinny dipping.
The biggest change, though, would undoubtedly be replacing the 5MP camera with an ultra-wide 12MP one. Capable of recording 1080p videos in 60-second stints, up from the prior 720p, the default mode is—surprise, surprise—now portrait rather than landscape. It also went from one microphone, which apparently wasn't much good in a strong breeze, to a whopping five, including one on the nose bridge for true 360-degree audio capture.
There's a marked difference in the listening experience too, via a 50 per cent maximum volume increase and better directional output. Meaning you can continue discreetly enjoying the K-pop band you pretend not to like, unless you're standing close by in a silent room.
For privacy, which was a priority Meta strangely felt the need to emphasize, a blinking white light comes on when the device is recording. Minimizing the creep factor is something to appreciate when photo and video functions are easily activated by touchpads on the glasses' stems. Interestingly, this is why certain frame colour options, such as beige, were dropped: the LED was less obvious against them when recording.
Operating on Qualcomm’s Snapdragon AR1 Gen 1 processor with eight times the internal storage at 32GB, the glasses allegedly last up to four hours of active use and come with a nifty sunglasses charging case, which takes approximately 75 minutes to deliver a full charge.
Besides taking your annoying voice commands, the integrated Meta AI is slated for an update next year to enable interaction with AR surroundings. Augmented reality is an intriguing direction to head in, given that gadgets like Google Glass and Bose Frames never really took off. It raises the question: when the concept didn't gain much traction two years ago, why pitch a new version now?
Does the company know something we don't about the near future that produces this unfounded confidence in consumer demand? Will there be another pandemic where we will all be forced indoors to see the resurgence of virtual reality, NFTs and cryptocurrency? In other words, will the Ray-Ban Meta Smart Glasses finally be cool? And will I ever get to answering these speculative questions as opposed to simply throwing them out there? I guess some things we'll never know.
Ray-Ban Meta Smart Glasses are up for preorder now on Ray-Ban / Meta and on sale 17 October from USD299.
The colour turquoise has been linked with opulence ever since its namesake gemstone adorned the movers and shakers of the ancient world; ergo, it’s no surprise that it looks right at home on one of the world’s most popular luxury watches: the Omega Seamaster.
The Omega Planet Ocean Deep Black Chronograph Seamaster, to be specific, embraces the greeny-blue hue to pay homage to Emirates Team New Zealand (ETNZ), a sailing team the Swiss marque has supported since 1995, ahead of the 37th America’s Cup taking place in Barcelona in 2024.
About an hour’s drive outside of Barcelona, in the genial coastal town of Vilanova, the first Preliminary Regatta for the forthcoming sporting event took place (with Omega, as a matter of course, serving as the official timekeeper). There, the black and turquoise watch was unveiled on Wednesday 13 September in the presence of the ETNZ leaders, Omega representatives and a select few members of the international press, including Esquire.
From the outset, the colour palette was the key talking point—a conspicuous combination unmistakably inspired by the New Zealand team’s new motif which is anchored by a turquoise fern.
Raynald Aeschlimann, CEO of Omega, admits that despite turquoise’s deep association with affluence, it took careful thought to find a suitable way to incorporate the colour into the time-honoured design.
“Bringing in that blue was a challenge—I wanted this watch to be recognisable but still in line with what we’ve recently been doing,” he tells Esquire. “We wanted to create something that wasn’t just a collab with another name on the dial.”
Here, the only telltale sign that the watch has been created with the sailing team in mind is the logo discoverable on the case's NAIAD Lock rear.
But that’s not to say that the limited-edition release is otherwise identical to your classic Seamaster, because it isn’t.
Touches that make the ETNZ-edition unique include the gradient-effect seconds hands complete with an America’s Cup trophy counterweight, and the 10-minute countdown indicator positioned at 3 o'clock that may be used by the team as they prepare to participate in the competition.
Naturally, Omega hopes that the water-resistant timepiece can aid another win for the titleholders.
“The team has won the cup four times—twice with us, and two other times before our day in the nineties,” says Grant Dalton, CEO of the Emirates-sponsored sports crew. “We’re often asked what’s the motivation to win it again… It's never been won by the same team.”
Beyond its appropriateness for bona fide sailors, the new Seamaster is also an impressive lifestyle accessory for swanky land dwellers, with its brushed black zirconium oxide ceramic case, white lacquer Super-LumiNova detailing (that's watch talk for glow-in-the-dark) and turquoise accents.
Even the packaging is impressive. It arrives in a dual-branded black and turquoise zip case, making for an unboxing experience fit for the movers and shakers of the modern world, counting the defending champions of the world's oldest sporting contest.
Sign up to be notified about the stock of the Emirates Team New Zealand Edition of the Planet Ocean Seamaster over on the Omega webstore. The timepiece comes complete with a black and turquoise textile and rubber strap; a turquoise strap, available with or without a satin-brushed ceramic clasp, can be purchased separately.