In the age of AI, it can feel as if this technology’s march into our lives is inevitable. From taking our jobs to writing our poetry, AI is suddenly everywhere we don’t want it to be.

But it doesn’t have to be this way. Just ask Madhumita Murgia, the AI editor at The Financial Times and the author of the barn-burning new book Code Dependent: Living in the Shadow of AI. Unlike most reporting about AI, which focuses on Silicon Valley power players or the technology itself, Murgia trains her lens on ordinary people encountering AI in their daily lives.

This “global precariat” of working people is often irrevocably harmed by these dust-ups; as Murgia writes, the implementation and governance of algorithms have become “a human rights issue.” She tells Esquire, “Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.”

Murgia takes readers around the globe in a series of immersive reported vignettes, each one trained on AI’s damaging effects on the self, from “your livelihood” to “your freedom.” In Amsterdam, she highlights a predictive policing program that stigmatises children as likely criminals; in Kenya, she spotlights data workers lifted out of brutal poverty but still vulnerable to corporate exploitation; in Pittsburgh, she interviews UberEats couriers fighting back against the black-box algorithms that cheat them out of already meagre wages.

Yet there are also bright spots, particularly a chapter set in rural Indian villages, where under-resourced doctors use AI-assisted apps as diagnostic aids in their fight against tuberculosis. Despite the prevalent sense of impending doom, there’s still time to reconfigure our relationship to this technology, Murgia insists. “This is how we should all see AI,” she tells Esquire, “as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us.”

Murgia spoke with Esquire by Zoom from her home in London about data labour, the future of technology regulation, and how to keep AI from reading bedtime stories to our children.

ESQUIRE: What is data colonialism, and how do we see it manifest through the lens of AI?

MADHUMITA MURGIA: Two academics, Nick Couldry and Ulises A. Mejias, came up with this term to draw parallels between modern colonialism and older forms of colonialism, like the British colonisation of India and other parts of the world. The resource extraction during that period harmed the lives of those who were colonised, much like how corporations today, particularly tech companies, are performing a similar kind of resource extraction. In this case, rather than oil or cotton, the resource is data.

In reporting this book, I saw how big Silicon Valley firms go to various parts of the world I visited, like India, Argentina, Kenya, and Bulgaria, and use the people there as data points to build systems that become trillion-dollar companies. But the people never see the full benefits of those AI systems to which they’ve given their data. Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.

You write that data workers “are as precarious as factory workers; their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.” What would it take to make their labour more apparent, and what would change if the reality of how AI works was more widely understood?

For me, the first surprise was how invisible these workers really are. When I talk to people, they’re shocked to learn that there are factories of real humans who tag data. Most assume that AI teaches itself somehow. So even just increasing understanding of their existence means that people start thinking, There’s somebody on the other end of this. Beyond that, the way the AI supply chain is set up, we only see the engineers building the final product. We think of them as the creators of the technology, so automatically, all the value is placed there.

Of course, these are brilliant computer scientists, so you can see why they’re paid millions of dollars for their work. But because the workers on the other end of the supply chain are so invisible, we underplay what they’re worth, and that shows up in the wages. Yes, these are workers in developing countries, and this is a standard outsourcing model. But when you look at their living wage of $2.50 an hour going into the technology inside a Tesla car, and then you see what a Tesla costs, what Elon Musk is worth, or what that company is making, the disparity is huge. There’s just no way these workers benefit from being a part of this business.

If you hear technologists talking about it, they say we all get brought along for the ride—that productivity rises, bottom lines rise, money is flushed into our economy, and all of our lives get better. But what we’re seeing in practice is those who are most in need of these jobs are not seeing the huge upside that AI companies are starting to see, and so we’re failing them in that promise. We have to decide as a society: What is fair pay for somebody who’s part of this pipeline? What labour rights should they have? These workers don’t really have a voice. They’re so precarious economically. And so we need to have an active discussion. If there are going to be more AI systems, there’s going to be more data labour, so now is the time for us to figure out how they can see the upside of this revolution we’re all shouting from the rooftops about.

One of our readers asks: What are your thoughts on publishers like The New York Times suing OpenAI for copyright infringement? Do you think they’ll succeed in protecting journalists from seeing their work scraped and/or plagiarised?

This hits hard for me, because I’m both the person reporting on it and the person that it impacts. We’ve seen how previous waves of technological growth, particularly the social media wave, have undermined the press and the publishing industry. There’s been a huge disintermediation of the news through social media platforms and tech platforms; these are now the pipes through which people get information, and we rely on them to do it for us. We’ve come to a similar inflection point where you can see how these companies can scrape the data we’ve all created and generate something that looks a lot like what we do with far less labour, time, and expertise.

It could easily undermine what creative people spend their lives doing. So I think it’s really important that the most respected and venerable institutions take a stand for why human creativity matters. Ultimately, I don’t know what the consequences will be. Maybe it’s a financial deal where we’re compensated for what we’ve produced, rather than it being scraped for free. There are a range of solutions. But for me, it’s important that those who have a voice stand up for creative people in a world where it's easy to automate these tasks to the standard of “good enough.”

Another reader asks: What AI regulations do you foresee governments enacting? Will ethical considerations be addressed primarily through legislation, or will they rely on nonlegal frameworks like ethical codes?

Especially over the last five years, there have been dozens and dozens of codes of conduct, all self-regulating. It’s exactly like what we saw with social media. There has been no Internet regulation, so companies come up with their own terms of service and codes of conduct. I think this time around, with the AI shift, there’s a lot more awareness and participation from regulators and governments.

There’s no way around it; there will be regulation because regulation is required. Even the companies agree with this, because you can’t define what’s ethical when you’re a corporation, particularly a profit-driven corporation. If these things are going to impact people’s health, people’s jobs, people’s mortgages, and whether somebody ends up in jail or gets bail, you need regulation involved. We’ll need lines drawn in the sand, and that will come via the law.

In the book, you note how governments have become dependent on these private tech companies for certain services. What would it look like to change course there, and if we don’t, where does that road lead?

It goes back to that question of colonialism. I spoke to Cori Crider, who used to be a lawyer for Guantanamo Bay prisoners and is now fighting algorithms. She sees them as equally consequential, which is really interesting. She told me about reading a book about the East India Company and the Anglo-Iranian Oil Company, which played a role in the Iranian coup of the ’50s, and how companies become state-like and the state becomes reliant on them. Now, decades later, the infrastructure of how government runs is all done on cloud services.

There are four or five major cloud providers, so when you want to roll out something quickly at scale, you need these infrastructure companies. It’s amazing that we don’t have the expertise or even the infrastructure owned publicly; these are all privately owned. It’s not new, right? You do have procurement from the private sector, but it’s so much more deeply embedded when it comes to cloud services and AI, because there are so few players who have the knowledge and the expertise that governments don’t. In many cases, these companies are richer and have more users than many countries. The balance of who has the power is really shifting.

When you say there are so few players, do you see any sort of antitrust agitation here?

In the U.S., the FTC is looking at this from an antitrust perspective. They’re exploring this exact question: “If you can’t build AI services without having a cloud infrastructure, then are you in an unfair position of power? If you’re not Microsoft, Google, Amazon, or a handful of others, and you need them to build algorithms, is that fair? Should they be allowed to invest and acquire these companies and sequester that?” That’s an open question here in the UK as well. The CMA, which is our antitrust body, is investigating the relationships between Microsoft, OpenAI, and startups like Mistral, which have received investment from Microsoft.

I think there will be an explosion of innovation, because that’s what Silicon Valley does best. What you’re seeing is a lot of people building on top of these structures and platforms, so there will be more businesses and more competition in that layer. But it’s unclear to me how you would ever compete on building a foundational model like a GPT-4 or a Gemini without the huge investment and access to infrastructure and data that these three or four companies have. So I think there will be innovation, but I’m not sure it will be at that layer.

In the final chapter of the book, you turn to science fiction as a lens on this issue. In this moment where the ability to make a living as an artist is threatened by this technology, I thought it was inspired to turn to a great artist like Ted Chiang. How can sci-fi and speculative fiction help us understand this moment?

You know, it’s funny, because I started writing this book well before ChatGPT came out. In fact, I submitted my manuscript two months after ChatGPT came out. When it did come out, I was trying to understand, “What do I want to say about this now that will still ring true in a year from now when this book comes out?” For me, sci-fi felt like the most tangible way to actually explore that question when everything else seemed to be changing. Science fiction has always been a way for us to imagine these futures, to explore ideas, and to take those ideas through to a conclusion that others fear to see.

I love Ted Chiang’s work, so I sat down to ask him about this. Loads of technologists in Silicon Valley will tell you they were inspired by sci-fi stories to build some of the things that we writers see as dystopian, but technologists interpret them as something really cool. We may think they’re missing the point of the stories, but for them, it’s a different perspective. They see it through this optimistic lens, which is something you need to be an entrepreneur and build stuff like the metaverse.

Sci-fi can both inspire and scare, but I think more than anything, we are now suffering from a lack of imagination about what technology could do in shaping humans and our relationships. That’s because most of what we’re hearing is coming from tech companies. They’re putting the products in our hands, so theirs are the visions that we receive and that we are being shaped by. That’s fine; that’s one perspective. But there are so many other perspectives I want to hear, whether that’s educators or public servants or prosecutors. AI has entered those areas already, but I want to hear their visions of what they think it could do in their world. We’re very limited on those perspectives at the moment, so that’s where science fiction comes in. It expands our imagination of the possibilities of this thing, both the good and the bad, and figuring out what we want out of it.

I loved what Chiang had to say about how this technology exposes “how much bullshit we are required to generate and deal with in our daily lives.” When I think about AI, I often think that these companies have gotten it backwards. As a viral tweet so aptly put it: “I want AI to do my laundry and dishes so I can do my art and writing, not for AI to do my art and writing so I can do my laundry and dishes.” That’s a common sentiment—a lot of us would like to see AI take over the bullshit in our lives, but instead it’s threatening our joys. How have we gotten to this point where the push is for AI to do what we love and what makes us human instead of what we’d actually like to outsource?

I think about this all the time. When it started off, automation was just supposed to help us do the difficult things that we couldn’t. Way back at the beginning of factory automation, the idea was “We’ll make your job safer, and you can spend more time on the things that you love.” Even with generative AI, it was supposed to be about productivity and email writing. But we’ve slid into this world where it’s undermining the things that, as you say, make us human. The things that make our lives worth living and our jobs worth doing. It’s something I try to push back on; when I hear this assumption that AI is good, I have to ask, “But why? What should it be used for?” Why aren’t we talking about AI doing our taxes—something that we struggle with and don’t want to spend our time doing?

This is why we need other voices and other imaginings. I don’t want AI to tell bedtime stories to my children. I don’t want AI to read all audiobooks, because I love to hear my favourite author read her own memoir. I think that’s why that became a meme and spoke to so many people. We’ve all been gaslighted into believing that AI should be used to write poetry. It’s part of a shift we’ll all experience together from saying, “It’s amazing how we’ve invented something that can write and make music” to “Okay, but what do we actually need it for?” Let’s not accept its march into these spaces where we don’t want it. That’s what my book is about: about having a voice and finding a way to be heard.

I’m reminded of the chapter about a doctor using AI as a diagnostic aid. It could never replace her, but it’s a great example of how this technology can support a talented professional.

She’s such a good personification of how we can preserve the best of our humanity but be open to how AI might help us with what we care about; in her case, that’s her patients. But crucially, her patients want to see her. That’s why I write about her previous job, where people were dying and she didn’t have the equipment to help them. She had to accept that there were limitations to what she could do as a doctor, but she could perform the human side of medicine, which people need and appreciate. This is how we should all see AI: as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us. She was an amazing voice to help me understand that.

With the daily torrent of frightening news about the looming threat of AI, it’s easy to feel hopeless. What gives you hope?

I structured my book to start with the individual and end with wider society. Along the way, I discovered amazing examples of people coming together to fight back, to question, to break down the opacity in automation and AI systems. That’s what gives me hope: that we are all still engaging with this, that we’re bringing to it our humanness, our empathy, our rage. That we’re able to collectivise and find a way through it. The strikes in Hollywood were a bright spot, and there’s been so much change in the unionisation of gig workers across the world, from Africa to Latin America to Asia. It gives me hope that we can find a path and we’re not just going to sleepwalk into this. Even though I write about the concentration of power and influence that these companies have, I think there’s so much power in human collectivism and what we can achieve.

Also, I believe that the technology can do good, particularly in health care and science; that’s an area where we can really break through the barriers of what we can do as people and find out more about the world. But we need to use it for that and not to replace us in doing what we love. My ultimate hopefulness is that humans will figure out a way through this somehow. I’ve seen examples of that and brought those stories to light in my book. They do exist, and we can do this.

Originally published on Esquire US

Don’t get me wrong. Freedom is great. Power to the people. Without it, I wouldn’t be able to write a scathing op-ed about the parts of it that make me wary (thanks, Ancient Greece, birthplace of modern democracy). Trust me, as both a consumer and producer of content, I fully acknowledge the irony.

Dispersing legislative and judicial authority prevents a single entity or individual from abusing their position—which is generally the direction you’d want to head as a civilisation. It’s this civic participation that promotes accountability. But what happens when the opportunity to participate is too freely available, and the active participants are composed only of the select personalities naturally inclined to, well, participate?

Let me steer away from the notion of government and focus on culture. The Internet was obviously the great usher of an equitable albeit virtual society. With effectively no one owning or governing it, the Internet is, in the words of Berkeley astronomer Clifford Stoll, “the closest thing to true anarchy that ever existed.”

Anyone with access could contribute as much as they could partake. The shift from needing expensive equipment, experience, and connections to make and market an album or a movie to being able to do it all with little more than a smartphone dramatically levelled the playing field (hence stats like the lifetimes’ worth of hours it would take to watch every existing video on YouTube alone).

It would be beyond ungrateful to lament the extremely wide spectrum of choice. We will never run out of things to watch when we train the machine to automatically feed us at least four more “you might like this” suggestions.

And thus we stay stuck in a loop of limited exposure.
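That loop is easy to see in miniature. Below is a toy sketch in Python, emphatically not any real platform's system, of the basic similarity logic most recommenders share: score items by closeness to your viewing history, then surface the closest few. Every title and number here is invented for illustration.

```python
# Items with a crude one-dimensional "taste" score (purely illustrative).
catalog = {
    "True-crime doc":    0.9,
    "Serial-killer doc": 0.85,
    "Cult exposé":       0.8,
    "Nature film":       0.2,
    "Foreign drama":     0.1,
}

def recommend(history, k=2):
    """Return the k unwatched items closest to the average of what
    you've already watched: 'four more you might like this'."""
    taste = sum(catalog[t] for t in history) / len(history)
    unwatched = [t for t in catalog if t not in history]
    # Rank by distance from your established taste; nearest wins.
    return sorted(unwatched, key=lambda t: abs(catalog[t] - taste))[:k]

print(recommend(["True-crime doc"]))  # ['Serial-killer doc', 'Cult exposé']
```

Nothing in this logic ever nudges the viewer towards "Foreign drama"; the maths rewards sameness, which is exactly the narrowing described above.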

Talk to anyone from the days of yore (specifically, before digital TV), and you’ll find most of them are able to bond over what was on screen at prime time. Chuck Klosterman, author of The Nineties, called it “the last era that held to the idea of an objective, hegemonic mainstream before everything began to fracture” in his exposition on the defining decade.

It may not be direct causation, but is it possible that one factor pushing this all-time-high divisive climate of opinions and temperaments is the fact that we keep chowing down only on what appeals to us, made by people who already share our perspectives?

Our last major shared experience was probably COVID. And maybe Tiger King. Now, at the seeming height of streaming, enter Sora. OpenAI’s next big thing since ChatGPT constructs realistic videos from text prompts at a standard that is frightening. It’s great that tools to create are available for anyone to express their ideas (maybe not so much for graduates who spent years earning qualifications to use earlier versions of said tools, but c’est la vie).

It means more diversity, representation, and recognition. However, at this zenith of infotainment free-for-all, opening ourselves to alternative viewpoints is definitely going to take a little more conscious diligence than sitting back to let an algorithm decide what to watch.


The evolution of music consumption over the past three decades has been a wild ride from questionable downloads to unlimited playlists. Remember when downloading music and burning CDs felt like it took an eternity? 

With internet speeds being what they were back then, patience was indeed a virtue. Today, it’s all about 24/7 access and listening. It’s incredible how fast things can change. 

Amid the rapid rise of AI and the digital age, the tempo of music consumption shows no signs of slowing down. As physical album sales plummet and streaming services take over, where will this relentless progress take us next? 

Rewind the tape

The ’90s was the era of the physical album: a CD held about 700 MB worth of audio tracks. Then came the MPEG and MP3 formats, and transferring music between devices became as common as burning CDs. MP3s, along with the world’s open secret of the 2000s, digital music piracy, were the unsung heroes of the time, allowing people to acquire and carry tunes wherever they were.

That was everyone’s reality before iTunes arrived, letting you buy music from your computer instead of buying a physical album. Later, SoundCloud and Bandcamp entered the market and offered budding artists a place to share their music with the world.

But it was Spotify’s arrival on the scene in 2008 that created a seismic shift in music consumption. It’s as if the platform has everything—infinite music to listen to, free and premium account options and an algorithm that seems to know every person’s music taste. Spotify quickly became the go-to destination for music lovers everywhere.

Contemporary perspective



Fast forward to today, and the dynamics are evolving yet again. Research has shown that Gen Z spends more time streaming music than any other generation, dedicating 40 minutes more than the rest of the population.

Their eclectic taste spans genres like hip-hop, R&B and alternative rock. Having grown up with the internet as an integral part of their lives, this demographic embraces genre diversity more than any other generation. 

It’s not just the younger audience—older generations are jumping on the bandwagon. Have you ever gone to TikTok, found great music and added it to your Spotify playlist? TikTok has emerged as a place where a single viral hit can catapult an artist to stardom.

One perfect example is “Driver’s License” by Olivia Rodrigo, which became a massive song on the platform before dominating streaming services; another is “Water” by Tyla, who is often called a one-hit wonder.

What’s truly exciting, though, is the rise of DIY music. With a rising preference for fresh beats produced outside established recording studios, aspiring musicians are embracing their creativity like never before. This democratisation of music creation is not just a trend but a movement reshaping how music can empower people and connect them on their own terms.

The rising popularity of home studios

J. Cole waited two hours in the rain outside Jay-Z’s studio to give him his mixtape, which the latter casually dismissed. Back in the day, aspiring artists needed to get through the O.G.s to reach the top.

Gone are the days when success in the music industry depended on securing deals with prestigious labels. That was the reality for many musicians, but the game has changed. Today, indie artists are rewriting the rules. For the first time in many years, a new breed of independent copyright owners is growing and making music from the comfort of their own homes.

Home studios are all the rage today—with the rise of independent artists, they’re not going anywhere soon. With the advancements in technology and the rising accessibility of tools, artists can craft professional-grade music from the comfort of their own space. 

This newfound accessibility will continue to empower many artists to embrace their own creativity in the following years. Who knows, it might inspire casual listeners to create their own beats, too. 

The future of learning an instrument

The rise of home studios isn’t just changing how music is made—it’s reigniting interest in learning musical instruments. Thanks to the digital age, access to music education has never been more democratised.

From free tutorials on platforms like YouTube to hybrid instruments, anyone can be a musician. Some studies suggest that musical ability is around 50 per cent inherited. Still, the availability of free resources means anyone can hone their skills if they dedicate enough time and effort to learning.

Musical instruments have also continuously adapted to technological advances. Case in point: virtual instruments—powered by artificial intelligence and advanced software—allow individuals to learn a specific instrument and experiment with unlimited possibilities.

It’s also hard to keep up with the recent otherworldly musical inventions, such as sitars made from golf clubs and miniature synthesisers. Recently, the world’s first Kovar guitar strings were produced. They’re more corrosion-resistant than your typical titanium string. Kovar is an iron-nickel-cobalt alloy better known for glass-to-metal seals in electronics, and it has now made its way into the music industry. Will these strings strike a chord with guitarists? Only time will tell.

Even if you’re not strumming a guitar yourself, the prospect of future instruments looks promising. Picture wearable instruments like bracelets embedded with sensors and hybrid instruments that seamlessly blend digital and acoustic elements. In an AI-dominated era, what better way to appreciate technological advancements than through music? 

Innovations to look out for

As streaming continues to dominate the musical landscape, expect to see even more tailor-made experiences in the years to come. Much of people’s lives are accompanied by a soundtrack, whether at work, home or play—and it’s not going anywhere. Around 71% of people say music is essential to their mental wellness, and 78% say it helps them relax and cope with stress. Given that, what we can expect is a full-on push towards hyper-personalisation.

As streaming platforms use artificial intelligence and machine learning to improve recommendations, you can expect more innovations like Spotify’s AI DJ and Daylist in the coming years. Soon enough, systems may analyse more than your streaming activity: the current weather, the time of day, even your location.

It’s a bit frightening to know that AI may soon predict your desires long before you identify them yourself. That future is not impossible, given the rapid advances of AI. One thing’s for sure, though—personalised innovations will quickly rise as CD sales and digital downloads slowly go extinct.

With the rise of VR and AR technologies, music streaming will become a catalyst for more innovative live music experiences—exclusive live streaming of concerts, DJ sets and virtual series are possibilities of the future. And with 6G on the horizon, you can look forward to virtual visual streaming: imagine your favourite artist performing in front of you as their only audience. It’s like having an intimate concert in the comfort of your own home.

With music playing 24/7, it’s easy to get tired of the same tunes. Talking about music is more than finding new songs to listen to—it’s a way for people to connect. That being said, you can expect to see the emergence of social music streaming, where users can follow friends’ listening activities, share playlists and collaborate on music creation. 

How AI plays into the scene

Future music consumption will likely involve a mix of AI-generated and human-created instrumentals, songs and soundscapes. When “Heart on My Sleeve,” a track featuring AI-generated vocals of Drake and The Weeknd, dropped, it immediately went viral. Posted on TikTok and streaming services, it racked up 600,000 Spotify streams and 15 million TikTok views before being removed from all platforms over copyright violation claims. Despite the controversy, people loved it, some going as far as to say they thought AI was terrible, at least until this song dropped.

While some artists feel threatened by AI, others see it as an opportunity to make passive income from other creators producing songs that use their voices. Grimes is the living embodiment of this concept—she released a platform that allows people to create new songs using her voice.

If you’ve ever created YouTube videos, you know the struggle of finding royalty-free music. Enter Beatoven and Boomy—platforms that let you generate royalty-free tracks with the help of AI. These tools let you create music based on your chosen genre, energy level and mood. What a way to be your own DJ.

What the future holds

Looking back on the past, present and future of music consumption, one thing is certain—streaming will remain an unstoppable force. What’s exciting about the future is how people listen to music and the opportunities for music creation as home studios become more popular. 

Whatever the future holds, remember that consuming music is more than just hitting that play button. It’s also about connecting people. 


There's a ton going on in the AI space, but if it's associated with meme lord Elon Musk, we naturally pay attention. The billionaire's artificial-intelligence startup xAI has just officially launched its large language model into the wild wild web.

Grok was unleashed in chatbot form last year, accessible only with a Premium+ subscription on X (formerly Twitter, as you already know, yet we somehow still feel obliged to mention). Now, it's available on GitHub under the Apache License 2.0, which allows commercial use, modification and distribution, albeit without liability or warranty.

Which means

Developers, researchers, and maybe enthusiasts with enough Internet knowledge and a supercomputer can now build on Grok-1 and directly influence how xAI updates future versions of the model. The base model weights and network architecture have been released, but without the training code, which means users can't see exactly what Grok learnt from. To say it includes text data from X, though, wouldn't be too much of a stretch.

Artwork via prompt proposed by Grok on Midjourney

What's the big deal about Grok?

Created by the team involved in molding OpenAI's ChatGPT (more on that later), one thing Grok had going was access to real-time data on X. While that's live information, it is also a source highly susceptible to inaccuracy.

Grok-1 is currently "not fine-tuned for specific application such as dialogue". Yet, it's modeled after Douglas Adams' Hitchhiker’s Guide to the Galaxy as a cheeky alternative to relatively serious rival models from OpenAI (GPT-4), Meta (Llama 2), Google (Gemini, Gemma 2B/7B) and others.

Grok has two modes you can toggle—'fun' and 'regular'. No points for guessing which is default. If that wasn't enough to drive the point home, its site spells out that "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!"

And if you're wondering how Musk never fails to come up with alien-sounding names and what Grok means, we can answer the latter. Coined by sci-fi author Robert A. Heinlein in Stranger in a Strange Land, it's slang for intuitive understanding, or establishing rapport.

There goes the Walter White dream.

The thing about open sourcing

Musk's stance on open sourcing AI is something he has been very vocal about. The most notable target of his criticism is OpenAI, which the businessman co-founded and initially helped fund, but eventually sued for breaching an agreement to keep it non-profit. The company subsequently revealed emails suggesting that Musk was well aware of the plans and mostly wanted it folded into Tesla.

ANYWAY. Making generative AI tools free-for-all in the name of democracy has always been murky waters. There's always the risk of abuse in the hands of unpredictable, fallible human beings. With fake news already its own monster, Gen AI sometimes feels like the steroids to its hulking mass.

All things considered, it's definitely a buzz-worthy announcement about exciting tech. But as for the consequences, the safest conclusion any of us common folk can draw is this: only time will tell.

More about Grok here.

I can't recall the last time I did, off the top of my head. My mind goes straight to basic survival like the consumption and elimination of energy; eating and defecating—excuse the unsavoury start to this article. Yet, even these exercises are hardly ever carried out on their own anymore. You spend your lunch with mobile Netflix and play <insert top App Store game> on the crapper.
It certainly doesn’t help that AI is continuously advancing its proficiencies. If the Industrial Revolution reduced back-breaking labour 300 years ago, bestowing on folks the time to pursue interests outside the daily grind, AI is now doing the same with mental labour. Which means more time on our hands, and the constant need to do something only intensifies.


We’re wired for stimulation, as exemplified by doomscrolling. Even without TikTok-induced dopamine highs, we’re too steeped in a state of overstimulation to acknowledge it. On numerous occasions, I’ve caught myself thumbing my phone not only during commercials (thanks, YouTube) but during the shows that I’m watching.
It blows my mind to recall that listening to music used to be a pastime. Ever since they made gramophones fit in our pockets, songs have become musical white noise for the commute. Even then, Spotify isn't the app you’re primarily engaging with. You’re sifting through emails, answering texts, replying to comments (I promise this is not a smartphone-hating piece).

A lot came with modern convenience, but a lot left as well. With everything instantaneously available, value is lost and gratitude diminishes. It's an extreme analogy, but we once (and in certain parts of the world still, optional or not) had to physically get out there and source sustenance; not just click "Check Out".
This displacement is so poetically encapsulated in Triangle of Sadness, after the motley crew gets marooned on the island. The dynamic shift based on life and death priorities effectively spells out how challenging and therefore, valuable a simple task like keeping yourself fed can be.

We are evidently geared for different times. Consider a washer-dryer versus manually doing a load of laundry. With this luxury of time, we should allow ourselves to simmer in one activity at a time.

Say it with me: "Meaningful engagement" isn't a hippie phrase

A study by the Institute of Psychiatry at the University of London found that multitasking makes you dumber. Think poor sleeping habits are bad? Multitasking turns out to be more detrimental to your IQ than losing a night's sleep or watching hours of trash TV. Done chronically, it can decrease grey matter density in parts of the brain.
There's no self-help angle here. I could advise scheduling “deep work” at “peak performance time” away from “distractions”, but I'd instead proffer the sinful cliché of a perspective change.

Is it plausible to retrain ourselves to concentrate even amid internal and external interferences? To plan for recreation—in its true sense—the way we do with healthy work practices. In blocks of pure, present recognition.
In the past, fasting was one religious way to connect to a higher power. Perhaps not only because the devout abstain from sensory indulgence, but because the absence of needing to hunt/kill/gather/flay/cook/clean likely resulted in hours returned; hours used for quiet meditation. A contemporary equivalent wouldn't be fasting from bodily grub but from mental fodder.

You'd be amazed how long a weekend can be without the Internet. Remove media consumption from leisure and all that’s left is either existential panic at newfound boredom or production. To create. Write, sketch, heck, dance. Explore what the body and mind are capable of. Appreciate the endeavour and how we can afford to partake in it.

Dial down the ambition; we don’t have to do them all. Only one at a time. And for once in a long time, focus.


One watch brand not short on out-there ideas is Hublot.

Despite closing in on its 45th birthday, it is still regarded as the enfant terrible of the luxury watchmaking biz*. With its Big Bang series, it skilfully blends all kinds of weird and wonderful materials including ceramic, cermet, Kevlar, tungsten, magnesium and rubber into shamelessly hench watches beloved of millionaires and sportsmen, and especially millionaire sportsmen.

Its MP** series is the place to see its nuttiest creations. For example, 2013’s MP-02 Key of Time came with a one-off mechanism that allowed the wearer to adjust the time to run four times faster or four times slower than the rate of actual time passing (Why? It was something to do with being able to control time, the true luxury of our age…).

Meanwhile, 2011’s MP-08 Antikythera Sun Moon paid tribute to the ancient Greek hand-powered model of the solar system, sometimes called the oldest-known example of an analogue computer. Looks-wise these creations have veered heavily towards the steampunk, and they tend to be wildly impractical for actually telling the time.

Hublot just unveiled the latest in the series—the MP-10 Tourbillon Weight Energy System Titanium, a timepiece every bit as unwieldy as its name. (It doesn’t have a dial or hands. You wind it using a pair of tiny sliding white gold weights.)

You’ve got to love that the MP series exists. It’s so barmy you wouldn’t be totally surprised if Hublot announced it had all been dreamed up by a computer squirrelling away in a Swiss bunker while the rest of the company got on with selling its (comparatively) normal watches.

We mention this because Ricardo Guadalupe, Hublot’s CEO, told Esquire he’d recently given the idea of an AI-generated watch some credence.


“It happened three weeks ago,” he said. “We tried to use it in design. We did some experiments. I must say—amazing results.”

If Hublot was to introduce an AI-designed watch, would it make a virtue of it? Or would it hide behind it?

“I don’t know,” Guadalupe said. “It came up with ideas where it incorporated some complications from other brands, where we can see it was inspired by [avant-garde independent brand] Greubel Forsey, for example. But really—the results were ‘wow!’ Because if you ask a designer in the company to do that, it will cost you a fortune! And that was for free! And it showed me 10 or 12 products.”

Happily for the human designers, many were only possible in theory.

“Some of them would be impossible to make. One was a kind of a tourbillon / minute repeater with an equation of time [complication]—a Big Bang. They put the screws in a different way. This one was impossible to realise. But it’s really interesting. Because even if it’s impossible, it can give you an idea, you know? It was inspirational. I was really surprised.”

If not Hublot, some brand will surely come up with an AI-designed watch, and soon. On Wednesday, the womenswear designer Norma Kamali announced she was teaching an AI system to replicate her design style—"downloading my brain”, she called it—so that when the day comes for her to retire, she won’t have to worry about a successor—a computer will simply carry on with her ideas.

Obviously this is all fairly terrifying and awful for anyone involved in the creative industries in any way at all. But it does make you wonder if a Hans Wilsdorf ‘designed’ Rolex from beyond the grave would make it any more authentic. Or quite what the ghost-in-the-machine of Omega’s founder Louis Brandt would have made of the 11 plastic MoonSwatches currently stealing the limelight from the brand’s more luxurious creations. Quite possibly he’d be spinning in his grave. Under a full Moon.

*Not least by itself.

**It stands for 'masterpiece'.

Originally published on Esquire UK

Something’s off, but you can’t quite name it. It’s the moment you get home after staying with friends and an influencer using their exact coffeemaker pops up on your Instagram feed. There's the split-second after an actor delivers a quippy line on a streaming series and you try to parse whether the scene has already become a meme or was simply written to become one. It’s the new song you’ve been hearing everywhere, only to discover it’s an ‘80s deep cut, inexplicably trending on TikTok.

There is a name for this uneasiness. It’s called “algorithmic anxiety,” and it’s one of the main subjects of Kyle Chayka’s new book, Filterworld: How Algorithms Flattened Culture. A staff writer for The New Yorker, Chayka charts the rise of algorithmic recommendations and decision-making. He shows how culture has slowly started effacing itself to fit more neatly within our social media platforms' parameters.

Algorithms, Chayka reminds us, don’t spring from the machine fully-formed. They’re written by humans—in this case, humans employed by the world's biggest tech conglomerates—and their goal is simple: to prioritise content that keeps us scrolling, keeps us tapping and does not, under any circumstances, divert us from the feed.

Filterworld shows us all the ways this can manifest, both online and IRL, as a kind of contentless content. Songs are getting shorter, because it only takes 30 seconds to rack up a listen on Spotify. Poetry has enjoyed an unexpected revival on Instagram, but mostly when it is universal, aphoristic and neatly formatted to work as image as well as text.

There’s the phenomenon of the “fake movie” on streaming services like Netflix. These cultural artefacts have actors, plots, settings—all the makings of a real film. But they still seem slickly artificial, crowd-sourced and focus-grouped down to nothing.

If our old tech anxiety amounted to well-founded paranoia (“Are they tracking me? Of course they are.”), the new fear in Filterworld is more existential: “Do I really like this? Am I really like this?” Is the algorithm feeding us the next video, the next song, tailored to our unique taste? Or is it serving us the agglomerated preferences of a billion other users? Users who, like us, may just want something facile and forgettable to help us wind down at the end of the day.

Chayka doesn’t give us easy answers at the end of Filterworld. He does, however, offer an alternative to the numbing flow of the feed: taste! Remember taste? We still have it, although the muscles may have atrophied after so many of us ceded our decision-making abilities to the machines.

Rediscovering our personal taste doesn’t have to be an exercise in high culture or indie elitism. But it does require what Chayka calls the conscientious consumption of culture: seeking out trusted curators, seeking out culture that challenges us and taking the time to share with others what we love.

To go deeper, Esquire sat down with Chayka to talk about the cultural equivalent of junk food, the difference between human and algorithmic gatekeepers, and why “tastemaker” doesn’t need to be a dirty word. This interview has been edited for length and clarity.

ESQUIRE: Let me start with a slightly provocative question. Is there anyone with a bigger grudge against algorithms than journalists?

KYLE CHAYKA: Well, journalists are known to have a grudge against algorithms. I can speak to my own dislike of them. Just because they’ve taken away this filtering, tastemaking function that journalists have had for so long. But through the course of the book, I talk to all sorts of creators who hate algorithms just as much.

It’s the illustrator who got trapped into doing one bit on Instagram because it succeeded all the time. Or the influencer whose hot selfies get tons of likes but their actually earnest, artistic posts don’t get any attention. In the book, I interview coffee shop founders around the world, and even they are like, “I hate the algorithm because I have to engage with all these peoples’ photos of my cappuccinos.” Everyone feels kind of terrorised.

Maybe journalists were just part of the first wave to realise this?

I think journalists are often canaries in the coal mine, partly because we complain the loudest about everything. But you could see the impact of algorithmic feeds in the media really early on. We moved from consuming news on cable TV or in a newspaper or even on a website homepage to consuming stories the majority of the time through social media feeds. And that just takes away so much control.

A newspaper front page or a website homepage is a human-curated, thought-through intentional thing that highlights important stuff, along with fun stuff, along with goofy stuff. There was an intention and a knowledge to that, which algorithmic feeds have just totally automated away.

Let’s take it from news to culture, which is really the focus of your book. Filterworld explains that the algorithms driving social media exist to keep us engaged as long as possible. The result is a kind of flattening of culture. Our social feeds privilege content that’s easily digestible so we can keep on grazing. What happens to us when all the culture we consume is flattened like that? And we’re not pushed to seek out new things, or to just try something that makes us uncomfortable? What happens to us when we aren’t getting any nutrients, you could say, from the feed?

It makes me think of the cultural equivalent of junk food. It’s engineered to appeal to you. To engage your senses in ways you might not even like, per se, but it’s just so chemically perfect. I talk a lot about how creators feel pressure to conform in certain ways to the feed. Consumers also have to conform in a way. Algorithmic feeds push us to become more passive consumers. We don't really think about what we’re consuming. We float along on the feed and don't think about our own taste too much. I feel like that makes us into more boring people. It makes the cultural landscape less interesting. But it also takes away this opportunity for us to encounter art that is really shocking or surprising or ambiguous.

Take the example of a Spotify playlist. You start by listening to something that you choose. Then Spotify pushes you along on this lazy river of music that is similar to what you put on and is not going to disrupt your experience but it’s also not going to push you anywhere new. It’s not going to try to disrupt you; it’s not going to try to challenge your taste. In the book I contrast that with an indie radio DJ who is making these intentional choices to put songs next to each other that don’t really fit but have some kind of implied meaning based on their proximity. Algorithmic feeds fundamentally can’t create meaning by putting things next to each other. There’s no meaning inherent in that choice because it’s purely automated, machine choice. There’s no consciousness behind it.

You talk a lot about curators in Filterworld. What else can a curator do for us that an algorithm cannot do? Why should we trust them more than an algorithm?

Curating as a word has a very long history, dating back to Ancient Rome and the Catholic priesthood. It always had this meaning of taking responsibility for something. I feel like curators now take responsibility for culture. They take responsibility for providing the background to something, providing a context, telling you about the creator of something, putting one object next to others that build more meaning for it. So curating isn’t just about putting one thing next to another, it's all this background research and labour and thought that goes into presenting something in the right way.

That’s true of a museum curator who puts together an art exhibition. It’s true for a radio DJ who assembles a complicated playlist. It’s true for a librarian who chooses which books to buy for a library. But it’s not true for a Spotify algorithmic playlist. The Twitter feed is not trying to contextualise things for you with what it feeds to you. It’s just trying to spark your engagement. TikTok is maybe the worst offender because it’s constantly trying to engage your attention in a shallow way. But it’s absolutely not pushing you to find out anything more about something. There’s no depth there, there’s no context. It actively erases context, actually. It makes it even harder to find.

But we know curators can have their own agendas. What’s the difference between, say, a magazine editor who needs to please their advertisers and a tech company looking after their bottom line? Is there a difference?

There’s this transition that I write about in the book from human gatekeepers to algorithmic gatekeepers, so moving from the magazine editors and the record label executives to the kind of brute mathematics of the TikTok ‘For You’ feed. I think they both have their flaws. The human gatekeepers were biased. They were also beholden to advertisers; they had their own preferences and probably prioritised the people that they knew in their social circles. Whereas the flaw of the algorithmic feed is that while anyone can get their stuff out there, the only metric by which they’re judged is: How much engagement does it get? How much promotion does it merit based on the algorithmic feed?

So they’re both flawed. The question is: which flaws do we prefer? Or which flaws do we want to take with their benefits? The ability of the human gatekeeper was to highlight some voice that would be totally surprising or shocking—to highlight some new and strange thing that totally doesn’t fit with your preconceived notions of what art or music or writing is. The algorithmic feed can’t really do that because it’s only able to measure how much other people already consider it popular.

The advertiser thing—another hobbyhorse of mine is Monocle magazine, which has existed for a decade or two now. It’s a print magazine with a very nice mix of shopping and international news and culture and profiles. That magazine does really well selling print ads because they put print advertising in a good context with good articles. The advertisers appreciate the quality of the content that surrounds it. So that’s a net positive for everyone. Whereas with the internet now, the advertisers are almost in a war with the platforms just as much as the users are. Advertisers don’t want their content appearing willy-nilly, messily next to the crappy content the algorithmic feeds promote, which at this point might be snuff videos or videos of bombings in Gaza. That’s not serving either users or advertisers.

The other night, I was scrolling through this beautiful, curated interiors account and then there was an ad for Ex-Lax, just dropped in the middle of this very aspirational stuff.

That collision to me is the case in point. It’s so useless, and so not productive for either party, that it just feels like a glitch, you know? And that’s because of algorithmic targeting. It’s because these feeds don’t prioritise anything besides engagement.

Places like Monocle, for instance, cater to a relatively small readership. It’s not for everybody; it’s for this smaller subset of people who consider themselves clued-in. We’re getting into a sticky discussion about taste and tastemaking here, but: how do these more niche platforms react against the algorithm?

Tastemaking is a really complicated topic. I think it strikes a lot of people as elitist because you're talking about what people should like and why they should like it, and why I know something that you don’t. “I’m going to tell you something, and it's going to heighten your sensibilities or lead you somewhere different.” That can be intimidating, it can be pretentious, it can be alienating, it can be very biased in class ways, identity ways, all sorts of ways.

But I almost feel like it has to be defended at this point, just because we’re all so immersed in automated feeds. We’re consuming so much through different platforms that we’ve kind of lost touch with the human tastemaker. We all have voices we love following on Twitter or Instagram or TikTok but those voices get lost in the feed. We sometimes lose track of them and we sometimes don’t see their content. Those feeds are also not serving those creators particularly well because the business models are all based on advertising and the creators don’t get access to the bulk of that revenue. Through the book, I propose that one answer to Filterworld, to the dominance of these algorithmic feeds, is to find those human voices. Find tastemakers who you like and really follow them and support them and build a connection with those people.

Thinking about your own taste doesn’t have to be elitist. Fundamentally it’s just about creating a human connection around a piece of culture that you enjoy, and that should be open to anyone. It’s literally telling a friend why you like this specific song, or saying, “We should go see this movie, because I like the director because of XYZ reasons.”

Tastemaking is almost just being more conscientious about cultural consumption, being more intentional in the way that we’ve become totally intentional about food, right? Food is such a source of identity and community, and we take pride in what we eat, what restaurants we go to, what we cook. I would love it if people took more pride in going to a gallery, going to a library, going to a concert series at a concert hall. I think those are all acts of human tastemaking that can be really positive.

And all the things you mentioned are also things outside the house.

Yes. You’re coming together with other people in appreciation of the kind of culture you like to consume. And that’s really good. That helps everyone.

I want to finish by talking about the idea of ambient culture. You clearly appreciate ambient music, and in Filterworld you describe genres like lofi hiphop and Japanese City Pop as music that feels almost designed for the algorithm. Our feeds seem to push us toward ambient content: stuff that’s frictionless and easy to ignore. So I’m wondering, is that always a bad thing? When is ambience necessary and when is it detrimental?

I do really enjoy ambient content. My first book was about minimalism, which has a kind of ambient quality. I wrote an essay about Emily in Paris and ambient TV. I've written about Brian Eno a lot, the musician who coined the term ambient music. That kind of art fulfils a function: to put your brain at rest. It provides a pleasant background at a technological moment when we have a lot of distractions. Ambient TV is maybe the perfect TV to look at your phone in front of. It relies on the presence of that second screen to complement it. The TV show doesn’t have to be that interesting because your phone is interesting.

The problem becomes that through algorithmic recommendations, so much content is pushed towards ambience, and you never want all of your stuff to be ambient. You don’t only want to consume ambient art because then what are you actually paying attention to? If everything exists as a soothing background, what’s actually provoking you? What’s leading you somewhere new?

I think the critique goes back to Brian Eno’s definition of ambient music, which was that the music has to be “as ignorable as it is interesting.” You have to be able to ignore it. It can be in the background, but you should also be able to pay attention to it and be rewarded by your attention to it. I feel like a lot of culture now only falls into that former category. You’re only able to ignore it. Once you start paying attention, there’s nothing really gripping there. Certainly with TikTok and Spotify playlists, there’s this prioritisation of the soothing, numbing quality of ambient content. Functional stimulus in the form of culture is so big these days, whether it’s ambient music or ASMR videos.

Sleep sounds…

So now sometimes, culture exists in a functional context rather than an artistic context. You’re like, “Oh I watch The Office to fall asleep,” or, “I listen to this track while I run because it sustains my exercise.” I personally always want to make an argument for culture for its own sake and for thinking deeply about artistic process and ideas.

Originally published on Esquire US

The Ray-Ban Meta Smart Glasses. RAY-BAN

I don't quite know how to feel about the new Ray-Ban Meta Smart Glasses, especially when they run on AI. We get it, it's the whole hands-free, first-person POV experience ("Hey Meta, share this photo I took with just my literal face"). The convenience is clearly purposed for content creation, livestreaming and all that jazz. Allowing users to preview social media comments in real-time, even audibly, the ambitious eyewear also doubles as a pair of headphones and takes phone calls. Perhaps Meta thinks we aren't glued enough to our phones as it is.

Previously on Ray-Ban Meta Smart Glasses…

In partnership with EssilorLuxottica, the first generation—called "Ray-Ban Stories" because why bother hiding what they're really for—was launched in September 2021. They came in three styles (wayfarer, round and meteor), one colour (the very exciting black in shiny or matte) and two Transitions options (the just as exciting grey and brown).

The second iteration, now streamlined and lighter, boasts up to 150 frame and lens design combinations. More importantly, first-hand reviews are actually calling them comfortable. Water resistance clocks in at an IPX4 rating, should you consider skinny dipping.

Fancy design gif. RAY-BAN

Software upgrades

The biggest change, though, would undoubtedly be replacing the 5MP camera with an ultra-wide 12MP one. It's now capable of recording 1080p video (up from a prior 720p) in 60-second stints, and the default mode is—surprise, surprise—now portrait rather than landscape. It also went from one microphone, which apparently wasn't much good in a strong breeze, to a whopping five, including one on the nose bridge for true 360-degree audio capture.

There's a marked difference in the listening experience too, via a 50 per cent increase in maximum volume and better directional output. Meaning you can continue discreetly enjoying the K-pop band you pretend not to like, unless someone's standing right next to you in a silent room.

For privacy, a priority Meta strangely felt the need to emphasise, a white light blinks whenever the device is recording. Minimising the creep factor is something to appreciate when photo and video functions are easily activated by touchpads on the glasses' stems. Interestingly, this is why certain frame colour options such as beige were removed: the blinking LED was harder to spot against them.

Operating on Qualcomm’s Snapdragon AR1 Gen 1 processor, with eight times the internal storage at 32GB, the glasses allegedly last up to four hours of active use and come with a nifty sunglass charging case …which takes approximately 75 minutes for a full charge.

Wireless charging case. RAY-BAN

The AI bit

Besides taking your annoying voice commands, the integrated Meta AI is slated for an update next year to enable interaction with AR surroundings. Augmented reality is an intriguing direction to head in, when gadgets like Google Glass and Bose Frames never really took off. Which raises the question: if the concept didn't gain much traction two years ago, why pitch a new version now?

Does the company know something we don't about the near future that produces this unfounded confidence in consumer demand? Will there be another pandemic where we will all be forced indoors to see the resurgence of virtual reality, NFTs and cryptocurrency? In other words, will the Ray-Ban Meta Smart Glasses finally be cool? And will I ever get to answering these speculative questions as opposed to simply throwing them out there? I guess some things we'll never know.

Ray-Ban Meta Smart Glasses are up for preorder now on Ray-Ban / Meta and on sale 17 October from USD299.

I'm not certain whether James Taylor meant to predict the takeover of artificial intelligence and the death of our collective imagination in his 1970 song “Fire and Rain.” Still, somewhere a music teacher is saying to herself: “Called it.”

That teacher is Miss Molloy—a bowl-cutted, crochet-sweatered, denim-skirted woman of 23 or 53—who taught our third-grade music class. One autumn morning, after we sang “Fire and Rain” off mimeographed lyric sheets, Miss Molloy taught us what the song was about, which was the robot apocalypse. “Suzanne, the plans they made put an end to you” meant she had succumbed to the computer chip in her brain, as had all of humanity. This left Sweet Baby James the last remaining human, with the song he’d written her, but he “just can’t remember who to send it to,” because his own chip had been implanted and the surrender of his own consciousness had begun. Pretty chilling stuff for third graders, but we absorbed it, uncritically.

Fifteen years later, I was in a friend’s dorm room listening to “Fire and Rain,” and I said, “I love this song, as scary as it is.” My friend looked at me with concern. I continued, “With the robots and everything?” And then about four seconds later it hit me: I’m going to have to make up a pseudonym for that teacher, because she absolutely got high.

That assessment stands, but listen: It’s 2023, I have at least three pieces of wearable tech on my body at all times, and AI has come for my job. But the most insidious development is the robots that curate our choices, guiding us on what to read, watch, and listen to. When you open Spotify, dozens of playlists wait for you—none of which you or anyone you know created. We have surrendered our taste to the machine. And what’s worse, we’re starting to forget we ever lived a different way.

Miss Molloy’s interpretation of “Fire and Rain” is objectively bananapants. But was she wrong about the future?

There's a line in Nick Hornby's novel High Fidelity in which the record-store-owning main character says, “What really matters is what you like, not what you are like.” Twenty-eight years after the release of the book, Spotify has prompted new questions: What do we lose when we stop making our own playlists? If the algorithm decides what we like, then what are we like?

“There’s no way a Spotify playlist is as good as a mixtape, or at least mine aren’t,” Hornby tells me. “Because you had to do things in real time, you had the opportunity to think and hear. You were reminded of a lyric, a beat, a sound that would lead you to the next song.” You had to think about who you were giving it to and how you could change their world. “There’s no construction now. In the digital era, it’s just: Here’s some songs you might like.” What I miss—just enough to remember it, for now—is a well-curated jukebox, the way a dollar-bill-huffing machine with a 100-compact-disc capacity could express the personality of a place. My favourite was at the Boiler Room, a friendly, scruffy gay bar in the East Village. This was the ’90s, and we East Village gays shunned the mainstream, so the selection was just slightly to the left of it: Jon Spencer Blues Explosion, Stereolab, Cibo Matto. The exact right soundtrack for a room packed with guys who could fit into X-girl T-shirts. A curatorial ear and a hive mind.

Without curation, everything is also nothing.

I returned to the Boiler Room recently, and as most places have, it’s adopted an Internet-enabled jukebox. Every song that exists on streaming, at your fingertips. But without curation, everything is also nothing. The hive mind breaks down into individual bees. A proper jukebox, like a homemade mixtape, is already largely a memory.

And soon enough it won’t be. It will be a thing you forgot even existed in the first place, like decent mass-produced chocolate, like a flight that doesn’t end with a pitch for a credit card. Like the Boiler Room itself, which is closing later this year.

"The absence of curveballs in algorithmic playlists is noticeable,” Hornby says. “I don’t want something that sounds exactly like what I usually listen to, just like I don’t want recommendations for books in a similar vein to the ones I write.” Right around the time Hornby was writing High Fidelity, the best mixtape I ever got came from a college friend named Brady. It arrived in my P. O. Box just before I graduated and moved to New York City. There were pop songs, left-field disco tracks, and at the end of side 1, “Being Alive,” from Stephen Sondheim’s Company. I’d never heard it and It was a gut punch: the precise sound of my soul as I prepared to start my life. A reminder to be less aloof in the real world than I had been in college. An I see you from someone I didn’t know was watching. A life changer.

The algorithm can’t be Brady. It can give you what it knows you want. But without human insight, it cannot give you what you need. It will not encourage you to evolve, because it cannot work as well if you do. The algorithm can know you. Scarily well. But it cannot love you. Commit an act of rebellion today: Make a playlist for someone. Assemble it with care. Throw in some curveballs. Choose a song or two that will make them feel seen, give it a clever name, text it to the person. Do it right now, before the chip takes hold of your brain.

I want to find Miss Molloy. I want to tell her I remember. All I have to do is find her. Anybody got an extra ticket to Burning Man?

Illustration By Matt Mahurin

Originally published on Esquire US

Having dedicated my career to funding, building, and growing new technologies, I’ve been present for many disruptions. In the era after the dot-com bubble burst, we witnessed the revolution of communications, online payments, marketplaces, advertising, and media by companies like Skype, Amazon, Uber, Google, and Netflix. These companies leveraged new technologies to build new business models or digitise ones that, in some cases, had existed for hundreds of years. Long after Facebook became a verb, we witnessed the eruption of Web3 and the rise of evangelist techbro social media influencers shilling their special brew of lifestyle grandstanding and NFT minting.

But, as a former venture capitalist, I’ve yet to see a nascent technology so wholly and rapidly evangelised outside of its own circles of investors, builders, and users as Artificial Intelligence. It’s almost as if Steve Jobs rose from the grave to deliver a keynote on the potential of AI. So, out of curiosity about Steve’s insights on the topic, I tasked OpenAI’s ChatGPT tool with bringing him back to life and introducing AI in the style of one of his famous keynote speeches:

Good morning everyone, and welcome to this exciting keynote on artificial intelligence.

Today, I want to talk about a revolutionary technology that has the power to change the world as we know it. It’s a technology that’s been around for decades, but it’s only in recent years that we’ve seen its true potential start to unfold. I’m talking, of course, about artificial intelligence.

AI has the power to automate many of the tasks that we do every day, freeing us up to focus on the things that matter most. It has the power to make our lives easier, more efficient, and more connected. But more importantly, it has the power to solve some of the world’s biggest problems.

Think about it. With AI, we can tackle issues like climate change, poverty, and even the spread of diseases. The possibilities are truly endless, and the potential for impact is massive.

At the same time, we need to be mindful of the potential risks that come with this technology. We need to ensure that AI is developed in a responsible and ethical way, and that it serves the interests of humanity as a whole.

So let’s get to work. Let’s harness the power of artificial intelligence to build a better future for all of us. Thank you.

AI Steve Jobs Speech generated by ChatGPT

As AI Steve highlighted above, AI itself is not new. Prior to the release of tools like DALL-E, we saw AI leveraged for specific use cases across most major industries, such as marketing, cybersecurity, and even CGI and animation in media. We’ve been using the technology for decades to classify, analyse, and create data (including text and images) for narrow sets of tasks, which is why it’s referred to as “Artificial Narrow Intelligence” (ANI). In contrast, new models allow for many use cases with no additional training or fine-tuning. This evolution from the previous generation of AI to today’s Generative AI models underpinning applications such as ChatGPT and DALL-E has been driven by advances in computing power, cloud data storage, and machine learning algorithms.

Unlike Web3, AI has already demonstrated its usefulness and potential beyond theoretical adoption. Also, unlike its predecessor in the timeline of popular new technologies, it doesn’t require mass adoption of a new protocol or regulatory approval. There are two broad applications of AI: the optimisation of existing digital processes and the digitisation of human tasks.

Generated with DALL-E | Prompt ‘Dubai in the style of Edward Hopper’

Optimisation increases the speed of existing digital processes and reduces the need for human input. A straightforward example would be chatbots, which had their moment in the latter half of the 2010s and are making a comeback, armed with better-trained algorithms. Chatbots trained with existing customer care data sets will replace notoriously difficult-to-navigate FAQ pages on websites and costly call centres. The result will be a lower cost of doing business and improved customer satisfaction.

This brings us to the frightening or exciting – depending on who you ask – scenarios where AI leads to the replacement of human roles. In the short term, this could affect roles ranging from copywriting and software engineering to art, animation, business analysis, and journalism. But, again, this isn’t a futuristic dream pontificated by the Silicon Valley elite on their three-hour podcasts; this is happening today. For example, Buzzfeed recently announced that it would start using ChatGPT to write journalistic pieces after technology journalism outlet CNET was found to have used the same tool to write personal finance articles.

Despite the countless applications of AI, there is no immediate cause for alarm about the human worker being made redundant. The evolution of technology is an inevitability, and we are better served by preparing for it rather than resisting it. Many pundits draw comparisons to the widespread fears that the industrial revolution would replace jobs. In the short term, these fears are unfounded. So long as the models underpinning applications exist in a state of ANI, even the most advanced tools will require human input and oversight. Instead, these tools will complement and augment human work by replacing menial, repetitive tasks in creative and technical fields. For example, this article was reviewed using Grammarly to check for spelling and grammar mistakes.

Although we’re progressing beyond ANI, there’s still quite a journey ahead of us and little consensus on when we might reach our destination. Some scientists estimate that we’re decades away from progressing to the next state of Artificial Intelligence: Artificial General Intelligence (AGI). AGI would offer capabilities such as sensory perception, advanced problem-solving, fine motor skills, and even social and emotional engagement. There’s quite a distance to travel from writing Shakespearean sonnets about lost socks in the dryer to developing a personality like that of Samantha, the protagonist’s AI companion in the 2013 film Her. It’s impossible to predict how soon we could begin to describe AI as AGI; estimates range from a decade or two away to never.

When it comes to Arabic, language models need to catch up. Today’s models are predominantly trained on content publicly available on the internet: webpages, Reddit, and Wikipedia make up approximately 85% of ChatGPT’s training data set, for example. Considering that approximately 60% of written content online is in English and less than 1% is in Arabic, the inputs necessary to achieve the same quality of outputs in the latter are scarce. It’s no secret that English is the lingua franca of Middle East business. Still, we should ask ourselves whether this will further subdue the use of Arabic in such settings. The impetus to ensure the development of the Arabic language in technology and business settings lies with both the private and public sectors in wealthy Gulf countries.

While there are reasons to celebrate AI’s coming of age, we need to keep our feet on the ground. The limitations on the applications of AI are compounded by questions of ethical standards, reliability, accuracy, and truthfulness raised by academics such as Gary Marcus, a leading AI sceptic. Even Mira Murati, the CTO of OpenAI (creators of the models underpinning DALL-E and ChatGPT), has argued for regulatory oversight of AI. Questions remain on how to address challenges such as moderating offensive model outputs, intellectual property infringement, policing disinformation, and academic honesty, to name a few.

“How do you get the model to do the thing that you want it to do, and how do you make sure it’s aligned with human intention and ultimately in service of humanity?” 

Mira Murati, CTO, OpenAI

Make no mistake, AI is beyond the point of no return, but that doesn’t mean we can’t harness its power to empower our workforces and transform our lives. Although the excitement surrounding AI’s potential is justified, the challenges of its usage and misuse are much more significant than those of previous generations of technology and should not be taken lightly. We have at our disposal an incredible new tool; however, we must balance our eagerness with mindfulness of the risks and implications, and with careful regulation.

Rayan Dawud is a former venture capitalist who has held senior roles at Careem and Outliers Venture Capital in Dubai. He’s currently on a career break in London, where he’s exploring Artificial Intelligence.

Featured image generated using DALL-E with prompt ‘android from the film Ex Machina in a Hopper painting’

Originally published on Esquire ME