There's a ton going on in the AI space, but if it's associated with meme lord Elon Musk, we naturally pay attention. The billionaire's artificial intelligence startup xAI has just officially released its large language model into the wild, wild web.

Grok was unleashed in chatbot form last year, only accessible with a Premium+ subscription on X (formerly Twitter, as you already know yet we somehow still feel obliged to mention). Now, it's available on GitHub under the Apache License 2.0, which allows commercial use, modification and distribution, albeit without liability or warranty.

Which means

Developers, researchers and maybe enthusiasts with enough Internet knowledge and a supercomputer can now build on Grok-1 and directly influence how xAI updates future versions of the model. The base model weights and network architecture have been released, but without the training code. Which simply means users can't see exactly what Grok learnt from, though to say it's text data from X wouldn't be too much of a stretch.

Artwork via prompt proposed by Grok on Midjourney

What's the big deal about Grok?

Created by the team involved in molding OpenAI's ChatGPT (more on that later), one thing Grok had going for it was access to real-time data on X. While that's live information, it's also a source highly susceptible to inaccuracy.

Grok-1 is currently "not fine-tuned for specific application such as dialogue". Yet, it's modeled after Douglas Adams' The Hitchhiker’s Guide to the Galaxy as a cheeky alternative to relatively serious rival models from OpenAI (GPT-4), Meta (LLaMA 2), Google (Gemini, Gemma 2B/7B) and others.

Grok has two modes you can toggle—'fun' and 'regular'. No points for guessing which is the default. If that wasn't enough to drive the point home, its site spells out that "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!"

And if you're wondering how Musk never fails to come up with alien-sounding names and what Grok means, we can answer the latter. Coined by Robert A. Heinlein in his 1961 novel Stranger in a Strange Land, it's slang for intuitive understanding, or establishing rapport.

There goes the Walter White dream.

The thing about open sourcing

Musk's stance on open sourcing AI is something he has been very vocal about. The most notable target of his criticism is OpenAI, which the businessman co-founded and initially helped fund, but eventually sued for allegedly breaching an agreement to keep it non-profit. OpenAI subsequently revealed emails suggesting that Musk was well aware of the plans and had, in fact, wanted the company folded into Tesla.

ANYWAY. Making generative AI tools free-for-all in the name of democracy has always meant wading into iffy waters. There's always the risk of abuse in the hands of unpredictable, fallible human beings. With fake news already its own monster, Gen AI sometimes feels like steroids for its hulking mass.

All things considered, it's definitely a buzz-worthy announcement about exciting tech. But as for the consequences, the safest conclusion any common folk can draw is that only time will tell.

More about Grok here.

I can't recall the last time I did, off the top of my head. My mind goes straight to basic survival like consumption and expulsion of energy; eating and defecating—excuse the unsavoury start to this article. Yet, even these exercises are hardly ever carried out on their own anymore. You spend your lunch with mobile Netflix and play <insert top App Store game> on the crapper.
It certainly doesn’t help that AI is continuously advancing its proficiencies. If the Industrial Revolution reduced back-breaking labour 300 years ago, bestowing folks with time to pursue interests outside the daily grind, AI is now doing the same with mental labour. Which means more time on our hands, while the constant need to do something only intensifies.


We’re wired for stimulation, as exemplified by doom scrolling. Even without TikTok-induced dopamine highs, we’re too steeped in a state of overstimulation to acknowledge it. On numerous occasions, I’ve caught myself thumbing my phone not only during commercials (thanks, YouTube) but during the shows that I’m watching.
It blows my mind to recall that listening to music used to be a pastime. Ever since they made gramophones fit in our pockets, songs have become musical white noise for the commute. Even then, Spotify isn't the app you’re primarily engaging with. You’re sifting through emails, answering texts, replying to comments (I promise this is not a smartphone-hating piece).

A lot came with modern convenience, but a lot left as well. With everything instantaneously available, value is lost and gratitude diminishes. It's an extreme analogy, but we once (and in certain parts of the world still, optional or not) had to physically get out there and source sustenance; not just click "Check Out".
This displacement is poetically encapsulated in Triangle of Sadness, after the motley crew gets marooned on the island. The dynamic shift based on life-and-death priorities effectively spells out how challenging, and therefore valuable, a simple task like keeping yourself fed can be.

We are evidently geared for different times. Consider a washer-dryer versus manually doing a load of laundry. With this luxury of time, we should allow ourselves to simmer in one activity at a time.

Say it with me: "Meaningful engagement" isn't a hippie phrase.

A study by the Institute of Psychiatry at the University of London found that multitasking makes you dumber. Think poor sleeping habits are bad? It found that multitasking is more detrimental to your IQ than losing 40 winks or watching hours of trash TV. Done chronically, it can decrease grey matter density in parts of the brain.
There's no self-help angle here. I could advise you to schedule “deep work” at “peak performance time” away from “distractions”, but I'd instead proffer the sinful cliché of a perspective change.

Is it plausible to retrain ourselves to concentrate even amid internal and external interferences? To plan for recreation—in its true sense—the way we do with healthy work practices. In blocks of pure, present recognition.
In the past, fasting was one religious way to connect to a higher power. Perhaps not only because the devout abstain from sensory indulgence, but because the absence of needing to hunt/kill/gather/flay/cook/clean likely resulted in hours returned; hours used for quiet meditation. A contemporary equivalent wouldn't be a fast from bodily grub but from mental fodder.

You'd be amazed how long a weekend can be without the Internet. Remove media consumption from leisure and all that’s left is either existential panic at newfound boredom or production. To create. Write, sketch, heck, dance. Explore what the body and mind are capable of. Appreciate the endeavour and how we can afford to partake in it.

Dial down the ambition; we don’t have to do them all. Only one at a time. And for once in a long time, focus.


One watch brand not short on out-there ideas is Hublot.

Despite closing in on its 45th birthday, it is still regarded as the enfant terrible of the luxury watchmaking biz*. With its Big Bang series, it skilfully blends all kinds of weird and wonderful materials including ceramic, cermet, Kevlar, tungsten, magnesium and rubber into shamelessly hench watches beloved of millionaires and sportsmen, and especially millionaire sportsmen.

Its MP** series is the place to see its nuttiest creations. For example, 2013’s MP-02 Key of Time came with a one-off mechanism that allowed the wearer to adjust the time to run four times faster or four times slower than actual time (Why? It was something to do with being able to control time, the true luxury of our age…).

Meanwhile, 2011’s MP-08 Antikythera Sun Moon paid tribute to the ancient Greek hand-powered model of the solar system, sometimes called the oldest-known example of an analogue computer. Looks-wise, these creations have veered heavily towards the steampunk, and they tend to be wildly impractical for actually telling the time.

Hublot just unveiled the latest in the series—the MP-10 Tourbillon Weight Energy System Titanium, a timepiece every bit as unwieldy as its name. (It doesn’t have a dial or hands. You wind it using a pair of tiny sliding white gold weights.)

You’ve got to love that the MP series exists. It’s so barmy you wouldn’t be totally surprised if Hublot announced it had all been dreamed up by a computer squirrelling away in a Swiss bunker while the rest of the company got on with selling its (comparatively) normal watches.

We mention this because Ricardo Guadalupe, Hublot’s CEO, told Esquire he’d recently given the idea of an AI-generated watch some credence.


“It happened three weeks ago,” he said. “We tried to use it in design. We did some experiments. I must say—amazing results.”

If Hublot was to introduce an AI-designed watch, would it make a virtue of it? Or would it hide behind it?

“I don’t know,” Guadalupe said. “It came up with ideas where it incorporated some complications from other brands, where we can see it was inspired by [avant-garde independent brand] Greubel Forsey, for example. But really—the results were ‘wow!’ Because if you ask a designer in the company to do that, it will cost you a fortune! And that was for free! And it showed me 10 or 12 products.”

Happily for the human designers, many were only possible in theory.

“Some of them would be impossible to make. One was a kind of a tourbillon / minute repeater with an equation of time [complication]—a Big Bang. They put the screws in a different way. This one was impossible to realise. But it’s really interesting. Because even if it’s impossible, it can give you an idea, you know? It was inspirational. I was really surprised.”

If not Hublot, some brand will surely come up with an AI-designed watch, and soon. On Wednesday, the womenswear designer Norma Kamali announced she was teaching an AI system to replicate her design style—"downloading my brain”, she called it—so that when the day comes for her to retire, she won’t have to worry about a successor—a computer will simply carry on with her ideas.

Obviously this is all fairly terrifying and awful for anyone involved in the creative industries in any way at all. But it does make you wonder if a Hans Wilsdorf ‘designed’ Rolex from beyond the grave would make it any more authentic. Or quite what the ghost-in-the-machine of Omega’s founder Louis Brandt would have made of the 21 plastic MoonSwatches currently stealing the limelight from the brand’s more luxurious creations. Quite possibly he’d be spinning in his grave. Under a full Moon.

*Not least by itself.

**It stands for 'masterpiece'.

Originally published on Esquire UK

Something’s off, but you can’t quite name it. It’s the moment you get home after staying with friends and an influencer using their exact coffeemaker pops up on your Instagram feed. There's the split-second after an actor delivers a quippy line on a streaming series when you try to parse whether the scene has already become a meme or was simply written to become one. It’s the new song you’ve been hearing everywhere, only to discover it’s an ‘80s deep cut, inexplicably trending on TikTok.

There is a name for this uneasiness. It’s called “algorithmic anxiety,” and it’s one of the main subjects of Kyle Chayka’s new book, Filterworld: How Algorithms Flattened Culture. A staff writer for The New Yorker, Chayka charts the rise of algorithmic recommendations and decision-making. He shows how culture has slowly started effacing itself to fit more neatly within our social media platforms' parameters.

Algorithms, Chayka reminds us, don’t spring from the machine fully-formed. They’re written by humans—in this case, humans employed by the world's biggest tech conglomerates—and their goal is simple: to prioritise content that keeps us scrolling, keeps us tapping and does not, under any circumstances, divert us from the feed.

Filterworld shows us all the ways this can manifest, both online and IRL, into a kind of contentless content. Songs are getting shorter, because it only takes 30 seconds to rack up a listen on Spotify. Poetry has enjoyed an unexpected revival on Instagram, but mostly when it is universal, aphoristic and neatly formatted to work as image as well as text.

There’s the phenomenon of the “fake movie” on streaming services like Netflix. These cultural artefacts have actors, plots, settings—all the makings of a real film. But they still seem slickly artificial, crowd-sourced and focus-grouped down to nothing.

If our old tech anxiety amounted to well-founded paranoia (“Are they tracking me? Of course they are.”), the new fear in Filterworld is more existential: “Do I really like this? Am I really like this?” Is the algorithm feeding us the next video, the next song, tailored to our unique taste? Or is it serving us the agglomerated preferences of a billion other users? Users who, like us, may just want something facile and forgettable to help us wind down at the end of the day.

Chayka doesn’t give us easy answers at the end of Filterworld. He does, however, offer an alternative to the numbing flow of the feed: taste! Remember taste? We still have it. Although the muscles may have atrophied after so many of us have ceded our decision-making abilities to the machines.

Rediscovering our personal taste doesn’t have to be an exercise in high culture or indie elitism. But it does require what Chayka calls the conscientious consumption of culture: seeking out trusted curators, seeking out culture that challenges us and taking the time to share with others what we love.

To go deeper, Esquire sat down with Chayka to talk about the cultural equivalent of junk food, the difference between human and algorithmic gatekeepers, and why “tastemaker” doesn’t need to be a dirty word. This interview has been edited for length and clarity.

ESQUIRE: Let me start with a slightly provocative question. Is there anyone with a bigger grudge against algorithms than journalists?

KYLE CHAYKA: Well, journalists are known to have a grudge against algorithms. I can speak to my own dislike of them. Just because they’ve taken away this filtering, tastemaking function that journalists have had for so long. But through the course of the book, I talk to all sorts of creators who hate algorithms just as much.

It’s the illustrator who got trapped into doing one bit on Instagram because it succeeded all the time. Or the influencer whose hot selfies get tons of likes but their actually earnest, artistic posts don’t get any attention. In the book, I interview coffee shop founders around the world, and even they are like, “I hate the algorithm because I have to engage with all these peoples’ photos of my cappuccinos.” Everyone feels kind of terrorised.

Maybe journalists were just part of the first wave to realise this?

I think journalists are often canaries in the coal mine, partly because we complain the loudest about everything. But you could see the impact of algorithmic feeds in the media really early on. We moved from consuming news on cable TV or in a newspaper or even on a website homepage to consuming stories the majority of the time through social media feeds. And that just takes away so much control.

A newspaper front page or a website homepage is a human-curated, thought-through intentional thing that highlights important stuff, along with fun stuff, along with goofy stuff. There was an intention and a knowledge to that, which algorithmic feeds have just totally automated away.

Let’s take it from news to culture, which is really the focus of your book. Filterworld explains that the algorithms driving social media exist to keep us engaged as long as possible. The result is a kind of flattening of culture. Our social feeds privilege content that’s easily digestible so we can keep on grazing. What happens to us when all the culture we consume is flattened like that? And we’re not pushed to seek out new things, or to just try something that makes us uncomfortable? What happens to us when we aren’t getting any nutrients, you could say, from the feed?

It makes me think of the cultural equivalent of junk food. It’s engineered to appeal to you. To engage your senses in ways you might not even like, per se, but it’s just so chemically perfect. I talk a lot about how creators feel pressure to conform in certain ways to the feed. Consumers also have to conform in a way. Algorithmic feeds push us to become more passive consumers, so that we don't really think about what we’re consuming. We float along on the feed and don't think about our own taste too much. I feel like that makes us into more boring people. It makes the cultural landscape less interesting. But it also takes away this opportunity for us to encounter art that is really shocking or surprising or ambiguous.

Take the example of a Spotify playlist. You start by listening to something that you choose. Then Spotify pushes you along on this lazy river of music that is similar to what you put on and is not going to disrupt your experience but it’s also not going to push you anywhere new. It’s not going to try to disrupt you; it’s not going to try to challenge your taste. In the book I contrast that with an indie radio DJ who is making these intentional choices to put songs next to each other that don’t really fit but have some kind of implied meaning based on their proximity. Algorithmic feeds fundamentally can’t create meaning by putting things next to each other. There’s no meaning inherent in that choice because it’s purely automated, machine choice. There’s no consciousness behind it.

You talk a lot about curators in Filterworld. What else can a curator do for us that an algorithm cannot do? Why should we trust them more than an algorithm?

Curating as a word has this very long history dating back to Ancient Rome to the Catholic priesthood. It always had this meaning of taking responsibility for something. I feel like curators now take responsibility for culture. They take responsibility for providing the background to something, providing a context, telling you about the creator of something, putting one object next to others that build more meaning for it. So curating isn’t just about putting one thing next to another, it's all this background research and labour and thought that goes into presenting something in the right way.

That’s true of a museum curator who puts together an art exhibition. It’s true for a radio DJ who assembles a complicated playlist. It’s true for a librarian who chooses which books to buy for a library. But it’s not true for a Spotify algorithmic playlist. The Twitter feed is not trying to contextualise things for you with what it feeds to you. It’s just trying to spark your engagement. TikTok is maybe the worst offender because it’s constantly trying to engage your attention in a shallow way. But it’s absolutely not pushing you to find out anything more about something. There’s no depth there, there’s no context. It actively erases context, actually. It makes it even harder to find.

But we know curators can have their own agendas. What’s the difference between, say, a magazine editor who needs to please their advertisers and a tech company looking after their bottom line? Is there a difference?

There’s this transition that I write about in the book from human gatekeepers to algorithmic gatekeepers, so moving from the magazine editors and the record label executives to the kind of brute mathematics of the TikTok ‘For You’ feed. I think they both have their flaws. The human gatekeepers were biased. They were also beholden to advertisers; they had their own preferences and probably prioritised the people that they knew in their social circles. Whereas the flaw of the algorithmic feed is that while anyone can get their stuff out there, the only metric by which they’re judged is: How much engagement does it get? How much promotion does it merit based on the algorithmic feed?

So they’re both flawed. The question is: which flaws do we prefer? Or which flaws do we want to take with their benefits? The ability of the human gatekeeper was to highlight some voice that would be totally surprising or shocking—to highlight some new and strange thing that totally doesn’t fit with your preconceived notions of what art or music or writing is. The algorithmic feed can’t really do that because it’s only able to measure how much other people already consider it popular.

The advertiser thing—another hobbyhorse of mine is Monocle magazine, which has existed for a decade or two now. It’s a print magazine with a very nice mix of shopping and international news and culture and profiles. That magazine does really well selling print ads because they put print advertising in a good context with good articles. The advertisers appreciate the quality of the content that surrounds it. So that’s a net positive for everyone. Whereas with the internet now, the advertisers are almost in a war with the platforms just as much as the users are. Advertisers don’t want their content appearing willy-nilly, messily next to the crappy content the algorithmic feeds promote, which at this point might be snuff videos or videos of bombings in Gaza. That’s not serving either users or advertisers.

The other night, I was scrolling through this beautiful, curated interiors account and then there was an ad for Ex-Lax, just dropped in the middle of this very aspirational stuff.

That collision, to me, is the case in point. It’s so useless, and so not productive for either party, that it just feels like a glitch, you know? And that’s because of algorithmic targeting. It’s because these feeds don’t prioritise anything besides engagement.

Places like Monocle, for instance, cater to a relatively small readership. It’s not for everybody; it’s for this smaller subset of people who consider themselves clued-in. We’re getting into a sticky discussion about taste and tastemaking here, but: how do these more niche platforms react against the algorithm?

Tastemaking is a really complicated topic. I think it strikes a lot of people as elitist because you're talking about what people should like and why they should like it, and why I know something that you don’t. “I’m going to tell you something, and it's going to heighten your sensibilities or lead you somewhere different.” That can be intimidating, it can be pretentious, it can be alienating, it can be very biased in class ways, identity ways, all sorts of ways.

But I almost feel like it has to be defended at this point, just because we’re all so immersed in automated feeds. We’re consuming so much through different platforms that we’ve kind of lost touch with the human tastemaker. We all have voices we love following on Twitter or Instagram or TikTok but those voices get lost in the feed. We sometimes lose track of them and we sometimes don’t see their content. Those feeds are also not serving those creators particularly well because the business models are all based on advertising and the creators don’t get access to the bulk of that revenue. Through the book, I propose that one answer to Filterworld, to the dominance of these algorithmic feeds, is to find those human voices. Find tastemakers who you like and really follow them and support them and build a connection with those people.

Thinking about your own taste doesn’t have to be elitist. Fundamentally it’s just about creating a human connection around a piece of culture that you enjoy, and that should be open to anyone. It’s literally telling a friend why you like this specific song, or saying, “We should go see this movie, because I like the director because of XYZ reasons.”

Tastemaking is almost just being more conscientious about cultural consumption, being more intentional in the way that we’ve become totally intentional about food, right? Food is such a source of identity and community, and we take pride in what we eat, what restaurants we go to, what we cook. I would love it if people took more pride in going to a gallery, going to a library, going to a concert series at a concert hall. I think those are all acts of human tastemaking that can be really positive.

And all the things you mentioned are also things outside the house.

Yes. You’re coming together with other people in appreciation of the kind of culture you like to consume. And that’s really good. That helps everyone.

I want to finish by talking about the idea of ambient culture. You clearly appreciate ambient music, and in Filterworld you describe genres like lo-fi hip-hop and Japanese City Pop as music that feels almost designed for the algorithm. Our feeds seem to push us toward ambient content: stuff that’s frictionless and easy to ignore. So I’m wondering, is that always a bad thing? When is ambience necessary and when is it detrimental?

I do really enjoy ambient content. My first book was about minimalism, which has a kind of ambient quality. I wrote an essay about Emily in Paris and ambient TV. I've written about Brian Eno a lot, the musician who coined the term ambient music. That kind of art fulfils a function: to put your brain at rest. It provides a pleasant background at a technological moment when we have a lot of distractions. Ambient TV is maybe the perfect TV to look at your phone in front of. It relies on the presence of that second screen to complement it. The TV show doesn’t have to be that interesting because your phone is interesting.

The problem becomes that through algorithmic recommendations, so much content is pushed towards ambience, and you never want all of your stuff to be ambient. You don’t only want to consume ambient art because then what are you actually paying attention to? If everything exists as a soothing background, what’s actually provoking you? What’s leading you somewhere new?

I think the critique goes back to Brian Eno’s definition of ambient music, which was that the music has to be “as ignorable as it is interesting.” You have to be able to ignore it. It can be in the background, but you should also be able to pay attention to it and be rewarded by your attention to it. I feel like a lot of culture now only falls into that former category. You’re only able to ignore it. Once you start paying attention, there’s nothing really gripping there. Certainly with TikTok and Spotify playlists, there’s this prioritisation of the soothing, numbing quality of ambient content. Functional stimulus in the form of culture is so big these days, whether it’s ambient music or ASMR videos.

Sleep sounds…

So now sometimes, culture exists in a functional context rather than an artistic context. You’re like, “Oh I watch The Office to fall asleep,” or, “I listen to this track while I run because it sustains my exercise.” I personally always want to make an argument for culture for its own sake and for thinking deeply about artistic process and ideas.

Originally published on Esquire US

The Ray-Ban Meta Smart Glasses. RAY-BAN

I don't quite know how to feel about the new Ray-Ban Meta Smart Glasses, especially now that they run on AI. We get it, it's the whole hands-free, first-person POV experience ("Hey Meta, share this photo I took with just my literal face"). The convenience is clearly purposed for content creation, livestreaming and all that jazz. Allowing users to preview social media comments in real time, even audibly, the ambitious eyewear also doubles as a pair of headphones and takes phone calls. Perhaps Meta thinks we aren't glued enough to our phones as it is.

Previously on Ray-Ban Meta Smart Glasses…

In partnership with EssilorLuxottica, the first generation—called "Ray-Ban Stories", because why bother hiding what they're really for—was launched in September 2021. They came in three styles (Wayfarer, Round and Meteor), one colour (the very exciting black, in shiny or matte) and two Transitions lens options (the just-as-exciting grey and brown).

The second iteration, now streamlined and lighter, boasts up to 150 frame and lens design combinations. More importantly, first-hand reviews are actually calling them comfortable. Water resistance clocks in at an IPX4 rating (splashes only, so maybe hold off on the skinny dipping).

Fancy design gif. RAY-BAN

Software upgrades

The biggest change, though, would undoubtedly be replacing the 5MP camera with an ultra-wide 12MP one. Capable of recording 1080p video, up from a prior 780p, in 60-second stints, it now defaults—surprise, surprise—to portrait rather than landscape. It also went from one microphone, which apparently wasn't much good in a strong breeze, to a whopping five, including one on the nose bridge for true 360-degree audio capture.

There's a marked difference in the listening experience too, via a 50 percent increase in maximum volume and better directional output. Meaning you can continue discreetly enjoying the K-pop band you pretend not to like, unless you're standing close by in a silent room.

For privacy, which was a priority Meta strangely felt the need to emphasize, a white light blinks whenever the device is recording. Minimizing the creep factor is something to appreciate when photo and video functions are easily activated by touchpads on the glasses' stems. Interestingly, this is also why certain frame colour options such as beige were removed: the LED was harder to spot against them.

Operating on Qualcomm’s Snapdragon AR1 Gen 1 processor, with eight times more internal storage at 32GB, the glasses allegedly last up to four hours of active use and come with a nifty sunglass charging case… which takes approximately 75 minutes for a full charge.

Wireless charging case. RAY-BAN

The AI bit

Besides taking your annoying voice commands, the integrated Meta AI is slated for an update next year that will enable interaction with AR surroundings. Augmented reality is an intriguing direction to head in, when gadgets like Google Glass and Bose Frames never really took off. Which raises the question: why, when the first generation didn't gain much traction two years ago, are they pitching a new version now?

Does the company know something we don't about the near future that produces this unfounded confidence in consumer demand? Will there be another pandemic where we will all be forced indoors to see the resurgence of virtual reality, NFTs and cryptocurrency? In other words, will the Ray-Ban Meta Smart Glasses finally be cool? And will I ever get to answering these speculative questions as opposed to simply throwing them out there? I guess some things we'll never know.

Ray-Ban Meta Smart Glasses are up for preorder now on Ray-Ban / Meta and on sale 17 October from USD299.

I'm not certain whether James Taylor meant to predict the takeover of artificial intelligence and the death of our collective imagination in his 1970 song “Fire and Rain.” Still, somewhere a music teacher is saying to herself: “Called it.”

That teacher is Miss Molloy—a bowl-cutted, crochet-sweatered, denim-skirted woman of 23 or 53—who taught our third-grade music class. One autumn morning, after we sang “Fire and Rain” off mimeographed lyric sheets, Miss Molloy taught us what the song was about, which was the robot apocalypse. “Suzanne, the plans they made put an end to you” meant she had succumbed to the computer chip in her brain, as had all of humanity. This left Sweet Baby James the last remaining human, with the song he’d written her, but he “just can’t remember who to send it to,” because his own chip had been implanted and the surrender of his own consciousness had begun. Pretty chilling stuff for third graders, but we absorbed it, uncritically.

Fifteen years later, I was in a friend’s dorm room listening to “Fire and Rain,” and I said, “I love this song, as scary as it is.” My friend looked at me with concern. I continued, “With the robots and everything?” And then about four seconds later it hit me: I’m going to have to make up a pseudonym for that teacher, because she absolutely got high.

That assessment stands, but listen: It’s 2023, I have at least three pieces of wearable tech on my body at all times, and AI has come for my job. But the most insidious development is the robots that curate our choices, guiding us on what to read, watch, and listen to. When you open Spotify, dozens of playlists wait for you—none of which you or anyone you know created. We have surrendered our taste to the machine. And what’s worse, we’re starting to forget we ever lived a different way.

Miss Molloy’s interpretation of “Fire and Rain” is objectively bananapants. But was she wrong about the future?

There's a line in Nick Hornby's novel High Fidelity in which the record-store-owning main character says, “What really matters is what you like, not what you are like.” Twenty-eight years after the release of the book, Spotify has prompted new questions: What do we lose when we stop making our own playlists? If the algorithm decides what we like, then what are we like?

“There’s no way a Spotify playlist is as good as a mixtape, or at least mine aren’t,” Hornby tells me. “Because you had to do things in real time, you had the opportunity to think and hear. You were reminded of a lyric, a beat, a sound that would lead you to the next song.” You had to think about who you were giving it to and how you could change their world. “There’s no construction now. In the digital era, it’s just: Here’s some songs you might like.” What I miss—just enough to remember it, for now—is a well-curated jukebox, the way a dollar-bill-huffing machine with a 100-compact-disc capacity could express the personality of a place. My favourite was at the Boiler Room, a friendly, scruffy gay bar in the East Village. This was the ’90s, and we East Village gays shunned the mainstream, so the selection was just slightly to the left of it: Jon Spencer Blues Explosion, Stereolab, Cibo Matto. The exact right soundtrack for a room packed with guys who could fit into X-girl T-shirts. A curatorial ear and a hive mind.

I returned to the Boiler Room recently, and like most places, it has adopted an Internet-enabled jukebox. Every song that exists on streaming, at your fingertips. But without curation, everything is also nothing. The hive mind breaks down into individual bees. A proper jukebox, like a homemade mixtape, is already largely a memory.

And soon enough it won’t be. It will be a thing you forgot even existed in the first place, like decent mass-produced chocolate, like a flight that doesn’t end with a pitch for a credit card. Like the Boiler Room itself, which is closing later this year.

“The absence of curveballs in algorithmic playlists is noticeable,” Hornby says. “I don’t want something that sounds exactly like what I usually listen to, just like I don’t want recommendations for books in a similar vein to the ones I write.” Right around the time Hornby was writing High Fidelity, the best mixtape I ever got came from a college friend named Brady. It arrived in my P. O. Box just before I graduated and moved to New York City. There were pop songs, left-field disco tracks, and at the end of side 1, “Being Alive,” from Stephen Sondheim’s Company. I’d never heard it, and it was a gut punch: the precise sound of my soul as I prepared to start my life. A reminder to be less aloof in the real world than I had been in college. An I see you from someone I didn’t know was watching. A life changer.

The algorithm can’t be Brady. It can give you what it knows you want. But without human insight, it cannot give you what you need. It will not encourage you to evolve, because it cannot work as well if you do. The algorithm can know you. Scarily well. But it cannot love you. Commit an act of rebellion today: Make a playlist for someone. Assemble it with care. Throw in some curveballs. Choose a song or two that will make them feel seen, give it a clever name, text it to the person. Do it right now, before the chip takes hold of your brain.

I want to find Miss Molloy. I want to tell her I remember. All I have to do is find her. Anybody got an extra ticket to Burning Man?

Illustration by Matt Mahurin

Originally published on Esquire US

Having dedicated my career to funding, building, and growing new technologies, I’ve been present for many disruptions. In the post-dot-com bubble pop era, we witnessed the revolution of communications, online payments, marketplaces, advertising, and media by companies like Skype, Amazon, Uber, Google, and Netflix. These companies leveraged new technologies to build new business models or digitise ones that, in some cases, had existed for hundreds of years. Long after Facebook became a verb, we witnessed the eruption of Web3 and the rise of evangelist techbro social media influencers shilling their special brew of lifestyle grandstanding and NFT minting.

But, as a former venture capitalist, I’ve yet to see a nascent technology so wholly and rapidly evangelised outside of its own circles of investors, builders, and users as Artificial Intelligence. It’s almost as if Steve Jobs rose from the grave to deliver a keynote on the potential of AI. So, out of curiosity for Steve’s insights on the topic, I tasked OpenAI’s ChatGPT tool with bringing him back to life to introduce AI in the style of one of his famous keynote speeches:

Good morning everyone, and welcome to this exciting keynote on artificial intelligence.

Today, I want to talk about a revolutionary technology that has the power to change the world as we know it. It’s a technology that’s been around for decades, but it’s only in recent years that we’ve seen its true potential start to unfold. I’m talking, of course, about artificial intelligence.

AI has the power to automate many of the tasks that we do every day, freeing us up to focus on the things that matter most. It has the power to make our lives easier, more efficient, and more connected. But more importantly, it has the power to solve some of the world’s biggest problems.

Think about it. With AI, we can tackle issues like climate change, poverty, and even the spread of diseases. The possibilities are truly endless, and the potential for impact is massive.

At the same time, we need to be mindful of the potential risks that come with this technology. We need to ensure that AI is developed in a responsible and ethical way, and that it serves the interests of humanity as a whole.

So let’s get to work. Let’s harness the power of artificial intelligence to build a better future for all of us. Thank you.

AI Steve Jobs Speech generated by ChatGPT

As AI Steve highlighted above, AI itself is not new. Prior to the release of tools like DALL-E, we saw AI leveraged for specific use cases across most major industries such as marketing, cybersecurity, and even CGI / animation in media. We’ve been using the technology for decades to classify, analyse, and create data (including text and images) for narrow sets of tasks, which is why it’s referred to as “Artificial Narrow Intelligence”. In contrast, new models allow for many use cases with no additional training or fine-tuning. This evolution from the previous generation of AI to today’s Generative AI models underpinning applications such as ChatGPT, DALL-E and others has been driven by advances in computing power, cloud data storage, and machine learning algorithms.

Unlike Web3, AI has already demonstrated its usefulness and potential beyond theoretical adoption. Also, unlike its predecessor in the timeline of popular new technologies, it doesn’t require mass adoption of a new protocol or regulatory approval. There are two broad applications of AI: the optimization of existing digital processes and the digitization of human tasks.

Generated with DALL-E | Prompt ‘Dubai in the style of Edward Hopper’

Optimization speeds up existing digital processes and reduces the need for human input. A straightforward example is chatbots, which had their moment in the latter half of the 2010s and are now making a comeback, armed with better-trained algorithms. Chatbots trained on existing customer care data sets will replace notoriously difficult-to-navigate FAQ pages on websites and costly call centres. The result will be lower costs and improved customer satisfaction.
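The retrieval idea behind such FAQ chatbots can be sketched in a few lines. This is a toy illustration only, using simple word overlap instead of a trained language model, and the FAQ entries and function names are invented for the example:

```python
import re

# Toy sketch of a retrieval-based FAQ bot: match a user's question to the
# closest stored FAQ entry by word overlap. A production chatbot would use
# a trained model or text embeddings; this only illustrates the principle.

def tokenize(text: str) -> set[str]:
    # Lowercase and keep alphabetic word tokens, dropping punctuation.
    return set(re.findall(r"[a-z']+", text.lower()))

def best_answer(question: str, faq: dict[str, str]) -> str:
    q = tokenize(question)
    # Jaccard similarity between the user's question and each stored question.
    def score(stored: str) -> float:
        s = tokenize(stored)
        return len(q & s) / len(q | s)
    return faq[max(faq, key=score)]

# Hypothetical customer-care data set.
faq = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my order": "Track your order from the Orders tab in your account.",
    "how do i cancel my subscription": "Go to Settings > Billing and choose Cancel.",
}

print(best_answer("I forgot my password, how can I reset it?", faq))
```

Even this crude matcher shows why curated customer-care data matters more than the matching logic itself: the bot can only ever be as helpful as the answers it retrieves from.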

This brings us to the frightening or exciting – depending on who you ask – scenarios where AI leads to the replacement of human roles. In the short term, this could affect copywriting, software engineering, art, animation, business analysis, and journalism. But, again, this isn’t a futuristic dream pontificated by the Silicon Valley elite on their three-hour podcasts; it is happening today. For example, Buzzfeed recently announced that it would start using ChatGPT to write journalistic pieces after technology journalism outlet CNET was found to have used the same tool to write personal finance articles.

Readers should consider that, despite the countless applications of AI, there is no cause for alarm about the human worker becoming redundant. The evolution of technology is inevitable, and we are better served by preparing for it than by resisting it. Many pundits draw comparisons to the widespread fears that the industrial revolution would replace jobs. In the short term, these fears are unfounded. So long as the models underpinning applications remain in a state of ANI, even the most advanced tools will require human input and oversight. Instead, these tools will complement and augment human work by replacing menial, repetitive tasks in creative and technical fields. For example, this article was reviewed using Grammarly to check for spelling and grammar mistakes.

Although we’re progressing beyond ANI, there’s still quite the journey ahead of us and little consensus on when we might reach our destination. Some scientists estimate that we’re decades away from progressing to the next state of Artificial Intelligence: Artificial General Intelligence (AGI). AGI would offer capabilities such as sensory perception, advanced problem-solving, fine motor skills, and even social and emotional engagement. There’s quite a distance to travel from writing Shakespearean sonnets about lost socks in the dryer to developing a personality like that of Samantha, the protagonist’s AI companion in the 2013 film Her. It’s impossible to predict when AI might qualify as AGI; estimates range from a decade or two to never.

When it comes to Arabic, language models need to catch up. Today’s models are predominantly trained on content publicly available on the internet: webpages, Reddit, and Wikipedia make up approximately 85% of ChatGPT’s training data set, for example. Considering that approximately 60% of written content online is in English and less than 1% is in Arabic, the inputs necessary to achieve the same quality of output in Arabic are scarce. It’s no secret that English is the lingua franca of Middle East business. Still, we should ask ourselves whether this will further subdue the use of Arabic in such settings. The impetus to ensure the development of the Arabic language in technology and business settings lies with both the private and public sectors in wealthy Gulf countries.

While there are reasons to celebrate AI’s coming of age, we need to keep our feet on the ground. The limitations of AI are compounded by questions of ethical standards, reliability, accuracy, and truthfulness raised by academics such as Gary Marcus, a leading AI sceptic. Even Mira Murati, the CTO of OpenAI (creator of the models underpinning DALL-E and ChatGPT), argues for regulatory oversight of AI. Questions remain on how to moderate offensive model outputs, address intellectual property infringement, police disinformation, and preserve academic honesty, to name a few.

“How do you get the model to do the thing that you want it to do, and how do you make sure it’s aligned with human intention and ultimately in service of humanity?” 

Mira Murati, CTO, OpenAI

Make no mistake, AI is beyond the point of no return, but that doesn’t mean we can’t harness its power to empower our workforces and transform our lives. Although the excitement surrounding AI’s potential is justified, the challenges of its usage and misuse are much more significant than those of previous generations of technology and should not be taken lightly. We have at our disposal an incredible new tool; however, we must balance our eagerness with mindfulness of its risks and implications, and with careful regulation.

Rayan Dawud is a former venture capitalist who has held senior roles at Careem and Outliers Venture Capital in Dubai. He’s currently on a career break in London, where he’s exploring Artificial Intelligence.

Featured image generated using DALL-E with prompt ‘android from the film Ex Machina in a Hopper painting’

Originally published on Esquire ME