In the age of AI, it can feel as if this technology’s march into our lives is inevitable. From taking our jobs to writing our poetry, AI is suddenly everywhere we don’t want it to be.

But it doesn’t have to be this way. Just ask Madhumita Murgia, the AI editor at The Financial Times and the author of the barn-burning new book Code Dependent: Living in the Shadow of AI. Unlike most reporting about AI, which focuses on Silicon Valley power players or the technology itself, Murgia trains her lens on ordinary people encountering AI in their daily lives.

This “global precariat” of working people is often irrevocably harmed by these encounters with AI; as Murgia writes, the implementation and governance of algorithms have become “a human rights issue.” She tells Esquire, “Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.”

Murgia takes readers around the globe in a series of immersive reported vignettes, each one trained on AI’s damaging effects on the self, from “your livelihood” to “your freedom.” In Amsterdam, she highlights a predictive policing program that stigmatises children as likely criminals; in Kenya, she spotlights data workers lifted out of brutal poverty but still vulnerable to corporate exploitation; in Pittsburgh, she interviews UberEats couriers fighting back against the black-box algorithms that cheat them out of already meagre wages.

Yet there are also bright spots, particularly a chapter set in rural Indian villages, where under-resourced doctors use AI-assisted apps as diagnostic aids in their fight against tuberculosis. Despite the prevalent sense of impending doom, there’s still time to reconfigure our relationship to this technology, Murgia insists. “This is how we should all see AI,” she tells Esquire, “as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us.”

Murgia spoke with Esquire by Zoom from her home in London about data labour, the future of technology regulation, and how to keep AI from reading bedtime stories to our children.


ESQUIRE: What is data colonialism, and how do we see it manifest through the lens of AI?

MADHUMITA MURGIA: Two academics, Nick Couldry and Ulises A. Mejias, came up with this term to draw parallels between modern colonialism and older forms of colonialism, like the British colonisation of India and other parts of the world. The resource extraction of that period harmed the lives of those who were colonised; today, corporations, particularly tech companies, are performing a similar kind of extraction. In this case, rather than oil or cotton, the resource is data.

In reporting this book, I saw how big Silicon Valley firms go to various parts of the world I visited, like India, Argentina, Kenya, and Bulgaria, and use the people there as data points to build systems that become trillion-dollar companies. But the people never see the full benefits of those AI systems to which they’ve given their data. Whether it was health care, criminal justice, or government services, again and again you could see the harms perpetrated on mostly marginalised groups, because that’s how the AI supply chain is built.

You write that data workers “are as precarious as factory workers; their labour is largely ghost work and they remain an undervalued bedrock of the AI industry.” What would it take to make their labour more apparent, and what would change if the reality of how AI works was more widely understood?

For me, the first surprise was how invisible these workers really are. When I talk to people, they’re shocked to learn that there are factories of real humans who tag data. Most assume that AI teaches itself somehow. So even just increasing understanding of their existence means that people start thinking, There’s somebody on the other end of this. Beyond that, the way the AI supply chain is set up, we only see the engineers building the final product. We think of them as the creators of the technology, so automatically, all the value is placed there.

Of course, these are brilliant computer scientists, so you can see why they’re paid millions of dollars for their work. But because the workers on the other end of the supply chain are so invisible, we underplay what they’re worth, and that shows up in the wages. Yes, these are workers in developing countries, and this is a standard outsourcing model. But when you set their wage of $2.50 an hour for work that goes into the technology inside a Tesla car against what a Tesla costs, what Elon Musk is worth, or what that company is making, the disparity is huge. There’s just no way these workers benefit from being a part of this business.

If you hear technologists talking about it, they say we all get brought along for the ride—that productivity rises, bottom lines rise, money is flushed into our economy, and all of our lives get better. But what we’re seeing in practice is that those who are most in need of these jobs are not seeing the huge upside that AI companies are starting to see, and so we’re failing them in that promise. We have to decide as a society: What is fair pay for somebody who’s part of this pipeline? What labour rights should they have? These workers don’t really have a voice. They’re so precarious economically. And so we need to have an active discussion. If there are going to be more AI systems, there’s going to be more data labour, so now is the time for us to figure out how they can see the upside of this revolution we’re all shouting from the rooftops about.

One of our readers asks: What are your thoughts on publishers like The New York Times suing OpenAI for copyright infringement? Do you think they’ll succeed in protecting journalists from seeing their work scraped and/or plagiarised?

This hits hard for me, because I’m both the person reporting on it and the person that it impacts. We’ve seen how previous waves of technological growth, particularly the social media wave, have undermined the press and the publishing industry. There’s been a huge disintermediation of the news through social media platforms and tech platforms; these are now the pipes through which people get information, and we rely on them to do it for us. We’ve come to a similar inflection point where you can see how these companies can scrape the data we’ve all created and generate something that looks a lot like what we do with far less labour, time, and expertise.

It could easily undermine what creative people spend their lives doing. So I think it’s really important that the most respected and venerable institutions take a stand for why human creativity matters. Ultimately, I don’t know what the consequences will be. Maybe it’s a financial deal where we’re compensated for what we’ve produced, rather than it being scraped for free. There are a range of solutions. But for me, it’s important that those who have a voice stand up for creative people in a world where it's easy to automate these tasks to the standard of “good enough.”

Another reader asks: What AI regulations do you foresee governments enacting? Will ethical considerations be addressed primarily through legislation, or will they rely on nonlegal frameworks like ethical codes?

Especially over the last five years, there have been dozens and dozens of codes of conduct, all self-regulating. It’s exactly like what we saw with social media. There has been no Internet regulation, so companies come up with their own terms of service and codes of conduct. I think this time around, with the AI shift, there’s a lot more awareness and participation from regulators and governments.

There’s no way around it; there will be regulation because regulation is required. Even the companies agree with this, because you can’t define what’s ethical when you’re a corporation, particularly a profit-driven corporation. If these things are going to impact people’s health, people’s jobs, people’s mortgages, and whether somebody ends up in jail or gets bail, you need regulation involved. We’ll need lines drawn in the sand, and that will come via the law.

In the book, you note how governments have become dependent on these private tech companies for certain services. What would it look like to change course there, and if we don’t, where does that road lead?

It goes back to that question of colonialism. I spoke to Cori Crider, who used to be a lawyer for Guantanamo Bay prisoners and is now fighting algorithms. She sees them as equally consequential, which is really interesting. She told me about reading a book about the East India Company and the Anglo-Iranian Oil Company, which played a role in the Iranian coup in the ’50s, and how companies become state-like and the state becomes reliant on them. Now, decades later, the infrastructure of how government runs is all done on cloud services.

There are four or five major cloud providers, so when you want to roll out something quickly at scale, you need these infrastructure companies. It’s amazing that we don’t have the expertise or even the infrastructure owned publicly; these are all privately owned. It’s not new, right? You do have procurement from the private sector, but it’s so much more deeply embedded when it comes to cloud services and AI, because there are so few players who have the knowledge and the expertise that governments don’t. In many cases, these companies are richer and have more users than many countries. The balance of who has the power is really shifting.

When you say there are so few players, do you see any sort of antitrust agitation here?

In the U.S., the FTC is looking at this from an antitrust perspective. They’re exploring this exact question: “If you can’t build AI services without having a cloud infrastructure, then are you in an unfair position of power? If you’re not Microsoft, Google, Amazon, or a handful of others, and you need them to build algorithms, is that fair? Should they be allowed to invest and acquire these companies and sequester that?” That’s an open question here in the UK as well. The CMA, which is our antitrust body, is investigating the relationships between Microsoft, OpenAI, and startups like Mistral, which have received investment from Microsoft.

I think there will be an explosion of innovation, because that’s what Silicon Valley does best. What you’re seeing is a lot of people building on top of these structures and platforms, so there will be more businesses and more competition in that layer. But it’s unclear to me how you would ever compete on building a foundational model like a GPT-4 or a Gemini without the huge investment, access to infrastructure, and data that these three or four companies have. So I think there will be innovation, but I’m not sure it will be at that layer.

In the final chapter of the book, you turn to science fiction as a lens on this issue. In this moment where the ability to make a living as an artist is threatened by this technology, I thought it was inspired to turn to a great artist like Ted Chiang. How can sci-fi and speculative fiction help us understand this moment?

You know, it’s funny, because I started writing this book well before ChatGPT came out. In fact, I submitted my manuscript two months after ChatGPT came out. When it did come out, I was trying to understand, “What do I want to say about this now that will still ring true in a year from now when this book comes out?” For me, sci-fi felt like the most tangible way to actually explore that question when everything else seemed to be changing. Science fiction has always been a way for us to imagine these futures, to explore ideas, and to take those ideas through to a conclusion that others fear to see.

I love Ted Chiang’s work, so I sat down to ask him about this. Loads of technologists in Silicon Valley will tell you they were inspired by sci-fi stories to build some of the things that we writers see as dystopian, but technologists interpret them as something really cool. We may think they’re missing the point of the stories, but for them, it’s a different perspective. They see it through this optimistic lens, which is something you need to be an entrepreneur and build stuff like the metaverse.

Sci-fi can both inspire and scare, but I think more than anything, we are now suffering from a lack of imagination about what technology could do in shaping humans and our relationships. That’s because most of what we’re hearing is coming from tech companies. They’re putting the products in our hands, so theirs are the visions that we receive and that we are being shaped by. That’s fine; that’s one perspective. But there are so many other perspectives I want to hear, whether that’s educators or public servants or prosecutors. AI has entered those areas already, but I want to hear their visions of what they think it could do in their world. We’re very limited on those perspectives at the moment, so that’s where science fiction comes in. It expands our imagination of the possibilities of this thing, both the good and the bad, and figuring out what we want out of it.

I loved what Chiang had to say about how this technology exposes “how much bullshit we are required to generate and deal with in our daily lives.” When I think about AI, I often think that these companies have gotten it backwards. As a viral tweet so aptly put it: “I want AI to do my laundry and dishes so I can do my art and writing, not for AI to do my art and writing so I can do my laundry and dishes.” That’s a common sentiment—a lot of us would like to see AI take over the bullshit in our lives, but instead it’s threatening our joys. How have we gotten to this point where the push is for AI to do what we love and what makes us human instead of what we’d actually like to outsource?

I think about this all the time. When it started off, automation was just supposed to help us do the difficult things that we couldn’t. Way back at the beginning of factory automation, the idea was “We’ll make your job safer, and you can spend more time on the things that you love.” Even with generative AI, it was supposed to be about productivity and email writing. But we’ve slid into this world where it’s undermining the things that, as you say, make us human. The things that make our lives worth living and our jobs worth doing. It’s something I try to push back on; when I hear this assumption that AI is good, I have to ask, “But why? What should it be used for?” Why aren’t we talking about AI doing our taxes—something that we struggle with and don’t want to spend our time doing?

This is why we need other voices and other imaginings. I don’t want AI to tell bedtime stories to my children. I don’t want AI to read all audiobooks, because I love to hear my favourite author read her own memoir. I think that’s why that became a meme and spoke to so many people. We’ve all been gaslighted into believing that AI should be used to write poetry. It’s part of a shift we’ll all experience together from saying, “It’s amazing how we’ve invented something that can write and make music” to “Okay, but what do we actually need it for?” Let’s not accept its march into these spaces where we don’t want it. That’s what my book is about: about having a voice and finding a way to be heard.

I’m reminded of the chapter about a doctor using AI as a diagnostic aid. It could never replace her, but it’s a great example of how this technology can support a talented professional.

She’s such a good personification of how we can preserve the best of our humanity but be open to how AI might help us with what we care about; in her case, that’s her patients. But crucially, her patients want to see her. That’s why I write about her previous job, where people were dying and she didn’t have the equipment to help them. She had to accept that there were limitations to what she could do as a doctor, but she could perform the human side of medicine, which people need and appreciate. This is how we should all see AI: as a way to preserve the world we know and believe in what we bring to it, but then use it to augment us. She was an amazing voice to help me understand that.

With the daily torrent of frightening news about the looming threat of AI, it’s easy to feel hopeless. What gives you hope?

I structured my book to start with the individual and end with wider society. Along the way, I discovered amazing examples of people coming together to fight back, to question, to break down the opacity in automation and AI systems. That’s what gives me hope: that we are all still engaging with this, that we’re bringing to it our humanness, our empathy, our rage. That we’re able to collectivise and find a way through it. The strikes in Hollywood were a bright spot, and there’s been so much change in the unionisation of gig workers across the world, from Africa to Latin America to Asia. It gives me hope that we can find a path and we’re not just going to sleepwalk into this. Even though I write about the concentration of power and influence that these companies have, I think there’s so much power in human collectivism and what we can achieve.

Also, I believe that the technology can do good, particularly in health care and science; that’s an area where we can really break through the barriers of what we can do as people and find out more about the world. But we need to use it for that and not to replace us in doing what we love. My ultimate hopefulness is that humans will figure out a way through this somehow. I’ve seen examples of that and brought those stories to light in my book. They do exist, and we can do this.

Originally published on Esquire US


There's a ton going on in the AI space, but if it's associated with meme lord Elon Musk, we naturally pay attention. The billionaire's Artificial Intelligence startup xAI has just officially launched its large language model into the wild wild web.

Grok was unleashed in chatbot form last year, only accessible with a Premium+ subscription on X (formerly Twitter, as you already know yet we somehow still feel obliged to mention). Now, it's available on GitHub under the Apache License 2.0, which allows commercial use, modification, and distribution, albeit without liability or warranty.

Which means

Developers, researchers, and maybe enthusiasts with enough Internet knowledge and a supercomputer can build on Grok-1 and directly influence how xAI updates future versions of the model. The base model weights and network architecture have been released, but not the training code, which means users can't see exactly what data Grok learnt from. But to say it's text data from X wouldn't be too much of a stretch.

Artwork via prompt proposed by Grok on Midjourney

What's the big deal about Grok?

Created by a team that helped shape OpenAI's ChatGPT (more on that later), Grok had one thing going for it: access to real-time data on X. While that's live information, it is also a source highly susceptible to inaccuracy.

Grok-1 is currently "not fine-tuned for specific application such as dialogue". Yet, it's modeled after Douglas Adams' Hitchhiker’s Guide to the Galaxy as a cheeky alternative to relatively serious rival models from OpenAI (GPT-4), Meta (Llama 2), Google (Gemini, Gemma 2B/7B) and others.

Grok has two modes you can toggle—'fun' and 'regular'. No points for guessing which is default. If that wasn't enough to drive the point home, its site spells out that "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!"

And if you're wondering how Musk never fails to come up with alien-sounding names and what Grok means, we can answer the latter. It's slang for intuitive understanding, or establishing rapport.

There goes the Walter White dream.

The thing about open sourcing

Musk's stance on open-sourcing AI is something he has been very vocal about. The most notable target of his criticism has been OpenAI, which the businessman helped co-found and initially fund, but eventually sued for breaching an agreement to keep it non-profit. OpenAI subsequently revealed emails suggesting that Musk was well aware of the plans and had, in fact, wanted the company folded into Tesla.

ANYWAY. Making generative AI tools a free-for-all in the name of democratisation has always been iffy waters. There's always the risk of abuse in the hands of unpredictable, fallible human beings. With fake news already its own monster, Gen AI sometimes feels like the steroids behind its hulking mass.

All things considered, it's definitely a buzz-worthy announcement about exciting tech. But the safest conclusion we common folk can draw about the consequences is this: only time will tell.

More about Grok here.

Having dedicated my career to funding, building, and growing new technologies, I’ve been present for many disruptions. In the era after the dot-com bubble popped, we witnessed the revolution of communications, online payments, marketplaces, advertising, and media by companies like Skype, Amazon, Uber, Google, and Netflix. These companies leveraged new technologies to build new business models or digitise ones that, in some cases, had existed for hundreds of years. Long after Facebook became a verb, we witnessed the eruption of Web3 and the rise of evangelist techbro social media influencers shilling their special brew of lifestyle grandstanding and NFT minting.

But, as a former venture capitalist, I’ve yet to see a nascent technology so wholly and rapidly evangelised outside of its own circles of investors, builders, and users as Artificial Intelligence. It’s almost as if Steve Jobs rose from the grave to deliver a keynote on the potential of AI. So, curious about Steve’s insights on the topic, I tasked OpenAI’s ChatGPT with bringing him back to life to introduce AI in the style of one of his famous keynote speeches:

Good morning everyone, and welcome to this exciting keynote on artificial intelligence.

Today, I want to talk about a revolutionary technology that has the power to change the world as we know it. It’s a technology that’s been around for decades, but it’s only in recent years that we’ve seen its true potential start to unfold. I’m talking, of course, about artificial intelligence.

AI has the power to automate many of the tasks that we do every day, freeing us up to focus on the things that matter most. It has the power to make our lives easier, more efficient, and more connected. But more importantly, it has the power to solve some of the world’s biggest problems.

Think about it. With AI, we can tackle issues like climate change, poverty, and even the spread of diseases. The possibilities are truly endless, and the potential for impact is massive.

At the same time, we need to be mindful of the potential risks that come with this technology. We need to ensure that AI is developed in a responsible and ethical way, and that it serves the interests of humanity as a whole.

So let’s get to work. Let’s harness the power of artificial intelligence to build a better future for all of us. Thank you.

AI Steve Jobs Speech generated by ChatGPT

As AI Steve highlighted above, AI itself is not new. Prior to the release of tools like DALL-E, we saw AI leveraged for specific use cases across most major industries such as marketing, cybersecurity, and even CGI / animation in media. We’ve been using the technology for decades to classify, analyse, and create data (including text and images) for narrow sets of tasks, which is why it’s referred to as “Artificial Narrow Intelligence”. In contrast, new models allow for many use cases with no additional training or fine-tuning. This evolution from the previous generation of AI to today’s Generative AI models underpinning applications such as ChatGPT, DALL-E and others has been driven by advances in computing power, cloud data storage, and machine learning algorithms.

Unlike Web3, AI has already demonstrated its usefulness and potential beyond theoretical adoption. Also, unlike its predecessor in the timeline of popular new technologies, it doesn’t require mass adoption of a new protocol or regulatory approval. There are two broad applications of AI: the optimisation of existing digital processes and the digitisation of human tasks.

Generated with DALL-E | Prompt ‘Dubai in the style of Edward Hopper’

Optimisation increases the speed of existing digital processes and reduces the need for human input. A straightforward example would be chatbots, which had their moment in the latter half of the 2010s and are making a comeback, armed with better-trained algorithms. Chatbots trained on existing customer-care data sets will replace notoriously difficult-to-navigate FAQ pages on websites and costly call centres. The result will be a lower cost of doing business and improved customer satisfaction.

This brings us to the frightening or exciting – depending on who you ask – scenarios where AI leads to the replacement of human roles. In the short term, this could affect work ranging from copywriting and software engineering to art, animation, business analysis, and journalism. But, again, this isn’t a futuristic dream pontificated by the Silicon Valley elite on their three-hour podcasts; this is happening today. For example, Buzzfeed recently announced that it would start using ChatGPT to write journalistic pieces, after the technology journalism outlet CNET was found to have used an AI tool to write personal finance articles.

Readers should consider that despite the countless applications of AI, there is no cause for alarm when it comes to making the human worker redundant. The evolution of technology is an inevitability, and we are better served by preparing for it rather than resisting it. Many pundits draw comparisons to the widespread fears that the industrial revolution would replace jobs. In the short term, these fears are unfounded. So long as the models underpinning applications exist in a state of ANI, even the most advanced tools will require human input and oversight. Instead, these tools will complement and augment human work by replacing menial, repetitive tasks in creative and technical fields. For example, this article was reviewed using Grammarly to check for spelling and grammar mistakes.

Although we’re progressing beyond ANI, there’s still quite the journey ahead of us and little consensus on when we might reach our destination. Some scientists estimate that we’re decades away from progressing to the next stage of Artificial Intelligence: Artificial General Intelligence (AGI). AGI would offer capabilities such as sensory perception, advanced problem-solving, fine motor skills, and even social and emotional engagement. There’s quite a distance to travel from writing Shakespearean sonnets about lost socks in the dryer to developing a personality like that of Samantha, the protagonist’s AI companion in the 2013 film Her. It’s impossible to predict how soon we could begin to describe AI as AGI; estimates range from a decade or two to never.

When it comes to Arabic, language models need to catch up. Today’s models are predominantly trained on content publicly available on the internet: webpages, Reddit, and Wikipedia make up approximately 85% of ChatGPT’s training data set, for example. Considering that approximately 60% of written content online is in English and less than 1% is in Arabic, the inputs necessary to achieve the same quality of output in Arabic are scarce. It’s no secret that English is the lingua franca of Middle East business. Still, we should ask ourselves whether this will further subdue the use of Arabic in such settings. The impetus to ensure the development of the Arabic language in technology and business settings lies with both the private and public sectors in the wealthy Gulf countries.

While there are reasons to celebrate AI’s coming of age, we need to keep our feet on the ground. The limitations on the applications of AI are compounded by questions of ethical standards, reliability, accuracy, and truthfulness raised by academics such as Gary Marcus, a leading AI sceptic. Even Mira Murati, the CTO of OpenAI (creator of the models underpinning DALL-E and ChatGPT), is arguing for regulatory oversight of AI. Questions remain about how to address issues such as moderating offensive model outputs, intellectual property infringement, policing disinformation, and academic honesty, to name a few.

“How do you get the model to do the thing that you want it to do, and how do you make sure it’s aligned with human intention and ultimately in service of humanity?” 

Mira Murati, CTO, OpenAI

Make no mistake, AI is beyond the point of no return, but that doesn’t mean we can’t harness its power to empower our workforces and transform our lives. Although the excitement surrounding AI’s potential is justified, the challenges of its use and misuse are much more significant than those of previous generations of technology and should not be taken lightly. We have at our disposal an incredible new tool; however, we must balance our eagerness to use it with mindfulness of the risks and implications, and with careful regulation.

Rayan Dawud is a former venture capitalist who has held senior roles at Careem and Outliers Venture Capital in Dubai. He’s currently on a career break in London, where he’s exploring Artificial Intelligence.

Featured image generated using DALL-E with prompt ‘android from the film Ex Machina in a Hopper painting’

Originally published on Esquire ME
