by Darrin Zhou
Nominated by Jeremiah Chamberlin for English 425: Immersion Writing
Instructor Introduction
Darrin Zhou's essay was completed for the literary journalism segment of Immersion Writing (English 425). This assignment is a long-form investigation written in a first-person, narrative style that weaves together personal experience with research. Students choose topics that are part of the cultural moment, and spend nearly two months immersed in their material, investigating the subject with a journalistic approach (e.g., interviews, field observations, reportage, etcetera), before ultimately crafting that material into a compelling story. To do so successfully, one must not only skillfully observe and engage with subjects, but also find avenues to get beyond the reporting—to find the story behind the story. And to give life to the material in an artful, compelling way. Darrin’s essay does just that. His investigation is timely and illuminating, elegantly balancing the literary and journalistic elements of the subject, while offering readers insightful reflection, deft characterization, and superb narrative development. All of which culminates marvelously at the end.
— Jeremiah Chamberlin
The New Wave
ANTI-BIG TECH BIG TECH CLUB: printed on white t-shirts, stacked in a pile. Shiny stickers that say: while (breathing): eat, code, sleep optional. The occasional pull-up banner, advertising whatever a student team has made.
V1, a self-described gathering place for hyper-technical and driven people looking to break into the startup space, is gathering for Demo Day: a biannual event where students from their product studio cohort — essentially an early-stage startup incubator — and beyond pitch their businesses to a crowd and show what they’ve accomplished over the course of a year. I catch a glimpse of my aging apartment building through the panoramic windows of the airy conference room we’re perched in, on the top floor of the University of Michigan’s business school, Ross. I’m floating around the booths, watching founders talk to groups of interested bystanders huddled around them. I go up to two shrewd students building something called Concepted AI, which, as I understand it, is a web app that visually tutors you through SAT problems. They give me a demo — a cursor traces diagrams on a digital canvas as an AI-generated voice guides you through some math solution, similar to the videos Sal Khan first made for what would become Khan Academy.
“This is impressive,” I say. “How long did this take you?”
“Oh, 2 nights,” he replies. “I didn’t meet my co-founder until 2 days ago, and then we had this idea and built this. It was an intense couple of nights.”
Most of the other projects are also AI-related, from an AI-assisted development suite for embedded systems engineers, to an AI developmental learning app for young children, to an AI dermatology app, to an email service designed for AI, declaring, in their presentation, “AI agents will be first-class citizens of the internet.” AI. Artificial Intelligence, those words bouncing off the walls of the computer science building like buzzards circling a vole — all anyone can talk about. Everywhere. I vividly remember the day ChatGPT was released in late 2022: me sitting on the greying grass of the campus quadrangle, asking it for tofu recipes, completely shocked that it can produce anything remotely cogent.
“It even gets the measurements kind of right! You realize this passes the Turing test?” I say, rapidly texting my friend with a zeal that would fade as the true capabilities of that model revealed themselves. But even in that excitement, I’d only mentally categorize the thing as some sort of computerized oddity; it was impossible to predict how big it would grow. A survey the Michigan Daily conducted in early 2024 — a year and some change after ChatGPT’s release — found that 58.1% of all students were using the tool to assist with coursework.
I’d made the decision to major in Computer Science around the beginning of 2022. My father was a systems engineer, and I’d always felt something sublime emanating from the FORTRAN code — now, C++ for the newer generation — that he’d worked on. Truth be told, I was never a fan of AI after I figured out the GPTs of the world were here to stay. It took away the mental acuity needed in writing beautiful programs: the process became mush, sticky, instead of the sharp, austere feeling I’d enjoyed. I didn’t want to engage with it much, but increasingly it became impossible to ignore how much AI had shaped the tech world in its image, demanding to be recognized. James Somers wrote, for the New Yorker, “When I got into programming, it was because computers felt like a form of magic. The machine gave you powers but required you to study its arcane secrets—to learn a spell language… Then, one day, it became possible to achieve many of the same ends without the thinking and without the knowledge.
Looked at in a certain light, this can make quite a lot of one’s working life seem like a waste of time.” The world where someone could bury their head in the sand and let the AI wave pass has dissipated; it is time to reach my hand out and see what Artificial Intelligence can offer.
A couple weeks before Demo Day, I make my way into the basement — a small lecture hall shaped like the bulwark of a wooden galleon — for this week’s Ship-It Sunday, a V1 event. The goal is simple: work on whatever you want, whether it be a fun side project or the next big Silicon Valley darling, but don’t do schoolwork. The 35 or so other students greet me not by asking my name, major, or whatever other frivolities, but by saying, “Hello! What are you building?”
Discussion flows freely. The man to my right, a freshman in Computer Science named Vatsal — who had 15,000 or so followers on LinkedIn, I’d found out, looking at his website, where a portrait of him loomed over me on the right side of my screen like a lawyer’s advert — enthusiastically shows me the startup he is a founding member of: an AI-driven notetaker named TwinMind that he describes as a second brain for the user. Its core functionality is transcribing and summarizing whatever audio is coming through your computer or being recorded by your smartphone, packaged into what the app calls memories, which can be searched and prompted through using a large language model. As we speak, he has his phone out, recording our conversation, making a new memory.
“It’s a culture of people who are never complacent with the way they are,” Mihir Arya, another V1 member, tells me over Zoom, fitting me into his busy schedule several days after Ship-It Sunday. He’s building his own startup called Quanta, describing it as a TikTok for research papers. He tells me it’s an effort to democratize the ranking and reach of research papers locked behind academia’s closed doors by, of course, like everyone else, utilizing Artificial Intelligence.
V1 — a supposedly holistic tech-entrepreneurial club — being assimilated by an onslaught of AI projects isn’t surprising. The AI market has grown around 28% annually since 2014, notes Sequoia Capital’s AI year-in-review. They write, “…the building blocks are now firmly in place. AI’s potential is now congealing into something real and tangible.” It’s captured the venture capital market, too, accounting for around 46% of the $209 billion raised in VC in 2024, according to a Reuters article; a decade earlier, it captured only 10% of VC money. Hundreds of new startups have emerged. Techno-prospectors travel into San Francisco, a startup hub for many prominent AI companies, looking for a share of the pie.
Mihir describes it as big companies laying out rails for the world’s entrepreneurs to pilot a train through: many tools have made the daunting challenge of technical production easier, from OpenAI’s APIs for large language models, to Vercel’s full-scale SDKs for the web, to the proliferation of AI-assisted coding tools like Cursor. Combined, they mean an experienced developer could get a fully functional chatbot deployed on the internet in maybe 10 to 20 minutes. The road leading to the launch of your own dream startup has been forced to discard its speed limits, as AI solutions are applied to everything from HR to finance to marketing to whatever lagging, legacy industry some hip 20-something wants to turn upside down. “If you don’t have an understanding of what generative AI can do and what its limitations are, you’re going to be left in the dust,” Jonathan Griffiths says in a Babson Magazine article.
“An ideal landscape would still have early-stage companies lighting a fire under Big Tech’s feet.” Mihir identifies a sort of grit that a startup demands: a willingness to hear no over and over again and still stick by your problem. “You’re working on very low margins, so you’re very scrappy…but you have this innate drive, you have to really believe in your idea.” You can’t be stagnant.
On the other side, Mihir says, “I think we’ll see a surge in fluff.” An easier entry point into the startup stratosphere means more vapid products. Silicon Valley has made mistakes before: memories of the dot-com bubble linger in the region, with its vertigo-inducing boom and equally violent crash, as the ideas investors had pumped gluttonous amounts of cash into revealed their flawed business models when the sky fell. Take Pets.com, which had the backing of Amazon, a Macy’s Thanksgiving Day Parade appearance, a Super Bowl ad — just not a salient vision that could sustain itself past the mania-colored goggles of those boom years. Is the new wave of AI just history repeating itself, or is there something more endemic in the water?
Leo Liu and Shrey Panda met with me for a coffee chat at one of those cafes with pastel, ceramic mugs where the barista floats a milky heart on top of your latte; we sat outside on a picnic bench, under the old elm tree where the streets crossed, forming a diamond patch of grass. They were the current and previous executive directors of V1 respectively, wearing sleek, utilitarian rain jackets to combat the chilly wind: spring in Ann Arbor hadn’t quite thawed into the heat of summer yet. “It’s basically the next Industrial Revolution,” Leo says, referring, of course, to AI, “that completely changes the game of how efficiently we can do things.”
“Technology is a lever to change the world,” Shrey follows up, letting his fingers trace out his ideas in the air with a trained decorum, “and now, in the 21st century, that lever is software, right?” He points at me: “Think of Domino’s. Domino’s is not a pizza company, it’s a software company.” Neither of them had time to get coffee before starting the interview. “It’s a problem of, how do you build a distribution network quick enough to get every single pizza made within 30 minutes and delivered to someone’s door, right?”
Shrey and Silicon Valley have a classic love story. He’s crashing on the CEO’s couch instead of an office, watching the startup completely reinvent itself the second week into his internship, rewriting the entire frontend from a whole new codebase — thrown into the deep end, he says. “I was blown away. If this is the experience that startups give me, then I need this.” Leo’s not far off, with a failed startup under his belt that tried to disrupt the appliance industry, using AI to interpret user manuals as a business-to-business, software-as-a-service company.
A sense of excitement rather than irritation at those hard problems startup founders tackle was one of V1’s unwritten rules. Shrey talks about running V1 itself like a startup, constantly experimenting with new ideas for club operation and looking for passionate people above all else — creating a place where there’s no bounds, no ceiling on what a diverse group of people, together, can accomplish.
“You know the PayPal mafia?” he says, referring to the group of former PayPal employees who have gone on to carve out much of the tech landscape as we know it today, including YouTube, LinkedIn, Yelp and — through the group’s most infamous member, Elon Musk — SpaceX, Tesla and now X. It’s a group of people who famously eschewed the corporate structure of eBay for something new, novel, exciting. “We’re trying to do that for V1. We want to create a V1 Mafia.” Shrey’s talking about V1 like one of those scrappy founders, like he’s cornering me in an elevator, pitching the next greatest thing.
“I hope we live in a techno-utopia where AI becomes something to support humanity, creating almost an oasis, to generate experiences for us,” Shrey says. “At least, in the short term, our interactions with AI will make our lives easier, allowing us to focus on solving harder problems,” Leo adds. I wonder if V1, at its core, is an attempt at a utopia more than an attempt at a startup — a place where people can work to uplift the good of the world with software, increasingly AI, as its method of gospel. Shrey mentions how people pitch themselves into leadership at V1 instead of applying like they would at a company: students contribute their skills to the club, doing outreach to venture capitalists and investors, postering for events and such, almost like a commune. “It really does take a village,” Shrey says.
Then, again, maybe startups themselves are attempts at a version of utopia more than anything else: an attempt at reimagining a future for the better. “I hope humanity goes to a world in which we don’t have to do that much work, in which we’re able to please and enjoy ourselves because of AI,” Shrey says.
Leo, later, shared a memo with me from the AI Futures Project called AI 2027 — a forecast of AI’s trajectory in the coming years. It makes the bold prediction that we’ll have a publicly accessible AGI (artificial general intelligence, or AI that can match human capabilities in almost all economically valuable cognitive tasks) model by July of 2027, drawing graphs that extrapolate current computing power into exponential beyonds. Research and cognitive tasks like programming will look more like managing a team of super-intelligent AI models, the memo argues. Quantum computing, widespread robotics and fusion power all become commonplace. It states, “Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past.” Rockets terraform the solar system starting in 2030. An AI-developed cure for cancer and aging exists. It reads like speculative fiction more than anything else, but maybe I’m still behind in visualizing how much the world will change.
More importantly, the memo imagines the next five years, from 2025 to 2030, as pivotal in this societal restructuring, when AGI will grow to dominate every sector of humanity: a time period where major changes are outlined in months instead of years or decades. It’s easier to contextualize the fever many AI acolytes have for the technology through this viewpoint: who wouldn’t want to drop everything and spend their waking moments on a chance to build this future, to harvest those fruits, and to claim their spot in Eden?
Amulya Parmar is in front of me on a Zoom screen, his background masked over with a picture of the Golden Gate Bridge at dusk. The sun is setting behind him, or at least it is in this digital sense, anyway. Like everyone else, he’s the founder or co-founder of multiple startups. “We’re living in the greatest Manhattan Project, and we’re all partaking,” he says. His optimism for the future is undeniable: he practically jumps out of the screen at me.
“Have you heard of the blue ocean strategy?” Amulya asks me, tilting his head, referring to some book on marketing, about how to get at untapped market potential or whatever the doers of the business world are worried about. “I believe we’re dealing with,” he says with the gusto of Steve Jobs revealing the iPhone, “A. Blue. Tsunami. You’re going to be wiped away. The entire surface area of many of those ideas is going to be wiped away, and there’ll be an opportunity to build things again, from scratch.”
There’s a version of a democratic society that Amulya is fervent about: a version of a world in which Artificial Intelligence will redistribute the means to everyone. “Never in our lifetimes, and not only in our lifetimes, in society's lifetimes, has the average person had this much leverage,” he insists. “The best models are the ones in the consumers' hands today. They're not being hidden or only available for the aristocrats.”
It echoes many of the early sentiments about the web and internet as a universal, democratic space; Ronald Reagan once declared, in a 1989 speech in London, “The Goliath of totalitarianism will be brought down by the David of the microchip.” This vision was definitively defeated by market realities. Look at Facebook’s contributions to the Rohingya genocide, to China’s technological wonder of a censorship apparatus in the Great Firewall, to any of the thousands of things around you. Maybe the downfall started when Google — which, in a past world, was an indie darling of San Francisco — traded “Don’t be evil” for its new parent company Alphabet’s “Do the right thing” in 2015.
Amulya’s startup is Tour, an AI-video platform for real-estate agents to show off their properties with a digital leasing agent that greets visitors from a website corner, offering personalized property tours. “I think the two things I've been relatively good at over the last several years has been removing friction from some form of communication between, like, physical and digital worlds, right? Tour is an example of a product that basically digitizes a physical process.”
He envisions artificial intelligence’s evolution in three distinct phases: “The first generation of AI is reactive — you ask questions, it responds. The next layer is you ask questions, it generates things.” But what really excites him is the third phase — proactive AI — which “becomes truly, proactively intervening in your life.” His other startup, Microdeadlines, tries to address this version of artificial intelligence. “Can we create this ambient AI that understands your goals and priorities?” he says. It’s an AI voice agent that serves as an accountability buddy, periodically checking in on your web browser to track your progress against a prompt you gave it.
“What’s your take on some of the ethical, environmental considerations that have come up around AI usage?” I ask him.
It’s a legitimate problem to him, but he feels we’re past a point of no return. “We’re so past the copyright problem, that ethical problem, at this point. We’re so past it, that there’s no turning back now.” He believes we’ll overcome energy constraints just as we’ve solved other seemingly insurmountable problems. “We’re turning rocks to rockets.” There’s no room to slow down and reassess for him.
In some regard, this is a question of faith for Amulya. “I believe that like, our God is a creator, right? That's fundamentally what he is. So if, if we are made in the likeness of God, aren't we also called to create? And the Bible says to make a new song, right?” There’s a level of arguably disproportionate faith he has to have in the power of his own ideas, in his ability to get the first customer through the door, in order for him, for anyone, to make it in the startup space.
“I feel like I got into so much, what it feels like, painful situations, where I didn't know exactly what to do or how to progress forward,” he says, his normally bubbly voice taking a somber tone I hadn’t heard before. “I think faith just lets you get the perseverance to keep going and have, like, that mustard seed of faith that can explode something a lot larger.”
“When people talk about AGI by 2027, I feel like I just kind of don't know quite what they're talking about. They’re not arguing it’s going to have drywall intelligence,” Eric Swanson, professor of philosophy at U-M, says. I note his office walls are covered with books as he reaches back to turn off the CD player before our conversation.
He uses drywall intelligence in a tongue-in-cheek way: he had a ceiling leak in one of his classes, he told me, that he just couldn’t stop talking about during lecture. Obviously, the university eventually sent maintenance to fix the classroom — I like to imagine him watching in the corner, cheering valiantly — and it got him thinking about the type of physical tasks that AI still couldn’t do, like patching drywall; work that he, as a human, has no problem learning how to do, from the nature of having a body. An AI would have to be able to patch this bit of drywall for it to have drywall intelligence.
“I think there's a tendency to think of knowledge in a kind-of rarefied way, and intelligence in a kind-of rarefied way, and that leads people to think AI will have an impact that's more broadly distributed across the population than it actually will.” It’s very clear, to me, the extent to which large language models can and will disrupt the software development and programming worlds; the writing’s on the wall for how much they already have. It’s unclear how much they can touch something like plumbing or woodworking: optimistic projections would hand-wave the word “robotics” at you and leave it at that, but the outlook in those industries is unclear at best. “We have to think about what an AI with really differential impacts on different parts of the population might look like,” Swanson argues. What happens when Artificial Intelligence leaves certain people out of the innovation arms race?
“Alongside the ever-widening social divisions,” Barbrook and Cameron write in their 1995 essay The Californian Ideology, “another apartheid is being created between the 'information-rich' and the 'information-poor'. In this hi-tech 'Jeffersonian democracy', the relationship between masters and slaves endures in a new form.” This concept is easily observed within the modern AI ecosystem: large language models like ChatGPT, for example, rely on enormous data-annotation workloads to function — a job done mainly by workers in the Global South paid around one to two dollars per hour, with some reports even of child labor. “I think it's a mistake to try to predict what AI could do in isolation of the kind of historical moment that the development finds itself in,” Swanson says.
I ask Swanson about the things he feels are lost in the AI revolution. He takes a second, and says, “I see my fifth grader having, as part of his assignment for a project, using AI art, and I feel like, well, it’s cool that he's getting these professional images, but on the other hand creating stuff with an AI doesn't feel satisfying in the way that creating stuff yourself is.” He remembers afternoons when he had truly nothing else to do, when he’d just play something on his guitar for three hours, sit with the monotony of music practice and be content. There’s a sense of boredom here that he finds gone from the world. Or at least, it’s shifted into the boredom of content, of having everything come at you all at once, a constant barrage from your social media apps.
A vision of a future dominated by artificial intelligence will mean adverse effects. Tim Harford, for the Financial Times, discussing a book by Acemoglu and Johnson named Power and Progress, writes, “while technological progress can produce broad-based prosperity, there is no guarantee that this will happen quickly — and in some cases, no guarantee that it will happen at all. Textile factories of the early British industrial revolution generated great wealth for a few but did not raise worker incomes for almost a hundred years.” The road to a computerized utopia, to a world without friction between the digital and the physical, spells large differences in benefit between demographics — an often invisible risk that AI founders will have to be committed to bearing, let alone the things we lose from abandoning a slower, gentler pace of life. Past a romantic view of a technological future, this is the reality I can most easily see.
People will be on the ship, and people won’t, and people will drown.
Later, I’m digging through my bookshelf — a measly thing from IKEA that’s become more of a heap of books as they’ve accumulated — past the Abdulrazak Gurnah, the Robert Frost, the Joan Didion, the Mark Doty: the books that’ve dominated my reading as of late, as I’ve fallen in love with the literary world, as I’ve grown more cognizant — into Zero to One, The 48 Laws of Power, Shoe Dog, the Walter Isaacson biography of Steve Jobs, where his monochrome picture, pensive, hands on his chin, bleeds into the edges of the cover. It sat propped up against my desktop computer on my desk in middle school, when I made some poor classmate in 7th grade read it with me, in all its 650-page glory, for a project in an English class. Everyone else was reading things like Eragon or The Fault in Our Stars. The picture stared at me, out the window behind me in my bedroom, and into the grasslands of South Dakota where I grew up, past the horizon, into blue. The inside is filled with the jejune marginalia of my middle school years, scribbled during a time when America still believed in a figure like Steve Jobs.
I had a phase from middle into high school where all I wanted to be was an entrepreneur building the future of the world. I’m not sure when that dream died. Maybe it came with the crushing reality of the world that parents always tried to shield their kids from. Maybe it came from the recognition of being Chinese in America, in South Dakota, as I grew old enough to see the blood in the soil of this country, and could no longer put my faith in Silicon Valley to do good. I’ll recognize this as the character growth that authors are often looking for — the shedding of old ideals into the new.
But then I can’t explain, for the life of me, why I brought those books here. Why they live in the corner of some decrepit college apartment; intentionally hidden, even, from my roommate’s gaze; why I refuse to let them go. I’m flipping idly through the book — things like “The best way to predict the future is to invent it” are hastily underlined — and there’s something, still, in my handwriting that I can’t let go of. Nostalgia carries an insidiousness with it, but I can’t let go of the hope that, if we wait long enough, the world those books promised will return.
In the eleventh hour, Luke Zhu is there in some company’s office, working on their tax records on their secure computers, getting a version of the AI agent from Dezu, the startup he co-founded, to run. He’s a junior studying business administration, with a cushy internship in investment banking — at least, he was, before he dropped out to pursue his startup full time. “I’m spending, I don’t know, about an hour or two on schoolwork each week and 70 to 80 hours on my startup.”
Before they get concrete results, corporate, overseeing the branch of the company they’re working for, catches wind of the project. They shut it down. “You have to wait until tax season is over,” some bigwig says. Corporate says no. “And then we still said, fuck it,” Luke says.
He and his co-founder, Riya Patel, drive an hour out of Ann Arbor to meet with them. They manage to book a meeting with the CEO in Canada the next week, but before that, he’s off to Miami to talk with the CIO of a top-50 accounting firm, to secure another client — before realizing he doesn’t have his passport. He flies back from Miami the next day, takes a bus to East Lansing to pick it up from his parents, and drives back to Ann Arbor that night before waking up at 7 AM the next day to drive to Canada to meet this CEO.
“It was a crazy journey,” Luke recalls. “We had to take a day or two off after that.”
Still, this type of hectic, risk-it-all life appealed to Luke over the comfort and prestige of one of the nation’s best business schools.
“How do your parents feel about it?” I ask him — he’d mentioned he only finalized dropping out last week.
“Yeah, so I haven’t exactly told them yet. I want a little bit more traction before.”
To be fair, Ross isn’t the most likable entity, either. The business school’s top-three ranking in the country likely owes more to its complex system of business fraternities and finance clubs — many of which are as selective as the jobs a Ross graduate hopes to attain — than to its education. “I found 70% (11 out of 15 classes) of the curriculum completely impractical. For my peers, I would guess the standard deviation of useful classes is ±2,” Justin Guo, a recent Ross graduate, writes in a Medium article.
“You’re not forced to learn under the traditional route,” Luke says. “What they teach in Ross isn’t exactly what happens. When you’re working on a deal, when you’re trying to close a client, you don’t create a customer persona or any of that bullshit.”
There’s the possibility of immense growth for him, of condensing a decade of experience acquired from business school and consulting firms into two years. With it comes a kind of learned obsession that, Luke feels, goes into every startup, even if your obsession is taxes: dropping yourself into the deep end, out of academia’s cradle, necessitates that you learn, and learn fast, and learn deeply about your topic, to a level of expertise not many others will attain.
It’s been 40 calls a week consistently, he says, that’s forced him to be more extroverted and develop those interpersonal skills, that’s made him become dangerous in tax, that’s put him on the radar with the titans of TurboTax and QuickBooks. “They're really antiquated. They move slow. They've been in their positions for like, 20, 30, 40, years, and there's just not that much innovation.”
Still, there’s sadness at the life he will be leaving behind. “I don’t have time to go out and enjoy the nightlife. The biggest thing I’m going to miss is my friends,” Luke says, noting the housing contract that he had to back out of to pursue Dezu full-time. “I’d imagined I would’ve worked at an investment bank, do the typical finance route, for a couple of years before starting my own thing. I just didn’t think it would happen so early.” But the world, he notes, is changing so fast.
Who knows what it’ll be like in 5 or 6 years? Who knows if he’ll be able to do this then?
Still, it’s an exciting time to be solving problems. “I feel bullish with all the AI technology. There are so many new companies coming out that are a super lean team, that are making, like, hundreds of millions in ARR.” Time will tell if Luke becomes one of them.
There’s an objective rigor a journalist has to maintain, for the sake of truth-keeping. It’s why you don’t tend to interview your friends and loved ones: betraying them for the sake of your story, for the sake of trying to grasp at a truth, is too morally compromising; it’s good practice to keep that psychic distance between the interviewer and the interviewee. To grasp the true effects of an AI-accelerated future would mean to map out its harms, and to be prepared to condemn someone like Luke Zhu. But I look across the table, and I can’t help but see a Chinese kid who grew up in the Midwest, and I can’t help but root for Dezu: just a little.
A blue tsunami, I whisper to myself one evening, walking my way back to my apartment. Ross, the business school, with its high panes of glass, looms over the trees on the horizon. I can see the room where Demo Day happened. I suddenly get the sense that I’m being looked down upon. It’s not pleasant.
The quote from Amulya stuck with me through everything else I’ve done. He was the first person I interviewed for this piece: maybe it’s a little intellectually dishonest for me to have stuck him in the middle, the place in the narrative where all the pieces the writer doesn’t quite know what to do with slot in. I couldn’t figure out why it was stuck in my mind, before it struck me as immediately obvious that he was alluding, whether he intended to or not, to the biblical flood in the book of Genesis and the story of Noah’s Ark. But a flood myth is actually very common across different cultures, from the Epic of Gilgamesh of Sumeria to the Manvantara-sandhya of Hindu India. Most of the time, a great flood is sent by some deity as divine retribution, as a measure to cleanse humanity. There’s also often a mythological hero of some sort, representing the human craving for life. “The persistence of the flood myth in all parts of the world, even those where real floods are unlikely to have occurred, suggests a human vision of both imperfection and the possibility of redemption,” David Leeming writes in The Oxford Companion to World Mythology.
It’s also a human tendency to see oneself as heroic. It’s too late for a computer science student these days to graduate blind: they’ve been introduced to predictive policing algorithms, to techno-surveillance systems, to digitally addictive social media apps, to the elaborate disinformation campaigns cyber-agents are able to mount. It’s no wonder, then, the malaise one might feel going into some cushy big tech job, lest you accidentally build a defense drone. The appeal of the startup alternative, where companies are at least trying for this transformative, good vision of the world, isn’t surprising to me. It became increasingly clear that every one of the founders I interviewed truly believed in Artificial Intelligence as a cultural and social good. None of them were posing, for economic gain or otherwise. I can stare at everything we will grow to lose, but I can’t condemn a belief in a better world, even if it is naive.
Maybe this is not the conclusion the reader is looking for: maybe they are looking to sink their teeth into something sharper, more austere. Maybe they are looking for conviction.
Then I will offer you this. A conclusion written by a large language model:
Perhaps what's most striking about this AI revolution isn't just the technological fervor of V1's entrepreneurs, but how it mirrors ancient flood myths — a cleansing destruction followed by rebirth. Standing between Ross's gleaming towers and my bookshelf of entrepreneurial dreams, I recognize both the seduction of being Noah in this narrative and the sobering reality that technological tides rarely lift all boats equally. The blue tsunami will come, washing away industries and remaking others, but the true measure of this generation of founders won't be their technical brilliance or market disruption — it will be whether they build arks large enough for everyone, or only for the privileged few. Perhaps this is why I keep those books from my middle school days. They remind me of a time when I believed in technological salvation without reservation. Now I understand that faith requires doubt — the tension between possibility and responsibility is what keeps us honest. The most thoughtful builders hold both the dream and its consequences in careful balance.
The future they're building deserves neither uncritical acceptance nor reflexive dismissal, but rather our most thoughtful engagement with what it means to be human in an increasingly artificial world.
