Hand 1

I wake up as usual by immediately going on my phone. I reply to some of my messages - yes, Dad, the solar installation you have at the villa could be redundant soon, but it can still contribute (even if only slightly) if you comply and let them integrate it into the new grid. A grid which operates on principles only slightly less opaque than divine providence, but is demonstrably more efficient. I also send him the relevant passage from The Spec. Energy generation is covered thoroughly.

I go to the bathroom and use the auto-brush. It performs flawlessly, eliminating the user failure modes I recall from manual brushing, though the brief floss-like motion under my bridge stings a little. I visit the toilet - still stubbornly un-smart, lacking even basic output analysis. A shocking oversight in an otherwise quantified existence. Someone should file a feature request (though, yes, it's probably because, for whatever reason, bathroom privacy is hardcoded into The Spec). My own internal feature request - a morning drink - prompts a move towards the kitchen. Not everyone has bought in, but I've always been into nootropics, so I am eager for my dose of 'Coffee+', OA's bespoke blend optimized for cognitive profiles like mine. I do have to continuously share a lot of vitals data with OA so the blend can be refined based on its effects on me, but honestly, it's worth it. I am not important. My significance is negligible in the face of OA's aggregate data. The very concept of 'important people,' in the pre-OA sense, feels increasingly archaic. So I might as well "sacrifice" my data or, rather, lease it out for substantial cognitive dividends.

I start feeling the effects, and while the blend doesn't confer NZT-48 levels of hyper-cognition, its effects are nonetheless remarkable. When it kicks in, the morning mental fog dissipates. It doesn't grant superpowers, just… executive function. The ability to align my actions with my stated goals, a task that used to be depressingly non-trivial. I want to, and probably could, understand all the tiny nudges to my neurotransmitters, connectivity, and everything else the substances affect, but really, I don't need to know how it works. Maybe I'll allocate time to unpack it one day. A project for future self-knowledge acquisition, perhaps.

I go to my desk - though I should give in and get the VR/AR setup that's catching on. I don't know if I'm ready for corneal overlays. I used to be squeamish about them, worried about firmware updates bricking my eyeballs. I'll probably try it eventually, once the fail-safes have fail-safes. My monitors light up as I approach, and I suppress a little smile as I see it's opening the analytics of the animation I posted last week. Yeah, you can get unlimited generated-for-you slop (okay, it's not slop; I've mostly come to terms with how good it really is), but a lot of us also watch channels directed by humans. I wish I'd had a big following back then, because legacy creators get so much extra attention compared to relative newcomers. Still, the content I produce gets traction, and directing OA's creative suite is almost as fun as watching the final product it generates.

I check my chats again. Perry has been obsessed with going to the Moon - it does seem like it'll happen eventually, but I don't think it's going to be any time soon. I guess, technically, we can request anything feasible, but surely going to the Moon is so resource-intensive that those resources aren't getting allocated to it any time soon. I imagine OA's resource allocation algorithm flagged it with a gentle 'User budget exceeded. Suggest a simulated alternative or perhaps a nice cup of tea?' I have OA show me a simulation of what Perry would see on such an excursion. I close it a few seconds later - it's about what I expect. I go back to the kitchen and wait patiently for my food. The robotic arm appliance prepares it while I do my first set of light exercises. My form is probably suboptimal, but the robot arms, thankfully, are programmed for non-judgmental food preparation only. I get my food and lean on the old "I need to watch something while I eat" excuse to put on an episode. The quality ceiling for entertainment has been raised so high, it's practically at the level of where Perry wants to go.

The episode, naturally, is algorithmically tailored. Not just to my inferred preferences based on viewing history - that was primitive stuff - but dynamically adjusted during viewing. Just as my focus starts to drift, a subtle shift in pacing or a character's micro-expression, presumably cued by my biometric feed via the desk sensors, pulls me back. OA knows precisely when my attention dips, when my pulse quickens, and when pupil dilation indicates intrigue. The narrative arc might be pre-written, but the details are still optimized in real time for me.

Yet watching it still feels almost like cheating. A process smoothed out. I find myself wondering about the cost function being minimized. Is it my enjoyment? If it were just that, I'd likely end up wireheading, so it must take "well-being" and other factors into account. I am sure it's also optimizing for other goals - making me fit a little better into the new paradigm. It might seem almost sinister, but it isn't, really. The system delivers. My life is demonstrably better, safer, and more comfortable than any pre-OA baseline.

My dad, bless his villa-solar-panel-installing heart, still grapples with the idea that centralized, AI-managed grids outperform decentralized individual effort, even with the best intentions. It's a category error he hasn't quite shaken, clinging to an outdated model of agency like a physicist still insisting on phlogiston. He'll adapt. Eventually. Probably.

Perry's Moon obsession is another facet of the same pattern. Why allocate vast, tangible resources - materials, energy, risk-exposure for biological units - to achieve an experience OA could simulate with near-perfect fidelity, piped directly into your sensorium? The desire feels… archaic. Like wanting to hand-crank your car instead of letting the autonomous network manage traffic flow for optimal efficiency and safety. It isn't about feasibility - OA could calculate the resource cost down to the last gram - it's about optimality. Sending Perry to the Moon is likely a net negative on the global utility function OA is calibrated to maximize (or whatever inscrutable N-dimensional optimization landscape it navigates), isn't it? His simulated trip, costing negligible computation cycles, is the rational choice. I guess we do have the resources to spare - not that you can trust a mere human's resource-allocation guesses. It makes me wonder whether we haven't gone too far in emphasising, in The Spec, that people's desires should always eventually be met.

I end up watching and musing until noon. I get briefly excited about creating my own video around a time loop like the one in the show before I stop myself - no point recreating what OA produced for me. I push back from the desk. What should I do? The question itself feels slightly anachronistic. It's not like I have a job. My schedule suggests another dive into the workings of OA, this time with a focus on error correction. It is interesting, and it is packaged so well, but my motivation to dig deeper and deeper into how it all comes together is waning. Who am I kidding? Perhaps I'll leave it for now and instead finally explore the new advances in cosmology - we almost have the blueprints for a ToE, even - there are so many experiments left to do, of course, and I have only barely looked into it. Yeah, that should be fun to explore. I ask OA to prepare a learning plan while I digest the best part: the carefully crafted overview that hits all my intellectual-porn desires. Sure, learning the rest will be enjoyable as well, but those first insights, delivered so delicately, in just the right way. Intellectual bliss, mainlined.

I can’t indulge too much as it’s time for me to meet Sam and Lina for lunch. I brush up on the latest updates on the election before I go, as I know Lina will want to talk about it one way or another.

I step outside. The air is so clean today, but it is chilly. Transport is, of course, seamless. The car that picks me up is modern, but it looks to have been retrofitted for OA driving rather than designed with it in mind.

Sam and Lina are already at the cafe, a place that retains a surprising amount of pre-OA charm, albeit with automated staff and nutrient-optimised menu options overseen by the central system. Sam waves, already sipping something that looks suspiciously like standard water - he’s always been a bit of a purist, finding the edge cases where OA’s optimisations fall short for his specific, self-diagnosed sensitivities. Lina, however, is animated, gesturing with her tablet.

“–but the principle of it, Sam!” she’s saying as I approach. “Even if the predicted outcome variance is within the noise floor, the act of collective choice matters!”

“Hey,” I say, sliding into the seat. “Pestering him about the election already?”

Lina turns to me, her expression earnest. “He doesn’t even plan to vote. It’s the only lever left for influencing the world, and he doesn’t even care.”

“I do care. It’s just that the impact is so negligible,” Sam interjects mildly. “Some might even say there is approximately zero impact.”

“But how can you know?” Lina insists, leaning forward. Her tablet displays polling data - colourful charts showing predicted sentiment shifts smaller than the margin of error. “The Spec allows for amendment based on sustained, overwhelming public consensus. Section 7, Subsection B. The elections are the designated mechanism for measuring that consensus.”

“Ah, Subsection B,” Sam sighs, taking a slow sip of his water. “The one requiring a ninety-five percent consensus sustained over two consecutive electoral cycles, and demonstrating non-contradiction with core utility axioms outlined in Sections 1 through 4. Possible in principle, maybe, but irrelevant in practice.”

I jump in, trying to bridge the gap. “Lina has a point about the principle, Sam. The existence of the mechanism, even if it is barely reachable, serves a function. If nothing else, it maintains the illusion of human sovereignty.”

Lina nods vigorously. “Exactly! We all look at the election as the place where change could happen, even if it doesn’t. If we stop participating, we signal that we accept the current Spec as immutable, forever.”

“But the point,” Sam counters, his voice calm, “isn’t whether the mechanism exists. It’s whether activating it is within the realm of practical possibility under current conditions. We’re arguing about the theoretical existence of a lever encased in meters of hardened plasteel, requiring a coordinated effort roughly equivalent to getting humanity to agree on the best pizza topping.”

The politics brief is still open on my phone, and it reminds me to retort, “It’s not like we haven’t changed The Spec before, Sam - the 2032 Clarification…”

Sam waves a dismissive hand, nearly sloshing his water. “‘The Clarification’? Please. That wasn’t us changing The Spec in any meaningful way. That was OA flagging an internal inconsistency and us rewording some definitions. No fundamental shift.”

I mentally concede his point. We can, in theory, change The Spec, but in practice, we probably won’t. Not really. We might look at The Clarification as an example of our ability to affect things, but really, it just entrenched the whole thing.

I eventually steer the topic away from politics and towards the shows we all watch - we’ve agreed on which creators to follow so we can participate in discussions about their content. We cling to that one additional reason to consume non-personalized shows.

I go back home and find myself pulling up the official election portal on my monitor. Candidates have dutifully uploaded policy papers - dense documents exploring minute variations to Spec-related parameters. It does feel more like the work of someone keeping themselves busy than of someone who is actually busy.

There are, of course, the fringe candidates. The “Spec Rejectionists,” who advocate for a full rollback (computationally infeasible and axiomatically disallowed by OA). The “Human Supremacists,” yearning for a return to pre-OA chaos (polling at statistically zero). Their platforms are automatically flagged by OA’s content filters as “non-viable” or “incoherent,” relegated to obscure corners of the data sphere, their arguments accessible but never promoted. Participating feels like meticulously choosing between slightly different shades of beige wallpaper while OA architects the entire building according to optimization principles we barely grasp.

Lina would argue that even choosing beige is an act. Sam would say it’s wasted cognitive effort. I… I’m not sure. Is apathy the rational response? Or is it a failure, giving up on the last vestige of the old world’s structure? Ultimately, I don’t think it matters - what we have is good. Why worry about changing it? “Not every change is an improvement, but every improvement is a change.” Sure, but what change can we hope to make that would realistically be an improvement?