How to See the Dead
A retinal implant designer must decide if translating mourning into light is progress or a refusal to let go.
The first question I ask my clients is, “How would you like to see the world?”
Some answers are charming; they want to see the world as an infant does, everything new and unspoiled by habit and familiarity. Some are more professionally minded and wish for magnification-enhanced, telescopic, UV, infrared, or radiation-attuned alterations that will help them with career tasks. Still others are curious, wishing to see the world as an insect does, through thousands of hexagonal ommatidia. By far the most popular request is visual synesthesia. They want to “see the smell of cut grass” or “watch a symphony as they would a sunset.”
April’s request, however, did not make me smile. She wanted to see the dead.
“I-I’m sorry. I think that’s out of my area of expertise,” I stuttered, not unusual for me in my personal life but rare in my professional one.
“I heard you were the best,” she said.
I was surprised to feel influenced by the flattery. Just last year, I had built eyes for a team of commercial divers working on offshore wind farms in the Arctic. The implants filtered out the stirred-up silt by canceling the polarized backscatter patterns unique to turbidity and used a low-power LiDAR array to render pressure-wave distortions as faint light ripples so the divers could see the approach of water currents. I was good. Still, an old mentor in Lagos had been printing and sculpting retinal implants since before I’d started undergrad. She’d once wired a drone pilot’s prosthetics directly into night-vision satellite feeds, so I wasn’t the best.
“It doesn’t matter how good I am,” I said. “If Newton couldn’t crack the afterlife, neither can I.”
“I know,” she said matter-of-factly. “I don’t want to see the dead continuously, just sometimes.” She sat straight-spined and kept her eyes on mine.
“Haven’t you ever lost anyone?” she asked.
Step 01. Acknowledge the Limits of Natural Photoreception
Beneath the white glow of the lab lights, my gloves stuck to a polymer bench liner as I eased a latticed disk of organoid-grown photoreceptors from the incubator. It took hours of programming and careful printing to beat what nature equips us with: eyes that can only detect a narrow 380–750 nanometer range of light. Trays of half-formed retinal cups floated in culture baths nearby.
Fifty years ago, the best visual-cortical direct-to-brain implants could translate a video feed into only about two or three hundred phosphenes, those little squiggles of light that come when you rub your eyes. These days, I can grow gene-edited retinal cells, electrically integrated optic nerves, and implants that vastly exceed the capacity of the human eye.
I laid the disk onto the microelectrode array, and thousands of graphene contacts tested for conductivity across the tissue. The traces came back clean, so I injected the first layer of Müller glia to knit the surface, then retinal ganglion precursors to carry the signals downstream. It’d take days for the cells to settle and form stable synaptic contacts, but once that happened, the construct would be ready for the neural lace web of electrodes that would translate between an encoder and my client’s optic nerve.
Spools of unspent neural lace waited in shallow dishes nearby, coiled like black spiderwebs. Letting my attention drift was a bad habit, but so much of the build was muscle memory that my mind was usually a step or two ahead of whatever my hands were accomplishing.
This eye wasn’t for April but for a geologist who wanted to see magnetic field lines as shimmers. For April, constructing a functional, physical eye capable of integrating with other tech would only be the first step. If we managed that, then we’d need to tackle the addition of memories.
While I waited for the array to finish its sweep, I pondered the intricacies of April’s request.
The central mission behind almost every eye I’d ever built was to translate some sort of sensory data into visual brain-speak. Take the geologist’s eye: I had to build a bio-integratable magnetometer array that captured magnetic field data, program a small encoder that converted the compact fluxgate vector field into visual grammar, and then feed that through the eye. It wasn’t exactly “simple,” but it at least followed well-charted biophysics.
April’s implant involved many more unknowns. In the consultation, she had explained that grief only partially brought people back: “the fuzzy haze of remembrance,” she called it. Some days, you could imagine a face as clearly as if it were right in front of you; on other days, it was a challenge to summon even the right eye color. She wanted this particular face to look real. She wanted to see her husband, Cline, doing simple things, like bringing trash outside in a downpour, beads of water clinging to his cheeks when he came back inside. She wanted me to translate her memories of him into visual information, integrate that with her real-time optical field, and project it all seamlessly back to her brain. If I took it on, it would be the hardest technical challenge I’d ever faced.
A metallic scent returned me to the room, where the trash bin near me overflowed with pipette tips smeared with pink growth medium, crumpled gloves, and a discarded photoreceptor scaffold curled like an old contact lens. The bin smelled of ozone. I rose from my chair and tied the bag off, designs for April’s thanatopic eye swimming through my mind like ghosts of my own.
Step 02. Design the Scaffold
Although I’d spent my career perfecting the intricacies of neurological inputs, I was, conservatively, several miles out of my depth when it came to brain outputs. I called an old roommate, Aux, who worked in imagination tech. “Tia!” he exclaimed brightly as he answered, and I was a little surprised at how eagerly he agreed to help. He explained that he’d designed a system that translated and recorded dreams, and had been crestfallen when a private contracting company bought the patent to use as spyware. He’d quit soon after.
Together, we sketched out a basic three-part modular integration plan. We were, essentially, building what amounted to a complicated projector system. The retinal implant would be the screen: a thin, curved layer of lab-grown photoreceptors fused with an electrode mesh that would sit at the back of her eye, bonded to the optic nerve. The cortical bridge was the projector. I’d sketched out its design before Aux came: a thin sheet of flexible polymer and grown tissue studded with microelectrode pins that would drape across her visual cortex, where it would combine and stabilize generated images with her real-time visual field so they’d map onto reality.
What I needed his help with was the playback device. He proposed a hippocampal recall interface, a bundle of neural leads we’d embed in her hippocampus to feed grief-evoked firing patterns into an AI decoder, which would convert them into a visual blueprint the cortical bridge could position and the implant could display.
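If I had to reduce the whole plan to pseudocode, it would look something like the sketch below. Every name in it is a stand-in I’m inventing for illustration rather than a real device or API; the point is only the direction of data flow, from grief-evoked recall to a composited display.

```python
# Purely illustrative: the three-module data flow we sketched, with every
# name below invented for the sketch rather than taken from real hardware.

from dataclasses import dataclass, field

@dataclass
class Frame:
    """One rendered visual field, as the cortex would receive it."""
    pixels: list = field(default_factory=list)
    source: str = "live"

def recall_interface(grief_event: str) -> dict:
    """Hippocampal leads: record a grief-evoked firing pattern."""
    return {"firing_pattern": grief_event}

def decoder(recording: dict) -> Frame:
    """AI decoder: turn a firing pattern into a visual blueprint."""
    return Frame(pixels=[recording["firing_pattern"]], source="memory")

def cortical_bridge(live: Frame, memory: Frame) -> Frame:
    """The 'projector': stabilize the memory and composite it onto the live field."""
    return Frame(pixels=live.pixels + memory.pixels, source="composite")

def retinal_implant(composite: Frame) -> None:
    """The 'screen': hand the combined field back to the optic nerve."""
    print(f"displaying {composite.source} frame with {len(composite.pixels)} elements")

# One pass through the loop, triggered by a (simulated) moment of grief.
live_field = Frame(pixels=["rainy street outside the window"])
memory_frame = decoder(recall_interface("cline carrying trash bags in the rain"))
retinal_implant(cortical_bridge(live_field, memory_frame))
```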
Aux was excited about the project. He’d done his Master’s thesis on AI recreations of loved ones when the field was just taking off, and had been disappointed when the enthusiasm that followed the first-gen rollouts faded. His take was that one of the problems with those designs was that they’d been overfitted to physical reality rather than to people’s emotional one. “People don’t want to see their loved ones as they actually were; they want to see them as they prefer to remember them. There’s a surprisingly high delta between those two things,” he said.
The illusion that these digital ghosts were workable replacements for their loved ones quickly evaporated. By the early 2030s, they’d been demoted to another type of deepfake, rather than a compelling memorial.
While I worked on the retinal implant, Aux took point on the grief modeling. We wouldn’t be training the AI on shared “reality,” but on April’s subjectivity.
In order to program an encoder that could translate April’s grief and memories into visuals, we needed to get a strong grip on her visual memory. My lab wasn’t exactly set up for recording sessions, so we jerry-rigged a studio, wedging a chair beneath petri dishes of retinal sheets stacked in humidified drawers. Their translucent layers quivered faintly each time the air vents cycled.
Still, had we been attempting this decades ago, we would have had to spend months recording fMRI scans, implant everything, and only then start fine-tuning the system and training the AI model, since there was no way to access neuron-level activity non-invasively. Today, we have portable fMRI machines we can run in the lab, and invasive no longer means gruesome or risky. Before her recording sessions, we injected her with a suspension of biodegradable neural motes — micron-scale, encapsulated sensors that settled along her brain’s vascular network and powered themselves through ultrasound backscatter. These let us make neuron-level recordings while we interviewed her.
I began by asking April who she’d like to remember, and when she felt their absences most acutely. I learned that her childhood best friend had been wearing red the last time she saw her and that her grandmother’s fingers always smelled faintly of the glue she used to make model trains. Aux noted that focusing on sensory details heightened both the breadth and depth of memory retrieval. The more I asked April to recall the way things smelled or felt, the richer the data that emerged.
Aux approved of her baseline. “Cogent, strong visual thinker — a regular apple-rotator,” were his exact words. After a couple of hours, the motes dissolved into harmless salts and amino acids, leaving only the recording data in Aux’s cloud as evidence that they’d ever been there.
Step 03. Stack the System
Aux and I then spent weeks interviewing April, having her look over the pictures and videos she’d shared of her lost loved ones and their favorite places, all while recording her brain activity. Aux was building both the encoder, which translated her memories into visuals, and the recall hook that would trigger during specific bouts of grief and remembrance.
We quickly realized how helpful it was to introduce physical artifacts to our sessions as well. April looked at pictures of her grandmother while eating some of her famous late-August apricot marmalade. Her grandmother’s handwriting had long since smudged off the jar, but April could still recall the way the bubbling pot would steam up her grandmother’s round glasses.
A few hours after she left, Aux tapped me on the shoulder as I was bent over the humming incubator. He beamed as he showed me a rendering of April’s grandmother, glasses fogged, on his monitor.
“This is completely translated from the output readings,” he added.
I was also making progress.
Within a week, the final ex vivo model of the retinal implant passed all my low-fidelity visual testing, so we implanted it into April. It was as simple as cataract surgery and just as quick. A surgeon incised her sclera, and a micro-catheter slid the implant under the retina. I’d explained that without the bridge and recall system, the implant couldn’t yet summon images of anything or anyone.
The cortical bridge was far more complicated than the retinal implant. To start, it had to handle a tremendous amount of visual data without misfiring. It lived for weeks in glass tanks and atop workbenches crowded with circuit boards, power supplies, and diagnostic monitors. I pulsed it in saline baths to mimic electrical firings and showed April how the graphene threads glittered faintly as we sent charge test patterns — gratings, flickers, edge maps, and checkerboards, substitutes for the visual data the recall interface would eventually send — across each channel, watching for voltage drift. When we pushed the full pattern set, the combined load mirrored the amplitude of real cortical activity. I watched through my fingers and gritted my teeth as half the channels went dark, quietly suspecting we had fried the PEDOT coating.
I pulled myself out of the lab, squinting in the first sunlight I’d seen in 36 hours, and called another old friend. Lyre, a neural signal architect turned cortical cybernetics consultant (most likely in the gray-market sense of the term), answered on the second ring. He was in the lab that same evening, takeout box in tow.
Looking at the bench and the readings, he concluded that the previous night’s firmware update had introduced a timing mismatch. The wires hadn’t burnt out; the clock that told them when to fire had drifted by a microsecond, so the expected voltage responses never lined up, and half the channels read as dead even though the hardware itself was undamaged. Fifteen minutes and a simple firmware rollback later, everything worked perfectly.
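The failure mode is easier to believe with a toy example in front of you. The numbers below are invented, but they show how a one-microsecond clock offset makes a perfectly healthy channel read as dark.

```python
# Invented numbers, real principle: a readout window that starts one
# microsecond late simply never overlaps the response it is waiting for.

def channel_reads_alive(spike_time_us: float, window_start_us: float,
                        window_len_us: float = 0.5) -> bool:
    """A channel only 'sees' its response if the spike lands inside the readout window."""
    return window_start_us <= spike_time_us < window_start_us + window_len_us

spike = 10.0  # response arrives 10 microseconds after stimulation
print(channel_reads_alive(spike, window_start_us=10.0))  # True: clocks aligned
print(channel_reads_alive(spike, window_start_us=11.0))  # False: clock off by 1 us, channel looks dead
```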
Next, Lyre and I swapped the saline for neuron cultures to check whether the wires could stimulate and record real biological activity. While we ran those tests, Aux fine-tuned his AI encoder and processed April’s data.
We were finally ready to test the integrated system without yet risking its insertion into April’s brain. We built something we only half-jokingly called a “phantom cortex,” a benchtop stand-in: a synthetic cortical sheet of cultured neurons on a chip, designed to act as April’s visual cortex. On one side, we put a lab-grown retinal implant that carried live sensory input. On the other, Aux’s playback device pushed reconstructed memories. The phantom cortex’s visual field was rendered on a lab monitor so that we could assess the pattern projections. The rig buzzed faintly in the background, gelled neuron sheets twitching under the microscope with each ripple of charge.
We started with simple patterns: they all came through smoothly, integrated seamlessly into the live visual field. Next, we tested April’s reconstructions. We called her into the lab for this, since she was the only person who could confirm if the renderings were accurate or not.
As we sat around the monitor, Aux sent memories through the phantom cortex. Excitement turned to nerves as April judged our work.
April inhaled sharply as Cline’s face appeared on the monitor screen. He smiled through a rain-smeared window in front of the lab desk.
The image only lasted a second. Aux cursed as the memory degraded and glitched in real time before our eyes. Her husband’s face smeared into a Dali, suddenly donning her grandmother’s glasses. Dozens of screaming mouths popped into view, then out again. I scrambled for the abort key.
“That screaming mouth was my dad’s,” she said, hiding her face. “He’s someone I don’t miss.”
“This is an issue with the affective tagging,” Aux said grimly. “The system grabbed any memory tied to a spike of loss — even those tangled up with fear or anger.”
We hadn’t built a robust enough model of grief. The models couldn’t distinguish between the suite of neurochemical signals and pathways that light up when grief occurs and other related but distinct emotions like fear, longing, and even resentment. Grief was more global than we thought.
April left without saying goodbye. She messaged us the next day saying she was sorry. I reassured her we were problem-solving, and the next time she heard from us, we’d have it fixed. She didn’t respond. This, I thought, was the problem with trying to bring back the dead. April wanted to feel her loved ones still with her, but nothing comes free, and constant reminders of loss were the price.
That night, as I drifted off, I also fretted that what we were building was actually just a perseveration machine, something that would help people fixate, rather than heal.
Step 04. Define the Problem Space as Grief
The hardware was working, but we needed better software to select and stabilize reconstructed memories.
It wasn’t enough to simply gate memories and focus on spikes in certain brain activity. Grief is a process, not a single event. Rather than a digital switch, it was more like a current running through multiple circuits at once, constantly shifting in strength and direction. We were obviously dealing with a level of complexity we hadn’t modeled for. Lyre told us we’d be smart to think of grief as a type of learning. We fixate on the people we’ve lost to teach ourselves how to live without them. By remembering their absence, we update our mental model of the world to fit it. This means that grief closely mirrors other learning states, such as disgust, trauma, and fear, that help us do the same thing. What this meant in practice was performing a suite of negative-space runs with April.
We started simple, with pictures of strangers and familiar but unrelated sights like video feeds of grocery stores and sidewalks. When we felt like the system had a good handle on non-emotive responses, we moved into shakier territory.
We needed April back in the lab. This time, though, we weren’t going to be tackling the people and places she wanted to remember. We’d be focusing on creating “negative spaces” in the data: all that she didn’t want popping up in her sadness so that the AI could learn the difference between grief and the other affective states that closely approximated it. She agreed to come in later that week.
April sat, the microsensors buzzing through her skull while Lyre triggered a series of increasingly emotional cues. She gave us a series of notes a bullying classmate had left in her high school locker, calling her all sorts of horrible names. Then pictures of her father and mother. The hippocampal leads lit with recall, the amygdala spiked, and the bridge dutifully sent its output into the phantom cortex’s renderer.
Her father appeared on the screen. The system didn’t know the difference: to the limbic circuits, loss is loss, whether you wanted to remember or not.
“That’s not who I asked for,” April said flatly, looking at the faint projection hanging over the test bench.
“We know,” Aux said.
We added a valence-gating layer. After each recall run, April tagged what she saw on a tablet: grief, fear, do not summon. Every memory signature — the spiking amygdala channels, the hippocampal pattern maps, the cortical bridge’s intermediate codes — carried her labels forward into the training set. Discarded electrode hoods hung from hooks on the wall, their interior pads oozing the faintly metallic odor of conductive gel.
Lyre ran the retraining loop live, updating the bridge’s decoder weights so the renderer learned to separate emotional intensity from summon-worthy grief.
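Stripped of all the wetware, the gating logic itself was almost embarrassingly simple. The toy below is only a caricature of what Lyre actually trained, with made-up feature names standing in for the real limbic signatures, but it captures the idea: summon a memory only when its signature sits closest to the examples April herself labeled as grief.

```python
# Illustrative only: a toy valence gate. Each tagged run is a point in a
# small, invented feature space; a memory is summoned only when it lands
# nearest the centroid of the runs April labeled "grief".

from collections import defaultdict
from math import dist
from typing import Dict, List

FEATURES = ("amygdala_spike", "hippocampal_recall", "arousal")

def centroids(tagged_runs: List[Dict]) -> Dict[str, tuple]:
    """Average the feature vectors for each of April's labels."""
    sums = defaultdict(lambda: [0.0] * len(FEATURES))
    counts = defaultdict(int)
    for run in tagged_runs:
        counts[run["label"]] += 1
        for i, f in enumerate(FEATURES):
            sums[run["label"]][i] += run["signature"][f]
    return {label: tuple(s / counts[label] for s in vec) for label, vec in sums.items()}

def gate(signature: Dict[str, float], cents: Dict[str, tuple]) -> bool:
    """Summon only if the event lands nearest the 'grief' centroid."""
    point = tuple(signature[f] for f in FEATURES)
    nearest = min(cents, key=lambda label: dist(point, cents[label]))
    return nearest == "grief"

# Two of April's tagged runs: her husband (grief) and her father (do_not_summon).
runs = [
    {"signature": {"amygdala_spike": 0.4, "hippocampal_recall": 0.9, "arousal": 0.3}, "label": "grief"},
    {"signature": {"amygdala_spike": 0.9, "hippocampal_recall": 0.7, "arousal": 0.8}, "label": "do_not_summon"},
]
cents = centroids(runs)
print(gate({"amygdala_spike": 0.45, "hippocampal_recall": 0.85, "arousal": 0.35}, cents))  # True
```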
By the third run, a beach in Greece where she’d honeymooned came through when she wanted it, and her father didn’t. Fear-heavy childhood scenes disappeared entirely from the output buffer.
Aux replayed the final pass into the phantom cortex — hippocampal recalls feeding the bridge, bridge feeding the synthetic V1 sheets — and for the first time that day, only the memories April actually wanted bloomed into view.
She looked at the empty air above the bench, where only her husband appeared, in focus and silent.
“Better,” she said.
Step 05. Cross-Train with Auxiliary Memories
The valence gating held through three separate phantom cortex runs: no father, no high school friend breakups, no clutter of unwanted memories. But there was still a problem with the projections. When the hippocampal traces pushed through the bridge and into the synthetic cortex, April’s husband would flicker after just a few seconds. Her grandmother, likewise, disappeared into a foggy haze, like the steam atop her glasses. Her memories just weren’t stable enough.
We needed additional input from other people who remembered her lost loved ones. A single person’s memories, it seemed, weren’t enough to generate consistent projections. Memory conformity seemed crucial for stabilizing the system. Not only would they help us strengthen the projections, but the effects of collaborative recall would augment April’s own future outputs as well.
Aux designed the auxiliary session room like a film studio; gathering memories was squarely in his imaginative wheelhouse. April sat behind a partition while the friends, exes, and even a few family members she’d managed to convince to participate sat in a circle around a microphone and eye trackers, with lightweight EEG strips affixed to each of them.
We asked them all kinds of questions, both to supplement the AI system and to boost April’s own recall:
Describe your first memory of Cline.
What’s a boring, extremely routine memory you have of April’s grandmother?
How did Cline talk at work versus at family dinners?
The harder parts, for April at least, were when we asked them to correct the memories April herself had given us. Their reunion beneath the cherry blossoms in spring hadn’t been after his stint working in Europe, as she’d thought, but after a huge fight the two of them had had. She’d kicked him out of the apartment, and he’d been staying with his best friend for a week. There had been no hugs, no long embrace, only a careful, painful conversation about what they both wanted and whether they still fit in each other’s lives.
After we’d finished interviewing everyone, we were shocked to find that more than half of them had stuck around, mostly the ones who knew her husband. We drank wine out of paper cups and sat on the slightly sticky floor of the lab. The stories continued even without recordings. We dimmed the fluorescent lights, and the perfusion chambers, vials, and incubators glowed like lava lamps. Aux showed off some new im-tech overlays, and we drifted through one of his rendered deep-sea dreams.
While Lyre and Aux worked on the data, I kept the organic stuff alive. The cortical sheets and hippocampal organoids sat in sealed perfusion chambers, their scaffolds constantly fed with oxygenated media so they wouldn’t necrose before implantation. Every few hours, I pulsed the graphene lace with low-amplitude stimulation patterns to keep the synapses from going quiet, watched calcium transients bloom under the scope like tiny green storms, then swapped out the old medium before lactate levels climbed high enough to kill any of them.
By this point, April’s retinal implant had fully healed. For most projects, I spent almost all my time on the eye itself, but I hadn’t thought about this one in the weeks since implantation. April agreed that she often forgot it was there.
The bridge tissue had to be ready for a human body, not a bioreactor, and that meant no ischemic edges, no scar-prone glial blooms, no dead zones in the middle of the bridge when we finally stitched April to her memories. It was repetitive work, but it kept me from catastrophizing.
Soon, the bridge wasn’t just surviving — it was performing. Aux piped April’s recall traces into the phantom cortex, and April’s loved ones appeared in crisp, frame-by-frame fidelity. No drift. No flicker. No spillover. Lyre stress-tested the decoder with noise injections and cross-talk overlays, blending two recall streams, one of her husband and one of her grandmother, and the bridge disentangled them seamlessly on the visual layer. We’d have been satisfied with 90 seconds of stable projection, but we were consistently getting three or four minutes.
Somehow, everything worked.
Step 06. Assess Implant Success Via Patient Ground Truthing
April arrived at her implantation surgery, accompanied by more than half the people we’d used for the auxiliary memories. The retinal implant had been a simple procedure. This one was far more intense.
An anesthesiologist put her under, and the team stabilized her skull in a frame. They then drilled three tiny openings into her head, each just wide enough for the thread-thin surgical arms to slide through. On the monitor, the cortical bridge looked like a thin mesh being gently pressed against the surface of her visual cortex, settling over it like a static-clung sheet. A second monitor tracked the hippocampal leads as they snaked deeper, flowing faintly as they followed pre-mapped paths into her memory centers. After two hours, the implants sat exactly where they were needed.
The implants were cushioned in a hydrogel scaffold that reduced swelling and coaxed new vessels to knit quickly around the electrodes. After only 48 hours, doctors cleared her to leave.
Just days later, we brought April in for an initial round of post-operative testing. We had to do a bit of fine-tuning to the model now that it was directly integrated with her brain. Looking at her, you would never know she could see the dead: All the implants were internal. We took her around the city, visiting all the locations, dense with memories and grief, that she’d shared with us during the build. We walked down the rotten pier where her husband had dived in after a teenager who had fallen into the ocean. She told us she saw him perfectly, dripping wet and smiling. Remembering, she said, was tiring, but she told us she was happier for it.
From a lab perspective, this was a headline result: Our system worked. Auxiliary-memory training closed the sparsity gap; valence gating reduced false positives by 98 percent; hippocampal recall signals held steady across cortical frames for over three minutes before their natural decay.
The psychological effects were less clear-cut. When April failed to show up for her two-week check-in, we went to her apartment and found her surrounded by dirty plates, coffee cups, and desiccated flowers. At first, it seemed as if all my worst fears about the implant had come true. She hadn’t taken the trash out since she’d left the hospital, and she was in the same rumpled, sour-smelling clothing she’d been wearing when we saw her last. She told us she’d spent the last two weeks with her ghosts, watching her husband pile trash bags on his shoulder and walk them out to the corner, and her grandmother bent over the kitchen table, pondering its grain. She’d done almost nothing other than live amongst these visions.
Aux, Lyre, and I spent nearly an hour helping her clean. Spurred by the humiliation of being seen like this, some dire warnings, and my direct threat that we’d disable the implant if her self-care didn’t improve, April swore she wouldn’t miss another appointment. She also shared the name of a therapist she’d recently contacted and agreed to ask her aunt to stay with her for a little while. We left, and I spent the next month buzzing with guilt that I’d ruined someone’s life.
By her six-month post-op check-in, however, April appeared to be thriving. In her neural patterns, grief acted not like an emotion flickering in and out, but like a steady recalibration — almost an error term adjusting her brain’s predictive model of a world missing someone. The deep longing and sadness remained, but she was learning to fight back against the pull of nostalgia. She was also going out for drinks with friends, working on a startup with one of her husband’s old roommates, and had taken up Tai Chi. She self-reported that she was seeing her deceased loved ones less these days, though it was still nice to call on them when she felt their absence.
April had proven strong enough to use her new eyes as a tool to help her move on, but the memory of that first home visit hung heavy over me. I didn’t think everyone would be able to pull themselves through such exquisitely rendered grief like she had. I feared most would become stuck in an emotional traceback error, reaching again and again for those who no longer existed, rather than using the implant to map out a new path through life.
These thoughts haunted me back in the lab. Under the faint smell of ethanol, I lifted a vial of opsin-gene-edited photoreceptors, their suspension glowing violet under the culture light, and seeded them onto a retinal scaffold. The incubator door hissed as I slid the tray inside.
Despite the technical thrill, the success of the implants, and April’s gratitude, I don’t think I’ll make another eye like hers. It’s too psychologically risky.
Inside the incubator, the earliest parts of a new eye were growing. This one was a pro-bono project for the state’s Department of Children and Families. The eye would overlay subtle thermal and blood-flow cues that signaled stress, so social workers could spot when a client was physically distressed, even if they were masking it. I hadn’t abandoned the idea of visualizing emotional states, but I didn’t want to spend my life helping people picture what was no longer there. The world we were in already offered more than we could possibly fathom.
Spencer Nitkey is a writer, researcher, and educator living in Philadelphia with his wife and a dog named after Jean Baudrillard. His fiction can be found in venues such as Apex Magazine, Asimov Press, Diabolical Plots, Lightspeed Magazine, Protocolized, and many others. You can find more about him and read more of his work on his website, spencernitkey.com.
We are grateful to Benyamin Abramovich Krasa for reading a draft of this essay and providing feedback on its descriptions of neurotech. Header image made by Ella Watkins-Dulaney.
Cite: Nitkey, S. “How to See the Dead.” Asimov Press (2025). DOI: https://doi.org/10.62211/92ws-21nb