Paths to the Future: A Year at Google Brain

I am currently a PhD student at Stanford, studying optimization and machine learning with Stephen Boyd, but from 2017 through 2018 I was a software engineer on the Google Brain team. I started three months after receiving a Master’s degree in computer science (also from Stanford), having just spent the summer working on a research project—a domain-specific language for convex optimization. At the time, a part of me wanted to continue working with my advisor, but another part of me was deeply curious about Google Brain. To see a famous artificial intelligence (AI) research lab from the inside would, at the very least, make for an interesting anthropological experience. So, I joined Google. My assignment was to work on TensorFlow, an open-source software library for deep learning.

From 2017 to 2018, I worked as an engineer at Google Brain, Google’s AI research lab. Brain sits in Google’s headquarters in Mountain View, CA. Photo by Robbie Shade, licensed CC BY 2.0.

Brain was a magnet for Google’s celebrity employees. For the past few years, Google’s CEO Sundar Pichai (who believes AI is “more profound than electricity or fire”) has emphasized that Google is an “AI-first” company, with the company seeking to implement machine learning in nearly everything it does. In a single afternoon, in the team’s kitchenette, I saw Pichai, co-founder Sergey Brin, and Turing Award winners David Patterson and John Hennessy.

I didn’t work with these celebrity employees, but I did get to work with some of the original TensorFlow developers. These developers gave me guidance when I sought it and habitually gave me more credit than I deserved. For example, my coworkers let me take the lead in writing an academic paper about TensorFlow 2, even though my contributions to the technology were smaller than theirs. The unreasonable amount of trust placed in me, and credit given to me, made me work harder than I would have otherwise.

The culture of Google Brain reminded me of what I’ve read about Xerox PARC. During the 1970s, researchers at PARC paved the way for the personal computing revolution by developing graphical user interfaces and producing one of the earliest incarnations of a desktop computer.

PARC’s culture is documented in The Power of the Context, an essay written by PARC researcher Alan Kay. Kay describes PARC as a place where senior employees treated less experienced ones as “world-class researchers who just haven’t earned their PhDs yet” (similar to how my coworkers treated me). Kay goes on to say that researchers at PARC were self-motivated and capable “artists,” working independently or in small teams towards similar visions. This made for a productive environment that at times felt “out of control”:

A great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs, some of which we are celebrating today.

At Brain, as at PARC, researchers and engineers had an incredible amount of autonomy. They had bosses, to be sure, but they had a lot of leeway in choosing what to work on — in finding “their own paths to the future.” (I say “had”, not “have”, since I’m not sure whether Brain’s culture has changed since I left.)

I’ll give one example: a few years ago, many on the Google Brain team realized that machine learning tools were closer to programming languages than to libraries, and that redesigning their tools with this fact in mind would unlock greater productivity. Management didn’t command engineers to work on a particular solution to this problem. Instead, several small teams formed organically, each approaching the problem in its own way. TensorFlow 2.0, Swift for TensorFlow, JAX, Dex, Tangent, Autograph, and MLIR were all different angles on the same vision. Some were in direct tension with each other, but each was improved by the existence of the others—we shared notes often and reused each other’s solutions when possible. Many of these tools may never become anything more than promising experiments, but it’s possible that at least one will be a breakthrough.

TF 2.0, Swift for TensorFlow, and JAX, developed by separate sub-teams within Google Brain, are different paths to the same vision—an enjoyable, expressive, and performant programming language for machine learning.

I would guess that the PARC-like context in which Brain operated was instrumental in the creation of TensorFlow. In late 2015, Google open-sourced TensorFlow, making it freely available to the entire world. TensorFlow quickly became enormously popular. Instructors at Stanford and other universities used it in their curricula (my friend Chip Huyen, for example, created a Stanford course called TensorFlow for Deep Learning Research), researchers across the world used it to run experiments, and companies used it to train and deploy models in the real world. Today, TensorFlow is the fifth most popular project on GitHub, as measured by star count, out of the many millions of public repositories hosted there.

And yet, at least for TensorFlow, Google Brain’s hyper-creative, hyper-productive, and “out of control” culture was a double-edged sword. In the process of making their own paths to a shared future, TensorFlow engineers released many features sharing a similar purpose. Many of these features were subsequently deemphasized in favor of more promising ones. While this process might have selected for good features (like tf.data and eager execution), it frustrated and exhausted our users, who struggled to keep up.

A Google TPU, a hardware accelerator for machine learning. While at Brain, I created a custom TensorFlow operation that made it easier to load balance computations across TPU cores.

Brain differed from PARC in at least one way: unlike PARC, which infamously failed to commercialize its research, Google productionized projects that were incubated in Brain. Examples include Google Translate, the BERT language model (which informs Google Search), TPUs (hardware accelerators that Google rents to external clients and uses internally for a variety of production projects), and Google Cloud AI (which sells AutoML as a service). In this sense Google Brain was a natural extension of Larry Page’s desire to work with people who want to do “crazy world-breaking things” while having “one foot in industry” (as Page put it in an interview with Walter Isaacson).

Leaving Google Brain for a PhD was difficult. I had grown accustomed to the perks, and I appreciated the team’s proximity to research. Most of all I loved working alongside a large team on TensorFlow 2.0—I’m passionate about building better tools, for better minds. But I also love the creative expression that research provides.

I’m often asked why I didn’t simply involve myself in research at Brain, instead of enrolling in a PhD program. Here’s why: the zeitgeist had little room for topics other than deep learning and reinforcement learning. Indeed, in 2018, Google rebranded “Google Research” to “Google AI,” redirecting research.google.com to ai.google.com. (The rebranding understandably raised some eyebrows. It appears the change was quietly rolled back recently, and the Google Research brand has been resurrected.) While I’m interested in machine learning, I’m not convinced that today’s AI is anywhere near as profound as electricity or fire, and I wanted to be trained in a more intellectually diverse environment.

In fact, most of my mentors at Brain encouraged me to enroll in the PhD program. Only one researcher strongly discouraged me from pursuing a PhD, comparing the experience to “psychological torture.” I was so shocked by his dark warning that I didn’t ask any follow-up questions; he didn’t elaborate, and our meeting ended shortly afterwards.

These days, in addition to machine learning, I’m interested in convex optimization, a branch of computational mathematics concerned with making optimal choices. Convex optimization has many real-world applications—SpaceX uses it to land rockets, self-driving cars use it to track trajectories, financial companies use it to design investment portfolios, and, yes, machine learning engineers use it to train models. While well-studied, convex optimization is still a young and niche technology. I suspect it has the potential to become a powerful, widely used one. I’m interested in doing the work—a bit of math and a bit of computer science—to realize its potential. My advisor at Stanford, Stephen Boyd, is perhaps the world’s leading expert on applications of convex optimization, and I simply could not pass up an opportunity to do useful research under his guidance.
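
For readers who haven’t seen what this looks like in code, here is a toy portfolio-style problem written with CVXPY, the open-source library mentioned below. It’s only a sketch, and every number in it is made up:

import cvxpy as cp
import numpy as np

# Choose a three-asset allocation that trades expected return against risk.
mu = np.array([0.05, 0.10, 0.07])      # expected returns (made up)
Sigma = np.diag([0.01, 0.09, 0.04])    # covariance matrix (made up)

w = cp.Variable(3)
objective = cp.Maximize(mu @ w - 0.5 * cp.quad_form(w, Sigma))
constraints = [cp.sum(w) == 1, w >= 0]
cp.Problem(objective, constraints).solve()
print(w.value)  # the optimal allocation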

SpaceX solves convex optimization problems onboard to land its rockets, using CVXGEN, a code generator for quadratic programming developed at Stephen Boyd’s Stanford lab. Photo by SpaceX, licensed CC BY-NC 2.0.

It’s been just over a year since I left Google and started my PhD. Since then, I’ve collaborated with my lab to publish several papers, including one that makes it possible to automatically learn the structure of convex optimization problems, bridging the gap between convex optimization and deep learning. I’m now one of three core developers of CVXPY, an open-source library for convex optimization, and I have total creative control over my research and engineering projects.

There are many things about Google Brain that I miss, my coworkers most of all. But now, at Stanford, I get to collaborate with and learn from an intellectually diverse group of extremely smart and passionate individuals, including pure mathematicians, electrical and chemical engineers, physicists, biologists, and computer scientists.

I’m not sure what I’ll do once I graduate, but for now, I’m having a lot of fun—and learning a ton—doing a bit of math, writing papers, shipping real software, and exploring several lines of research in parallel. If I’m very lucky, one of them might even be a breakthrough.

A Primer on TensorFlow 2.0

This post is also available as a Python notebook.

From September 2017 to October 2018, I worked on TensorFlow 2.0 alongside many engineers. In this post, I’ll explain what TensorFlow 2.0 is and how it differs from TensorFlow 1.x. Towards the end, I’ll briefly compare TensorFlow 2.0 to PyTorch 1.0. This post represents my own views; it does not represent the views of Google, my former employer.

TensorFlow (TF) 2.0 is a significant, backwards-incompatible update to TF’s execution model and API.

Execution model. In TF 2.0, all operations execute imperatively by default. Graphs and the graph runtime are both abstracted away by a just-in-time tracer that translates Python functions executing TF operations into executable graph functions. This means in TF 2.0, there is no Session, and no global graph state. The tracer is exposed as a Python decorator, tf.function. This decorator is for advanced users. Using it is completely optional.
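
As a minimal sketch of the difference (with the TF 2.0 alpha installed as shown below; the function name is mine):

import tensorflow as tf

# Operations run imperatively and return concrete values right away.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))  # prints the numeric result, no Session required

# tf.function traces the Python function into a callable graph function.
@tf.function
def matmul_twice(a):
    return tf.matmul(a, tf.matmul(a, a))

print(matmul_twice(x))  # same kind of result, but executed as a graph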

API. TF 2.0 makes tf.keras the high-level API for constructing and training neural networks. But you don’t have to use Keras if you don’t want to. You can instead use lower-level operations and automatic differentiation directly.
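
For instance, a small Keras model and a hand-rolled gradient can live side by side; this is just a sketch, with arbitrary layer sizes:

import tensorflow as tf

# The high-level path: build and compile a model with tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# The low-level path: raw operations plus automatic differentiation.
w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = w * w
print(tape.gradient(loss, w))  # d(w^2)/dw at w=3 is 6.0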

To follow along with the code examples in this post, install the TF 2.0 alpha.

pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
tf.__version__
'2.0.0-alpha0'

Contents

  1. Why TF 2.0?
  2. Imperative execution
  3. State
  4. Automatic differentiation
  5. Keras
  6. Graph functions
  7. Comparison to other Python libraries
  8. Domain-specific languages for machine learning

Maine and Potatoes: Approaching Life Like Steinbeck

Per my sister’s recommendation, I recently picked up Travels with Charley, Steinbeck’s account[1] of a cross-country road trip he took one summer with his beloved Poodle in tow.

Steinbeck’s favorite kind of journey is a meandering one. By his own admission, he’s “going somewhere” but “doesn’t greatly care whether” he arrives[2]. Reflecting upon a leisurely detour through Maine’s potato farms, he writes,

everything in the world must have design or the human mind rejects it. But in addition it must have purpose or the human conscience shies away from it. Maine was my design, potatoes my purpose.

It’s tempting to interrogate whether your pursuits are meaningful, be they hobbies or careers[3]. A degree of such interrogation can be constructive: living with intention necessitates a design and a purpose. But indulge too much and you risk descending into a Hamlet-esque, nihilistic spiral that will inevitably derail your pursuit. The last thing you (and certainly I) want is to end up as Camus’ strawman, the individual who cannot cope with his discovery that life is without meaning. That Steinbeck’s design was Maine and his purpose potatoes is a gentle reminder that our own designs and purposes need not be grand. All that we require of them is to exist.

Footnotes
[1] The introduction to the book’s 50th anniversary edition cautions readers against taking Steinbeck’s story too literally, for he was “a novelist at heart.” But the book reads truthfully enough and, just as important, entertainingly enough. As author and writing instructor John McPhee joked in an interview with The New Yorker’s David Remnick, 94 percent accuracy is good enough for creative non-fiction.
[2] Approaching our actions with such a sentiment is precisely the Bhagavad Gita’s prescription for attaining the Good Life. For that matter, it is also the prescription of Kierkegaard’s Fear and Trembling. Both recommend we resign ourselves to the frustration of our desires, but that we do so happily so that we may pursue them nonetheless. If this sounds difficult to you, you’re not alone; Kierkegaard’s narrator describes this process as something he cannot hope to understand, though he spends the entire text describing it.
[3] Academics at MIT’s Sloan School of Management recently asked 135 people what made their work meaningful. For many, meaningful work is simultaneously “intensely personal” and bigger than themselves.

Learning about Learning: Educational Data Mining

Earlier this summer, I crossed the Atlantic and traveled to Madrid to give a talk at the 8th International Conference on Educational Data Mining. I presented a prototype, built by myself and my colleagues at Stanford, that stages intelligent interventions in the discussion forums of Massive Open Online Courses. Our pipeline, dubbed YouEDU, detects confusion in forum posts and recommends instructional video snippets to their presumably confused authors.

The Educational Data Mining Conference took place in Madrid this year. Pictured above is the Retiro Pond in Buen Retiro Park. It has nothing to do with EDM, but I enjoyed the park, so please enjoy the picture.

No, not that kind of EDM
Educational Data Mining — affectionately collapsed to EDM — might sound opaque. From the society’s website, EDM is the science and practice of

developing methods for exploring the unique and increasingly large-scale data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.

Any educational setting that generates data is a candidate for EDM research. So really any educational setting is a candidate, full stop. In practice, EDM-ers often find themselves focusing their efforts on computer-mediated settings, like tutoring systems, educational games, and MOOCs, perhaps because it’s easy to instrument these systems to leave behind trails of data.

Popular methods applied to these educational settings include student modeling, affect detection, and interventions. Student models attempt to approximate the knowledge that a student possesses about a particular subject, just as a teacher might assess her student, while affect detectors classify the behavior and emotional states of students. Interventions attempt to improve the experience of students at critical times. My own work marries affect detectors with interventions in an attempt to improve MOOC discussion forums.

Making discussion forums smarter
I became interested in augmenting online education with artificial intelligence a couple of years ago, after listening to a talk at Google and speaking with Peter Norvig. That interest lay dormant for a year, until I began working as a teaching assistant for a Stanford MOOC. I spent a lot of time answering questions in the discussion forum, questions asked by thousands of students. Helping these students was fulfilling work, to be sure. But slogging through a single, unorganized stream of questions and manually identifying urgent ones wasn’t particularly fun. I would have loved an automatically organized inbox of questions.

The YouEDU architecture. Posts are fed to a classifier that screens posts for confusion, and our recommender then fetches clips relevant to the confused posts.

That these discussion forums were still “dumb”, so to speak, surprised me. I reached out to the platforms team of Stanford Online Learning, who in turn sent me to Andreas Paepcke, a senior research scientist (and, I should add, an incredibly supportive and kind mentor). It turned out that I wasn’t the only one who wished for a more intelligent discussion forum. I paired up with a student of Andreas’ to tackle the problem of automatically classifying posts by the affect or sentiment they expressed.

Our initial efforts at affect detection were circumscribed by the data available to us. Machine learning tasks like ours need human-tagged data — in our case, we needed a dataset of forum posts in which each post was tagged with information about the affect expressed in it. At the time, no such dataset existed. So we created one: the Stanford MOOCPosts dataset, available to researchers upon request.

The dataset powered the rest of our work. It enabled us to build a model to predict whether or not a post expressed confusion, as well as a pipeline to recommend relevant clips from instructional videos to the author of that confused post.
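
The details of our model are in the paper, but as a rough sketch of what the classification half involves, here is a minimal bag-of-words confusion detector. The posts and labels below are invented for illustration; this is not the YouEDU model:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled posts; the real dataset has thousands of human-tagged examples.
posts = [
    "I don't understand how backpropagation works, can someone explain?",
    "Thanks for the great lecture, everything was clear!",
    "I'm lost on problem 2, what does the second term mean?",
    "Loved this week's material.",
]
confused = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(posts, confused)

# With real training data, posts like this one should be flagged as confused.
print(detector.predict(["I'm so lost on lecture 3, what is going on?"]))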

YouEDU was not meant to replace teaching assistants in MOOCs. Videos are notoriously difficult to search through (they’re not indexed, like books are), and YouEDU simply helps confused students find content relevant to the topic they’re confused about. Our affect classifiers can also be used outside of YouEDU — for example, they could be used to highlight urgent posts for the instructors, or even for other students in the forum.

If you’d like to learn more about our work, you’re welcome to look at the publication, my slide deck, or the edxclassify repository.

Data mining is not nefarious
My experience at EDM was a great one. I learned lots from learned people, made lasting friends and memories, and so on. I could talk at length about interesting talks and papers — like Streeter’s mixture modeling of learning curves, or MacLellan’s slip-aware bounded logistic regression. But I won’t. You can skim the proceedings on your own time.

The EDM community is tightly knit, or at least more tightly knit than that of ACM’s Learning @ Scale, the only other education conference I’ve attended. And though no raves were attended, EDM-ers did close the conference by dancing the night away in a bar, after dining, drinking, and singing upon the roof of the Reina Victoria.

Festivities aside, a shared sense of urgency pulsed through the conference. As of late, the public has grown increasingly suspicious of those who collect and analyze data en masse. We see it in popular culture: Ex Machina, for example, with its damning rendition of a Google-like Big Brother who recklessly and dangerously abuses data, captures the sentiment well. The public’s suspicion is certainly justified, but its non-discriminating nature becomes problematic for EDM-ers. The public fears that those analyzing student data are, like Ex Machina’s tragic genius, either greedy, hoping to manipulate education in order to monetize it, or careless, liable to botch students’ education altogether. For the record, neither is true. EDM researchers are both well-intentioned and competent.

What’s an EDM-er to do? Some at the conference casually floated the idea of rebranding — for example, perhaps they should call themselves educational data scientists, not miners. Perhaps, too, they should write to legislators to convince them that their particular data mining tasks are not nefarious. In a rare example of representative government working as intended, Senator Vitter of Louisiana recently introduced a bill that threatens to cripple EDM efforts. The Student Privacy Protection Act, a proposed amendment to FERPA, would make it illegal for researchers to, among other things, assess or model psychological states, behaviors, or beliefs.

Were Vitter’s bill to go into effect as law, it would potentially wipe out the entire field of affect modeling. What’s more, the bill would ultimately harm the experience of students enrolled in online courses — as I hope YouEDU shows, students’ online learning experiences can be significantly improved by intelligent systems.

Now, that said, I understand why folks might fear a computational system that could predict behavior. I could imagine a scenario in which an educator mapped predicted affect to different curricula; students who appeared confused would be placed in a slower curriculum, while those who appeared knowledgeable would be placed in a faster one. Such tracking would likely fulfill the prophecies of the predictor, creating an artificial and unfortunate gap between the “confused” and “knowledgeable” students. In this scenario, however, the predictive model isn’t inherently harmful to the student’s education. The problem instead lies with the misguided educator. Indeed, consider the following paper-and-pencil equivalent of this situation. Our educational system puts too much stock in tests, a type of predictive tool. Perform poorly on a single math test in the fifth grade and you might be placed onto a slow track, making it even less likely you’ll end up mathematically inclined. Does that mean we should ban tests outright? Probably not. It just means that we should think more carefully about the policies we design around tests. And so it is for the virtual: It is the human abuse of predictive modeling, rather than predictive modeling in and of itself, that we should guard against.

Machines that Learn: Making Distributed Storage Smarter

Equipped with shiny machine learning tools, computer scientists these days are optimizing lots of previously manual tasks. The idea is that AI can make certain procedures smarter — we can capitalize on a system’s predictability and implicit structure to automate at least part of the task at hand.

For all the progress we’ve made recently in soulmate-searching pipelines and essay-grading tools, I haven’t seen too many applications of AI to computer infrastructure. AI could solve interesting infrastructure problems, particularly when it comes to distributed systems — in a reflexive sort of way, machines can and should use machine learning to learn more about themselves.

Being smart about it: The case for intelligent storage systems
Distributed systems cover a lot of ground; to stop myself from rambling too much, I’ll focus on distributed storage systems here. In these systems, lots of machines work together to provide a transparent storage solution to some number of clients. Different machines often see different workloads — for example, some machines might store particularly hot (i.e., frequently accessed) data, while others might be home to colder data. The variability in workloads matters because particular workloads play better with particular types of storage media.

Manually optimizing for these workloads isn’t feasible. There are just too many files and independent workloads for humans to make good, case-by-case decisions about where files should be stored.

The ideal, then, is a smart storage system. A smart system would automatically adapt to whatever workload we threw at it. By analyzing file system metadata, it would make predictions about files’ eventual usage characteristics and decide where to store them accordingly. If a file looked like it would be hot or short-lived, the smart system could cache it in RAM or flash; otherwise, it could put it on disk. Creating policies with predictive power would not only minimize IT administrators’ work, but would also boost performance, lowering latency and increasing throughput on average.
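
As a sketch of what such a policy might look like once predictions are available (the tier names and thresholds below are invented for illustration, not taken from any real system):

def choose_tier(predicted_hot, predicted_lifetime_seconds):
    # Map predicted usage characteristics to a storage tier.
    if predicted_hot or predicted_lifetime_seconds < 60:
        return "ram"    # hot or very short-lived files stay in memory
    if predicted_lifetime_seconds < 24 * 60 * 60:
        return "flash"  # warm files that will die within a day
    return "disk"       # everything else goes to spinning disks

print(choose_tier(predicted_hot=False, predicted_lifetime_seconds=30))       # ram
print(choose_tier(predicted_hot=False, predicted_lifetime_seconds=1000000))  # disk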

From the past, a view into the future: Self-* storage systems
To my surprise, there doesn’t seem to be a whole lot of work in making storage systems smarter. The largest effort I came across was the self-* storage initiative, undertaken by a few faculty over at CMU back in 2003. From their white paper,

‘self-* storage systems’ [are] self-configuring, self-organizing, self-tuning, self-healing, self-managing systems of storage bricks …, each consisting of CPU(s), RAM, and a number of disks. Designing self-*-ness in from the start allows construction of high-performance, high-reliability storage infrastructures from weaker, less reliable base units …

There’s a wealth of interesting content to be found in the self-* papers. In particular, in Attribute-Based File Prediction, the authors propose ways to exploit metadata and information latent in filenames to bucket files into binary classes related to their sizes, access permissions, and lifespans.

Predictions were made using decision trees, which were constructed using the ID3 algorithm. Starting from a root node that holds the entire training set, ID3 splits the data into sub-trees according to the feature that seems like the best predictor (the metric used is typically information gain, but the self-* project used the chi-squared statistic). The algorithm then recursively builds a tree whose leaf nodes correspond to classes. As an aside, it turns out that ID3 tends to overfit training data — these lecture notes discuss ways to prune decision trees in an attempt to increase their predictive power.
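
To make the splitting criterion concrete, here is a tiny worked example of information gain (the self-* work used the chi-squared statistic instead, but the idea of scoring candidate splits is the same; the labels and the split are made up):

import numpy as np

def entropy(labels):
    # Shannon entropy, in bits, of an array of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    # Reduction in entropy from splitting the parent node into two children.
    w_left, w_right = len(left) / len(parent), len(right) / len(parent)
    return entropy(parent) - (w_left * entropy(left) + w_right * entropy(right))

# Toy split: "does the name end in .tmp?" cleanly separates short-lived files.
parent = np.array(["short", "short", "short", "long", "long", "long"])
left = np.array(["short", "short", "short"])   # names ending in .tmp
right = np.array(["long", "long", "long"])     # everything else
print(information_gain(parent, left, right))   # 1.0 bit: a perfect split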

Diagram from "File Classification in Self-* Storage Systems", by Ganger, et. al.

Diagram from “File Classification in Self-* Storage Systems”, by Ganger et al.

The features used were coarse. For example, files’ basenames were broken into three chunks: prefixes (characters preceding the first period), extensions (characters following the last period), and middles (everything in between); directories were disregarded. These simple heuristics proved fairly effective; prediction accuracy didn’t fall below 70 percent.
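
A minimal sketch of that splitting heuristic (the function name and example filenames are mine, not from the paper):

def filename_features(basename):
    # Split a basename into the three coarse chunks described above:
    # 'model.ckpt.tmp' -> prefix 'model', middle 'ckpt', extension 'tmp'.
    if "." not in basename:
        return {"prefix": basename, "middle": "", "extension": ""}
    first, last = basename.index("."), basename.rindex(".")
    return {
        "prefix": basename[:first],
        "middle": basename[first + 1:last],
        "extension": basename[last + 1:],
    }

print(filename_features("model.ckpt.tmp"))
# {'prefix': 'model', 'middle': 'ckpt', 'extension': 'tmp'}
print(filename_features("README"))
# {'prefix': 'README', 'middle': '', 'extension': ''}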

It’s not clear how a decision tree trained using these same features would perform if more granular predictions were desired, or if the observed filenames were less structured (what if they lacked delimiters?). I could imagine a much richer feature set for filenames; possible features might include the number of directories, the ratio of numbers to characters, TTLs, etc.

From research to reality: Picking up where self-* left off
The self-* project was an ambitious one — the researchers planned to launch a large-scale implementation of it called Ursa Major, which would offer hundreds of terabytes of automatically tuned storage to CMU researchers.

I recently corresponded with CMU professor Greg Ganger, who led the self-* project. It turns out that Ursa Major never fully materialized, though significant and practical progress in smart storage systems was made nonetheless. That the self-* project lives no longer doesn’t mean the idea of smart storage systems should die, too. The onus lies with us to pick up the torch, and to continue where the folks at CMU left off.

A Small Glass Box

I took a trip up to San Francisco’s Exploratorium, some two weeks past. Though recently relocated, the Exploratorium is comfortably familiar. It’s still packed with exhibits that span the spectrum from mystically enchanting (one station lets museum-goers create delicate purple auroras that warp and spiral in a glass tube) to delightfully curious (another rapidly spins dozens of Lego Batmen and dolphins, making them dance to the tune of the Caped Crusader’s catchy theme song).

Exhibits at this unconventional museum are designed to stir your curiosity. It’s hard to resist playing with them, but of course there’s no need to — almost everything is hands-on. Photo by Sara Yang.

I meandered through the museum, all the while searching for a particular treasure. Just before the closing bells rang, I stumbled upon it: the cloud chamber, a large, humming, refrigerated box with a sky-facing window that allows for the observation of cosmic radiation. Cosmic rays hail from beyond the solar system. They collide in the earth’s atmosphere, and minuscule particles rain torrentially upon us in the aftermath. The cloud chamber makes an otherwise imperceptible and invisible downpour from the heavens palpably visible, if only for a fleeting moment.

Our homemade cloud chamber consists of a small box with a lid lined with black felt. In order to nudge muons into uncloaking themselves, we douse the felt with isopropanol and heat it from above with my desk lamp.

The sight brought me back four years, to the first time I saw muons zip hither and thither through the same chamber. I had spent the better part of that year in my garage, tinkering with a friend of mine by the name of Hemanth on our own chamber for a science project.

On a nostalgic whim, I called up Hemanth the next day. We decided to fire up the chamber once again, for old times’ sake. We scrounged the necessary components, lugged them to Hemanth’s garage, and got started. Pulverizing dry ice, we began working to the sound of snow crunching underfoot and the sight of fumes eddying about.

With thick gloves and sturdy hammers, we first crush the dry ice into a coarse powder and pack it tightly into a Styrofoam base, on top of which the chamber sits. The one-two punch of a cooling source and a heating source forces the alcohol into a supersaturated, supercooled state. Muons streaking through the chamber rip electrons off the vapor, causing water molecules to visibly condense around their paths.

I followed our procedure as if on autopilot; my mind wandered and let bittersweet memories leak. We packed the dry ice into a foam base (days colored by failed prototype runs), doused the chamber with isopropanol (afternoons brightened by faint flashes of muons), and positioned my lamp atop the box (nights illuminated by the bluish glow of computer monitors).

Our small glass box held us rapt, as we saw the ghosts of muons pass through it. Unfortunately, the streaks are difficult to capture on camera.

We left the chamber to run for some time. When we returned, muons were streaking visibly through it. Spellbound, we lingered by the chamber for over half an hour. Four years ago, an anxious desire to create something novel and a preoccupation with results left little room for wonder. Now, we could stare into the cloud chamber for but the simple sake of doing so. The muons that passed through it, falling like delicate strands of spider web, were, paradoxically, both otherworldly and earthly. Our small glass box, glued together by a mom-and-pop craft shop, had become a window into the universe’s secrets. The sight was as humbling as it was beautiful.