From 2017 through 2018, I was a software engineer on the Google Brain team. I started three months after receiving a Master’s degree in computer science, having just spent the summer working on a research project—a programming language for making optimal decisions, based on math—at Stanford, my alma mater. A part of me wanted to continue working with my advisor at Stanford, but another part of me was deeply curious about Google Brain. To see a famous artificial intelligence (AI) research lab from the inside would, at the very least, make for an interesting anthropological experience. So I joined Google in the autumn. My assignment was to work on TensorFlow, an open-source software library for building machine learning models.
My desk was just a few steps away from our kitchenette, or microkitchen, in Google-speak. Its name notwithstanding, the microkitchen was a comfortably large space, softly illuminated by sunlight that spilled through tall windows; the windows framed a view of Shoreline Park’s grassy knolls, a rustic gold during the dry season and a lush green when wet. The microkitchen was a place for snacking, caffeinating, informal conversations, and—for a new graduate like myself, not yet accustomed to the glamour of working at Brain—a place for people-watching. In that one kitchen, in a single afternoon, I saw Google’s CEO Sundar Pichai (who believes that AI is “more profound than electricity or fire”), its co-founder Sergey Brin, Turing Award winners David Patterson and John Hennessy, and memeworthy engineer Jeff Dean (who, among other things, can make an espresso in an astonishingly short amount of time).
The glamour wore off quickly. What I came to value most about Brain was that it provided me access to exceptional mentors. I worked with some of the original TensorFlow developers, including Derek Murray, Asim Shankar, Ashish Agarwal, and Alexandre Passos. These engineers treated me like a peer. They encouraged me to take ownership of important tasks, giving me guidance when I sought it and giving me more credit than I deserved. For example, at one point, Derek helped me redesign a small but important part of the TensorFlow runtime, making it possible to partition named units of computation across multiple devices. Even though much of the design was his own, he always introduced our work to others as “Akshay’s project.” Later, Asim, Ashish, and Alexandre let me take the lead in writing an academic paper about TensorFlow Eager, a technology that transformed TensorFlow from a declarative programming language to an imperative one, even though my contributions to the technology were smaller than theirs. The unreasonable amount of trust placed in me, and credit given to me, made me work harder than I would have otherwise.
In many ways, the culture of Google Brain reminded me of what I’ve read about Xerox PARC, perhaps the most influential of industrial computer science research labs. During the 1970s, researchers at PARC paved the way for the personal computer revolution by developing foundational technologies like graphical user interfaces and producing one of the earliest incarnations of a desktop computer. The culture of PARC is documented in The Power of the Context, an essay written by PARC researcher Alan Kay. Kay describes PARC as a place where senior employees treated less experienced ones as “world-class researchers who just haven’t earned their PhDs yet.” Kay goes on to say that researchers at PARC were self-motivated and capable “artists,” working to bring their “visions” into reality. Artists worked independently or in small teams towards similar visions, making for a productive environment that felt “out of control” at times:
A great vision acts like a magnetic field from the future that aligns all the little iron particle artists to point to “North” without having to see it. They then make their own paths to the future. Xerox often was shocked at the PARC process and declared it out of control, but they didn’t understand that the context was so powerful and compelling and the good will so abundant, that the artists worked happily at their version of the vision. The results were an enormous collection of breakthroughs, some of which we are celebrating today.
At Brain, as at PARC, researchers and engineers have an incredible amount of autonomy. They have bosses, to be sure, but they have a lot of leeway in choosing what to work on — in finding “their own paths to the future.” I’ll give one example: a few years ago, many on the Google Brain team realized that machine learning tools were closer to programming languages than to libraries, and that redesigning their tools with this fact in mind would unlock greater productivity. Management didn’t command engineers to work on a particular solution to this problem. Instead, several small teams formed organically, each approaching the problem in its own way—TensorFlow 2.0, Swift for TensorFlow, JAX, Dex, Tangent, Autograph, and MLIR are all different angles on the same vision. Some are in direct tension with each other, since they’re solving the same problem. Still, each is improved by the existence of the others; we’d reuse each other’s solutions when possible, and we’d share notes often. It’s possible that many of these tools will never become more than promising experiments, but it’s also possible that at least one will be a breakthrough.
Unlike Xerox, which struggled to find applications of PARC’s research, Google makes use of the tools and research incubated by its AI research lab. For the past few years, Pichai has emphasized that Google is an “AI-first” company, one seeking to implement “machine learning techniques in nearly everything [they] do”; Google Photos, Translate, and the Assistant are perhaps the most salient examples of AI-powered products.
I would guess that the PARC-like context in which Brain operates was instrumental in the creation of TensorFlow, the machine learning tool on which I worked. In late 2015, Google open-sourced TensorFlow, making it freely available to the entire world. At the time, TensorFlow was the only open-source framework for deep learning that was actively developed by industry. By lowering the barriers to training machine learning models on very large datasets, and to using the trained models in real-world applications, TensorFlow accelerated both machine learning research and its industrial applications.
TensorFlow quickly became enormously popular. Instructors at Stanford and other universities used it in their curricula (my friend Chip Huyen, for example, created a Stanford course called TensorFlow for Deep Learning Research), researchers across the globe used it to run experiments, and AI companies used it to train and deploy models in the real world. Today, TensorFlow is the fifth most popular project on GitHub, out of the many millions of public repositories hosted there, as measured by the number of “stars,” or likes, a repository has received.
At least for TensorFlow, however, Google Brain’s hyper-creative, hyper-productive, and “out of control” culture has been a double-edged sword. In the process of making their own paths to a shared future, TensorFlow engineers released many features sharing a similar purpose, like constructing neural networks. Many of these features were subsequently deemphasized in favor of more promising ones. While this process might have selected for good features, it frustrated and exhausted our users, who struggled to keep up. On the other hand, many of the surviving features did substantially improve TensorFlow—for example, tf.data simplified data processing pipelines, and eager execution made TensorFlow more expressive and pleasant to use.
In late 2018, I left Google and enrolled in a PhD program at Stanford. Leaving Google Brain was difficult: I loved the perks—the free espresso most of all—but even more, I loved that Brain felt like a vibrant research lab. Not only did I get to write an academic paper, I also gave several research talks, all hosted by Brain, and I traveled to London to visit DeepMind, to Sweden to attend a conference, and to Stanford to deliver a guest lecture. Most of all, I loved working alongside a large team on TensorFlow 2.0—I’m passionate about building better tools, for better minds. But I also love the creative expression that research provides. It’s true that, instead of starting a PhD, I might have involved myself in research at Brain, joining other scientists’ projects and perhaps eventually starting my own. But the zeitgeist had little room for topics other than deep learning and reinforcement learning. (In 2018, Google rebranded “Google Research” to “Google AI,” redirecting research.google.com to ai.google.com; the rebranding understandably raised some eyebrows. It appears the change was quietly rolled back sometime recently, and the Google Research brand has been resurrected.) While I’m interested in these topics, I’m not convinced that today’s AI is anywhere near as profound as electricity or fire, and I’d like to be trained in a more intellectually diverse environment.
These days, in addition to machine learning, I’m interested in convex optimization, a branch of computational mathematics concerned with making optimal choices. Convex optimization has many real-world applications—SpaceX uses it to land rockets, self-driving cars use it to track trajectories, and financial companies use it to design investment portfolios. While well-studied from a theoretical perspective, as a technology, convex optimization is still young and niche. I suspect that convex optimization has the potential to become a powerful, widely-used technology. I’m interested in doing the work—a bit of mathematics, and a bit of computer science—to realize its potential. My advisor at Stanford, Stephen Boyd, is perhaps the world’s leading expert on applications of convex optimization, and I simply could not pass up an opportunity to do useful research under his guidance. (In fact, most of my mentors at Brain encouraged me to enroll in the PhD program. Only one researcher strongly discouraged me from pursuing a PhD, comparing the experience to “psychological torture.” I was so shocked by his dark warning that I didn’t ask any follow-up questions, he didn’t elaborate, and our meeting ended shortly afterwards.)
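For readers unfamiliar with the field, “making optimal choices” has a concrete meaning: pick the point that minimizes a convex function, possibly subject to constraints. The following toy sketch (my own hypothetical illustration, not drawn from any real application above) minimizes a simple one-dimensional convex function by gradient descent; real tools accept a symbolic description of a problem and hand it to specialized solvers, so this only conveys the flavor.

```python
# Toy illustration of convex optimization: minimize the convex function
# f(x) = (x - 1)^2 + (x - 3)^2 by gradient descent. Because f is convex,
# any local minimum is the global minimum, so this simple method finds
# the true optimum (here, x = 2).

def f(x):
    return (x - 1) ** 2 + (x - 3) ** 2

def grad_f(x):
    # Derivative of f: 2(x - 1) + 2(x - 3) = 4x - 8.
    return 2 * (x - 1) + 2 * (x - 3)

def minimize(x0, step=0.1, iters=200):
    """Run `iters` steps of gradient descent from the starting point x0."""
    x = x0
    for _ in range(iters):
        x -= step * grad_f(x)
    return x

x_star = minimize(0.0)  # converges to the global minimizer, x = 2
```

The guarantee that a local search finds the global optimum is precisely what makes convex problems so useful in practice: solvers can certify optimality, which matters when the output steers a rocket or a portfolio.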
It’s been just over a year since I left Google and started my PhD. Since then, I’ve collaborated with my lab to publish five papers, including one that makes it possible to automatically learn the structure of convex optimization problems, bridging the gap between convex optimization and deep learning. There are many things about Google Brain that I miss, my coworkers—the most skilled engineers I know—most of all. But now, at Stanford, I get to collaborate with and learn from an intellectually diverse group of extremely smart and passionate individuals: pure mathematicians, electrical and chemical engineers, physicists, biologists, and computer scientists. (Sometimes, when I look at the layers of equations, diagrams, and curves scrawled on the whiteboards in our lab, I get the strange sense that I’m inhabiting a scene from the film Good Will Hunting or A Beautiful Mind, and I can’t tell if life is imitating art or art is imitating life.) I’m now one of three developers of CVXPY, an open-source library for convex optimization, and I have total creative control over my research and engineering projects. I’m not sure what I’ll do once I graduate, but for now, I’m having a lot of fun—and learning a ton—doing a bit of math, writing real software, and exploring several lines of research in parallel. If I’m very lucky, one of them might even be a breakthrough.