After Reading: So Many Books by Gabriel Zaid

I finished So Many Books (Gabriel Zaid) a couple months ago but the main points have been rattling around my head since then. The book reads like a series of essays, focusing on (in relative order) the habits of book collecting, habits of reading and comprehension, growth of publishing, and the economics of publishing. It’s a short book, makes a dry topic very interesting, and makes arguments relevant in today’s world of self-publishing, blogging, vlogging, and all else.

As I’m currently authoring and publishing apps more than authoring and publishing books, all of my favorite passages struck me as reflective of the software industry.

Many people read paragraph by paragraph, sounding out a text, like a child reading letter by letter or word by word. The next level, which many do not reach, is to grasp a book all at once, in its entirety.

[…] It’s like examining a mural from two centimeters away and scanning it ten centimeters at a time, like a short-sighted slug. This doesn’t allow for the integration of the whole, for taking in the mural at a glance.

[…] And how many people want to feel like slugs, especially once they know what it is like to express themselves intelligently in conversation? This accentuates the difference between the developed oral side and the underdeveloped written side. People who feel this way don’t read books. They never really learned how to read books.

Damn. What a good passage. I feel like this captures so much of whether you can give a good interview or a good presentation on your code. It usually doesn’t matter whether you can type the characters or know the keywords if you can’t articulate and optimize the program’s goal. Too many developers also sound out their programs variable by variable or function by function, instead of grasping or communicating the work as a whole.

To the point about people favoring the oral over the written side, I feel that many developers suffer from the opposite; they prefer to code because they are stuck sounding out keyword after keyword and have trouble articulating their larger goal in human words. This is why my teams will always focus on communication and comments in addition to the underlying functionality.

The cost of reading would be much reduced if more authors respected the readers’ time and published less.

People want to be at the center of the world’s stage and conversation, but the world’s conversations are myriad and continued and concurrent. The true art of publishing involves placing the text in the middle of a conversation, adding to it, knowing how to feed the flames.

So many developers evangelize their new favorite language or tool as the solution to all the problems and all the things. Each new idea exists not in the center but in concert with every other language and framework. No one piece or option should, or ever could, make all the sound or be used in even most of the products we will build.

With more education, more and more people want to be published. More want to be published than want to read. “I need you to hear me, more than I need to hear you.” […] Not even poets want to read poetry, unless they are required to do so in order to see their own work published.

This has been one of the farthest-reaching ideas I took away from this book. I see this play out in the community (and in my own habits) where open source developers will ask for donations toward their work but will not also donate to others. Many people will present at meetups and then forgo attending the talks of others.

This, especially, will be a point of self-improvement I take away from this book.

Books are published faster than ever. More is being published every day than you could ever read. […] Your ignorance grows faster than your knowledge.

All the more reason to focus on the larger mural being drawn on the wall. :)

Hiring Retrospective - Advancement Rate

After interviewing all Summer and Fall, we’ve found the next member of our OfficeLuv Product Team, a talented and thoughtful software engineer.

This hiring cycle, I wanted to approach recruitment as we would a product feature or epic. Part of that, of course, is having a good retrospective. The first post detailed our hiring steps and flow. Here’s the second part, an overview of advancement rates for each stage of our hiring funnel.

Timeline & Sourcing

We had this role open and the job posted for 5 1/2 months (mid-June through the end of November). We received applicants every week (not every day) throughout the cycle, but with most applying in June/July and October/November.

During that time, we syndicated the job on 7 main sources: our open-source projects/site, our company careers page, AngelList, Indeed, LinkedIn, HackerNews (the monthly Who’s Hiring? post), and a couple Chicago-based Slack groups.

For those syndication sites, the ranking from most to fewest applicants was: Indeed, AngelList, career page, HackerNews, LinkedIn, our open-source site, and finally the Chicago Slack groups.

For the final 2 1/2 months, we used recruiters to source additional, higher-quality candidates. In terms of candidate volume, the recruiters ranked somewhere between our career page and HackerNews during their active months.

Throughout the job’s opening, I cold-emailed approximately 20 candidates. I found most of these by scouring GitHub or Meetup groups.

All combined, we sourced approximately 150 candidates throughout the job’s open period.

Advancement Rates

I wrote previously about the steps in this role’s hiring process. Here were the advancement rates through those steps:

  • 150 candidates
  • 30 phone/coffee/code interviews (20% of candidates)
  • 7 on-site interviews (23% of phone interviews)
  • 1 offer made (and accepted) (14% of on-site interviews)
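Those stage-to-stage percentages fall out of a few lines of arithmetic; here’s a throwaway JavaScript sketch (the stage labels are mine, chosen for illustration):

```javascript
// Advancement rate from each hiring stage to the next, as a rounded percent.
const funnel = [
  { stage: "candidates", count: 150 },
  { stage: "phone/coffee/code interviews", count: 30 },
  { stage: "on-site interviews", count: 7 },
  { stage: "offers made (and accepted)", count: 1 },
];

const rates = funnel.slice(1).map((step, i) => ({
  stage: step.stage,
  // percent of the previous stage that advanced to this one
  percent: Math.round((step.count / funnel[i].count) * 100),
}));
// rates: 20% to phone screens, 23% to on-sites, 14% to an offer
```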

Proportionally, we advanced recruiter-referred candidates much more than general applicants. Candidates we reached out to directly also advanced much farther than others. This is the first hiring cycle in which I’ve kept such specific numbers, so I’m not yet aware of how these advancement rates compare to an average hiring flow.

Anecdotally, the rest of the hiring team felt very good about our on-site interviewees. In the past, some on-site interviews have been lackluster, which fatigues the rest of the interviewing team. This cycle, I tried to be pretty liberal in scheduling phone/coffee interviews, so I was a bit surprised to calculate such a low rate of advancing candidates to that second step. Again, rough reasons why candidates did not advance to a further step were outlined in the previous post.

Now we’re ready to compare against this for our next hiring cycle. How does this compare to your current or last hiring flow? I would love to know and share more specifics.

Hiring Retrospective - Interview Steps

After interviewing all Summer and Fall, we’ve found the next member of our OfficeLuv Product Team, a talented and thoughtful software engineer.

This hiring cycle, I wanted to approach recruitment as we would a product feature or epic. Part of that, of course, is having a good retrospective. Here’s the first part, a simple overview of the steps in our interview process. This is also for my good friend Bruce Ackerman, who is also hiring at Printavo.

Job Description

Finding good engineers starts with a good job description. I think too many hiring teams forget to think about how to entice the engineers they want to hire. To that end, I went through a couple iterations on this particular job description, with each revision focusing more on our team process and, I’m happy to say, bringing in more candidates. Here’s the final introduction that brought in a spike of quality applicants:

We’re growing here at OfficeLuv and are looking for a Full-stack Engineer to help us shape the momentum! The Full-stack Engineer will help develop, solve, and produce the technology that helps power OfficeLuv and our loyal customers. You will work with the small tech/product team to build applications in the cloud, in the browser, and on phones that will iterate rapidly and provide direct benefit to customers you’ll talk to. We’re building for the long run. You’ll be excited about the two-sided marketplace you can shape here. We’re standardizing and automating a process that’s ripe for it. You’ll be shaping the supply and grocery of offices across the country!

We run a very collaborative and growth-mindset product team. We focus on automating as much as possible (continuous integration and deployment for all systems) so we can all sleep soundly at night. We leave our laptops in the office at the end of the day. If you want a taste of our management style, you can read about it. We contribute to the open source community and communicate within our company continuously.

From there, the job description listed our technical languages/stack, our agile process, and a 2-5 year experience requirement (among a few other nice-to-haves).

Originally, the introduction was only the first paragraph. After adding the second, I noticed a spike in more experienced candidates with a philosophy more closely aligned with our team’s. I wasn’t eager to share my management README initially, but it certainly paid off.

We syndicated this job description on our Careers page, on AngelList, on Indeed, on LinkedIn, on HackerNews, and in Chicago Slack groups.

Application Screening

If a candidate liked our job description (and whatever else they chose to find out about us), they would submit a resume and cover letter. If I declined a candidate at this stage, it was largely due to a lack of total experience (e.g. just graduated from a bootcamp or grad school), a lack of experience in relevant languages, or a lack of access (e.g. located outside the Midwest or outside the U.S.).

I would occasionally decline a candidate at this stage whose shared (open-source) work was poor. Examples included: profanity in code commits, dramatically buggy code in portfolio work, or very sloppy recent work featured prominently.

Phone/Coffee Screen & Code Samples

I would then book a conversational screening session with the person. This was most often a phone interview, but would sometimes take place over coffee if they were a referral.

I approached this conversation as a two-sided, high-density, cards-on-the-table excitement fest. I would tell them about our company’s history, our product, our team’s process, and would try to excite them about building our future. I would ask them about their ideal product development practices and examples of how they have solved complex problems in their past work. It was the job of both sides to excite the other about working together.

During the conversational screen, I would ask for two code samples:

  • An example of something (code, design, data-flow, architecture, etc.) that they built and were proud to build again. They understood the problem and built an elegant or performant or maintainable solution.
  • An example of the opposite. Something they have built or designed that, looking back on it now, they knew they should have done better.

I also asked them to write up brief explanations about why they had chosen the two samples. Then they would email the samples and comments to me within a few days.

I would only advance candidates that seemed excited about us and that I was excited to bring into the office. If they sent code samples that were of poor quality (and did not identify it as such), or their comments did not convey a true understanding of the samples, I would also decline at this point.

On-Site Interview Screen

I would then schedule an on-site interview with the candidate. These interviews normally lasted 2-4 hours (depending on our team’s availability). They broke down into roughly four sections, handled by me and three other company members. My technical portion would take roughly twice as long as the others and would usually start the session.

I would set up an account on our staging environment for the candidate to play around in our product (sometimes we simply did this during the interview). I also asked them to prepare a short (5-10 minute) technical topic that they could teach me (“anything interesting or potentially relevant, I’m mostly looking for how you think about things and how you explain things”).

In the technical portion, we would talk about their code samples. I would have them walk me through their comments in more detail. If they prepared a technical teaching topic, they would teach me. I would have them point out where they expect the bodies to be buried in our app (which they had been exploring). Then, I would open up the file or script responsible for that part of the app and have them critique it with me. In all of these topics, I was looking for communication skill and deep technical understanding.

In hiring for past jobs, I would have given a live-coding challenge (a favorite is re-implementing Array.prototype.push in their language of choice), but I didn’t feel that was necessary for this particular role. Instead, I would dig deeply into their ability to understand and explain a more complex portion of our code. If the conversation flowed to it, I would posit some data-flow problems and ask how they would address them.
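For the curious, here’s a minimal sketch of that favorite challenge in JavaScript (the function name and the standalone-function shape are my own choices for illustration, not the exact exercise as given):

```javascript
// A hand-rolled push: append each argument to the array-like `this`,
// then return the new length, mirroring Array.prototype.push's contract.
function myPush(...items) {
  let len = this.length >>> 0; // coerce a missing/odd length to a uint32
  for (const item of items) {
    this[len] = item;
    len += 1;
  }
  this.length = len; // keep `length` in sync for plain array-like objects too
  return len;
}

// Usage: call it with an explicit receiver rather than patching the prototype.
const nums = [1, 2];
const newLength = myPush.call(nums, 3, 4);
// nums is now [1, 2, 3, 4] and newLength is 4
```

The interesting conversation usually happens around the edge cases: what the return value should be, and why the native method also works on generic array-like objects.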

Afterward, other company members (always the same members, to maintain consistent measures) would speak with the candidate about how business metrics or criteria influenced their product engineering, especially through rapid iteration. Someone would press the person on their ability to incorporate stakeholder feedback into the product iteration cycle. Someone would ask them to provide examples of how they decompose problems like our back-end systems. How have they grown their team members and improved their team’s delivery in the past?

Discussion & Offer

If our other interviewers were excited about the candidate and I thought they were technically savvy, we started reaching out about an offer. We didn’t really ask much more than that.


These are roughly the same procedural steps I’ve used in hiring the members of my last couple teams. If the role was more junior, I would include more live-coding exercises during the on-site interview. If the role was more senior, I would spend more time explaining our architecture in the on-site interview, asking them to predict where the faults would lie.

Over the time we interviewed candidates for this role, I moved to sending staging platform access further in advance. That led to more productive discussions during the interview.

When I sent out the on-site interview invites to the other members on our team, I always included the candidate’s resume, a short bio of their past roles, current role, and future role desires, as well as suggestions for questions I thought they should ask the candidate. This allowed our interviewers to prepare quickly and effectively.

In the next part of this retrospective, I’ll go over some of our acceptance rates for each of these stages.

Weeknote of September 26, 2018 - Speaking Elsewhere

In lieu of writing recently, I spoke quite a bit elsewhere.

You can listen to my podcast interview on I Code Hire Me where I talked about teaching yourself to code while maintaining your current job. I also talked a bunch about good hiring ideas. Obviously, hiring has been on my mind lately.

I also gave the end-of-conference talk at the inaugural Chicago JSCamp (a conference for JavaScript developers). I talked about how to deliver app upgrades to your customers seamlessly and met a bunch of interesting people from around the country. We’re all facing similar problems, regardless of location - software knows no geography - and I hope to be submitting more conference talks for events around the country based on feedback from this one.

Whenever you work alone for a while, it’s so easy to forget that you are still moving forward. There’s no relative measure for what you build. The biggest idea sticking with me after the conference is that my learnings and my experiences are still new and valuable to the rest of the community - even if they now seem obvious to me.

Go out there and get some perspective.

Speaking at JSCamp Chicago 2018

Weeknote of September 05, 2018

In the past, I’ve talked with my good friend Vaibhav Krishna about how important it is to recognize your mental reserves and drives as an exhaustible resource. Writing and dissecting software, you will often feel the tug of an idea. Just as often, you will be set upon a problem and find no interest growing as you dig your fingers into the solution.

Usually, it’s your job to find the solution (and probably define the problem). Hopefully you’ll be getting paid for those. You’ll slog through the keystrokes and requirements uphill, only to find yourself doing it again next month.

That’s why you’ll find the best professionals following their nose whenever possible. To survive any work for long, you find ways to explore - downhill. You recognize when not to push your mind. You recognize when to follow the scent of a challenge or new idea. If your nose is no good, you’ll find out quickly.

This weekend I was following my nose. I had been reading up on some algorithms (ARIMA, GARCH, Levenshtein distance, among others) and stewing over OfficeLuv revenue forecasting issues all last week. I thought there might be a good way to quantify the difference between one e-commerce cart and another (the “distance” it would take to edit one cart into another). This would let me calculate a moving average of a customer’s e-commerce cart history. So I took the rainy Labor Day afternoon to develop the two algorithms, riding downhill the whole time. I’ll be writing them up this week and hopefully putting them into use next week.
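As a sketch of the cart-distance idea (the cart representation, function name, and item IDs here are illustrative assumptions, not the code I actually wrote), a classic Levenshtein-style dynamic program counts the insertions, deletions, and substitutions needed to edit one cart into another:

```javascript
// Edit distance between two carts, where each cart is an array of item
// identifiers. Returns the minimum number of insertions, deletions, and
// substitutions required to turn cartA into cartB.
function cartDistance(cartA, cartB) {
  const rows = cartA.length + 1;
  const cols = cartB.length + 1;
  // dp[i][j] = distance between the first i items of A and the first j of B
  const dp = Array.from({ length: rows }, () => new Array(cols).fill(0));
  for (let i = 0; i < rows; i++) dp[i][0] = i; // delete everything from A
  for (let j = 0; j < cols; j++) dp[0][j] = j; // insert everything from B
  for (let i = 1; i < rows; i++) {
    for (let j = 1; j < cols; j++) {
      const substitutionCost = cartA[i - 1] === cartB[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // delete an item from cartA
        dp[i][j - 1] + 1, // insert an item into cartA
        dp[i - 1][j - 1] + substitutionCost // swap one item for another
      );
    }
  }
  return dp[rows - 1][cols - 1];
}
```

Averaging this distance across a customer’s consecutive carts then gives a single drift number per customer over time.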

Cleaning Up For a New Hire

I’m hiring a new full-stack engineer for the OfficeLuv team, and there’s nothing quite like a new hire to kick your team into shape. I’ve written about how new hires are a valuable resource in the past, and each time I focus on drawing more and more value. This current cycle I’ve already noticed a change in my behavior in these past few weeks: I’m cleaning up in anticipation of guests.

Whenever someone agrees to join your team, they are also entering a home. You and the rest of the team have been living and building and rebuilding there, for eight hours daily, in the dust and the muck. You’ve been able to build something that works (you’re hiring!), but there are always dirty areas that you would never write again. You know about these areas - you probably list them over a drink with other engineers.

I used to embrace this hacky code as historical education for the new hires. I would walk them through it and we would both agree that it sucked and that a better path was clear. Over the years, though, I’ve realized that this is just a waste of time. If we know what’s wrong and how to fix it, we should take the action as soon as possible. We should use the new perspective of the new hire to find new problems and think of new solutions.

So I’m taking some time this week to clean up the obvious, dirty areas of the codebase. I feel like I’m vacuuming the house before guests visit: rewriting the queueing logic for our background machine learning calculations. It’s making me even more excited for a new team member to enjoy the open air.

After Reading "Lab Girl"

I was generously gifted Lab Girl by Hope Jahren through Jane Kim in her last week at OfficeLuv. I finished it today, after about a month.

I loved the rhythm of this book: chapters alternated between her memories and thoughtful paths into botanical research. I love connections between a life experience and scientific themes or analogies. I loved the author’s tone and her wit; I wanted to meet her more with each section. I loved the way it would cause me to drift my eyes off the page in thought after reading a scientific explanation and would glue my eyes to the page through a teaching experience.

The author and her partner Bill continuously reminded me of my uncle Tom and his teaching partner (basically my aunt) Karen. They were my scientific teachers during my childhood and high school years, taking me (and my friends) on camping trips, mud-treks, and surgical tours. They had a similarly complex and intimate career together and taught me more than I probably realize.

I bookmarked more passages toward the beginning of the book, as I think the ideas meshed into a better arc that tapered off as the story passed into later years.

Bookmarked Passages

Working in the hospital teaches you that there are only two kinds of people in the world: the sick and the not sick. If you are not sick, shut up and help. Twenty-five years later, I still cannot reject this as an inaccurate worldview.

I also worked in a hospital and in ambulances. I also cannot disagree with this feeling.

Full-blown mania lets you see the other side of death. Its onset is profoundly visceral and unexpected, no matter how many times you’ve been through it. It is your body that first senses the urgency of a new world about to bloom. […] Nothing, nothing can be loud enough or bright enough or move fast enough. […] Your raised arms are the fleshy petals of a magnificent lily bursting into flower. It deeply dawns on you that this new world about to bloom is you.

I read the chapters describing her mania twice over because it felt so good.

It is also not uncommon for scientists to work out their personal issues under the guise of making an evaluation, and I was receiving feedback along the lines of “this reviewer is dismayed to find that the investigator’s apparent capabilities were deemed sufficient to merit a graduate degree from the very same institution that produced his own credentials,” and other useless venom.

The scientific community is not any more immune to stupidity, bigotry, and prejudice.

Weeknote of 2018-08-21, Making and Breaking Patterns

On repeat: Joe Goddard - Electric Lines

I was dramatically sick Tuesday and Wednesday of last week; I’m guessing food poisoning. The combination of not eating, not exercising, barely drinking, and barely standing made me lose over 10 pounds in about a day. The following day, the dehydration caused me to cramp up so badly I nearly went to the urgent care. There are some patterns you just shouldn’t break.

Collect more than one person in a place and they will start patterning themselves off each other. You can see it in Instagram on repeat, on repeat. All these influencers seem to be “influencing” in the same direction.

Intermittently breaking pattern and secluding yourself from an environment of repeated interaction statistically leads to more thoughtful and novel responses. It improves the group’s collective intelligence, too.

In recent weeks, I’ve been spending a fair amount of time condensing a service-oriented application architecture down to a queue-based implementation within a single project. I’m notoriously a proponent of pervasive patterns in the code I write. I will often go back and rewrite groups of functions to match the same signature or reorganize class hierarchies to fit a couple new members. The real benefits are realized and strengthened now as I go through years-old code, transposing high-level tasks from one language to another. Each revision chips away a few more bits from the edges to reveal the best pattern beneath.

Weeknote of 2018-08-05, Second-order Effects

On repeat on these hot nights: ultralight beam

I’m currently spending time each week interviewing people for our open software engineer position at OfficeLuv. Most days I’m just searching for candidates, and whenever I do find someone I want to interview (I had 5 interviews on Tuesday), I need to make the questions count.

One approach I’ve found to be a quick and reliable way to judge capability is to ask candidates about the second-order effects of their work. Most of the young developers I meet consider only the immediate effects of their solution or the immediate problems they debug. The best ones first extrapolate their solution before implementing it. I’m not condoning analysis-paralysis or premature optimization, just the ability to consider a level above or below the current situation.

  • How does changing this interface affect user behavior? How will that cascade into stressing pathways within the application queues?
  • How does optimizing this function for speed over consistency guide user behavior?

Often, I’ll ask a technical question later in the interview process. It’s not entirely important that they can readily rattle off the underlying algorithm; I’ll happily go over the technical implementation with them. The important part is the second half of the question, where we talk about which pitfalls, benefits, and behaviors arise from that implementation, and why.

Because of this, I’ve had second-order thinking on my mind all week. Here are a few good pieces and thoughts I’ve found just this week that fit that theme.

This Mind-Controlled Robotic Limb Lets You Multitask With Three Arms

It turns out that even when subjects lost the third arm, their rapid task-switching skills remained improved. How much more of our brain is constrained by the physical limitations of the rest of our body?

Disturbances #16: The Price of Perfection

This person writes a newsletter every week about dust and I’ve subscribed for a while. It’s usually pretty fantastic. This week, they wrote about the accumulation of dust as a result of modern technological manufacturing. As it turns out, accumulations of dust can be explosive.

Should computer science researchers disclose negative side-effects?

This week saw the publication of an idea: software engineers should disclose the possible bad-actor uses and second-order effects of the software they research and/or develop. I completely agree with an initiative like this. I was surprised to see so many engineers trying to argue their inability to even attempt such thinking.

Cryptocurrency mining operators are clustering around low-priced renewable energy sources

Energy-guzzling software is clumping together in geographic space where energy costs less.

I change the placement and structure of my desk at work pretty often. Here is my current work desk. Not pictured: this is at a standing bar table.

Desk on 2018-08-05

Weeknote of 2018-07-30

On repeat: youtube.com/watch?v=cgULtrAfISw

I have been in the practice of writing down my daily tasks, ideas, steps of building, conversation points, random links, handy tidbits, etc. for a few years now. These daily notes have been invaluable in reconciling a long weekend with the tasks of a Monday morning. While working at startups, they have also been the only source of long-forgotten, one-off code snippets/fixes and of pattern recognition in customer feedback. Yesterday, I was reading the lovely Lab Girl (through the generosity of Jane Kim) and was reminded of these notes’ similarity to the lab notes that we were required to keep in every science course. Reading more into that led me to the BERG concept of weeknotes. I like it.

I have loved the podcast Reply All for a few years now and so jumped at a recent episode that intersected with our startup’s operations. I led a discussion with the company about Amazon’s fake review problem and extrapolated some of the history not covered in the episode. I tried to paint a picture of the history of online marketplaces.

  • We purchased things from each other on the first forums (heavily based on reputation).
  • We moved to posting our goods on Craigslist, where you could reach a broader audience, but reputation was non-existent.
  • We moved to buying on eBay, where seller reputation was prominent, buyers competed for items, and payment was easy.
  • We moved to buying on Amazon, where product reputation is prominent and multiple sellers compete for you.
  • What’s next? This is a conversation point I tried to push, but people replied that Amazon is probably the peak of this mountain. I hardly agree.

Marybeth had been part of the 48 Hour Film Project a couple weeks ago, and the premiere night was Sunday. We watched something like ten 6-7 minute films, all of different genres and created within 2 days’ time. The variety was wonderful. One film stood out a bit. It was created by and starred the oldest entrants, was shot on a cell-phone camera, had slipped-up lines and audio, and had no digital post-production. It was obvious that the creators also had way more fun than any other group. I’m going to remember that part.

Here’s a photo from the Sylvan Esso concert we just barely attended last week. They were much better live than on their albums, and I loved their albums.

Sylvan Esso

I had a serendipitous lunch with my old friend Josh Martin. Honestly, whenever I spend time with him, I feel ten times as excited to work on new ideas. As always, he encouraged me to share more of my ideas with other people. So here I am, writing a newsletter, again.

I just ended a text message to my girlfriend with a semicolon and I think I should be done for the day.


Getting things straightened out with this micropub stuff.

A Recommendation of Nevzat Cubukcu


We hired Nevzat as our third engineer and Android device expert. How can you not be impressed by a candidate who comes to the interview with projects he built to teach himself the Android SDK, demonstrating complex matrix manipulations of on-device images? He, of course, continued to teach himself while on our team. He also pushed us to keep customer delight at the front of our minds while tirelessly improving our product.

Despite knowing no JavaScript on his first day at OfficeLuv, Nevzat was contributing to our single-page applications within a week. After we finished the release of our Java Android app, he switched completely to client-side JavaScript development without skipping a beat. He would eventually spearhead the conversion of our Android app into a React Native system. I have not yet worked with anyone so agile in their adoption of new languages and technical paradigms as Nevzat.

While remaining flexible in technical development, Nevzat was also a vocal contributor to our product road-maps, striving to always find customer delight. He would go out on customer interviews, incorporate their feedback, and work harder than anyone to build things better than the user would expect. Nevzat always found a way to improve existing features with each new release, from predictive searches to battery-saving background job optimizations.

When you are on a team with him, Nevzat’s excitement is infectious and rallying. I often found myself removing my headphones just to walk through a problem with him, because I knew how quickly we would find the right path together. His ease in assimilating new programming techniques means he will never find an impasse in building a technical product, and his mindfulness of the end user means the product will surely make them smile.

A Recommendation of Jack Kent


Working with Jack is extremely rewarding, because I know he will always guide us to the best interface for the customer. His research into the mind and environment of the user is unparalleled and he has a fantastic ability to lead a group through fruitful design sprints. I learned a great deal from his practice of honing a user interface to intuitive, evolving components.

Jack’s knowledge of the customer is beyond any that I’ve seen in other designers. He led research interviews that we referenced directly in our end results. His thinking process always starts and ends with the user’s own mindset, which ensures solid ground for the final products. It is always easy to talk with Jack about why he chose a design pattern, simply because he usually references direct experience with our research.

Working engineered functionality into a design can be difficult at times, but Jack is an invaluable resource here. He has enough knowledge and skill to not only predict and avoid common pitfalls but also contribute directly to the development of the features he designs. Through the cycle of usability feedback, he can build, test, and update adjustments to the code reliably, which greatly advances the whole team.

I will always reference Jack’s guidance when conducting user interviews or operating in a design sprint: I have a quote from him taped to the wall beside me. His considerations and conversations have changed my own thinking of the end-user. I would want him to lead any designs that I build or use in the future.

A Recommendation of Lauren Polkow

When I sat down with Lauren for the first time, I knew immediately that it was the chance of a lifetime to work with her. Such a spark of energy, intelligence, and enthusiasm for the customer walked into the room that it was impossible to resist the opportunity to learn. She is one of the best guides I have found, and the best product leader that I have known.

Lauren leads a team better than anyone else I have seen. She is constantly aware of each member’s strengths, past experiences, career goals, and weekend plans. Able to see potential conflict from miles away, she has the skill to fit others into effective groups that produce exceptional products. I constantly reflect on her words and actions of encouragement or guidance and find new ways to improve myself. Our team was able to deliver because Lauren steered us through conflict and growth to internal understanding.

Backed by thorough customer research and data, Lauren’s product roadmaps and vision are amazingly powerful. Her strong compassion for the customer and desire to know their entire mind ensures that every feature is grounded in utility and delight. She is always pulling informative data out of products, datastores, and customers themselves to validate and guide the company.

Whatever Lauren envisions will form on solid ground. Whatever she works on will be better for it. Whoever she leads will be my envy. Her skill in building a product, inside and out, will continue to be an aspiration of mine and a beacon wherever she goes.

Handy Kue Maintenance CLI Scripts

While building systems at my last few companies, I’ve found it enormously useful to have a robust queueing platform. I’ve tried Amazon’s SQS, NATS, and a couple of others, but Automattic’s Kue has been the best combination of performance and introspection.

Once you’re really using any queue for large batching tasks, you will eventually run into stuck jobs and jobs that need to be evicted early. Handling these is queue maintenance: you should have code that automatically sweeps the queue clean based on your own rules for retries, expiration, and so on.
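That automatic sweep can stay small. Here’s a minimal sketch of the shape such a sweeper might take; `fetchJobs` and `removeJob` are hypothetical stand-ins for Kue’s `Job.rangeByState` and `job.remove`, injected so the sweeping logic itself stays testable:

```javascript
// Remove one batch of jobs in a given state.
// fetchJobs(state, max) -> Promise<jobs> and removeJob(job) -> Promise
// are injected, so any queue backend (e.g. Kue) can be plugged in.
function sweepQueue(fetchJobs, removeJob, { state = 'complete', max = 1000 } = {}) {
    return fetchJobs(state, max)
        .then(jobs => Promise.all(jobs.map(removeJob)).then(() => jobs.length));
}

// Run the sweep on an interval, like a lightweight in-process cron.
function startSweeping(fetchJobs, removeJob, intervalMs) {
    return setInterval(() => {
        sweepQueue(fetchJobs, removeJob)
            .then(n => console.log('swept', n, 'jobs'))
            .catch(err => console.error('sweep failed', err));
    }, intervalMs);
}
```

In production you would wire the stand-ins to the real Kue calls and run `startSweeping` from a worker process.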

Alas, you will probably need to clean the queue manually at some point. That is usually a stressful moment when you don’t want to hand-type some half-thought JavaScript snippet into a Node.js console: something like 30,000 bad jobs are backing up the workers and an investor is testing the product. For these situations, I have written a couple of command-line (CLI) scripts to evict or retry Kue jobs. I thought my CLI scripts could help others using the Kue library.

Kue CLI Scripts

I typically put stuff like this in a /bin directory at the root of an application. With that, you can create an executable file for job eviction from the queue:

$ mkdir bin && touch bin/remove
$ chmod +x bin/remove

For the parsing of command line arguments, we will need something like commander:

$ npm install commander --save

Inside ./bin/remove, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Remove/delete jobs from kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to remove [complete]', 'complete')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to remove (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
let count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { throw err; }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job); // skip jobs that do not match the type filter
            }
            job.remove((err) => {
                if (err) { return rej(err); }
                count += 1;
                console.log('removed', job.id, job.type);
                res(job);
            });
        });
    })).then(() => {
        console.log('total removed:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});
Similarly, you can create an executable file for job re-queueing:

$ touch bin/requeue
$ chmod +x bin/requeue

Inside ./bin/requeue, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Requeue jobs into kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to requeue [failed]', 'failed')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to requeue (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
let count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { throw err; }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job); // skip jobs that do not match the type filter
            }
            job.state('inactive').save((err) => { // push the job back onto the queue
                if (err) { return rej(err); }
                count += 1;
                console.log('requeued', job.id, job.type);
                res(job);
            });
        });
    })).then(() => {
        console.log('total requeued:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});

Example Output

Here’s the help text produced by remove (similar to that from requeue):

$ ./bin/remove --help
#   Usage: remove [options]
#   Remove/delete jobs from kue queue
#   Options:
#     -V, --version          output the version number
#     -s, --state <state>    specify the state of jobs to remove [complete]
#     -n, --number <number>  specify the max number of jobs [1000]
#     -t, --type <type>      specify the type of jobs to remove (RegExp)
#     -h, --help             output usage information

And an example execution to remove one job from the failed state of type matching /foo/:

$ ./bin/remove -n 1 -s failed -t foo
# removed 12876999 foobaz
# total removed: 1

We use this in our queue system to remove completed jobs on a cron schedule. It has also been handy multiple times when a buggy worker has failed a batch of good jobs and we need to re-queue them all. Hopefully, it’s useful to others.

After Reading "Life in Code"

After reading Ellen Ullman’s excerpts over the last few months, I picked up her Life in Code. I finished the collection of essays yesterday.

Last night, I woke from a dream. I had been programming within a group, each of us helping to shape the code - the program was physical, ethereal, and whipped like mesh within the circle we formed. Multiple streams of data flowed up through the floor, repeating and undulating into the program we were forming. The data, the events, and our hands moved into and out of a machine learning system, so that we pushed and pulled at the whole shape to fit what we wanted to show.

Some of us stood and some sat but together we folded into a loop around our growing code. Then one of us stood to leave and we all paused to look, holding our hands to the warmth and low light of the program.

I felt it move and meld beneath our fingers as we watched the one move away and into the dark.

Timestamp - Chicago Startup Tech Dinner

I am invited to a dinner with other team leaders from other tech startup companies in the city. We meet at a comfortable restaurant and order cocktails. Out of our nine members, two are women. Eight are white. Two business founders, three engineers, two product managers, two marketers.

I spend the first third of the time swapping engineering stacks with the person next to me. He is “an expert on Facebook tech,” and an enthusiastic supporter of Flow and React Native. He doesn’t build in strict React Native, though: he builds apps based on a third party solution that manages React Native for you. His buddy is the founder of that third party. The app he is building has missed its expected launch date and his boss, across the table, is unhappy about that.

I spend the second third talking with my other neighbors about ICOs and blockchain technology. One is exploring the idea of making his own blockchain-as-a-company, though he doesn’t know any details of how it would function. Another is a serial founder of companies and believes that ICOs hold potential for “flexible money, with no downside, since you always get the money,” but also doesn’t know how blockchains function. Maybe his social connection app can have an ICO to raise money without involving venture capitalists, he says.

I spend the last third talking with other people about maintaining old motorcycles and whether personality tests should be used before interviews to determine the culture fit of candidates. They say that some companies are requiring the tests of all candidates before interviewing anyone. If the person doesn’t fit with their team’s past results, they do not proceed. Those companies also ask executive candidates to take the tests, but the executives refuse.

On Adding React Native to Existing iOS and Android Apps

I write in defense of the beliefs I fear are least defensible. Everything else feels like homework.
- Sarah Manguso, 300 Arguments

No homework for me today. I woke up and integrated new React Native code into an existing Swift 3 iOS app.

I spent 5 hours getting the bare dependencies to compile React components into the existing app codebase, then 3 hours building an interface in React that would have taken a day in native iOS. I was also able to copy and paste our existing JavaScript business logic libraries with zero problem. It felt as if I spent all morning painfully biking up a mountain, after which I’m now coasting downhill.

Tomorrow is biking up the mountain to integrate React Native into our Android app. Luckily I have Nevzat to help with that.

I will write up all of this once a full release cycle is complete.

Update: This made its way into a full talk on bridging native apps with React Native.

Migrating a MongoDB App Datastore to PostgreSQL

A couple of weeks ago, Narro had a nice uptick in usage from Pro users that resulted in a large increase in data stored by the application. That is always pleasant, but this time I had a corresponding uptick in the price of data storage. Time for a change!


Years ago, when I first built Narro as a prototype, MongoDB was the New Thing in web development. I had a whole team of colleagues who were very enthusiastic about its uses, but I was a bit skeptical. In addition to helping implement Mongoid Ruby code at work, I thought I would get down into some nitty-gritty details of MongoDB under a Node.js system. Narro doesn’t have a heavily relational data model, either, so it seemed like a good idea at the time.

I did learn a great deal. At the day job, it was confirmed that MongoDB was a horrible choice for a heavily relational monolithic application; millions of dollars of work were eventually scrapped in favor of an open-source implementation. In Narro’s codebase, I embraced the lack of relational structure and explored features like the TTL index, optimized map-reduce queries, and aggregation pipelines. Some things were impressively flexible, some things were not strong enough, but I stuck with the MongoDB storage because I had no need to change.


Once the bill for your data storage increases by 1000% in a month, you think about changing things. I have been enjoying the performance, extensibility, and support of PostgreSQL for the past couple of years. I calculated the price of running Narro on a PostgreSQL datastore and ended up with between 5% and 30% of the cost of the MongoDB storage! The only problem was in getting there.

I wanted to have zero downtime and zero impact on consumers. Narro uses a microservice architecture, which posed its own problems and benefits. I didn’t have to deal with an immense amount of data, but it was millions of records. With that in mind, here was my plan:

  1. Create the schema migrations for the new PostgreSQL datastore.
  2. Create PostgreSQL-based models that expose the same API methods as the existing MongoDB-based models.
  3. Migrate a backup of the existing MongoDB data to PostgreSQL.
  4. Start concurrent asynchronous writes to the PostgreSQL database so that MongoDB and PostgreSQL contain the same data.
  5. Make all read-only microservices and read-only operations happen on the PostgreSQL datastore.
  6. Stop writing to the MongoDB datastore. Use only the new PostgreSQL-based models.
  7. Done! Remove the MongoDB datastore.


In execution, the plan worked well. Creating a superset of the model API methods used throughout the microservices proved tedious but greatly smoothed the transition. The whole process lasted about a week.

Narro was previously using Mongoose in the Node.js services and mgo in the Go services. The Go services were simple enough that I migrated even the model APIs to sqlx. In the Node.js services, I used knex for querying the PostgreSQL datastore, and I created new model code that exposed Mongoose-like methods (findById, findOne, etc.) that were used throughout the code but that now mapped to SQL queries. This meant that, wherever a model was queried, I could just replace the require statement with the new model path.
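That model facade can be sketched without knex. This is not Narro’s actual code, just an illustration of how a Mongoose-style method maps onto a parameterized SQL statement; the table and field names are made up:

```javascript
// Sketch of a Mongoose-like model facade over SQL. The real version
// used knex; here a tiny query formatter stands in so the mapping
// from model method to SQL statement is visible.
class PgModel {
    constructor(table) {
        this.table = table;
    }
    // Mongoose's findById(id) becomes a primary-key SELECT.
    findById(id) {
        return {
            text: `select * from ${this.table} where id = $1 limit 1`,
            values: [id],
        };
    }
    // Mongoose's findOne(query) becomes a WHERE clause over the query's keys.
    findOne(query) {
        const keys = Object.keys(query);
        const where = keys.map((key, i) => `${key} = $${i + 1}`).join(' and ');
        return {
            text: `select * from ${this.table} where ${where} limit 1`,
            values: keys.map((key) => query[key]),
        };
    }
}
```

Because the call sites only ever see `findById` and `findOne`, swapping the datastore really is just a change of `require` path.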

For step #4, I hooked into post-save hooks for the existing MongoDB models and then persisted any change with the new PostgreSQL models. This way, there was no degradation or dependency between the coexisting models.
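The dual-write pattern looks roughly like this. The real code hung off Mongoose’s `post('save')` hooks; this stripped-down stand-in model only demonstrates the shape, in particular that a failed mirror write never fails the primary save:

```javascript
// Minimal stand-in for a Mongoose-like model with post-save hooks,
// used here to sketch step #4's dual-write. Not Narro's actual code.
class MongoishModel {
    constructor() {
        this.postSaveHooks = [];
    }
    post(event, fn) {
        if (event === 'save') { this.postSaveHooks.push(fn); }
    }
    save(doc) {
        // ...the primary (MongoDB) write would happen here...
        // Mirror asynchronously: an error in the PostgreSQL write is
        // logged but never propagated back to the primary save.
        this.postSaveHooks.forEach((fn) => {
            Promise.resolve()
                .then(() => fn(doc))
                .catch((err) => console.error('pg mirror failed', err));
        });
        return doc;
    }
}
```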


The plan, of course, didn’t account for everything. I used PostgreSQL’s jsonb column type for several fields where I was dumping data in MongoDB, but even that needs to be somewhat structured. I would watch out for that and have canonical mappings for every value in the migration.

After the initial replication of data from MongoDB to PostgreSQL, I ran some common queries to test the performance. I was surprised by how much slower PostgreSQL performed on queries operating in the jsonb columns. Luckily, there is some nice indexing capability specific to jsonb in PostgreSQL. After applying some simple indices, PostgreSQL was performing much better than the existing, indexed MongoDB datastore!
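For reference, the kind of jsonb indexing that turned those queries around looks like this; the table and column names are hypothetical:

```sql
-- GIN index: speeds up containment (@>) and key-existence (?) queries
-- across the whole jsonb column.
CREATE INDEX idx_articles_meta ON articles USING gin (meta);

-- Expression index: speeds up equality lookups on one specific key.
CREATE INDEX idx_articles_meta_source ON articles ((meta->>'source'));
```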

Another consideration is that MongoDB’s ObjectID type has strange behavior when cast to strings in certain contexts, like moving MongoDB objects to PostgreSQL. It’s a good idea to centralize one function to cast all your model fields, ready for PostgreSQL persistence. This also speaks to another issue in migrating MongoDB data to PostgreSQL: MongoDB data is almost always unstructured in nooks and crannies. It’s a great benefit in the right context, but I would sanitize and normalize every value in the mapping for step #3.
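One way to centralize that casting is a single pair of helpers that every migrated document passes through. This is an illustrative sketch, not Narro’s actual mapping code:

```javascript
// Cast one value into something safe to hand to PostgreSQL.
function toPgValue(value) {
    if (value === null || value === undefined) { return null; }
    // MongoDB ObjectID instances expose toHexString(); plain values do not,
    // so duck-typing on it avoids importing the mongodb driver here.
    if (typeof value.toHexString === 'function') { return value.toHexString(); }
    if (value instanceof Date) { return value.toISOString(); }
    return value;
}

// Build a flat row for the listed fields, casting each value once.
function toPgRow(doc, fields) {
    return fields.reduce((row, field) => {
        row[field] = toPgValue(doc[field]);
        return row;
    }, {});
}
```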

I used the uuid-ossp PostgreSQL extension to mirror MongoDB’s uuid creation for models. Just make sure to enable it (CREATE EXTENSION IF NOT EXISTS...) and set it as the default for your column(s).
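The setup is just two statements; the table name here is illustrative:

```sql
-- Enable the extension once per database...
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- ...then default new primary keys to generated UUIDs.
ALTER TABLE articles
    ALTER COLUMN id SET DEFAULT uuid_generate_v4();
```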


Narro hasn’t actually reached step #7. I found that there are still some things PostgreSQL can’t do.

When I built out Narro’s public API, I built in rate-limiting backed by the MongoDB datastore. The leaky-bucket model was built around MongoDB’s TTL index feature, which kept the business logic clean and performant; it was eventually extracted into a library, mongo-throttle. I couldn’t find an equivalent feature in PostgreSQL that runs automatically (most people recommend ‘expire-on-insert’ triggers). For now, Narro’s rate-limiting still operates on the MongoDB storage.

The PostgreSQL datastore is more performant than the same MongoDB datastore. Map-reduce queries have been replaced by group-by and joins. Aggregation pipelines have been replaced by window functions and sub-selects. The old is new again.
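As a sketch of that replacement, a typical pipeline-style rollup becomes one statement with GROUP BY and a window function; the table and column names are hypothetical:

```sql
-- Count plays per article and rank each article within its feed,
-- work that previously took a MongoDB aggregation pipeline.
SELECT feed_id,
       article_id,
       count(*) AS plays,
       rank() OVER (PARTITION BY feed_id ORDER BY count(*) DESC) AS feed_rank
FROM plays
GROUP BY feed_id, article_id;
```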

“Wait” has almost always meant “Never.”

I keep a running list of my own feature requests for Narro, aside from those of members. One of the first things I wrote down a year or more ago was to migrate the storage to PostgreSQL. It was always in my intentions, but rarely in my heart to make the effort and devote the hours. I’m now grateful for something to push a change.