I just ended a text message to my girlfriend with a semicolon and I think I should be done for the day.


Getting things straightened out with this micropub stuff.

A Recommendation of Nevzat Cubukcu

We hired Nevzat as our third engineer and Android device expert. How can you not be impressed when a candidate comes to the interview with projects he built to teach himself the Android SDK, demonstrating complex matrix manipulations of on-device images? He, of course, continued to teach himself while on our team. He also pushed us to keep customer delight at the front of our minds while tirelessly improving our product.

Though he knew no JavaScript on his first day at OfficeLuv, Nevzat was contributing to our single-page applications within a week. After we finished the release of our Java Android app, he switched completely to client-side JavaScript development without skipping a beat. He would eventually spearhead the conversion of our Android app into a React Native system. I have not yet worked with anyone so agile in their adoption of new languages and technical paradigms as Nevzat.

While remaining flexible in technical development, Nevzat was also a vocal contributor to our product road-maps, striving to always find customer delight. He would go out on customer interviews, incorporate their feedback, and work harder than anyone to build things better than the user would expect. Nevzat always found a way to improve existing features with each new release, from predictive searches to battery-saving background job optimizations.

When you are on a team with him, Nevzat’s excitement is infectious and rallying. I often found myself removing my headphones just to walk through a problem with him, because I knew how quickly we would find the right path together. His ease in assimilating new programming techniques means he will never find an impasse in building a technical product, and his mindfulness of the end user means the product will surely make them smile.

A Recommendation of Jack Kent

Working with Jack is extremely rewarding, because I know he will always guide us to the best interface for the customer. His research into the mind and environment of the user is unparalleled and he has a fantastic ability to lead a group through fruitful design sprints. I learned a great deal from his practice of honing a user interface to intuitive, evolving components.

Jack’s knowledge of the customer is beyond any that I’ve seen in other designers. He led research interviews that we referenced directly in our end results. His thinking process always starts and ends with the user’s own mindset, which ensures solid ground for the final products. It is always easy to talk with Jack about why he chose a design pattern, simply because he usually references direct experience with our research.

Working an engineered functionality into a design can be difficult at times, but Jack is an invaluable resource here. He has enough knowledge and skill to not only predict and avoid common pitfalls but also contribute directly to the development of the features he designs. Through the cycle of usability feedback, he can build, test, and update adjustments to the code reliably, which greatly advances the whole team.

I will always reference Jack’s guidance when conducting user interviews or operating in a design sprint: I have a quote from him taped to the wall beside me. His considerations and conversations have changed my own thinking of the end-user. I would want him to lead any designs that I build or use in the future.

A Recommendation of Lauren Polkow

When I sat down with Lauren for the first time, I knew immediately that it was the chance of a lifetime to work with her. Such a spark of energy, intelligence, and enthusiasm for the customer walked into the room that it was impossible to resist the opportunity to learn. She is one of the best guides I have found, and the best product leader that I have known.

Lauren leads a team better than anyone else I have seen. She is constantly aware of each member’s strengths, past experiences, career goals, and weekend plans. Able to see potential conflict from miles away, she has the skill to fit others into effective groups that produce exceptional products. I constantly reflect on her words and actions of encouragement or guidance and find new ways to improve myself. Our team was able to deliver because Lauren steered us through conflict and growth to internal understanding.

Backed by thorough customer research and data, Lauren’s product roadmaps and vision are amazingly powerful. Her strong compassion for the customer and desire to know their entire mind ensures that every feature is grounded in utility and delight. She is always pulling informative data out of products, datastores, and customers themselves to validate and guide the company.

Whatever Lauren envisions will form on solid ground. Whatever she works on will be better for it. Whoever she leads will be my envy. Her skill in building a product, inside and out, will continue to be an aspiration of mine and a beacon wherever she goes.

Handy Kue Maintenance CLI Scripts

Building systems at my last few companies, I have found it enormously useful to have a robust queueing platform. I’ve tried Amazon’s SQS, NATS, and a couple of others, but Automattic’s Kue has been the best combination of performance and introspection.

Once you’re really using any queue for large batching tasks, you will eventually run into stuck jobs and jobs that need to be evicted early. This is queue maintenance. You should have code that automatically sweeps the queue clean based on your own rules for retries, expiry, and so on.
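As a sketch of what that automatic sweep can look like (a hypothetical 24-hour retention rule, assuming an already-configured Kue queue), the eviction decision itself can be a tiny pure function:

```javascript
// Decide whether a finished job is old enough to evict.
// Kue stores `updated_at` as a millisecond timestamp (a string in job JSON).
function isExpired(job, maxAgeMs, now = Date.now()) {
    return now - parseInt(job.updated_at, 10) > maxAgeMs;
}

// Wiring sketch (assumes `kue` is required and a queue already created):
// setInterval(() => {
//     kue.Job.rangeByState('complete', 0, 999, 'asc', (err, jobs) => {
//         if (err) return console.error(err);
//         jobs.filter((job) => isExpired(job, 24 * 60 * 60 * 1000))
//             .forEach((job) => job.remove());
//     });
// }, 60 * 1000);

module.exports = { isExpired };
```

Keeping the predicate pure makes the retention rule easy to test apart from Redis.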

Alas, you will probably need to manually clean the queue at some point. This is usually a stressful time when you don’t want to hand-type some half-thought JS snippet into a Node.js console: something like 30,000 bad jobs are backing up the workers and an investor is testing the product. For these situations, I have made a couple of command-line (CLI) scripts to evict or retry Kue jobs. I thought my CLI scripts could help others using the Kue library.

Kue CLI Scripts

I typically put stuff like this in a /bin directory at the root of an application. With that, you can create an executable file for job eviction from the queue:

$ mkdir bin && touch bin/remove
$ chmod +x bin/remove

For the parsing of command line arguments, we will need something like commander:

$ npm install commander --save

Inside ./bin/remove, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Remove/delete jobs from kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to remove [complete]', 'complete')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to remove (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
var count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { throw err; }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job); // skip jobs of other types
            }
            job.remove((err) => {
                if (err) { return rej(err); }
                count++;
                console.log('removed', job.id, job.type);
                res(job);
            });
        });
    })).then(() => {
        console.log('total removed:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});

Similarly, you can create an executable file for job re-queueing:

$ touch bin/requeue
$ chmod +x bin/requeue

Inside ./bin/requeue, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Requeue jobs into kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to requeue [failed]', 'failed')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to requeue (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
var count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { throw err; }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job); // skip jobs of other types
            }
            job.state('inactive').save((err) => {
                if (err) { return rej(err); }
                count++;
                console.log('requeued', job.id, job.type);
                res(job);
            });
        });
    })).then(() => {
        console.log('total requeued:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});

Example Output

Here’s the help text produced by remove (similar to that from requeue):

$ ./bin/remove --help
#   Usage: remove [options]
#   Remove/delete jobs from kue queue
#   Options:
#     -V, --version          output the version number
#     -s, --state <state>    specify the state of jobs to remove [complete]
#     -n, --number <number>  specify the max number of jobs [1000]
#     -t, --type <type>      specify the type of jobs to remove (RegExp)
#     -h, --help             output usage information

And an example execution to remove one job from the failed state of type matching /foo/:

$ ./bin/remove -n 1 -s failed -t foo
# removed 12876999 foobaz
# total removed: 1

We use this in our queue system to remove completed jobs on a cron schedule. It has been handy multiple times when a buggy worker failed a batch of good jobs and we needed to re-queue them all. Hopefully, it’s useful to others.

After Reading "Life in Code"

After reading excerpts of her writing over the last few months, I picked up Ellen Ullman’s Life in Code. I finished the collection of essays yesterday.

Last night, I woke from a dream. I had been programming within a group, each of us helping to shape the code - the program was physical, ethereal, and whipped like mesh within the circle we formed. Multiple streams of data flowed up through the floor, repeating and undulating into the program we were forming. The data, the events, and our hands moved into and out of a machine learning system so that we pushed and pulled at the whole shape to fit what we wanted to show.

Some of us stood and some sat but together we folded into a loop around our growing code. Then one of us stood to leave and we all paused to look, holding our hands to the warmth and low light of the program.

I felt it move and meld beneath our fingers as we watched the one move away and into the dark.

Timestamp - Chicago Startup Tech Dinner

I am invited to a dinner with other team leaders from other tech startup companies in the city. We meet at a comfortable restaurant and order cocktails. Out of our nine members, two are women. Eight are white. Two business founders, three engineers, two product managers, two marketers.

I spend the first third of the time swapping engineering stacks with the person next to me. He is “an expert on Facebook tech,” and an enthusiastic supporter of Flow and React Native. He doesn’t build in strict React Native, though: he builds apps based on a third party solution that manages React Native for you. His buddy is the founder of that third party. The app he is building has missed its expected launch date and his boss, across the table, is unhappy about that.

I spend the second third talking with my other neighbors about ICOs and blockchain technology. One is exploring the idea of making his own blockchain-as-a-company, though he doesn’t know any details of how it would function. Another is a serial founder of companies and believes that ICOs hold potential for “flexible money, with no downside, since you always get the money,” but also doesn’t know how blockchains function. Maybe his social connection app can have an ICO to raise money without involving venture capitalists, he says.

I spend the last third talking with other people about maintaining old motorcycles and whether personality tests should be used before interviews to determine the culture fit of candidates. They say that some companies are requiring the tests of all candidates before interviewing anyone. If the person doesn’t fit with their team’s past results, they do not proceed. Those companies also ask executive candidates to take the tests, but the executives refuse.

On Adding React Native to Existing iOS and Android Apps

I write in defense of the beliefs I fear are least defensible. Everything else feels like homework.
- Sarah Manguso, 300 Arguments

No homework for me today. I woke up and integrated new React Native code into an existing Swift 3 iOS app.

I spent 5 hours getting the bare dependencies to compile React components into the existing app codebase, then 3 hours building an interface in React that would have taken a day in native iOS. I was also able to copy and paste our existing JavaScript business logic libraries with zero problems. It felt as if I spent all morning painfully biking up a mountain and am now coasting downhill.

Tomorrow is biking up the mountain to integrate React Native into our Android app. Luckily I have Nevzat to help with that.

I will write up all of this once a full release cycle is complete.

Update: This made its way into a full talk on bridging native apps with React Native.

Migrating a MongoDB App Datastore to PostgreSQL

A couple weeks ago, Narro had a nice uptick in usage from Pro users that resulted in a large increase in data stored by the application. That is always pleasant but this time, I had a corresponding uptick in price for the data storage. Time for a change!


Years ago, when I first built Narro as a prototype, MongoDB was the New Thing in web development. I had a whole team of colleagues who were very enthusiastic about its uses, but I was a bit skeptical. In addition to helping implement Mongoid Ruby code at work, I thought I would get down into the nitty-gritty details of MongoDB under a Node.js system. Narro doesn’t have a heavily relational data model, either, so it seemed like a good idea at the time.

I did learn a great deal. At the day job, it was confirmed that MongoDB was a horrible choice for a heavily-relational monolithic application; millions of dollars of work were eventually scrapped in favor of an open-source alternative. In Narro’s codebase, I embraced the lack of relational structure and explored features like the TTL index, optimized map-reduce queries, and aggregation pipelines. Some things were impressively flexible, some were not strong enough, but I stuck with the MongoDB storage because I had no need to change.


Once the bill for your data storage increases by 1000% in a month, you think about changing things. I had been enjoying the performance, extensibility, and community support of PostgreSQL over the past couple of years. I calculated the price of running Narro on a PostgreSQL datastore and arrived at just 5% to 30% of the MongoDB storage cost! The only problem was getting there.

I wanted to have zero downtime and zero impact on consumers. Narro uses a microservice architecture, which posed its own problems and benefits. I didn’t have to deal with an immense amount of data, but it was millions of records. With that in mind, here was my plan:

  1. Create the schema migrations for the new PostgreSQL datastore.
  2. Create PostgreSQL-based models that expose the same API methods as the existing MongoDB-based models.
  3. Migrate a backup of the existing MongoDB data to PostgreSQL.
  4. Start concurrent asynchronous writes to the PostgreSQL database so that MongoDB and PostgreSQL contain the same data.
  5. Make all read-only microservices and read-only operations happen on the PostgreSQL datastore.
  6. Stop writing to the MongoDB datastore. Use only the new PostgreSQL-based models.
  7. Done! Remove the MongoDB datastore.


In execution, the plan worked well. Creating a superset of the model API methods used throughout the microservices proved tedious but greatly smoothed the transition. The whole process lasted about a week.

Narro was previously using Mongoose in the Node.js services and mgo in the Go services. The Go services were simple enough that I migrated even the model APIs to sqlx. In the Node.js services, I used knex for querying the PostgreSQL datastore, and I created new model code that exposed Mongoose-like methods (findById, findOne, etc.) that were used throughout the code but that now mapped to SQL queries. This meant that, wherever a model was queried, I could just replace the require statement with the new model path.
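A minimal sketch of that adapter idea (the names here are hypothetical, and the knex instance is passed in explicitly so the mapping stays visible):

```javascript
// Build a model exposing Mongoose-like read methods backed by SQL.
// `db` is expected to be a knex instance (or anything with the same
// chainable `where(...).first()` query interface).
function sqlModel(db, table) {
    return {
        findById: (id) => db(table).where({ id }).first(),
        findOne: (query) => db(table).where(query).first(),
        find: (query) => db(table).where(query),
    };
}

module.exports = sqlModel;

// Usage sketch: wherever a service did `require('../models/article')`,
// point the require at an adapter instance instead:
// const Article = sqlModel(require('../lib/knex'), 'articles');
// const article = await Article.findById('some-uuid');
```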

For step #4, I hooked into post-save hooks for the existing MongoDB models and then persisted any change with the new PostgreSQL models. This way, there was no degradation or dependency between the coexisting models.
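Roughly, the mirroring helper looked like this (the wiring comment assumes Mongoose’s post-save hook; `upsertArticle` is a hypothetical knex-backed upsert):

```javascript
// Mirror a saved MongoDB document into PostgreSQL without letting a
// replication failure affect the original save.
function mirror(doc, upsert) {
    // `upsert` is any function persisting a plain object to PostgreSQL.
    return Promise.resolve()
        .then(() => upsert(doc.toObject ? doc.toObject() : doc))
        .catch((err) => {
            // Log and swallow: the MongoDB write already succeeded.
            console.error('postgres mirror failed', err);
        });
}

// Wiring sketch with Mongoose (schema and helper names are hypothetical):
// articleSchema.post('save', (doc) => mirror(doc, upsertArticle));

module.exports = mirror;
```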


The plan, of course, didn’t account for everything. I used PostgreSQL’s jsonb column type for several fields where I was dumping data in MongoDB, but even that needs to be somewhat structured. I would watch out for that and have canonical mappings for every value in the migration.

After the initial replication of data from MongoDB to PostgreSQL, I ran some common queries to test the performance. I was surprised by how much slower PostgreSQL performed on queries operating in the jsonb columns. Luckily, there is some nice indexing capability specific to jsonb in PostgreSQL. After applying some simple indices, PostgreSQL was performing much better than the existing, indexed MongoDB datastore!
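For reference, a jsonb GIN index is a one-line knex migration (the table and column names here are placeholders):

```javascript
// Migration sketch: index a jsonb column for containment/key queries.
exports.up = (knex) =>
    knex.raw('CREATE INDEX articles_meta_idx ON articles USING GIN (meta)');

exports.down = (knex) => knex.raw('DROP INDEX articles_meta_idx');
```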

Another consideration is that MongoDB’s ObjectID type has strange behavior when cast to strings in certain contexts, like moving MongoDB objects to PostgreSQL. It’s a good idea to centralize one function to cast all your model fields, ready for PostgreSQL persistence. This also speaks to another issue in migrating MongoDB data to PostgreSQL: MongoDB data is almost always unstructured in nooks and crannies. It’s a great benefit in the right context, but I would sanitize and normalize every value in the mapping for step #3.
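A sketch of that centralized cast (the exact type handling is up to your models; this just covers the ObjectID and Date cases):

```javascript
// Normalize a MongoDB value for PostgreSQL persistence.
// ObjectIDs expose toHexString(); everything else passes through with
// light cleanup so jsonb columns stay consistently typed.
function toPg(value) {
    if (value == null) return null;
    if (typeof value.toHexString === 'function') return value.toHexString();
    if (value instanceof Date) return value.toISOString();
    if (Array.isArray(value)) return value.map(toPg);
    return value;
}

module.exports = toPg;
```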

I used the uuid-ossp PostgreSQL extension to mirror MongoDB’s uuid creation for models. Just make sure to enable it (CREATE EXTENSION IF NOT EXISTS...) and set it as the default for your column(s).
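In a knex migration, that setup is roughly (the table name is a placeholder):

```javascript
// Migration sketch: enable uuid-ossp and default the id column to it.
exports.up = (knex) =>
    knex.raw('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"').then(() =>
        knex.raw(
            'ALTER TABLE articles ALTER COLUMN id SET DEFAULT uuid_generate_v4()'
        )
    );

exports.down = (knex) =>
    knex.raw('ALTER TABLE articles ALTER COLUMN id DROP DEFAULT');
```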


Narro hasn’t actually reached step #7. I found that there are still some things PostgreSQL can’t do.

When I built out Narro’s public API, I built in rate-limiting backed by the MongoDB datastore. The leaky-bucket model was built around the TTL index feature in MongoDB, which made the business logic clean and performant. This was actually extracted into a library, mongo-throttle. I couldn’t find a comparable feature in PostgreSQL that runs automatically (most people recommend ‘expire-on-insert’ triggers). For now, Narro’s rate-limiting still operates on the MongoDB storage.
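For anyone curious, the trigger approach people recommend looks roughly like this in a knex migration (the table name and window are placeholders; I have not adopted it for Narro):

```javascript
// Migration sketch: an expire-on-insert trigger. Each insert into the
// bucket table sweeps rows older than the rate-limit window.
exports.up = (knex) =>
    knex.raw(`
        CREATE FUNCTION expire_rate_limits() RETURNS trigger AS $$
        BEGIN
            DELETE FROM rate_limits
                WHERE created_at < now() - interval '1 minute';
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER expire_rate_limits_trigger
            AFTER INSERT ON rate_limits
            FOR EACH STATEMENT
            EXECUTE PROCEDURE expire_rate_limits();
    `);

exports.down = (knex) =>
    knex.raw(`
        DROP TRIGGER expire_rate_limits_trigger ON rate_limits;
        DROP FUNCTION expire_rate_limits();
    `);
```

Unlike a TTL index, this only sweeps when inserts happen, which is one reason I left the rate-limiting on MongoDB.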

The PostgreSQL datastore is more performant than the same MongoDB datastore. Map-reduce queries have been replaced by group-by and joins. Aggregation pipelines have been replaced by window functions and sub-selects. The old is new again.

“Wait” has almost always meant “Never.”

I keep a running list of my own feature requests for Narro, aside from those of members. One of the first things I wrote down a year or more ago was to migrate the storage to PostgreSQL. It was always in my intentions, but rarely in my heart to make the effort and devote the hours. I’m now grateful for something to push a change.

On Narro Joining Stripe Atlas Beta

In June, I applied for Narro’s entry into the Stripe Atlas beta program. Because I had built Narro incrementally, the worry of separating its finances from my own had crept up on me. Between the end of one job and the start of another, I had a perfect window to formalize the structure of Narro into a real entity. I thought I could provide a review of Stripe Atlas for anyone considering the program.

Luckily, the “kit” you are provided once in Atlas is like a self-inflating mattress. The team sends along prescribed documents (bylaws, company structure, tax registration, etc.) that you can simply ratify (or edit to suit your needs). A bank account is created and connected for you (at Silicon Valley Bank) and hooked up to a new Stripe account. As I was previously running Narro on an existing Stripe account, I just transferred the credentials to my extant account - no interruption for Narro’s customers/subscriptions. The whole process took maybe two weeks from the first document to the last signature.

Atlas does prescribe quite a few pieces of the puzzle that you can’t (as of this writing) change. The new entity that you register is a C-Corporation, not an LLC. I have generally worked for and seen startups registered as LLCs, but Atlas provides some nice documentation about the benefits of a C-Corp. The new corporation is also registered in Delaware. This is exactly what I wanted, as I had previously read about the structure and precedent set around online-only businesses and startups in Delaware. Ultimately, I was pleased with the defaults.

The benefits from this process have ranged from tax procedures and general process advice to a forum community to stronger legal standing. Narro has since seen more interest from partnerships. I have felt much more at ease about the separation between the company and my own identity, financially and legally.

Narro may be a bit different from other Atlas members. It is a one-person company and I don’t have plans to take on venture capital. Instead, I am focusing now on growing the business organically and profitably. Narro has always been profitable and I don’t have plans to change that. Going through the Atlas process let me keep both options wide open. I now have the company formalized in its current state while positioning it in the best format possible for the venture capital market (see the C-Corp documentation and Delaware registration).

I’m looking forward to more benefits coming out of the Atlas program. Already, it has proven immensely valuable in prescribing the most common steps to a new incorporation.

After Reading The Undoing Project

My brother’s gift to me this Christmas was The Undoing Project, Michael Lewis’s book on the work done by researchers Daniel Kahneman and Amos Tversky. I read it from December 27 through January 1.

I had studied the work of these two authors in my college courses, but had no idea about their personal history or tangential work. The book was a splendid reminder of those ideas, and I was captivated by the description of their relationship. I wanted to write down some initial thoughts afterward. These are in no particular order, just as they came to mind.

I am not a fan of jumping around time scales in a historical account. The chapters in this book move from one decade to the next depending on the narrative, but sometimes also switch characters and time simultaneously, which becomes a bit disorienting. I am usually a big fan of altered time scales, so it was surprising to find myself annoyed in this case.

It’s been a while since I read a dramatization. This is the author who wrote Moneyball and The Big Short, so I wonder if seeing his previous books put to screen influenced the style here; several of the chapter openings read more like scenes from a film.

I enjoyed the focus on the loving relationship between the two men. They were obviously dependent, emotionally and intellectually, on each other for a long time. And it was especially interesting to learn about their eventual disagreements.

I kept hoping for a return to ANY of the vignettes applying their theories. The book leads with a basketball recruit applying heuristics in the absence of the researched theories. Why open with that hook if we never return to it? I suspect this vignette and others were meant to flesh out actual applications of the theory, but the payoff never materialized for me.

Bookmarked Passages

“On what would become a three-volume, molasses-dense, axiom-filled textbook called Foundations of Measurement - more than a thousand pages of arguments and proofs of how to measure stuff.”

You can tell the author didn’t enjoy research texts completely, but this sounds great to me.

You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors.

This gave me ideas about how humans could be transitioning to a period where our jobs are strictly to recognize patterns in chaos and translate them into computer models. Then you let the computer work and apply those models, since humans cannot reliably do so. That would be the dichotomy of utility between humans and computers.

Some people like to walk while they talk. Amos liked to shoot baskets.

“They had a certain style of working”, recalls Paul Slovic, “which is they just talked to each other for hour after hour after hour.”

“That was the moment I gave up in decision analysis”, Danny said. “No one ever made a decision because of a number. They need a story.”

This is true of memory techniques that I’m trying to teach myself. Stories cement ideas for people.

The understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real - that they are just something on somebody’s mind.

Envy was different [than regret]. Envy did not require a person to exert the slightest effort to imagine a path to the alternative state. […] Envy, in some strange way, required no imagination.

This is one more reason to wipe envy out of your eyes. It’s the basest form of regret.

To Revisit When Starting a New Job

After starting at and helping to start companies multiple times, I’ve noticed a few writings and lectures that I tend to revisit each time. Even when not moving to a new group, I tend to watch or read these every few months. The beginning of this list came out of a question from the wonderful Jane Kim.

I read Ray Dalio’s Principles (which are conveniently located online, now) to remind myself of the unapologetic beauty of truth in action and truth in words. The pamphlet is a bit heavy-handed with the imperative advice, but I find it compelling nonetheless.

I watch Laura Savino speak about the power of words and think about choosing the right words for how I want to interact with others. Increasingly, I catch myself nearly using a word that connotes more or less than I mean. Taking moments to think of better words, and watching people react to exact words, has been enlightening. It’s also spurred a desire to learn more languages.

I watch Bret Victor speak about invention-based mindsets and think about challenging implicit beliefs and biases. More than just challenging bias, it’s about defining what exactly could be better built into our mindset, or better pulled out of it. He makes a call to widen the gap identified by Ira Glass (below), and widen human capabilities.

I peruse code I have released in the past year and find the worst piece. I think about how, exactly, I would make it better now, or how to avoid it completely. I go back through my code and find the piece that had the most positive impact and think about why, exactly, it had that impact. Those two pieces are not mutually exclusive, but I hope they are.

I read Valve’s Handbook for New Employees as the most organic and captivating on-boarding document I have ever encountered. It’s a fantastic bundle of rules and history brimming with hidden gems. I read Netflix’s Culture Deck as the most idealistic version of a workplace guide. It has influenced most of my workplaces, directly or indirectly.

I watch and listen to Sarah Kay captivate with perfect timing, pure enthusiasm, and plenty of emotion. Her first poem is a near-perfect piece of performance, in my opinion. It reminds me that delivery is immensely important to the success of your message.

I listen to Ira Glass talk about the gap while creating and think about the importance of maintaining that gap. In my experience, a great swath of developers and tech workers allow that gap to close very quickly, meeting their own expectations and readily defending their work as the best possible. I remind myself that better work should always exceed my grasp and that I should know where to find it.

Recruitment Searching on GitHub

We’re currently looking for Senior Mobile (iOS / Android) and Senior Fullstack Engineers at OfficeLuv. Finding great developers is…difficult. I will occasionally search for individuals on GitHub, where I can find a scrap of contact information and reach out.

GitHub doesn’t exactly provide a fantastic interface for perusing influential users, but it does have a reasonably advanced search. From there, you can select people using a certain language, in a certain location, and with a certain number of followers. I use the followers as [an admittedly flawed] proxy for proficiency. You could alternatively substitute number of public repositories as a proxy for proficiency.

As an example, here’s my search for influential JavaScript developers in Chicagoland. If you take a look at the search terms, you can see where to tweak/replace the chosen language, alter the lower-bound of followers, or change the location. Use it to find some good employees.


I was finding good candidates with these searches, but people often omit their email addresses from their public GitHub profile pages. Never forget the metadata found in git itself! Every commit pushed to GitHub (or any git repository) must have an email address attached to it. So, all you have to do is find the raw commit data.

Fortunately, GitHub has an API that displays [a portion of] raw git data for public code repositories. To view commit metadata for a given repository, you can visit this URL, substituting your own values:


Then, just look for the commit > author > email field.
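Pulling those fields out can be sketched in a few lines (the endpoint in the comment is GitHub’s public commits API; `extractEmails` is a hypothetical helper that walks the response):

```javascript
// Pull distinct author emails out of a GitHub commits API response.
// Endpoint shape (public repos): https://api.github.com/repos/:owner/:repo/commits
function extractEmails(commits) {
    const emails = new Set();
    for (const entry of commits) {
        const email =
            entry.commit && entry.commit.author && entry.commit.author.email;
        // Skip GitHub's privacy-preserving noreply addresses.
        if (email && !email.endsWith('@users.noreply.github.com')) {
            emails.add(email);
        }
    }
    return [...emails];
}

module.exports = extractEmails;

// Usage sketch with Node 18+'s global fetch:
// fetch('https://api.github.com/repos/expressjs/express/commits')
//     .then((res) => res.json())
//     .then((commits) => console.log(extractEmails(commits)));
```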

The Benefits of Daily Code Review

I wrote a while ago about a methodology for daily code reviews, one which we implemented originally at ThreadMeUp. Now that I’m building a new team at OfficeLuv, I’ve been excited to start them again. Recently, I was talking to a very good friend of mine and found myself reasoning through the importance of team code reviews. I think it boils down to four main skills for individual team members.

Articulate Your Thoughts

I meet a great many programmers/developers/engineers/hackers. There are some that are good at what they do. There are even fewer that can articulate their ideas well enough for others to understand them. I want to work with that small subset.

Presenting your proposed changes to the group necessitates an explanation of your thought process. What was the bug or feature? How did you research it? What failed? What did you decide to build? Why is it our best course of action? If you can’t present answers to these, the rest of the team can see right away that your changes aren’t ready.

This is a great way for mid-level developers to practice explaining their actions well enough to become effective senior developers. I have seen some companies give up on more experienced engineers building communication skills. I think that’s a big mistake, and presenting ideas daily to the group strengthens those skills.

Critique Others

Often, junior developers are discouraged from speaking up to criticize more experienced teammates. Not so during our code reviews. Everyone is entitled to an opinion and is expected to have one. An outside horse sometimes wins the race, and a counterpoint raised before merging a new feature can stimulate an even better solution.

It is important to have confidence in one’s own ideas. Strong opinions, loosely held. Put forth new ideas, but be ready for them to be refuted with evidence. This brings me to the third benefit.

Receive Criticism

I have never worked with someone who was correct all the time. I have worked with several who thought they were. Encouraging discourse inevitably leads to one person challenging another’s solution. The challenged should understand that it is a natural process that attempts to yield the best possible result. Just because I push back on your implementation doesn’t mean it’s terrible; it just means you need to back it up with evidence.

Bring evidence to the table. Only one person should be talking at a time during code reviews - practice listening to someone else, even if their idea is terrible. It’s terribly good exercise.

Draw Connections

Code reviews, by forcing everyone to listen while changes across the whole system are proposed, ensure that we are all on the same page. Often, especially during hard sprints, developers can get locked into their own codebase, separated from the group. This is especially common in micro-service systems, where each person may be alone within a service.

By reviewing changes being made elsewhere, members of the team will inevitably see parallels to their own work. This can kickstart the extraction of common libraries shared between apps, or allow a good architecture to be replicated elsewhere. The team shouldn’t make the same mistake twice.

If you have been practicing regular code reviews with your team, I’d love to hear about any other benefits you may have noticed.

A Recommendation of Vasyl Stetsyuk

Vasyl is an eager and diligent QA Lead. While we were working together, a member of our team remarked that they had never met a QA leader more interested in the actual technology, and I would agree. He is a splendid team leader, and he defined our testing processes for the group. I would trust him to test my applications again in a heartbeat.

Not all QA developers are actually interested in the technology beneath their feet, but Vasyl is enamored with it. When working with the rest of our team, he wouldn’t stop at understanding the desired behavior; he would ask questions until he understood the underlying mechanics of the server or animation itself. This meant that, when he inevitably found bugs in the software, he had a high likelihood of pinpointing their root cause. This saved us hours of effort in reproducing behavior, and greatly accelerated our development cycles.

While leading our QA process, Vasyl also managed a group of junior testers, with great ease and effectiveness. I never once had to worry about the tasks assigned to them or the team’s ability to finish the allotted tests before a deadline. Vasyl worked diligently to lay out every task in easily repeatable steps, and remained in constant contact with the entire team. He could always estimate the time commitment of a testing cycle accurately, which was invaluable in our tight sprint cycles.

When Vasyl joined our team, we had no structure to our testing tasks and critical paths. He took it upon himself to evaluate several frameworks that fit within our task management system, then present the options to us and advocate his position. Once we had all agreed, he then defined and implemented the testable pathways that allowed us to confidently release each sprint cycle. The new process made our lives easier, our deployments faster, and our customers happier, all championed by Vasyl.

I would be confident in any application that passes Vasyl’s testing standards. He was an integral part of our team and a major reason we could release as quickly as we did. I loved having my mind at ease, working with Vasyl.

A Recommendation of Eric Satterwhite

Working with Eric is, I imagine, akin to preparing dinner with a master sushi chef. He is one of the most confident developers I have known, and rightly so. I have learned more from him in the last year than from anyone else on our team. Eric’s not content with his current abilities, though - he is constantly seeking new challenges and paradigms to test.

When we hired Eric, he blew through all of my coding tests. I had to think of new ones. Over the course of working with our team, he would rewrite entire services, imposing consistent architecture and documentation. Usually, he would just be let loose on a microservice with the words, “How can we make this better?” and Eric would produce entire libraries that we could reuse in multiple layers and locations. He repeatedly saw straight through problems to the core and fixed the machinery beneath.

Personally, I learned from Eric often. I greatly enjoyed our conversations about testing the boundaries of the application stack or JavaScript itself. Unlike other fantastically talented people I have worked with, Eric is entirely transparent about the work he does. He documents his structure and thinking alongside the code, and readily discusses or explains the features or fixes he writes with developers of many experience levels. He is truly an asset of knowledge for everyone around him.

Upon joining our team, Eric decided to expand his dev-ops abilities. He was the architect behind a new microservice resource allocation, and implemented an automated testing & deployment process for our 10+ services. He wrote and taught a Docker course for our internal team and implemented a new monitoring system for each of our node.js services. He succeeded in writing an image color-analyzer in node.js that was 8x faster than ImageMagick. I could go on, but I’m more excited for the next challenge he decides to tackle.

While I could recommend Eric solely for his technical excellence, I would selfishly say that I look forward more to challenging conversations with him about programming at large. Working with Eric is working with a master of his craft, and anyone would be lucky to experience that.

As of this writing, Eric’s website is smugly situated at codedependant.net.

A Recommendation of Cory Keane

Cory is as earnest as he is steadfast. He is as honest as he is pragmatic. We built a team together, around the ideals we share. We set out goals a year in advance and completed them in half the time. I would work with him again, and I would recommend him to guide a project in any climate.

Over multiple years, I have worked with Cory through times thick and thin. He remained constant and eager to improve in both, and drove the rest of us to follow suit. He worked tirelessly to plan every sprint, from epics down to individual tasks. He devoted his attention first and foremost to the understanding and clarity of the team around him. Rather than beginning his own tasks, Cory made it his priority to have everyone on the same page, and the right one. Thus, the entire team moved together as a unit, able to act with full information.

As a leader, Cory is compassionate and economical. We went through several difficult conversations amongst the team, and after each one I emerged with new gratitude that Cory had been there. He was eager to put the team at ease, but never without a true basis for his actions. We never disagreed on the rate at which we grew our team, taking on complexity only when it was beneficial. Cory never compromised on our standards for candidates, which ultimately paid off in a fantastically balanced group.

At the beginning of our last year, the entire company laid out a few goals for each team. Cory kept ours top of mind, culminating in our reaching all of them before some groups had accomplished one. He understood exactly which pieces of the puzzle depended on others, and steered us away from pitfalls. Cory rode the line between ambition and pragmatism better than anyone else I have encountered.

I have a deep respect for Cory, as a coworker, leader, and friend. His leadership and perseverance were a guiding light for our team and others in the company. He built the solid base for our technology, the intricate plan to grow it, and the powerful team that built it. I would have him do the same for anyone else.

A Recommendation of Colleen Grafton

Out of all the engineers I have known, Colleen is the most structured and economical. Over our time working together, she grew tenfold in her abilities and her assertiveness. She would be a valuable asset to any group, both for her technical skills and for her personality.

Colleen methodically approaches building features and fixing bugs. She gathers all available information before making a decision, searching for precedents before writing her own solution. She consistently refactors previous solutions to fit new and future input, while maintaining a keen eye on old or external factors. She is the kind of team member who will make your team much greater than the sum of its parts.

Though she began on our team as a junior PHP developer, Colleen quickly expanded her scope. She aided in the planning and completion of automated build and test processes across multi-tiered microservice architectures and legacy systems. She also, of her own accord, began training in new languages, eventually contributing to our node.js server code and our client-side JavaScript applications. I was elated to see her confidence grow alongside her technical abilities.

In addition to her technical growth, Colleen actively grew our company as a whole. She planned, proposed, and helped realize health benefits, insurance, and vacation policies for the entire company. At a time when we were hiring faster than comfortable, she organized regular team events and meals to encourage bonding amongst employees. With Colleen advocating to maintain the group unity and organizing benefits, we were much more secure in our relationships and ourselves.

Colleen is on my short list of those I would readily hire again. She brings much more to the table than others with the same credentials, and I trust that she will continue to build on her abilities in leaps and bounds. We should all strive to be so beneficial.

Calming Facebook's Eager Pixels

Working at an e-commerce startup, I get asked to implement new tracking features every day. I built out the integration points for Google Analytics, Google’s retargeting pixel, Google’s conversion pixel, Facebook’s retargeting pixel, Facebook’s conversion pixel, Facebook’s Audience pixel (and here is where I run out of breath). That’s not even a complete list.

Facebook’s Audience pixel, of that group, is the most recent to the table. It was introduced last year as the replacement to both Facebook’s conversion pixel and retargeting pixel. With it, Facebook also introduced a new tracking library, fbevents.js (replacing the fbds.js library, which was shared between the retargeting and conversion pixels). Phew. There goes my second breath.

Facebook’s own documentation used to carry more red flags about continuing with their deprecated retargeting and conversion pixels, but that has faded. Facebook even updates the documentation of the deprecated pixels. The overwhelming majority of our customers (and, I would bet money, Facebook’s customers) request continued support for their old retargeting and conversion pixels. As such, I continue to wade through deprecation warnings from Facebook’s tracking library.

We also continuously ran into tracking inaccuracies. With Facebook’s Pixel Helper, we could see that multiple events would be recorded via either fbevents.js or fbds.js. This bug seemed to cause different behavior over time, with some customers reporting an exceedingly high conversion rate in Facebook’s reporting interface (over 100%, at times) and some reporting exceedingly low rates (less than half of transactions). It was frustrating, and it was happening not only to our platform but on our competitors’ as well.

I debugged my way through each step of our JavaScript code, seeing us trigger only one call to Facebook’s libraries. We run a React application for our client checkout flow. We use react-router to manage history and URL state. You would think that Facebook’s tracking libraries would play nice with Facebook’s user interface library. The extra events seemed to be triggering themselves whenever the URL or history would change. But we had written our own calls to Facebook’s pixel libraries to avoid this!

I prettified the source code of fbevents.js and did the same for the deprecated fbds.js. At the very bottom of both, there is this bit (comments mine):

// s === window.fbq === window._fbq
// d === window.history
// a === window
// New code from fbevents.js
(function ra() {
    if (s.disablePushState === true) return;
    if (!d.pushState || !d.replaceState) return;
    var sa = function() {
        ba = v;
        v = c.href;
        if (v === ba) return;
        var ta = new da({
            allowDuplicatePageViews: true
        });
        ea.call(ta, 'trackCustom', 'PageView');
    };
    p.injectMethod(d, 'pushState', sa);
    p.injectMethod(d, 'replaceState', sa);
    a.addEventListener('popstate', sa, false);
})();
// ------------------------------------
// Deprecated, but still present, code from fbds.js
// (this excerpt runs inside a similar wrapper function)
if (s.disablePushState === true) return;
if (!d.pushState || !d.replaceState) return;
var t = function() {
        k = j;
        j = c.href;
        s.push(['track', 'PixelInitialized']);
    },
    u = function(v, w, x) {
        var y = v[w];
        v[w] = function() {
            var z = y.apply(this, arguments);
            x.apply(this, arguments);
            return z;
        };
    };
u(d, 'pushState', t);
u(d, 'replaceState', t);
a.addEventListener('popstate', t, false);
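That wrapping trick (the `u` helper, what fbevents.js calls "method injection") can be sketched in isolation. This is my own minimal stand-in, not Facebook's actual code; `historyLike` plays the role of window.history:

```javascript
// Wrap an existing method so an extra callback fires on every call.
function injectMethod(obj, name, extra) {
  const original = obj[name];
  obj[name] = function () {
    const result = original.apply(this, arguments); // run the real method
    extra.apply(this, arguments);                   // then the tracking hook
    return result;
  };
}

const calls = [];
const historyLike = {
  pushState(state, title, url) { calls.push(['pushState', url]); }
};

injectMethod(historyLike, 'pushState', () => calls.push(['tracked']));

historyLike.pushState({}, '', '/checkout');
// calls is now [['pushState', '/checkout'], ['tracked']]
```

This is why a single route change in an SPA can produce a second, library-initiated event: the original method still runs, but so does the injected tracker.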

Both Facebook pixel tracking libraries hook into window.history and popstate events. This means they will fire a new cascade of events whenever the browser history API is used and whenever your single page application (SPA) updates the browser URL to reflect the view. Our React SPA was sending API calls to Facebook’s pixels perfectly, but Facebook’s own injected code was triggering duplicate events. The solution:

// shut it down http://i.imgur.com/vxqrKua.gif
window.fbq.disablePushState = true;
window.fbq('init', this.props.fbAudiencePixel);
// or
window._fbq.disablePushState = true;
window._fbq.push(['addPixelId', this.props.fbRetargetingPixel]);
window._fbq.push(['track', 'PixelInitialized', {}]);

After that change, everything seems error-free (at least from testing with Facebook’s Pixel Helper). Time will tell if our customers continue to experience the same effects. It seems a shame that Facebook has this undocumented flag (I couldn’t find any documentation) that makes it play nice within single page apps. It seems an equal shame that Facebook would eagerly trigger its own, potentially erroneous, calls on global events. It also troubles me that the same React application (with multiple calls triggering Pixel Helper errors) seems to cause erroneous results on both ends of the spectrum. And it troubles me that across our multiple e-commerce competitors, where we are the only one using an entirely React SPA, none have been able to yield a 1:1 ratio of internal conversions to Facebook pixel conversions.

To anyone building a single page app right now, the best advice I can give is to set disablePushState. That should be necessary for Angular, React, and mobile frameworks alike. I’ve been able to find one other codebase identifying this problem, Segment.io’s tag manager. The good (or bad?) news is that disablePushState seems to be the only Facebook pixel magic flag.
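For concreteness, here is a hedged sketch of that SPA-safe setup, stubbed so it runs outside a browser. `'PIXEL_ID'` is a placeholder for your own pixel ID, and `onRouteChange` stands in for whatever hook your router exposes (e.g. a react-router history listener):

```javascript
const window = {}; // browser stand-in for this sketch

// Queue-based stub of the fbq bootstrap: each call is recorded in order.
window.fbq = function () {
  (window.fbq.queue = window.fbq.queue || []).push([].slice.call(arguments));
};

window.fbq.disablePushState = true;  // silence the library's history hooks
window.fbq('init', 'PIXEL_ID');
window.fbq('track', 'PageView');     // the initial page load

// Call this from your router's change hook, so you control the cadence:
function onRouteChange() {
  window.fbq('track', 'PageView');   // exactly one event per route change
}

onRouteChange();
// window.fbq.queue now holds three calls: init, then two PageViews
```

The point of the design: with the automatic hooks disabled, the only PageView events Facebook sees are the ones your own code sends, so counts can be reconciled 1:1 against your internal analytics.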