Weeknote of September 05, 2018

In the past, I’ve talked with my good friend Vaibhav Krishna about how important it is to recognize your mental reserves and drives as an exhaustible resource. Writing and dissecting software, you will often feel the tug of an idea. Just as often, you will be set upon a problem and feel no interest grow as you dig your fingers into the solution.

Usually, it’s your job to find the solution (and probably define the problem). Hopefully you’ll be getting paid for that. You’ll slog uphill through the keystrokes and requirements, only to find yourself doing it again next month.

That’s why you’ll find the best professionals following their nose whenever possible. To survive any work for long, you find ways to explore - downhill. You recognize when not to push your mind. You recognize when to follow the scent of a challenge or new idea. If your nose is no good, you’ll find out quickly.

This weekend I was following my nose. I had been reading up on some algorithms (ARIMA, GARCH, Levenshtein distance, among others) and stewing over OfficeLuv revenue forecasting issues all last week. I thought there might be a good way to quantify the difference between one e-commerce cart and another (the “distance” it would take to edit one cart into another). This would let me calculate a moving average over a customer’s e-commerce cart history. So I took the rainy Labor Day afternoon to develop the two algorithms, riding downhill the whole time. I’ll be writing them up this week and hopefully putting them into use next week.
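That cart “distance” can be sketched as a plain Levenshtein computation over item lists. A minimal version, assuming carts are ordered lists of item identifiers (my simplification for illustration; the real algorithm may weight quantities and prices differently):

```javascript
// Edit distance between two carts, treating each cart as a sequence of
// item identifiers. Insertions, deletions, and substitutions each cost 1.
function cartDistance(cartA, cartB) {
    const dist = [];
    for (let i = 0; i <= cartA.length; i++) {
        dist[i] = [i];
        for (let j = 1; j <= cartB.length; j++) {
            if (i === 0) {
                dist[0][j] = j;
            } else {
                const cost = cartA[i - 1] === cartB[j - 1] ? 0 : 1;
                dist[i][j] = Math.min(
                    dist[i - 1][j] + 1,       // delete an item
                    dist[i][j - 1] + 1,       // insert an item
                    dist[i - 1][j - 1] + cost // substitute an item
                );
            }
        }
    }
    return dist[cartA.length][cartB.length];
}
```

For example, `cartDistance(['milk', 'eggs', 'bread'], ['milk', 'bread'])` is 1: one deletion edits the first cart into the second.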

Cleaning Up For a New Hire

I’m hiring a new full-stack engineer for the OfficeLuv team, and there’s nothing quite like a new hire to kick your team into shape. I’ve written about how new hires are a valuable resource in the past, and each time I focus on drawing more and more value. This cycle, I’ve already noticed a change in my behavior over the past few weeks: I’m cleaning up in anticipation of guests.

Whenever someone agrees to join your team, they are also entering a home. You and the rest of the team have been living and building and rebuilding there, for eight hours daily, in the dust and the muck. You’ve been able to build something that works (you’re hiring!), but there are always dirty areas that you would never write again. You know about these areas - you probably list them over a drink with other engineers.

I used to embrace this hacky code as historical education for the new hires. I would walk them through it and we would both agree that it sucked and that a better path was clear. Over the years, though, I’ve realized that this is just a waste of time. If we know what’s wrong and how to fix it, we should take the action as soon as possible. We should use the new perspective of the new hire to find new problems and think of new solutions.

So I’m taking some time this week to clean up the obvious, dirty areas of the codebase. I feel like I’m vacuuming the house before guests visit: rewriting the queueing logic for our background machine learning calculations. It’s making me even more excited for a new team member to enjoy the open air.

After Reading "Lab Girl"

I was generously gifted Lab Girl by Hope Jahren through Jane Kim in her last week at OfficeLuv. I finished it today, after about a month.

I loved the rhythm of this book: chapters alternated between her memories and thoughtful paths into botanical research. I love connections between a life experience and scientific themes or analogies. I loved the author’s tone and her wit; I wanted to meet her more with each section. I loved the way it would cause me to drift my eyes off the page in thought after reading a scientific explanation and would glue my eyes to the page through a teaching experience.

The author and her partner Bill continuously reminded me of my uncle, Tom, and his teaching partner (basically my aunt), Karen. They were my scientific teachers during my childhood and high school years, taking me (and my friends) on camping trips, mud-treks, and surgical tours. They had a similarly complex and intimate career together and taught me more than I probably realize.

I bookmarked more passages toward the beginning of the book; I think the early ideas meshed into a stronger arc that tapered off as the story passed into later years.

Bookmarked Passages

Working in the hospital teaches you that there are only two kinds of people in the world: the sick and the not sick. If you are not sick, shut up and help. Twenty-five years later, I still cannot reject this as an inaccurate worldview.

I also worked in a hospital and in ambulances. I also cannot disagree with this feeling.

Full-blown mania lets you see the other side of death. Its onset is profoundly visceral and unexpected, no matter how many times you’ve been through it. It is your body that first senses the urgency of a new world about to bloom. […] Nothing, nothing can be loud enough or bright enough or move fast enough. […] Your raised arms are the fleshy petals of a magnificent lily bursting into flower. It deeply dawns on you that this new world about to bloom is you.

I read the chapters describing her mania twice over because it felt so good.

It is also not uncommon for scientists to work out their personal issues under the guise of making an evaluation, and I was receiving feedback along the lines of “this reviewer is dismayed to find that the investigator’s apparent capabilities were deemed sufficient to merit a graduate degree from the very same institution that produced his own credentials,” and other useless venom.

The scientific are no more immune to stupidity, bigotry, and prejudice than anyone else.

Weeknote of 2018-08-21, Making and Breaking Patterns

On repeat: Joe Goddard - Electric Lines

I was dramatically sick Tuesday and Wednesday of last week; I’m guessing food poisoning. The combination of not eating, not exercising, barely drinking, and barely standing made me lose over 10 pounds in about a day. The following day, the dehydration caused me to cramp up so badly I nearly went to the urgent care. There are some patterns you just shouldn’t break.

Collect more than one person in a place and they will start patterning themselves off each other. You can see it in Instagram on repeat, on repeat. All these influencers seem to be “influencing” in the same direction.

Intermittently breaking pattern and secluding yourself from an environment of repeated interaction statistically leads to more thoughtful and novel responses. It improves the group’s collective intelligence, too.

In recent weeks, I’ve been spending a fair amount of time condensing a service-oriented application architecture down to a queue-based implementation within a single project. I’m notoriously a proponent of pervasive patterns in the code I write. I will often go back and rewrite groups of functions to match the same signature, or reorganize class hierarchies to fit a couple of new members. The real benefits are realized and strengthened now as I go through years-old code, transposing high-level tasks from one language to another. Each revision chips away a few more bits from the edges to reveal the best pattern beneath.

Weeknote of 2018-08-05, Second-order Effects

On repeat on these hot nights: ultralight beam

I’m currently spending time each week interviewing people for our open software engineer position at OfficeLuv. Most days I’m just searching for candidates, and whenever I do find someone I want to interview (I had 5 interviews on Tuesday), I need to make the questions count.

One focus I’ve found to be a quick and reliable way to judge capability is to ask candidates about the second-order effects of their work. Most of the young developers I meet consider only the immediate effects of their solution or the immediate problems they debug. The best ones first extrapolate their solution before implementing it. I’m not condoning analysis-paralysis or premature optimization, just the ability to consider a level above or below the current situation.

  • How does changing this interface affect user behavior? How will that cascade into stressing pathways within the application queues?
  • How does optimizing this function for speed over consistency guide user behavior?

Often, I’ll ask a technical question later in the interview process. It’s not entirely important that they can readily rattle off the underlying algorithm; I’ll happily go over the technical implementation with them. The important part is the second half of the question, where we talk about which pitfalls, benefits, and behaviors arise from that implementation, and why.

Because of this, I’ve had second-order thinking on my mind all week. Here are a few good pieces and thoughts I’ve found this week that fit that theme.

This Mind-Controlled Robotic Limb Lets You Multitask With Three Arms. It turns out that even when subjects lost the third arm, their rapid task-switching skills remained improved. How much more of our brain is constrained because of physical limitations through the rest of our body?

Disturbances #16: The Price of Perfection. This person writes a newsletter every week about dust, and I’ve subscribed for a while. It’s usually pretty fantastic. This week, they wrote about the accumulation of dust as a result of modern technological manufacturing. As it turns out, accumulations of dust can be explosive.

Should computer science researchers disclose negative side-effects? There was a publication this week of an idea: software engineers should disclose the possible bad-actor uses and second-order effects of the software they research and/or develop. I completely agree with an initiative like this. I was surprised to see so many engineers trying to argue their inability to even attempt such thinking.

Cryptocurrency mining operators are clustering around low-priced renewable energy sources. Energy-guzzling software is clumping together in geographic space where energy costs less.


I change the placement and structure of my desk at work pretty often. Here is my current work desk. Not pictured: this is at a standing bar table.

Desk on 2018-08-05

Weeknote of 2018-07-30

On repeat: youtube.com/watch?v=cgULtrAfISw

I have been in the practice of writing down my daily tasks, ideas, steps of building, conversation points, random links, handy tidbits, etc. for a few years now. These daily notes have been invaluable in reconciling a long weekend with the tasks of a Monday morning. While working at startups, they have also been the only source of long-forgotten, one-off code snippets/fixes and a source of pattern recognition in customer feedback. Yesterday, I was reading the lovely Lab Girl (through the generosity of Jane Kim) and was reminded of these notes’ similarity to the lab notes that we were forced to commit in every science course. Reading more into that led me to the BERG concept of weeknotes. I like it.


I have loved the podcast Reply All for a few years now and so jumped at a recent episode that intersected with our startup’s operations. I led a discussion with the company about Amazon’s fake review problem and extrapolated some of the history not covered in the episode. I tried to paint a picture of the history of online marketplaces.

  • We purchased things from each other on the first forums (heavily based on reputation).
  • We moved to posting our goods on Craigslist, where you could reach a broader audience, but reputation was non-existent.
  • We moved to buying on Ebay, where seller reputation was prominent, buyers competed for items, and payment was easy.
  • We moved to buying on Amazon, where product reputation is prominent and multiple sellers compete for you.
  • What’s next? This is a conversation point I tried to push, but people replied that Amazon is probably the peak of this mountain. I hardly agree.

Marybeth had been part of the 48 Hour Film Project a couple weeks ago, and the premiere night was Sunday. We watched something like ten 6-7 minute films, all of different genres and created within 2 days’ time. The variety was wonderful. One film stood out a bit. It was created by and starred the oldest entrants, was shot on a cell-phone camera, had slipped-up lines and audio, and had no digital post-production. It was obvious that the creators also had way more fun than any other group. I’m going to remember that part.


Here’s a photo from the Sylvan Esso concert we just barely attended last week. They were much better live than in their albums, and I loved their albums.

Sylvan Esso


I had a serendipitous lunch with my old friend Josh Martin. Honestly, whenever I spend time with him, I feel ten times as excited to work on new ideas. As always, he encouraged me to share more of my ideas with other people. So here I am, writing a newsletter, again.

I just ended a text message to my girlfriend with a semicolon and I think I should be done for the day.

https://www.andjosh.com/media/2018-05-15/1526425033724-image.jpg

Getting things straightened out with this micropub stuff.

A Recommendation of Nevzat Cubukcu

We hired Nevzat as our third engineer and Android device expert. How can you not be impressed when someone comes to the interview with projects demonstrating complex matrix manipulations of on-device images, built to teach himself the Android SDK? He, of course, continued to teach himself while on our team. He also pulled us to place customer delight at the front of our minds while tirelessly improving our product.

Though he knew no JavaScript on his first day at OfficeLuv, Nevzat was contributing to our single-page applications within a week. After we finished the release of our Java Android app, he switched completely to client-side JavaScript development without skipping a beat. He would eventually spearhead the conversion of our Android app into a React Native system. I have not yet worked with anyone so agile in their adoption of new languages and technical paradigms as Nevzat.

While remaining flexible in technical development, Nevzat was also a vocal contributor to our product road-maps, striving to always find customer delight. He would go out on customer interviews, incorporate their feedback, and work harder than anyone to build things better than the user would expect. Nevzat always found a way to improve existing features with each new release, from predictive searches to battery-saving background job optimizations.

When you are on a team with him, Nevzat’s excitement is infectious and rallying. I often found myself removing my headphones just to walk through a problem with him, because I knew how quickly we would find the right path together. His ease in assimilating new programming techniques means he will never find an impasse in building a technical product, and his mindfulness of the end user means the product will surely make them smile.

A Recommendation of Jack Kent

Working with Jack is extremely rewarding, because I know he will always guide us to the best interface for the customer. His research into the mind and environment of the user is unparalleled and he has a fantastic ability to lead a group through fruitful design sprints. I learned a great deal from his practice of honing a user interface to intuitive, evolving components.

Jack’s knowledge of the customer is beyond any that I’ve seen in other designers. He led research interviews that we referenced directly in our end results. His thinking process always starts and ends with the user’s own mindset, which ensures solid ground for the final products. It is always easy to talk with Jack about why he chose a design pattern, simply because he usually references direct experience with our research.

Working an engineered functionality into a design can be difficult at times, but Jack is an invaluable resource here. He has enough knowledge and skill to not only predict and avoid common pitfalls but also contribute directly to the development of the features he designs. Through the cycle of usability feedback, he can build, test, and update adjustments to the code reliably, which greatly advances the whole team.

I will always reference Jack’s guidance when conducting user interviews or operating in a design sprint: I have a quote from him taped to the wall beside me. His considerations and conversations have changed my own thinking of the end-user. I would want him to lead any designs that I build or use in the future.

A Recommendation of Lauren Polkow

When I sat down with Lauren for the first time, I knew immediately that it was the chance of a lifetime to work with her. Such a spark of energy, intelligence, and enthusiasm for the customer walked into the room that it was impossible to resist the opportunity to learn. She is one of the best guides I have found, and the best product leader that I have known.

Lauren leads a team better than anyone else I have seen. She is constantly aware of each member’s strengths, past experiences, career goals, and weekend plans. Able to see potential conflict from miles away, she has the skill to fit others into effective groups that produce exceptional products. I constantly reflect on her words and actions of encouragement or guidance and find new ways to improve myself. Our team was able to deliver because Lauren steered us through conflict and growth to internal understanding.

Backed by thorough customer research and data, Lauren’s product roadmaps and vision are amazingly powerful. Her strong compassion for the customer and desire to know their entire mind ensures that every feature is grounded in utility and delight. She is always pulling informative data out of products, datastores, and customers themselves to validate and guide the company.

Whatever Lauren envisions will form on solid ground. Whatever she works on will be better for it. Whoever she leads will be my envy. Her skill in building a product, inside and out, will continue to be an aspiration of mine and a beacon wherever she goes.

Handy Kue Maintenance CLI Scripts

Building systems at my last few companies, it has been enormously useful to have a robust queueing platform. I’ve tried Amazon’s SQS, NATS, and a couple others but Automattic’s Kue has been the best combination of performance and introspection.

Once you’re really using any queue for large batching tasks, you will eventually run into issues with stuck jobs and jobs that need to be evicted early. This is called queue maintenance. You should have code that automatically sweeps the queue clean based on your own rules of retry, etc.
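That automatic sweep can be as small as removing each job once it completes, using Kue’s event API; a sketch, assuming a default local Redis connection (adapt the rule to your own retry policy, e.g. keep failures around for inspection):

```javascript
const kue = require('kue');
const queue = kue.createQueue(); // assumes default Redis connection settings

// Remove each job as soon as it completes, so finished jobs never
// accumulate in Redis.
queue.on('job complete', (id) => {
    kue.Job.get(id, (err, job) => {
        if (err) return;
        job.remove((err) => {
            if (err) return console.error(err);
            console.log('removed completed job', job.id);
        });
    });
});
```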

Alas, you will probably need to manually clean the queue at some point. This is usually a stressful time when you don’t want to hand-type some half-thought JS snippet into a Node.js console: something like 30,000 bad jobs are backing up the workers and an investor is testing the product. For these situations, I have made a couple of command line (CLI) scripts to evict or retry Kue jobs. I thought they could help others using the Kue library.

Kue CLI Scripts

I typically put stuff like this in a /bin directory at the root of an application. With that, you can create an executable file for job eviction from the queue:

$ mkdir bin && touch bin/remove
$ chmod +x bin/remove

For the parsing of command line arguments, we will need something like commander:

$ npm install commander --save

Inside ./bin/remove, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Remove/delete jobs from kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to remove [complete]', 'complete')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to remove (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
var count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { // bail out if the range query itself fails
        console.error(err);
        process.exit(1);
    }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job);
            }
            job.remove(() => {
                console.log('removed', job.id, job.type);
                count++;
                res(job);
            });
        });
    })).then(() => {
        console.log('total removed:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});

Similarly, you can create an executable file for job re-queueing:

$ mkdir bin && touch bin/requeue
$ chmod +x bin/requeue

Inside ./bin/requeue, you can place:

#!/usr/bin/env node
const program = require('commander');
const kue     = require('kue');
const pkg     = require('../package');   // load your app details
const redis   = require('../lib/redis'); // some Redis config
const queue   = kue.createQueue(redis);  // connect Kue to Redis

// command line options parsing
program
    .version(pkg.version)
    .description('Requeue jobs into kue queue')
    .option('-s, --state <state>',
        'specify the state of jobs to remove [failed]', 'failed')
    .option('-n, --number <number>',
        'specify the max number of jobs [1000]', '1000')
    .option('-t, --type <type>',
        'specify the type of jobs to remove (RegExp)', '')
    .parse(process.argv);

const maxIndex = parseInt(program.number, 10) - 1;
var count = 0;
kue.Job.rangeByState(program.state, 0, maxIndex, 'asc', (err, jobs) => {
    if (err) { // bail out if the range query itself fails
        console.error(err);
        process.exit(1);
    }
    Promise.all(jobs.map(job => {
        return new Promise((res, rej) => {
            if (!job.type.match(program.type)) {
                return res(job);
            }
            job.inactive();
            console.log('requeued', job.id, job.type);
            count++;
            res(job);
        });
    })).then(() => {
        console.log('total requeued:', count);
        process.exit(0);
    }).catch((err) => {
        console.error(err);
        process.exit(1);
    });
});

Example Output

Here’s the help text produced by remove (similar to that from requeue):

$ ./bin/remove --help
# 
#   Usage: remove [options]
# 
#   Remove/delete jobs from kue queue
# 
# 
#   Options:
# 
#     -V, --version          output the version number
#     -s, --state <state>    specify the state of jobs to remove [complete]
#     -n, --number <number>  specify the max number of jobs [1000]
#     -t, --type <type>      specify the type of jobs to remove (RegExp)
#     -h, --help             output usage information

And an example execution to remove one job from the failed state of type matching /foo/:

$ ./bin/remove -n 1 -s failed -t foo
# removed 12876999 foobaz
# total removed: 1

We use this in our queue system to remove completed jobs on a cron schedule. It has been handy multiple times when a buggy worker has failed a bunch of good jobs and we need to re-queue them all. Hopefully, it’s useful to others.

After Reading "Life in Code"

After reading her excerpts from the last few months, I picked up Ellen Ullman’s Life in Code. I finished the collection of essays yesterday.

Last night, I woke from a dream. I had been programming within a group, each of us helping to shape the code - the program was physical, ethereal, and whipped like mesh within the circle we formed. Multiple streams of data flowed up through the floor, repeating and undulating into the program we were forming. The data, the events, and our hands moved into and out of a machine learning system so that we pushed and pulled at the whole shape to fit what we wanted to show.

Some of us stood and some sat but together we folded into a loop around our growing code. Then one of us stood to leave and we all paused to look, holding our hands to the warmth and low light of the program.

I felt it move and meld beneath our fingers as we watched the one move away and into the dark.

Timestamp - Chicago Startup Tech Dinner

I am invited to a dinner with other team leaders from other tech startup companies in the city. We meet at a comfortable restaurant and order cocktails. Out of our nine members, two are women. Eight are white. Two business founders, three engineers, two product managers, two marketers.

I spend the first third of the time swapping engineering stacks with the person next to me. He is “an expert on Facebook tech,” and an enthusiastic supporter of Flow and React Native. He doesn’t build in strict React Native, though: he builds apps based on a third party solution that manages React Native for you. His buddy is the founder of that third party. The app he is building has missed its expected launch date and his boss, across the table, is unhappy about that.

I spend the second third talking with my other neighbors about ICOs and blockchain technology. One is exploring the idea of making his own blockchain-as-a-company, though he doesn’t know any details of how it would function. Another is a serial founder of companies and believes that ICOs hold potential for “flexible money, with no downside, since you always get the money,” but also doesn’t know how blockchains function. Maybe his social connection app can have an ICO to raise money without involving venture capitalists, he says.

I spend the last third talking with other people about maintaining old motorcycles and whether personality tests should be used before interviews to determine the culture fit of candidates. They say that some companies are requiring the tests of all candidates before interviewing anyone. If the person doesn’t fit with their team’s past results, they do not proceed. Those companies also ask executive candidates to take the tests, but the executives refuse.

On Adding React Native to Existing iOS and Android Apps

I write in defense of the beliefs I fear are least defensible. Everything else feels like homework.
- Sarah Manguso, 300 Arguments

No homework for me today. I woke up and integrated new React Native code into an existing Swift 3 iOS app.

I spent 5 hours getting the bare dependencies to compile React components into the existing app codebase, then 3 hours building an interface in React that would have taken a day in native iOS. I was also able to copy and paste our existing JavaScript business logic libraries with zero problem. It felt as if I spent all morning painfully biking up a mountain, after which I’m now coasting downhill.

Tomorrow is biking up the mountain to integrate React Native into our Android app. Luckily I have Nevzat to help with that.

I will write up all of this once a full release cycle is complete.

Update: This made its way into a full talk on bridging native apps with React Native.

Migrating a MongoDB App Datastore to PostgreSQL

A couple weeks ago, Narro had a nice uptick in usage from Pro users that resulted in a large increase in data stored by the application. That is always pleasant but this time, I had a corresponding uptick in price for the data storage. Time for a change!

Backstory

Years ago, when I first built Narro as a prototype, MongoDB was the New Thing in web development. I had a whole team of colleagues who were very enthusiastic about its uses, but I was a bit skeptical. In addition to helping implement Mongoid Ruby code at work, I thought I would get down into some nitty-gritty details of MongoDB under a Node.js system. Narro doesn’t have a heavily relational data model, either, so it seemed like a good idea at the time.

I did learn a great deal. At the day job, it was confirmed that MongoDB was a horrible choice for a heavily-relational monolithic application. Millions of dollars ended up getting scrapped for an implementation of open-source software. In Narro’s codebase, I embraced the lack of relational structure and explored features like the TTL index, optimized map-reduce queries, and aggregation pipelines. Some things were impressively flexible, some things were not strong enough, but I stuck with the MongoDB storage because I had no need to change.

Change

Once the bill for your data storage increases by 1000% in a month, you think about changing things. I have been enjoying the performance, extensibility, and support of PostgreSQL over the past couple years. I calculated the price for running Narro on a PostgreSQL datastore and ended up with a price between 5% and 30% of the MongoDB storage! The only problem was in getting there.

I wanted to have zero downtime and zero impact on consumers. Narro uses a microservice architecture, which posed its own problems and benefits. I didn’t have to deal with an immense amount of data, but it was millions of records. With that in mind, here was my plan:

  1. Create the schema migrations for the new PostgreSQL datastore.
  2. Create PostgreSQL-based models that expose the same API methods as the existing MongoDB-based models.
  3. Migrate a backup of the existing MongoDB data to PostgreSQL.
  4. Start concurrent asynchronous writes to the PostgreSQL database so that MongoDB and PostgreSQL contain the same data.
  5. Make all read-only microservices and read-only operations happen on the PostgreSQL datastore.
  6. Stop writing to the MongoDB datastore. Use only the new PostgreSQL-based models.
  7. Done! Remove the MongoDB datastore.

Execution

In execution, the plan worked well. Creating a superset of the model API methods used throughout the microservices proved tedious but greatly smoothed the transition. The whole process lasted about a week.

Narro was previously using Mongoose in the Node.js services and mgo in the Go services. The Go services were simple enough that I migrated even the model APIs to sqlx. In the Node.js services, I used knex for querying the PostgreSQL datastore, and I created new model code that exposed Mongoose-like methods (findById, findOne, etc.) that were used throughout the code but that now mapped to SQL queries. This meant that, wherever a model was queried, I could just replace the require statement with the new model path.
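A sketch of one of those Mongoose-like wrappers over knex; the table name, columns, and connection string here are illustrative, not Narro’s actual schema:

```javascript
const knex = require('knex')({
    client: 'pg',
    connection: process.env.DATABASE_URL, // assumed env var, for illustration
});

// A Mongoose-like model facade over knex, so call sites can keep using
// findById/findOne/find while the queries actually run against PostgreSQL.
function makeModel(tableName) {
    return {
        findById(id) {
            return knex(tableName).where({ id }).first();
        },
        findOne(criteria) {
            return knex(tableName).where(criteria).first();
        },
        find(criteria) {
            return knex(tableName).where(criteria);
        },
    };
}

// e.g. this module could replace a require('./models/article')
module.exports = makeModel('articles');
```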

For step #4, I hooked into post-save hooks for the existing MongoDB models and then persisted any change with the new PostgreSQL models. This way, there was no degradation or dependency between the coexisting models.
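That dual-write hook might look like the following sketch; the schema, table, and field names are invented for illustration, and it assumes a knex version with onConflict support:

```javascript
const mongoose = require('mongoose');
const knex = require('knex')({
    client: 'pg',
    connection: process.env.DATABASE_URL, // assumed env var, for illustration
});

// Illustrative schema; the real models differ.
const articleSchema = new mongoose.Schema({ title: String, body: String });

// After every successful MongoDB save, upsert the same document into
// PostgreSQL. Failures are logged, not thrown, so the primary write path
// never depends on the new datastore.
articleSchema.post('save', (doc) => {
    knex('articles')
        .insert({ id: doc._id.toString(), title: doc.title, body: doc.body })
        .onConflict('id')
        .merge()
        .catch((err) => console.error('pg mirror failed:', err));
});

const Article = mongoose.model('Article', articleSchema);
```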

Pitfalls

The plan, of course, didn’t account for everything. I used PostgreSQL’s jsonb column type for several fields where I was dumping data in MongoDB, but even that needs to be somewhat structured. I would watch out for that and have canonical mappings for every value in the migration.

After the initial replication of data from MongoDB to PostgreSQL, I ran some common queries to test the performance. I was surprised by how much slower PostgreSQL performed on queries operating in the jsonb columns. Luckily, there is some nice indexing capability specific to jsonb in PostgreSQL. After applying some simple indices, PostgreSQL was performing much better than the existing, indexed MongoDB datastore!
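The jsonb indexing can be applied with a plain GIN index; an illustrative knex migration (the table and column names are assumptions):

```javascript
// knex migration applying a GIN index to a jsonb column. GIN indexes
// accelerate containment (@>) and key-existence (?) jsonb operators.
exports.up = (knex) =>
    knex.raw('CREATE INDEX articles_meta_gin_idx ON articles USING GIN (meta)');

exports.down = (knex) =>
    knex.raw('DROP INDEX articles_meta_gin_idx');
```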

Another consideration is that MongoDB’s ObjectID type has strange behavior when cast to strings in certain contexts, like moving MongoDB objects to PostgreSQL. It’s a good idea to centralize one function that casts all your model fields, ready for PostgreSQL persistence. This also speaks to another issue in migrating MongoDB data to PostgreSQL: MongoDB data is almost always unstructured in nooks and crannies. That’s a great benefit in the right context, but I would sanitize and normalize every value in the mapping for step #3.
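A single casting helper covers the ObjectID-to-string issue; a minimal sketch (castId is a hypothetical name, and it only assumes the driver’s toHexString() method on ObjectID instances):

```javascript
// Normalize MongoDB identifier values into plain strings before inserting
// rows into PostgreSQL. ObjectID instances expose toHexString(); anything
// else falls back to a plain string cast.
function castId(value) {
    if (value === null || value === undefined) return null;
    if (typeof value.toHexString === 'function') return value.toHexString();
    return String(value);
}
```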

I used the uuid-ossp PostgreSQL extension to mirror MongoDB’s uuid creation for models. Just make sure to enable it (CREATE EXTENSION IF NOT EXISTS...) and set it as the default for your column(s).
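Enabling the extension and setting the default can live in one migration; an illustrative knex sketch (table and column names assumed):

```javascript
// knex migration: enable uuid-ossp and default a column to
// uuid_generate_v4(), mirroring MongoDB's id creation.
exports.up = async (knex) => {
    await knex.raw('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"');
    await knex.raw(
        'ALTER TABLE articles ALTER COLUMN id SET DEFAULT uuid_generate_v4()'
    );
};

exports.down = (knex) =>
    knex.raw('ALTER TABLE articles ALTER COLUMN id DROP DEFAULT');
```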

Observations

Narro hasn’t actually reached step #7. I found that there are still some things PostgreSQL can’t do.

When I built out Narro’s public API, I built in rate-limiting backed by the MongoDB datastore. The leaky-bucket model was built around MongoDB’s TTL index feature, which kept the business logic clean and performant. (This was actually extracted into a library, mongo-throttle.) I couldn’t find an equivalent PostgreSQL feature that runs automatically; most people recommend ‘expire-on-insert’ triggers instead. For now, Narro’s rate-limiting still operates on the MongoDB storage.
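For reference, the ‘expire-on-insert’ workaround people suggest looks roughly like this (table, column, and interval are illustrative). Note the difference from a TTL index: nothing expires unless inserts keep arriving.

```sql
-- Illustrative sketch of an expire-on-insert trigger.
CREATE TABLE rate_hits (
  ip         inet        NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION expire_rate_hits() RETURNS trigger AS $$
BEGIN
  DELETE FROM rate_hits WHERE created_at < now() - interval '1 minute';
  RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER rate_hits_expire
  AFTER INSERT ON rate_hits
  FOR EACH STATEMENT
  EXECUTE PROCEDURE expire_rate_hits();
```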

The PostgreSQL datastore is more performant than the same MongoDB datastore. Map-reduce queries have been replaced by group-by and joins. Aggregation pipelines have been replaced by window functions and sub-selects. The old is new again.
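As a hedged illustration of those translations (the schema here is mine, not Narro's): a map-reduce count becomes a plain GROUP BY, and a "latest row per group" aggregation pipeline becomes a window function in a sub-select.

```sql
-- Map-reduce count -> GROUP BY
SELECT author_id, count(*) AS total
FROM posts
GROUP BY author_id;

-- Aggregation pipeline ("latest post per author") -> window function
SELECT *
FROM (
  SELECT posts.*,
         row_number() OVER (PARTITION BY author_id ORDER BY created_at DESC) AS rn
  FROM posts
) ranked
WHERE rn = 1;
```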

“Wait” has almost always meant “Never.”

I keep a running list of my own feature requests for Narro, aside from those of members. One of the first things I wrote down a year or more ago was to migrate the storage to PostgreSQL. It was always in my intentions, but rarely in my heart to make the effort and devote the hours. I’m now grateful for something to push a change.

On Narro Joining Stripe Atlas Beta

In June, I applied for Narro’s entry into the Stripe Atlas beta program. Because I had built Narro incrementally, the worry of financial separation had crept up on me a bit. Between the end of one job and the start of another, I had a perfect time to formalize the structure of Narro into a real entity. I thought I could provide a review of Stripe Atlas for anyone considering the program.

Luckily, the “kit” you are provided once in Atlas is like a self-inflating mattress. The team sends along prescribed documents (bylaws, company structure, tax registration, etc.) that you can simply ratify (or edit to suit your needs). A bank account is created and connected for you (at Silicon Valley Bank) and hooked up to a new Stripe account. As I was previously running Narro on an existing Stripe account, I just transferred the credentials to my extant account - no interruption for Narro’s customers/subscriptions. The whole process took maybe two weeks from the first document to the last signature.

Atlas does prescribe quite a few pieces of the puzzle that you can’t (as of this writing) change. The new entity that you register is a C-Corporation, not an LLC. I have generally worked for and seen startups registered as LLC entities, but Atlas provides some nice documentation about the benefits of a C-Corp. The new corporation will also be registered in Delaware. This is exactly what I wanted, as I had previously read about the structure and precedence set around online-only businesses and startups in Delaware. Ultimately, I was pleased with the defaults.

The benefits from this process have ranged from tax procedures and general process advice to a forum community to stronger legal standing. Narro has since seen more interest from partnerships. I have felt much more at ease about the separation between the company and my own identity, financially and legally.

Narro may be a bit different from other Atlas members. It is a one-person company and I don’t have plans to take on venture capital. Instead, I am focusing now on growing the business organically and profitably. Narro has always been profitable and I don’t have plans to change that. Going through the Atlas process let me keep both options wide open. I now have the company formalized in its current state while positioning it in the best format possible for the venture capital market (see the C-Corp documentation and Delaware registration).

I’m looking forward to more benefits coming out of the Atlas program. Already, it has proven immensely valuable in prescribing the most common steps to a new incorporation.

After Reading The Undoing Project

My brother’s gift to me this Christmas was The Undoing Project, a book by Michael Lewis on the work done by researchers Daniel Kahneman and Amos Tversky. I read it from December 27 through January 1.

I had studied the work of these two researchers in my college courses, but had no idea about their personal history or tangential work. It was a splendid reminder of those ideas, and I was captivated by the description of their relationship. I wanted to write down some initial thoughts afterward. These are not in any particular order, just as they came to mind.

I am not a fan of jumping around time scales in a historical account of things. The chapters in this book move from one decade to the next depending on the narrative, but sometimes also switch characters and time simultaneously, which becomes a bit disorienting. I am usually a big fan of altered time scales, so it was surprising to find myself annoyed in this case.

It’s been a while since I read a dramatization. Lewis is the author who wrote Moneyball and The Big Short, and I wonder if seeing his previous work put to screen caused some of this book to read like a screenplay. Several of the chapter openings read more like scenes from a film.

I enjoyed the focus on the loving relationship between the two men. They were obviously dependent, emotionally and intellectually, on each other for a long time. And it was especially interesting to learn about their eventual disagreements.

I kept hoping for a return to ANY of the vignette stories applying their theories. The book leads with a basketball recruit applying heuristics in the absence of the researched theories. Why have the initial hook with the basketball recruits if we never return to them? I feel that maybe this and others were meant to flesh out actual application of the theory, but the payoff never really materialized for me.

Bookmarked Passages

“On what would become a three-volume, molasses-dense, axiom-filled textbook called Foundations of Measurement - more than a thousand pages of arguments and proofs of how to measure stuff.”

You can tell the author didn’t enjoy research texts completely, but this sounds great to me.

You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors.

This gave me ideas about how humans could be transitioning to a period where our jobs are strictly to recognize patterns in the chaos and translate them into computer models. Then you let the computer apply those models, since humans cannot reliably do so. That would be the dichotomy of utility between humans and computers.

Some people like to walk while they talk. Amos liked to shoot baskets.

“They had a certain style of working”, recalls Paul Slovic, “which is they just talked to each other for hour after hour after hour.”

“That was the moment I gave up in decision analysis”, Danny said. “No one ever made a decision because of a number. They need a story.”

This is true of memory techniques that I’m trying to teach myself. Stories cement ideas for people.

The understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real - that they are just something on somebody’s mind.

Envy was different [than regret]. Envy did not require a person to exert the slightest effort to imagine a path to the alternative state. […] Envy, in some strange way, required no imagination.

This is one more reason to wipe envy out of your eyes. It’s the basest form of regret.

To Revisit When Starting a New Job

After starting at and helping to start companies multiple times, I’ve noticed a few writings and lectures that I tend to revisit each time. Even when not moving to a new group, I tend to watch or read these every few months. The beginning of this list came out of a question from the wonderful Jane Kim.

I read Ray Dalio’s Principles (which are conveniently located online, now) to remind myself of the unapologetic beauty of truth in action and truth in words. The pamphlet is a bit heavy-handed with the imperative advice, but I find it compelling nonetheless.

I watch Laura Savino speak about the power of words and think about choosing the right words for how I want to interact with others. Increasingly, I catch myself nearly using a word that connotes more or less than I mean. Taking moments to think of better words, and watching people react to exact words, has been enlightening. It’s also spurred a desire to learn more languages.

I watch Bret Victor speak about invention-based mindsets and think about challenging implicit beliefs and biases. More than just challenging bias, it’s about defining what exactly could be better built into our mindset, or better pulled out of it. He makes a call to widen the gap identified by Ira Glass (below), and widen human capabilities.

I peruse code I have released in the past year and find the worst piece. I think about how, exactly, I would make it better now, or how to avoid it completely. I go back through my code and find the piece that had the most positive impact and think about why, exactly, it had that impact. Those two pieces are not mutually exclusive, but I hope they are.

I read Valve’s Handbook for New Employees as the most organic and captivating on-boarding document I have ever encountered. It’s a fantastic bundle of rules and history brimming with hidden gems. I read Netflix’s Culture Deck as the most idealistic version of a workplace guide. It has influenced most of my workplaces, directly or indirectly.

I watch and listen to Sarah Kay captivate with perfect timing, pure enthusiasm, and plenty of emotion. Her first poem is a near-perfect piece of performance, in my opinion. It reminds me that delivery is immensely important to the success of your message.

I listen to Ira Glass talk about the gap while creating and think about the importance of maintaining that gap. In my experience, a great swath of developers and tech workers allow that gap to close very quickly, meeting their own expectations and readily defending their work as the best possible. I remind myself that better work should always exceed my grasp and that I should know where to find it.

Recruitment Searching on GitHub

We’re currently looking for Senior Mobile (iOS / Android) and Senior Fullstack Engineers at OfficeLuv. Finding great developers is…difficult. I will occasionally search for individuals on GitHub, where I can find a scrap of contact information and reach out.

GitHub doesn’t exactly provide a fantastic interface for perusing influential users, but it does have a reasonably advanced search. From there, you can select people using a certain language, in a certain location, and with a certain number of followers. I use the followers as [an admittedly flawed] proxy for proficiency. You could alternatively substitute number of public repositories as a proxy for proficiency.
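The qualifiers can be assembled into a search URL programmatically; here is a hypothetical helper (the qualifier names - language:, location:, followers:> - are GitHub's, everything else is illustrative):

```javascript
// Hypothetical helper assembling a GitHub user-search URL from the
// qualifiers mentioned above: language, location, follower count.
function githubUserSearchUrl({ language, location, minFollowers }) {
  const query = [
    'type:user',
    `language:${language}`,
    `location:${location}`,
    `followers:>${minFollowers}`,
  ].join(' ');
  return `https://github.com/search?q=${encodeURIComponent(query)}&type=Users`;
}
```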

As an example, here’s my search for influential JavaScript developers in Chicagoland. If you take a look at the search terms, you can see where to tweak/replace the chosen language, alter the lower-bound of followers, or change the location. Use it to find some good employees.

Update

I was finding good candidates with these searches, but people often omit or obfuscate the email addresses on their public GitHub profile pages. But never forget the metadata found in git itself! Every commit pushed to GitHub (or any git repository) must have an email address attached to it. So, all you have to do is find the raw commit data.

Fortunately, GitHub has an API that displays [a portion of] raw git data for public code repositories. To view commit metadata for a given repository, you can visit this URL, substituting your own values:

https://api.github.com/repos/<username>/<repo_name>/commits

Then, just look for the commit > author > email field.
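Given the JSON array that endpoint returns, pulling the unique author emails is a few lines of JavaScript (a sketch; the field path is the one named above, and skipping GitHub's noreply addresses is my own addition):

```javascript
// Collect unique author emails from a GitHub commits API response,
// skipping GitHub's @users.noreply.github.com placeholder addresses.
function authorEmails(commits) {
  const emails = commits
    .map((c) => c.commit && c.commit.author && c.commit.author.email)
    .filter((e) => e && !e.endsWith('@users.noreply.github.com'));
  return [...new Set(emails)];
}
```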