After Reading The Undoing Project

My brother’s gift to me this Christmas was The Undoing Project, Michael Lewis’s book about the work of researchers Daniel Kahneman and Amos Tversky. I read it from December 27 through January 1.

I had studied the work of these two researchers in my college courses, but had no idea about their personal history or tangential work. It was a splendid reminder of those ideas, and I was captivated by the description of their relationship. I wanted to write down some initial thoughts afterward. These are in no particular order, just as they came to mind.

I am not a fan of jumping around time scales in a historical account. The chapters in this book move from one decade to the next depending on the narrative, but sometimes also switch characters and time simultaneously, which becomes a bit disorienting. I am usually a big fan of altered time scales, so it was surprising to find myself annoyed in this case.

It’s been a while since I read a dramatization. I know this is the author who wrote Moneyball and The Big Short, so I wonder if seeing his previous books put to screen influenced this style and caused some of this book to read like a screenplay. Several of the chapter openings read more like scenes from a film.

I enjoyed the focus on the loving relationship between the two men. They were obviously dependent, emotionally and intellectually, on each other for a long time. And it was especially interesting to learn about their eventual disagreements.

I kept hoping for a return to ANY of the vignettes applying their theories. The book leads with basketball recruiters applying heuristics in the absence of the researched theories. Why have that initial hook with the basketball recruiters if we never return to them? I feel that this vignette and others were meant to flesh out actual applications of the theory, but the payoff never really materialized for me.

Bookmarked Passages

“On what would become a three-volume, molasses-dense, axiom-filled textbook called Foundations of Measurement - more than a thousand pages of arguments and proofs of how to measure stuff.”

You can tell the author didn’t entirely enjoy research texts, but this sounds great to me.

You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked a few questions of doctors.

This gave me ideas about how humans could be transitioning to a period where our job is strictly to recognize patterns in chaos and translate them into computer models. Then you let the computer apply those models, since humans cannot reliably do so. That would be the dichotomy of utility between humans and computers.

Some people like to walk while they talk. Amos liked to shoot baskets.

“They had a certain style of working”, recalls Paul Slovic, “which is they just talked to each other for hour after hour after hour.”

“That was the moment I gave up on decision analysis”, Danny said. “No one ever made a decision because of a number. They need a story.”

This is true of memory techniques that I’m trying to teach myself. Stories cement ideas for people.

The understanding of numbers is so weak that they don’t communicate anything. Everyone feels that those probabilities are not real - that they are just something on somebody’s mind.

Envy was different [than regret]. Envy did not require a person to exert the slightest effort to imagine a path to the alternative state. […] Envy, in some strange way, required no imagination.

This is one more reason to wipe envy out of your eyes. It’s the basest form of regret.

To Revisit When Starting a New Job

After starting at, and helping to start, companies multiple times, I’ve noticed a few writings and lectures that I tend to revisit each time. Even when not moving to a new group, I tend to watch or read these every few months. The beginning of this list came out of a question from the wonderful Jane Kim.

I read Ray Dalio’s Principles (which are conveniently located online, now) to remind myself of the unapologetic beauty of truth in action and truth in words. The pamphlet is a bit heavy-handed with the imperative advice, but I find it compelling nonetheless.

I watch Laura Savino speak about the power of words and think about choosing the right words for how I want to interact with others. Increasingly, I catch myself nearly using a word that connotes more or less than I mean. Taking moments to think of better words, and watching people react to exact words, has been enlightening. It’s also spurred a desire to learn more languages.

I watch Bret Victor speak about invention-based mindsets and think about challenging implicit beliefs and biases. More than just challenging bias, it’s about defining what exactly could be better built into our mindset, or better pulled out of it. He makes a call to widen the gap identified by Ira Glass (below), and widen human capabilities.

I peruse code I have released in the past year and find the worst piece. I think about how, exactly, I would make it better now, or how to avoid it completely. I go back through my code and find the piece that had the most positive impact and think about why, exactly, it had that impact. Those two pieces are not mutually exclusive, but I hope they are.

I read Valve’s Handbook for New Employees as the most organic and captivating on-boarding document I have ever encountered. It’s a fantastic bundle of rules and history brimming with hidden gems. I read Netflix’s Culture Deck as the most idealistic version of a workplace guide. It has influenced most of my workplaces, directly or indirectly.

I watch and listen to Sarah Kay captivate with perfect timing, pure enthusiasm, and plenty of emotion. Her first poem is a near-perfect piece of performance, in my opinion. It reminds me that delivery is immensely important to the success of your message.

I listen to Ira Glass talk about the gap while creating and think about the importance of maintaining that gap. In my experience, a great swath of developers and tech workers allow that gap to close very quickly, meeting their own expectations and readily defending their work as the best possible. I remind myself that better work should always be beyond my grasp, and that I should know where to find it.

Recruitment Searching on GitHub

We’re currently looking for Senior Mobile (iOS / Android) and Senior Fullstack Engineers at OfficeLuv. Finding great developers is…difficult. I will occasionally search for individuals on GitHub, where I can find a scrap of contact information and reach out.

GitHub doesn’t exactly provide a fantastic interface for perusing influential users, but it does have a reasonably advanced search. From there, you can select people using a certain language, in a certain location, and with a certain number of followers. I use follower count as [an admittedly flawed] proxy for proficiency; you could substitute the number of public repositories as a proxy instead.

As an example, here’s my search for influential JavaScript developers in Chicagoland. If you take a look at the search terms, you can see where to tweak/replace the chosen language, alter the lower-bound of followers, or change the location. Use it to find some good employees.
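
For reference, a search like that boils down to a few qualifiers in GitHub’s search syntax. Here’s a sketch of the query behind my example (the values are yours to tweak):

language:javascript location:chicago followers:>=100 type:user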

Update

I was finding good candidates with these searches, but people often omit their email addresses from their public GitHub profile pages. Never forget the metadata found in git itself, though! Every committed code change pushed to GitHub (or any git repository) must have an email address attached to it. So, all you have to do is find the raw commit data.

Fortunately, GitHub has an API that displays [a portion of] raw git data for public code repositories. To view commit metadata for a given repository, you can visit this URL, substituting your own values:

https://api.github.com/repos/<username>/<repo_name>/commits

Then, just look for the commit > author > email field.
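
If you’d rather script that last step, here’s a minimal sketch in JavaScript (runnable in a browser console or any runtime with fetch; the bracketed values are placeholders, as above):

// List commit author emails for a public repository via the GitHub API
fetch('https://api.github.com/repos/<username>/<repo_name>/commits')
    .then(function (response) { return response.json(); })
    .then(function (commits) {
        commits.forEach(function (entry) {
            // The raw git metadata lives under commit.author
            console.log(entry.commit.author.name, entry.commit.author.email);
        });
    });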

The Benefits of Daily Code Review

I wrote a while ago about a methodology for daily code reviews, one which we implemented originally at ThreadMeUp. Now that I’m building a new team at OfficeLuv, I’ve been excited to start them again. Recently, I was talking to a very good friend of mine and found myself reasoning through the importance of team code reviews. I think it boils down to four main skills for individual team members.

Articulate Your Thoughts

I meet a great many programmers/developers/engineers/hackers. Some are good at what they do. Even fewer can articulate their ideas well enough for others to understand them. I want to work with that small subset.

Presenting your proposed changes to the group necessitates an explanation of your thought process. What was the bug or feature? How did you research it? What failed? What did you decide to build? Why is it our best course of action? If you can’t present answers to these, the rest of the team can see right away that your changes aren’t ready.

This is a great way for mid-level developers to practice explaining their actions well enough to become effective senior developers. I have seen some companies give up on building communication skills in their more experienced engineers. I think that’s a big mistake, and presenting ideas daily to the group strengthens those skills.

Critique Others

Junior developers are often discouraged from speaking up and criticizing more experienced teammates. This is not so during our code reviews. Everyone is entitled to an opinion and is expected to have one. An outside horse sometimes wins the race, and a counterpoint raised before merging a new feature can stimulate an even better solution.

It is important to have confidence in one’s own ideas. Strong opinions, loosely held. Put forth new ideas, but be ready for them to be refuted with evidence. This brings me to the third benefit.

Receive Criticism

I have never worked with someone who was correct all the time. I have worked with several who thought they were. Encouraging discourse inevitably leads to one person challenging another’s solution. The challenged should understand that it is a natural process, one that attempts to yield the best possible result. Just because I push back on your implementation doesn’t mean it’s terrible; it means you need to back it up with evidence.

Bring evidence to the table. Only one person should be talking at a time during code reviews - practice listening to someone else, even if their idea is terrible. It’s terribly good exercise.

Draw Connections

Code reviews, by forcing everyone to listen while changes across the whole system are proposed, ensure that we are all on the same page. Often, especially during hard sprints, developers can get locked into their own codebase, separated from the group. This is especially common in micro-service systems, where each person may be alone within a service.

By reviewing changes being made elsewhere, members of the team will inevitably see parallels to their own work. This can kickstart the formation of common libraries between apps, or let a good architecture be replicated elsewhere. The team shouldn’t make the same mistake twice.


If you have been practicing regular code reviews with your team, I’d love to hear about any other benefits you may have noticed.

A Recommendation of Vasyl Stetsyuk

Vasyl is an eager and diligent QA Lead. While we were working together, a member of our team remarked that they had met no other QA leader more interested in the actual technology, and I would agree. He is a splendid team leader, and defined our testing processes for the group. I would trust him to test my applications again in a heartbeat.

Not all QA developers are actually interested in the technology beneath their feet, but Vasyl is enamored with it. When working with the rest of our team, he wouldn’t stop at understanding the desired behavior; he would ask questions until he understood the underlying mechanics of the server or animation itself. This meant that, when he inevitably found bugs in the software, he had a high likelihood of pinpointing the root cause. This saved us hours of effort in reproducing behavior, and greatly accelerated our development cycles.

While leading our QA process, Vasyl also managed a group of junior testers with great ease and effectiveness. I never once had to worry about the tasks assigned to them or the team’s ability to finish the allotted tests before a deadline. Vasyl worked diligently to lay out every task in easily repeatable steps, and remained in constant contact with the entire team. He could always estimate the time commitment of a testing cycle accurately, which was invaluable in our tight sprint cycles.

When Vasyl joined our team, we had no structure to our testing tasks and critical paths. He took it upon himself to evaluate several frameworks that fit within our task management system, then presented the options to us and advocated his position. Once we had all agreed, he defined and implemented the testable pathways that allowed us to confidently release each sprint cycle. The new process made our lives easier, our deployments faster, and our customers happier, all championed by Vasyl.

I would be confident in any application that passes Vasyl’s testing standards. He was an integral part of our team and a major reason we could release as quickly as we did. I loved having my mind at ease, working with Vasyl.

A Recommendation of Eric Satterwhite

Working with Eric is, I imagine, akin to preparing dinner with a master sushi chef. He is one of the most confident developers I know, and rightly so. I have learned more from him in the last year than from anyone else on our team. Eric’s not content with his current abilities, though - he is constantly seeking new challenges and paradigms to test.

When we hired Eric, he blew through all of my coding tests. I had to think of new ones. Over the course of working with our team, he would rewrite entire services, imposing consistent architecture and documentation. Usually, he would just be let loose on a microservice with the words, “How can we make this better?” and Eric would produce entire libraries that we could reuse in multiple layers and locations. He repeatedly saw straight through problems to the core and fixed the machinery beneath.

Personally, I learned from Eric often. I greatly enjoyed our conversations about testing the boundaries of the application stack or JavaScript itself. Unlike other fantastically talented people I have worked with, Eric is entirely transparent about the work he does. He documents his structure and thinking alongside the code, and readily discusses or explains the features or fixes he writes with developers of many experience levels. He is truly a wellspring of knowledge for everyone around him.

Upon joining our team, Eric decided to expand his dev-ops abilities. He was the architect behind a new microservice resource allocation, and implemented an automated testing & deployment process for our 10+ services. He wrote and taught a Docker course for our internal team and implemented a new monitoring system for each of our node.js services. He succeeded in writing an image color-analyzer in node.js that was 8x faster than ImageMagick. I could go on, but I’m more excited for the next challenge he decides to tackle.

While I would say that Eric is a team asset solely for his technical excellence, I would selfishly say that I look forward more to challenging conversations with him about programming at large. Working with Eric is working with a master of his craft, and anyone is lucky to experience that.

As of this writing, Eric’s website is smugly situated at codedependant.net.

A Recommendation of Cory Keane

Cory is as earnest as he is steadfast. He is as honest as he is pragmatic. We built a team together, around the ideals we share. We set out goals a year in advance and completed them in half the time. I would work with him again, and I would recommend him to guide a project in any climate.

Over multiple years, I have worked with Cory through thick and thin. He remained constant and eager to improve in both, and drove the rest of us to follow suit. He worked tirelessly to plan every sprint, from epics down to individual tasks. He devoted his attention first and foremost to the understanding and clarity of the team around him. Rather than beginning his own tasks, Cory made it his priority to have everyone on the same (and the right) page. Thus, the entire team moved together as a unit, able to act with information.

As a leader, Cory is compassionate and economical. We went through several difficult conversations amongst the team, and after each one I emerged with new gratitude that Cory had been there. He was eager to ease the team, but never without a true basis for his actions. We never disagreed on the rate at which we grew our team, taking on complexity only when it was beneficial. Cory never compromised on our standards for candidates, which ultimately paid off in a fantastically balanced group.

At the beginning of our last year, the entire company laid out a few goals for each team. Cory kept ours top of mind, culminating in our reaching all of them before some groups had accomplished one. He understood exactly which pieces of the puzzle depended on others, and steered us away from pitfalls. Cory rode the line between ambition and pragmatism better than anyone else I have encountered.

I have a deep respect for Cory as a coworker, leader, and friend. His guidance and perseverance were a light for our team and others in the company. He built the solid base for our technology, the intricate plan to grow it, and the powerful team that built it. I would have him do the same for anyone else.

A Recommendation of Colleen Grafton

Out of all the engineers I have known, Colleen is the most structured and economical. While we worked together, she grew tenfold in her abilities and her assertiveness. She would be a valuable asset to any group, for both her technical skills and her personality.

Colleen methodically approaches building features and fixing bugs. She gathers all available information before making a decision, searching for precedents before writing her own solution. She consistently refactors previous solutions to fit new and future input, while maintaining a keen eye on old or external factors. She is the kind of member who makes a team much greater than the sum of its parts.

After beginning on our team as a junior PHP developer, Colleen quickly expanded her role. She aided in the planning and completion of automated build and test processes across multi-tiered microservice architectures and legacy systems. She also, of her own accord, began training in new languages, eventually contributing to our node.js server code and our client-side JavaScript applications. I was elated to see her confidence grow alongside her technical abilities.

In addition to her technical growth, Colleen actively grew our company as a whole. She planned, proposed, and helped realize health benefits, insurance, and vacation policies for the entire company. At a time when we were hiring faster than was comfortable, she organized regular team events and meals to encourage bonding amongst employees. With Colleen advocating for group unity and organizing benefits, we were much more secure in our relationships and ourselves.

Colleen is on my short list of those I would readily hire again. She brings much more to the table than others with the same credentials, and I trust that she will continue to build on her abilities in leaps and bounds. We should all strive to be so beneficial.

Calming Facebook's Eager Pixels

Working at an e-commerce startup, I get asked to implement new tracking features every day. I built out the integration points for Google Analytics, Google’s retargeting pixel, Google’s conversion pixel, Facebook’s retargeting pixel, Facebook’s conversion pixel, Facebook’s Audience pixel (and here is where I run out of breath). That’s not even a complete list.

Facebook’s Audience pixel, of that group, is the most recent to the table. It was introduced last year as the replacement to both Facebook’s conversion pixel and retargeting pixel. With it, Facebook also introduced a new tracking library, fbevents.js (replacing the fbds.js library, which was shared between the retargeting and conversion pixels). Phew. There goes my second breath.

Facebook’s own documentation used to carry more red flags about continuing with their deprecated retargeting and conversion pixels, but that has faded. Facebook even updates the documentation of the deprecated pixels. The overwhelming majority of our customers (and, I would bet money, Facebook’s customers) request continued support for their old retargeting and conversion pixels. As such, I continue to wade through deprecation warnings from Facebook’s tracking library.

We also continuously ran into tracking inaccuracies. With Facebook’s Pixel Helper, we could see that multiple events would be recorded via either fbevents.js or fbds.js. This bug seemed to cause different behavior over time, with some customers reporting an exceedingly high conversion rate in Facebook’s reporting interface (over 100%, at times) and some reporting exceedingly low rates (less than half of transactions). It was frustrating, and it was happening not only on our platform but on our competitors’ as well.

I debugged my way through each step of our JavaScript code, seeing us trigger only one call to Facebook’s libraries. We run a React application for our client checkout flow. We use react-router to manage history and URL state. You would think that Facebook’s tracking libraries would play nice with Facebook’s user interface library. The extra events seemed to be triggering themselves whenever the URL or history would change. But we had written our own calls to Facebook’s pixel libraries to avoid this!

I prettified the source code of fbevents.js and did the same for the deprecated fbds.js. At the very bottom of both, there is this bit (comments mine):

// s === window.fbq === window._fbq
// d === window.history
// a === window
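// c === window.location (my inference, from the c.href assignments below)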
// New code from fbevents.js
(function ra() {
    if (s.disablePushState === true) return;
    if (!d.pushState || !d.replaceState) return;
    var sa = function() {
        ba = v;
        v = c.href;
        if (v === ba) return;
        var ta = new da({
            allowDuplicatePageViews: true
        });
        ea.call(ta, 'trackCustom', 'PageView');
    };
    p.injectMethod(d, 'pushState', sa);
    p.injectMethod(d, 'replaceState', sa);
    a.addEventListener('popstate', sa, false);
})();
// ------------------------------------
// Deprecated, but still present, code from fbds.js
// (excerpted; in the original this sits inside a closure, restored here so the
// early returns are valid)
(function () {
    if (s.disablePushState === true) return;
    if (!d.pushState || !d.replaceState) return;
    var t = function() {
            k = j;
            j = c.href;
            s.push(['track', 'PixelInitialized']);
        },
        u = function(v, w, x) {
            var y = v[w];
            v[w] = function() {
                var z = y.apply(this, arguments);
                x.apply(this, arguments);
                return z;
            };
        };
    u(d, 'pushState', t);
    u(d, 'replaceState', t);
    a.addEventListener('popstate', t, false);
})();

Both Facebook pixel tracking libraries hook into the window.history methods and the popstate event. This means they will fire a new cascade of events whenever the browser history API is used and whenever your single page application (SPA) updates the browser URL to reflect the view. Our React SPA was sending API calls to Facebook’s pixels perfectly, but Facebook’s own injected code was triggering duplicate events. The solution:

// shut it down http://i.imgur.com/vxqrKua.gif
window.fbq.disablePushState = true;
window.fbq('init', this.props.fbAudiencePixel);
// or
window._fbq.disablePushState = true;
window._fbq.push(['addPixelId', this.props.fbRetargetingPixel]);
window._fbq.push(['track', 'PixelInitialized', {}]);

After that change, everything seems error-free (at least when testing with Facebook’s Pixel Helper). Time will tell if our customers continue to experience the same effects. It seems a shame that Facebook has this undocumented flag (I couldn’t find any documentation of it) that makes the library play nice within single page apps. It seems an equal shame that Facebook would eagerly trigger its own, potentially erroneous, calls on global events. It also troubles me that the same React application (with multiple calls triggering Pixel Helper errors) seems to cause erroneous results on both ends of the spectrum. And it troubles me that across our multiple e-commerce competitors, where we are the only one using an entirely React SPA, none have been able to yield a 1:1 ratio of internal conversions to Facebook pixel conversions.

To anyone building a single page app right now, the best advice I can give is to set disablePushState. That should be necessary for Angular, React, and mobile frameworks alike. I’ve been able to find one other codebase identifying this problem, Segment.io’s tag manager. The good (or bad?) news is that disablePushState seems to be the only Facebook pixel magic flag.

My Bookmarks Are All Bookmarklets

I love all my bookmarks. None of them actually go to any websites, though. They’re all bookmarklets.

A bookmarklet is a little app that lives in a bookmark: the bookmark’s URL is JavaScript code, executed against the current page whenever you click it.
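
For illustration, the smallest one I can think of is a single line saved as a bookmark’s URL - everything after the javascript: scheme runs against the current page when clicked:

javascript:alert(document.title);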

Some of them have been around for years, and I recently got into the habit of making them again. Mostly, I just make them for repetitive or arduous tasks while debugging browser junk. I thought I would list out some of my favorites.

Source

This little love dumps the HTML source of the current page in a new tab. This isn’t very useful most of the time, but utterly invaluable when trying to debug a client from my smartphone on the L train at 2AM.

source

javascript:(function(){
    var a = window.open('about:blank').document, b;
    a.write('<!DOCTYPE html><html><head><title>Source of '+location.href+'</title><meta name="viewport" content="width=device-width" /></head><body></body></html>');
    a.close();
    b = a.body.appendChild(a.createElement('pre'));
    b.style.overflow = 'auto';
    b.style.whiteSpace = 'pre-wrap';
    b.appendChild(a.createTextNode(document.documentElement.innerHTML));
})();

Headers

I love finding little gems hiding in the headers of other websites. I try to put a bit of gold in the headers of websites I build myself. This bookmarklet has helped me test CORS issues and cookie issues, or even just satisfy curiosity about what stack competitors are using.

headers

javascript:(function(){
    var req = new XMLHttpRequest(), headers;
    req.open('GET', prompt('page?',document.location), false);
    req.send(null);
    headers = req.getAllResponseHeaders().toLowerCase();
    alert(headers);
})();

Toggle CSS

Many designers/developers use CSS as a whole lot of lipstick on a pig. Sometimes, it’s a lot easier to debug something without all that color getting in the way. This bookmarklet toggles the entire CSS structure on and off, so you can wipe it all away or put it right back.

toggleCSS

javascript:(function(){
    function d(a,b){
        a.setAttribute("data-css-storage",b)
    }
    function e(a){
        var b = a.getAttribute("data-css-storage");
        a.removeAttribute("data-css-storage");
        return b
    }
    var c = [];
    (function(){
        var a = document.body,
            b = a.hasAttribute("data-css-disabled");
        b ? a.removeAttribute("data-css-disabled") :
            a.setAttribute("data-css-disabled","");
        return b
    })() ?
    (
        c = document.querySelectorAll("[data-css-storage]"),
            [].slice.call(c).forEach(function(a){
                "STYLE" === a.tagName ?
                    a.innerHTML=e(a) :
                    "LINK" === a.tagName ?
                        a.disabled = !1 :
                        a.style.cssText = e(a)
            })
    ) :
    (
        c = document.querySelectorAll("[style], link, style"),
            [].slice.call(c).forEach(function(a){
                "STYLE" === a.tagName ?
                    (d(a, a.innerHTML), a.innerHTML="") :
                    "LINK" === a.tagName ?
                        (d(a, ""), a.disabled = !0) :
                        (d(a, a.style.cssText), a.style.cssText = "")
            })
    )
})();

Others

Here are some others, maintained by other people, that I use quite frequently:

Rate-Limit Your Node.js API in Mongo

Update: After a request by Jason Humphrey, I’ve released this implementation as a standalone NPM module: mongo-throttle.

I needed to build a rate-limiting middleware for the new Narro public API, and I was inspired to make the database do my heavy lifting. In Narro’s case, that’s MongoDB.

Expiring Records From MongoDB

Mongo has a useful feature called a TTL index.

TTL collections make it possible to store data in MongoDB and have the mongod automatically remove data after a specified number of seconds or at a specific clock time.

You can tell Mongo to remove data for you! We will use this to remove expired request counts from our rate-limiting check. There are a couple of important things to note about this feature:

  • As an index, it is set when the index is created. If you want to change it later, you’ll have to do so manually.
  • The index-specific option, expireAfterSeconds, is measured in seconds. Unlike most other timestamps in your JavaScript code, which are in milliseconds, don’t convert this by a factor of 1000 (see the shell sketch below).
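
For a concrete picture, here’s the raw index in the mongo shell (a sketch - the collection name is an assumption):

// Expire documents 600 seconds after their createdAt timestamp
db.throttles.createIndex({ createdAt: 1 }, { expireAfterSeconds: 600 });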

Throttle Model

First, let’s build the model to store in our rate-limiting collection. Here we define our TTL via expires on the createdAt field (it only takes one field to expire a record from the collection). We also define a max number of requests per IP address (validated against an IPv4-specific regex).

/**
 * A rate-limiting Throttle record, by IP address
 * models/throttle.js
 */
var Throttle,
    mongoose = require('mongoose'),
    config = require('../config'),
    Schema = mongoose.Schema;

Throttle = new Schema({
    createdAt: {
        type: Date,
        required: true,
        default: Date.now,
        expires: config.rateLimit.ttl // (60 * 10), ten minutes
    },
    ip: {
        type: String,
        required: true,
        trim: true,
        match: /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/
    },
    hits: {
        type: Number,
        default: 1,
        required: true,
        max: config.rateLimit.max, // 600
        min: 0
    }
});

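// Note: the 'expires' shorthand on createdAt above already creates this same
// TTL index, so the explicit declaration below is redundant (but harmless).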
Throttle.index({ createdAt: 1  }, { expireAfterSeconds: config.rateLimit.ttl });
module.exports = mongoose.model('Throttle', Throttle);

Throttler Middleware

I’m using Express/Koa here, so I’m going to write this as a middleware library. All we want to do is find or create a Throttle record for the requesting IP and increment its hit count. Upon reaching the max, we can truncate the request chain immediately. The benefit we get from defining our model above is never having to reset records or remove them from the collection!

// Module dependencies
var config = require('../config'),
    Throttle = require('../models/throttle');

/**
   * Check for request limit on the requesting IP
   *  
   * @access public
   * @param {object} request Express-style request
   * @param {object} response Express-style response
   * @param {function} next Express-style next callback
   */ 
module.exports = function(request, response, next) {
    'use strict';
    var ip = request.headers['x-forwarded-for'] ||
        request.connection.remoteAddress ||
        request.socket.remoteAddress ||
        request.connection.socket.remoteAddress;

    // this check is necessary for some clients that set an array of IP addresses
    ip = (ip || '').split(',')[0]; 

    Throttle
        .findOneAndUpdate({ip: ip},
            { $inc: { hits: 1 } },
            { new: true, upsert: false }) // 'new' returns the updated doc, so the hits check below sees this request
        .exec(function(error, throttle) {
            if (error) {
                response.statusCode = 500;
                return next(error);
            } else if (!throttle) {
                throttle = new Throttle({
                    createdAt: new Date(),
                    ip: ip
                });
                throttle.save(function(error, throttle) {
                    if (error) {
                        response.statusCode = 500;
                        return next(error);
                    } else if (!throttle) {
                        response.statusCode = 500;
                        return response.json({
                            errors: [
                                {message: 'Error checking rate limit'}
                            ]
                        });
                    }

                    respondWithThrottle(request, response, next, throttle);
                });
            } else {
                respondWithThrottle(request, response, next, throttle);
            }
        });

    function respondWithThrottle(request, response, next, throttle) {
        var timeUntilReset = (config.rateLimit.ttl * 1000) -
                    (new Date().getTime() - throttle.createdAt.getTime()),
            remaining =  Math.max(0, (config.rateLimit.max - throttle.hits));

        response.set('X-Rate-Limit-Limit', config.rateLimit.max);
        response.set('X-Rate-Limit-Remaining', remaining);
        response.set('X-Rate-Limit-Reset', timeUntilReset);
        request.throttle = throttle;
        if (throttle.hits < config.rateLimit.max) {
            return next();
        } else {
            response.statusCode = 429;
            return response.json({
                errors: [
                    {message: 'Rate Limit reached. Please wait and try again.'}
                ]
            });
        }
    }
};

Throttling In Use

Once we have our middleware in place, we can simply drop it into the request-handling chain of Express/Koa and appropriately rate-limit our clients.

var fs = require('fs'),
    throttler = require('../lib/throttler'),
    pkg = JSON.parse(fs.readFileSync('./package.json'));

// I'll assume you've defined your app instance
app.get('/api', throttler, function(req, res) {
    res.jsonp({
        meta: {
            version: pkg.version,
            name: pkg.name
        }
    });
});

In practice, I placed the throttler middleware ahead of things like authentication. If you wanted to rate-limit on something like an API key or authenticated user record, you could do so by placing authentication ahead of rate-limiting and changing the ip field on the Throttle model to something like a user ID or API key.
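
As a sketch of that variation (assuming an authentication middleware has already populated a hypothetical request.user, and the Throttle model’s ip field has been renamed to a generic key):

// lib/throttler.js, keyed on the authenticated user instead of the IP
// (same module dependencies as the middleware above)
module.exports = function(request, response, next) {
    'use strict';
    var key = request.user ? String(request.user.id) : request.ip;
    Throttle
        .findOneAndUpdate({ key: key },
            { $inc: { hits: 1 } },
            { new: true, upsert: false })
        .exec(function(error, throttle) {
            if (error) { return next(error); }
            // ...same find-or-create and limit logic as above
            next();
        });
};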

Databases Doing Dirty-work

Eric did a great thing in the past two weeks with his implementation of self-updating MySQL tables. In short, he wrote a table definition that updates itself on the hour: it determines the newly accrued data, then recalculates its own columns and records by summing and saving rows for each of our customers. Think of it as a preemptive cache whose only overhead is whatever has accrued in the last hour, with the added benefit of being entirely contained within our MySQL table definitions.

It’s reminded me of the old adage about letting the database do the work for you. There’s usually a way to get the information collated and keyed just the way you want it, but it will take more forethought in your query. And you more than likely won’t be able to use your shiny ORM.

Inspired by Eric’s approach, I started researching some specialty methods for MongoDB. I use Mongo as the datastore for the main service (out of a few micro-services) on Narro. MongoDB doesn’t have the job scheduling Eric employed for calculating time-series data, but it does have auto-expiry of records. I wonder what we could do with this? How about building a rate-limiting service that auto-expires request counts!

Instead of Writing

A list of things I did yesterday instead of writing a thoughtful piece here:

Timestamp - Going to a Party

I need to be there at 8, and it should take ten minutes to drive there. I request an Uber at 7:40. On the map displayed within their application, the cars disappear as I make the request. They were lies! I wait five minutes for the driver who accepted. The driver is lost and calls me for directions. I wait another ten minutes. The driver was directed to the wrong location by his Uber application. I give up and walk the block to where he is, because he is still lost. He is driving an old truck. He drives me to the wrong entrance of the building, again only because he was directed there by Uber itself. Luckily, I make it inside.

$6.50


On the way home, I leave the building and hail a cab immediately. The driver goes quickly, directly to my door. The car is electrically powered, as mandated by the city government.

$8.70

Swiss Army Side Project

I was thinking today about the benefit of having a tiny side project. I hope most people have one. I have quite a few, but one in particular has been useful to me. I have broken Narro out into microservices from the beginning, and one of those is the podcast feed generator service.

Originally, it was written in Node.js, chosen for speed of development. Then I went through a phase of learning Go and seeing server-side code in that light, so the podcast generator was re-written in Go. That also gave me the opportunity to write a podcast generation library in Go: gopod. Now I’m learning languages again and have begun to eye the service once more; I’m thinking of Rust or Lisp at this point. Having written this service multiple times gives me the ability to compare languages and libraries as they tackle the exact same task. It’s a valuable perspective.

Unique Humans.txt Files

I was making robots.txt and humans.txt files for Narro recently, and I wanted to find a few unique examples. I was looking for something that included more than the boilerplate from humanstxt.org. I think the humans.txt file should be a place for a bit of expression, and rigid structure should be avoided. Please send me any others, but here are the interesting ones I found:

Accursed With a Couple Customers

I’ve been seeing a trend at some of the startup companies I’ve worked at. It tends to happen that one prolific and available customer drives the majority of revenue or traffic. That’s all well and good, but what usually happens is that this one (or two) customers start making decisions in their own best interest. Who can blame them?

The original path that would lead to many more customers for the startup is abandoned for this one customer group. It’s hard to break the cycle. You tell yourself the customer in hand is worth more than all the customers in the bush. Every article out there is telling you how lucky you are to have paying customers. You must be doing something right!

You’re actually digging yourself a hole. The hole leads to the product your one customer group wants, not your original vision for the industry or product. You’ll have to decide whether you want to court this one customer group or go after the real reason you started this. I know venture capitalists can give many reasons for pivoting the company to your existing customer base. All I can say is that I haven’t seen it work out well. In every case in my experience, it has led to the once-promising startup becoming a lap-dog to the hand that feeds it. You lose employees who are frustrated without the original driving idea. And you tie the company ever-closer to this one customer.

Anywhere I just used the word customer, you could just as easily substitute investor with the same effect.

Thoughts After App Release

In the first week of the iOS app’s availability, the Narro community nearly doubled in size.

I was happily surprised! So far, every feature I have built for Narro has been a direct result of a) some idea I had for myself, or b) some request made by an existing user. The iOS app was no exception. As such, I was expecting mostly extant users to download Narro on iOS.

I have thought before about building things for potential Narro users. I think this proves that a feature can both satisfy current customers and attract new, unknown markets. Here’s to happy discoveries.

Thoughts While Waiting for App Review

I just pressed the button to submit Narro for iOS into the App Store. After 12 revisions, 3 weeks of testing, and 15 external beta testers, I think it’s ready to go.

I’ve worked on teams submitting apps to closed platforms (iOS, Android, Blackberry), but this is my first app submission alone. As I settle in for the inevitable waiting period while Big Apple looks over my code, here are some thoughts:

  • The documentation isn’t as nice as you may imagine.
  • It is so cool to have a hand in something that is in the hands of so many people.
  • Waiting in lines for attention just to be told you don’t smell right is no fun.
  • Since it will be reviewed, it’s nice to feel like you’re not able to fuck things up too badly.
  • Having to get approval first, rather than asking for forgiveness, sucks.
  • You have only one environment to work in - but then, only one to worry about.
  • But not really, because iOS has legacy software and hardware, too.

There are several parallel considerations between a solid web app and a solid native app. I always enjoy optimizing page load for my web apps, and so it was this time. To minimize the app size of Narro for iOS, I used dynamically-generated images for the onboarding tutorial. I didn’t have to include any image assets at all!

Update 2015-11-08

Got rejected on this first submission:

Apps or metadata that mentions the name of any other mobile platform will be rejected.

Argh. Remove, recompile, resubmit.

Increased Speed and Urgency

From Jack Dorsey’s re-introduction as CEO of Twitter today:

It seems strange to desire increased urgency on your team. This brings me back to some thoughts I’ve been writing down about different types of fuel powering your work. I need to compile them all together.

A sense of urgency or stress is definitely on my list, but I also have it earmarked as one that rapidly depletes you in the end game.