Monday, March 30, 2015

What If Uber's Just A Terrible Business?

Sometimes an industry which is very tightly regulated, or which has a lot of middlemen, has those obstructions for a reason.

Working for any taxi service holds enormous opportunities for kidnappers or rapists, and makes you an easy target for armed robbery. In some cities, being a cab driver is very dangerous. In other cities, it's dangerous to take a cab anywhere - don't try it in Argentina - and in many cities it's dangerous for both the driver and the passenger. A taxi service with built-in surveillance could be even worse in the hands of a total maniac. And just to state the obvious, both Uber drivers and regular taxi drivers have logical incentives for driving unsafely.

In this context, I have no problem with governments wanting to regulate Uber literally to death, at least outside of California. California is a special case, in my opinion. Cabs are completely unacceptable garbage throughout California, and the startup scene definitely skews Californian, so the typical startup hacker probably thinks all cabs suck, but cabs are gold in London, Chicago, and New York, and probably quite a few other places too.

In Los Angeles, cabs are licensed by city. A cab driver can only operate in one "city," and can face disastrous legal consequences if they pick up a passenger outside of their "city." But the term's misleading, because the "city" of Los Angeles is really a massive archipelago of small, loosely affiliated towns. It's extremely common that a taxi cab will be unable to pick up any passengers once it drops somebody off. I lived in San Francisco before I lived in Los Angeles, and I had always assumed Bay Area taxis were the worst in the world, but Los Angeles cabs make San Francisco cabs look competent.

(In the same way that San Francisco's MUNI system makes LA's Metro look incredible, even though both suck balls compared to the systems in New York, London, and Chicago. I'm told the system in Hong Kong puts even London's to shame.)

By contrast, in London, you have to pass an incredibly demanding driving and navigation test before you can become a cab driver. Researchers have demonstrated that passing this test produces measurable growth in the regions of the cab driver's brain responsible for maps and navigation.

Uber is obviously a disruptive startup, but disruption doesn't always fix things. The Internet crippled the music industry, but introduced stacks of new middlemen in the process. Ask Zoe Keating how that "democratization" turned out.

In music, the corporate middlemen are this pernicious infestation, which just reappears after you thought you'd wiped it out. Artists have to sacrifice a lot to develop their art to a serious level, and a lot of music performance takes place in situations where people are celebrating (i.e., drunk or high, late at night). So somebody has to provide security, as well as business sense. There are a lot of opportunities for middlemen to get their middle on, and disrupting a market, under those conditions, just means shuffling in a new deck of middlemen. It doesn't change the game.

In the same way that the music industry is a magnet for middlemen, the taxi industry is a magnet for crime. A lot of people champion Uber because it bypasses complex regulations, but my guess is that disrupting a market like that means you just reshuffle the deck of regulations. If Uber kills the entire taxi industry, then governments will have to scrap their taxi regulations and re-build a similarly gigantic regulatory infrastructure around Uber instead. You still have to guard against kidnapping, rape, armed robbery, unfair labor practices, and unsafe driving. None of those risks have actually changed at all. Plus, this new regulatory infrastructure will have to deal with the new surveillance risks that come along with Uber's databases, mobile apps, and geotracking.

Uber's best-case scenario is that society at large will pay for the consequences of their meager "innovation." But once you account for that cost realistically, Uber is not introducing a tremendous amount of new efficiency to the taxi market at all. They're just restructuring an industry so that it offloads, to taxpayers, the costs and complexities of a massive, industry-wide technology update.

Only time will tell for sure, but I think Uber is a really good lesson for entrepreneurs in how a market can look awesome but actually suck. From a long-term perspective, I don't see how they can hope for any better end-game than bribing corrupt politicians. While that is certainly a time-honored means of getting ahead, and it very frequently succeeds, it's not exactly a technological innovation.

Monday, March 23, 2015

Scrum Fails, By Scrum's Own Standards

I've raised some criticisms of Scrum and Agile recently, and gotten a lot of feedback from Scrum advocates who disagree.

My main critique: Scrum's full of practices which very readily and rapidly decay into dysfunctional variants, making it the software development process equivalent of a house which falls apart while people are inside it.

Most Scrum advocates have not addressed this critique, but among those who have, the theme has been to recite a catchphrase: "Inspect And Adapt." The idea is it's incumbent upon any Scrum team to keep an eye out for Scrum decay, and prevent it.

From scrumalliance.org:

Scrum can be described as a framework of feedback loops, allowing the team to constantly inspect and adapt so the product delivers maximum value.

From an Agile glossary site:

“Inspect and Adapt” is a slogan used by the Scrum community to capture the idea of discovering, over the course of a project, emergent software requirements and ways to improve the overall performance of the team. It neatly captures both the concept of empirical knowledge acquisition and feedback-loop-driven learning.

My biggest critic in this Internet brouhaha has been Ron Jeffries. Here's a sampling of his retorts:


Mr. Jeffries is a Certified Scrum Trainer and teaches Certified Scrum Developer courses.

In a blog post, Mr. Jeffries acknowledged that I was right to criticize the Scrum term "velocity." He then added:

For my part in [promoting the term "velocity," and its use as a metric], I apologize. Meanwhile you’ll need to deal with the topic as best you can, because it’s not going away.

He reiterates this elsewhere:

Mr Bowkett objects to some of the words. As I said before, yes, well, so do I. This is another ship that has sailed.

The theme here is not "Inspect And Adapt." The theme is "you're right, but we're not going to do anything about it."

This isn't just Mr. Jeffries being bad at handling disagreement. It's also the case with Scrum generally. I'm not the first or only person to say that sprints should be called iterations, or that "backlog" is a senselessly negative name for a to-do list, or that measuring velocity nearly always goes wrong. Scrum also has a concept of iteration estimates which it calls "sprint commitments" for some insane reason, and this terrible naming leads to frequent misunderstandings and misimplementations.

These are really my lightest and most obvious criticisms of Scrum, but the important thing here is that people who love Scrum have been seeing these problems themselves over the twenty years of Scrum's existence, and nobody has fixed these problems yet. In each case, all they had to do was change a name. That's a whole lot of inspecting and adapting which has not happened. The lowest-hanging fruit that I can identify has never been picked.

Monday, March 16, 2015

Ladies And Gentlemen, Nastya Maslova

A very talented young multi-instrumentalist, singer, and beatboxer from Russia, with many videos on YouTube (some under her name's Russian spelling, Настя Маслова).



Thursday, March 12, 2015

And Now For A Breakdown

Here's a live drum and bass set from Mistabishi on a Korg Electribe MX.



Here's another one:



Here's a more electro-house-ish set Mistabishi recorded live at an illegal rave in London last year:



You can actually download all the patterns and patches used in this set from the Korg web site.

Korg also had Mistabishi demo the new Electribe – basically the next generation of the EMX – in a couple YouTube videos:



(In this one, he enters the video at around 6:45.)



The new Electribe seems pretty cool, but these videos motivated me to buy an EMX instead. The new one's untested, while the EMX is a certified classic, with a rabid fan base, tons of free downloadable sounds, tons of videos on YouTube showing you how to do stuff, forum posts all over the Internet, and rave reviews (no pun intended).

Also, the new version literally has a grey-on-grey color scheme, while the EMX looks like this:



They run about $350 on eBay, but if you find an auction ending at 8am on a Sunday, you might pick one up for just $270 (I did).

It also helps that I'm a huge fan of the original Electribe, the Korg ER-1, one of the most creative drum machines ever made. I've owned two (or three, if you count the iElectribe app for iOS, based on the second iteration of the ER-1).

Wednesday, March 4, 2015

Why Ron Jeffries Should Basically Just Learn To Read

Dear Agile, Your Gurus Suck


Like any engineer, I've had some unproductive conversations on the internet, but I caught a grand prize winner the other day, when Agile Manifesto co-signer Ron Jeffries discovered Why Scrum Should Basically Just Die In A Fire, a blog post I wrote months ago.

I tried to have a civil conversation with him, but failed. It was such an uphill climb, you could consider it a 90° angle.

I answered Mr. Jeffries's points, but he ignored these remarks, and instead reiterated his flawed criticisms, at length, on his own blog.

If you're the type who skips drama, I'm sorry, but you want a different post. I wrote that post, and published it on my company's blog. I got some better criticism of my anti-Scrum blog posts, and I responded to it there. I also heard from Agile luminaries like Mr. Jeffries, but unfortunately, none of the insightful criticism came from them.

Because of Mr. Jeffries's high profile, I have to respond to his "rebuttal." But he failed at reading my post, and again at reading my tweets, so my rebuttal is mostly going to be a Cliff's Notes summary of previous remarks.

But let's get it over with.

It's Not Just Me


RSpec core team member Pat Maddox said this after seeing the Twitter argument:


"Ward" is Ward Cunningham – another Agile Manifesto co-signer, the inventor of the wiki, and one of the creators of Extreme Programming. I'm not a huge fan of methodologies, but in my opinion, Extreme Programming beats Scrum. Even if you disagree, for people in Agile, yes, a Ward Cunningham recommendation ought to be solid, every time.

Here's an anonymous email I got from somebody else who witnessed the flame war:
We had a local “Scrum Masters” meet up here in [my home town] maybe 5 years ago. At the time my wife was trying to figure out a career transition for herself and a friend recommended becoming a Scrum Master. I’d been doing Scrum for a couple of years and I was friends with the organizers so I sent her over to that meet up. She came back and told me it was an epic fail and Ron Jeffries was the most unprofessional, rude speaker that she’d ever seen. In hindsight, it was probably a good thing that it turned her off Scrum completely.

Ron Jeffries was rude to most of the people asking questions about how to implement Scrum. One person in particular mentioned having an offshore team and some of the challenges involved in making Scrum work with a geographically distributed development team.

Ron Jeffries’ response was “Fire all the folks not in your office. If you’re trying to work with people offshore, you’ve already failed. Next question."

Even if this was 2010 and the prevailing thought in our industry was that offshoring and remote work didn’t work, his response was just tactless. He had similar short, rude responses to nearly every question that night and it resonated negatively with everyone there. I don’t think they ever asked him back there again.
To his credit, Mr. Jeffries starts out his blog post by acknowledging that he could have been more polite with me:
My first reaction was not the best. I should have compassionately embraced Giles’s pain... But hey, I’m a computer programmer, not a shrink.
This "apology" implies my expectation of basic common courtesy is equivalent to mental illness.

Mr. Jeffries then stumbles through several failures at basic reading comprehension.

Basic Context Fail


We'll start simple. In my post, I wrote:
I've twice seen the "15-minute standup" devolve into half-hour or hour-long meetings where everybody stands, except for management.
I then gave examples.
At one company...

At another company...
Mr. Jeffries used the singular every single time he described my teams, my projects, or the companies I worked for:
Things were not going well in planning poker and Giles’s team did not fix it...

Giles’s team’s Product Owner...

Why didn’t the time boxes do that for Giles’s team?

...I’m sad that Giles suffered through this, and doubtless everyone else on the project suffered as well...

And I’m glad Giles remarks that Scrum does not intend what happened on his team. It does make me wonder why this article wasn’t entitled “Company XYZ and Every One of its Managers Should Just Die in a Fire.”
So he didn't even get that when I said I was talking about two companies, I was talking about two companies.

Advanced Context Fail


In fairness, you only had to miss the point of four or five paragraphs in a row to miss that detail, and my post contained many paragraphs. However, the point of those paragraphs was that you see Scrum practices decay at multiple companies because Scrum practices are vulnerable to decay. They have a half-life, and it is brief.

Mr. Jeffries missed this point. He also failed to recognize hyperbole which I'd thought was glaringly obvious. I said in my post:
In fairness to everybody who's tried [planning poker] and seen it fail, how could it not devolve? A nontechnical participant has, at any point, the option to pull out an egg timer and tell technical participants "thirty seconds or shut the fuck up." This is not a process designed to facilitate technical conversation; it's so clearly designed to limit such conversation that it almost seems to assume that any technical conversation is inherently dysfunctional.

It's ironic to see conversation-limiting devices built into Agile development methodologies, when one of the core principles of the Agile Manifesto is the idea that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation," but I'll get to that a little later on.

For now, I want to point out that Planning Poker isn't the only aspect of Scrum which, in my experience, seems to consistently devolve into something less useful.
These egg timers really exist. I added "shut the fuck up" to highlight the ridiculousness of waving an egg timer at somebody in order to make a serious business decision. Somehow, however, what Mr. Jeffries took away from those words was that I actually, in real life, attended a planning poker session where somebody told somebody else to shut the fuck up.

Quoting from his post:
What’s going on here, and why is planning poker being blamed for the fact that Giles seems to be surrounded by assholes?

...People do cut their fingers off with the band saw, and I guess people do get told to shut the fuck up in a planning poker session.

Should not happen. Should never happen...

No one in Scrum wants abusive behavior...

Things were not going well in planning poker and Giles’s team did not fix it. That makes me sad, very sad, but it doesn’t make me want to blame Scrum. Frankly even if Scrum did say “Sometimes in planning meetings tell people to shut the fuck up”, that would be so obviously wrong that you’d be a fool to do it...

someone with a pair could take an individual aside and tell them not to be a jerk.
I don't even get how Mr. Jeffries took that literally in the first place.

But I'll rephrase my point: building conversation-limiting rules into planning poker contravenes the basic Agile principle that face-to-face conversation is important. Additionally, these conversation-limiting rules provide an easy avenue through which the Scrum process can devolve into something less Agile – and also less productive – and I believe Scrum has several more design flaws like that.

Offensive Agreement


Since Mr. Jeffries equated asking for basic courtesy with asking for free therapy, it's probably no shock that his condescending tone persists even when he acknowledges my points. I pointed out naming errors in Scrum, and he responded:
I’m not sure I’d call those “conceptual” flaws, as Giles does, but they’re not the best possible names. Mary Poppendieck, for example, has long objected to “backlog”. Again, I get that. And I don’t care...

Giles objects to velocity. I could do a better job of objecting to it, and who knows, somewhere in here maybe I have, but the point is a good one.
This is an adult, intending his words for public consumption. Even when he sees I'm right, he dismisses my critique in favor of his own critique, even though he also acknowledges that his own critique probably doesn't exist.



Still, this is the apex of maturity in Mr. Jeffries's post, so I'll try to be grateful. We do at least agree "sprint," "backlog," and "velocity" are flawed terms. If a civil conversation ever emerges from this mess, it might stand on that shred of common ground.

But I'm not optimistic, because Mr. Jeffries also says:
For my part in [promoting the term "velocity," and its use as a metric], I apologize. Meanwhile you’ll need to deal with the topic as best you can, because it’s not going away.
Apology accepted, and I'll give him credit for owning up to a mistake. But his attitude's terrible. If velocity's not good, you should get rid of it. And I don't need to deal with it, because it absolutely is going away.

Scrum's a fad in software management, and all such fads go away sooner or later. The most embarrassing part of this fracas was that, while my older followers took it seriously, my younger followers thought the whole topic was a joke. Velocity is, in my own working life, less "going away" than "already gone for years." My last full-time job with Scrum ended in 2008. Since then, I've had to deal with velocity for a grand total of four or five months, and they weren't all contiguous.

Mr. Jeffries continues:
Giles has gone to an organization that pretty much understands Agile...

I’m not sure just what they do at his new company, Panda Strike, but I hope he’ll find some good Agile-based ideas there, and I think probably he will. More than that, I hope Giles finds a fun way to be productive...
I'd agree Panda Strike "pretty much understands Agile," but that's why we regard Scrum with skepticism. We also understand several flaws in Agile that I'd guess Mr. Jeffries does not. And it should be obvious I already had "a fun way to be productive" before Scrum got in my way. Why else would I dis Scrum in the first place?

For the record, at Panda Strike, we do use some Agile ideas, but that's a different blog post.

I'm publishing this on my personal blog, not the Panda Strike blog, because this is my own social media snafu. On the Panda Strike blog, I've gone into more detail about Agile and Scrum, our skepticism, the compromises we'll sometimes make, the flaws we've found in the Agile Manifesto, and the things we still like about it (and, to a lesser extent, Scrum). That blog post is more balanced than my personal attitude on my personal blog.

My post on the Panda blog is a response to the coherent criticism I've received. Please do read it. I hope you enjoy it. But here I'm responding to incoherent criticism from a high-profile source. Sorry - the man has sixteen thousand Twitter followers, and every one of them seems to want to bother me.



Also, a point of order: I am not "Giles" to you, Mr. Jeffries. I am "Mr. Bowkett" or "Mr. Bowkett, sir." Those are your only choices. You don't know me, sir. Mind your manners, please, because I'd like to use my own without feeling put upon. And your repeated claim of being "sad" on my behalf would be creepy if it seemed plausible, but it does not. It reads as insincere, so maybe you could just stop.

Ignored Tweets


My blog post argued that Scrum practices decay rapidly into non-Agile practices. Weirdly, most of my critics have repeatedly stated that my argument was not valid because I provided examples of Scrum practices which had decayed into less Agile versions.

Mr. Jeffries was one such critic. During my Twitter argument with him, I reminded him of this point repeatedly. Here's one of several such reminders:

Even after this exchange, he continued to hammer the idea that my argument – that Scrum's Agile practices decay rapidly – is invalid because the decayed versions of these practices are not Agile.
If people miss the point every time, I didn't make my point clear. But if they ignore clarification, they're arguing in bad faith.

Mr. Jeffries's Terrible Reading Comprehension May Not Be A Fluke


One of Mr. Jeffries's Twitter followers, a Scrum consultant named Neil Killick, wrote a blog post criticizing my post also:
A controversial blog post... suggests that Scrum is broken for many reasons, but one of those reasons cited is that the 15 minute time box for Daily Scrum (aka Standup) is too short to allow for meaningful conversations, and promotes a STFU culture.
This is false.

The phrase "shut the fuck up" occurred in a discussion of planning poker, not daily standups. This concept of "a STFU culture" is Mr. Killick's invention. The time-boxing in standups might undermine the Agile principle that "the most efficient and effective method of conveying information to and within a development team is face-to-face conversation" – it's certainly odd to find this "inhibiting conversation" theme in an "Agile" methodology – but a major theme in my post was that standups frequently go too long.

Having failed to grasp this, and possibly without ever reading my blog in the first place, Mr. Killick then explains how to solve a problem I never had:
IMO, in the situation the author describes, Scrum has exposed that perhaps the author and his colleagues feel their time together in the morning is so precious that they resent ending it artificially.

As the Scrum Master in this situation I would be trying to find out why they are not able to continue talking after the Daily Scrum? Perhaps they struggle to find the time or will to collaborate during the working day, so being forced together by the Daily Scrum gives them an opportunity that they do not want to cut short?
So, recap.

I write a blog post about how Scrum's a waste of time. I get two blog posts defending Scrum. Neither of the individuals who wrote these posts showed enough respect for my time to read the post they were responding to. Both of them say inaccurate things about my post. Both of them then offer me advice about how to solve problems I never had. And this is their counterargument to my claim that Scrum is a waste of time.


A Quick Defense Of Some Innocent Bystanders


Many Scrum defenders have said things to me like "if your company had a problem with Scrum, it was a messed-up company," or even "if your team had a problem with Scrum, they were incompetent."

On behalf of my former co-workers and former managers, fuck you.



One of the companies I mentioned was indeed very badly mismanaged, yet I worked with terrific people there. There were some flaws at the other, too, but it was much better. There were plenty of good people, nothing really awful going on, and Scrum still decayed.

(In fact, standups decayed to a similar failure state in each case, even though the circumstances, and the paths of deterioration, were very different.)

I saw these problems at two companies with a serious investment in Scrum, and I saw echoes of these problems elsewhere too. It's possible that I just fell into shit company after shit company, and the common theme was pure coincidence. But I doubt it. And if your best argument relies on coincidence, and boils down to "maybe you just suck," then you're not coming from a strong position.

Is that even an argument? Or is it just a silencing tactic?

The first time or two that I saw Scrum techniques fail, my teams were using them informally. I thought, "maybe we should be using the formal, complete version, if we want it to work." The next time I saw Scrum techniques fail, we got official Scrum training, but the company was already being mismanaged, so I thought, "maybe it doesn't matter how full or correct our implementation is, if the people at the top are messing it up." The next time after that, management was better, and the implementation was legit, but we were using a cumbersome piece of software to manage the process. So I thought, "maybe Scrum would work if we weren't using this software."

Eventually, somebody said to me, "hey, maybe Scrum just doesn't work," and it made more sense than any of these prior theories.

And Scrum's answer to this is "maybe you just suck"?

Maybe.

Maybe not.

Valid Critiques Addressed Elsewhere


As I said, I got better opposing arguments than these, and I addressed them on the Panda Strike blog. Check it out if you're curious.

Sunday, February 22, 2015

Rest In Peace, Carlo Flores

This is my third "rest in peace" blog post since Thanksgiving, and in a sense, it's the saddest.

My friend Carlo Flores has died. He lived in Los Angeles, and on Twitter, every hacker in Los Angeles was going nuts with grief a few days ago. None of them would say what happened. I had the terrible suspicion Carlo killed himself, and I eventually found out I was right:

I can't even imagine the pain he must have been in to seek a way out, knowing that it would make all of us miss him this way.

I love you Carlo. I'm going to rock for you today.







Friday, January 30, 2015

ForwardJS Next Week: JSCK Talk, Free Tickets

I'll be giving a talk on JSCK at ForwardJS next week. And I have two free tickets to give away — first come, first served.

I'm looking forward to this conf a lot, mostly because there's a class on functional programming in JavaScript. I know there are skeptics, but I recently read Fogus's book about that and it was pretty great.

Tuesday, January 27, 2015

Superhero Comics About Modern Tech

There are two great superhero comics which explore social media and NSA surveillance in interesting ways.

I really think you should read these comics, if you're a programmer. Programming gives you incredible power to shape the way the world is changing, as code takes over nearly everything. But both the culture around programming, and the educations which typically shape programmers' perspectives, emphasize technical details at the expense of subjects like ethics, history, anthropology, and psychology, leading to incredibly obvious and idiotic mistakes with terrible consequences. With great power comes great responsibility, but at Google and Facebook, with great free snacks come great opportunities for utterly unnecessary douchebaggery.


A lot of people in the music industry talk about Google as evil. I don’t think they are evil. I think they, like other tech companies, are just idealistic in a way that works best for them... The people who work at Google, Facebook, etc can’t imagine how everything they make is not, like, totally awesome. If it’s not awesome for you it’s because you just don’t understand it yet and you’ll come around. They can’t imagine scenarios outside their reality and that is how they inadvertently unleash things like the algorithmic cruelty of Facebook’s yearly review (which showed me a picture I had posted after a doctor told me my husband had 6-8 weeks to live).

Fiction exists to explore issues like these, and in particular, fantastical fiction like sci-fi and superhero comics is extremely useful for exploring the impact of new technologies on a society. This is one of the major reasons fiction exists and has value, and these comics are doing an important job very effectively. (There's sci-fi to recommend here as well, but a lot of the people who were writing sci-fi about these topics seem to have almost given up.)

So here these comics are.



In 2013 and 2014, Peter Parker was dead. (Not really dead, just superhero dead.) The megalomaniac genius Otto Octavius, aka Dr. Octopus, was on the verge of dying from terminal injuries racked up during his career as a supervillain. So he tricked Peter Parker into swapping bodies with him, so that Parker died in Octavius's body and Octavius lived on inside Parker's. But in so doing, he acquired all of Parker's memories, and saw why Parker dedicated his life to being a hero. Octavius then chose to follow his example, but to do so with greater competence and intelligence, becoming the Superior Spider-Man.

The resulting comic book series was amazing. It's some of the best stuff I've ever seen in a whole lifetime reading comics.

Given that his competence and intelligence were indeed both superior, Octavius did actually do a much better job of being Spider-Man than Spider-Man himself had ever done, in some respects. (Likewise, as Peter Parker, he swiftly obtained a doctorate, launched a successful tech startup, and turned Parker's messy love life into something much simpler and healthier.) But given that Octavius was a megalomaniac asshole with no reservations about murdering people, he did a much worse job, in other respects.

In the comics, the Superior Spider-Man assassinates prominent criminals, blankets the entire city in a surveillance network of creepy little eight-legged camera robots, taps into every communications network in all of New York, and uses giant robots to completely flatten a crime-ridden area, killing every inhabitant. (He also rants constantly, in hilariously overblown terms, like the verbose and condescending supervillain he was for his entire previous lifetime.)



Along the way, Octavius meets "supervillains" who are merely pranksters -- kids who hit the mayor with a cream pie so they can tweet about it -- and he nearly kills them.



As every superhero does, of course, Peter Parker eventually comes back from the dead and saves the day. But during the course of the series' nearly two-year run, The Superior Spider-Man did an absolutely amazing job of illustrating how terrible it can be for a city to have a protector with incredible power and no ethical boundaries. Anybody who works for the NSA should read these comics before quitting their terrible careers in shame.



DC Comics, meanwhile, has rebooted Batgirl and made social media a major element of her life.



I only just started reading this series, and it's a fairly new reboot, but as far as I can tell, these new Batgirl comics are comics about social media which just happen to feature a superhero (or superheroine?) as their protagonist. She promotes her own activities on an Instagram-like site, uses it to track down criminals, faces impostors trying to leverage her fame for their own ends, and meets dates in her civilian life as Barbara Gordon through Hooq, a fictional Tinder equivalent.

The most important difference between these two series is that one ran for two years and is now over, while the other is just getting started. But here's my impression so far. Where Superior Spider-Man tackled robotics, ubiquitous surveillance, and an unethical "guardian" of law and order, Batgirl seems to be about the weird cultural changes that social media are creating.

Peter Parker's a photographer who works for a newspaper, and Clark Kent's a reporter, but this is their legacy as cultural icons created many decades ago. Nobody thinks of journalism as a logical career for a hero any more. Batgirl's a hipster in her civilian life, she beats up douchebag DJs, and I think she might work at a tech startup, but maybe she's in grad school. There's a fun contrast here; while the Superior Spider-Man's alter ego "Peter Parker," really Otto Octavius, basically represents the conflict between how Google and the NSA see themselves vs. how they look to everyone else — super genius vs. supervillain — Barbara Gordon looks a lot more like real life, or at least, the real life of people outside of that nightmarish power complex.

Thursday, January 15, 2015

Why Panda Strike Wrote the Fastest JSON Schema Validator for Node.js

Update: Another project is even faster!

After reading this post, you will know:

Because this is a very long blog post, I've followed the GitHub README convention of making every header a link.

Those who do not understand HTTP are doomed to re-implement it on top of itself


Not everybody understands HTTP correctly. For instance, consider the /chunked_upload endpoint in the Dropbox API:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Since this is an alternative to /files_put, you might wonder what the deal is with /files_put.

Uploads a file using PUT semantics. Note that this call goes to api-content.dropbox.com instead of api.dropbox.com.

The preferred HTTP method for this call is PUT. For compatibility with browser environments, the POST HTTP method is also recognized.

To be fair to Dropbox, "for compatibility with browser environments" refers to the fact that, of the people I previously mentioned - the ones who do not understand HTTP - many have day jobs where they implement the three major browsers. I think "for compatibility with browser environments" also refers to the related fact that the three major browsers often implement HTTP incorrectly. Over the past 20 years, many people have noticed that their lives would be less stressful if the people who implemented the major browsers understood the standards they were implementing.

Consider HTTP Basic Auth. It's good enough for the GitHub API. Tons of people are perfectly happy to use it on the back end. But nobody uses it on the front end, because browsers built a totally unnecessary restriction into the model - namely, a hideous and unprofessional user experience. Consequently, people have been manually rebuilding their own branded, styled, and usable equivalents to Basic Auth for almost every app, ever since the Web began.

By pushing authentication towards the front end and away from an otherwise perfectly viable aspect of the fundamental protocol, browser vendors encouraged PHP developers to handle cryptographic issues, and discouraged HTTP server developers from doing so. This was perhaps not the most responsible move they could have made. Also, the total dollar value of the effort expended to re-implement HTTP Basic Auth on top of HTTP, in countless instances, over the course of twenty years, is probably an immense amount of money.

Returning to Dropbox, consider this part here again:

Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Compare that to the Accept-Ranges header, from the HTTP spec:

One use case for this header is a chunked upload. Your server tells you the acceptable range of bytes to send along, your client sends the appropriate range of bytes, and you thereby chunk your upload.

Dropbox decided to take exactly this approach, with the caveat that the Dropbox API communicates an acceptable range of bytes using a JSON payload instead of an HTTP header.

Typical usage:

  • Send a PUT request to /chunked_upload with the first chunk of the file without setting upload_id, and receive an upload_id in return.
  • Repeatedly PUT subsequent chunks using the upload_id to identify the upload in progress and an offset representing the number of bytes transferred so far.
  • After each chunk has been uploaded, the server returns a new offset representing the total amount transferred.
  • After the last chunk, POST to /commit_chunked_upload to complete the upload.
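
For contrast, here's a minimal sketch of the same flow driven by standard HTTP headers instead of a JSON payload. The endpoint and chunk size are hypothetical; the point is that Content-Range already knows how to say "here are bytes X through Y of a Z-byte file."

// Hypothetical endpoint -- a sketch of header-driven chunking, not Dropbox's actual API.
async function uploadInChunks(url, data, chunkSize = 4 * 1024 * 1024) {
  for (let offset = 0; offset < data.byteLength; offset += chunkSize) {
    const chunk = data.slice(offset, offset + chunkSize);
    const response = await fetch(url, {
      method: "PUT",
      headers: {
        "Content-Type": "application/octet-stream",
        // Standard header: "bytes first-last/total"
        "Content-Range": `bytes ${offset}-${offset + chunk.byteLength - 1}/${data.byteLength}`
      },
      body: chunk
    });
    if (!response.ok) throw new Error(`chunk at offset ${offset} failed: ${response.status}`);
  }
}

The server's half of the bargain, in a sketch like this, is to respond with the range it has received so far, so an interrupted client knows where to resume.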

Google Maps does something similar with its API. It differs from the Dropbox approach in that, instead of an endpoint, it uses a CGI query parameter. But Google Maps went a little further than Dropbox here. They decided that ignoring a perfectly good HTTP header was not good enough, and instead went so far as to invent new HTTP headers which serve the exact same purpose:

To initiate a resumable upload, make a POST or PUT request to the method's /upload URI, including an uploadType=resumable parameter:

POST https://www.googleapis.com/upload/mapsengine/v1/rasters/{asset_id}/files
 ?filename={filename}
 &uploadType=resumable

For this initiating request, the body is either empty or it contains the metadata only; you'll transfer the actual contents of the file you want to upload in subsequent requests.

Use the following HTTP headers with the initial request:

  • X-Upload-Content-Type. Set to the media MIME type of the upload data to be transferred in subsequent requests.
  • X-Upload-Content-Length. Set to the number of bytes of upload data to be transferred in subsequent requests. If the length is unknown at the time of this request, you can omit this header.
  • Content-Length. Set to the number of bytes provided in the body of this initial request. Not required if you are using chunked transfer encoding.

It's possible that the engineers at Google and Dropbox know some limitation of Accept-Ranges that I don't. They're great companies, of course. But it's also possible they just don't know what they're doing, and that's my assumption here. If you've ever been to Silicon Valley and met some of these people, you're probably already assuming the same thing. Hiring great engineers is very difficult, even for companies like Google and Dropbox. Netflix faces terrific scaling challenges, and its engineers are still only human.


Anyway, combine this with the decades-running example of HTTP Basic Auth, and it becomes painfully obvious that those who do not understand HTTP are doomed to re-implement it on top of itself.

If you're a developer who understands HTTP, you've probably seen many similar examples already. If not, trust me: they're out there. And this widespread propagation of HTTP-illiterate APIs imposes unnecessary and expensive problems in scaling, maintenance, and technical debt.

One example: you should version with the Accept header, not your URI, because:

Tying your clients into a pre-set understanding of URIs tightly couples the client implementation to the server; in practice, this makes your interface fragile, because any change can inadvertently break things, and people tend to like to change URIs over time.
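
Here's roughly what that looks like from the client's side. The media type below is invented for the example, but the pattern is plain content negotiation: the URI stays stable, and the version rides along in the Accept header.

// Ask for version 2 of a hypothetical API without touching the URI.
fetch("https://api.example.com/widgets/42", {
  headers: { Accept: "application/vnd.example.v2+json" }
})
  .then(function (response) { return response.json(); })
  .then(function (widget) { console.log(widget); });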

But this opens up some broader questions about APIs, so let's take a step back for a second.

APIs and JSON Schema


If you're working on a modern web app, with the usual message queues and microservices, you're working on a distributed system.

Not long ago, a company had a bug in their app, which was a modern web app, with the usual message queues and microservices. In other words, they had a bug in their distributed system. Attempts to debug the issue turned into meetings to figure out how to debug the issue. The meetings grew bigger and bigger, bringing in more and more developers, until somebody finally discovered that one microservice was passing invalid data to another microservice.

So a Panda Strike developer told this company about JSON Schema.

Distributed systems often use schemas to prevent small bugs in data transmission from metastasizing into paralyzing mysteries or dangerous security failures. The Rails and Rubygems YAML bugs of 2013 provide a particularly alarming example of how badly things can go wrong when a distributed system's input is not type-safe. Rails used an attr_accessible/attr_protected system for most of its existence - at least as early as 2005 - but switched to its new "strong parameters" system with the release of Rails 4 in 2013.

Consider this line of "strong parameters" code, which stands out as an odd choice to find in a controller:

params.require(:email).permit(:first_name, :last_name, :shoe_size)

With verbs like require and permit, this is basically a half-assed, bolted-on implementation of a schema. It's a document, written in Ruby for some insane reason, located in a controller file for some even more insane reason, which articulates what data's required, and what data's permitted. That's a schema. attr_accessible and attr_protected served a similar purpose more crudely - the one defining a whitelist, the other a blacklist.

In Rails 3, you defined your schema with attr_accessible, which lived in the model. In Rails 4, you use "strong parameters," which go in the controller. (In fact, I believe most Rails developers today define their schema in Ruby twice - via "strong parameters," for input, and via ActiveModel::Serializer, for output.) When you see people struggling to figure out where to shoehorn some functionality into their system, it usually means they haven't figured out what that functionality is.

But we know it's a schema. So we can make more educated decisions about where to put it. In my opinion, whether you're using Rails or any other technology, you should solve this problem by providing a schema for your API, using the JSON Schema standard. Don't put schema-based input-filtering in your controller or your model, because data which fails to conform to the schema should never even reach application code in the first place.
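
To make that concrete, here's a loose translation of the require/permit line above into JSON Schema. This is my own sketch: require means the email parameter has to be present, permit whitelists the keys inside it, and the field types are guesses, since the Rails example only names the fields.

{
  "type": "object",
  "required": ["email"],
  "properties": {
    "email": {
      "type": "object",
      "additionalProperties": false,
      "properties": {
        "first_name": { "type": "string" },
        "last_name": { "type": "string" },
        "shoe_size": { "type": "number" }
      }
    }
  }
}

Any payload with extra keys, or without an email object, fails validation before your controller ever sees it.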

There's a good reason that schemas have been part of distributed systems for decades. A schema formalizes your API, making life much easier for your API consumers - which realistically includes not only all your client developers, but also you yourself, and all your company's developers as well.

JSON Schema is great for this. JSON Schema provides a thorough and extensible vocabulary for defining the data your API can use. With it, any developer can very easily determine if their data's legit, without first swamping your servers in useless requests. JSON Schema's on draft 4, and draft 5 is being discussed. From draft 3 onwards, there's an automated test suite which anyone can use to validate their validators; the JSON Schema spec is itself defined by a JSON Schema - a meta-schema which validates against itself.

Here's a trivial JSON Schema schema in CSON, which is just CoffeeScript JSON:
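
Something like this - a sketch, with the field names invented for illustration:

type: "object"
required: ["name", "email"]
properties:
  name:
    type: "string"
  email:
    type: "string"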

One really astonishing benefit of JSON Schema is that it makes it possible to create libraries which auto-generate API clients from JSON Schema definitions. Panda Strike has one such library, called Patchboard, which we've had terrific results with, and which I hope to blog about in future. Heroku also has a similar technology, written in Ruby, although their documentation contains a funny error:

We’ve also seen interest in this toolchain from API developers outside of Heroku, for example [reference customer]. We’d love to see more external adoption of this toolkit and welcome discussion and feedback about it.

That's an actual quote. Typos aside, JSON Schema makes life easier for ops at scale, both in Panda Strike's experience, and apparently in Heroku's experience as well.

JSON Schema vs Joi's proprietary format


However, although JSON Schema's got an active developer and user community, Walmart Labs has also had significant results with their Joi project, which leverages the benefits of an API schema, but defines that schema in JavaScript rather than JSON. Here's an example:
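
Something along these lines - a sketch I wrote for illustration, not Walmart's schema, though the calls are standard Joi:

var Joi = require("joi");

// Invented field names; the schema lives in JavaScript, not JSON.
var schema = Joi.object().keys({
  username: Joi.string().alphanum().min(3).max(30).required(),
  email: Joi.string().email().required(),
  shoe_size: Joi.number().integer().min(1)
});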

As part of the Hapi framework, Joi apparently powered 2013 Black Friday traffic for Walmart very successfully.

Hapi was able to handle all of Walmart mobile Black Friday traffic with about 10 CPU cores and 28Gb RAM (of course we used more but they were sitting idle at 0.75% load most of the time). This is mind blowing traffic going through VERY little resources.

(The Joi developers haven't explicitly stated what year this was, but my guess is 2013, because this quote was available before Black Friday this past year. Likewise, we don't know exactly how many requests they're talking about here, but it's pretty reasonable to assume "mind-blowing traffic" means a lot of traffic. And it's pretty reasonable to assume they were happy with Joi on Black Friday 2014 as well.)

I love this success story because it validates the general strategy of schema validation with APIs. But at the same time, Joi's developers aren't fans of JSON Schema.

On json-schema - we don't like it. It is hard to read, write, and maintain. It also doesn't support some of the relationships joi supports. We have no intention of supporting it. However, hapi will soon allow you to use whatever you want.

At Panda Strike, we haven't really had these problems, and JSON Schema has a couple advantages that Joi's custom format lacks.

The most important advantage: multi-language support. JSON's universality is quickly making it the default data language for HTTP, which is the default data transport for more or less everything in the world built after 1995. Defining your API schema in JSON means you can consume and validate it in any language you wish.

It might even be fair to leave off the "JS" and call it ON Schema, because in practice, JSON Schema validators will often allow you to pass them an object in their native languages. Here's a Ruby example:
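
Something like this, using the json-schema gem - the schema and data are invented for illustration:

require "json-schema"

schema = {
  "type" => "object",
  "required" => ["email"],
  "properties" => {
    "email" => { "type" => "string" }
  }
}

JSON::Validator.validate(schema, { "email" => "giles@example.com" })
# => true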

This was not JSON; this was Ruby. In this example, you still have to use strings, but it'd be easy to circumvent that, in the classic Rails way, with the ActiveSupport library. Similar Python examples also exist. If you've built something with Python and JSON Schema, and you decide to rebuild in Ruby, you won't have to port the schema.

Crazy example, but it's equally true for Clojure, Go, or Node.js. And it's not at all difficult to imagine that a company might port services from Python or Ruby to Clojure, Go, or Node, especially if speed's essential for those services. At a certain point in a project's lifecycle, it's actually quite common to isolate some very specific piece of your system for a performance boost, and to rewrite some important slice of your app as a microservice, with a new focus on speed and scalability. Because of this, it makes a lot of sense to decouple an API's schema from the implementation language for any particular service which uses the API.

JSON Schema's universality makes it portable in a way that Joi's pure JavaScript schemas cannot achieve. (This is also true for the half-implemented pure-Ruby schemas buried inside Rails's "strong parameters" system.)

Another fun use case for JSON Schema: describing valid config files for any service written in any language. This might be annoying for those of you who prefer writing your config files in Ruby, or Clojure, or whatever language you prefer, but it has a lot of practical utility. The most obvious argument for JSON Schema is that it's a standard, which has a lot of inherent benefits, but the free bonus prize is that it's built on top of an essentially universal data description language.

And one final quibble with Joi: it throws some random, miscellaneous text munging into the mix, which doesn't make perfect sense as part of a schema validation and definition library.

JSCK: Fast as fuck


If it seems like I'm picking on Joi, there's a reason. Panda Strike's written a very fast JSON Schema validator, and in terms of performance, Joi is its only serious competitor.

Discussing a blog post on cosmicrealms.com which benchmarked JSON Schema validators and found Joi to be too slow, a member of the Joi community said this:

Joi is actually a lot faster, from what I can tell, than any json schema validator. I question the above blog's benchmark and wonder if they were creating the joi schema as part of the iteration (which would be slower than creating it as setup).

The benchmark in question did make exactly that mistake in the case of JSV, one of the earliest JSON Schema validators for Node.js. I know this because Panda Strike built another of the very earliest JSON Schema validators for Node. It's called JSCK, and we've been benchmarking JSCK against every other Node.js JSON Schema validator we can find. Not only is it easily the fastest option available, in some cases it is faster by multiple orders of magnitude.

We initially thought that JSV was one of these cases, but we double-checked to be sure, and it turns out that the JSV README encourages the mistake of re-creating the schema on every iteration, as opposed to only during setup. We had thought JSCK was about 10,000 times faster than JSV, but when we corrected for this, we found that JSCK was only about 100 times faster.
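
To make the distinction concrete, here's the shape a fair benchmark takes: schema compilation happens once, during setup, and only the validation loop gets timed. The validator below is a toy I made up for illustration - it isn't JSV, JSCK, or any real library.

// A toy "validator," just to give the benchmark something to call.
function compile(schema) {
  return {
    validate: function (doc) {
      return schema.required.every(function (key) { return key in doc; });
    }
  };
}

var schema = { required: ["email"] };
var documents = [{ email: "a@example.com" }, { name: "no email here" }];

// Setup: compile the schema once...
var validator = compile(schema);

// ...and time only the repeated validations.
console.time("validate");
for (var i = 0; i < 100000; i++) {
  validator.validate(documents[i % documents.length]);
}
console.timeEnd("validate");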

(I filed a pull request to make the JSV README clearer, to prevent similar misunderstandings, but the project appears to be abandoned.)

So, indeed, the Cosmic Realms benchmarks do under-represent JSV's speed in this way, which means it's possible they under-represent Joi's speed in the same way also. I'm not actually sure. I hope to investigate in future, and I go into some relevant numbers further down in this blog post.

However, this statement seems very unlikely to me:

Joi is actually a lot faster, from what I can tell, than any json schema validator.

It is not impossible that Joi might turn out to be a few fractions of a millisecond faster than JSCK, under certain conditions, but Joi is almost definitely not "a lot faster" than JSCK.

Let's look at this in more detail.

JSCK benchmarks


The Cosmic Realms benchmarks use a trivial example schema; our benchmarks for JSCK use a trivial schema too, but we also use a more medium-complexity schema, and a very complex schema with nesting and other subtleties. We used a multi-schema benchmarking strategy to make the data more meaningful.

I'm going to show you these benchmarks, but first, here's the short version: JSCK is the fastest JSON Schema validator for Node.js - for both draft 3 and draft 4 of the spec, and for all three levels of complexity that I just mentioned.

Here's the long version. It's a matrix of libraries and schemas. We present the maximum, minimum, and median number of validations per second, for each library, against each schema, with the schemas organized by their complexity and JSON Schema draft. We also calculate the relative speed of each library, which basically means how many times slower than JSCK a given library is. For instance, in the chart below, json-gate is 3.4x to 3.9x slower than JSCK.

The jayschema results are an outlier, but JSCK is basically faster than anything.

When Panda Strike first created JSCK, few other JSON Schema validation libraries existed for Node.js. Since there are so many new alternatives, it's pretty exciting to see that JSCK remains the fastest option.

However, if you're also considering Joi, my best guess is that, for trivial schemas, Joi is about the same speed as JSCK, which is obviously pretty damn fast. I can't currently say anything about its relative performance on complex schemas, but I can say that much.

Here's why. There's a project called enjoi which automatically converts trivial JSON Schemas to Joi's format. It ships with benchmarks against tv4. The benchmarks run a trivial schema, and this is how they look on my box:

tv4 vs joi benchmark:

  tv4: 22732 operations/second. (0.0439918ms)
  joi: 48115 operations/second. (0.0207834ms)

For a trivial draft 4 schema, Joi is more than twice as fast as tv4. Our benchmarks show that for trivial draft 4 schemas, JSCK is also more than twice as fast as tv4. So, until I've done further investigation, I'm happy to say they look to be roughly the same speed.

However, JSCK's speed advantage over tv4 increases to 5x with a more complex schema. As far as I can tell, nobody's done the work to translate a complex JSON Schema into Joi's format and benchmark the results. So there's no conclusive answer yet for the question of how Joi's speed holds up against greater complexity.

Also, of course, these specific results are dependent on the implementation details of enjoi's schema translation, and if you make any comparison between Joi and a JSON Schema validator, you should remember there's an apples-to-oranges factor.

Nonetheless, JSCK is very easily the fastest JSON Schema validator for Node.js, and although Joi might be able to keep up in terms of performance, a) it might not, and b) either way, its format locks you into a specific language, whereas JSON Schema gives you wide portability and an extraordinary diversity of options.

We are therefore very proud to recommend that you use JSCK if you want fast JSON Schema validation in Node.js.

I'm doing a presentation about JSCK at ForwardJS in early February. Check it out if you're in San Francisco.

Wednesday, January 7, 2015

One Major Difference Between Clojure And Common Lisp

In the summer of 2013, I attended an awesome workshop called WACM (Workshop on Algorithmic Computer Music) at the University of California at Santa Cruz. Quoting from the WACM site:

Students will learn the Lisp computer programming language and create their own composition and analysis software. The instruction team will be led by professor emeritus David Cope, noted composer, author, and programmer...

The program features intensive classes on the basic techniques of algorithmic composition and algorithmic music analysis, learning and using the computer programming language Lisp. Students will learn about Markov-based rules programs, genetic algorithms, and software modeled on the Experiments in Musical Intelligence program. Music analysis software and techniques will also be covered in depth. Many compositional approaches will be discussed in detail, including rules-based techniques, data-driven models, genetic algorithms, neural networks, fuzzy logic, mathematical modeling, and sonification. Software programs such as Max, Open Music, and others will also be presented.


It was as awesome as it sounds, with some caveats; for instance, it was a lot to learn inside of two weeks. I was one of a very small number of people there with actual programming experience; most of the attendees either had music degrees or were in the process of getting them. We worked in Common Lisp, but I came with a bunch of Clojure books (in digital form) and the goal of building stuff using Overtone.

I figured I could just convert Common Lisp code almost directly into Clojure, but it didn't work. Here's a gist I posted during the workshop:


This attempt failed for a couple different reasons, as you can see if you read the comments. First, this code assumes that (if (null list1)) in Common Lisp will be equivalent to (if (nil? list1)) in Clojure, but Clojure doesn't consider an empty list to have a nil value. Secondly, this code tries to handle lists in the classic Lisp way, with recursion, and that's not what you typically do in Clojure.

Clojure's reliance on the JVM makes recursion inconvenient, because the JVM doesn't do tail-call optimization - that's why Clojure makes you spell out recur. And Clojure uses list comprehensions, along with very sophisticated, terse destructuring assignments, to churn through lists much more gracefully than my Common Lisp code above. Those 7 lines of Common Lisp compress to 2 lines of Clojure:

(defn build [seq1 seq2]
  (for [elem1 seq1 elem2 seq2] [elem1 elem2]))
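
For instance, called on two short sequences, it returns every pairing:

(build [1 2 3] ["a" "b"])
;; => ([1 "a"] [1 "b"] [2 "a"] [2 "b"] [3 "a"] [3 "b"])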

A friend of mine once said at a meetup that Clojure isn't really a Lisp; it's "a fucked-up functional language" with all kinds of weird quirks which uses Lisp syntax out of nostalgia more than anything else. To me, this isn't enough to earn Clojure that judgement, which was kinda harsh. I think I like Clojure more than he does. But, at the same time, if you're looking to translate stuff from other Lisps into Clojure, it's not going to be just copying and pasting. Beyond inconsequential, dialect-level differences like defn vs. defun, there are deeper differences which steepen the learning curve a little.