Bogost writes that “methodologies like Scrum never allow that infrastructure to stabilize.” This misses the point.
First, it confuses two different meanings of the word “stability”. Things built by Real Engineers are iterated upon over time too. They just typically don’t catastrophically fail in the process.
But even apart from that gripe, it gets the causation backwards. Software isn’t constantly evolving (read: breaking) because a bunch of hippies signed the Agile Manifesto and invented Scrum. They created Scrum because their customers’ requirements were changing on a weekly or daily basis, and the strawman software engineering methodology that looks like what you’d expect a Real Engineer to do (see “Waterfall”) simply does not work in software. Heck, I spent four years of my undergrad and two years of my Master’s program trying to help solve this problem.
So why is this problem so unique to software? The article already argues that it’s the fault of programmers, so let me offer an alternative explanation.
Maybe businesses simply don’t respect this profession. I know Real Engineers are overworked and have incredibly stressful jobs, but I have a feeling they don’t have to put up with quite the shit software developers do in this area. I am doubtful that a business owner would ever tell an architectural engineer, “That’s a great proposal you have there, but could you construct the building in half the time and with half the concrete you suggested?”
Or that she would be asked to change the height and shape of the building halfway through construction.
Or that she would be asked to just work a few 70-hour weeks and get the building done by a deadline, even if it’s missing a few support beams.
Or that a bunch of amateur engineers with no credentials would be hired to construct the building for dimes on the dollar, instead of a single experienced engineer, because they all end up building the same thing, right?
Leave programmers to do their own thing and they can create beautifully designed, stable software. Sure, some programmers are awful, but not even the best of us are immune to the cut corners and errors that result from working for businesses that tacitly or even explicitly tell us to cut those corners, because the software budget (you know, the one paying for the construction of their entire product) is a fraction of what it should be.
Are those business owners wrong, though? No one will die because Facebook was down for an hour once, or because Twitter ate an AJAX error and didn’t really post a Tweet even though you thought it did. We moan when our computer, like, takes a second to open a thing, much like we moan about cramped seats and bad service on airplanes. But at the end of the day, we obviously prefer low prices to high quality, both when posting pictures of our cats to Facebook and when flying. As long as the plane itself is built by a Real Engineer.
That episode has spawned an incredible number of discussions and flame wars about a mathematical claim, the likes of which I haven’t seen since the age-old fight about whether \(0.99\overline{9} = 1\). (Yes, it does.) The nice thing about that debate is that there is an unambiguous, correct answer. Under every reasonable way you can define “zero point nine repeating”, it is provably, definitely equal to the real number 1. All there is left to do is make a convincing argument aimed at a lay audience, using an incredibly basic mathematical toolset that doesn’t actually contain the tools you need to prove the claim.
This claim, on the other hand, is much more subtle. Convincing someone of the way it’s true is much more like convincing someone that The Dress is <insert colour here> when it’s clearly <insert colour here>.
The Numberphile episode gives a cute little “proof” of the claim, which is probably not very satisfying. This is aggravated by the fact that internet points tend to be awarded to whoever can be the biggest contrarian, and so the discussions about this claim tend to be dominated by simple rebuttals. Those rebuttals are correct, but only skin-deep.
So yes, \(1 + 2 + 3 + \cdots \ne -1/12\). It’s not equal to any number. The completely correct, technical statement is that the series diverges towards \(\infty\). End of story.
But that wouldn’t make for a very interesting blog post, now would it?
Thankfully, the truth of the matter is more subtle and interesting. Let’s start from the beginning.
There are many ways to define the value of an infinite sum (called a series in mathematics), which we denote by:
$$\sum a_n = a_1 + a_2 + \cdots$$
Once you have Calculus in your mathematical toolbelt and a definition of limits to work with, the most straightforward definition is to say that \(\sum a_n\) is the limit of the sequence \(a_1,\ a_1 + a_2,\ a_1 + a_2 + a_3\), and so on. (In other words, cut off the sum at progressively higher bounds and take the limit.) If this limit is finite, we say the series converges.
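To make that concrete, here is a small sketch (my own illustration, not from the original) that computes the partial sums of the geometric series \(\sum 1/2^n\), which converges to 1:

```python
def partial_sums(terms):
    """Yield the running partial sums a1, a1 + a2, a1 + a2 + a3, ..."""
    total = 0.0
    for a in terms:
        total += a
        yield total

# The geometric series 1/2 + 1/4 + 1/8 + ... converges to 1.
sums = list(partial_sums(1 / 2 ** n for n in range(1, 50)))
print(sums[:3])   # [0.5, 0.75, 0.875]
print(sums[-1])   # very close to 1
```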
This sounds reasonable, and in fact this is the completely standard definition. But this turns out to be a fairly weak condition. Since addition commutes, we might expect the order of the terms \(a_n\) not to actually matter in the series \(\sum a_n\). We might expect that we can rearrange the terms any way we want, and the series should converge to the same number.
But that’s not the case. There are series which converge, but if you rearrange the terms they converge to a completely different number. It turns out that in order to have the nice property of being equal under rearrangements, the series needs to satisfy a condition called absolute convergence, which is pretty much what it sounds like: \(\sum a_n\) needs to converge, but so does the series of absolute values \(\sum |a_n|\).
In fact, there’s a cute result that if a series converges but is not absolutely convergent, then you can always rearrange the terms to give you any number you want.
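Here is a numerical illustration of that result (my own, not from the original): the alternating harmonic series \(1 - 1/2 + 1/3 - \cdots\) converges to \(\ln 2\), but taking two positive terms for every negative one makes the very same terms converge to \(\tfrac{3}{2}\ln 2\):

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_groups):
    """Same terms, rearranged: two positive terms, then one negative, repeated."""
    total = 0.0
    pos, neg = 1, 2   # next odd / even denominator to use
    for _ in range(n_groups):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(alternating_harmonic(200000))   # close to ln 2       (about 0.6931)
print(rearranged(100000))             # close to 1.5 * ln 2 (about 1.0397)
```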
Okay, so far I’ve been talking about very strict definitions of infinite series. But you can also weaken the definition of convergence. For example, consider the series \(1 - 1 + 1 - 1 + 1 - \cdots\). This doesn’t converge absolutely. It doesn’t converge at all, because as you cut it off at finite bounds, the partial sums flip between 0 and 1. But there’s still something interesting going on. Flipping between 0 and 1 like this seems like, in some sense, it should have the value \(1/2\). So one way to weaken the definition is to take all the partial sums, and ask what their “average” value is. This is called Cesàro summation.
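A quick numerical sketch of Cesàro summation applied to this series (my own illustration):

```python
def cesaro_means(terms):
    """Running averages of the partial sums of a series."""
    partial = 0.0
    total_of_partials = 0.0
    means = []
    for i, a in enumerate(terms, start=1):
        partial += a
        total_of_partials += partial
        means.append(total_of_partials / i)
    return means

# Grandi's series 1 - 1 + 1 - 1 + ...: the partial sums flip between 1 and 0,
# but their running averages settle down to 1/2.
grandi = [(-1) ** k for k in range(10000)]
print(cesaro_means(grandi)[-1])   # approximately 0.5
```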
(I promise I’m getting to the definition relevant to this discussion.) Yet another way to generalize the value of a series uses a method called analytic continuation. There’s a very important class of functions called holomorphic functions. If you have some background in Calculus, you can think of these as complex-valued generalizations of functions that are differentiable everywhere. These are the functions with the “nicest” properties you can have in complex analysis. (Complex analysis is Calculus in the complex numbers.)
Well, there’s an important theorem in complex analysis called the identity theorem, which states that if you know all the values of a holomorphic function in even the tiniest region of its domain (you have to be careful how you define “region” here but I’m ignoring that detail), then those values determine the rest of the function across the rest of the complex numbers. Which is pretty incredible by itself. But it also gives us a way to take a function we only know how to compute in some small domain, and extend it uniquely to a much bigger set of inputs.
Which brings us to the Riemann zeta function. Given any fixed number \(s\), we can define a series:
$$\sum 1/n^s = 1 + 1/2^s + 1/3^s + \cdots$$
We can prove that this series converges whenever \(s\) is a complex number whose real part is greater than 1. So we can think of the series above as a function of \(s\) wherever it converges, which we denote by the Greek letter zeta:
$$\zeta(s) = \sum 1/n^s$$
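As a sanity check (my own illustration), the partial sums of this series at \(s = 2\) approach the known value \(\zeta(2) = \pi^2/6\):

```python
import math

def zeta_partial(s, n_terms):
    """Partial sum 1/1^s + 1/2^s + ... + 1/n_terms^s."""
    return sum(1 / n ** s for n in range(1, n_terms + 1))

print(zeta_partial(2, 100000))   # approaches pi^2 / 6
print(math.pi ** 2 / 6)          # about 1.64493
```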
But thanks to the identity theorem we can prove this function has a unique generalization (analytic continuation) to the rest of the complex plane. This is called the Riemann zeta function. It’s kind of important, as you can see by the length of its Wikipedia page.
What this has given us is a way to generalize the series \(\sum 1/n^s\) to a larger domain of values of \(s\) than we could originally. In particular, if we plug \(s=-1\) into that formula, we get the series \(1 + 2 + 3 + \cdots\) which we’re interested in. Remember, I said earlier that this series doesn’t converge using any of the typical definitions. But it has an analytic continuation in the form of the Riemann zeta function, which we can prove gives \(\zeta(-1) = -1/12\). Since the Riemann zeta function is the unique analytic continuation of the function \(\sum 1/n^s\), this gives us a formal way of identifying the number \(-1/12\) with the series \(1 + 2 + 3 + \cdots\).
But that’s not all! Plugging other numbers into \(\zeta(s)\) allows us to identify other divergent series with finite numbers. For example, \(\zeta(0)=-1/2\), which corresponds to the series \(1 + 1 + 1 + \cdots\), and \(\zeta(-2)=0\), which corresponds to the series \(1 + 4 + 9 + 16 + \cdots\). These give us the strange “identities”:
$$ 1 + 1 + 1 + \cdots = -1/2 $$ $$ 1 + 2 + 3 + \cdots = -1/12 $$ $$ 1 + 4 + 9 + 16 + \cdots = 0 $$
This is nothing new. Euler discovered these strange results in the 1700s. But are they meaningful? Oddly enough, yes! These are all examples of a much more general technique in physics called regularization, where a naive formulation of a problem might include a value with a divergent sum, but it’s known that the actual value should be finite. This is used, for example, to calculate the Casimir effect.
(This gets a bit more technical from here on out.)
To see where these finite values come from another way, we can take a divergent series \(\sum a_n\) and somehow add a small parameter \(\epsilon\) which makes the series converge at positive but small values of \(\epsilon\), and equal the original series at \(\epsilon=0\). We can then analyze what this resulting function of \(\epsilon\) looks like near \(\epsilon=0\) to tell us about the “finite part” of the divergent series.
An incredibly useful mathematical tool is the Laurent series. Roughly speaking, it allows us to split up a function by something like “orders of magnitude” and analyze how a function behaves even around points where it is not defined. To see how this works, let’s start with \(1 + 2 + \cdots = \sum n\) and turn it into a function of \(\epsilon\):
$$ \sum n e^{-\epsilon n} $$
When \(\epsilon=0\), this becomes the original, divergent series. But we can take the Laurent series expansion of this function around the point \(\epsilon=0\), which (skipping details) gives us
$$ \sum n e^{-\epsilon n} = \frac{1}{\epsilon^2} - \frac{1}{12} + \mathcal{O}(\epsilon^2) $$
Above, the notation \(\mathcal{O}(\epsilon^2)\) means some term that’s proportional to \(\epsilon^2\), which is approximately 0 when \(\epsilon\) is very small. What this leaves us with is a term that is very large, \(1/\epsilon^2\), and our old friend, the finite term \(-1/12\). This allows us to say something like, “the finite part of \(1 + 2 + 3 + \cdots\) is \(-1/12\)”.
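We can check this numerically (my own illustration): for a small \(\epsilon\), the regularized sum minus \(1/\epsilon^2\) should sit very close to \(-1/12\):

```python
import math

def regularized_sum(eps, n_terms=20000):
    """Sum of n * exp(-eps * n), which converges for eps > 0."""
    return sum(n * math.exp(-eps * n) for n in range(1, n_terms + 1))

eps = 0.01
finite_part = regularized_sum(eps) - 1 / eps ** 2
print(finite_part)   # close to -1/12, i.e. about -0.08333
```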
In a physical problem, the infinite contribution of the sum might be some part of the problem that can’t be properly computed yet but should disappear if we took everything into account.
But notice how we got the same result, \(-1/12\), in two completely different ways. This is no coincidence. If you would like to know more, check out this (very technical) blog post by Terence Tao, where he explains that the two methods we looked at are equivalent.
So in conclusion, the contrarians are technically correct. \(1 + 2 + 3 + \cdots\) is not actually equal to \(-1/12\), at least not in the way we would typically define such a sum. But identifying the two quantities is not just a cheap parlor trick. It is deep, meaningful, and useful.
For example, when a user edits her profile, you may allow her to choose from a list of countries. This can be a rather long list that you’d rather not hardcode into your application. But it also may feel like overkill to create full-fledged models for this seldom-used, read-only list of data.
What can we do?
First, at what layer do we even initiate this request? Ember intends for us to use the route layer to load data if possible. The beforeModel hook is a good fit for this case. Returning a promise from this hook will pause the transition until the promise resolves, allowing us to fetch the list of countries before the route loads:
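A minimal sketch, assuming a globals-style Ember 1.x app and a hypothetical /countries endpoint:

```javascript
App.ProfileRoute = Ember.Route.extend({
  beforeModel: function() {
    // Returning a promise pauses the transition until it resolves.
    return Ember.$.getJSON('/countries');
  }
});
```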
But there’s a problem. What happens with the data resolved by our promise? Nothing. Instead, we probably want our ProfileController to have access to the list of countries when the route loads. We might think to assign the data to a property on the route, and then assign it to the controller in the setupController hook:
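One way this might look (a sketch, again assuming a hypothetical /countries endpoint):

```javascript
App.ProfileRoute = Ember.Route.extend({
  beforeModel: function() {
    var route = this;
    return Ember.$.getJSON('/countries').then(function(data) {
      route.set('countries', data.countries);
    });
  },

  setupController: function(controller, model) {
    this._super(controller, model);
    // Hand the data we stashed on the route over to the controller.
    controller.set('countries', this.get('countries'));
  }
});
```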
Sure, this works, but let’s look at the problems:
It’s very verbose. All we wanted to do is load some JSON!
The request is made every time we enter the route. Chances are, once we load this data, it won’t change again.
It’s a nightmare to test. We’d need to use a library like mockjax to simulate an Ajax request every time we enter the route.
What if we need to fetch this data for a few different routes? Will we duplicate all this code?
Obviously I think there’s a better way.
First let’s tackle the middle two problems by encapsulating the Ajax request and its resolved data in an object. By mixing Ember.PromiseProxyMixin into an array controller, we can define a kind of lazy-loaded array that is also a promise and, when resolved, populates itself. [1]

To use it, we just need to set its promise property. We don’t want this to happen at initialization, so let’s create a class that wraps the request:
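A sketch of such a service; the all property name and the /countries endpoint are assumptions:

```javascript
// A lazy-loaded array: a promise that, when resolved, populates itself.
App.CountriesArray = Ember.ArrayController.extend(Ember.PromiseProxyMixin);

App.CountriesService = Ember.Object.extend({
  // A computed property, so the request only fires on first access
  // and the result is cached afterwards.
  all: function() {
    return App.CountriesArray.create({
      promise: Ember.$.getJSON('/countries').then(function(data) {
        return data.countries;
      })
    });
  }.property()
});
```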
We can now instantiate a single CountriesService, and ask for its all property as many times as we want:
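Something like:

```javascript
var countriesService = App.CountriesService.create();

countriesService.get('all'); // first access triggers the Ajax request
countriesService.get('all'); // later accesses reuse the cached proxy
```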
You can also probably see why this is easier to test. We could easily replace the real CountriesService object with a fake one that returns a static promise when testing:
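For example (a sketch; the stubbed data is made up):

```javascript
var fakeCountriesService = Ember.Object.create({
  // A static, already-resolved promise instead of a live Ajax request.
  all: Ember.RSVP.resolve([{ name: 'Canada' }, { name: 'Iceland' }])
});
```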
Great, now how do we trigger this in our route, and make it available to the controller?
Unlike Angular, Ember doesn’t much advertise dependency injection as a user-facing feature, even though it uses it extensively behind the scenes. If you’ve ever wondered how every route and every controller in your application has a magical reference to this.store (if you’re using Ember Data), dependency injection is the answer.
We can define factories using Application.register, and then inject them into other parts of the application using Application.inject. If we follow reasonable naming conventions, we usually don’t even have to register a factory. A class named App.FoosService will automatically be registered as a factory named service:foos. [2]
The code below will inject a singleton instance of our CountriesService into our profile route and controller, as an instance variable called countries:
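A sketch, relying on the naming convention above:

```javascript
// App.CountriesService is auto-registered as service:countries.
App.inject('route:profile', 'countries', 'service:countries');
App.inject('controller:profile', 'countries', 'service:countries');
```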
Since the service will be injected into our controller automatically, we can greatly simplify the beforeModel hook we were using:
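Roughly:

```javascript
App.ProfileRoute = Ember.Route.extend({
  beforeModel: function() {
    // The injected service's `all` property is itself a promise,
    // so returning it still pauses the transition.
    return this.get('countries.all');
  }
});
```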
Since the countries service will be injected into our profile controller, we can also use it in our template.
{{view Ember.Select
content=countries.all
value=country
prompt="What country do you live in?"}}
Here’s a JSBin. Have a happy hacking day. □
[1] This may remind you of the way hasMany relationships are loaded in Ember Data. That’s because this is exactly how that feature is implemented.
[2] If you are using Ember CLI or Ember App Kit, where the classes making up your application are defined using ES6 modules, you may be wondering what to do. Every Ember application uses a resolver to look up its bits. Ember CLI defines its own resolver, which performs lookups using ES6 modules according to the naming conventions in its documentation.
We may be tempted to implement our template like this:
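For instance (a sketch; the item property names are placeholders):

```handlebars
{{#each item in controller}}
  <label>
    {{input type="checkbox" checked=item.selected}}
    {{item.name}}
  </label>
{{/each}}
```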
This certainly works. Each item in the collection will get a selected boolean property, which we can use to filter the collection of selected items.
But there’s something about this that should make us feel gross: each item in the collection is likely a model record, which we are polluting with a new selected property. Suddenly the model layer cares about what is strictly a controller-layer concern. The selected property will persist on the records even when a user leaves the route and comes back to it later. Not only are we abusing the separation between controllers and models, but we are potentially adding non-local behaviour to our application.
Luckily, Ember provides a relevant mechanism. If we define an itemController property on an array controller, each item in the collection will be wrapped in an instance of the specified object controller.
Let’s define a CheckboxableItemController:
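A sketch:

```javascript
App.CheckboxableItemController = Ember.ObjectController.extend({
  selected: false
});
```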
Now we can tell Ember to use it for each item in the array controller:
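Assuming our array controller is named ItemsController:

```javascript
App.ItemsController = Ember.ArrayController.extend({
  itemController: 'checkboxableItem'
});
```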
Now it’s each instance of CheckboxableItemController that gets a selected property, rather than the model it wraps. For good measure, let’s see how we can define computed properties for retrieving the list and count of selected items, and an action for removing them:
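A sketch; I’m writing the “controller itself” dependent key as '@this' (the exact key varies by Ember version), and the removal logic here simply drops the underlying records from the collection:

```javascript
App.ItemsController = Ember.ArrayController.extend({
  itemController: 'checkboxableItem',

  // Filter the wrapped item controllers, not the raw models in 'content'.
  selectedItems: Ember.computed.filterBy('@this', 'selected', true),
  selectedCount: Ember.computed.alias('selectedItems.length'),

  actions: {
    removeSelected: function() {
      var models = this.get('selectedItems').mapBy('content');
      this.get('content').removeObjects(models);
    }
  }
});
```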
Note our use of Ember.computed.filterBy. Critically, we are filtering over '' (i.e. the controller itself), and not 'content'. This is because 'content' is a reference to the underlying collection of items, and not the wrapped ones.
Have a happy hacking day. □
Suppose you have a model that you want to return in an index response, but which has details that are either data-heavy or expensive to compute, so you’d rather only return them as needed.
For example, imagine a blog’s Post model. You may not want to return each post’s body when a user only wants to see an index of posts in a search result or archive page. You may also have statistics associated with Post records, such as the number of comments or trackbacks, that require additional database queries to compute.
Some data persistence libraries for Ember.js (like Emu) support partial loading out of the box. In Emu, your /posts endpoint can return [{id: 1, title: "Such post"}], and your /posts/1 endpoint can return {id: 1, title: "Such post", body: "Many text"}. Emu will make a request to the latter endpoint when the body attribute is requested, and load it.
Ember Data does not currently support this behaviour by default (as of 1.0.0-beta.4, when this was written), but it provides you with the tools to add it yourself with relatively little effort.
We will add a PostDetail model with the missing attributes, and load it as needed. We can structure our API a couple of different ways to do this, but our front-end models will look the same:
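A sketch of the two models (the relationship name detail and any attributes beyond body are assumptions):

```javascript
App.Post = DS.Model.extend({
  title: DS.attr('string'),

  detail: DS.belongsTo('postDetail', { async: true }),
  // Reading post.body transparently fetches the PostDetail record.
  body: Ember.computed.alias('detail.body')
});

App.PostDetail = DS.Model.extend({
  body: DS.attr('string')
});
```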
Simple, right? The PostDetail model holds the actual body attribute, but we alias it on the Post model. If we use {{post.body}} in a template, the PostDetail record will be requested if necessary.
Okay, but how do we structure our API to make this work? If you only have to fetch, and never save, detail attributes, we can add a single /posts/1/detail endpoint to our API and link to it in our Post JSON:
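The post payload might look something like this (a sketch), with a links entry pointing at the detail endpoint:

```json
{
  "posts": [
    {
      "id": 1,
      "title": "Such post",
      "links": { "detail": "/posts/1/detail" }
    }
  ]
}
```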
(All examples here assume we are using the built-in ActiveModelAdapter.)
Alternatively, we can add a /post_details endpoint, responding to whatever actions you need to support:
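In that case the post JSON can reference the detail record by id (a sketch, assuming the relationship serializes as detail_id), and /post_details/1 serves the record itself:

```json
{
  "posts": [
    { "id": 1, "title": "Such post", "detail_id": 1 }
  ]
}
```

```json
{
  "post_detail": { "id": 1, "body": "Many text" }
}
```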
You may be concerned that if a user loads the page for a post, your application will need to make two HTTP requests instead of just one. But we can fix this too. Your show action can return the post details sideloaded:
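Something like (a sketch, again assuming a detail_id key):

```json
{
  "post": { "id": 1, "title": "Such post", "detail_id": 1 },
  "post_details": [
    { "id": 1, "body": "Many text" }
  ]
}
```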
There you have it! □
Consider Ruby’s String#split. Given a regular expression, it splits a string on matches of the expression. Given a string, it splits the string on occurrences of that string.
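For example:

```ruby
# Splitting on a regular expression: split on matches of the pattern.
p "one1two22three".split(/\d+/)   # => ["one", "two", "three"]

# Splitting on a string: split on occurrences of that exact string.
p "a, b, c".split(", ")           # => ["a", "b", "c"]
```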
Except when it doesn’t.
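The single-space string gets special treatment:

```ruby
# Splitting on the regex / / keeps empty fields between adjacent spaces.
p " now's  the time".split(/ /)   # => ["", "now's", "", "the", "time"]

# Splitting on the string " " silently switches to whitespace-word mode.
p " now's  the time".split(" ")   # => ["now's", "the", "time"]
```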
Wat?
The Ruby docs explain:
If pattern is a single space, str is split on whitespace, with leading whitespace and runs of contiguous whitespace characters ignored.
Splitting a string on whitespace (i.e. extracting its words) is a common use case, and so perhaps this should have been the behaviour of String#split when called with no arguments. But treating one particular non-zero-length string as a special case is confusing, surprising behaviour that is pretty much impossible to guess from the behaviour of the same method on any other input.
In other words,