How long does one party control the White House in US politics?


As a millennial born in the late ‘80s, I have a soft spot in my heart for ’90s alt rock, remember a time before smartphones (but not before computers and the internet), and have taken it as a given that the political pendulum swings every decade, making it essentially impossible for one party to hold more than 3 terms in office (I only vaguely remember the G. H. W. Bush vs. Clinton election). So when Donald J. Trump beat Hillary Clinton in November 2016, I, like most people, was surprised that our electorate had swung so far right as to elect a populist with a xenophobic agenda who was openly supported by the KKK and the so-called ’alt-right.’ I won’t pretend that I actually thought Trump would win the presidency or somehow saw this coming, but I did predict (seriously, you can ask some of the friends I talked to off the record) that if she won, Clinton would be a one-term president. Not because she’s a woman or anything like that, but for the simple reason that since the 1950s, no party has held the White House for more than 3 consecutive terms.

My question now is whether this has always been the case. Incumbent presidents still have a 69±1% re-election rate, but over at least the last 30 years, it seems as though the incumbent party’s re-election rate has fallen. Is that really the case? After looking through the empirical data to answer that question, I will close this post with some qualitative context for the quantitative data and a brief discussion of what, if anything, I think the answer means for democracy.


All data for this essay come from 270toWin (who, in turn, cite Wikipedia for at least some of their numbers). Data were retrieved using `readHTMLTable` from the XML R package. All source code is available on GitHub.

How long do parties control the White House?

For want of a better word, how long is the typical party ‘dynasty’? That is, how common is it (going back to 1789) for one party to maintain control of the White House for 2 terms, 5 terms, or only 1 term at a time? There are some recognizable ‘dynasties’ throughout history, most notably the Jeffersonian Democratic-Republicans and, most recently, FDR’s New Deal Democrats. But in general, 2 terms is the median length of any one party’s hold on the White House (hereafter WH), and while the distribution is inherently skewed by the impossibility of negative terms, the average isn’t much higher: ~2.48 terms.
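For readers who want to check these summary statistics, here’s a rough sketch in Python (the original analysis was done in R, and the hardcoded list below is my own tally of party-control runs from the election results, so treat it as an approximation):

```python
from statistics import mean, median

# Length, in 4-year terms, of each uninterrupted run of one-party control
# of the White House, 1789-2016 (my own tally from the election results).
runs = [3, 7, 3, 1, 1, 1, 2, 6, 1, 1, 1,     # Washington through Cleveland's first win
        4, 2, 3, 5, 2, 2, 2, 1, 3, 2, 2, 2]  # McKinley through Obama

print(median(runs))          # 2
print(round(mean(runs), 2))  # 2.48
```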


The most recent ‘long dynasty’ of 4 or more consecutive terms by the same party was FDR and Truman (5 terms total), which ended over 60 years ago (20% of the nation’s history). Does that mean that long-running party dynasties are over and we’re now just electing individuals for two terms in a row and then alternating parties? Maybe. The data tell two stories. The first is that the long-term trendline does have a negative slope. The second is that the slope is barely negative, and the confidence interval contains positive slopes: while we have fewer long runs of one party retaining the WH, we also have fewer one-term presidencies where the WH swaps back and forth between parties like it did in the 1840s and 1870s. That is, the standard deviation of the number of terms one party holds before turning over the WH has fallen from 2.16 terms pre-1900 to only 1.09 terms since the beginning of the 20th century.
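The dispersion claim can be checked the same way. Splitting my hand-tallied run lengths at the turn of the century (counting the McKinley-through-Taft run as 20th-century) reproduces the two standard deviations quoted above:

```python
from statistics import stdev

# Hand-tallied lengths of each uninterrupted one-party run, split at 1900.
pre_1900  = [3, 7, 3, 1, 1, 1, 2, 6, 1, 1, 1]     # 1789 through 1892
post_1900 = [4, 2, 3, 5, 2, 2, 2, 1, 3, 2, 2, 2]  # 1896 through 2012

print(round(stdev(pre_1900), 2))   # 2.16
print(round(stdev(post_1900), 2))  # 1.09
```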

Figure 2 shows the moving averages of the length of political party dynasties. Each ‘dynasty’ is a point on the graph; the blue line is the ‘noisy’ moving average of the number of terms the incumbent party has held the WH over the last 5 elections, and the red line is the slightly smoother moving average ‘dynasty’ length over the last 5 periods of uninterrupted party control of the WH. The black line is the trendline for the smoother of the two moving averages.
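The smoother of the two lines is just a trailing window average over successive dynasties. A minimal sketch, again using my own tally of run lengths as a stand-in for the real series:

```python
def trailing_avg(xs, window=5):
    """Average of the last `window` points at each position (fewer at the start)."""
    return [sum(xs[max(0, i - window + 1):i + 1]) / min(i + 1, window)
            for i in range(len(xs))]

# Dynasty lengths in chronological order (my own tally, as above).
runs = [3, 7, 3, 1, 1, 1, 2, 6, 1, 1, 1, 4, 2, 3, 5, 2, 2, 2, 1, 3, 2, 2, 2]
print([round(x, 1) for x in trailing_avg(runs)])
```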



Let’s look at this with a little context. The longest-running party dynasty was that of the Jeffersonian Democratic-Republicans, which didn’t end until Andrew Jackson won in 1828 with the newly formed Democratic Party (which, at the time, was built on a platform of White Supremacy), after an extremely contentious election in 1824 in which Jackson won a plurality of both the electoral college and the popular vote but, lacking a majority, lost the contingent election in the House of Representatives. In 1884, Grover Cleveland became the first Democrat (again, at the time the more conservative party) to win the WH after a string of 6 Republican terms that started with Abraham Lincoln and promoted a doctrine of equal rights and the reunification of North and South after the bitter American Civil War (1861-1865).

In 1912, Woodrow Wilson took advantage of internal divisions among the Republicans and captured the White House from Taft after the Republican vote was split between Taft and Teddy Roosevelt (running on the new Progressive Party ticket). Finally, the most recent political streak ended when Dwight D. Eisenhower became the first Republican (now the political right) to win since FDR reinvented the Democratic Party with the New Deal.

In other words, dynasties seem to rise and fall alongside dramatic shifts or reinventions within the parties. Even adjusting for the fact that the Democratic and Republican Parties switched sides (and essentially became new parties with old names), the longest-running party is the Democratic Party from Jackson through Wilson. Second place goes to the existing Democratic and Republican Parties, which started with FDR (or Eisenhower or Truman, depending on how you want to count them).

Political Party          First Presidential Win   Most Recent President in Office   Age (Years)
Federalist               1789                     1800                              11
Democratic-Republican    1800                     1828                              28
Democratic               1828                     2016                              188
Whig                     1840                     1852                              12
Republican               1860                     2020                              160

In other words, we’ve had the same parties for a really long time (in their current forms, roughly 80 years), and it feels like time for a change. We can see the internal factions clearly within the Republican Party now as they debate how to replace the Affordable Care Act. Because of the first-past-the-post voting system in the United States, a viable third party hasn’t been possible since the mid-1800s, and I don’t see that changing now. But it does seem like we’re entering a new era.
For the sake of historical record keeping, I really hope we finally dissolve one or both of the existing parties and name the replacement(s) something new. The question that remains is whether it will be the Democratic Party that needs to be reinvented after losing to the least popular, most disapproved-of candidate since we’ve tracked candidate approval ratings, or the Republican Party that can’t seem to pass a comprehensive health care bill despite majorities in the House and Senate, control of the WH, and 7 years of campaigning on repeal-and-replace to prepare. Or we should be so lucky as to institute a parliamentary system, or at least one with an alternative vote and viable third parties, but that’s just crazy talk.

On Exploitation and Thin Markets

Exploitation: a word so overused by the political Left, so ignored by the political Right, and so misunderstood by both sides. A straw-man attack against capitalism that is so often so poorly answered by Libertarians (myself included). All that said, this is an important issue that demands a reasonable response from Capitalists, rather than talking past the often valid concerns (with completely unrealistic solutions) of the Left or attempting to turn the problem around on Government as the exploiter (which can be and often is the case, but not in the ways the Right likes to claim). This is an attempt both to consolidate my thoughts about exploitation from an ethical perspective and to reconcile my free-market ideology with the fact that unregulated firms can and will find ways to exploit consumers.

What is Exploitation?

First, we have to start with the basics: What is exploitation? According to Rebecca Kukla at the Kennedy Institute of Ethics, the broadest definition of exploitation is simply one party “taking unfair advantage of someone else” (see her video clip on Exploitation for EdX). She goes on to clarify that exploitation is not coercive; coercion is something else entirely for the purposes of ethical and economic discussions. Instead, the decision made by the exploited party is voluntary and can still make that party better off, but the available alternatives are themselves very unattractive. In the terminology of Dr. Mike Munger (Duke University), the choice is voluntary, but not euvoluntary (that is, not voluntary the way we think of voluntary in common vernacular and in economic assumptions). This may or may not be because the exploiter has somehow limited the available choices, though there should be some exclusion for the natural consequences of poor decisions: e.g., it’s not exploitation for a spouse to make an ultimatum after the other partner was caught in an affair. In any case, the exploiter benefits from the fact that the exploitee has extremely limited or poor alternatives.

Taking the example of sweatshops, it’s true, as most libertarians like myself are quick to point out, that the workers are free to take other jobs if they can find them and are remaining employed voluntarily. It’s also true that manufacturing, even in the horrible conditions that many manufacturing workers face, can lead to substantial improvements in well-being, particularly in the next generation. And it’s true that the sweatshop owners do not adequately protect their workers, treat them more like capital than people, and continue to pay exceptionally low wages (to keep prices for Americans low) because they know their workers don’t have any better alternatives. In other words, these manufacturers are profiting off the fact that these workers were born in a place with few economic opportunities. Furthermore, they are benefiting disproportionately more than the workers (who are still receiving a perceived net benefit). Therefore, I think it’s fair to say that sweatshop owners are behaving in an exploitative manner.

Although sweatshops in developing countries make for great examples of exploitation, I want to focus on more first world problems, because those are better for highlighting different policy approaches to prevent, or at least mitigate, exploitation.

First World Problems (That aren’t exclusive to the developed world)

Low Wages and Benefits

Although Donald Gracchus Trump managed to turn the conversation of the election away from policy and toward his Twitter feed, Bernie Sanders seemed poised to make the minimum wage a key point in his platform had he gotten more news coverage. By no stretch of the imagination are minimum wage workers in the US being exploited like sweatshop workers in third world countries, but it is true that companies like WalMart, McDonalds, Burger King, KMart, etc. play legal games with part-time vs. full-time employment and other tactics to avoid paying benefits, avoid paying overtime, prevent employees from gaining the human capital needed to get promoted, underhandedly push people out, etc. All of this takes advantage of the fact that most of their entry-level workers don’t have better options; if they did, they probably wouldn’t be working in entry-level retail or fast food. In this way, there is a level of exploitation going on. Proportionality is also something to consider: a high schooler working at minimum wage is gaining valuable experience and resume building beyond just the paycheck; the 30-something single mother is not benefiting from this job as much as that teenager, but needs it more. Therefore I don’t think paying entry-level workers minimum wage and working them part time is inherently exploitative; it varies with the situation and future prospects of the worker. As an aside, while I don’t believe the ratio between CEO salary and entry-level wages is a meaningful metric, I also don’t really buy the “maximize shareholder value” argument, since economic fundamentals have a very limited role in stock price, and it is true that most of these companies would still be profitable if they didn’t play these employment games.

Price Gouging

In the wake of storms and other natural (or manmade) disasters, gasoline, generators, water, ice, etc. come into very high demand while the supply of these things is either fixed or falls, so Econ 101 supply and demand models suggest that the prevailing market price will spike up to a new equilibrium. And it typically would, except most states have anti-gouging laws, and Governors and Mayors love to tout their enforcement of these laws in the wake of such an event. Of course, jacking up the price of something right before everyone in town needs it (e.g., bottled water) does feel like kind of a dick move, and liberals in general tend to have a visceral reaction that this is wrong and exploitative: “what about the people who can’t afford the higher price?” Take this practice to its logical, even if hypothetical, conclusions, and you can have stores arbitrarily upping their prices the night before a storm (as Russ Roberts describes in his book, The Price of Everything) or resellers buying up all the bottled water in an area before a storm and reselling it after the storm for a substantial profit, effectively arbitraging across time. Such price hikes or intertemporal arbitrage can yield a very large profit as a result of others’ misfortune, and there’s always something a little ethically uncomfortable about that.
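To make the Econ 101 point concrete, here’s a toy linear supply-and-demand model (all numbers are made up purely for illustration): when the supply curve shifts left after a storm, the market-clearing price jumps.

```python
def equilibrium_price(a, b, c, d):
    """Linear demand Q = a - b*P meets linear supply Q = c + d*P
    where a - b*P = c + d*P, i.e. at P = (a - c) / (b + d)."""
    return (a - c) / (b + d)

# Hypothetical bottled-water market: demand Q = 100 - 2P, supply Q = 10 + 4P.
print(equilibrium_price(100, 2, 10, 4))   # 15.0 before the storm
# The storm knocks out supply (intercept falls from 10 to -20):
print(equilibrium_price(100, 2, -20, 4))  # 20.0 after the storm
```

Anti-gouging laws effectively cap the price at the pre-storm level, so the quantity demanded at that price exceeds what suppliers will provide, and the gap shows up as empty shelves rather than higher prices.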

Captive Markets

In what can be thought of as a very particular form of price gouging, we have Captive Markets. When the provider of a service also sells a secondary product (usually food or drink), it can create a local monopoly for that product: if you (voluntarily) buy the service it provides and would like to (voluntarily) buy the ancillary product, then you must buy it from the service provider at monopoly prices. For example, after jumping around in a mosh pit at a rock concert, you must buy water from the venue; when seeing a movie, you can only buy popcorn and soda from the concession stand in the theater. Other classic examples of Captive Markets include food and drink on planes, trains, and boats, college textbooks, and food and drink at sporting events, or really any place you go to for a reason other than food, can’t leave easily, and can buy food at (but where bringing in outside food is banned or discouraged). Captive Markets are localized monopolies, so they tend to charge the monopoly profit-maximizing price as opposed to the (much lower) competitive market price. This is exploitation because your options are very explicitly limited by the supplier of the primary service (even though that primary service is still competitive).
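The ‘monopoly profit-maximizing price’ has a clean closed form in the textbook linear-demand case, which shows how large the gap with the competitive price can be (numbers hypothetical):

```python
def monopoly_price(a, b, c):
    """With linear inverse demand P = a - b*Q and constant marginal cost c,
    the monopolist sets marginal revenue a - 2b*Q equal to c, so
    Q* = (a - c) / (2b) and P* = a - b*Q* = (a + c) / 2."""
    q_star = (a - c) / (2 * b)
    return a - b * q_star

# Hypothetical concert-venue water stand: P = 10 - 0.5*Q, marginal cost $1.
print(monopoly_price(10, 0.5, 1))  # 5.5 -- vs. a competitive price near $1
```

Under perfect competition the price gets driven down toward marginal cost; a captive-market monopolist instead charges the midpoint between the demand intercept and marginal cost, which is why venue water costs several times what it does outside.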

The Wrong Approach

Exploitation is unethical, and while it can range in severity from degrading human dignity at its worst to merely being a dick move at best, it’s in the interest of society to prevent exploitation as much as possible and to provide legal recourse for the exploited when we can’t prevent it. Unfortunately, most proposals I hear (from the Right and the Left, but mostly the Left) would not, in my estimation, actually work.

Teach People to be Nicer

This is really more of a general comment about policy, but it’s a good starting place for this conversation. Any solution that relies on people doing the right thing when the wrong thing (ethically speaking) is more profitable and difficult to prosecute is going to fail. You can wax all you want about how people should be more loving and less motivated by money, but that won’t change the hearts of humanity. To quote Milton Friedman, “I do not believe that the solution to our problem is simply to elect the right people. The important thing is to establish a political climate of opinion which will make it politically profitable for the wrong people to do the right thing.” Even though most people, most of the time, aren’t out to get you, a few bad apples will spoil the bunch and break down the system.

None of the games employers play to avoid paying employees benefits (e.g., health insurance) are illegal, which is why employers play them. After the ACA mandated that employers offer health insurance to employees working at least 30 hours a week, many employees saw their hours cut to 29 hours a week. Many Democrats responded by recommending that the threshold be lowered to 25 or even 20 hours a week. Most Republicans just ignored the issue or pointed to it as evidence that the entire bill was flawed, but I did hear one congressman (I don’t remember who) give a genuinely intelligent response: we should raise the threshold to 40, because no matter what threshold we set, large employers’ legal teams will find a way around it, but at least if the threshold is 40 instead of 30, hourly workers will get an extra 10 hours of paid work. The specifics of this policy aside, this congressman understood that you can’t legislate that people do the right thing.

Ban It!

The problem with just outright banning something is that sometimes things that look like (or maybe even are) exploitation are still justifiable. In an essay on this topic, Munger tells the story of price gouging laws gone wrong. After a storm, there were widespread power outages, which meant that by day 2 or 3, refrigeration was becoming an issue and food was spoiling in people’s homes and in stores. Two “hooligans” from a neighboring town heard about this plight and saw a business opportunity. They rented a refrigerated truck and some chainsaws, and bought hundreds of pounds of ice. They drove to the affected town (using the chainsaws to clear the roads of fallen trees where necessary) and began selling ice for $12 a bag. This is an extremely high price for ice, yes, but ice was still limited and important for keeping medications and the like cool, so the marginal benefit exceeded the marginal cost. However, these two guys were eventually arrested for price gouging. After that, two things happened: the ice was confiscated and not made available to people, and no one else tried bringing ice in from outside. Munger’s larger point is that even while exploitative practices can be unethical, the alternative can be even worse. If we accept the premise that exploitation is non-coercive, then this becomes a tautology: the transaction is still voluntary, so removing the option to make that transaction (that is, further limiting the already bad list of options) makes the exploited individual worse off and is, in fact, exploitative in its own right. This same logic applies to most, if not all, price control efforts. (Further discussion along these lines.)


Equally undesirable is the response from the Right (the previous two proposals usually come from the Left), which is typically some version of trickle-down Reaganomics and deregulation. I’m generally a big fan of deregulation, but the whole concept of exploitation implies that one actor has market power, which constitutes at least a partial market failure and therefore justifies measured government intervention. Granted, most government intervention actually makes the problem worse (as shown above), but the kind of “deregulation” usually proposed by the Right exacerbates the issue by giving the exploiter even more market power. I don’t say this often, but the above injustices can and should (I believe) be corrected via government action; just not the kind typically proposed.

Reconciling Free Markets and Government Intervention

The root of exploitation is the lack of alternatives. This is part of the definition, but I feel it often gets overlooked in favor of blaming and hating the exploiter. Most, probably all, attempts to legislate away the possibility of exploitation by targeting the exploiters are destined for failure. (Though they do stand a slightly better chance if you pull a Henry VI and first kill all the lawyers.) The best way to end exploitation is not to stop exploiters, but to give the exploited better options, and we can (in most cases) do that with better market competition. Sometimes, though, government is needed to facilitate or allow competition (see Roth for several examples of government designing markets so they become more competitive).

Although this is by no means intended to be a complete list, here are some ideas for a market-based approach to the issues listed above. Again, all of these focus not on preventing a firm from exploiting others–workers or consumers–but on giving the exploited parties more/better alternatives.

  • Low Wages and Employment Law Games
    • Minimize opportunities for discrimination in the hiring process. For example, initiatives like “Ban the Box” seek to make it easier for individuals with a criminal record to get a job interview and not be immediately eliminated from the candidate pool based on that record. Similarly, we know that white men are more likely to get interviews than women or minorities with identical resumes, so instituting some kind of quasi-standard nameless resume/cover letter could also help even the playing field.
    • Make it easier and safer to change jobs. This would include reforming the laws around non-compete clauses in contracts, as well as making things like health insurance either easily transferable or not tied to employment in the first place. Although the ACA made this slightly better by ensuring coverage for pre-existing conditions, much of the current Republican replacement plan (AHCA) would undo many of these gains and make it harder for people, particularly people with illnesses or sick family members, to change jobs.
  • Price Gouging
    • In some cases, as Munger notes, price increases are desirable to allocate resources based on highest need rather than order in the queue. While all allocation methods have issues (pricing, queuing, rationing), the price mechanism is organic and therefore adaptable, which is exactly what’s needed in a crisis.
    • Furthermore, the price increases should be evanescent. We as a society have decided (or at least our representatives have) that we want to provide essentials (water, foodstuffs, first aid supplies) to victims of storms, etc. Any price hikes should only last until FEMA comes in and floods the market with water, etc.
    • Ultimately, if it wants to avoid problems, the State needs to allow price increases or increase the supply of certain goods to compensate for increased demand and decreased supply. Unless it does one of those, there will be shortages or price shocks. There is no way around this.
  • Captive Markets
    • In cases where the primary service provider is also the ancillary provider (such as the movie theater providing both the movie and the popcorn and soda, or the venue selling water at rock concerts), requiring the provider to allow outside food and beverage would lower prices by making the ancillary service compete with all outside providers.
    • In cases where the providers are different (e.g., a restaurant where a particular tour group stops for lunch), kickbacks from the ancillary provider to the primary should be banned. Since the primary provider is acting in a fiduciary capacity when it selects the ancillary provider, the ancillary should do nothing to influence the primary’s decision (besides competing to provide the best product).

Again, this is not, nor is it intended to be, a comprehensive list of free market-based interventions to prevent exploitation. The point I’m trying to make is twofold: (1) market-oriented government interventions for these problems exist (oxymoronic as that phrase may sound), and (2) these interventions are often negative, meaning they prevent some action rather than forcing a firm to take some action: banning non-compete clauses or bans on outside food/drink, as opposed to some kind of (positive) price control. The goal of all these interventions is to make the market “thicker”, that is, to add options.

It’s the Market Structures, Stupid

Tying these pieces together, it’s worth reiterating that the problem isn’t with capitalism, or evil corporations or CEOs, or profit maximization or self-interest. The problem is that in some cases the market is too thin for competition to be effective, and people engaging in those markets as price takers aren’t going to get a good deal. Rather than trying, usually unsuccessfully, to change how people act in a thin market, the policy response should be to make the market thicker: provide more options rather than further restricting the available options, whether directly (as in banning certain secondary markets) or indirectly (as resulted from policies like rent control).

I don’t favor market-based solutions because I trust businessmen. I don’t; I think most of them are full of shit. I favor market-based solutions because I think most politicians are full of shit too, and I definitely don’t trust them. I want a system where even people who are full of shit can do the right thing by the most vulnerable in society because it’s in their self-interest to do so, and people doing what is in their own, perceived self-interest is one of the few things people will reliably do.


A Brief Commentary on Communication as an Influential Force in Societal Development


Enterprise: a project undertaken or to be undertaken, especially one that is important or difficult or that requires boldness or energy

Human: a species of bipedal mammal who, as far as humans are aware, is the only species capable of self-aware consciousness[1] and therefore intentional self-improvement

Human Enterprise: the project of all humanity to improve the well-being of one’s self in society and society as a whole by means of self-aware consciousness and initiative

Bullshitious: Bullshit, but making it sound fancy[2], or what I do on this blog.

In a way, the entire human enterprise is an experiment in communication. It was the human capacity for communication that enabled cooperation, engendering a virtuous cycle in human and social evolution. It was communication and collaboration that allowed humanity to survive despite facing faster, stronger, and fiercer predators and prey. And it was communication, particularly written communication, that laid the groundwork for civilization, a feat never achieved by any species other than Homo sapiens sapiens (at least not on Earth).

I define communication as the act of conveying information or ideas from one party to another. Good communication is communication that achieves this goal: the receiving party received and understood the information or idea as the sending party intended. This applies both to semantic communication (do you understand what I said?) and to what I’ll call “essential” communication (as in “essence”): do you understand what I mean, how I feel about the topic, and how I want you to view me?

The human enterprise, pre-historically speaking, has two components: living in larger groups and adapting the environment to meet our needs, rather than purely responding to the environment. Both of these uniquely human accomplishments began with the rise of agriculture. By adapting the land (by plowing, weeding, etc.) to our need to acquire food with as little effort as possible, humans slowly ceased to be nomads and settled in set locations. Farming also enabled specialization, allowing some people to focus their energy on things other than food acquisition, and towns formed around that specialized labor.

We then come to the fabled tale of the Tower of Babel[3]. The Babylonians were poised to complete the great accomplishment of building a stairway to heaven, so they could make a name for themselves and have direct access to God. I can only assume that they had already booked Led Zeppelin to play the ribbon-cutting ceremony, but God confused their speech and they scattered before they could finish the project. The question of whether this actually happened like this is, like the rest of Genesis 1-11, completely beside the point[4]. Rather, this ancient story gives us a glimpse of human relationships, with each other and with God.

The problem at Babel (besides the self-proclaimed demigod status that so angered God) was that people couldn’t communicate with each other. When Enkidu told Urshanabi that he needed more bricks on the southern wall, Urshanabi didn’t get the message and didn’t bring the bricks. Then Enkidu got pissed off and left to go hang out with Gilgamesh, leaving the southern wall unfinished. There was no semantic communication happening, much less conveying the essential idea that Urshanabi had for a thing he called an “arch” to stabilize the second level.

The point is that when we don’t speak the same languages (in the case of Babel, literally, but this applies almost equally well to the “languages” of development, project management, etc.[5]) we don’t make any progress. Second, the complexity of endeavors in the human enterprise requires complex communication. Building a massive tower, developing a successful application, building and maintaining a highway infrastructure, or mending damaged race relations in a country with a history of slavery or apartheid requires communicating extremely complicated sets of instructions, goals, and sentiments. This goes far beyond the (still very impressive) proto-language bees use to communicate how far away and in which direction a food source is[6]. Human communication involves conveying abstract emotions and ideas, visions for the future, and perceptions of reality as interpreted by the speaker.

Following from this, all of the above endeavors also require that we communicate about the future. Again, this is something that other species occasionally show signs of[7], but humans are unique in that we display this behavior ubiquitously. The human enterprise necessitated the creation of an ever-increasing lexicon and complex grammatical and syntactical structures. It’s ultimately unknowable which came first, extensible language or abstract thought, but at some point humans began conceiving of and conveying beliefs about a future state of the world: “I bet if I put this little seed in the ground and pour water on it, a plant will emerge from the ground in a couple months.” Throughout the millennia, the problems facing humanity have changed, and on and off since the times of the Greeks and Romans, we have struggled with vague ideas and concepts like “purpose,” “morality,” and Eudaimonia. And so the complexity of our language grows in a self-reinforcing feedback loop to match the newer and “higher” needs and goals of the human enterprise.

Going back to my opening statement, I want to dive into what I mean by “experiment” in this context. The entire human enterprise is an experiment in communication. An experiment is trying something and seeing what happens, and that’s really what we’ve done as a species. I’m going to begin closing with a series of somewhat disjointed thought experiments and case studies in communication, and then attempt, feebly, to tie it all together for an actual closing.

Military conquest

The entire concept of war (two leaders disagree about economic matters, so they use high-minded rhetoric to send the socio-economically disadvantaged from each side to go kill each other until one side gives up) has always confused me[8]. Nevertheless, war and conflict have been pervasive throughout human history. I’ve always found it fascinating how many factors beyond just the skill of the troops affect who “wins” a battle or a war: logistics and supply lines; disease, both in the armies and among the home front populations[9]; terrain and weather; and, of course, lines of communication.

There is a probably apocryphal story about Prince Rupert, a cavalry general in the first English Civil War, who received ambiguous orders from the king via a letter that had been written, presumably, a week or more before. Based on the orders in the letter, Rupert attacked the Parliamentarians at Marston Moor, where he suffered a total defeat[10]. Allegedly, Rupert carried the orders from the king in his pocket until the day he died as a challenge to anyone who accused him of making a mistake by attacking at
Marston Moor[11]. His instructions from the king were ambiguous at best, and arguably worse than useless given the turn of events between the writing and the reading of the letter.

Military leaders have succeeded and been defeated based on good or bad intelligence—whether due to luck, dereliction of duty, or outright betrayal—and each campaign is a data point in the “what works in communication” experiment. Different generals with their different styles and temperaments use different modes of communication, and learn over time what works and what doesn’t. What encryption and codes successfully obfuscate meaning from the enemy while still being accessible to your allies? How much lag time should I assume and build into my orders? How should I balance flexibility with
structure in my orders to a part of the war I know little about? What is the quickest way to reliably get orders from one side of a battlefield to the other? If we accept (as almost everyone does) that military leaders have shaped history, then we must appreciate the role experimenting with different styles and modes of communication has played in world history.


Marketing and advertising

Marketing and its close friend advertising are where the line between semantic and essential communication becomes at once a critical distinction and a pair of double-Dutch ropes. Beyond telling you what the product is, a good advertisement subtly convinces you that the product is a good one and the producer is reliable and trustworthy. Bad advertisements, like the recent Pepsi Super Bowl commercial that sparked a backlash[12], are bad not because they fail to convey what the product is (semantic communication), but because they fail to paint the producer in a positive light—like Pepsi being tone deaf and really not giving a shit about social justice.

And then we come to the more insidious parts of advertising: you will be complete if you buy this. For this message, subtlety is critical as marketers balance making the message easily inferable without being so blatant as to turn people off. Much as I generally hate this messaging, I can’t deny that when it’s well done, it’s brilliant. Every advertisement is an experiment in what motivates humans. That’s not to say that advertising is all there is. On the contrary, vaporware always crashes and burns. Nor is it to say that advertising is the whole of marketing or that all marketing is insidious. It is to say, though, that the firms that grow and last do so because they’ve experimented and found a communication strategy and brand image that works for them. When they gain market share beyond a small critical mass, it’s as much about communication as it is about having a high-quality product.


Throughout history, territory has transferred hands, fortunes have been made and lost, and the expansion of the human enterprise has occurred through competing messages, competing media, and novel ideas put into language. The evolution of language—from the present to the future to the abstract—was inseparably linked with the expansion of philosophy and human intellectual pursuits. The human enterprise, that never-ending attempt to improve the quality of life and epistemological enlightenment for the human population, is only advanced when we interact by sharing ideas. When we communicate with one another. It’s not always advancing, but that’s where the experiment comes in, and when it works, we need to capitalize on that positive outcome, remember the methodology used to create it, and expand eudaimonia.


[1]  By “self-aware consciousness” I mean awareness of one’s own consciousness, and
the act of “second order thought” that, according to St. Thomas Aquinas, sets
us apart from other creatures. See Summa Theologica.

[3] Genesis 11

[7] For example, crows using traffic to crack nuts or monkeys storing stones to
throw at zoo goers. Both of these are real examples that are really exceptions
that prove the rule. Animals may be capable of adapting to the environment and
doing short term strategic planning, but most of what they do, including
migration, appears to be instinctual rather than based on long term planning.

[8] And also the genius that is Bill Watterson.

[9] E.g. the “Germs” part of Guns, Germs, and Steel, by Jared Diamond.

[12] AKA the “attractive lives matter movement”
Or, my personal favorite play on this ad from SNL:



Comparisons between Healthcare and Auto Repair

The Analogy

In the United States, cultural and government mandates have tended to place healthcare in its own category, distinct and siloed off from other industries. It’s true that both medical practice and the financing thereof are, in many ways, unlike any other industry, but that’s not to say they need to be treated differently or that healthcare can’t learn from other industries.

Healthcare delivery can learn from the supply chain management of chain restaurants and their ability to deliver consistent quality across locations, time, and people by using standardization (Gawande, Big Med, 2012). Healthcare can learn from the airline industry and the power of the checklist in quality enforcement (Gawande, The Checklist Manifesto, 2009). Even though the healthcare silo delineates between a Master’s in Business Administration (MBA) and a Master’s in Health Administration (MHA), the fundamental principles of business, growth, and management are still more or less the same (Hekman, 2010), even if they’ve tended to be kept separate.

Further, I believe that we—patients, citizens, consumers, policy makers—can learn a great deal about healthcare financing by looking at other industries. Health insurance is unlike other types of insurance, but I would argue that this represents a misunderstanding of how insurance is applied to healthcare, rather than a difference in the fundamental nature and category of healthcare and health insurance. In this case, I believe we could learn a great deal about a better way to implement health insurance by looking at the auto insurance industry.

I’ve often made this analogy, but I want to flesh it out, and see how far it really goes.

Parallels between Healthcare and Car Repair

Because many people balk at the comparison initially, I want to start by highlighting areas where these two seemingly disparate industries are, in fact, quite similar.

Specialized knowledge of a complex system

Although the human body is more complex and less well understood than the internal combustion engine, both are sufficiently complicated as to require specialized knowledge to diagnose and fix/treat. In both cases, therefore, the learning curve of acquiring that specialized knowledge and the costs—money, time, etc.—create a natural barrier to entry to supplying these particular services. The fact that such a barrier exists means that specialization and division of labor is not just advantageous, as it usually is (Roberts, 2010), but truly necessary: self-sufficiency is not an option for anything beyond the most rudimentary oil change or wound care. The fact that the human body is more complicated, or that physicians receive commensurately more specialized training and certifications, does not change this. It only means that physicians require more reimbursement than mechanics (both in pecuniary form and in prestige), which, in the US, they receive.

Asymmetric information

As a result of the specialized training and knowledge of mechanics and physicians, we have a situation that economists call “asymmetric information,” meaning that one party in a transaction knows something that the other party doesn’t (Arrow, 1963). This includes cases on the consumer side where the consumer doesn’t disclose germane information to the provider (mechanic or medical provider), whether due to embarrassment or shame—for example, sexual practices or driving habits (Caine & Tierney, 2015)—or due to not believing that the information is relevant. This informational asymmetry also means that the suppliers have a much better understanding than the consumers of how reliable a diagnostic test is and whether or not a given service is truly necessary.

Fiduciary responsibility and history of bad actors

Because consumers don’t always know better, they tend to trust the experts—in this case, mechanics or physicians—about what services they do or don’t need, and an unscrupulous provider could use that to his or her advantage by recommending services that are not in the best interest of the consumer. Although the trope of the dishonest mechanic who recommends unnecessary parts and services is an old one, the same behavior in physicians—over-ordering unnecessary or low-value care due to greed or defensive medicine—has recently come into the public discussion as well (Gawande, 2009; Gawande, 2015).

Insurance to spread out risk

Although we expect to require minor maintenance services for both our cars—oil changes, tire rotations, etc.—and our persons—routine physicals, medications, etc.—we also expect to have larger maintenance needs, but only occasionally. I might need an annual physical and an annual tire rotation, but I also expect that there’s a possibility that I will get in a motor vehicle accident and require both medical care and car repairs, both of which will be quite expensive. It’s possible I’ll go through my whole life without a serious accident, but just in case, I’ve bought insurance to cover the costs of that emergency.

Furthermore, as of 2014, both health insurance and car insurance are required for almost everyone. New Hampshire and Virginia do not require car insurance and there are some exceptions to the individual mandate portion of the Affordable Care Act (ACA; aka ‘Obamacare’), but in general, insurance is required for everyone with health and/or a car.

Service is a small part of a critically important outcome

Both good health and functioning cars are critically important to the survival and well-being of most people. A working car obviously lacks the life-and-death-hanging-in-the-balance mystique of trauma surgery and isn’t critical in densely urban areas, but reliable transportation is a major determinant of health. Our cars get us to and from work, to and from grocery stores, and generally play a role in most aspects of daily living in suburban and rural areas (RHIhub, 2017).

However, a mechanic isn’t needed to drive the car, just to fix the car when it breaks, just as a physician isn’t needed to help us perform activities of daily living: health is. But so much of both our physical health and the “health” of our cars has to do with our environments and our behavior. Salty roads take a toll on the body of a car, and where we live (i.e., how rural) affects how many miles we put on our car. By the same token, where we live affects access to healthy foods and clean air. Our behavior also affects our health and the health of our cars. By some estimates, medical care accounts for only 20% (or less) of health outcomes, with the remainder driven by social determinants of health and behavior (McGovern, Miller, & Hughes-Cromwick, 2014). I don’t know if anyone has done something similar for automotive outcomes, but it seems plausible to me that this is also the case for cars: regular access to high-quality mechanics may account for only 10-20% of vehicle life.

Market structure

Finally, there’s market structure. The majority of mechanics and the majority of physicians are in small, private practices, with a handful of assistants to handle scheduling, billing, etc. Additionally, dealerships and hospitals hire a number of mechanics and physicians, respectively. The prices at dealerships and hospital-owned physician practices tend to be higher, but more reliable. The quality of independent practices tends to vary a lot more, such that you can probably get a better deal on a physical or repair by going to an independent provider, but the transaction costs to identify the high-quality providers can be considerable.

Where the Similarities End

Despite their similarities, auto repair is obviously not health care. Most notably, cars are not sentient beings; they are man-made machines for which we possess a complete blueprint. Thus, the ethical considerations associated with auto repair are really just commercial/economic ethics, whereas medicine has a separate set of ethical considerations apart from economic activity.

Several additional differences between auto repair and healthcare exist in the market dynamics of both auto repair and insurance purchasing. With health insurance, the majority of individuals get coverage through their employer or the US government. Auto insurance is virtually entirely private, with individuals purchasing their own insurance. Additionally, price transparency is virtually non-existent in healthcare. While auto body repair still works on an estimate-only basis, prices are widely published online, and all mechanic shops will provide an estimate in advance at no cost, which cannot be said of medical providers.

Nevertheless, the non-ethical differences between healthcare and auto repair are really just differences of degree, rather than fundamental differences in the nature of each service. The human body is not fully understood the way cars are, but from the point of view of both the consumer and the mechanic, the result is more or less the same: cars are still a complex system that individuals don’t fully understand, though the mechanics understand the system better than the customer; the same is true of healthcare, just with a greater degree of complexity in the subject.

What We Can Learn

The point of all of the above was to show that healthcare and auto repair are somewhat analogous, so what can we learn from this analogy? What lessons—successes or failures—from the auto repair industry can we apply to healthcare?

Lemon Law and Malpractice

When a physician orders, say, a total shoulder arthroplasty and the patient survives and is able to move her arm, the surgery is considered a success. From the perspective of the patient, though, the surgery should only be considered a success when it fixes the underlying problem—in the case of a shoulder arthroplasty, when the pain or limited motion is resolved. Even if the surgery wasn’t a success and the patient died on the table, the physician would still get paid. That is understandable, in that surgery is risky and we shouldn’t incentivize physicians to avoid caring for risky patients, but it’s no surprise that physicians tend to over-order invasive procedures—procedures that other countries avoid due to cost and limited success (Reid, 2010)—when physicians see all the benefits with little-to-no risk of a downside.

In the rest of the economy, particularly for automobiles, we have “Lemon Laws” that protect consumers—or at least give them a viable recourse—from being taken advantage of by the proverbial used car salesman. Specifically, the Magnuson-Moss Warranty Act of 1975 (MMWA) mandates federal standards for warranties on consumer products and vehicles, and most states have passed similar, additional laws. MMWA and the subsequent state laws are far from perfect. Warranties are full of legalese, and there’s a reasonable argument to be made that such consumer protection laws make us less safe because we all have warning fatigue. Nevertheless, a legal apparatus exists such that warranties must meet certain standards and are enforceable, and there is a competitive advantage to be gained by having a warranty or guarantee (Tommy Boy notwithstanding).

In medicine, we have malpractice lawsuits, but those are for cases of gross negligence or obvious misdiagnosis; a “successful” procedure that didn’t deliver the advertised results shouldn’t result in a malpractice suit, because the physician did everything right in the execution of the procedure, just not in the advertising and prognosis. Still, physicians should be held accountable for their recommendations, given that they act in a fiduciary capacity in an environment of asymmetric information. If they recommend a procedure that empirically has a low rate of solving the problem from the patient’s perspective, they should not receive the same financial benefit from it as when they recommend and perform a procedure that does solve the patient’s problem.

Would it be possible to implement a medical warranty? Politically, the answer is almost certainly no, given America’s history with health reform (Steinmo & Watts, 1995). But such laws have mitigated the negative effects of bad actors acting in a fiduciary role: in the auto industry, false advertising isn’t in the long-run financial interest of car salesmen. If we were to implement similar laws in healthcare, we could prevent over-ordering of invasive procedures from being in the long-term financial interest of physicians. Such laws wouldn’t penalize physicians with malpractice suits, but they could reduce what physicians are paid for a procedure that didn’t benefit the patient. For example, rather than being paid the same rate by Medicare whether the patient survives, recovers, or improves, Medicare could pay 100% of current rates for survival (the current standard), 110% for improvement, and 65% for recovery without improvement. Note that while this would seem to incentivize a physician to kill the patient on the table (and make it look like an accident) rather than risk a recovery without improvement, I think it’s fair to say that (1) the vast majority of doctors would never intentionally kill a patient and (2) that would clearly fall under malpractice and, if proven, would result in a revoked medical license and possibly criminal charges.
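To make the tiered idea concrete, here is a minimal sketch of an outcome-based payment schedule. The multipliers are the illustrative numbers from this paragraph, not actual Medicare policy, and the function and outcome names are my own invention:

```python
# Hypothetical outcome-tiered reimbursement, using the illustrative
# multipliers from the text (NOT actual Medicare rates).
OUTCOME_MULTIPLIERS = {
    "improved": 1.10,                       # underlying problem resolved
    "survived": 1.00,                       # the current standard
    "recovered_without_improvement": 0.65,  # survived, but problem persists
}

def reimbursement(base_payment: float, outcome: str) -> float:
    """Scale the standard payment by the procedure's outcome tier."""
    return base_payment * OUTCOME_MULTIPLIERS[outcome]

# An illustrative $10,000 procedure under each outcome:
for outcome in OUTCOME_MULTIPLIERS:
    print(f"{outcome}: ${reimbursement(10_000, outcome):,.2f}")
```

Under this sketch, the physician is still paid for a risky case that merely survives, but the financial edge goes to procedures that actually fix the patient’s problem.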

Insurance Markets

Insurance, by definition, disperses risk. For high-cost, low-probability events, we spread the risk such that even when the unlikely event occurs, we aren’t wiped out by the cost. For example, home insurance costs less than a dollar a day, but in the unlikely event that your house burns down, you aren’t wiped out financially and left homeless. How insurance is implemented, though, varies between healthcare and auto insurance. Here are some of the key differences in auto insurance, relative to health insurance, that I believe could be implemented in healthcare to the benefit of patients.

Not covering routine services

No car insurance covers oil changes, tire rotations, etc. These are considered givens, not part of the risk that insurance covers. After all, the probability of needing an oil change is 1, so the actuarial value of an oil change is simply the price of an oil change. Similarly, a routine annual physical is an expected expense. There’s no “risk” of having an annual physical—it’s a given—so insurance, by definition, doesn’t make sense. In the auto industry, we follow this definition of insurance, but for some reason, we treat healthcare differently.

With very few exceptions, almost everyone can afford a $70-120 annual physical (adjusting for local cost of living), especially with the benefit of health savings accounts to put away $10 a month. By removing these and other routine medical expenses from what insurance is expected to pay, premiums will fall by the expected cost of an annual physical (price times the probability that an individual in the insurance pool gets a physical) times the overhead markup (about 20% in the US). Furthermore, if individuals are responsible for purchasing their own medical services, demand for price transparency will increase, and competition between providers will drive down prices.
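As a back-of-the-envelope check on that claim, here is the arithmetic with made-up but plausible inputs; the price, take-up probability, and markup below are my assumptions for illustration, not actuarial data:

```python
# Rough premium reduction from carving a routine physical out of coverage.
# All inputs are illustrative assumptions, not actuarial data.
price = 95.0     # midpoint of the $70-120 physical
take_up = 0.80   # assumed share of the pool getting an annual physical
overhead = 0.20  # approximate US insurer overhead markup

expected_cost = price * take_up  # actuarial value per member per year
premium_reduction = expected_cost * (1 + overhead)
print(f"Premiums fall by about ${premium_reduction:.2f} per member per year")
```

Small per-member savings, but the larger effect the paragraph argues for is the downstream demand for price transparency once patients pay for routine care directly.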

These claims make two assumptions, neither of which is entirely true in the current US healthcare market: competition between insurers, and price transparency of medical services. However, I would argue that both of these unique elements of healthcare are a result of the unique setup of employer-sponsored healthcare, rather than anything fundamentally different about medicine. In both medicine and auto repair, the specifics of the fix will vary wildly by the patient (human or auto), and therefore be more or less expensive, but the fact that most physicians will not, or are unable to, give an estimated cost of services ex ante is more the result of a lack of demand than an impossibility of doing so. Removing expected services from insurance coverage will, and has already begun to, increase the demand for price transparency in healthcare.

Price discrimination

Under the ACA, limits were placed on the premium increase insurers can charge on sicker (read: more expensive) subscribers, relative to healthy subscribers. Specifically, the ACA allows at most a 3:1 ratio between the lowest risk level and the highest (the American Health Care Act, the potential Republican replacement to the ACA, would allow a 5:1 ratio). While this limits the increases unhealthy patients—elderly, smokers, obese patients, sedentary patients, etc.—see in their premiums, less risk stratification also means that healthy patients will pay more for insurance, subsidizing their more expensive counterparts in the insurance pool.
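To see the subsidy mechanically, here is a small sketch with hypothetical numbers: a pool where older members’ expected costs are five times younger members’, priced revenue-neutrally under a 3:1 premium cap. The population sizes and costs are invented for illustration:

```python
def capped_premiums(n_young, cost_young, n_old, cost_old, ratio_cap):
    """Revenue-neutral premiums when the old-to-young premium ratio is capped.

    Total premiums collected must equal total expected costs, and the old
    group's premium is fixed at ratio_cap times the young group's.
    """
    total_expected_cost = n_young * cost_young + n_old * cost_old
    p_young = total_expected_cost / (n_young + ratio_cap * n_old)
    return p_young, ratio_cap * p_young

# 70 young members ($2,000 expected cost) and 30 old members ($10,000):
# a natural 5:1 cost ratio compressed into a 3:1 premium band.
p_young, p_old = capped_premiums(70, 2000, 30, 10000, ratio_cap=3)
print(f"young pay ${p_young:,.0f} (vs. $2,000 expected cost)")
print(f"old pay ${p_old:,.0f} (vs. $10,000 expected cost)")
```

Under these assumptions, each young subscriber pays $750 above their actuarial cost, which is exactly the cross-subsidy from healthy to expensive members described above; a looser 5:1 cap would let premiums track expected costs and shrink that transfer.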

In the market for auto insurance, we can see some evidence of the opposite effect: auto insurers give discounts for things like age (esp. over 25), good grades, driving record, and other negative correlates of getting in a car accident. Granted, these big data algorithms are not without negative social consequences: things like zip code may be highly predictive of an individual’s health and driving record, but also tend to discriminate against the already disadvantaged (O’Neil, 2016). Still, the allowance of such price discrimination, as long as it is done responsibly, can mitigate the negative impacts of an “individual mandate” as seen in auto insurance by minimizing some of the subsidy low utilizers give to high utilizers and therefore preventing an insurance “death spiral.” I’m not advocating that this necessarily be a part of health insurance policy, but it is worth noting that this has been a successful policy in an analogous market, and therefore incentives like this could be a tool to mitigate rising premiums for healthy, young adults.

Who purchases insurance?

As previously mentioned, the majority of working-age Americans get insurance through their employer. This means that health insurance salesmen need to cater to HR managers, rather than directly to patients. Auto insurance, on the other hand, is primarily sold directly to drivers. Health insurance is complicated, but so is auto insurance—many of the terms of insurance like “deductible” and “copay” that allegedly cause so much frustration and confusion in healthcare are still used in auto insurance. The difference is that auto insurers like esurance have a strong incentive to make their product more understandable, and, in my experience, they have succeeded in doing so.

The human body is more complicated than a car, so it’s understandable that insurance for health will be more complicated than insurance for a car, but it doesn’t follow that health insurance should be impossible for the average purchaser to understand, as long as insurers have the incentive to make their policies understandable. Removing employer-sponsored health insurance would increase employee pay (by transferring the portion of premiums that employers currently pay directly to workers) and incentivize insurers to demystify some of the nuances of health insurance for subscribers.

Individual Mandates

The individual mandate portion of the ACA was highly contentious with voters. Even many (~46%) liberal voters have an unfavorable view of the individual mandate (KFF, 2016). This raises the question: why? We have an individual mandate for auto insurance; why is that not as contentious? My view is that this is primarily the result of well-executed political theater on the part of conservative politicians and pundits, but given the importance of an individual mandate in preventing a death spiral, liberal politicians would do well to make this comparison. Auto insurance requirements are not contentious, so it’s unclear why similar requirements in healthcare should be.

Closing Remarks

Political implications

As noted throughout the “what we can learn” section of this post, I believe that treating health insurance more like auto insurance would be beneficial to consumers. I also acknowledge that this is a tall order politically, given the tendency of individuals—politicians, pundits, and voters alike—to view healthcare as exceptional. That’s really the main point of this post: healthcare may be unique by degree, but it is not unique in its fundamentals. Framing the discussion differently and putting health insurance in terms more people understand demystifies the topic and, I believe, increases the likelihood of finding common ground and a consensus.

Notes about this post

One of my goals for this year is to write more, and this post is part of that goal. That said, I’m not entirely happy with this post, especially the “what we can learn” section. However, I’ve spent enough time writing this, and believe that done is better than perfect. I would like to come back at a later date to revisit some of these lessons with mathematical models and data where they’re available. But for now, I ask you, my readers, to not focus on the specific lessons learned, but rather on the main point: healthcare may be unique by degree, but it is not unique in its fundamentals. The lessons are more examples than specific recommendations.


Arrow, K. J. (1963). Uncertainty and the Welfare Economics of Medical Care. The American Economic Review, 53(5), 941-973.

Caine, K., & Tierney, W. M. (2015). Point and Counterpoint: Patient Control of Access to Data in Their Electronic Health Records. Journal of General Internal Medicine, 30(S1), 38-41. doi:10.1007/s11606-014-3061-0

Gawande, A. (2009). The Checklist Manifesto. Henry Holt and Company.

Gawande, A. (2009, June 1). The Cost Conundrum. The New Yorker. Retrieved from

Gawande, A. (2012, August 13). Big Med. The New Yorker. Retrieved February 13, 2017, from

Gawande, A. (2015, May 11). Overkill. The New Yorker. Retrieved from

Hekman, K. (2010). Curiosity Keeps the Cat Alive. Holland, MI: Trillium Arts Press. Retrieved from

KFF. (2016, December 1). Kaiser Health Tracking Poll: November 2016. Retrieved from

McGovern, L., Miller, G., & Hughes-Cromwick, P. (2014, August 21). Health Policy Brief: The Relative Contribution of Multiple Determinants to Health Outcomes. Health Affairs, 1-9. doi:10.1377/hpb2014.17

O’Neil, C. (2016, October 3). Cathy O’Neil on Weapons of Math Destruction. (R. Roberts, Interviewer) EconTalk. Retrieved from

Reid, T. R. (2010). The Healing of America. Penguin Group LLC.

RHIhub. (2017). Social Determinants of Health for Rural People. Retrieved March 5, 2017, from Rural Health Information Hub:

Roberts, R. (2010, Feb 8). Roberts on Smith, Ricardo, and Trade. Retrieved from EconTalk:

Steinmo, S., & Watts, J. (1995). It’s the Institutions, Stupid! Why Comprehensive National Health Insurance Always Fails in America. Journal of Health Politics, Policy and Law, 20(2), 329-372.

Paternalism and Public Health

I’m currently enrolled at the University of Wisconsin’s Leadership in Population Health Improvement Certification program. The program is fully online, so participation on a forum is a major component of the course. There’s an argument for more government involvement in healthcare that seems to be tacitly pervasive in the worldview of the type of people attracted to this sort of program.

The Argument

The argument is best summarized by one of my fellow students after I made the point that people respond to incentives, and cost sharing measures by insurers will cause patients to take a more active role in their own health decision making. This isn’t an exact quote, but I promise I’m not trying to make him sound worse than he really sounded:


Many people struggle with misinformation when making financial and health decisions. For example many people still think that fried okra is healthy. If people can’t get this basic information right or are worried about paying the bills, they aren’t thinking about this sort of overarching, higher-order effect.


That’s right, I shit you not: my classmate in a public health class thinks that people are too stupid to take care of themselves, don’t respond to economic incentives, and therefore can’t be trusted to manage their own healthcare. This individual phrased the argument in a particularly condescending way (both to me and to patients), but the core of the argument is very prevalent in today’s political discussion.

Dissecting the Argument

Let’s break this down a little bit. This argument was posed as a reason for having universal, comprehensive coverage with no cost sharing. So according to my classmate, the reason copays and coinsurance are bad is three-fold:

  1. Patients don’t understand the basics of health, like diet and exercise, so therefore they can’t be expected to understand the more complicated relationship between screenings for early detection and long term health outcomes.
  2. Patients don’t respond to economic incentives because they don’t understand the actual risks and costs involved.
  3. If we remove the financial cost of medical services, patients will follow their physicians’ advice, at least with regard to getting the care they need, even if they don’t change their lifestyle choices.


My frustration with this argument is also three-fold:

  1. Each step is logically and empirically false, though there are enough data out there to cherry pick to make a fairly convincing story, and possibly persuade the casual reader.
  2. It’s extremely short-sighted, by only looking at how people behave right now, and not thinking about how people change their behavior in response to incentives.
  3. It’s extremely paternalistic and implies that most patients are unable to make medical decisions for themselves.


When I was originally writing this post, I went on a long, tortuous diatribe about the nuances of rational ignorance, rational avoidance of screening tests due to the bus stop paradox and false positive paradox, and how economic incentives through coinsurance mitigate this and lower costs through price transparency and competition. I might come back to that line of reasoning in a future post, and I’ve written about economic incentives as a game changer elsewhere, so for now, I want to focus on my more emotional response to my classmate’s worldview: my disdain for paternalism.


In this regard, I have two questions for myself. First, why does paternalism like this make me so angry? Second, why do so many public health students practice this paternalism?

My Disdain for Paternalism

That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right. These are good reasons for remonstrating with him, or reasoning with him, or persuading him, or entreating him, but not for compelling him, or visiting him with any evil in case he do otherwise.

John Stuart Mill, On Liberty, Chapter 1


In the tradition of the Classical Liberal economic thinkers, I am very averse to coercion. Furthermore, my fierce INTP independence and skepticism lead me to question others’ goals, and I don’t take well to being told what to do (just ask my mother). I’m repulsed by the idea of someone legislating something a certain way because they believe I’m incapable of doing what’s in my own best interest. If they want to incentivize or encourage me toward my own best interest (e.g. a health insurance company giving discounts or reimbursements for gym membership and usage), I’m perfectly happy with that, but only if they start with the premise that I want to do the right thing and they’re just making it easier.


The notion of paternalism implies a lesser and a greater. In a Rawlsian Veil of Ignorance sort of way, I would never want to be the lesser, coerced by the greater: the supposedly intellectually inferior party whom the greater believes cannot take care of herself. If I am in error, whether by behavior or by belief, Mill is right to encourage others to correct me and me to be corrected, but when the others view me as a lost cause and would try to push me in line with their vision, that’s where the theoretical version of me “behind the Veil” draws the line and pushes back.


Assuming an examined life, people know their needs and values better than anyone else. Assuming rationality, at least insofar as people never say “Hey, I’m going to do this thing that I know will make me net worse off” and then do that thing, people will attempt to live the best life they can, according to their own definitions and values. Therefore, it is hubris to believe that I know how you should live your life better than you do, but this is exactly what paternalistic thinking embodies and acts on. Now, if I discover a logical flaw in how you are living your life (e.g. drinking energy drinks for energy instead of eating an apple), I may point out the error in your reasoning (that example is an admitted error in my own life), but persuasion, not coercion, must be my only tool to convince you. And my reason for attempting to convince you can only be the belief that you don’t adequately understand the facts, not that you are incapable of understanding them. Anything beyond these guidelines slides into paternalism, which by construction and implementation, even if not by definition, inevitably leads to coercion.


In the case of my classmate’s argument against cost sharing, his rationale was that people are incapable of following their own self-interest. If that’s the case, a physician is unlikely to convince them to not eat fried okra, and reducing cost sharing would make no difference. In that scenario, would my classmate take the next step and ban certain foods? The answer, as I found in a subsequent discussion, is apparently yes. Paternalism, with all the best intentions, removes freedom. By creating the mechanism to remove freedom, you create a mechanism that can be used with less-than-ideal intentions to remove freedom for the good of the leader, not the good of the governed.

The Draw of Paternalism

Besides not being paranoid like me, why would someone be drawn to paternalism? The short answer, I believe, is that it’s expedient. Coming up with an effective “nudge” campaign of libertarian paternalism is hard and not always successful. It’s much easier to just be a benevolent dictator (at least it seems that way when you’re not actually a dictator). And when a certain truth seems obvious to you, it’s easy to think that people who don’t see the obviousness of that truth are incompetent and in need of your correction.


You know why people don’t like Liberals? Because they lose. If Liberals are so fucking smart, how come they lose so goddamn always?

“Will McAvoy,” Newsroom


In the modern Liberal worldview (i.e. the political Left in America, not economic liberalism) in general, and the public health sphere specifically, the solution often seems like a tautology (a minimum wage increases earnings; banning smoking makes people live longer), so anyone who disagrees or votes against this agenda must be voting against their own self-interest or must not understand their own interests.


Furthermore, public health graduate students, and the Democratic party in general, are better educated than the majority of the population, and it seems like a very reasonable conclusion that someone with a graduate degree understands the interests of the high school drop-out better than the latter individual does herself. And in an objective sense, the more educated person is right. I, with my econ degree, understand the economics of minimum wage–its pros and cons–better than most of the minimum wage workers campaigning in favor of a minimum wage hike.


So why shouldn’t I tell them how it is, and if they don’t listen to reason, just push them in the right direction? I have to admit, it’s tempting.


To answer Aaron Sorkin’s question through news anchor Will McAvoy in Newsroom, “Why do liberals lose [all the time]?” I think a big part of the answer is that Liberals are presumptuous. They presume to know what people’s best interest is, how to get there, and often leave the individuals themselves out of the equation. This is certainly an issue in Public Health, where the community is almost always involved in the health needs assessments, sometimes involved in the prioritization, and rarely involved in the implementation (at least historically, this is changing slightly for the better). Even if Liberals are right from an objective, empirical standpoint about what policies are effective in improving the lives of individuals, the fact that the individuals themselves feel ignored or powerless in this equation goes a long way to explaining both ineffectual policies (feelings of powerlessness have a huge impact on health outcomes) and why Liberals lose “so goddamn always.”

Closing Remarks

Government action, including coercion, is justified and necessary to prevent individuals from harming others–directly or through externalities. Any other coercion, for an individual’s own good as perceived by the government actors (including lobbyists), is paternalism. In general, paternalism, however well-intentioned, is both immoral (at least from my view) and ineffective. I should note that I’m only talking about able-minded adults here; children and the intellectually disabled are a different discussion.


It’s immoral because it’s arrogant, easily blinded, and lazy. Instead of taking the time to convince mentally competent adults of the truth or falsehood of a belief, the paternalist treats that adult as a child, thereby demeaning her.  


It’s ineffective because benevolent dictators like Marcus Aurelius are still mortal, and Commodus still looms in the background. If you push too hard, the populists will revolt (Trump 2016) and/or someone worse will take the reins of power and undo all that the paternalists have done, including whatever good may, in the short run, have come from their expedient solution.


Where are we to go then?

Paternalistic coercion is such a commonly used option because, unfortunately, it’s not always possible to persuade people of the truth, through rational arguments or otherwise.

Should we in public health keep trying, hope against hope, to convince people what their best interest really is? Or do we leave them to their own devices?

My Sisyphean inclination is that the ethical option is to keep trying to persuade people through clever data visualizations and logical argument, but the ethical path is damn slow.

My Resolution: To Use Trello to Track Resolutions

I love productivity tools. I love getting things done (both the David Allen program and the concept), and productivity tools are useful to that end. I’m also just a nerd and really like playing with new toys, er… tools, and software packages for the novelty of different features and aesthetic beauty of different user interfaces.

Trello is one of those productivity tools that’s novel, really fun to use, and the click-and-drag UI is, in my opinion, kind of beautiful. It’s also super flexible, to the point of being almost too flexible; in adding features, it somehow got the worst of several worlds. I enjoy using it, so I’ve tried to use it for several different projects and workflows, but I keep finding it comes up short. However, I think I’ve finally landed on a good use case, so I wanted to share. But first, let’s look at what Trello is.

What are you?

Trello is a web-based project management tool that consists of Boards, Lists, and Cards. Within each card, you can track activity, make comments, and track checklists. You can share boards with individual users or within an Organization. The whole user interface is click-and-drag and has lots of customizable colors, labels, etc. In short, Trello is an ultra-flexible tool that allows you to have a shared workspace with collaborators. It even has iOS and Android applications.


Failed Experiments

Like I said in my introduction, Trello is, in some ways, too flexible: it can do almost anything, but it can’t do everything well. Here are some things I’ve attempted to use Trello to manage, and have ultimately moved on to other, more robust tools.

Research Management Tool

I was working on a project for work that involved looking at physician and hospital reimbursement for telemedicine services. I tried tracking my research in Trello, which worked really well for the first 5-10 sources I was looking at. Each card was a paper, report, or website. The description was the citation information. Where applicable, I could attach a PDF to the card, and I could store my notes in the comment section.


However, the number of cards quickly got to the point where I had to scroll on my list to see everything. Now, I don’t mind scrolling, and for a quick project with only a handful of sources, this wouldn’t be too bad, but it’s just not scalable. On the other hand, Evernote is designed for exactly this kind of research repository. I can use tags to find relevant topics, but it’s also fully searchable, including OCR searching within an attached PDF (paid version only), which makes it far more scalable than Trello.

Task Management Tool

I even have a post on this blog about using Trello as a task management tool. I tried a couple different paradigms for this. First, I had a list for each project and a card for each task. Again, once you get past about 5 projects, the side-scrolling defeats the notion of being able to immediately see what’s on my plate. Then I tried having a card for each project, a checklist on that card for each task, and converting checklist items to cards to move to a next-action list. In principle this was fine, but it ended up being too many clicks to just complete a task and see the next one for that project. I read a few other methodologies/workflows online, but they all run into the same issue: it’s just not scalable enough to handle 50+ actions and still be “at a glance.” Just like with the research repository, more specialized products–Omnifocus, Wunderlist, Todoist, even Evernote–are just downright more effective task management applications than Trello.

Project Master List

This would probably be a really good use case if I were a project director overseeing multiple project managers who each had multiple projects. Trello’s click-and-drag card view makes it very easy to see what’s where (provided there are two or three dozen items or fewer), make updates as needed, and keep on top of things. That said, I’m just one person, and while I do have collaborators on several projects and oversee some people at work, I don’t have this need. I track my current projects on my task management tool, Todoist, and keeping track of all my current projects in Trello and Todoist is just redundant.

Promising Boards

Despite these failed use cases, I really love the Trello interface and enjoy using the tool… if only I could find something I could use it for where it actually performs better than Evernote or Todoist. Here are a few use cases I have found that seem to be going well.

Crisis Management!

I work with electronic medical records, part of which includes supporting go-lives. An EMR go-live was described to me during my college internship, long before I had any idea I would be working where I am now, as “We plan, configure the software, and send everyone to training for months. Then we go live and all hell breaks loose and shit hits the fan.” I like to think that less shit is hitting the fan with go-lives these days (especially the ones I help to implement), but that’s still a pretty accurate description. When in a go-live command center, all-encompassing task management apps like Todoist aren’t at their best. I need something that’s focused on this customer and this customer only, and I need something I can quickly add to, update, and get distracted from, without losing anything. Trello fits that bill perfectly. Each new issue gets a card, and the details and progress get tracked in comments and checklists. At this point, it is becoming a task management app, but since I should never have more than a dozen things on my plate at any given time, that’s okay. In this case, not containing everything is exactly what I want.

For my Go-Live board, I have 3 lists for storing tasks–one list for each task type. This might be excessive, but it helps me categorize my work based on energy and mood. Then I have “Delegated” tasks that I still need to keep track of, “Done today” for tasks to report at any daily huddle calls, and “Done this week” for any weekly wrap-up calls or issue summary reports I need to put together. Click-and-drag makes it easy to move each issue card through the process, and card views make it easy to keep everything in front of me, without the distraction of email or other projects in Todoist (which still contains things I want to do today after work). The only drawback is that Trello is on the cloud, so I need to be careful to never include unencrypted documents, patient information, or screenshots (which I rarely create anyways, but I do get these things sent to me quite regularly).

Shared Lists

I keep my grocery list, which my wife also has access to, on Trello. Either of us can update it throughout the week, and when it comes time to buy food (or we just find ourselves at one of the two places we shop), we can both check the list. I have one list for each of the two grocery stores we go to, and one card is one item to buy. After that item is in the shopping cart, the card gets archived. I also have a list to store recipes (I do a lot of the cooking, so yeah, this is a pretty short list that’s still manageable in Trello), which I can quickly drag over to the “weekly meals” list to create a meal plan each week. Evernote and Todoist could do most of this as well, but Trello’s UI is easier to use for this narrow case, and the collaboration is much easier/better in Trello.

Collaborative Projects

I don’t have many collaborative projects, but on the couple that I do have, when I can get my collaborators to use it, Trello makes a great tool for organizing ideas. Especially for more creative projects, Trello is an easy repository for new ideas that others can add to and rearrange easily. When my friend and I were working on a game (it still hasn’t gone anywhere), we were able to share ideas as we had them, and discuss those ideas in relation to other ideas, whether we were together or just sitting on the couch looking at our phones.

Someday Maybe Lists

I mentioned before that Trello makes a good project tracking tool, but not a good task management tool. That goes especially for projects that shouldn’t be documented in your task management tool at all: your someday/maybe project list. When I see a cool class on Coursera or get an inkling for a new skill, I could add it to Evernote, but it’s liable to get lost in my several hundred notes. Instead, I add it to my Someday Maybe board in Trello. I can make notes on each card about why I think it would be cool, and even add a checklist to identify how much work it would actually be and include some resources. Because someday/maybe lists should be reviewed, Trello offers a clean way to review the board so it doesn’t get so piled up that it’s impossible to manage. Because they shouldn’t be reviewed that often, I don’t have to worry about things getting double-documented.

Goal Tracking

Last, and possibly most importantly this time of year, Trello is great for goal tracking and visioning. It’s the end of the year, so lots of people are resolving to spend less, save more, quit smoking, and go to the gym more often in 2017 (though they’ve also resolved to keep these resolutions longer than they kept the same ones last year). For those of us who would like to achieve these goals and keep track of them, I’m finding that Trello makes a very easy, effective platform.

Specific Time-Frame

I have a board for 2017 Goals. As it currently stands, the whole board is just one list with the goals themselves. Each card is a broad goal area, but has more SMART (Specific, Measurable, Actionable, Realistic, Time-Based) criteria in the card description.


As I make progress towards each goal I can mark that as a comment, or by adding (and checking) a checklist item. For example, each blog post I write will get a link on my Blogging goal card.

Life Time Goals

My wife and I have a shared board where we track some of our lifetime goals. We use labels to identify which one of us has this goal in mind and our ideal timeframe (broken into <5 years, 5-10 years, 10+ years from now). Again, Trello makes this easier than Evernote to review and update when we need to, but it’s not taking up space in our Todoist lists.



Like the Someday Maybe list, I could put these goals in Evernote, but the visual element of being able to move cards around in Trello makes these goals easier to organize relative to each other and to see how they fit together.


I’m starting this year with a resolution to use Trello to track my resolutions. If it works, I’ll end the year with a blog post about how it went (maybe, or at least I’ll end the year with a blog post).


Trello’s strength is its easy-to-use, click-and-drag way of visualizing information. Evernote is more searchable, more scalable, and can store more. Todoist (or any number of other task lists) is cleaner and more streamlined for checking things off the list. But Trello is collaborative and can track things over time that you don’t want in a daily task list but that are more actionable than most things I put in Evernote (I know many people use Evernote as a task management app as well, but I was never able to get that to work for me). The key is to make sure that the scope for Trello and each board is somewhat limited: too many boards and too many lists can get overwhelming quickly. As long as the scope is controlled, Trello lets you see multiple groups of actions together, which makes it a great goal tracking or collaborative project tracking tool.

Medical Services Inflation

I’ve heard it mentioned several times on NPR and other news outlets that growth in healthcare spending has slowed since the Patient Protection and Affordable Care Act (commonly called “Obamacare”; hereafter referred to as the ACA). By cherry-picking this number or that measurement, one can make this a believable claim[1], and it is true that inflation for health services is at an all-time low (Figure 1). But I want to focus on this measurement in the context of inflation as a whole.

Since the end of World War II, the price inflation of medical services and durable medical equipment has fairly consistently run one and a half to two times the overall inflation rate (Figure 2). Wage increases, when they happen, tend to follow cost-of-living adjustments or otherwise be tied to the core inflation rate. So when the prices of medical services rise faster than the overall price level, they consume a growing proportion of consumers’ income. This description is a simplification, to be sure, and it doesn’t always describe day-to-day, year-to-year negotiations, but it does describe the fundamentals of long-term wages and spending and year-to-year trends within the macroeconomy.
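To make the compounding concrete, here is a minimal sketch with invented starting values and stylized rates (these are illustrative assumptions, not BLS figures): if wages track core inflation while medical prices rise at roughly twice that rate, a fixed basket of medical services consumes a steadily growing share of income.

```python
# Hypothetical numbers for illustration only; not BLS data.
wage = 50_000.0          # assumed starting annual income
medical = 5_000.0        # assumed starting annual medical spending (10% of income)
core, med = 0.02, 0.04   # stylized core and medical inflation rates

# Share of income consumed by the same medical basket over time
for year in (0, 10, 20, 30):
    share = medical * (1 + med) ** year / (wage * (1 + core) ** year)
    print(f"year {year:2d}: medical share of income = {share:.1%}")
```

Even a modest two-point gap compounds: under these assumptions, the share climbs from 10% to roughly 18% over thirty years.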


In Figure 1, we can see some salient historical features:

  • For (almost) this whole time period, medical prices have been rising faster than prices in the rest of the economy.
  • Bad macroeconomic policies of the 1970s, involving abuse of the Phillips Curve[2] and wage and price controls, resulted in uncharacteristically high inflation during that period.
  • The early 1980s show an anomaly around the Volcker Recession[3], when medical inflation was briefly lower than core inflation; inflation as a whole fell off a cliff at that time and continued to fall steadily while Greenspan was Fed Chair through the 1990s and early 2000s.

More into the weeds: the Bureau of Labor Statistics is responsible for tracking the price level, using a pre-defined bundle of goods and services, including medical services. The Consumer Price Index is a standardized price level, where 100 is fixed in a base year; inflation is the percent increase in the price level from one period to the next. There are problems with the CPI, but it’s still a widely-used tool. My analysis takes the ratio of inflation in medical goods and services to Core CPI inflation (Core CPI excludes the two most volatile sectors, food and energy, from the overall inflation measure). This ratio shows how much faster inflation in the medical sector is devaluing our incomes relative to the rest of the economy. I pulled data from the Bureau of Labor Statistics using their R API[4]. Year-over-year inflation was calculated as the percent increase in the price level from December to December. Source code for analysis can be found at
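The December-to-December calculation described above can be sketched as follows. The index levels here are placeholders I made up for illustration; the actual analysis pulls the real Core CPI and Medical Care CPI series via the BLS API in R.

```python
# Hypothetical December CPI levels (placeholders, not real BLS values)
core_cpi = {2013: 235.2, 2014: 239.0, 2015: 243.8}
medical_cpi = {2013: 430.1, 2014: 440.9, 2015: 452.0}

def yoy_inflation(series, year):
    """Percent increase in the price level from last December to this December."""
    return series[year] / series[year - 1] - 1

for year in (2014, 2015):
    core = yoy_inflation(core_cpi, year)
    med = yoy_inflation(medical_cpi, year)
    print(f"{year}: core {core:.2%}, medical {med:.2%}, ratio {med / core:.2f}")
```

The final ratio is the quantity plotted in Figure 2: values above 1 mean medical prices are outpacing core inflation.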


Figure 2 shows pretty clearly that the ratio between medical inflation and core inflation is still within normal bounds (within 1.5 standard deviations). The ACA has not really changed the root of the problem at all. This isn’t an indictment of the ACA as a whole, since the ACA wasn’t targeted at fixing the fundamentals[5]. But it does make clear that the ACA has not changed the incentives to raise prices with little to no market counterforce, so prices in healthcare are still rising faster than prices in the rest of the economy, ACA notwithstanding.



[5] The fact that it wasn’t targeted at fixing the fundamentals is something of an indictment, but that’s another blog post.

On Growth: Economic Growth as Dynamism

Begging the Question

Why does economic growth matter? Isn’t growth just fueled by mindless consumerism? Are capitalism and minimalism like oil and water, or is growth still a good thing, the consumerism that can fuel it notwithstanding?

Economist John Cochrane of Stanford University’s Hoover Institution has written a very illuminating article that is available for free on his blog. Cochrane’s essay is empirical and well done, but it doesn’t quite answer the questions I’m asking. Throughout his essay, Cochrane makes the tacit assumption that more income is inherently better. This isn’t a particularly difficult assumption to swallow, but it’s worth examining.

The average American is more than three times better off than his or her counterpart in 1950. Real GDP per person has risen from $16,000 in 1952 to over $50,000 today, both measured in 2009 dollars. Many pundits seem to remember the 1950s fondly, but $16,000 per person is a lot less than $50,000! […] If the US economy had grown at 2% rather than 3.5% since 1950, income per person by 2000 would have been $23,000 not $50,000. That’s a huge difference.

Cochrane goes on to examine productivity, regulation, and pro-growth policies. It’s a good piece, written in very accessible, non-technical language, and everyone should read it. But let’s examine that basic premise: median incomes of $50,000 are better than median incomes of $23,000.

It may seem obvious that more income is preferable to less, which is (presumably) why Cochrane doesn’t feel the need to justify this sentiment beyond it being a tautology. But let’s take a few steps back. Is it really a given?

Let’s look at some instances where growth isn’t necessarily a good thing and may be viewed negatively. First, when businesses talk of growth, we often balk. Particularly when that business is, say, an insurance company complaining that they “only” had 15% revenue growth and therefore can’t continue in the ACA Insurance Exchanges. We laugh as Coke and Pepsi continue struggling for a couple tenths of a percentage point more of soda market share, and we don’t feel any sympathy for their concerns about bottled water being just as profitable as soda, but having far less brand loyalty. We think back to the toll the industrial revolution took–and is still taking–on the environment and have to ask, “was it worth it?”

As I’ll argue below, yes, I think it was worth it, but not in the way most people think. It’s not about the raw income. It’s the economic dynamism–the number of economic transactions–that truly makes growth such a good thing. This is not to say growth doesn’t have some negative consequences, but growth increases dynamism, and dynamism is what makes life better.

Objections to Growth

There are many potential objections to economic growth for growth’s sake. For the sake of time, I’ll just focus on three.  

Environmental Costs

Because of the accelerated rate of growth beginning with the industrial revolution, we have unleashed untold tons of greenhouse gases into the atmosphere. Climate change is indubitably man-made, a result of technological progress, the economic growth it enabled in manufacturing and mass production, and the consequent exponential growth in population–humans are, after all, biological machines that convert oxygen into carbon dioxide.


It was the growth of a sedentary, agricultural society and animal domestication, not the status quo of hunter-gatherer society, that introduced all manner of infectious diseases, both for humans and for the rest of the ecosystem. Increased demand for building materials has, throughout history, been met by increased supply of lumber, resulting in deforestation everywhere from the ancient Fertile Crescent (the modern-day Iraqi desert) to heaven knows how many other localities. Finally, it was the pursuit of growth–rapid growth through controlling the beaver pelt trade–that incited the French and Indian War in the colonies. The list could go on and on, but it’s clear that economic growth and technological advancement are not without trade-offs, including with regard to the environment–from local ecology to global climate.

What actually is growing?

What has growth brought us? Growth has brought us more medical technology, but also more medical expenses and bankruptcies. We’ve created a wealth of knowledge and ingenuity, but more than that, we’ve created–and purchased–an incredible amount of stuff. Self-storage is the industry predicated on paying more to cover up past mistakes of over-indulgence, and it’s growing dramatically. In the immortal words of Tyler Durden, for many, economic growth and the globalism that came alongside the great moderation means that we are “working jobs we hate, so we can buy shit we don’t need.” Growth has increased our capacity to build and create, and our spending power has commensurately increased. However, our needs–our true human nature and biological needs–have changed remarkably little. Thus, the growth we’ve experienced, the extra $34,000/year, is predominantly spent on cost increases in some essentials–housing and healthcare–with the discretionary remainder going to the superfluous, the non-essential, the superficial, and keeping up with the Joneses.

Unequal distribution

“A rising tide lifts all ships.” At least that’s the rhetoric that’s used. And to a degree, it’s absolutely true: innovation is knowledge, and knowledge is a non-rival good. The rising economic tide increases innovation and the body of knowledge available to society. However, economically, some ships are lifted more than others. While the income distribution of households is becoming (relatively) flatter, the income distribution of individuals is more skewed. Inequality in and of itself is not, in my view, inherently a bad thing; I want to live in a world where Bill Gates and Sergey Brin are filthy rich after creating products that improve the lives of billions. However, if growth benefits primarily the haves while the have-nots see only the downsides, is growth such a good thing for the majority? What good is growth, then, if it benefits many but leaves many more wanting, building up credit card debt in pursuit of more simply because they can, even when they don’t need to?

Why Growth is Still a Net Positive

Economic growth is often thought of as a unidirectional thing. After all, we see GDP growing over time on a time-series plot.

(Source: FRED)

But what composes that GDP can change dramatically over time. Preferences shift, societal needs change, and the pie-charts that show the composition of the workforce and goods and services can change dramatically. Growth can be multi-directional and multi-faceted.

For example, what is a normal good? By economic definition, a normal good is one where the demand for that good increases with income. The classic example of a normal good is high-quality food: as we make more money, we want more of the meat half of the meat-and-potatoes diet. This is not to be confused with a luxury good, for which demand rises more than proportionally with income–as we see with wine, fashion, and yachts. As Cochrane points out, some non-conventional normal goods include things like civil rights, environmental conservation, and self-determination.
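To illustrate the distinction numerically (all the quantities and incomes below are invented), the difference can be expressed as income elasticity of demand: positive for a normal good, greater than one for a luxury good.

```python
def income_elasticity(q0, q1, m0, m1):
    """Percent change in quantity demanded divided by percent change in income."""
    return ((q1 - q0) / q0) / ((m1 - m0) / m0)

# Income rises 10% (from $50k to $55k):
# steak purchases rise 5% -> normal good (0 < elasticity <= 1)
steak = income_elasticity(100, 105, 50_000, 55_000)
# yacht charters rise 30% -> luxury good (elasticity > 1)
yacht = income_elasticity(10, 13, 50_000, 55_000)
print(steak, yacht)
```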

Environmentalism as a Normal Good

Next time you see someone working hard at minimum wage, ask (or just think about, for the shyer among you) whether s/he buys energy-efficient lightbulbs or just the cheapest ones available. If they’ve done the math (and most people on that tight a budget have), they’ll probably tell you that they buy whatever’s cheapest, which is probably not the energy-efficient bulbs. When you’re living at the poverty line, you’re not particularly interested in the environmental consequences of your actions; your interests focus on putting food on the table. Concern for the environment and expensive products that are more “environmentally friendly” are normal goods: demand increases with income.

By all accounts, the Industrial Revolution was horrendous for the environment. However, that fact does not in the least attenuate emerging nations’ desire to reproduce the same growth. It’s worth noting that John Muir and the Sierra Club didn’t emerge until after the Industrial Revolution had done its work, both in terms of economic growth and damage to the environment. Caring for future generations will, by human nature, always be secondary to caring for the humans alive today, in whatever form that takes. Only when the humans alive today are well taken care of will the focus really begin to shift to future generations, because caring for nature and future generations is a normal good.

Anti-Consumerism as a Normal Good

What about Palahniuk’s indictment that growth is just fueling that which truly does not matter? As previously posted on this blog, I am broadly sympathetic to the Minimalist philosophy/movement. Given that consumption spending makes up approximately 70% of GDP, minimalism would, on the surface, be diametrically opposed to growth for growth’s sake. But when you dig deeper, it’s not.

What is consumption? When you look at the formula for GDP, we have this monolithic ‘C’ for consumption in Y = C + I + G + NX, where Y is (nominal) GDP, C is consumption, I is investment (which includes corporate capital outlays and new residential construction), G is government spending (not including transfer payments), and NX is net exports (exports minus imports).
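Plugging invented, roughly US-shaped numbers into the identity (in billions of dollars; these are illustrative assumptions, not actual BEA figures) shows how the pieces fit together and why C dominates:

```python
# Hypothetical expenditure components, in billions (illustrative only)
C = 12_000   # consumption
I = 3_000    # investment
G = 3_000    # government purchases (transfers excluded)
NX = -500    # net exports (negative: imports exceed exports)

Y = C + I + G + NX   # GDP by the expenditure identity
print(f"GDP = {Y}, consumption share = {C / Y:.0%}")
```

With these numbers, consumption comes out to about 69% of GDP, in line with the roughly 70% figure above.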

Minimalism is about living a meaningful life. The contention of minimalist thinkers like Joshua Becker, Joshua Fields Millburn, Ryan Nicodemus, Leo Babauta, and others is that mindless consumption of stuff gets in the way of self-actualization. And this is (in my experience) true. Part of this claim is that experiences are more important than stuff, which is also true–when was the last time you thought about Christmas traditions with your family, and when was the last time you thought about that high-school yearbook you’re inexplicably holding onto? So what about that ‘C’ part of GDP? Consumer purchases are built up of two things: Goods (stuff) and Services (experiences).

Suppose that tomorrow, everyone were to become a diehard minimalist. The Goods part of that identity would fall, but the Services (read: experiences) part would compensate. Minimalism isn’t just about cutting spending; it’s about cutting unnecessary spending on “shit we don’t need” and replacing it with life experiences–preferably free, but more importantly meaningful.

From the perspective of economic growth, this is great. Growth is about productivity increases, and Minimalism makes people happier, and therefore more productive (in the general sense, not necessarily in the corporate human-resources sense). More to the point, when looking at growth in Y (GDP), Investment and NX can, and should, rise, holding C and G constant. So growth does not necessarily–even if it historically has–increase consumption. The more important piece is I: Investment. Less frivolous spending means more saving. This means more money available for investment, lower interest rates for firms looking to expand (particularly their non-frivolous divisions), and higher standards of living in retirement (including more consumption). According to the Solow growth model, increased saving will, indeed, cause a short dip in consumption, but it increases growth and growth potential in the long run.
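A minimal simulation can sketch that Solow dynamic. This uses textbook Cobb-Douglas output (y = k^α) with depreciation δ; all parameter values here are illustrative, not calibrated to any economy. Raising the savings rate lowers consumption at first but raises both output and consumption in the long run.

```r
# Solow dynamic sketch: k[t+1] = k[t] + s*k[t]^alpha - delta*k[t]
# (hypothetical parameters, chosen only to illustrate the mechanism)
solow_output <- function(s, k0 = 1, alpha = 0.3, delta = 0.1, periods = 300) {
  k <- numeric(periods); k[1] <- k0
  for (t in 2:periods) k[t] <- k[t - 1] + s * k[t - 1]^alpha - delta * k[t - 1]
  k^alpha  # output path y = k^alpha
}
y_low  <- solow_output(s = 0.20)  # baseline savings rate
y_high <- solow_output(s = 0.30)  # higher savings rate
c_low  <- (1 - 0.20) * y_low      # consumption = (1 - s) * y
c_high <- (1 - 0.30) * y_high
# Consumption dips immediately after the savings increase (c_high[1] < c_low[1]),
# but long-run output and consumption both end up higher.
```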

First Principles: Adam Smith

Investment includes, among other things, increasing employment. Going back to first principles in Book 1 of An Inquiry into the Nature and Causes of the Wealth of Nations, when a resource is scarce (i.e. demand is higher than supply), employers are willing to pay more for that resource, including labor. When the economy is in the upswing of the business cycle, employees (excluding public employees) get raises. More importantly, when the economy is growing, employers compete for employees, meaning that workers can change jobs (relatively) easily and find a job that helps them to thrive–reaching self-actualization through work that challenges them and grows them as a person. Workers can find the “right fit” of a job much more easily as a result of growth.

As a corollary, research and development is a normal good, and when firms are growing they are more likely to boost investment in this division. Similarly, startups–i.e. the drivers of innovation–are able to get funding and capital during times of growth, much more so than during the trough of the business cycle. Moreover, startups are more successful–and therefore more influential–during times of growth. When the economy is growing, we’re learning more. We’re developing new technologies, new products, new processes, and increasing the pool of societal knowledge–even if, for a time, some of that knowledge is proprietary; knowledge never stays proprietary forever.


In short, growth means dynamism. The Oxford English Dictionary defines dynamism as the quality of being characterized by vigorous activity and progress. Economic activity really just means transactions or interactions. When the economy is growing quickly, the number of potential interactions increases:

  • The number of job openings, and the number of applicants willing to apply
  • Venture capital availability and appetite for risk
  • New products and services, and new consumers

When the number of interactions increases, the potential for mutually beneficial or euvoluntary exchanges necessarily increases.

When these exchanges are in goods and services, we benefit a little–new services, experiences, and useful tools. When these exchanges are in the labor market, we benefit a lot. People who are happy in their jobs are more productive, which increases future growth. People who are happy in their jobs are (generally) happier in their lives overall. The extra monetary income we see from growth is nice, but money isn’t everything. More important than real income is the availability of opportunities for personal growth, advancement, thriving, virtue, and self-actualization. You can’t buy any of these things, no matter how high your income is, but they are still normal goods, and a higher income allows you to shift focus from merely surviving to truly living.

Dynamism isn’t just about increased sales or incomes, it’s about increased opportunities for change. Workers stuck in a dead-end job have more opportunities to find a new job when the economy is growing. New products emerge. Some of these are superfluous and engender the kind of mindless spending minimalists hate, but some legitimately make people’s lives better off–for example, digital pills to monitor drug regimen compliance, side-effects, absorption, and effectiveness. Dynamism is about new ideas bouncing off each other, about new people coming into contact with those ideas, and about competition doing what competition does best: forcing everyone to implement new technologies to provide better goods/services at a lower price. Economic dynamism is how we go from the abstract economics to the concrete improvements to people’s lives. And growth is what makes dynamism possible.

Concluding Remarks

Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. –John Stuart Mill, On Liberty

Ultimately, the goal of economic advancement is human thriving. Economic growth and the dynamism it creates are the most effective way of increasing human thriving sustainably. Yes, it can have downsides. But ironically enough, more growth can also be the cure for the downsides of previous growth. Admittedly, this is a little like saying that more alcohol can cure a hangover; it sounds a little insane, but enter the Bloody Mary. Also, would we ever have widespread solar and biodiesel energy without innovation? Economic growth has afforded us the opulence to care about the environment, the well-being of the poor in other nations, and other things that were far from the minds of our forebears. The benefits of growth are agnostic to where that growth comes from. If we continue to “grow” by buying “shit we don’t need,” then growth will remain lackluster, and even if it doesn’t, we will remain lackluster. If, on the other hand, we grow the economy by growing ourselves, making ourselves more productive–and more interesting–then that will have a very different outcome. Economic dynamism comes from growth, and is what allows us to reinvent–or just tweak–ourselves to become happier, healthier, and wealthier, without adversely affecting our fellow humans.



I originally wrote this with footnotes, but they didn’t copy over from Google Docs to WordPress. I may add them back in later, but here’s the list:

OmniFocus 2 vs. Todoist vs. Outlook



Like all entries on this blog, it will come as no surprise to my readers (who, according to WordPress analytics, virtually all know me) to hear that I have a fondness for dabbling and experimenting with task management tools. Recently, I’ve been experimenting with Todoist as an alternative to Outlook as a task management system. To my surprise, I actually found it even more useful than I was expecting, and decided to do a head-to-head against OmniFocus, my current GTD weapon of choice. So how do these three systems work, and how do they stack up?

Background and Requirements for my GTD Trusted System

I’m currently writing this post on a Mac, but my work laptop is a Windows computer, as is my personal laptop (although I would love a MacBook Air, I just can’t justify the price, and the Windows laptop made sense for my computing needs at the time). One of the core tenets of GTD is that everything is in one system. When I bought OmniFocus, I wasn’t that concerned with this, since I wanted to keep my work life and home life separate. However, as I did more work (projects done for personal edification) split between my (PC) laptop and (Mac) desktop, my “system” started to fracture. I was using OmniFocus on my phone and desktop, Outlook on my work laptop (which was, as expected, almost exclusively for work), and Trello to piece the rest together, including on my iPad (see pricing section below) and personal laptop. It worked, but it was fragmented.

When I decided to try out Todoist, the primary driver was that I needed a better way to track projects, particularly small projects. Trello works really well (in my opinion) as a master project list and it even does a passable job as a project management tool for large projects. However, it gets very cluttered very quickly when you try to use it as a task management tool (i.e. all the unique next actions), especially when you include small projects of only 3-5 actions. Outlook is even worse. Outlook’s task list is useful only in that you can quickly turn emails into tasks using quick actions. However, Outlook completely lacks the concept of a project, and the only way of grouping tasks is using categories. Since I often have a lot of different projects going at once, this is completely untenable for me. What I wanted was a nice way to have sub-actions within subprojects, like I can do in OmniFocus on my Mac. Ideally, this should be available no matter what: home, work, in transit, whatever. Enter Todoist.

 How did Todoist stack up to the other two?

User Interface and Look-and-Feel


Being part of the Microsoft Office Suite, Outlook is familiar. Hell, it’s where I spend at least 2-4 hours every work day, so the email portion (and subsequent task portion) is clean, familiar, and very customizable. You can change the columns and information shown for each next action. Besides the normal MS ribbon at the top, the overall space is very clean and you can very easily switch between different views. Of these three tools, Outlook is definitely the most customizable.


Somewhat to my dismay when I first bought it, OmniFocus for Desktop feels cluttered. In an effort to highlight/bold the information that really matters right now, they put other information (like project, context, etc.) in a light gray. Far from actually drawing the eye in, this reads as one undifferentiated block of text. Also, the fact that they consider “due soon” to mean “within 24 hours” means that things that won’t be actionable until tomorrow show up as yellow now, even though they aren’t actionable right now.


Where it’s most cluttered/cramped, though, is the side panel that shows the details for each action. First of all, I have to scroll to see the notes section. Second, the note is very cramped, inconvenient, and can’t really contain any useful attachments. The due dates and whatnot are very useful, but not very usable, and are very click-heavy.

On the positive side, there are a good number of shortcuts for switching perspectives, quick entry, and other tools to make using OmniFocus on the Desktop and on the iPhone very easy to use.


Of these tools, Todoist has hands-down the cleanest display. Labels only show when needed/specified (unlike OmniFocus’s contexts, which are omnipresent). The comments section is large and easily accommodates some stream-of-consciousness work and updates. 

Like OmniFocus, it’s got a fairly click-heavy UI, but Todoist does allow more keyboard use when assigning due dates to new actions, which can save having to move your hand back and forth between keyboard and mouse with each task. 

Where OmniFocus’s UI excels at displaying the same information in different ways, Todoist’s UI excels in its simplicity and the ability to jump between different projects and display different information in the same way.


Features, Kludges, and Bugs


As previously mentioned, you can create tasks out of emails using quick actions. That’s really the only feature worth mentioning, because Outlook Tasks is really just a to-do list; it’s not a fully fledged task management tool, much less a project management tool. Quick actions, while useful, are a bit buggy in that the order of operations is not always respected, meaning that my categories don’t always get applied automatically.


Of these three tools, OmniFocus seems to have the most features, and some of the most powerful features. Most notably, OmniFocus has the ability to defer tasks and has a robust concept of sequential tasks. In this example, Scan Wedding Cards is the next action I need to take for this sub-project, and the later actions, which are dependent on the first action, are listed as “remaining” but not “available” and can easily be displayed or hidden depending on the perspective. 

Not only is this useful for projects with long time horizons, but it’s extremely useful for recurring actions that only happen 2 or 3 times a year, like scheduling a dentist appointment or changing the furnace air filter. Ideally, I want to forget that I have to do these things at all until the time comes to do them. If I accidentally think about these things, I can rest assured that my task management system has it covered. This “defer” feature can somewhat be replicated in other systems, but it’s not native the same way it is in OmniFocus.

The other feature I really love with OmniFocus is the weekly review. On a weekly basis, OmniFocus (for desktop only) allows you to quickly look at every remaining project, regardless of status, to make sure that the status (active, waiting, or on hold) is appropriate and the due dates are well defined.

On the negative side, OmniFocus desktop has a really annoying bug: if you complete a recurring task, it will automatically generate the next task, but if you un-complete and then re-complete the original action, it will re-copy the recurring action for tomorrow, creating a duplicate recurring series. OmniFocus recurrences also aren’t smart enough to “jump ahead” if you miss a day. And, as mentioned, the notes section feels like an afterthought and is very limited in tools.


In many ways, I think Todoist’s biggest features are the UI and its ubiquity. Todoist is available on iOS, Android, OS X, Windows, and online. It also has plugins for CloudMagic (my email tool of choice for my apple devices) and Outlook (my mandated tool at work). The UI makes entering dates exceptionally easy compared to OmniFocus; for example I can enter things like “every weekday” or “next Thursday” and it will set the date appropriately.

The other particularly cool/unique feature of Todoist is the Filters functionality (premium only), which, if you’re reasonably comfortable with Boolean logic and are willing to put in the time setting them up, can replicate many (though not all) of the features in OmniFocus that Todoist lacks (like the defer feature).
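For the curious, Todoist filters are short Boolean queries over dates, projects (#), and labels (@). A couple of illustrative examples follow; the project and label names are hypothetical, and the syntax is as I understand it at the time of writing:

```
(today | overdue) & #Work
7 days & @errand
no date & !#Someday
```

The first shows everything due now within a Work project; the second, errand-labeled actions coming due this week; the third, undated actions outside a Someday project.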

Organization and Structure


It’s a barely glorified to-do list. Tasks can be grouped by priority, due date, created date, or category. However, you can’t layer subtasks within other tasks, and you can’t group by combinations of categories.


Folders can contain projects, which can contain tasks, including sub-tasks, which may have their own sub-tasks. I love this level of nesting, since my brain tends to think in outline form, but being a mediocre developer, I don’t like it when my subroutines start getting more than 4 levels deep. Fortunately, OmniFocus can be configured to mark a sub-project complete when all the tasks within that sub-project are completed. I know it only saves a single click, but this is very nice.


Todoist is similar to OmniFocus with the multiple layers of projects, but instead of folders they have parent projects. Again, the nesting for the projects and the actions within the projects can, in theory, go on for a while, but I think 4 or 5 layers is probably the maximum you should ever do. In this screenshot, we just show the “projects” and the parent projects. We can also have tasks and subtasks in the main task window. 


Finally, there’s pricing. Outlook comes as part of a package with the ubiquitous Microsoft Office, so let’s call that more or less free. Todoist has a free version that may work quite well for many people, but people with multiple roles or who want to compartmentalize work, home life, and personal edification would do well to purchase a premium subscription and use filters.

On the whole, OmniFocus and Todoist are fairly similarly priced:

  • OmniFocus is priced on a software-as-a-product model with a different price per platform
    • iPhone: $20
    • iPad: $30
    • OS X: $40 for the basic, $30 for the student version, and $70 for the premium version
  • Todoist is priced on a subscription model, at about $30 per year and is available on all platforms (including online) after that.

So assuming OmniFocus releases a new version every 2-3 years and you purchase the basic model on all platforms, both will run you just under $100 over 3 years. OmniFocus can be a bit more expensive, especially for the professional/premium version, but again, it does have the richest features.



As mentioned at the beginning, I have computers on multiple platforms, and I really want to consolidate my entire list of next actions—personal and professional—in the same place. To do that, I’ve chosen Todoist. Todoist has a nice Outlook plugin, which makes it integrate seamlessly with my work email (which was the only advantage of Outlook tasks in the first place), and it’s available on my Apple and PC devices. I’ve got my four different goal areas (think somewhere between 20 and 30 thousand feet in the GTD Horizons of Focus model) listed as parent projects, with projects (10 thousand feet in the GTD model) delineated as appropriate underneath the parent project. Todoist has a built-in view for “Today” and “Next 7 Days,” which are useful starting places, but I’ve created separate filters based on parent project (so far broken up broadly into “work” vs. “non-work”) to display only the sphere of tasks I care about right now.

We’ll see how this works out, but for now, I’m quite happy with my switch, particularly if I can come up with a viable way of “deferring” tasks like I could in OmniFocus (my current strategy is to just dump them in a “long term recurring” project, though that is, admittedly, not ideal). Others have used a combination of filters and labels, but since labels don’t get automatically added or removed, this would require some complicated filters that I just haven’t gotten around to caring enough to write.

Let me know what you think. And with this, I’m going to cross off my next action in Todoist.

(The Blog Posts project lives under the “Personal Edification” Parent Project in my system.)

Where is the Middle Class Going?

Background and Data Source

We’ve heard for quite some time that the middle class is in crisis and shrinking. But what do the data say? Is the middle class in crisis? Where is the middle class going? What does that crisis look like in the data?

Data for this exploration come from the Current Population Survey (CPS) Annual Social and Economic Supplement (ASEC). The CPS ASEC is a longitudinal survey of 50,000 households conducted by the US Census Bureau1. Despite the relatively small sample size (<1.5% of the population), this dataset is regularly used for income analysis and other demographic trends. Data were downloaded and pulled into R using code from Anthony Damico’s Analyze Survey Data for Free project2.

Data Scrubbing

Data were processed using the same steps as Pew Research Center’s 2015 report, The American Middle Class Is Losing Ground6. First, CPS ASEC data for the years 2000, 2005, 2010, and 2015 were downloaded into MonetDB. Second, Total Income3 was adjusted for inflation to 2015 USD using the getSymbols function from the quantmod package4 to get Consumer Price Index (CPI) data from the Federal Reserve Bank of St. Louis.
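As a sketch of that inflation adjustment: the analysis pulls CPI live from FRED via quantmod (shown in the comment below); here the annual-average CPI-U values are hardcoded approximations so the example runs offline, and the $50,000 income is hypothetical.

```r
# Live version (requires network access):
#   library(quantmod); getSymbols("CPIAUCSL", src = "FRED")
# Approximate annual-average CPI-U index values, hardcoded for this sketch:
cpi <- c("2000" = 172.2, "2010" = 218.1, "2015" = 237.0)
adj_to_2015 <- cpi["2015"] / cpi  # multiply nominal income by this factor

nominal_2000 <- 50000                               # hypothetical 2000 income
real_2000    <- nominal_2000 * adj_to_2015["2000"]  # roughly 68,800 in 2015 USD
```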

Finally, household income was adjusted for household size. Intuitively, we know that when two people live together, their household income is the sum of each individual’s earnings. However, many expenses (most notably housing and utilities) are shared, so a household of two does not have double the expenses of a household of one. To adjust for this fact, the standard procedure is to divide total household income by the square root of household size5. In other words, a two-person household is assumed to have 1.41 times, rather than 2 times, the expenses as a one-person household.

After income was standardized across all households in the dataset, income class was calculated using the definition from Pew Research Center6: middle income is 2/3 to 2 times the median adjusted income. Anything above two times the median is considered “upper” income, and anything below two-thirds of the median is “lower” income. This is distinct from lower, middle, and upper class, which have a wealth component in addition to income, as well as connotations of lifestyle6. This is still a relatively crude measure since it does not account for cost of living, but it is useful for a broad-strokes analysis.
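Putting the size adjustment and the class cutoffs together, here is a minimal sketch. Column names follow the variable table in Appendix B; the three households are hypothetical, and the real analysis computes the median over the full survey rather than three rows.

```r
# Three hypothetical households, already expressed in 2015 USD (cpi_adj = 1)
hh <- data.frame(h_income = c(30000, 60000, 200000),
                 h_size   = c(1, 4, 2),
                 cpi_adj  = 1)

# Size adjustment: divide by the square root of household size
hh$adj_h_income <- (hh$h_income / hh$cpi_adj) / sqrt(hh$h_size)

# Pew-style class cutoffs at 2/3 and 2 times the median adjusted income
med <- median(hh$adj_h_income)
hh$seclass <- cut(hh$adj_h_income,
                  breaks = c(-Inf, (2/3) * med, 2 * med, Inf),
                  labels = c("lower", "middle", "upper"))
```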

Exploratory Analysis

First, like Pew6, I found that the middle class is shrinking, at least in terms of people living in each income class.


These proportions are slightly different from those found by Pew6, but they show the same general trend and are therefore close enough for my purposes. My larger interest is to look in more detail at how the distribution of income is changing. We can see that income is, as it always is, heavily skewed to the right, but the distributions are not identical year-to-year.


Difference in Density Analysis

Theoretical Construction and Example

The question I want to answer is where the middle class is going. To some degree, this is answered by the above graphs showing the proportions of adults living in each income category, but I’m asking a further question: which income levels are more–and less–prevalent than they used to be? Conceptually, I want to zoom in to see the gaps between the density curves. Since this is something of an unconventional way of visualizing data, let’s start with a proof-of-concept example. First, we will take random samples from a uniform and a Gaussian distribution and graph the density functions of these two samples on top of each other.

library(ggplot2)
set.seed(100) #Set the seed for reproducibility
#Get two random samples (sample sizes and the Gaussian's parameters are illustrative)
sample1 <- data.frame(x = runif(10000, min = 0, max = 2))
sample2 <- data.frame(x = rnorm(10000, mean = 1, sd = 1/3))
#Convert these random samples to density distributions
dist1  <- density(sample1$x, from = 0, to = 2)
dist2  <- density(sample2$x, from = 0, to = 2)
#Put these density functions into a dataframe
df1    <- data.frame(x = dist1$x, y = dist1$y)
df2    <- data.frame(x = dist2$x, y = dist2$y)

#Plot these density curves
ggplot() +
     scale_x_continuous(limits = c(0, 2)) +
     geom_line(data = df1, aes(x, y), size = 1.4, colour = "blue") +
     geom_line(data = df2, aes(x, y), size = 1.4, colour = "green")


As expected, we can see the sampling from the uniform distribution (blue) is more densely populated at the ends of the range and less prevalent around the mean. We can quantify these differences by looking at the differences in the density curves. For example, at x = 1 the normal density curve (green) is 1.214 and the blue curve is 0.57 making green 2.128 times as dense as the blue population at x = 1.

To visualize this and see these differences in clearer relief, we can create a difference-in-density curve (no longer a density curve, because the difference can be negative). When we graph this difference curve, we can highlight the sign of the difference in density to see which population is higher or lower, and easily visualize the magnitude of these differences. As this is a comparison, we must have a base population and a comparison population, where the resultant curve is density(base)-density(comparison). In this case, blue (uniform distribution) will be our base population and green (normal distribution) is the comparison population.

#Build a dataframe with the difference in densities
df3    <-data.frame(x=dist1$x,y=dist1$y-dist2$y)
#Graph it
ggplot() +
     scale_x_continuous(limits=c(0,2)) +
     geom_line(data=df3,aes(x,y),size=1.4,colour="black")


Now, we can see not just where in our x range the blue population is more prominent than the green population, but also the magnitude of these differences, the latter of which is harder to see when the two density curves are merely juxtaposed. As we’ll see later, we can calculate the area under the curve for different sections to quantify the relative magnitude of each difference between the base and comparison densities.
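That area calculation can be sketched as a simple Riemann sum over the density grid. The toy curve below (y = x - 0.5 on [0, 1]) stands in for a real difference-in-density curve so the arithmetic can be checked by hand.

```r
# Area under the positive part of a difference curve between lo and hi,
# approximated as a Riemann sum (density() returns a uniform x grid)
auc_positive <- function(df, lo, hi) {
  dx  <- mean(diff(df$x))                           # uniform grid spacing
  seg <- df[df$x >= lo & df$x <= hi & df$y > 0, ]   # keep the positive part
  sum(seg$y) * dx
}

# Toy check: y = x - 0.5 on [0, 1]; the positive area is 0.5^2 / 2 = 0.125
grid <- data.frame(x = seq(0, 1, length.out = 1001))
grid$y <- grid$x - 0.5
auc_positive(grid, 0, 1)  # approximately 0.125
```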

Difference in Density in Income

Let’s apply this same methodology of differences in density to income distributions over time. Since this methodology necessarily requires a base year and can only compare two distributions at a time, we will use 2015 as our comparison year and look at where household income has increased or decreased relative to the base years of 2000 and 2010.


As expected, the proportion of middle income households is smaller (read: negative relative density) in 2015 than in 2010 or 2000. But where are those households going? As seen by the green on the far left, we can see more households living with $0 or ultra-low income. But the $100,000 to $200,000 range is far more common in 2015 than either base year, indicating that we also have more households in the upper income echelons. Calculating the areas under these curves, we can compare the size of the shifts to upper and lower echelons.

Comparison of relative increases to lower and upper echelons from base year to 2015
baseyear   lowerGain   upperGain   timesGain
2000       1.3e-06     1.51e-05    11.737
2010       6.3e-06     7.90e-06    1.258

lowerGain and upperGain columns are the areas under the positive portion of the difference in density curves where income is less than $50K (lower) or between $50K and $200K (upper). timesGain is the ratio between upper echelon increases and lower echelon increases.

Economic Interpretation and Explanation

In the final column of the above table, we see that since 2010, 25% more households moved to upper income than moved into lower income. Note that this is a cross-sectional analysis, so we cannot comment directly on which households moved or other characteristics about those moves. There is some evidence that part of the gains to the $100,000 to $200,000 income levels came not from the middle income group, but from declines in the highest income earners. That said, particularly when we look at the last 15 years (base year 2000), we can see the relative shifts are far more (11 times) in favor of large opulence than poverty.

What do these graphs tell us? First, I think the hyperbolic notion that those evil, billionaire CEOs are taking all the money away from the middle class is solidly debunked by the substantial growth in the proportion of upper middle income households. Are the ultra-rich becoming richer while the mega-rich retire and don’t get replaced by burgeoning young professionals? That conclusion could be supported by the data (note the blocks of red above $250,000 annual income, indicating that household-adjusted incomes in this range are less prevalent in 2015 than in the base years), but it is certainly not the only explanation.

But what else is going on to explain these findings? Looking at the demographics, we can see some notable differences, most notably in the number of workers per household.


Pairwise t-tests show statistically significant differences between all the income classes. However, the substantive difference seems to be the number of workers. Lower income households tend to have only one worker, though the average number of adults across income classes suggests that these are not, on average, single parents who are single earners (though that arrangement is significantly and substantively more common in lower-income households than in middle or upper-income households). On the other hand, upper income households have, on average, 2 or more earners. In other words, both the average and median households are better off in 2014 (the income year reported on in the 2015 survey) than they were in 1999 or 2009. However, a large part of that is due to the wider trend of two-income households and other demographic shifts5. This larger trend explains some of the shift between 2000 and 2015, but we can see that this cannot be the full story, since the average number of workers per household (and all other measures of household size) actually falls between 2010 and 2015. Unfortunately, I need to end my investigation here, so I will delve into that question at a later date.
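For readers curious about the test itself, the comparison can be sketched with base R’s pairwise.t.test on simulated data. The per-class means below are made up for illustration; the real analysis runs on the CPS households.

```r
set.seed(1)
# Simulated workers-per-household counts by income class (illustrative means)
workers <- c(rpois(500, 1.0),   # "lower" income households
             rpois(500, 1.6),   # "middle"
             rpois(500, 2.1))   # "upper"
class   <- rep(c("lower", "middle", "upper"), each = 500)

# All pairwise comparisons, with a Bonferroni correction for multiple testing
pairwise.t.test(workers, class, p.adjust.method = "bonferroni")
```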


Yes, the middle class is shrinking. Inequality is real, as is poverty. CEO pay has exploded since the mid 1990s7. This is all true. But whatever the populists say, this does not mean that everyone is suddenly going to be subjugated to the ultra-rich. The median and average households are still doing okay. There are demographic shifts–lower fertility rates, higher cohabitation, higher divorce rates, lower/later marriage, higher rates of children living with parents longer, etc.–and these demographic shifts go a long way to explaining a growing proportion of upper middle income households. Whether these are good shifts or bad shifts depends on your values and worldview, but economically, they are preventing household inequality from rising.

This all implies that maybe, just maybe, free market capitalism is doing what free markets do best: expand opulence for more people than are harmed by free trade and free markets. This is not to say that everything is perfect and rosy. Obviously, there are serious social challenges facing America today, and inequality has complicated and real ethical and moral concerns. However, the populist nightmare that the middle class is becoming poorer to the benefit of the upper classes of society does not appear to be one of those challenges.

Appendix A: Future Directions

For better or for worse, I’m a busy guy, and this is–for now–just a hobby. I hope to add to this investigation as I have time, but in the meantime, I encourage readers to fork my repo and add some of the following adjustments and considerations I wish I had time to include:

  • Adjustment for cost of living, or at least calculating the median (and therefore class) by region or FIPS code. This will almost certainly require a new, larger dataset, but is worth exploring given the urbanization of millennials.
  • More robust ways of accounting for household size generally.
  • Identifying the same households over time to track changes to income class.

Appendix B: Code

All code is available on GitHub. Below is a scattering of key pieces of code.

SQL Query used to get a subset of the data from MonetDB

#Retrieve data from the MonetDB SQL database using dbGetQuery
query2015 <- "select h_idnum1
                    ,h_year
                    ,sum(case when earner=1 then 1 else 0 end) as h_num_earners
               from asec15 where htotval > 0 group by h_idnum1,h_year"
df2015 <- dbGetQuery(db, query2015)
#From BuildHHCSV.r

Variables used in analysis

R_Variable_Name   ASEC_Variable_Source   Variable_Description
X                 N/A                    Observation counter
year              h_year                 Survey year (data reflect previous calendar year earnings)
h_id              h_idnum1               Household identifier
h_wages           hwsval                 Household income from wages or salary in the previous calendar year
h_income          htotval                Total household income in the previous calendar year
h_size            h_numper               Number of people (all ages) living in household
h_num_adults      h_numper-hunder18      Number of adults (18+) living in household
h_num_earners     earner                 Number of people earning some income in household
h_num_fams        hnumfam                Number of families living in household
cpi_adj           [FRED Data]            Annual CPI adjustment factor
adj_h_income      [Calculated Value]     (h_income / cpi_adj) / sqrt(h_size)
seclass           [Calculated Value]     Socioeconomic class, as defined by Pew

Difference In Density Function

#Building the difference-in-density graphs
getYearComparison <- function(df,year1=2000,year2=2015) {
     dist1 <- density(subset(df,year == year1, adj_h_income)$adj_h_income,from=0,to=1000000)
     dist2 <- density(subset(df,year == year2, adj_h_income)$adj_h_income,from=0,to=1000000)
     df1    <-data.frame(x=dist1$x,y=dist2$y-dist1$y) ; df1$pos <- df1$y>0
     compareYears <- ggplot(data=df1) +
          geom_line(aes(x=x,y=y),size=1,colour="black")   +
          ggtitle(paste0("Changes in Income Distribution Between ",year1," and ",year2)) +
          xlab("Adjusted Household Income (2015 USD)") +
          ylab("Difference in Distribution Density") +
          scale_x_continuous(labels=comma,
               breaks=c(0,50000,100000,200000,300000,400000,500000)) +
          scale_y_continuous(labels=comma)
     #Return both the graph and the new dataframe
     list(plot=compareYears, data=df1)
}


1: US Census Bureau. (n.d.). Small Area Income and Poverty Estimates. Retrieved from

2: Damico, A. J. (2016). Current Population Survey. ASDFree. GitHub repository; commit c680eec92cbba64512d756e533696dedaa3d415e

3: Variable htotval from the CPS Data Dictionary

4: Ryan, J. A.; Ulrich, J. M.; Thielen, W. (2015). quantmod. CRAN Package.

5: Burkhauser, R. (2012). Podcast interview with Russ Roberts. Retrieved from

6: Kochhar, R.; Fry, R.; Rohal, M. (2015). The American Middle Class is Losing Ground. Pew Research Center. Retrieved from

7: Planet Money. (2016). Episode 682: When CEO Pay Exploded. Retrieved from