A Brief Commentary on Communication as an Influential Force in Societal Development


Enterprise: a project undertaken or to be undertaken, especially one that is important or difficult or that requires boldness or energy

Human: a species of bipedal mammal who, as far as humans are aware, is the only species capable of self-aware consciousness[1] and therefore intentional self-improvement

Human Enterprise: the project of all humanity to improve the well-being of oneself in society and of society as a whole by means of self-aware consciousness and initiative

Bullshitious: Bullshit, but making it sound fancy[2], or what I do on this blog.

In a way, the entire human enterprise is an experiment in communication. It was the human capacity for communication that enabled cooperation, engendering a virtuous cycle in human and social evolution. It was communication and collaboration that allowed humanity to survive despite facing faster, stronger, and fiercer predators and prey. And it was communication, particularly written communication, that laid the groundwork for civilization, a feat that no other species has achieved before or since Homo sapiens sapiens (at least not on Earth).

I define communication as the act of conveying information or ideas from one party to another. Good communication is communication that achieves this goal: the receiving party received and understood the information or idea as the sending party intended. This applies both to semantic communication (do you understand what I said?) and to what I’ll call “essential” communication (as in “essence”): do you understand what I mean, how I feel about the topic, and how I want you to view me?

The human enterprise, pre-historically speaking, has two components: living in larger groups and adapting the environment to meet our needs, rather than purely responding to it. Both of these uniquely human accomplishments began with the rise of agriculture. By adapting the land—by plowing, weeding, etc.—to our need to acquire food with as little effort as possible, humans slowly ceased to be nomads and settled in one location. Farming also enabled specialization, allowing some people to focus their energy on things other than food acquisition, and towns formed around this new division of labor.

We then come to the fabled tale of the Tower of Babel[3]. The Babylonians were poised to complete the great accomplishment of building a stairway to heaven, so they could make a name for themselves and have direct access to God via this stairway. I can only assume that they had already booked Led Zeppelin to play the ribbon-cutting ceremony, but God confused their speech and they scattered before they could finish the project. Whether it actually happened this way is, like the rest of Genesis 1-11, completely beside the point[4]. Rather, this ancient story gives us a glimpse of human relationships—with each other and with God.

The problem at Babel (besides the self-proclaimed demigod status that so angered God) was that people couldn’t communicate with each other. When Enkidu told Urshanabi that he needed more bricks on the southern wall, Urshanabi didn’t get the message and didn’t bring the bricks. Then Enkidu got pissed off and left to go hang out with Gilgamesh, leaving the southern wall unfinished. There was no semantic communication happening, much less conveying the essential idea that Urshanabi had for a thing he called an “arch” to stabilize the second level.

The point is that when we don’t speak the same languages—in the case of Babel, literally, but this applies almost equally well to the “languages” of development, project management, etc.[5]—we don’t make any progress. Further, the complexity of endeavors in the human enterprise requires complex communication. Building a massive tower—or developing a successful application, building and maintaining a highway infrastructure, or mending damaged race relations in a country with a history of slavery or apartheid—requires communicating extremely complicated sets of instructions, goals, and sentiments. This goes far beyond the (still very impressive) proto-language bees use to communicate how far away and in which direction a food source is[6]. Human communication involves conveying abstract emotions and ideas, visions for the future, and perceptions of reality as interpreted by the speaker.

Following from this, all of the above endeavors also require that we communicate about the future. Again, this is something that other species occasionally show signs of[7], but humans are unique in that we display this behavior ubiquitously. The human enterprise necessitated the creation of an ever-expanding lexicon and complex grammatical and syntactical structures. It’s ultimately unknowable which came first, extensible language or abstract thought, but at some point, humans began conceiving of and conveying beliefs about a future state of the world: “I bet if I put this little seed in the ground and pour water on it, a plant will emerge in a couple of months.” Throughout the millennia, the problems facing humanity have changed, and on and off since the times of the Greeks and Romans, we have struggled with vague ideas and concepts like “purpose,” “morality,” and eudaimonia. And so the complexity of our language grows in a self-reinforcing feedback loop to match the newer and “higher” needs and goals of the human enterprise.

Going back to my opening statement, I want to dive into what I mean by “experiment” in this context. The entire human enterprise is an experiment in communication. An experiment is trying something and seeing what happens, and that’s really what we’ve done as a species. I’m going to begin closing with a series of somewhat disjointed thought experiments and case studies in communication, and then attempt, feebly, to tie it all together for an actual closing.

Military conquest

The entire concept of war—two leaders disagree about economic matters, so they use high-minded rhetoric to send the socio-economically disadvantaged from each side to go kill each other until one side gives up—has always confused me[8]. Nevertheless, war and conflict have been pervasive throughout human history. I’ve always found it fascinating how many factors beyond just the skill of the troops affect who “wins” a battle or a war: logistics and supply lines; disease, both for the armies and for the home-front populations[9]; terrain and weather; and, of course, lines of communication.

There is a probably apocryphal story about Prince Rupert, a cavalry general in the first English Civil War, who received ambiguous orders from the king via a letter that had been written, presumably, a week or more before. Based on the orders in the letter, Rupert attacked the Parliamentarians at Marston Moor, where he suffered a total defeat[10]. Allegedly, Rupert carried the king’s orders in his pocket until the day he died as a challenge to anyone who accused him of making a mistake by attacking at Marston Moor[11]. His instructions from the king were ambiguous at best, and arguably worse than useless given the turn of events between the writing and the reading of the letter.

Military leaders have succeeded and been defeated based on good or bad intelligence—whether due to luck, dereliction of duty, or outright betrayal—and each campaign is a data point in the “what works in communication” experiment. Different generals with their different styles and temperaments use different modes of communication, and learn over time what works and what doesn’t. What encryption and codes successfully obfuscate meaning from the enemy while still being accessible to your allies? How much lag time should I assume and build into my orders? How should I balance flexibility with
structure in my orders to a part of the war I know little about? What is the quickest way to reliably get orders from one side of a battlefield to the other? If we accept (as almost everyone does) that military leaders have shaped history, then we must appreciate the role experimenting with different styles and modes of communication has played in world history.


Marketing and advertising

Marketing and its close friend advertising are where the line between semantic and essential communication becomes at once a critical distinction and a pair of double-Dutch ropes. Beyond telling you what the product is, a good advertisement subtly convinces you that the product is a good one and that the producer is reliable and trustworthy. Bad advertisements, like the recent Pepsi commercial that sparked a backlash[12], are bad not because they fail to convey what the product is (semantic communication), but because they fail to paint the producer in a positive light—like Pepsi being tone deaf and really not giving a shit about social justice.

And then we come to the more insidious parts of advertising: you will be complete if you buy this. For this message, subtlety is critical as marketers balance making the message easily inferrable without being so blatant as to turn people off. Much as I generally hate this messaging, I can’t deny that when it’s well done it’s brilliant. Every advertisement is an experiment in what motivates humans. That’s not to say that advertising is all there is. On the contrary, vaporware always crashes and burns. Nor is it to say that advertising is the whole of marketing or that all marketing is insidious. It is to say, though, that the firms that grow and last do so because they’ve experimented and found a communication strategy and brand image that works for them. When they gain market share beyond a small critical mass, it’s as much about communication as it is about having a high quality product.


Throughout history, territory has changed hands, fortunes have been made and lost, and the expansion of the human enterprise has occurred through competing messages, competing media, and novel ideas put into language. The evolution of language—from the present to the future to the abstract—was inseparably linked with the expansion of philosophy and human intellectual pursuits. The human enterprise, that never-ending attempt to improve the quality of life and epistemological enlightenment of the human population, advances only when we interact by sharing ideas. When we communicate with one another. It’s not always advancing, but that’s where the experiment comes in, and when it works, we need to capitalize on that positive outcome and remember the methodology used to create it. And expand eudaimonia.


[1]  By “self-aware consciousness” I mean awareness of one’s own consciousness, and
the act of “second order thought” that, according to St. Thomas Aquinas, sets
us apart from other creatures. See Summa Theologica.

[3] Genesis 11

[7] For example, crows using traffic to crack nuts or monkeys storing stones to throw at zoo-goers. Both are real examples, but they are exceptions that prove the rule. Animals may be capable of adapting to the environment and doing short-term strategic planning, but most of what they do, including migration, appears to be instinctual rather than based on long-term planning.

[8] And also the genius that is Bill Watterson: https://boscobae.blogspot.com/2012/11/calvin-and-hobbes-on-war.html

[9] E.g. the “Germs” part of Guns, Germs, and Steel, by Jared
Diamond. https://www.amazon.com/Guns-Germs-Steel-Fates-Societies/dp/0393317552

[12] AKA the “attractive lives matter movement” http://www.businessinsider.com/stephen-colbert-kendall-jenner-pepsi-ad-controversy-2017-4
Or, my personal favorite play on this ad from SNL: https://youtu.be/Pn8pwoNWseM



Comparisons between Healthcare and Auto Repair

The Analogy

In the United States, cultural and government mandates have tended to place healthcare in its own category, distinct and siloed off from other industries. It’s true that both medical practice and the financing thereof are, in many ways, unlike any other industry, but that’s not to say they must be treated differently or that healthcare can’t learn from other industries.

Healthcare delivery can learn from the supply chain management of chain restaurants and their ability to deliver consistent quality across locations, time, and people by using standardization (Gawande, Big Med, 2012). Healthcare can learn from the airline industry and the power of the checklist in quality enforcement (Gawande, The Checklist Manifesto, 2009). And although the healthcare silo delineates even between a Master’s in Business Administration (MBA) and a Master’s in Health Administration (MHA), the fundamental principles of business, growth, and management are still more or less the same (Hekman, 2010), even if they’ve tended to be kept separate.

Further, I believe that we—patients, citizens, consumers, policy makers—can learn a great deal about healthcare financing by looking at other industries. Health insurance is unlike other types of insurance, but I would argue that this represents a misunderstanding of the application of insurance to healthcare, rather than a difference in the fundamental nature and category of healthcare and health insurance. In this case, I believe we could learn a great deal about a better way to implement health insurance by looking at the auto insurance industry.

I’ve often made this analogy, but I want to flesh it out, and see how far it really goes.

Parallels between Healthcare and Car Repair

Because many people balk at the comparison initially, I want to start by highlighting areas where these two, seemingly disparate industries are, in fact, quite similar.

Specialized knowledge of a complex system

Although the human body is more complex and less well understood than the internal combustion engine, both are sufficiently complicated as to require specialized knowledge to diagnose and fix/treat. In both cases, therefore, the learning curve of acquiring that specialized knowledge and its costs—money, time, etc.—create a natural barrier to entry to supply these particular services. The fact that such a barrier exists means that specialization and division of labor are not just advantageous, as they usually are (Roberts, 2010), but truly necessary: self-sufficiency is not an option for anything beyond the most rudimentary oil change or wound care. The fact that the human body is more complicated, or that physicians receive commensurately more specialized training and certification, does not change this. It only means that physicians require more reimbursement than mechanics (both in pecuniary form and in prestige), which, in the US, they receive.

Asymmetric information

As a result of the specialized training and knowledge of mechanics and physicians, we have a situation that economists call “asymmetric information,” meaning that one party in a transaction knows something that the other party doesn’t (Arrow, 1963). The asymmetry runs both ways: on the consumer side, the consumer may not disclose germane information to the provider (mechanic or medical provider), whether due to embarrassment or shame—for example, about sexual practices or driving habits (Caine & Tierney, 2015)—or due to not believing that the information is relevant. On the supplier side, the asymmetry means that mechanics and physicians have a much better understanding than consumers of how reliable a diagnostic test is and whether or not a given service is truly necessary.

Fiduciary responsibility and history of bad actors

Because consumers don’t always know better, they tend to trust the experts—in this case, mechanics or physicians—about what services they do or don’t need, and an unscrupulous provider could use that to his or her advantage by recommending services that are not in the best interest of the consumer. Although the trope of the dishonest mechanic who recommends unnecessary parts and services is an old one, the same behavior in physicians—over-ordering unnecessary or low-value care due to greed or defensive medicine—has recently come into the public discussion as well (Gawande, 2009; Gawande, 2015).

Insurance to spread out risk

Although we expect to require minor maintenance services for both our cars—oil changes, tire rotations, etc.—and our persons—routine physicals, medications, etc.—we also expect to have larger needs, but only occasionally. I might need an annual physical and an annual tire rotation, but I also expect that there’s a possibility that I will get in a motor vehicle accident and require both medical care and car repairs, both of which will be quite expensive. It’s possible I’ll go through my whole life without a serious accident, but just in case, I’ve bought insurance to cover the costs of that emergency.

Furthermore, as of 2014, both health insurance and car insurance are required for almost everyone. New Hampshire and Virginia do not require car insurance and there are some exceptions to the individual mandate portion of the Affordable Care Act (ACA; aka ‘Obamacare’), but in general, insurance is required for everyone with health and/or a car.

Service is a small part of a critically important outcome

Both good health and a functioning car are critically important to the survival and well-being of most people. Obviously, a working car lacks the life-and-death-hanging-in-the-balance mystique of trauma surgery and isn’t critical in densely populated urban areas, but reliable transportation is a major determinant of health. Our cars get us to and from work, to and from grocery stores, and generally play a role in most aspects of daily living in suburban and rural areas (RHIhub, 2017).

However, a mechanic isn’t needed to drive the car, just to fix the car when it breaks, much as a physician isn’t needed to help us perform activities of daily living: health is. But so much of both our physical health and the “health” of our cars has to do with our environments and our behavior. Salty roads take a toll on the body of a car, and where we live (i.e., how rural) affects how many miles we put on it. By the same token, where we live affects our access to healthy foods and clean air. Our behavior also affects our health and the health of our cars. By some estimates, medical care accounts for only 20% (or less) of health outcomes, with the remainder attributable to social determinants of health (McGovern, Miller, & Hughes-Cromwick, 2014). I don’t know if anyone has done something similar for automotive outcomes, but it seems plausible to me that the same holds for cars: regular access to high-quality mechanics might account for only 10-20% of vehicle life.

Market structure

Finally, there’s market structure. The majority of mechanics and the majority of physicians are in small, private practices, with a handful of assistants to handle scheduling, billing, etc. Additionally, dealerships and hospitals hire a number of mechanics and physicians, respectively. The prices at dealerships and hospital-owned physician practices tend to be higher, but the quality more consistent. The quality of independent practices varies a lot more, such that you can probably get a better deal on a physical or a repair by going to an independent provider, but the transaction costs of identifying the high-quality providers can be considerable.

Where the Similarities End

Despite their similarities, auto repair is obviously not healthcare. Most notably, cars are not sentient beings; they are man-made machines for which we possess a complete blueprint. Thus, the ethical considerations associated with auto repair are really just commercial/economic ethics, whereas medicine has a separate set of ethical considerations apart from economic activity.

Several additional differences exist in the market dynamics of auto repair and insurance purchasing. With health insurance, the majority of individuals get coverage through their employer or the US government; auto insurance is virtually entirely private, with individuals purchasing their own policies. Additionally, price transparency is virtually non-existent in healthcare. While prices in auto body repair function on an estimate-only basis, those prices are widely published online, and all mechanic shops will provide an estimate in advance at no cost, which cannot be said of medical providers.

Nevertheless, the non-ethical differences between healthcare and auto repair are really differences of degree, rather than fundamental differences in the nature of each service. The human body is not fully understood the way cars are, but from the point of view of both the consumer and the mechanic, the result is more or less the same: a car is still a complex system that the consumer doesn’t fully understand, though the mechanic understands it better than the customer does; the same is true of healthcare, just with a greater degree of complexity in the subject.

What We Can Learn

The point of all of the above was to show that healthcare and auto repair are somewhat analogous, so what can we learn from this analogy? What lessons—successes or failures—from the auto repair industry can we apply to healthcare?

Lemon Law and Malpractice

When a physician orders, say, a total shoulder arthroplasty and the patient survives and is able to move her arm, the surgery is considered a success. From the perspective of the patient, though, the surgery should only be considered a success when it fixes the underlying problem—in the case of a shoulder arthroplasty, when the pain or limited motion is resolved. Even if the surgery wasn’t a success and the patient died on the table, the physician would get paid. That is understandable insofar as surgery is risky and we shouldn’t incentivize physicians to avoid caring for risky patients, but it’s no surprise that physicians tend to over-order invasive procedures—procedures that other countries avoid due to cost and limited success (Reid, 2010)—since physicians see all the benefits with little to no risk of a downside.

In the rest of the economy, particularly for automobiles, we have “lemon laws” that protect consumers—or at least give them a viable recourse—from being taken advantage of by the proverbial used car salesman. Specifically, the Magnuson-Moss Warranty Act of 1975 (MMWA) mandates federal standards for warranties on consumer products, including vehicles, and most states have passed similar, additional laws. The MMWA and the subsequent state laws are far from perfect: warranties are full of legalese, and there’s a reasonable argument to be made that such consumer protection laws make us less safe because we all have warning fatigue. Nevertheless, a legal apparatus exists such that warranties must meet certain standards and are enforceable, and there is a competitive advantage to be gained by offering a warranty or guarantee (Tommy Boy notwithstanding).

In medicine, we have malpractice lawsuits, but those are for cases of gross negligence or obvious misdiagnosis; a “successful” procedure that didn’t deliver the advertised results shouldn’t result in a malpractice suit, because the physician did everything right in the execution of the procedure, just not in the advertising and prognosis. Still, physicians should be held accountable for their recommendations, given that they act in a fiduciary capacity in an environment of asymmetric information. If they recommend a procedure that empirically has a low rate of solving the problem from the patient’s perspective, they should not receive the same financial benefit from it that they receive when they recommend and perform a procedure that does solve the patient’s problem.

Would it be possible to implement a medical warranty? Politically, the answer is almost certainly no, given America’s history with health reform (Steinmo & Watts, 1995). But such laws have mitigated the negative effects of bad actors in a fiduciary role: in the auto industry, false advertising isn’t in the long-run financial interest of car salesmen. If we were to implement similar laws in healthcare, we could prevent over-ordering of invasive procedures from being in the long-term financial interest of physicians. Such laws wouldn’t penalize physicians with malpractice suits, but they could prevent physicians from being paid as much for a procedure that didn’t benefit the patient. For example, rather than being paid the same rate by Medicare whether the patient survives, recovers, or improves, Medicare could pay 100% of current rates for survival (the current standard), 110% for improvement, and 65% for recovery without improvement. While this might seem to incentivize a physician to kill the patient on the table (and make it look like an accident) rather than risk a recovery without improvement, I think it’s fair to say that (1) the vast majority of doctors would never intentionally kill a patient and (2) that would clearly fall under malpractice and, if proven, would result in a revoked medical license and possibly criminal charges.
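To make the arithmetic of such an outcome-tiered scheme concrete, here is a small sketch. Only the 100%/110%/65% multipliers come from the example above; the base rate and case mix are made-up numbers for illustration, and none of this reflects actual Medicare policy.

```python
# Hypothetical outcome-tiered reimbursement using the illustrative
# multipliers from the text; the base rate and case mix are invented.
BASE_RATE = 15_000  # assumed base payment for the procedure, in dollars

MULTIPLIERS = {
    "improvement": 1.10,                   # patient improved: pay a bonus
    "survival_only": 1.00,                 # current standard: paid for survival
    "recovery_without_improvement": 0.65,  # survived, but problem unresolved
}

def reimbursement(outcome: str, base_rate: float = BASE_RATE) -> float:
    """Payment for a single procedure, given its observed outcome."""
    return base_rate * MULTIPLIERS[outcome]

# Expected payment under an assumed case mix: 60% improve, 30% recover
# without improvement, 10% merely survive.
case_mix = {
    "improvement": 0.60,
    "recovery_without_improvement": 0.30,
    "survival_only": 0.10,
}
expected_payment = sum(share * reimbursement(outcome)
                       for outcome, share in case_mix.items())
```

Under these made-up numbers, the expected payment comes out slightly below the flat rate, so the scheme only rewards physicians whose patient selection and outcomes beat the assumed mix.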

Insurance Markets

Insurance, by definition, disperses risk. For high-cost, low-probability events, we disperse the risk such that even when the unlikely event occurs, we aren’t wiped out by the cost. For example, home insurance costs less than a dollar a day, but in the unlikely event that your house burns down, you aren’t wiped out financially and left homeless. How insurance is implemented, though, varies between healthcare and auto insurance. Here are some of the key differences in auto insurance, relative to health insurance, that I believe could be adopted in healthcare to the benefit of patients.

Not covering routine services

No car insurance covers oil changes, tire rotations, etc. These things are considered givens, and not part of the risk that insurance covers. After all, the probability of needing an oil change is 1, so the actuarial value of an oil change is simply the price of an oil change. Similarly, a routine annual physical is an expected expense. There’s no “risk” of having an annual physical—it’s a given—so insurance, by definition, doesn’t make sense. In the auto industry, we follow this definition of insurance, but for some reason, we treat healthcare differently.

With very few exceptions, almost everyone can afford a $70-120 annual physical (adjusting for local cost of living), especially with the benefit of a health savings account to put away $10 a month. By removing these and other routine medical expenses from what insurance is expected to pay, premiums will fall by the expected cost of an annual physical (price * probability that an individual in the insurance pool gets a physical) times the overhead markup (about 20% in the US). Furthermore, if individuals are responsible for purchasing their own routine medical services, demand for price transparency will increase, and competition between providers will drive down prices.
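Here is that premium arithmetic worked through with assumed numbers: the $95 price and 80% utilization are invented for illustration, while the ~20% overhead figure is the one cited above.

```python
# Back-of-the-envelope premium savings from dropping a routine, expected
# service (an annual physical) from insurance coverage.
physical_price = 95.0       # assumed price of an annual physical, dollars
prob_gets_physical = 0.80   # assumed share of the insured pool who get one
overhead_markup = 0.20      # ~20% insurer overhead markup, per the text

# Actuarial (expected) cost of covering the physical, per subscriber:
expected_cost = physical_price * prob_gets_physical

# Premium reduction: the expected cost plus the overhead the insurer
# would have charged on top of it.
annual_premium_savings = expected_cost * (1 + overhead_markup)
monthly_savings = annual_premium_savings / 12
```

At these numbers, removing the physical from coverage trims roughly $91 a year (about $7.60 a month) from premiums, while the service itself costs $95 out of pocket, so the subscriber comes out roughly even in expectation and gains the incentive to shop on price.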

These claims make two assumptions, neither of which is entirely true in the current US healthcare market: competition between insurers, and price transparency of medical services. However, I would argue that both of these unusual features of healthcare are a result of the unique setup of employer-sponsored insurance, rather than anything fundamentally different about medicine. In both medicine and auto repair, the specifics of the fix will vary wildly by the patient (human or auto), and therefore be more or less expensive, but the fact that most physicians will not, or cannot, give an estimated cost of services ex ante is more the result of a lack of demand than of any impossibility of doing so. Removing expected services from insurance coverage will, and has already begun to, increase the demand for price transparency in healthcare.

Price discrimination

Under the ACA, limits were placed on the premium increase insurers can charge sicker (read: more expensive) subscribers, relative to healthy subscribers. Specifically, the ACA allows at most a 3:1 ratio between the lowest risk level and the highest (the American Health Care Act, the proposed Republican replacement for the ACA, would allow a 5:1 ratio). While this limits the premium increases unhealthy patients—elderly, smokers, obese patients, sedentary patients, etc.—see, less risk stratification also means that healthy patients will pay more for insurance, subsidizing their more expensive counterparts in the insurance pool.
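A stylized sketch of the cross-subsidy these rating bands create is below. The premiums are invented, and real community-rating rules apply the ratio to final filed premiums rather than capping and redistributing as done here, but the mechanics of the subsidy are the same.

```python
# Stylized community-rating band: cap the riskiest subscribers' premiums
# at a multiple of the cheapest, then recoup the shortfall from everyone.
def cap_premiums(fair_premiums, max_ratio):
    """Cap premiums at max_ratio * the lowest premium, then spread the
    insurer's lost revenue evenly across the pool."""
    floor = min(fair_premiums)
    cap = floor * max_ratio
    capped = [min(p, cap) for p in fair_premiums]
    shortfall = sum(fair_premiums) - sum(capped)  # revenue to recoup
    per_head = shortfall / len(fair_premiums)
    return [p + per_head for p in capped]

# Actuarially fair annual premiums for a mixed pool (made-up numbers):
fair = [200.0, 250.0, 900.0, 1400.0]
aca_style = cap_premiums(fair, max_ratio=3)   # ACA-style 3:1 band
ahca_style = cap_premiums(fair, max_ratio=5)  # AHCA-style 5:1 band
# Under the 3:1 band, the healthiest subscriber's premium rises from
# $200 to $475 while the sickest falls from $1,400 to $875.
```

The wider the allowed ratio, the smaller the subsidy healthy subscribers pay, which is exactly the trade-off between the ACA’s 3:1 and the AHCA’s 5:1 bands.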

In the market for auto insurance, we can see some evidence of the opposite approach: auto insurers give discounts for things like age (esp. over 25), good grades, and a clean driving record—negative correlates of getting in a car accident. Granted, the big data algorithms behind these discounts are not without negative social consequences: things like zip code may be highly predictive of an individual’s health and driving record, but they also tend to discriminate against the already disadvantaged (O’Neil, 2016). Still, allowing such price discrimination, as long as it is done responsibly, can mitigate the negative impacts of an “individual mandate,” as seen in auto insurance, by minimizing some of the subsidy low utilizers give to high utilizers and therefore preventing an insurance “death spiral.” I’m not advocating that this necessarily be part of health insurance policy, but it is worth noting that it has been a successful policy in an analogous market, and incentives like this could therefore be a tool to mitigate rising premiums for healthy, young adults.

Who purchases insurance?

As previously mentioned, the majority of working-age Americans get insurance through their employer. This means that health insurance salesmen need to cater to HR managers, rather than directly to patients. Auto insurance, on the other hand, is primarily sold directly to drivers. Health insurance is complicated, but so is auto insurance—many of the terms, like “deductible” and “copay,” that allegedly cause so much frustration and confusion in healthcare are also used in auto insurance. The difference is that auto insurers like Esurance have a strong incentive to make their product more understandable, and, in my experience, they have succeeded in doing so.

The human body is more complicated than a car, so it’s understandable that insurance for health will be more complicated than insurance for a car, but it doesn’t follow that insurance for health should be impossible for the average purchaser to understand, as long as insurers have the incentives to make their policies understandable. Removing employer-sponsored health insurance would increase employee pay (by transferring the portion of insurance that employers pay directly to the workers) and incentivize insurers to demystify some of the nuances of health insurance for subscribers.

Individual Mandates

The individual mandate portion of the ACA was highly contentious with voters. Even many (~46%) liberal voters have an unfavorable view of the individual mandate (KFF, 2016). This raises the question: why? We have an individual mandate for auto insurance; why is that not as contentious? My view is that this is primarily the result of well-executed political theater on the part of conservative politicians and pundits, but given the importance of an individual mandate in preventing a death spiral, liberal politicians would do well to make this comparison. Auto insurance requirements are not contentious, so it’s unclear why similar requirements in healthcare should be.

Closing Remarks

Political implications

As noted throughout the “what we can learn” section of this post, I believe that treating health insurance more like auto insurance would be beneficial to consumers. I also acknowledge that this is a tall order politically, given the tendency of individuals—politicians, pundits, and voters alike—to view healthcare as exceptional. That’s really the main point of this post: healthcare may be unique by degree, but it is not unique in its fundamentals. Framing the discussion differently and putting health insurance in terms more people understand demystifies the topic and, I believe, increases the likelihood of finding common ground and a consensus.

Notes about this post

One of my goals for this year is to write more, and this post is part of that goal. That said, I’m not entirely happy with this post, especially the “what we can learn” section. However, I’ve spent enough time writing this, and believe that done is better than perfect. I would like to come back at a later date to revisit some of these lessons with mathematical models and data where it’s available. But for now, I ask you, my readers, to not focus on the specific lessons learned, but rather on the main point: healthcare may be unique by degree, but it is not unique in its fundamentals. The lessons are more of examples than specific recommendations.


Arrow, K. J. (1963). Uncertainty and the Welfare Economics of Medical Economics. The American Economic Review, 53(5), 941-973.

Caine, K., & Tierney, W. M. (2015). Point and Counterpoint: Patient Control of Access to Data in Their Electronic Health Records. Journal of General Internal Medicine, 30(S1), 38-41. doi:10.1007/s11606-014-3061-0

Gawande, A. (2009). The Checklist Manifesto. Henry Holt and Company.

Gawande, A. (2009, June 1). The Cost Conundrum. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2009/06/01/the-cost-conundrum

Gawande, A. (2012, August 13). Big Med. The New Yorker. Retrieved February 13, 2017, from http://www.newyorker.com/magazine/2012/08/13/big-med

Gawande, A. (2015, May 11). Overkill. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2015/05/11/overkill-atul-gawande

Hekman, K. (2010). Curiosity Keeps the Cat Alive. Holland, MI: Trillium Arts Press. Retrieved from http://www.lulu.com/shop/kenneth-m-hekman/curiosity-keeps-the-cat-alive/paperback/product-10910381.html

KFF. (2016, December 1). Kaiser Health Tracking Poll: November 2016. Retrieved from KFF.org: http://kff.org/health-costs/poll-finding/kaiser-health-tracking-poll-november-2016/

McGovern, L., Miller, G., & Hughes-Cromwick, P. (2014, August 21). Health Policy Brief: The Relative Contribution of Multiple Determinants to Health Outcomes. Health Affairs, 1-9. doi:10.1377/hpb2014.17

O’Neil, C. (2016, October 3). Cathy O’Neil on Weapons of Math Destruction. (R. Roberts, Interviewer) EconTalk. Retrieved from http://www.econtalk.org/archives/2016/10/cathy_oneil_on_1.html

Reid, T. R. (2010). The Healing of America. Penguin Group LLC.

RHIhub. (2017). Social Determinants of Health for Rural People. Retrieved March 5, 2017, from Rural Health Information Hub: https://www.ruralhealthinfo.org/topics/social-determinants-of-health

Roberts, R. (2010, Feb 8). Roberts on Smith, Ricardo, and Trade. Retrieved from EconTalk: http://www.econtalk.org/archives/2010/02/roberts_on_smit.html

Steinmo, S., & Watts, J. (1995). It’s the Institutions, Stupid! Why Comprehensive National Health Insurance Always Fails in America. Journal of Politics, Policy, and Law, 20(2), 329-372.

Paternalism and Public Health

I’m currently enrolled at the University of Wisconsin’s Leadership in Population Health Improvement Certification program. The program is fully online, so participation on a forum is a major component of the course. There’s an argument for more government involvement in healthcare that seems to be tacitly pervasive in the worldview of the type of people attracted to this sort of program.

The Argument

The argument is best summarized by one of my fellow students after I made the point that people respond to incentives, and cost sharing measures by insurers will cause patients to take a more active role in their own health decision making. This isn’t an exact quote, but I promise I’m not trying to make him sound worse than he really sounded:


Many people struggle with misinformation when making financial and health decisions. For example many people still think that fried okra is healthy. If people can’t get this basic information right or are worried about paying the bills, they aren’t thinking about this sort of overarching, higher-order effect.


That’s right, I shit you not: my classmate in a public health class thinks that people are too stupid to take care of themselves, don’t respond to economic incentives, and therefore can’t be trusted to take care of themselves and their own healthcare. This individual phrased this argument in a particularly condescending way (both to me and patients), but the core of argument is very prevalent in today’s political discussion.

Dissecting the Argument

Let’s break this down a little bit. This argument was posed as a reason for having universal, comprehensive coverage with no cost sharing. So according to my classmate, the reason copays and coinsurance are bad is three-fold:

  1. Patients don’t understand the basics of health, like diet and exercise, so therefore they can’t be expected to understand the more complicated relationship between screenings for early detection and long term health outcomes.
  2. Patients don’t respond to economic incentives because they don’t understand the actual risks and costs involved.
  3. If we remove the financial cost of medical services, patients will follow their physician’s’ advice, at least with regard to getting the care they need, even if they don’t change their lifestyle choices.


My frustration with this argument is also three-fold:

  1. Each step is logically and empirically false, though there are enough data out there to cherry pick to make a fairly convincing story, and possibly persuade the casual reader.
  2. It’s extremely short-sighted, by only looking at how people behave right now, and not thinking about how people change their behavior in response to incentives.
  3. It’s extremely paternalistic and implies that most patients are unable to make medical decisions for themselves


When I was originally writing this post, I went on a long, tortuous diatribe about the nuances of rational ignorance, rational avoidance of screening tests due to the bus stop paradox and false positive paradox, and how economic incentives through coinsurance mitigate this and lower costs through price transparency and competition. I might come back to that line of reasoning in a future post, and I’ve written about economic incentives as a game changer elsewhere, so for now, I want to focus on my more emotional response to my classmate’s worldview: my disdain for paternalism.


In this regard, I have two questions for myself. First, why does paternalism like this make me so angry? Second, why do so many public health students practice this paternalism?

My Disdain for Paternalism

That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right. These are good reasons for remonstrating with him, or reasoning with him, or persuading him, or entreating him, but not for compelling him, or visiting him with any evil in case he do otherwise.

John Stuart Mill, On Liberty, Chapter 1


In the tradition of the Classical Liberal Economic thinkers, I am very averse to coercion. Furthermore, my fierce INTP independence and skepticism leads me to question others’ goals and I don’t take well to being told what to do (just ask my mother). I’m repulsed by the idea of someone legislating something a certain way because they believe I’m incapable of doing what’s in my own best interest. If they want to incentivize or encourage me towards my own best interest (e.g. a health insurance company giving discounts or reimbursements for gym membership and usage), I’m perfectly happy with that. But only if they start with the premise that I want to do the right thing and they’re just making it easier.


The notion of paternalism implies a lesser and a greater. In a Rawlsian Veil of Ignorance sort of way, I would never want to be the lesser, coerced by the greater: the intellectually inferior whom the greater believes cannot take care of myself. If I am in error, whether by behavior or by belief, Mill is right to encourage others to correct me and me to be corrected, but when the others view me as a lost cause and would try to push me in line with their vision, that’s where the theoretical version of me “behind the Veil” draws the line and pushes back.


Assuming an examined life, people know their needs and values better than anyone else. Assuming rationality, at least insofar as people never say “Hey, I’m going to do this thing that I know will make me net worse off” and then do that thing, people will attempt to live the best life they can, according to their definition and values. Therefore, it is hubris to believe that I know how you should live your life better than you do, but this is what paternalistic thinking embodies and acts on. Now, if I discover a logical flaw in how you are living your life (e.g. drinking energy drinks for energy instead of eating an apple), I may point out the error in your reasoning (the example is an admitted error in my own life), but persuasion must be my only tool to convince you; not coercion, and my reason for attempting to convince you can only be the belief that you don’t adequately understand the facts, not that you are incapable of understanding the facts. Any deviation from these guidelines deviates into paternalism, which by construction and implementation, even if not by definition, inevitably leads to coercion.


In the case of my classmate’s argument against cost sharing, his rationale was that people are incapable of following their own self-interest. If that’s the case, a physician is unlikely to convince them to not eat fried okra, and reducing cost sharing would make no difference. In that scenario, would my classmate take the next step and ban certain foods? The answer, as I found in a subsequent discussion, is apparently yes. Paternalism, with all the best intentions, removes freedom. By creating the mechanism to remove freedom, you create a mechanism that can be used with less-than-ideal intentions to remove freedom for the good of the leader, not the good of the governed.

The Draw of Paternalism

Besides not being paranoid like me, why would someone be drawn to Paternalism? The short answer, I believe, is that it’s expedient. Coming up with an effective “nudge” campaign of paternalistic libertarianism is hard and not always successful. It’s much easier to just be a benevolent dictator (at least it seems that way when you’re not actually a dictator). And it when a certain truth seems obvious to you, it’s easy to think that people who don’t see the obviousness of that truth are incompetent and in need of your correction.


You know why people don’t like Liberals? Because they lose. If Liberals are so fucking smart, how come they lose so goddamn always?

“Will McAvoy,” Newsroom


In the modern Liberal worldview (i.e. the Political Left in America; not economic liberalism) in general and the public health sphere specifically, the solution often seems like a tautology (minimum wage increases earnings and banning smoking makes people live longer), so anyone who disagrees or votes against this agenda must be voting against their own self-interest or not understand their own interests.


Furthermore, public health graduate students, and the Democratic party in general, are better educated than the majority of the population, and it seems like a very reasonable conclusion that someone with a graduate degree understands the interests of the high school drop-out better than the latter individual does herself. And in an objective sense, the more educated person is right. I, with my econ degree, understand the economics of minimum wage–its pros and cons–better than most of the minimum wage workers campaigning in favor of a minimum wage hike.


So why shouldn’t I tell them how it is, and if they don’t listen to reason, just push them in the right direction? I have to admit, it’s tempting.


To answer Aaron Sorkin’s question through news anchor Will McAvoy in Newsroom, “Why do liberals lose [all the time]?” I think a big part of the answer is that Liberals are presumptuous. They presume to know what people’s best interest is, how to get there, and often leave the individuals themselves out of the equation. This is certainly an issue in Public Health, where the community is almost always involved in the health needs assessments, sometimes involved in the prioritization, and rarely involved in the implementation (at least historically, this is changing slightly for the better). Even if Liberals are right from an objective, empirical standpoint about what policies are effective in improving the lives of individuals, the fact that the individuals themselves feel ignored or powerless in this equation goes a long way to explaining both ineffectual policies (feelings of powerlessness have a huge impact on health outcomes) and why Liberals lose “so goddamn always.”

Closing Remarks

Government action, including coercion, is justified and necessary to prevent individuals from harming others–directly or through externalities. Any other coercion, for an individual’s own good as perceived by the government actors (including lobbyists) is paternalism. In general, paternalism, however well-intentioned, is both immoral (at least from my view) and ineffective. I should note that I’m only talking about able-minded adults here, children and the intellectually disabled are a different discussion.


It’s immoral because it’s arrogant, easily blinded, and lazy. Instead of taking the time to convince mentally competent adults of the truth or falsehood of a belief, the paternalist treats that adult as a child, thereby demeaning her.  


It’s ineffective because benevolent dictators like Marcus Aurelius are still mortal, and Commodus still looms in the background. If you push too hard, the populists will revolt (Trump 2016) and/or someone worse will take the reigns of power and undo all that the paternalists have done, including whatever good may, in the short run, have come from their expedient solution.


Where are we to go then?

Paternalistic coercion is such a commonly used option because, unfortunately, it’s not always possible to persuade people of the truth, through rational arguments or otherwise.

Should we in public health keep trying, hope against hope, to convince people what their best interest really is? Do we leave them to their own means?

My Sisyphusian inclination is that the ethical option is to keep trying to persuade people through clever data visualizations and logical argument, but ethical is damn slow.

My Resolution: To Use Trello to Track Resolutions

I love productivity tools. I love getting things done (both the David Allen program and the concept), and productivity tools are useful to that end. I’m also just a nerd and really like playing with new toys, er… tools, and software packages for the novelty of different features and aesthetic beauty of different user interfaces.

Trello is one of those productivity tools that’s novel, really fun to use, and the click-and-drag UI is, in my opinion, kind of beautiful. It’s also super flexible, to the point of being almost too flexible and in adding features it somehow got the worst of several worlds. I enjoy using it, so I’ve tried to use it for several different projects and workflows, but I keep find it coming up short. However, I think I’ve finally landed on a good use case, so I wanted to share. But first, let’s look at what Trello is.

What are you?

Trello is a web-based project management tool that consists of Boards, Lists,and Cards. Within each card, you can track activity, make comments, and track checklists. You can share boards with individual users or within an Organization. The whole usetrello-board-list-cardr interface is click-and-drag and has lots of customizable colors, labels, etc. In short, Trello is an ultra-flexible tool that allows you to have a shared workspace with collaborators. It even has iOS and Android applications.


Failed Experiments

Like I said in my introduction, Trello is, in some ways, too flexible. Meaning it can do almost anything, but it can’t do everything well. Here are some things I’ve attempted to use Trello to manage, and have ultimately moved on to other, more robust tools.

Research Management Tool

I was working on a project for work that involved looking at physician and hospital reimbursement for telemedicine services. I tried tracking my research in Trello, which worked really well for the first 5-10 sources I was looking at. Each card was a paper, report, or website. The description was the citation information. Where applicable, I could attach a PDF to the card, and I could store my notes in the comment section.


However, the number of cards quickly got to the point where I had to scroll on my list to see everything. Now, I don’t mind scrolling, and for a quick project with only a handful of sources, this wouldn’t be too bad, but it’s just not scalable. On the other hand, Evernote is designed for exactly this kind of research repository. I can use tags to find relevant topics, but it’s also fully searchable, including OCR searching within an attached PDF (paid version only), which makes it far more scalable than Trello.

Task Management Tool

I even have a post on this blog about using Trello as a task management tool. I tried a couple different paradigms for this. First, I had a list for each project and a card for each task. Again, once you get about 5 projects, the side-scrolling defeats the notion of being able to immediately see what’s on my plate. Then I tried having a card for each project, a checklist on that card for each task, and converting checklists to cards to move to a next action list. In principle this was fine, but it ended up being too many clicks to just complete a task and look at the subsequent one for that action. I read a few other methodologies/workflows online, but they all run into the same issue: it’s just not scalable enough to handle 50+ actions and still be “at a glance.” Just like in the research binder, more specialized products–Omnifocus, Wunderlist, Todoist, even Evernote–are just downright more effective task management applications than Trello.

Project Master List

This would probably be a really good use case if I were a project director overseeing multiple project managers who each had multiple projects. Trello’s click-and-drag card view makes it very easy to see what’s where (provided there are two or three dozen items or fewer), make updates as needed, and keep on top of things. That said, I’m just one person, and while I do have collaborators on several projects and oversee some people at work, I don’t have this need. I track my current projects on my task management tool, Todoist, and keeping track of all my current projects in Trello and Todoist is just redundant.

Promising Boards

Despite these failed use cases, I really love the Trello interface and enjoy using the tool… if only I could find something I could use it for where it actually performs better than Evernote or Todoist. Here are a few use cases I have found that seem to be going well.

Crisis Management!

I work with electronic medical records, part of which includes supporting go-lives. A EMR go-live was described to me during my college internship, long before I had any idea would be working where I am now, as “We plan, configure the software, and send everyone to training for months. Then we go live and all hell breaks loose and shit hits the fan.” I like to think that less shit is hitting the fan with go-lives these days (especially the ones I help to implement), but that’s still a pretty accurate description. When in a go-live command center, total control task management apps like Todoist aren’t at their acme. I need something that’s focused on this customer and this customer only, and I need something I can quickly add to, update, and get distracted from, without losing anything. Trello fits that bill perfectly. Each new issue gets a card, and the details and progress get tracked in comments and checklists. At this point, it is becoming a task management app, but since I should never have more than a dozen things on my plate at any given time, that’s okay. In this case, not containing everything is what I want.

For my Go-Live board, I have 3 lists for storing tasks–one list for each task type. This might be excessive, but it helps me categorize my work based on energy and mood. Then I have “Delegated” tasks that I still need to keep track of, “Done today” for tasks to report at any daily huddle calls, and “Done this week” for any weekly wrap-up calls or issue summary reports I need to put together. Click-and-drag makes it easy to move each issue card through the process, and card views make it easy to keep everything in front of me, without the distraction of email or other projects in Todoist (which still contains things I want to do today after work). The only drawback is that Trello is on the cloud, so I need to be careful to never include unencrypted documents, patient information, or screenshots (which I rarely create anyways, but I do get these things sent to me quite regularly).

Shared Lists

I keep my grocery list, which my wife also has access to, on Trello. Either of us can updated it throughout the week, and when it comes time to buy food (or we just find ourselves at one of the two places we shop) we can both check the list. I have one list for each of the two grocery stores we go to, and one card is one item to buy. After that item is in the shopping cart, the card gets archived. I also have a list to store recipes (I do a lot of the cooking, so yeah, this is a pretty short list that’s still manageable in Trello), which I can quickly drag over to the “weekly meals” list to create a meal plan each week. Evernote and Todoist could do most of this as well, but the UI of Trello is easier to use for this narrow case, and the collaboration is much easier/better in Trello.

Collaborative Projects

I don’t have many collaborative projects, but on the couple that I do have, when I can get my collaborators to use it, Trello makes a great tool for organizing ideas. Especially for more creative projects, Trello is an easy repository for new ideas, that can be added to by others and moved around easily. When my friend and I were working on a game (it still hasn’t gone anywhere), we were able to share ideas as we had them, and discuss these ideas in relation to other ideas while we were together or just sitting on the couch looking at our phones.

Someday Maybe Lists

I mentioned before that Trello makes a good project tracking tool, but not a good task management tool. This includes projects that aren’t being double documented in your task management tool: your someday maybe project list. When I see a cool class on Coursera or get an inkling for a new skill, I could add it to Evernote, but it’s liable to get lost in my several hundred notes. Instead, I add it to my Someday Maybe Board in Trello. I can make notes on each card about why I think this would be cool. Even add a checklist to identify how much work it would actually be, and include some resources. Because Someday Maybe lists should be reviewed, Trello offers a clean way to review so it doesn’t get so piled up that it’s impossible to manage. Because they shouldn’t be reviewed that often, I don’t have to worry about things get double-documented.

Goal Tracking

Last, and possibly most importantly this time of year, Trello is great for goal tracking and visioning. It’s the end of the year, so lots of people are resolving to spend less, save more, quit smoking, and go to the gym more often in 2017 (though they’ve also resolved to keep these resolutions longer than those same resolutions last year). For for those of us who would like to achieve these goals and keep track of it, I’m finding that Trello makes a very easy, effective platform for this.

Specific Time-Frame

I have a board for 2017 Goals. As it currently stands, the whole board is just one list with the goals themselves. Each card is a broad goal area, but has more SMART (Specific, Measurable, Actionable, Realistic, Time-Based) criteria in the card description.


As I make progress towards each goal I can mark that as a comment, or by adding (and checking) a checklist item. For example, each blog post I write will get a link on my Blogging goal card.

Life Time Goals

My wife and I have a shared board where we track some of our lifetime goals. We use labels to identify which one of us has this goal in mind and our ideal timeframe (broken into <5 years, 5-10 years, 10+ years from now). Again, Trello makes this easier than Evernote to review and update when we need to, but it’s not taking up space in our Todoist lists.



Like the Someday Maybe list, I could put these goals on Evernote, but the visual element of being able to move cards around in Trello makes these goals easier organize relative to each other and see how they fit together.


I’m starting this year with a resolution to use Trello to track my resolutions. If it works, I’ll end the year with a blog post about how it went (maybe, or at least I’ll end the year with a blog post).


Trello’s strengths are it’s easy-to-use click-and-drag way of visualizing data. Evernote is more searchable, more scalable, and can store more. Todoist (or any number of other task lists) is cleaner and more streamlined for checking things off the list. But Trello is collaborative and can track things over time that you don’t want in a daily task list but are more actionable than most things I put in Evernote (I know many people use evernote as a task management app as well, but I was never able to get that to work for me). The key is to make sure that the scope for Trello and each board is somewhat limited: too many boards and too many lists can get overwhelming quickly. As long as the scope is controlled, Trello lets you see multiple groups of actions together, which makes it a great goal tracking tool or collaborative project tracking tool.

Medical Services Inflation

I’ve heard it mentioned several times on NPR and other news outlets that growth in healthcare spending has slowed since the Patient Protection and Affordable Care Act (commonly called “Obamacare”; hereafter referred to as the ACA). By cherry picking this number of that measurement, this is a believable claim[1], and it is true that inflation for health services is at an all-time low (Figure 1). But I want to focus on this measurement in context of inflation as a whole.

Since the end of World War II, the price inflation of medical services and durable medical equipment has been fairly consistently one and a half to two times the overall inflation rate (figure 2). Wage increases, when they happen, tend to follow cost-of-living adjustments or otherwise be tied to the core inflation rate. So when the prices of medical services rise faster than the overall price level, they consume a higher proportion of consumers’ income. This description is a simplification, to be sure, and it doesn’t always describe day-to-day, year-to-year negotiations, but it does describe the fundamentals of long-term wages and spending and year-to-year trends within the macroeconomy.


In Figure 1, we can see some salient historical features:

  • For (almost) this whole time period, medical prices have been rising faster than prices in the rest of the economy.
  • Bad Macroeconomic policies of the 1970s involving abusing the Phillips Curve[2] and wage and price controls resulted in uncharacteristically high inflation during that time period
  • We can see an anomaly in the early 1980s with the Volcker Recession[3] where Medical inflation was lower than core inflation, and inflation as a whole fell off a cliff in the early 1980s, and continued to fall steadily while Greenspan was Fed Chair through the 1990s and early 2000s.

More into the weeds, the Bureau of Labor Statistics is responsible for tracking the price level, using a pre-defined bundle of goods and services, including medical services. The Consumer Price Index is a standardized price level, where 100 is fixed in a base year; inflation is the percent increase in the price level from one period to the next. There are problems with the CPI, but it’s still a widely-used tool. My analysis compares the ratio of Core CPI–which excludes the two most volatile sectors, food and energy, from the overall inflation measure–to inflation of medical goods and services. This ratio shows how much faster is inflation in the medical sector devaluing our incomes relative to the rest of the economy. I pulled data from the Bureau of Labor Statistics using their R API[4]. Year-over-year inflation was calculated by the percent increase in price level from December to December. Source code for analysis can be found at www.github.com/dannhek/inflation.


Figure 2 shows pretty clearly that the ratio between medical inflation and core inflation is still within normal bounds (within 1.5 standard deviations). The ACA has not really changed the root of the problem at all. This isn’t an indictment of the ACA as a whole, since the ACA wasn’t targeted at fixing the fundamentals[5]. But it does show pretty clearly that the ACA does not change the incentives to raise prices with little to no market counterforce, so prices in healthcare are still rising faster than prices in the rest of the economy, ACA notwithstanding.



[1] http://www.factcheck.org/2014/02/aca-impact-on-per-capita-cost-of-health-care/
[2] https://en.wikipedia.org/wiki/Phillips_curve
[3] https://en.wikipedia.org/wiki/Early_1980s_recession_in_the_United_States
[4] https://cran.r-project.org/web/packages/blsAPI/blsAPI.pdf
[5] The fact that it wasn’t targeted at fixing the fundamentals is something of an indictment, but that’s another blog post.

On Growth: Economic Growth as Dynamism

Begging the Question

Why does economic growth matter? Isn’t growth just fueled by mindless consumerism? Are capitalism and minimalism like oil and water, or is growth still a good thing, the consumerism that can fuel it notwithstanding?

Economist John Cochrane of the Stanford University’s Hoover Institution has written a very illuminating article that is available for free on his blog. Cochrane’s essay is empirical and well done, but it doesn’t quite answer the questions I’m asking. Throughout his essay, Cochrane makes the tacit assumption that more income is inherently better. This isn’t a particularly difficult assumption to swallow, but it’s worth examining.

The average American is more than three times better off than his or her counterpart in 1950. Real GDP per person has risen from $16,000 in 1952 to over $50,000 today, both measured in 2009 dollars. Many pundits seem to remember the 1950s fondly, but $16,000 per person is a lot less than $50,000! […] If the US economy had grown at 2% rather than 3.5% since 1950, income per person by 2000 would have been $23,000 not $50,000. That’s a huge difference.

Cochrane goes on to examine productivity, regulation, and pro-growth policies. It’s a good piece; written in very accessible, non-technical language that everyone should read. But let’s examine that basic premise: median incomes of $50,000 are better than median incomes of $23,000.

It may seem obvious that more income is preferable to less, which is (presumably) why Cochrane doesn’t feel the need to justify this sentiment beyond it being a tautology. But let’s take a few steps back. Is it really a given?

Let’s look at some instances where growth isn’t necessarily a good thing and may be viewed negatively. First, when businesses talk of growth, we often balk. Particularly when that business is, say, an insurance company complaining that they “only” had 15% revenue growth and therefore can’t continue in the ACA Insurance Exchanges. We laugh as Coke and Pepsi continue struggling for a couple tenths of a percentage point more of soda market share, and we don’t feel any sympathy for their concerns about bottled water being just as profitable as soda, but having far less brand loyalty. We think back to the toll the industrial revolution took–and is still taking–on the environment and have to ask, “was it worth it?”

As I’ll argue below, yes I think was worth it, but not in the way most people think. It’s not about the raw income. It’s about the economic dynamism–the number of economic transactions–that truly makes growth such a good thing. This is not to say growth doesn’t have some negative consequences, but growth increases dynamism, which is what makes life better.

Objections to Growth

There are many potential objections to economic growth for growth’s sake. For the sake of time, I’ll just focus on three.  

Environmental Costs

Because of the advanced rate of growth beginning with the industrial revolution, we have unleashed tons of mols of greenhouse gases into the atmosphere. Climate change isindubitably man-made as a result of the technological progress, the resultant economic growth made in manufacturing and mass production, and the consequent exponential growth in the population–humans are, after all, biologic machines that convert Oxygen into Carbon Dioxide.


It was growth of a sedentary, agricultural society and animal domestication, not the status quo of hunter-gatherer society, that introduced all manner of infectious diseases, both for humans and the rest of the ecosystem. Increased demand for building materials has, throughout history, been met by increased supply of lumber, resulting in deforestation everywhere from the ancient Fertile Crescent (modern day Iraqi desert) to heaven knows how many other localities. Finally, it was the pursuit of growth–rapid growth through controlling the beaver pelt trade–that incited the French and Indian War in the colonies. The list could go on and on, but it’s clear that Economic growth and technological advancement are not without trade-offs, including with regard to the environment–from local ecology to global climate.

What actually is growing?

What has growth brought us? Growth has brought us more medical technology, but also more medical expenses and bankruptcies. We’ve created a wealth of knowledge and ingenuity, but more than that, we’ve created–and purchased–an incredible amount of stuff. Self-storage is the industry predicated on paying more to cover up past mistakes of over-indulgence, and it’s growing dramatically. In the immortal words of Tyler Durden, for many, economic growth and the globalism that came alongside the great moderation means that we are “working jobs we hate, so we can buy shit we don’t need.” Growth has increased our capacity to build and create, and our spending power has commensurately increased. However, our needs–our true human nature and biological needs–have changed remarkably little. Thus, the growth we’ve experienced, the extra $34,000/year, is being predominantly spent on cost increases of some essentials–housing and healthcare–and the discretionary on the superfluous, the non-essential, the superficial, and keeping up with the Jones’s.

Unequal distribution

“A rising tide lifts all ships.” At least that’s the rhetoric that’s used. And to a degree, it’s absolutely true: innovation is knowledge, and knowledge is a non-rival good. The rising economic tide increases innovation and the body of knowledge available to society. However, economically, some ships are lifted more than others. While the income distribution of households is becoming (relatively) flatter, the income distribution of individuals is more skewed. Inequality in-and-of itself is not, in my view, inherently a bad thing; I want to live in a world where Bill Gates and Sergey Brin are filthy rich after creating products that improve the lives of billions. However, if growth benefits primarily the haves and the have-nots only see the downsides, is growth such a good thing for the majority? What good is growth, then, if it benefits many, but leaves many more desiring and building up credit card debt in pursuit of more, because they can, even when they don’t need to?

Why Growth is Still a Net Positive

Economic growth is often thought as a unidirectional thing. After all, we see GDP growing over time on an x-y scatter plot.

(Source: FRED

But what composes that GDP can change dramatically over time. Preferences shift, societal needs change, and the pie-charts that show the composition of the workforce and goods and services can change dramatically. Growth can be multi-directional and multi-faceted.

For example, what is a normal good? By economic definition, a normal good is one where the demand for that good increases with income. The classic example for normal goods is high-quality foods: as we make more money, we want more of the meat half of the meat and potatoes diet. This is not to be confused with a luxury good, where quantity demanded increases with price–as we see with wine, fashion, and yachts. As Cochrane points out, some non-conventional normal goods include things like civil rights, environmental conservation, and self-determination.

Environmentalism as a Normal Good

Next time you see someone working hard at minimum wage, ask (or just think about, for the shyer among you) if s/he buys energy-efficient lightbulbs or just the cheapest ones available. If they’ve done the math (and most people on that tight a budget have), they’ll probably tell you that they buy whatever’s the cheapest, which is probably not the energy efficient bulbs. When you’re living at the poverty line, you’re not particularly interested in the environmental consequences of your actions; your interests focus on putting food on the table. Concern with the environment and expensive products that are more “environmentally friendly” are normal goods: demand increases with income.

By all accounts, the Industrial Revolution was horrendous for the environment. However that fact does not attenuate emerging nations’ desire to reproduce the same thing in the least. It’s worth noting that John Muir and the Sierra Club didn’t emerge until after the Industrial Revolution had done its work: both in terms of economic growth and damage to the environment. Caring for future generations will, by human nature, always be secondary to caring for the humans alive today, in whatever form that takes. Only when the humans alive today are well-taken care of, will the focus really begin shift to future generations, because caring for nature and future generations is a normal good.

Anti-Consumerism as a Normal Good

What about Palahniuk’s indictment that growth is just fueling that which truly does not matter? As previously posted on this blog, I have a general sympathy of sentiments with the Minimalist philosophy/movement. Given that consumption spending makes up approximately 70% of GDP, minimalism would, on the surface, be diametrically opposed to growth for growth’s sake. But when you dig deeper, it’s not.

What is consumption? When you look at the formula for GDP, we have this monolithic ‘C’ for consumption in Y = C + I + G + NX, where Y is (nominal) GDP, C is consumption, I is investment (which includes corporate capital outlays and residential mortgages), G is government spending (not including transfer payments), and NX is net exports (exports minus imports).

Minimalism is about living a meaningful life. The focus of minimalist thinkers like Joshua Becker, Joshua Fields Millburn, Ryan Nicodemus, Leo Babauta, and others is that mindless consumption of stuff gets in the way of self-actualization. And this is (in my experience) true. However, part of this claim is that experiences are more important than stuff, which is also true–when is the last time you thought about Christmas traditions with your family and when’s the last time you thought about that high-school yearbook you’re inexplicably holding onto? So what about that ‘C’ part of GDP? Consumer purchases are built up of two things: Goods (stuff) and Services (experiences).

Suppose that tomorrow, everyone were to become a diehard minimalist. The Goods part of that identity would fall, but the services (read: experiences) would compensate. Minimalism isn’t just about cutting spending; it’s opposed to unnecessary spending on “shit we don’t need” and replacing that with life experiences–preferably free, but more importantly meaningful.

From the perspective of economic growth, this is great. Growth is about productivity increases, and Minimalism makes people happier, and therefore more productive (in the general sense, not necessarily in the corporate human resources sense). More to the point, when looking at growth in Y (GDP), Investment and NX can, and should, rise, holding C and G constant. So growth does not necessarily–even if it historically has–increase consumption. The more important pieces is I: Investment. Less frivolous spending means more saving. This means more  money available for investment, lower interest rates for firms looking to expand (particularly expand their non-frivolous divisions), and higher standards of living in retirement (including more consumption). According to the Solow Growth model, increased savings will, indeed, cause a short dip in GDP, but increases growth and growth potential in the long run.

First Principles: Adam Smith

Investment includes, among other things, increasing employment. Going back to first principles in Book 1 of An Inquiry into the Nature and Causes of the Wealth of Nations, when a resource is scarce (i.e. Demand is higher than supply) employers are willing to pay more for that resource, including labor. When the economy is in the upswing of the business cycle, employees (excluding public employees) get raises. More importantly, when the economy is growing, employers compete for employees, meaning that workers can change jobs (relatively) easily, and find a job that helps them to thrive–reaching self-actualization by challenging and growing them as a person. Workers can find the “right fit” of a job much more easily as a result of growth.

As a corollary, Research and Development is a normal good, and when firms are growing they are more likely to boost investment in this division. Similarly, startups–i.e. the the drivers of innovation–are able to get funding and capital during times of growth, much more so than during the trough of the business cycle. Moreover, startups are more successful–and therefore more influential–during times of growth. When the economy is growing, we’re learning more. We’re developing new technologies, new products, new processes, and increasing the pool of societal knowledge–even if, for a time, some of that knowledge is proprietary; knowledge never stays proprietary forever.


In short, growth means dynamism. The Oxford English Dictionary defines dynamism as the quality of being characterized by vigorous activity and progress. Economic activity really just means transactions or interactions. When the  economy is growing quickly, the number of potential interactions increases:

  • number of job openings, and the number of applicants willing to apply
  • Venture capital availability and appetite for risk
  • New products and services, and new consumers

When the number of interactions increases, the potential for mutually beneficial or euvoluntary exchanges necessarily increases.

When these exchanges are in goods and services, we benefit a little–new services, experiences, and useful tools. When these exchanges are in the labor market, we benefit a lot. People who are happy in their jobs are more productive, meaning increasing future growth. People who are happy in their jobs are (generally) happier in their lives overall. The extra monetary income we see from growth is nice, but money isn’t everything. More important than real income is the availability of opportunities for personal growth, advancement, thriving, virtue, and self-actualization. You can’t buy any of these things, no matter how high your income is, but they are still normal goods, and a higher income allows you to shift focus from merely surviving to truly living.

Dynamism isn’t just about increased sales or incomes, it’s about increased opportunities for change. Workers stuck in a dead-end job have more opportunities to find a new job when the economy is growing. New products emerge. Some of these are superfluous and engender the kind of mindless spending minimalists hate, but some legitimately make people’s lives better off–for example, digital pills to monitor drug regimen compliance, side-effects, absorption, and effectiveness. Dynamism is about new ideas bouncing off each other, about new people coming into contact with those ideas, and about competition doing what competition does best: forcing everyone to implement new technologies to provide better goods/services at a lower price. Economic dynamism is how we go from the abstract economics to the concrete improvements to people’s lives. And growth is what makes dynamism possible.

Concluding Remarks

Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself.

Ultimately, the goal of Economic advancement is human thriving. Economic growth and the dynamism it creates is the most effective way of increasing human thriving sustainably. Yes, it can have downsides. But ironically enough, more growth can also be the cure for downsides from previous growth. Admittedly, this is a little like saying that more alcohol can cure a hangover; it sounds a little insane, but enter the Bloody Mary. Also, would we ever have widespread solar and biodiesel energy without innovation? Economic growth has afforded us the opulence to care about the environment, the well-being of the poor in other nations, and other things that were far from the minds of our forebearers. The benefits of growth are agnostic to where that growth comes from. If we continue to “grow” by buying “shit we don’t need,” then growth will remain lackluster, and even if it doesn’t, we will remain lackluster. If, on the other hand, we grow the economy by growing ourselves, making ourselves more productive–and more interesting–then that will have very different outcome. Economic Dynamism comes from growth, and is what allows us to reinvent–or just tweak–ourselves to become happier, healthier, wealthier; without adversely affecting our fellow human.



I originally wrote this with footnotes, but they didn’t copy over from Google Docs to WordPress. I may add them back in later, but here’s the list:

OmniFocus 2 vs. Todoist vs. Outlook


, , , ,

Like all entries on this blog, it will come as no surprise to my readers (who, according to wordpress analytics, virtually all know me) to hear that I have a fond curiosity for dabbling and experimenting with task management tools. Recently, I’ve been experimenting with Todoist as an alternative to Outlook as a task management system. To my surprise, I actually found it even more useful than I was expecting, and decided to do a head-to-head against OmniFocus, my current GTD weapon of choice. So how do these three systems work, and how do they stack up?

Background and Requirements for my GTD Trusted System

I’m currently writing this post on a Mac, but my work laptop is a windows computer, as is my personal laptop (although I would love an Airbook, I just can’t justify the price, and the windows laptop made sense for my computing needs at the time). One of the core tenants of GTD is that everything is in one system. When I bought OmniFocus, I wasn’t that concerned with this, since I wanted to keep my work life and home life separate. However, as I did more work (projects done for personal edification) split between my (PC) laptop and (Mac) desktop, my “system” started to fracture. I was using OmniFocus on my phone and desktop, outlook on my work laptop (which was, as expected, almost exclusively for work), and trello to piece the rest together, including on my iPad (see pricing section below) and personal laptop. It worked, but it was fragmented.

When I decided to try out Todoist, the primary driver was that I needed a better way to track projects, particularly small projects. Trello works really well (in my opinion) as a master project list and it even does a passable job as a project management tool for large projects. However, it gets very cluttered very quickly when you try to use it as a task management tool (i.e. all the unique next actions), especially when you include small projects of only 3-5 actions. Outlook is even worse. Outlook’s task list is useful only in that you can quickly turn emails into tasks using quick actions. However, Outlook completely lacks the concept of a project, and the only way of grouping tasks is using categories. Since I often have a lot of different projects going at once, this is completely untenable for me. What I wanted was a nice way to have sub-actions within subprojects, like I can do in OmniFocus on my Mac. Ideally, this should be available no matter what: home, work, in transit, whatever. Enter Todoist.

 How did Todoist stack up to the other two?

User Interface and Look-and-Feel


Being part of the Microsoft Office Suite, Outlook is familiar. Hell, it’s where I spend at least 2-4 hours every work day, so the email portion (and subsequent task portion) is clean, familiar, and very customizable. You can change the columns and information shown for each next action. Besides the normal MS ribbon at the top, the overall space is very clean and you can very easily switch between different views. Of these three tools, Outlook is definitely the most customizable.


Somewhat to my dismay when I first bought it, OmniFocus for Desktop feels cluttered. In an effort to highlight/bold information that really matters right now, they put other information (like project, context, etc.) in a light gray. Far from actually honing the eye in, this looks like a block of text. Also, the fact that they consider “due soon” as “within 24 hours” means that things that won’t be actionable until tomorrow show up now as yellow, even though they aren’t actionable right now.


Where it’s most cluttered/cramped though is the side panel that shows the details for each action. First of all, I have to scroll to see the notes section. Second of all, the note is very cramped, inconvenient, and can’t really contain any useful attachments. The due dates and whatnot are very useful, but not very useable and are very click-heavy.

On the positive side, there are a good number of shortcuts for switching perspectives, quick entry, and other tools to make using OmniFocus on the Desktop and on the iPhone very easy to use.


Of these tools, Todoist has hands-down the cleanest display. Labels only show when needed/specified (unlike OmniFocus’s contexts, which are omnipresent). The comments section is large and easily accommodates some stream-of-consciousness work and updates. 

Like OmniFocus, it’s got a fairly click-heavy UI, but Todoist does allow more keyboard use when assigning due dates to new actions, which can save having to move your hand back and forth between keyboard and mouse with each task. 

Where OmniFocus’s UI excels at displaying the same information in different ways, Todoist’s UI excels in its simplicity and the ability to jump between different projects and display different information in the same way.


Features, Kludges, and Bugs


As previously mentioned, you can create tasks out of emails using quick actions. That’s really the only feature worth mentioning, because Outlook Tasks is really just a to-do list; it’s not a fully-fledged task management, much less project management tool. Quick actions, while useful, are a bit buggy in that the order of operations is not always respected, meaning that my categories don’t always take automatically


Of these three tools, OmniFocus seems to have the most features, and some of the most powerful features. Most notably, OmniFocus has the ability to defer tasks and has a robust concept of sequential tasks. In this example, Scan Wedding Cards is the next action I need to take for this sub-project, and the later actions, which are dependent on the first action, are listed as “remaining” but not “available” and can easily be displayed or hidden depending on the perspective. 

Not only is this useful for projects with long time horizons, but it’s extremely useful for recurring actions that only happen 2 or 3 times a year, like scheduling a dentist appointment or changing the furnace air filter. Ideally, I want to forget that I have to do these things at all until the time comes to do them. If I accidentally think about these things, I can rest assured that my task management system has it covered. This “defer” feature can somewhat be replicated in other systems, but it’s not native the same way it is in OmniFocus.

The other feature I really love with OmniFocus is the weekly review. On a weekly basis, OmniFocus (for desktop only) allows you to quickly look at every remaining project, regardless of status, to make sure that the status (active, waiting, or on hold) is appropriate and the due dates are well defined.

On the negative side, OmniFocus desktop has a really annoying bug where if you complete a recurring task, it will automatically generate the next task, but if you un-complete and then recomplete the original action, it will re-copy the recurring action for tomorrow, creating a duplicate recurring series. Also, OmniFocus recurrences aren’t smart enough to “jump ahead” if you miss a day. Also, as mentioned, the notes section feels like an afterthought and is very limited in tools. 


In many ways, I think Todoist’s biggest features are the UI and its ubiquity. Todoist is available on iOS, Android, OS X, Windows, and online. It also has plugins for CloudMagic (my email tool of choice for my apple devices) and Outlook (my mandated tool at work). The UI makes entering dates exceptionally easy compared to OmniFocus; for example I can enter things like “every weekday” or “next Thursday” and it will set the date appropriately.

The other particularly cool/unique feature of Todoist is the Filters functionality (premium only), which, if you’re reasonably comfortable with Boolean logic and are willing to put in the time setting them up, can replicate many (though not all) of the features in OmniFocus that Todoist lacks (like the defer feature).

Organization and Structure


It’s a barely glorified to-do list. Tasks can be grouped by priority, due date, created date, or category. However, you can’t layer subtrasks within other tasks and you can’t group by combinations of categories. 


Folders can contain projects, which can contain tasks, including sub-tasks, which may have their own sub-tasks. I love this level of nesting, since my brain tends to think in outline form, but being a mediocre developer, I don’t like it when my subroutines start getting more than 4 levels deep. Fortunately, OmniFocus can be configured to mark a sub-project complete when all the tasks within that sub-project are completed. I know it only saves a single click, but this is very nice.


Todoist is similar to OmniFocus with the multiple layers of projects, but instead of folders they have parent projects. Again, the nesting for the projects and the actions within the projects can, in theory, go on for a while, but I think 4 or 5 layers is probably the maximum you should ever do. In this screenshot, we just show the “projects” and the parent projects. We can also have tasks and subtasks in the main task window. 


Finally, there’s pricing. Outlook comes as part of a package with the ubiquitous Microsoft Office, so let’s call that more or less free. Todoist has a free version that may work quite well for many people, but people with multiple roles or who want to compartmentalize work, home life, and personal edification would do well to purchase a premium subscription and use filters.

On the whole, OmniFocus and Todoist are fairly similarly priced:

  • OmniFocus is priced on a software-as-a-product model with a different price per platform
    • iPhone: $20
    • iPad: $30
    • OS X: $40 for the basic, $30 for the student version, and $70 for the premium version
  • Todoist is priced on a subscription model, at about $30 per year and is available on all platforms (including online) after that.

So assuming OmniFocus releases a new version ever 2-3 years and you purchase the basic model on all platforms, both will run you just under $100 over 3 years. OmniFocus can be a bit more expensive, especially for the professional/premium version, but again, it does have the richest features.



As mentioned at the beginning, I have computers on multiple platforms, and I really want to consolidate my entire list of next actions—personal and professional—in the same place. To do that, I’ve chosen Todoist. Todoist has a nice outlook plugin, which makes it integrate seamlessly with my work email (which was the only advantage of Outlook tasks in the first place), and it’s available on my apple and PC devices. I’ve got my four different goal areas (think somewhere between 20 and 30 thousand feet in the GTD Horizons of focus model) listed as parent projects, with projects (10 thousand feet in the GTD model) delineated as appropriate underneath the parent project. Todoist has a built in view for “Today” and “Next 7 Days,” which are useful starting places, but I’ve created separate filters based on parent project (so far broken up broadly to “work” vs. “non-work”) to display only the sphere of tasks I care about right now. 

We’ll see how this works out, but for now, I’m quite happy with my switch, particularly if I can come up with a viable way of “deferring” tasks like I could in OmniFocus (current strategy is to just dump them in a “long term recurring” project, though that is, admittedly, not ideal. Others have used a combination of filters and labels, but since labels don’t get automatically added or removed, this would require some complicated filters that I just haven’t gotten around to caring enough to write.

Let me know what you think. And with this, I’m going to cross off my next action in Todoist.

(The Blog Posts project lives under the “Personal Edification” Parent Project in my system.)

Where is the Middle Class Going?

Background and Data Source

We’ve heard for quite some time that the middle class is in crisis and shrinking. But what to do the data say? Is the middle class in crisis? Where is the middle class going? What does that crisis look like in the data?

Data for this exploration comes from the Current Population Survey (CPS) Annual Social and Economic Supplement (ASEC). The CPS ASEC is a longitudinal survey of 50,000 households conducted by the US Census Bureau1. Despite the relatively small sample size (<1.5% of the population), this dataset is regularly used for income analysis and other demographic trends. Data were downloaded and pulled into R using code from All Survey Data Free from Anthony Damico2.

Data Scrubbing

Data were processed using the same steps as Pew Research Center’s 2015 report, America’s Middle Class is Losing Ground6. First, CPS ASEC data for the years 2000, 2005, 2010, 2015 were downloaded into MonetDB. Second, Total Income3, was adjusted for inflation to 2015 USD using the getSymbols function from the quantmod package4 to get Consumer Price Index (CPI) data from the Federal Reserve of St. Louis.

Finally, household income was adjusted for household size. Intuitively, we know that when two people live together, their household income is the sum of each individual’s earnings. However, many expenses (most notably housing and utilities) are shared, so a household of two does not have double the expenses of a household of one. To adjust for this fact, the standard procedure is to divide total household income by the square root of household size5. In other words, a two-person household is assumed to have 1.41 times, rather than 2 times, the expenses as a one-person household.

After income was standardized across all households in the dataset, income class was calculated using the definition used by Pew Research Center6: middle income is 2/3 to 2 times the median, adjusted income. Anying above two times median income is considered “upper” income, and anything below two-thirds of median income is “lower” income. This is distinct from lower, middle, and upper class, which has a wealth component in addition to income as well as connotations of lifestyle6. This is still a relatively crude measure since it does not account for cost of living, but it is useful for a broad-strokes analysis.

Exploratory Analysis

First, like Pew6, I found that the middle class is shrinking, at least in terms of people living in each income class.


These proportions are slightly different than those found by Pew6, but they show the same general trend and are therefore close enough for my purposes. My larger interest is to look more in detail at how the distribution in income is changing. We can see that income is, as it always is, heavily skewed to the right, but the distributions are not identical year-to-year.


Difference in Density Analysis

Theoretical Construction and Example

The question I want to answer is where the middle class is going. To some degree, this is answered by the above graphs showing the proportions of adults living in each income category, but I’m asking a further question: which incomes levels are more–and less–prevelent than they used to be. Conceptually, I want to zoom in to see the gaps between the density curves curves. Since this is something of an unconventional way of visualizing data, let’s start with a proof-of-concept example. First, we will take random samples from a uniform and a Gaussian distribution and graph the density functions of these two samples on top of each other.

set.seed(100) #Set the seed 
#Get two random samples
#Convert these random samples to density distributions
dist1  <-density(sample1$x,from=0,to=2)      
dist2  <-density(sample2$x,from=0,to=2)
#Put these density functions into a dataframe
df1    <-data.frame(x=dist1$x,y=dist1$y)
df2    <-data.frame(x=dist2$x,y=dist2$y)

#Plot these density curves
ggplot() +
     scale_x_continuous(limits=c(0,2)) +
     geom_line(data=df1,aes(x,y),size=1.4,colour="blue")  +
     geom_line(data=df2,aes(x,y),size=1.4,colour="green") +


As expected, we can see the sampling from the uniform distribution (blue) is more densely populated at the ends of the range and less prevalent around the mean. We can quantify these differences by looking at the differences in the density curves. For example, at x = 1 the normal density curve (green) is 1.214 and the blue curve is 0.57 making green 2.128 times as dense as the blue population at x = 1.

To visualize this and see these differences in clearer relief, we can create a difference-in-density curve (no longer a density curve, because the difference can be negative). When we graph this difference curve, we can highlight the sign of the difference in density to see which population is higher or lower, and easily visualize the magnitude of these differences. As this is a comparison, we must have a base population and a comparison population where the resultant density curve is density(base)-density(comparison). In this case, blue (uniform distrubtion) will be our base population and green (normal distribution) is the comparison population.

#Build a Dataframe with the difference in densities
df3    <-data.frame(x=dist1$x,y=dist1$y-dist2$y)
#Graph it
ggplot() +
     scale_x_continuous(limits=c(0,2)) +
     geom_line(data=df3,aes(x,y),size=1.4,colour="black")   +


Now, we can see not just where in our x range the blue population is more prominent than the green population, but also the magnitude of these differences, the latter of which is harder to see when the two density curves are merely juxtaposed. As we’ll see later, we can calculate the area under the curve for different sections to quantify the relative magnitude of each difference between the base and comparison densities.

Difference in Density in Income

Let’s apply this same methodology of differences in density to income distributions over time. Since this methodology necessarily requires a base year and can only compare two distributions at a time, we will use 2015 as our comparison year, and look at where household income has increased/decreased in relative to the base years of 2000 and 2010.


As expected, the proportion of middle income households is smaller (read: negative relative density) in 2015 than in 2010 or 2000. But where are those households going? As seen by the green on the far left, we can see more households living with $0 or ultra-low income. But the $100,000 to $200,000 range is far more common in 2015 than either base year, indicating that we also have more households in the upper income echelons. Calculating the areas under these curves, we can compare the size of the shifts to upper and lower echelons.

Comparison of relative increases to lower and upper echelons from base year to 2015
baseyear lowerGain upperGain timesGain
2000 1.3e-06 1.51e-05 11.737
2010 6.3e-06 7.90e-06 1.258

lowerGain and upperGain columns are the areas under the positive portion of the difference in density curves where income is less than $50K (lower) or between $50K and $200K (upper). timesGain is the ratio between upper echelon increases and lower echelon increases.

Economic Interpretation and Explanation

In the final column of the above table, we see that since 2010, 25% more households moved to upper income than moved into lower income. Note that this is a cross-sectional analysis, so we cannot comment directly on which households moved or other characteristics about those moves. There is some evidence that part of the gains to the $100,000 to $200,000 income levels came not from the middle income group, but from declines in the highest income earners. That said, particularly when we look at the last 15 years (base year 2000), we can see the relative shifts are far more (11 times) in favor of large opulence than poverty.

What do these graphs tell us? First, I think the hyperbolic notion that those evil, billionaire CEOs are taking all the money away from the middle class is solidly debunked by the substantial growth in the proportion of upper middle income households. Are the ultra-rich becoming richer while the mega-rich retire and don’t get replaced by burgeoning young professionals? That conclusion could be supported by the data (note the blocks of red above $250,000 annual income indicating decreases in that level individual, household-adjusted incomes in this range are less prevalent in 2015 than in the base years), but it is certainly not the only explanation.

But what else is going on to explain these findings? Looking at the demographics, We can see some notable differences: most notably, is the number of workers per household.


Pairwise T-Tests show statistically significant differences between all the socioeconomic income levels. However, the substantive difference seems to be the number of workers. Lower income households tend to only have one worker, though the average number of adults across income classes suggests that these are not, on average, a single parent as a single earner (though it is significantly and substantively more common in lower-income households than middle or upper-income households). On the other hand, upper income households have, on average, 2 or more earners. In other words, both the average and median households are better off in 2014 (the income year reported on in the 2015 survey) than they were in 1999 or 2009. However, a large part of that is due to the wider trend of two-income households and other demographic shifts5. This larger trend explains some of the shift between 2000 and 2015, but we can see that this cannot be the full story, since the average workers per household (and all other measures of household size) actually falls between 2010 to 2015. Unfortunately, I need to end my investigation herea, so I will delve into that question at a later date.


Yes, the middle class is shrinking. Inequality is real, as is poverty. CEO pay has exploded since the mid 1990s7. This is all true. But whatever the populists say, this does not mean that everyone is suddenly going to be subjugated to the ultra-rich. The median and average households are still doing okay. There are demographic shifts–lower fertility rates, higher cohabitation, higher divorce rates, lower/later marriage, higher rates of children living with parents longer, etc.–and these demographic shifts go a long way to explaining a growing proportion of upper middle income households. Whether these are good shifts or bad shifts depends on your values and worldview, but economically, they are preventing household inequality from rising.

This all implies that maybe, just maybe, free market capitalism is doing what free markets do best: expand opulence for more people than are harmed by free trade and free markets. This is not to say that everything is perfect and rosy. Obviously, there are serious social challenges facing America today, and inequality has complicated and real ethical and moral concerns. However, the populist nightmare that the middle class is becoming poorer to the benefit of the upper classes of society does not appear to be one of those challenges.

Appendix A: Future Directions

For better or for worse, I’m a busy guy, and this is–for now–just a hobby. I hope to add to this investigation as I have time, but in the meantime, I encourage readers to fork my repo and add some of the following adjustments and considerations I wish I had time to include:

  • Adjustment for Cost of Living, or at least calculating median (and therefore class) by region or FIPS code. This will almost certainly require a new, larger dataset, but is worth exploring given the urbanization of millenials.
  • More robust ways of accounting for household size generally.
  • Identifying the same households over time to track changes to income class.

Appendix B: Code

All code is available on Github at github.com/dannhek/income_distribution. Below is a scattering of key pieces of code.

SQL Query used to get a subset of the data from MonetDB

#Retrieve Data from MonetDB SQL Database using dbQuery
query2015 <- "select h_idnum1
                    ,sum(case when earner=1 then 1 else 0 end)
               from asec15 where htotval > 0 group by h_idnum1,h_year"
df2015 <- dbGetQuery(db, query2015)
#From BuildHHCSV.r

Variables used in analysis

R_Variable_Name ASEC_Variable_Source Variable_Description
X N/A Observation counter
year h_year Survey year (Data reflect previous calendar year earnings)
h_id h_idnum1 Household Identifier
h_wages hwsval Household income from Wages or Salary in the previous calendar year
h_income htotval Total household income in the previous calendar year
h_size h_numper Number of people (all ages) living in household
h_num_adults h_numper-hunder18 Number of adults (18+) living in household
h_num_earners earner Number of people earning some income in household
h_num_fams hnumfam Number of families living in household
cpi_adj [FRED Data] Annual CPI adjustment factor
adj_h_income [Calculated Value] (h_income / cpi_adj) / sqrt(h_size)
seclass [Calculated Value] Social Economic Class, as defined by Pew.

Difference In Density Function

#Building the Difference in Differences Graphs
getYearComparison <- function(df,year1=2000,year2=2015) {
     dist1 <- density(subset(df,year == year1, adj_h_income)$adj_h_income,from=0,to=1000000)
     dist2 <- density(subset(df,year == year2, adj_h_income)$adj_h_income,from=0,to=1000000)
     df1    <-data.frame(x=dist1$x,y=dist2$y-dist1$y) ; df1$pos <- df1$y>0
     compareYears <- ggplot(data=df1) +
          geom_line(aes(x=x,y=y),size=1,colour="black")   +
          ggtitle(paste0("Changes in Income Distribution Between ",year1," and ",year2)) +
          xlab("Adjusted Household Income (2015 USD)") +
          ylab("Difference in Distribution Density") +
               breaks=c(0,50000,100000,200000,300000,400000,500000)) +
          scale_y_continuous(labels=comma) +
     #Return both a graph and the new dataframe


1: US Census Bureau. (n.d.). Small Area Income and Poverty Estimates. Retrieved from https://www.census.gov/did/www/saipe/data/model/info/cpsasec.html

2: Damico, A. J. (2016) Curren Population Survey. ASDFree. Github Repository. https://github.com/ajdamico/asdfree/tree/master/Current%20Population%20Survey; commit c680eec92cbba64512d756e533696dedaa3d415e

3: Variable htotval from the CPS Data Dictionary

4: Ryan, J. A.; Ulrich, J. M.; Thielen, W. (2015). Quantmod. CRAN Package. https://cran.r-project.org/web/packages/quantmod/quantmod.pdf 5: Burkhauser, R. (2012). Podcast interview with Russ Roberts. Retrieved from http://www.econtalk.org/archives/2012/04/burkhauser_on_t.html

6: Kochhar, R.; Fry, R.; Rohal, M. (2015). The American Middle Class is Losing Ground. Pew Research Center. Retrieved from http://www.pewsocialtrends.org/files/2015/12/2015-12-09_middle-class_FINAL-report.pdf

7: Planet Money. (2016). Episode 682: When CEO Pay Exploded. Retrieved from http://www.npr.org/sections/money/2016/02/05/465747726/-682-when-ceo-pay-exploded



A (Mid-) Westerner’s thoughts on Abu Dhabi Culture

This isn’t meant to be authoritative or scholarly. And this isn’t meant to cohesive or well flowing. Most importantly, this isn’t meant to be offensive or read in the light of American cultural superiority, though I imagine that implicit bias will probably show up. This is just a scattering of thoughts I don’t want to forget. Some of this comes from my coworker/guide who lives here; most of it is just my own observations.


  • English. English and Arabic are the official languages, and various guidebooks I’ve read say that English is the Lingua Franca for everyone except the Emiratis. As a very white, very obvious tourist, this has been my experience. That said, they aren’t speaking “proper” English, or even the slightly broken Indian/Asian English I’m used to (I work at a software company, after all). Instead, they seem to be speaking a formalized dialect based on Indian/Pakistani broken English. I even had one cabbie ask me if I spoke English because in his mind, he wasn’t speaking broken English.
  • Public Transportation. I have a love-hate relationship with Public Transportation. I love the idea of it, and I love what it’s done for cities, but I am embarrassingly inept at using it. I also really hate having to schedule my time around catching the bus, because time planning has never been my forte. That said, in Dubai at least, Public transit, while stressful for me, is more my style. The metro runs every 6 minutes and the stations are spotless (thanks to the cheap labor, noted below). Similar to the US, everything runs on a check-in-check-out system with RFID Cards. Buses, at least the buses I took, all operate on a “when the bus is full we leave” basis. They don’t have as many bus routes, but more point-to-point buses, much like the metro, but with less overhead, and no timetables.
  • Alcohol. The UAE is a Muslim Nation in the same way the US is a Christian Nation. Sure, the mosques send out the call to pray via loudspeakers and Alcohol is illegal to sell, but it’s not really illegal. Hotels and Resorts are allowed to sell alcohol, so at night, the little complex of my hotel is full of both tourists and locals coming for a drink. Locals have to pay hotel/resort prices for their beer ($10+ for a pint of Stella or Budweiser), but it’s readily available and all the nightclubs and bars affiliate themselves with a hotel, and the hotels have a side door for non-guests to get in.
  • Cheap Labor. The office building where I’ve been working has waiters. And I don’t mean at the restaurants, I mean that someone comes around to clean the breakroom, clear off the used coffee mugs from my desk, and will come in at the beginning of meetings to take orders for water or coffee. These aren’t interns just happy to sit in on the meeting, this is their livelihood. On my walk to work, I also saw someone polishing the metal rail on a bridge. He was going up and down the bridge with a rag polishing the big metal pole on the end of bridge. In the morning and evening, you can see shitty little buses bringing day laborers to and from the public housing compounds on the outskirts of town, but their clothes are always perfectly clean and ironed. But there are always people there to literally pick up after me. Busing your own table is non-existent here.
  • The Pseudo Caste System. They don’t have India/Hindu’s rigid caste system since upward mobility is possible here. They also don’t have racism quite like we do in the US. But it’s something in between. My Middle Eastern colleague described it as a racial hierarchy: Pakistanis/Indians, Blacks and Philippinos, Non-Emeriti Arabs, Whites, Emiratis. And it’s like in a Brave New World where everyone knows their place (based on their race), accepts it, and complies diligently. Again, upward mobility is possible, but institutional racism seems to be the norm, albeit not in the same way it exists in the US. Each group also has its own code of conduct to which they are held (though tourists seem to be able to get away with almost anything).
  • Build now, use later. I have some pictures of two park benches on the sidewalk, but behind them isn’t a park; it’s an empty sand lot. It’s full of trash and nothingness. But there are these two park benches. While I was there, I worked in a 40 something story building that was built in the last 5 years. It was a beautiful building, right next to an identical building that was completely empty. In both cases, the infrastructure was built not only before the need was there, but before the builder knew what the need was going to be. I’m sure that the vacant lot will be filled in and the benches will be used by people visiting whatever building or park goes in, and I’m sure that the second office building will eventually attract businesses to fill the various suites, but the speculation is rampant throughout all of the newer parts of town. It’s amazing what oil can build.

On Aspartame, Evidence, and Fear

Recently, I’ve been thinking about evidence, particularly in the context of blogs and recent health fads surrounding gluten, yeast, ‘all-natural’, etc. If you’re looking for a food blog, you’re in the wrong place, but I want to write about a trend I’ve seen when reading health blogs. I’m going to pick on health blogs because they’re what I’ve been thinking about, though this is not meant to be an indictment on health blogs (I’ve seen this problem on other categories of blogs as well). I’m also going to pick on aspartame because, well, it’s easy, and it recently came up when someone gave some well-meaning advice to my wife about how drinking diet soda is “bad for you”.

I’ve heard for years that artificial sweeteners are “bad for you” and don’t actually help you lose weight, so I’m actually sympathetic to this argument (call it great marketing from the anti-sweetener lobby). That said, what does the evidence say?

My first go-to for evaluating claims that such-and-such protein/chemical/supplement/food is good or bad for you is WebMD.com. In my experience, WebMD is a fairly conservative source of information, meaning that it takes a very large body of evidence for them to change their recommendations. There’s a trade-off here: on the one hand, this means that WebMD is not a great source to find the latest and greatest information about a new treatment (allopathic, osteopathic, homeopathic, or otherwise); on the other hand, they aren’t likely to endorse something just because it looks promising until there’s a large enough pile of evidence to suggest that it is promising. So what does WebMD say about sweeteners?

WebMD has a short, 3-page article entitled The Truth on Artificial Sweeteners (WebMD, 2002) that gives the professional opinions of a number of experts—mostly practicing and academic nutritionists. The one sentence summary is that artificial sweeteners can cause diarrhea when some people consume a lot of artificial sweeteners, but most people are fine consuming as much as they please, and although they probably don’t help that much in weight loss, “there is no credible information that aspartame—or any other artificial sweetener—causes brain tumors, or any other illness.” In other words, it’s pretty benign: it’s hard to make the argument that diet soda is good for you, but the evidence doesn’t support the claim that it’s bad for you.

As a good researcher, I didn’t want to look just at one source that I’m biased towards, so I Googled “Artificial Sweetener Health” (sans quotes in my search). I’m going to pick on Dr. Mercola for the sole reason that he showed up as the first result from Google that doesn’t appeal to what my biases say is a “credible source”[1]. Dr. Mercola is a D.O. and sells a number of health products and claims to have “The World’s #1 Natural Health Website” (Mercola, 1997). Just the title of his article, Artificial Sweeteners—More Dangerous Than You Ever Imagined (Mercola, 2009) smacks of fear mongering, and the article itself doesn’t disappoint those of you who love a good panic attack. He spends the first three paragraphs not really saying anything except scare tactics and “[i]t’s not pleasant to learn that corporations, government-sponsored regulatory agencies, and politicians are more interested in lining their pockets than protecting your health and the health of your loved ones. But unfortunately, these are serious issues that you must consider for your and your family’s safety” (Mercola, 2009). After some scrolling, he berates aspartame for being discovered by accident—just like penicillin, which, current concerns with anti-microbial stewardship aside, has saved hundreds of millions of lives and made hospitals a place where you can get healed instead of a place where you die of infection.

I could go on, but you get the point: there’s a lot of fear mongering. Let’s get to the evidence. For the sake of time, I’m just going to pick one of his references. To his credit, Dr. Mercola did include references, though there are no citations for his allegations of fraud on the part of the FDA and the “corporations” that got aspartame approved (and since he’s not an investigative journalist, I doubt he, himself, pulled what appear to be quotes from internal FDA documents). Specifically,  I want to focus on the claim about brain cancer:

“Equally alarming is evidence women of childbearing age who consumed aspartame during pregnancy were delivering babies with an increased risk of brain and spinal cord cancer” (Mercola, 2009). For this claim, he cites two sources[2]:

  • A 1992 article from the National Cancer Institute entitled Primary Central Nervous System Lymphomas: An Update (Jellinger, 1992)
  • A 1997 briefing published the Journal of the National Cancer Institute entitled Aspartame Consumption in Relation to Childhood Brain Tumor Risk: Results from a Case-Control Study(Gurney, 1997)

Thanks to Google Scholar, anyone can access the abstract of the 1992 article and the whole of the 1997 briefing. The rest of the 1992 article is behind a paywall, but it appears to be an overview of prevalence of lymphoma—sites in the brain, risk factors, etc.—and of treatment success with different treatments. The word “Aspartame” doesn’t appear in the abstract, so it couldn’t have been that important to the article; it probably came up as a direction for future research or something like that (though I don’t know that for sure).

Which brings me to the second article, which is a doozie. Gurney, et al performed a Case-Control correlational study looking at incidence of brain cancer in children by looking at children born after 1981 (when the FDA approved aspartame) by interviewing their mothers about aspartame and diet soda consumption during the pregnancy, and then calculating the odds ratio for having pediatric brain cancer. Calculating causation from such a study is pretty much impossible, but it’s a useful tool to identify possibilities for future research.

For those of you not familiar with the odds ratio (OR), it’s a ratio of odds, which are a ratio of probabilities. Suppose you have a 0.01 probability (1%) of some event (getting brain cancer) occurring under one state of the world (like your mother drinking diet soda while you were in utero) and a 2% chance in another state of the world (like your mother smoking while you were in utero). The odds in the first case are 0.01/0.99 or 0.0101 and the odds in the second case are o.0202, so the odds ratio is 0.0202/0.0101 or 2 (or “two to one” in common parlance). The interpretation is slightly more nuanced than this, but it basically means (especially with such low probabilities) that you are twice as likely to have the event happen in the second state of the world. This isn’t synonymous with saying that this is a likely event—it’s still only got a probability of 0.02—but the first state of the world is more unlikely in this example. An odds ratio of 1 means that however you divided the possible states of the world has no bearing on the likelihood of the event in question. An OR less than 1 means you’re more likely to experience the even in the first state of the world, and the opposite is true for an OR greater than one. All of this is subject to your sample, so to generalize the results from a sample population, we calculate a 95% confidence interval (basically a “margin of error” to use polling terminology) to estimate a range of values that probably contains the true value for the general population (anyone not in the sample). A confidence interval that does not contain 1 implies that there probably is a difference between the two groups, even in the general populations (it is “statistically significant”).

So let’s look at Gurney’s results, which, remember, Dr. Mercola cited as showing that aspartame might be linked to pediatric cancer. It’s a little difficult to read in this image, so I encourage you to read the article for yourself here[3].

f2-medium(Gurney, 1997)

So what this table shows, for our purposes here, is two things:

  1. All the 95% confidence intervals contain 1, meaning that there’s no conclusive (“statistically significant”) evidence of an effect in either direction.
  2. Only consumption of diet soda during the 2nd trimester and while breastfeeding have an odds ratio greater than 1.

In other words, the children of mothers in this sample who consumed aspartame during pregnancy were, on the whole, less likely to develop brain cancer! This is the exact opposite of what Dr. Mercola said in his article! That said, the affect size and sample size was still small enough that we can’t generalize this to the larger population (i.e. the confidence intervals contain 1), but it clearly doesn’t support Dr. Mercola’s claim. In fact, the authors of the article say outright: “we found no evidence to support the hypothesis that consumption of aspartame is related to pediatric brain tumor occurrence” (Gurney, 1997). This finding is repeated with a more recent (but still before the 2007 Mercola article) 2004 meta-analysis[4] published in the Journal of Oncology[5]: “according to the current literature, the possible risk of artificial sweeteners to induce cancer seems to be negligible” (Weihrauch, 2004).

So what the Hell, Dr. Mercola?

Not only does the literature not support what he’s saying (because it’s not statistically significant), the authors he cites as supporting him concluded the opposite of what he’s saying! It’s intellectually dishonest and straight up unprofessional to misquote authors like that! On top of that, this blog is just full blatant and unjustified fear mongering, which needlessly guilts less skeptical readers into abstaining from all sorts of harmless (but potentially pleasurable) products (artificial sweeteners is just the tip of the iceberg for this blog).

Again, mine isn’t a health blog, and I’m not saying aspartame and other artificial sweeteners (and the diet soda that contains them) are good for you. I’m not even saying they’re not bad for you. They very well could adversely affect my wife’s health and/or mental well-being (diet coke does tend to give me a headache). But the point is that the evidence doesn’t support the claims that this natural-only/homeopathic health blog is making.

It’s certainly possible that other blogs are more intellectually honest than Dr. Mercola, but my experience with health blogs is that they aren’t (evidence available upon request, though I would prefer to see a counter-example). I don’t mean to say that gluten, aspartame, sugar, fat, salt, or whatever is good for you. I don’t know. But I’ve demonstrated that in at least this instance, a very popular (allegedly #1) health blog is just blatantly ignoring the evidence (and in fact misrepresenting the evidence and its authors; what a dick!).

Try it for yourself. Go to a random blog–I recommend health blogs as a good example, but knock yourself out. If they use scare tactics or fear mongering (or even if they don’t), then check their sources. If they don’t have any, then they probably are just making it up. If they do give citations, run a few of them through Google Scholar and see what the actual articles say. Does it line up? If not, then why are you trusting this author? They’re not only making it up, but they’re actively ignoring the evidence to the contrary! I don’t care if you want to keep reading these blogs for your own enjoyment, but don’t pretend that they’re an authoritative source.

Lastly, let me leave you with this wonderfully entertaining and useful rule of thumb from the folks at In a Nutshell. When it comes to conspiracy theories, ask yourself this simple question: Would rich and/or powerful people be affected if this conjecture is true? If so, then it’s probably false (Kurzgesagt, 2014). For example, Dr. Mercola claims that evil, aspartame-producing corporations are intentionally trying to harm you to make a buck (Mercola, 2009). Do rich and powerful people—like, say, the CEOs of Equal or NutraSweet and their loved ones—eat or drink artificial sweeteners? Yes? Okay, this conspiracy theory is probably false.



Gurney, J. P. (1997). Aspartame Consumption in Relation to Childhood Brain Tumor Risk: Results from a Case-Control Study. Journal of the National Cancer Institute, 1072-1074.

Jellinger, K. W. (1992). Primary central nervous system lymphomas—an update. Journal of Cancer Research and Clinical Oncology, 7-27.

Kurzgesagt – In a Nutshell. (2014, December 18). The Ultimate Conspiracy Debunker. Retrieved from Youtube.Com: https://www.youtube.com/watch?v=Hug0rfFC_L8

Mercola, J. D. (1997). About Dr. Mercola. Retrieved from Mercola.com: http://www.mercola.com/forms/background.htm

Mercola, J. D. (2009, October 13). Artificial Sweeteners — More Dangerous Than You Ever Imagined. Retrieved from Mercola.com: http://articles.mercola.com/sites/articles/archive/2009/10/13/artificial-sweeteners-more-dangerous-than-you-ever-imagined.aspx

WebMD. (2002). The Truth on Artificial Sweeteners. Retrieved from WebMD.com: http://www.webmd.com/food-recipes/truth-artificial-sweeteners?page=1

Weihrauch, M. R. (2004). Artificial sweeteners—do they bear a carcinogenic risk? Journal of Oncology, 1460-1465.


[1] The first results were from cancer.gov, Oxford Journals (which I’ll cite later), cancer.org, and Mayo Clinic. Dr. Mercola’s website was result #5.

[2] This is really petty, but it irritates me. He made all the sources hyperlinks to the website itself. It’s almost like he doesn’t want you to read his citations…

[3] http://jnci.oxfordjournals.org/content/89/14/1072.full#F2
That’s right. I provide real links.

[4] A meta-analysis uses data collected in previous studies and compiles them to get a larger sample size for subsequent analysis

[5] http://annonc.oxfordjournals.org/content/15/10/1460.full.pdf+html