Is It Morally Okay for My Little Brother to Work for a Defense Contractor?
by Benjamin Studebaker
As some of you might know, I have a little brother named Adam (he appears on the blog once in a while). Adam is one of my favorite people–he’s a remarkably kind, thoughtful, and gregarious person. If you met him, you’d like him. Just about everyone does.
My little brother attends the University of Southampton in the UK, where he’s studying to become an aerospace engineer. Becoming an aerospace engineer literally is rocket science, and it’s not easy. Not only are Adam’s classes exceptionally grueling, but he needs to spend this coming summer doing an internship to get work experience and ensure that he’s competitive on the job market when he graduates. These internships are hard to get, especially if you want to be paid for your work. Recently, Adam was able to score a paid internship at a major American defense contractor. As a political theorist, this raises some interesting moral issues for me–no matter your position in international relations, it’s more or less inevitable that when you get involved in designing and manufacturing weapons, the weapons you make will be used in some conflicts you don’t agree with to kill people you don’t think deserve it. Is it okay with me that Adam wants to do this?
In moral philosophy, we generally recognize two broad types of moral systems:
- Deontology–these systems hold that actions are good when they follow certain rules and bad when they do not follow those rules regardless of what positive consequences might be achieved by breaking the rules. Deontologists believe actions or character traits are intrinsically good or bad, usually in an absolute way. For instance, many deontologists think that murder is always wrong, no matter what positive effects it might have. Deontologists are often religious (the 10 commandments are a deontology), but not always (see Immanuel Kant).
- Consequentialism–these systems dictate that what really matters are the consequences of actions–their outcomes. They are indifferent to the process or means we use so long as the aggregate consequences are positive in some way. Secularists are often (but not always) consequentialists (see the utilitarians).
This is not to say that all deontologists or consequentialists agree with each other–there are an immense number of internal debates within both camps–but the distinction still holds. There are many simple cases that highlight the difference. Here are a few:
The Trolley Problem
A runaway train is poised to run down the wrong siding into a caboose where five people are sitting about having a cup of tea. All five are certain to perish in the wreck if the train strikes them. You see this and have the opportunity to throw a switch, sending the train down a different siding where it will strike a boxcar that one man is in the process of loading, killing him.
Consequentialists usually believe it is better for one person to die rather than five–they throw the switch. Deontologists often believe that if you throw the switch, you’ve murdered the man in the boxcar, and since they believe that murder is categorically wrong, they do nothing.
The Fat Man
This is the same situation as before, except instead of having the opportunity to throw a switch to send the train into the boxcar, you can push a fat man in front of the train. This kills the fat man, but it prevents the train from reaching the caboose.
Consequentialists usually think throwing the switch and pushing the fat man are morally identical cases, because they both have the same result–one person dies. They might feel a little queasier about it, but they make the same decision as before. Deontologists who reasoned that throwing the switch isn’t murder have a much harder time justifying the fat man case–they usually see it as distinctively worse, and not merely on an emotional level, but in objective terms.
Hard Times
You are a poor farmer living in a village in a developing country with no substantive social safety net. Bad weather has desolated your farm, and you don’t have the funds to feed your spouse and three children, two boys and a girl. You have an opportunity to send your daughter to work as a prostitute for a year in a nearby city. If you make your daughter work as a prostitute, she will be well-paid and the entire family will have enough money to survive for several more seasons, at which point your children will be old enough to get jobs of their own. If you don’t, you believe that one or more of your children may starve to death, and all three will be malnourished.
Consequentialists usually send the daughter to work as a prostitute, while deontologists are more likely to resist if they believe that prostitution is wrong.
If your answers were not consistently consequentialist or deontological, that’s not surprising–non-theorists are usually remarkably inconsistent about their moral beliefs because they get those beliefs almost entirely from the social norms they acquired from their parents and peers as children. Theorists and philosophers use consistency to expose the conflicts among our unexamined values and thereby subject them to closer scrutiny than most people can reasonably manage in their day to day lives.
So what do deontology and consequentialism have to do with my brother? I’ve talked to some family members and friends about Adam’s situation, and those people who do take issue with what Adam wants to do usually object on a deontological basis. They argue that because the weapons Adam would be designing and making are sometimes used in the wrong conflicts to kill the wrong people, Adam is an accessory to murder or war crimes.
I’ve never been a deontologist. Deontological moral systems too often force us to choose obviously worse outcomes. I don’t see what the point is in having moral systems, values, or principles if those principles don’t help people live better lives. Deontological systems are fundamentally arbitrary–the behaviors they label as “good” or “bad” merely reflect our traditions and social norms.
Consequentialism takes context into account. It’s more nuanced. As a consequentialist, I look at Adam’s situation and I ask this question:
If Adam works for the defense contractor, does this cause more people to suffer or die than would suffer or die if Adam refused?
The answer is clearly no–if Adam refused the job, someone else would take the job. If the defense contractor decided to close up shop, another defense contractor would swoop in and take the market share. If all the defense contractors in the world refused to make the weapons, the government would make them itself. If everyone refused to make the weapons, that would make a difference, but Adam’s decision has no substantive influence on the behavior of all of these parties. If Adam refuses the job, it will make no difference to the number of wars fought or people who suffer and die. In the meantime, if Adam refuses the job, the direct penalties he incurs are very large–he misses out on a lucrative internship that would improve his market position and help him get future jobs.
International relations theory teaches us that global violence is a systemic issue with a wide array of causes, ranging from the security dilemma to international anarchy to plain old bad decision-making on the part of governments. Because there is nothing to stop powerful states from taking advantage of weaker states, there are irresistible structural incentives for states to try to be strong, and this entails having a powerful military. It’s certainly arguable that military spending is excessive, that the current military threats do not justify the state’s expenditures, but it’s not as if the United States could entirely do without tanks, warplanes, and missiles. Even if I were in charge of our foreign policy, we would still have the most powerful military, we just wouldn’t spend 4.5 times more than our nearest competitor. The trouble is that when you have a big hammer, everything begins to look like a nail, and the US government often misuses its military power in foolhardy and harmful ways.
Adam plays no special role in any of these mistakes. The government creates the demand for weapons. If the private sector doesn’t fulfill that demand, the government will fulfill it itself. The public votes in the ignorant government officials who start or intervene in foolish conflicts and create the demand, and the public pays the taxes that fund those ignorant choices. Every individual in the United States participates in the system, but no individual can effect meaningful change by defecting. The problem is much bigger than any one person.
So why should people expect my little brother to bear a serious personal economic cost for collective decisions they are every bit as much a part of as he is? Too often, we engage in moral scapegoating, blaming individuals for decisions that are at worst mere symptoms of our structural problems. Often these decisions do not have any substantive effects at all on the problems we blame them for causing. This makes it easy for us to convince ourselves that social problems are problems caused by others and not ourselves. We label the people who fire the guns murderers and the people who make them accessories, but we vote for the people who order the guns to be fired and made in the first place. We extol the virtues of our representative democracy even as it leads to vicious, cruel, and stupid foreign policy choices. We defend our system, claiming that it’s the fault of big companies or small individuals, but the system is broken not because individuals or businesses choose to break it, but because it is fundamentally flawed. No one of us bears the blame for this–it is on all of us to forge a better political system at the domestic and international levels, one that leads to less suffering and death.
This is not an easy task. It takes a lot of time, energy, and effort to study how the political system works and how it might be designed differently to work better. We don’t all have that time or energy. Engineers like my brother certainly don’t, not with all the grueling classes they take and difficult problems they have to solve. It’s not reasonable to expect my brother to be a political theorist as well as an engineer. If he tried to, he wouldn’t have the time to be very good at either, and that would be a shame, because he is marvelous at what he does, if I may say so.
Adam is responding to the economic incentives we as a society have created. When we were small, my little brother used to pick up leaves on the ground that he thought were especially beautiful. He’d hand one to me and tell me to keep it forever–he felt it symbolized our relationship. The guy is so sensitive and so smart. He would love to work on a mission to Mars. Now instead of carrying colonists, his rockets will carry warheads. That’s not his fault. It’s ours.
Whoa, Ben. One of your most excellent posts!
Thanks! Glad you liked it. 🙂
Very intriguing inquiries. Really glad I followed. I’ll try not to speak out of my depth or make any assumptions, but it really depends on which contractor it is and which specific project. After all, there are tons of them, like Boeing, Aerojet, Carnegie Mellon University, Georgia Tech, iRobot (yes, the company that makes Roombas is a military contractor), MIT, Motorola, etc. And they all have tons of projects that don’t involve military contributions. Judging by the tone, though, it seems you’re fairly certain that he’ll be involved in a questionable project, so I won’t draw any conclusions.
You picked up on the tone correctly–I’m fairly certain he will be involved in some heavy stuff.
The trolley problem and such examples often illuminate the problem in a clear way, but fail to speak to reality. In reality, things are even stickier. Yes, even stickier. For instance, can you yell at the guy on the tracks as you throw the switch and maybe save his life? Can you put an object in the way instead of a fat man? Can you call someone to stop the train? Can you blame yourself if you fail to take action in a matter of seconds and you end up killing everyone on the train? Of course, the example is not meant to take every possibility into account, but isolate the problem. Still, rarely is the problem so clear. In the trolley example, I can’t help but imagine more of the context, which isn’t given.
As you say, there are powerful forces at play, vast numbers of decisions that are out of our control and vast numbers that are in our control. It’s no wonder we are inconsistent, and I wonder whether it would be truly moral to be consistent. Consistency would seem to be a characteristic of a great moral truth, but as I see it, life doesn’t seem to allow it when we consider the whole context. Perhaps there is no hard and fast law or rule for every situation.
I can’t help but think of the Nazi example often used to criticize Kant’s categorical imperative. (Do you lie to the Nazi who comes to the door to collect the Jew you’ve been hiding? Or do you stick to your guns and NEVER lie?) That case makes it clear that lying is wrong only for the most part–lying may be wrong, but not always. I don’t mean to say I’m a moral relativist or anything like that, but only that matters of morality are quite complicated and the context is very important.
Yes, in many of these moral cases the context is simplified so that only one principle is under consideration. It’s sort of similar to how scientists like to do controlled experiments. By controlling for those other variables, we can isolate one specific choice. Of course, in real life situations, other variables are always in play.
I think the best possible moral system would be entirely consistent in its principles while always taking the context into consideration. Considering the context should be one of the principles–it should be integral to the system. In this way a moral system can be consistent yet simultaneously flexible and dynamic.
You raise a good point in bringing up the Kant Nazi case. I think that’s a strong argument for a consequentialist moral view, which is not at all relativist (i.e. lying is wrong when it leads to bad consequences but good when it leads to good consequences–relativists would argue that it depends not on the consequences, but on whether or not it violates the norms or values of our specific society or culture). A deontologist seems pushed by that scenario to drop the rule against lying, but there are many other situations where lying is clearly the wrong thing to do, so this leaves the deontologist in a very uncomfortable position.
Yet another excellent post which provides much food for thought. I’m very much a consequentialist/utilitarian myself and I basically agree with your conclusions. You’re certainly right that Adam shouldn’t have to shoulder the moral burden for decisions made by a large part of society. Just a couple of points though…
I think the important difference between deontologists and consequentialists is that the former believe that moral rules are something *objective* and independent of human thought. They’re usually given by God, although Kant’s ‘categorical imperative’ is supposedly the inevitable result of logical deduction, but in either case they’re something ‘out there’, waiting for us to discover and follow them. Consequentialists, on the other hand, see morals as something *subjective* and man-made. They do use rules and principles as convenient tools in moral decision-making, but if we’ve made up the rules we’re allowed to change them or add new ones according to changing circumstances. God-given rules also tend to be very simple (which might have something to do with the fact that they were developed in simpler times), as is Kant’s ‘categorical imperative’, whereas real life isn’t so simple, with the result that consequentialists find it easier to be flexible in applying their moral rules to real life questions. It’s useful to have *principles* about it not being OK to lie, steal or kill, but it’s easy to think up circumstances where going against such a principle is the lesser of two evils.
That said, I don’t think Adam’s decision is completely a question of Deontology versus Consequentialism. I think there’s a large element of ‘what would happen if *everyone* did or didn’t do so-and-so’. We do lots of things in life even though we know our actions won’t *really* have any practical consequences. No election has ever been lost or won on a single vote (not even in Florida 🙂 ), so why bother to vote? And if you do, why vote for a small extreme party which you know has no chance of winning? If the street is already full of rubbish then whoever eventually clears it up isn’t going to notice one wrapper more or less, so why go to the trouble of finding a bin? Some of these things involve an attempt to ‘send a signal’ to society. E.g. if a significant number of people refused to make armaments then the makers would have to offer ever higher salaries to attract good quality staff, which would make arms and warfare more expensive and therefore less popular politically, and the work would acquire a social stigma which would discourage others from doing it. What I’m talking about here is ‘Suppose They Gave A War and Nobody Came’, ‘are you part of the solution or part of the problem?’, ‘be the change you want to see’, etc., etc., which is another way of looking at things. Of course you’re right, a significant number of people are *not* going to refuse to make armaments, so any one person doing so isn’t going to change anything. The main (if not the only) positive consequences are likely to be for that one person, by giving him a pleasant feeling that he’s done the right thing. So in the end it just comes down to personal feelings.
BTW, I used to work in IT and was faced with this sort of difficult decision on at least two occasions. In one case I followed my ‘principles’ and refused to work on a project for a certain police force, in the other I took the lucrative one-year contract with the big oil company – and I’ve never regretted either decision. What you didn’t mention is that Adam, being such a nice guy, will undoubtedly spend most of his life using his talents to do wonderful things for mankind, so in the long term it would be a great pity (for him and for the rest of us) if he missed his chance to do that by refusing to spend a few months doing something a bit less wonderful. Just as long as he doesn’t get tempted to spend the rest of his life making weapons… I wish him much wisdom!
Thank you for your interesting comment–there are a couple areas where I would push back a little:
Most consequentialist theorists do believe that morality is objective, they just don’t believe that morality has metaphysical foundations (e.g. god, moral particles, etc.) beyond the sheer existence of reasons–it’s still an objective claim that it’s pleasurable for Bob to take a warm bath and that therefore Bob has a reason to take said warm bath. Derek Parfit writes about the objectivity of consequentialist moral theories in On What Matters, if you’re interested in pursuing that line of reasoning.
I actually made the argument against voting on the basis that it’s not a consequential act at the individual level:
https://benjaminstudebaker.com/2014/11/04/the-case-against-voting/
For the most part, people vote because it makes them feel good about themselves, not because they can realistically expect their vote to make a difference.
Littering is similar–we generally refrain from littering not because it actually makes a difference but because there are strong social norms against littering, and in some places it is a crime for which one can potentially be punished (at least in theory if not always in practice). Litterers are guilted and shamed and threatened with punishments by our society, and this is what deters them, not any substantive objective difference in the quality of the environment as a result of the additional wrapper. These are good policies and norms though–in aggregate, the incentive structure they create helps to prevent littering from becoming a serious problem.
I think “be the change you wish to see” is fundamentally flawed reasoning unless one has a reasonable expectation that one’s behavior will play some substantive role in influencing the behavior of others. If you’re a very influential figure, a role model, someone who society looks up to and pays attention to, your actions can have an effect on the behavior of others. But for most ordinary people, this is not the case. The only other reason to “be the change” is if you will avoid a sense of guilt or shame, but that’s ultimately an egotistical reason rather than a pro-social one. The decision comes down to how it will make you feel, not any substantive positive social effects. If lots of people refused to work for defense contractors, they might have to raise their wages, but they could make up for this by raising prices. The Department of Defense is not known for being cheap, and would probably pay more for the tanks, planes, and missiles it believes it needs. In any case, it’s hard to imagine any decision by Adam having that kind of mass influence.
You make a very good point at the end about potential long-term benefits to society that might result from Adam’s internship. I certainly hope that is the case, though I cannot presently establish it. 🙂
You’re quite right about most consequentialist theorists believing that morality is objective. I was oversimplifying – or rather just being sloppy. I’m still finding my way around in the jungle of overlapping multidimensional concept categories which is the world of philosophical ethics, and I haven’t quite managed to work out where my ideas fit in (if they fit in at all, that is 🙂 ). I think the rest of that paragraph still stands though.
Your article about (not) voting is a very interesting one, which I may comment on over there when I have more time. There can be reasons to say that voting is useless which are connected with the actual mechanism in use, for instance your chance of making a difference is increased in a system of proportional representation compared to a first-past-the-post system. I would also say that the fact that no election has ever been lost or won on a single vote (i.e. that one single vote doesn’t actually change anything) doesn’t in itself necessarily mean that voting is useless, and therefore that no one should vote. Maybe democracy is such a bad system that no one should vote, but that’s another question.
However, one shouldn’t confuse no difference with an extremely small difference. What I was trying to say is better illustrated by the examples of littering and climate change, if we forget about the special significance of threshold levels (which are so extremely important in elections). Let’s assume there are good general reasons for not dropping litter. If my wrapper isn’t substantially different from anyone else’s wrapper, and I’m not substantially different from anyone else, then those reasons apply to me as much as to anyone. If I were to agree that one shouldn’t drop litter (i.e. that dropping litter isn’t a good idea), and then go out and do so myself because one wrapper doesn’t make a practical difference, then I would consider myself to be a hypocrite. My not dropping a wrapper makes an extremely small difference, but it makes exactly the same difference as when anyone else doesn’t drop a wrapper. The reason that I, personally, don’t drop litter is not because I’m afraid of someone seeing me and disapproving, nor because I’m afraid of getting fined, nor even because I don’t want to feel like a hypocrite, but simply because I think littering isn’t a good idea. Sounds like a good enough reason to me!
I think that “be the change you wish to see” is a reasonable attitude in this kind of case, although perhaps not in other circumstances (where the business of threshold levels can convert an extremely small difference into literally no difference), in spite of the fact that there are much more effective ways of changing the world if you are able to influence large numbers of people.
I see what you’re saying–I suppose my view would be that in most cases, thresholds come into play. There is a point at which the amount of litter on a beach goes unnoticed, a point at which it becomes a minor annoyance, and a point at which it becomes a major inhibitor to enjoying the beach. From a consequentialist point of view, it does not make sense to be concerned about littering unless the consequences change as a result (i.e. you pass from one threshold to another). It would be very hard to see how from a consequentialist perspective it would be wrong to litter if the additional littering in question would have no additional effect on anything–if it was not enough litter to harm anyone or damage anyone’s experience. Now, if it were true that by littering habitually I could do enough littering to make a substantive difference, we might be able to say that it would be wrong to be a habitual litterer (in the same way that smoking one cigarette is not in and of itself unhealthy but being a regular smoker is). But unless there is a discernible consequence tied to the individual piece of litter in question, I’m not seeing it. Perhaps you could argue uncertainty–if I don’t know whether or not my littering will push us over a threshold, that would give me a reason. But this doesn’t help in cases where you are certain. Otherwise, you’d have to be a deontologist.
I see what you mean about thresholds, but I was specifically trying to find an example where they don’t come into it. What you say about litter thresholds is true, i.e. they do indeed exist, but exactly where those thresholds are is entirely subjective. One person might say “I’m not actually tripping over it and there aren’t too many rats, it’s just litter, get over it!”, whereas another might have his entire day ruined by seeing a single plastic bag thrown away in an otherwise beautiful natural site, and most people will be somewhere between these extremes. Uncertainty also comes into it, as you never know whether you’re pushing the situation over someone’s threshold. So, given that these thresholds are so subjective and uncertain, I think it’s reasonable to just assume, at least for the sake of argument, that the amount of annoyance caused by litter is directly proportional to the amount of litter thrown away, even if the actual equation is probably more complex.
Regarding deontology, I still think the important question is whether rules are regarded as man-made tools, or as something God-given or otherwise part of the natural world. I don’t have the philosophical terms to describe these two possibilities, but I’m sure you know what I mean. The relevant difference is that in the first case rules can justifiably be changed or ignored according to circumstances, in the second they can’t. As a consequentialist I might decide it’s not a good idea to drop litter, and therefore make a ‘rule’ for myself that dropping litter is wrong. But that’s just a convenience, so that I don’t have to start from scratch with each decision, and if, for instance, I’m in a very big hurry and can’t find a litter bin, I might decide to break my rule occasionally. Maybe I should be talking about a ‘guiding principle’ rather than a ‘rule’, which sounds a bit too unbreakable.
If it is actually the case that each individual piece of litter does (or might reasonably be expected to) have a discernible effect for someone (even if not for everyone), then yes, we would always have reasons not to litter, providing that declining to litter does not impose an unreasonable cost relative to the effects of the litter contributed. I don’t think this would work for the other cases we’ve discussed (e.g. voting or working for a defense contractor) because there is no demonstrable effect and unreasonable personal costs.
Once you start ignoring or changing deontological rules based on the circumstances, you’ve become a consequentialist. What you’re describing sounds like two-level utilitarianism, where we’re rule utilitarians in most cases but act utilitarians in cases where the context is unusual and we have time for deliberation.
I don’t see a ‘reply’ button, so here’s a new comment…
Two-level utilitarianism looks pretty good to me. The choice between act utilitarianism and rule utilitarianism is simply a practical one: it’s a useful rule to stop at a red traffic light, but it would be pretty silly for a pedestrian to do so in the middle of the night in a quiet suburb, when you can literally hear that there isn’t a car moving for miles in any direction. I’ve seen someone do that in Germany (and yes, he did look silly!), and I know someone who didn’t do it in Belgium (more of a police state than most people think), and was seen by a policeman and fined.
I don’t find it so useful to regard consequentialist and deontological theories as a mutually exclusive dichotomy, as in practice consequentialists do use rules as a convenience (see above). I agree with T. M. Scanlon, the inventor of two-level utilitarianism, that the deontological concept of ‘human rights’ can only be justified with reference to the consequences of having those rights. As I said above, I think the dichotomy man-made vs. God-given/natural a much more useful one.
BTW, if you still believe what you wrote two years ago then you could have short-circuited this whole discussion about whether it’s morally okay for Adam to work for a defense contractor by simply saying: yes, it’s morally okay because it’s in his best interest, period.
Correction 1: the inventor of two-level utilitarianism was R. M. Hare, not T. M. Scanlon.
Correction 2: in my last paragraph I should have said: because it’s in his best interest and the state hasn’t forbidden him to do so, period.
I obviously didn’t drink enough coffee before posting.
I haven’t found many secular deontologists to be particularly amenable to two-level utilitarianism or other more pragmatic approaches. I agree that two-level utilitarianism is usually a pretty good way to look at things.
The argument I present here is sort of a longer way of getting to the same point–because the state hasn’t forbidden Adam from working for the defense contractor (indeed, it funds the contractor and encourages him), the state is to blame for the situation. Only it has the power to change the incentive structure and produce different consequences. Adam can take no effective action and any action he would take would entail self-exploitation.
> He would love to work on a mission to Mars. Now instead of carrying colonists, his rockets will carry warheads. That’s not his fault. It’s ours.
I’m hung up on this last sentence.
Does your brother’s free will not come into consideration at all? SpaceX, Blue Origin, and NASA all provide opportunities for engineers to use their skills for a Mars mission, but your post makes it sound as though your brother has no choice and is choosing his career path out of a matter of financial necessity.
[…] In this episode, Jesse mentioned this article by Benjamin Studebaker: Is It Morally Okay for My Little Brother to Work for a Defense Contractor? […]
LOL! Immature analysis based on arbitrarily chosen values while begging their utility.