The Dark Side of Military Super Soldier Technology

Left of Boom Episode 7: Super Soldiers Part 2: The Dark Side (Ft. Edward Barrett and Tony Pfaff)



Subscribe to the Left of Boom podcast:

iTunes | Google Podcasts | Spotify | TuneIn | Stitcher


The following is an edited transcript of this episode of Left of Boom:

Hope Hodge Seck 0:00

Welcome back to Left of Boom. I'm Hope Hodge Seck, managing editor at Military.com. This is our seventh episode, but also the second half of our two-part series on super soldiers and the rapidly developing field of military bioenhancements. If Episode 6 left you scared about the terrifying cyborg military future that awaits us, this sequel may offer a small dose of comfort, in that there are thoughtful people thinking through the major problems that this future presents, and how we might solve them. Last episode, which you can find at Left of Boom wherever you get your podcasts, we talked about the technology itself, from smart body suits with implants to bionic eyes and cyborg brain enhancements. Today, we're going to follow that up by diving deep into the new world of ethical concerns that these technologies open up for the military, and talk about just how prepared America is to handle warfare that involves not just man-machine teaming, but man-machine hybrids. To guide us through what will no doubt be a mind-bending and at times frightening discussion, we have two of the leading experts in the field. Dr. Edward T. Barrett is the Director of Research at the U.S. Naval Academy's Stockdale Center for Ethical Leadership and an ethics professor in the Department of Leadership, Ethics and Law. And Dr. Tony Pfaff is currently the research professor for Strategy, the Military Profession and Ethic at the Strategic Studies Institute at the U.S. Army War College. He's also a senior non-resident fellow at the Atlantic Council. Dr. Barrett and Dr. Pfaff, welcome to the show.

Edward Barrett 1:29

Thank you.

Tony Pfaff 1:30

Thank you. Glad to be here.

Hope Hodge Seck 1:32

Dr. Barrett, to set the table here, let's talk a little bit about what you might expect to see in the near future. Which of the sort of invasive bioenhancements, things that are more involved than just neurostimulation or performance enhancers, would you expect to emerge in common use the earliest and what timeframe do you believe we're talking about when we discuss how bioenhancements could change the way that we fight?

Edward Barrett 2:00

Okay, well, I was recently on a panel discussing these issues with somebody who'd worked at DARPA, and it's not quite clear when these technologies are going to be ready for use in the military or anywhere else. So I can't answer that question, and I think it would probably be very difficult to answer without a security clearance. I can say that the ends and the means remain the same. For a long time there's been a desire to enhance soldiers, and everybody else for that matter, physically and emotionally, and especially cognitively. And then there are the longevity enhancement possibilities too, and we can even talk about ethical enhancements a little bit later as a possibility. But those are the ends, and then as far as the means go, you have devices like exoskeletons and implants. Implants are often directed toward the brain, and therefore cognitive and sensory enhancements. Then you have just the old standards of drugs and genetic enhancements. So the ends and the means remain the same. And I'm not sure about the timing, to tell you the truth. Maybe Tony has a little bit more insight about that.

Hope Hodge Seck 3:14

Dr. Pfaff, do you have any thoughts?

Tony Pfaff 3:16

I don't know necessarily about the timing of particular technologies, but if you review the literature, there are a number of these things whose development has been ongoing for several years. And in some cases, they're fairly advanced. Certainly with exoskeletons and other things, there are rudimentary forms of those already out there, but what it's going to take to actually get them fielded, I don't follow that part very closely. But I would also point out that, depending on what you want to call an enhancement, we've had them, they've been out there, for a long time. We've used amphetamines for endurance, and that's been going on a long time. In World War I, the British mixed rum and cocaine to kind of get the troops motivated to go over the top.

Hope Hodge Seck 4:04

Oh my goodness.

Tony Pfaff 4:05

My favorite World War II enhancement story is the Germans. Before the invasion of France, they had this doctrinal innovation called blitzkrieg, where they grouped the armor together to take advantage of its speed, and for the invasion to be successful they needed to get to Sedan in three days. But even with the faster Panzers and motorized vehicles, it would still take five days. The German Medical Corps' chief medical officer came up with this idea: they had been prescribing Pervitin, which is a kind of crystal meth that was commercially available in Germany as a pick-me-up, but they made it pure and gave it to troops in Poland, and it kind of worked out. And so they distributed 25 million tablets to the soldiers invading France. They crashed through the Ardennes in three days; staff cars got out ahead of the actual armored units. There's one great story of Rommel running around behind enemy lines in a staff car, with everybody amped up, and the French saw the staff car and said, oh my gosh, they've crashed through our lines, we'd better retreat and reconsolidate. And so French units were retreating when they didn't really have to, before they had fully engaged German forces. The doctrinal innovation of blitzkrieg was certainly a key to that, but it probably would not have been as successful, and history might have turned out a little differently, had it not been for the use of that kind of enhancement. So it's been going on a long time. In terms of some of the more advanced technologies, chips and brains and those kinds of interventions, we're probably way off from figuring out how to really integrate them in a practical way.

Hope Hodge Seck 5:40

That's really good insight that this is really a continuation of something that's been going on for decades, and I had no idea about that story, which is just amazing. So dialing in this this conversation, I guess, Dr. Pfaff, stay with you for a second, speaking specifically of domestic concerns and issues within the U.S. military. What are the most pressing, most immediate ethical questions that come to mind, that you believe we need to answer as these technologies mature and emerge, particularly the ones that are more invasive, more kind of along the lines of what people call the Cyborg soldier?

Tony Pfaff 6:13

Yeah, well, I may have to clarify a little bit what you mean by domestic concerns. But for instance, one might be: let's say we enhance soldiers in a way that gives them greater memory, faster cognitive speeds, greater cognitive capacity, and they carry those enhancements with them whenever they leave the service. One, how does that affect society? If they're more competitive for certain jobs, civilians are going to be displaced from those jobs, and that will be disruptive. On the flip side, what if those enhancements have long-term negative effects? Who's paying for that veteran's care? Well, presumably the Veterans Administration, but that's a significant cost that society would also have to bear, and that can be disruptive.

Hope Hodge Seck 6:54

Dr. Barrett, do you have any additional thoughts on the biggest issues?

Edward Barrett 6:58

Some of the big worries involve not really the invasiveness, per se, but just the effects of some of these enhancements on soldiers and others in society. The concerns revolve around both the type of enhancement and also the degree of enhancement. So on the type of enhancement, consider the possibility that you could make soldiers extremely fearless and aggressive. That sounds on its face fairly good, and I would call those psychological enhancements, but you could run into some real problems. In those situations, I think you'd be compromising the freedom and the safety of the soldier, and then you're going to also expose the mission and citizens to undue risk. So that's a concern involved with the type of enhancement. And then a lot of the other discussions have to do with the degree of enhancement. So you have these so-called extreme enhancements that could affect the soldier and also others in society. There are a couple of different arguments out there about how extreme improvements, say to intelligence or cognitive capacities, might harm soldiers. The one that really resonates with me was offered, as I recall, by Michael Sandel, a political philosopher at Harvard. His basic point is that limits, the argument goes, are beneficial for the individual. They promote interdependence and humility, and those are constitutive of human flourishing. And when you take away those limits, you might end up with people, soldiers, who are extremely narcissistic and self-centered, superior in their attitude. And that in itself is not fulfilling for that person. So that's one issue. You could argue that because soldiers are so necessary for survival, and because we have an all-volunteer force, they're consenting to this harm, but nevertheless, you might want to consider whether it would be worth it in all situations. So that's a degree-related harm to soldiers. And then I could say more about the way societies would be affected a little bit later.

Hope Hodge Seck 9:07

I'm interested that you bring up the way that soldiers might be morally affected, pertaining to their character. How might you sort of fence that in, if you are in the U.S. military and you're concerned about these things? Or how might you prepare soldiers for the eventuality that they might get technology that makes them behave differently and fundamentally alters their character, their outlook on the world?

Edward Barrett 9:38

Well, just really briefly, and I'll let Tony jump in too: one possibility is that you could create these drugs so that they're reversible. So you could take away that cognitive enhancement that is making them somewhat superior and narcissistic, but then there might be psychological issues associated with the reversal, and those might outweigh the benefits of reversing them. And then you could just give them to the soldiers only when they're going operational, say, so they wouldn't be like this all the time. But then there's a risk there too, because they haven't been trained in that state, and that could create some unintended bad consequences. Those are just a couple of things that come to mind that you could do to fence in this problem for the individual, but there are costs associated with those too.

Tony Pfaff 10:29

Yeah, I'd chime in with just two points. One of the things that I think is important is to take this into account from the beginning, when you're developing these technologies, before you start fielding the drug, the enhancement, whatever it is. If you think it's going to affect the character, have those interventions ready, whatever they might be. And if you can't develop those interventions, you may have to rethink whether or not you want to field it or utilize it at all. We can talk about what you might consider when you're doing that, but I also just want to echo Ed's point on reversibility and possible after-effects. Some of the scientists I've talked to like to point out that no enhancement is reversible, because you remember what it was like to have that capability, and so the reversal can bring its own kind of psychological pain or discomfort. That's something you also have to consider, because a lot of people will say, well, we're developing it so it'll be reversible, and that takes away the ethical concern. It may minimize it or mitigate it, but it certainly doesn't take it away.

Hope Hodge Seck 11:28

Right. So you talk about working ahead, ethically, on some of these issues before these sorts of enhancements are really mainstream. And I'm struggling to think of an analogous situation in which the U.S. military was fully able to anticipate the ethical and philosophical questions that came with the introduction of a game-changing new technology. I'm thinking about how deeply immersed we're becoming in unmanned and drone warfare and the many gray areas that exist around kill decisions: who we target and how, who is in the room when these decisions are made. So how do military ethicists help get ahead of the conversation in a meaningful way?

Edward Barrett 12:09

Well, the military has become, I think, pretty wise in that it creates positions like mine and like Tony's. So there are people within the system, especially at the staff and war colleges, and I'm at an academy, people with military experience too, who do try to do a lot of this thinking in advance. We keep track of the ways war is going to be waged, and non-war, and then think through the ethical issues associated with those new situations. And there are a lot of conferences that go on every year on these issues with the relevant players. We try to reach out to not just other academics but operators, people who work in industry and think tanks and governments. So this conversation is fairly robust, at least in the United States, and I've actually gotten a lot of feedback from Europeans and Asians who say this conversation in the U.S. military is really unique. People in the U.S. military basically say whatever they think; they're not restricted at all in posing these possibilities and debating them. So it's a very lively situation that's been created, and I think it's very healthy.

Hope Hodge Seck 13:17

We'll be right back.

Hi, it's Hope Seck, interrupting my own podcast to make sure that you're signed up for Military.com's free newsletters. We just launched a new one, At Ease, all about military entertainment news. You can also sign up for active-duty and veteran newsletters with insider information specific to your service, as well as ones focused on crucial topics like finance, jobs and pay. Go to Military.com and select Login in the upper right-hand corner to register for free and get started. All right, back to the show.

The decision-makers, you know, the people with brass on their shoulders or the people developing these technologies: do they generally listen? I mean, are they willing to countenance the 'Should we do this?' and not just the 'Can we do this?'

Tony Pfaff 14:07

You want me to say it, Ed, or you?

Edward Barrett 14:08

No, go ahead, Tony.

Tony Pfaff 14:09

The smart ones do! Yeah. Well, if you look at AI development, I think we actually should get pretty high marks for that. The Defense Innovation Board and the Joint Staff and others have set up specific organizations to look specifically at the ethics of AI as this technology is coming online. And if you review those documents, they're pretty good. Nothing's ever going to be perfect, and new facts and new considerations will always come up, but I think there's hope. I'd also underscore that when it comes to bioenhancements, it may not be as hard as we think it is. You may not catch everything, but you can catch some things. So go back to the story of the nerve agent antidote, or vaccine, that they gave us in the Persian Gulf War. I was a recipient. As I was doing research for my work on this, I learned that they basically overrode the guidelines and gave it to soldiers without their consent, because of the exigencies of that particular situation, the idea being: well, we really can't get everybody's consent, and if they really knew what was good for them, they'd rather be inoculated against a nerve agent than not, and we just don't have time to test it. But then you learn that they had been stockpiling the drug for that use for over six years previously and didn't conduct any tests in the interim. So that's why, if you're thinking about this use, start the tests. Consider the ethical issues, consider those implications early on, and you'll get a better result. Like I said, I think with AI you're seeing some of that thinking take hold, for which Ed Barrett gets complete credit.

Edward Barrett 15:53

We have a conference every year at the Naval Academy called the McCain Conference, and it's on a different topic like this every year. And Tony comes every year and usually speaks. So there's a very robust group in the United States, and we pull in people from Europe and Asia and other places. So it's ongoing. I'd say it's really developed over the last, what, maybe 20 years, Tony?

Tony Pfaff 16:16

Yeah, it was deliberate. It started off with a conference of basically philosophers and other interested folks associated with the military academies, who formed a group that wanted to focus on ethical issues, and that just broadened into the particular group that Ed's a part of, which has branches all over the world. So yeah, I think the military certainly does take into account the ethical issues, but you're right to point out that sometimes necessity gets the better of us. Depending on how we interpret the urgency of the situation, we can get ahead of ourselves. And that's, again, why I go back to: start early, do ethics early and often. That's the thing to watch out for. The other thing to watch out for is where that urgency is coming from. Are the bad guys developing one too? What happens if they get it and we don't? I think that's part of the other consideration. If you're developing it for its own sake, just to have an advantage, you've got to at least think twice about fielding it, maybe even think three times about doing it in the first place. It changes when it's to offset or prevent being at a disadvantage. But those are the ways you have to start thinking about these kinds of technologies before you even start developing them, and certainly before you start fielding them.

Hope Hodge Seck 17:25

Well, you're anticipating my line of questioning there, but before we get into that aspect of things, Dr. Pfaff, I think you brought this up, so I want to pull on the thread a little bit more. How do you anticipate free will coming into the equation? Right now, you can't choose whether or not to carry an M4 rifle or your M203 grenade launcher if you're a grenadier; you carry what's issued to you. Should troops have the choice of whether or not to get bioenhanced, whether it's chemically or an implant, anything like that? And how would you expect that conversation to affect the military socially?

Tony Pfaff 18:00

Yeah, no, that's a really good question. And the answer kind of goes something like this. Again, to be specific about what we're talking about in terms of enhancements: a powered suit that has no medical implications, that doesn't involve a medical intervention, or boots that have springs on them to enhance running and jumping, I don't see those as terribly ethically problematic. What makes enhancements concerning is when you have some kind of medical intervention into the body that changes the body somehow. And when you do that with a soldier, you're basically saying: okay, here's a trade. I can give you this enhancement, for which there may be long-term side effects, but it will prevent your near-term demise or a serious injury in combat. What is the soldier going to say? You're giving the soldier an offer he or she cannot refuse. The only rational thing is to take the enhancement, almost regardless of what the long-term effects might be, depending on what your probabilities of each outcome are. So like with the nerve agent vaccine: while there were side effects, the incidence was very low in the population, so I might consent to take that risk. But all things being equal, you can place soldiers in a kind of coercive situation, which I think you have to watch out for. One way of handling it is, well, if someone doesn't take the enhancement, they don't get exposed to the same risk. But even that's an imperfect solution, for obvious reasons. One, that may not really be an option, as with the nerve agent antidote. The other thing is, even if it were an option, there's a second-order effect in terms of risk and fairness to the enhanced soldier. While that enhancement may make them more resilient, more survivable and more lethal on the battlefield, it also makes it more likely they'll be used, and depending on how many iterations of that there might be, getting the enhancement might over the long term make them less survivable, or more likely to be severely injured. Those are the kinds of things you have to take into account when you're constructing a policy about how to distribute these things.

Hope Hodge Seck 19:52

Dr. Barrett, you have any thoughts on that one?

Edward Barrett 19:54

No, that was really interesting. Those would be what we'd call external pressures to enhance. To those you could also add what I'll just call internal pressures. In certain groups, people who are extremely young are much more open to taking risks, and therefore you have to wonder how much informed consent they're giving. It's just kind of natural that they would say, yes, this sounds cool, this sounds enhancing, and I'm going to take it, without considering it like somebody who's a little bit older, who's seen that bad things can happen. I just want to note, too, that there are cultural differences. I teach military ethics over in France occasionally, and we discuss human enhancement. The French army and the French soldiers are very closed to this; they do not want to be enhanced. On the other hand, the American youth that I've talked to, the midshipmen, for example, they want the enhancements. So there are cultural differences too, and I don't know exactly how to explain those.

Hope Hodge Seck 20:57

Well, that's a great segue, and I think we'll stick with you, Dr. Barrett, for a second, to talk about the international consequences of bioenhancement; we had sort of a sidebar about this a couple of weeks ago. First of all, with our allies: what happens if we have troops with brain implants or other physical enhancements interoperating with other nations' troops that don't have access to those kinds of technologies? What are your predictions about the way that this will affect those crucial relationships?

Edward Barrett 21:23

Yeah, good question. And relationships not just with your allies, but also with your own citizens and citizens of other countries. So far my focus was on the harm that would exist in soldiers, but that harm, that sense of superiority, would have, I think, a potentially very deep and broad social effect. Imagine that overnight you are 10 times as intelligent as any other human being; how would you look at your lessers? You'd see them as having lower capacities, and given our association of capacities, especially cognitive capacities, with human value and human dignity, which is said to be the foundation of human rights, you might see them as beings that are human but have fewer rights. And therefore you get the possibility that the military, enhanced, would look upon civilians as lessers, and therefore you get the superiority and dominance of the military. This would cause serious civil-military problems. So you'd want to solve this problem by distributing the enhancements more widely: let the market distribute them. But then you get the superiority and dominance of the military plus the rich citizens who can afford these enhancements, and that's not a good situation either. So you want to promote the enhancements throughout your society, and therefore you equalize the situation, but then you get the superiority and dominance of the rich states' militaries. So you see where I'm going with this: you create problems that are potentially global if you start down this path. And then vis-a-vis other militaries, you would look at your allies who are unenhanced potentially as lesser. They would be lesser in capacity; they wouldn't have less value, but you might see them as having less value, and that could cause problems within these alliances and their operations. This is something that was really well described by a philosopher named Allen Buchanan in an article that he wrote, I think in 2009. I think it's very important to remember how being enhanced might affect you psychologically, and the global implications of this, especially between the richer and the poorer countries.

Hope Hodge Seck 23:50

And it sounds like there are multiple possible solutions to this problem, but there's not one that's like the perfect fit.

Edward Barrett 23:57

I don't know, I could go on. Do you want to jump in, Tony?

Tony Pfaff 24:01

Interoperability is kind of always a problem for us. I mean, NATO still doesn't get it right, and they've been at it for, you know, a few weeks now. There are ways of compensating for that in terms of role differentiation: we've got this capability, so they go do that thing where they're more effective. You can also do geographic spacing, giving people different roles and putting them in different places. But what Ed said was, I think, really important, because you can get around those kinds of interoperability issues, but the concern arises if your partner views your implementation of these enhancements as unethical. You lose a lot of legitimacy, and that's very hard to recover. It affects how they regard us and regard that individual soldier. If I'm taking pain-suppressing or fear-suppressing drugs, no one's going to think of me as all that courageous; I'm just the guy on the fear-suppressing drugs, and being thought of as courageous is part of the soldier identity. So at some point we become so enhanced that we're not soldiers the same way those partners are soldiers, and then you're going to see the kinds of things Ed was talking about creep in. It just generates a lot of friction and makes cooperation really difficult. And like you said, that's going to spill over into the civil-military relationship, because if my enhancements are taking away my fear, or my reasons for fear, then they're also taking away my exhibition of courage. That's probably inevitable if we go down this route, but it is going to cause a lot of adjustments in the way we work together, the way our own society regards us, the way their societies regard us and the way our partners regard us.

Hope Hodge Seck 25:32

So that's our friends. On the other side, we've got technology-hungry global competitors, who may be just as eager to acquire cyborg soldier tech and much less worried about the ethical considerations and concerns that we're talking about now. Could we end up in a bioenhancement arms race with our competitors? And how can we establish meaningful guardrails on human military enhancement in light of the global environment we find ourselves in?

Tony Pfaff 26:00

Aren't we already in a biomedical enhancement arms race?

Edward Barrett 26:07

I mean, ideally, as far as I'm concerned, given the dystopian future that I laid out globally as a possibility, I think a global ban would be nice. But that's just not going to happen, because of what you just mentioned: adversary states are developing these things, and therefore, because of that external pressure, we're probably going to have to do the same. So then how do you do the best with that situation? One thing I've considered is the possibility of developing these things and then regulating them by giving them just to special forces units that need them more. They tend to be older, and therefore their consent is very well-informed. And this would avoid the military dominance problem, because only a small portion of the military would be enhanced in this radical way. I think that'd be very, very difficult, though. Other states are not going to enhance just their SOF units, and within our military there are other reasons to give other components these enhancements. So I think the cat will be out of the bag and the entire military will be wanting these enhancements. And then you just kind of get pushed down the path that I described, with the civilians wanting them and then the civilians eventually all getting them, but then having this rich-state, poor-state distinction that's even more radical than it is now, economic and in another way. So this is a really key problem, and it's important for the military to think about, because much of the push for developing this technology is coming from the military. So I think we have a duty to take a look at the possible problems and how to manage them. I'm kind of at an end, except to say that I mentioned ethical enhancements earlier, so maybe we could solve this problem of superiority with an ethical enhancement. But then you get into obvious agency issues: are you taking away the freedom of a person to do the right thing, which is very important to human development? So I don't have an answer to your question, and I worry about this. A lot of the attention has been going, and rightly, to artificial intelligence and lethal autonomous weapon systems and cyber war. That's all great, but I think human enhancement technology deserves a little bit more attention.

Hope Hodge Seck 28:23

Well, I did want to ask that question. Years ago, we had a Secretary of the Navy who said the F-35 would be the last manned platform, and we've got people operating RPAs out of Nevada who are shooting people in countries very far away. It seems like the operator on the battlefield is almost becoming a species going extinct. So are these issues still relevant and pressing if there are fewer and fewer humans on the future battlefield as we see it?

Tony Pfaff 28:56

So if you're talking about remote technology, certainly enhancements can apply to that, because some of these enhancements are to enable basically controlling remote devices with your mind, with your brain. And certainly that's the natural progression of military technology over time; it's always been about lethality and survivability, and they're related. The more lethal you are, probably the more likely you are to survive. So certainly we can see it's heading that way. For the near future, at least, I'm guessing we're going to have to have somebody on the ground with the machines. Even in the Air Force, the automated technology they're talking about has a manned fighter controlling a number of unmanned fighters. So you're probably not going to remove the human, but you're right: now all of a sudden I've got this big giant military where most of the actual people are nowhere near the actual harm. We've already talked about what that might mean from the civil-military perspective, but now let's look at it from the perspective internal to the military. You have a group of risk-takers that gets smaller and smaller over time, backed by, I'm going to call them a bunch of technocrats, or a bunch of technicians, who are skilled at manning and running and administering these systems but who don't share the risk. In the work I've done on disruptive technologies, one of the things I think you can do, if the required number of humans shrinks, is rotate people through different kinds of assignments so that the risk gets distributed. I worked with a special operations headquarters in Iraq for a little while, and that was sort of how they did it. You would spend some time in the field, then you'd go back and do support work for a few months, and then you'd come back again. The risk was distributed that way, so that's one way of handling it.

And if I've got time, I did want to take up a point on controlling the proliferation of these weapons, particularly with adversaries developing them. I took up that question directly: what's the possibility, or what's the utility, of developing the weapons in order to get a ban? A great paradigm, an example of one that worked, was the St. Petersburg Declaration. The Russians had developed, in the 1860s, soft lead bullets that were designed to hit ammunition carriages and splinter the wood. The Minister of War realized that using that on people doesn't kill them; it wounds them severely and permanently, it creates pain. They decided they didn't want that kind of munition to be used against people. So they had it, and they used the fact that they had it as leverage to convene the St. Petersburg conference, which included 19 European countries as well as Iran and others outside of Europe, and they all agreed that that kind of munition would not be developed for use against personnel. They set size and weight limits on it, and over time, with a lot of hard work, they used that as leverage along with efforts to get other people to sign on to the ban. Eventually, we do pretty much have a norm in international law against using those kinds of munitions, still to this day. So you can do it. But you can't use 'Oh, I'm doing it to ban it' as an excuse. It has to be accompanied by a real, honest, sincere effort to get other people to sign on to whatever kind of counterproliferation protocol would be effective.

Hope Hodge Seck 32:34

So my final question is a bit of a bonus question; it won't be on the test. There's a term coined by the military futurist August Cole that I really love: FicInt, or fiction intelligence. So I had to ask both of you, and we'll start with Dr. Barrett: is there a particular book or movie, a work of fiction, that best represents where you think we'll be in the future, or where we are right now, as we grapple with these technological enhancements?

Edward Barrett 32:59

Yeah. There's been a lot of literature in the past, obviously, and movies. Recently there was a movie, it was called Limitless, wasn't it?

Hope Hodge Seck 33:08

Yeah, absolutely.

Edward Barrett 33:09

And I saw that on an airplane. You know how airplane movies are: you remember about half of it, because you're not getting enough oxygen. But I wanted to watch that again, because it seemed like it encapsulates a lot of the ethical issues associated with being enhanced cognitively. As Tony mentioned, the way that you would be harmed if you weren't able to get these drugs, and then what your attitude would be like when you were enhanced, and then the social ramifications of that. So I think Limitless would be a good one. I want to watch that one again, and I'd recommend it, because it looks really smart.

Hope Hodge Seck 33:45

It stars Bradley Cooper. That's a good one.

Edward Barrett 33:47

Yeah, exactly.

Hope Hodge Seck 33:50

Dr. Pfaff, do you have one?

Tony Pfaff 33:52

I'm going to come up with an answer in about six hours, at three o'clock in the morning, and I'll call you. Actually, when you said that, my initial response was: oh, yeah, Braveheart. Because if all this technology ends up canceling itself out at some point, or if we really do escalate into a global conflict driven by, talking about the future, climate change, pandemics and other things, if we really get to that environment, and Mad Max might be a better pick for that one, then we can go back to throwing rocks at each other. But I'll think of a better future movie in a minute. The thing about Braveheart that struck me was that it was a time when soldiers could fight soldiers, and the fighting was isolated from the civilian population. And Orson Scott Card, I know he wrote Ender's Game, and I'm not sure we're there yet, but he also wrote a very interesting fiction story about a future battle between Earth and Mars that was entirely automated. The battle was going on in space, and Earth and Mars were just wasting robots right and left. What broke the stalemate was that the Martians found a human and decided to inject that one human mind into the conflict, which behaved unpredictably and ended up winning the day, because the machines on the other side couldn't handle all the calculations associated with trying to keep up with a human. I think we're going to see more of that. If you're talking about space, we can talk about Space Force, but the fighting is going to be done by robots and not plumbers, if you've seen the show. So that's in terms of the harms and the killing. But in terms of the disruption, it could get quite severe, because the political effects will probably be more dramatic, spread out over time and space.

Edward Barrett 35:48

I'll mention one more; maybe, Tony, you'll think of it too. This goes way back: obviously, Mary Shelley's Frankenstein. She's operating out of the Romantic tradition, which was a reaction against the Enlightenment and its in many ways good desire to control nature, the ramifications of which can sometimes be problematic. I read that a couple of years ago, and I thought it had some things to offer on this issue.

Hope Hodge Seck 36:20

Well, I personally am relieved to know that there are people like you who are working through these questions. This is the second in our two-part series on super soldiers; the last set of interviews left me very frightened, and now I'm somewhat comforted. Thank you so much for taking the time to be on the show.

Tony Pfaff 36:43

Thank you, I enjoyed being here.

Edward Barrett 36:44

Yeah, I enjoyed it.

Hope Hodge Seck 36:51

Well, if you've made it this far with us, you've muddled through a couple of the most daunting and sticky issues facing modern warfare. Don't worry, there's lots more to dive into where those came from, but we are going to hit pause and pivot to something totally different for next time. On Episode 8, we're going to hear from a woman who inspired an iconic character in one of the most beloved military aviation movies of all time, and find out how her life and career have been even more interesting than their depiction in film. I am so very excited for this; it's really going to be a can't-miss show. If you do want to learn more about the ethics of cyborg soldiers and bioenhancements, I've included a bunch of resources curated by our experts in the show notes, so be sure to check those out. In the meantime, the lines are still open; hit me up at podcast@military.com to tell me what military topics interest you. And please leave us a rating and review wherever you get your podcasts to make sure that more people get a chance to learn about the show. As always, thank you for listening, and be sure to swing by Military.com to get more information on all the news happening every day in the military community.
