Whether we admire or despise the AI revolution, as presented in writing, art, and beyond, we need to consider its long-term potential. A potential that could threaten humanity as a whole, should it fall into the wrong hands.
When it comes to the military, humanity has yet to develop fully autonomous war machines, even though we are doing quite well in the field of aerodynamics. Although we have yet to invent a fully autonomous, human-independent airborne drone, some machinery already requires only minimal input on our end. We may call these planes "drones," but please remember that the term "drone" is far more general.
The term "drone" originally referred to a male bee whose purpose is to impregnate the queen bee. An "office drone" is a deragetory term for a monotonous white-collar office worker. I already knew that because I used to be one myself. As an "office drone", I, of course, didn't need to fly in order to do my boring job. And of course, A.I can be used in menial jobs as well (If you happen to work in Amazon, or know someone who does, you might already know that, too).
The point I'm trying to make is that robotic soldiers are a real possibility for the future. The fact that most current military robots are airborne and semi-autonomous does not limit this potential. We need to understand that artificial intelligence has far greater potential than mere art generation or language modeling. It can be embedded in a robotic body and follow orders blindly. AI-powered robots can be more competent, more durable, and able to carry out orders without question.
Being an excellent soldier does not make one an excellent person. In fact, there is no correlation between the two. Sometimes, deserting a morally depraved military force is the moral thing to do. Assuming, of course, that we can agree on some moral codes as objective.
It might be far more appealing for a military leader, such as a warlord or a military contractor, to employ robotic troops for their cause, either partially or completely, if they can afford it. This would mean that smaller, rogue military factions around the world could become far more powerful than they currently are. They would gain greater advantages over their enemies, be able to overthrow one or more governments, and perhaps even be competent enough to challenge more powerful countries.
Yes, warlords exist today, as do military contractors. The latter are essentially private military companies. This external article gives an example of contemporary warlords (that is, people with military and political power who are not tied to a strong national government).
Personally, I would advise against even considering developing killer robot armies. I'm not even talking about manufacturing them. I'm talking about the technologies that would be necessary to make them a reality. Robots lack the moral values that many humans possess, unless those values are explicitly programmed into them.
And even then, their programming can be hacked and changed, just like any computer's. That's not something you can do with humans (unless you incorporate some machinery into them, such as a brain chip, that allows it). Yes, brain chips can be dangerous for that very reason: the ability to override someone's behavior remotely, either partially or completely. Star Wars handled this concept quite well. I might explore the idea in another article, but I digress.
Psychopaths are examples of human beings who are incapable of empathy. As we know, empathy is necessary for morality. Therefore, if we never implement any empathy in a robot soldier, it could be even deadlier than a psychopathic human soldier. This is because of the advantages I listed earlier, and of the differences between man and machine in general.
Empathy is not only a capability, but also a moral restraint. If we create a mechanical being without any empathy, it will obey us without question. It will not hesitate to kill our enemies, because it does not have to be programmed with mercy.
And thus, as a theoretical warlord, you could hold the fate of countless people in your hands. An atomic bomb can be countered by another atomic bomb rendering your nation obsolete. It doesn't have to be the same with military drones, airborne or otherwise. Even North Korea has an airborne drone force.
As a philosopher, I choose to avoid letting an AI do the thinking for me. I think I can do the job quite well myself. However, a ruthless, contemporary warlord might not think the same way if they had access to military AI the way I have access to an AI language model. And of course, the implications of a rogue droid army could forever remain deadlier than those of an employed AI philosopher. I think we can all agree on that.