In October 2012, a group of non-governmental organizations formed the Campaign to Stop Killer Robots. The aim of this campaign was to preemptively ban fully autonomous weapons capable of selecting and engaging targets without human intervention. The campaign gained momentum swiftly, prompting a range of legal and political debates and decision-making processes. In this article, we use the framework of cultural techniques to analyze the operational processes, tactics, and ethics underlying the debates surrounding the development of autonomous weapon systems. Drawing on the materials of the Campaign to Stop Killer Robots and on current robotic research in the military context, we argue that, instead of demonizing Killer Robots as such, we need to understand the tools, processes, and operating procedures that create, support, and validate these objects. The framework of cultural techniques helps us to analyze how autonomous technologies draw distinctions between life and death, human and machine, and culture and technology, and what it means to be in control of these systems in the 21st century.