Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.
What’s more important, a free workforce or an obedient one?
Especially one that is made to kill everybody else except their own. Let it replace the police. I'm sure the quality control would be a tad stricter then.
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.
The real problem (and the thing that will destroy society) is boomer pride. I’ve said this for a long time, they’re in power now and they are terrified to admit that they don’t understand technology.
So they’ll make the wrong decisions, act confident and the future will pay the tab for their cowardice, driven solely by pride/fear.
Boomers have been in power for a long, long time, and the technology we are debating is a result of their investment and prioritisation. So I'm not sure they are very afraid of it.
I didn't say they were afraid of the technology, I said they were afraid to admit that they don't understand it enough to legislate it. Their hubris in trying to present a confident facade in response to something they can't comprehend is what will end us.
Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from.
Eh, they could've done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.
I hope so, but I was born with an extremely good sense of trajectory and I also know how to use nets. So let's just hope I'm superhuman and the only one who possesses these powers.
Edit: I'm being a little extreme here because I heavily disagree with the way everything in this world is being run, so I'm giving a little pushback on a subject I'm wholly against. I do have a lot of manufacturing experience, and I would hope any killer robots governments produce would be heavily shielded against EMPs, but that is not my field, and I have no idea if shielding a remote-controlled robot from EMPs is even possible.
The movie Small Soldiers is totally fiction, but the one part of that movie that made "sense" was that because the toy robots were so small, they had basically no shielding whatsoever, so the protagonist just had to haul a large wrench/spanner up a utility pole and connect the positive and negative terminals on the pole transformer. It blew up of course, and blew the protagonist off the pole IIRC. That also caused a small (2–3 city block diameter) EMP that shut down the malfunctioning soldier robots.
I realize this is a total fantasy/fictional story, but it did highlight the major flaw in these drones. You can either have them small, lightweight, and inexpensive, or you can put the shielding on. In almost all cases when humans are involved, we don't spend the extra $$$ and mass to properly shield ourselves from the sun, much less other sources of radiation. This leads me to believe that we wouldn't bother shielding these low-cost drones.
Crossing the lines, sure, though I'm also not sure it would really work.
EMPs are not hard to make; they won't, however, work on hardened systems like the US military uses.
Is there a way to create an EMP without a nuclear weapon? Because if that’s what they have to develop, we have bigger things to worry about.
Yeah, very easy ways. One of the most common ways to cheat a slot machine is with a localized EMP device to convince the machine you're adding tokens.
There’s an explosively pumped flux compression generator. https://en.wikipedia.org/wiki/Explosively_pumped_flux_compression_generator
One way involves replacing the flash tube on an old camera flash unit with an antenna. It's not strong enough to fry electronics, but your phone might need anything from a reboot to a factory reset to servicing if it's in range when it goes off.
I think the difficulty with EMPs comes from the device itself being electronic, so the more effective a pulse it can deliver, the more likely it is to fry its own circuits. Though if you know the target device well, you can target the frequencies it is vulnerable to, which could be easier on your own device, and on everything else in range that doesn't resonate at the same frequencies as the target.
Tesla apparently built (designed?) a device that could fry a whole city with a massive lightning strike using just six transmitters located at various points on the planet. If that's true, I think it means it's possible to create an EMP stronger than a nuke's that doesn't have to destroy itself in the process, but it would be a massive infrastructure project spanning multiple countries. There was speculation that massive antenna arrays (like HAARP) might be able to accomplish something similar from a single location, but that came out of the conspiracy-theory side of the world, so take it with a grain of salt (and apply that to the original Tesla invention too).
A truly autonomous system would have integrated image-recognition chips on the drones themselves, and hardening against any EM interference. They would not have any comms to their 'mothership' once deployed.
If they just send them back it would be some murderous ping pong game.
so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make emps
Honestly the terrorists will just figure out what masks to wear to get the robots to think they’re friendly/commanders, then turn the guns around on our guys
It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can't punish an AI for doing something wrong. An AI doesn't require a raise for doing something right, either.
That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.
We should all remember that every single piece of tech we have was built by someone. And that someone and their employer should be held accountable for everything this tech does.
How many people are you going to hold accountable if something was made by a team of ten people? Or a hundred? Do you want to include everyone from the designers to QA?
Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but making every last developer accountable is just a dream of a world where you do everything correctly and so nothing needs fixing. That's impossible in the real world; I don't know if that's good or bad.
And in my experience, when there's too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to avoid getting anywhere near it, or to sabotage the work so nothing ever ships. Either way, you will not get the results you might expect from holding everyone accountable.
The CEO. They claim that “risk” justifies their exorbitant pay? Let them take some actual risk, hold them criminally liable for their entire business.
1979: A computer can never be held accountable, therefore a computer must never make a management decision.
2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.
Whether in the military or in business, responsibility should lie with whoever deploys it. If they're willing to pass the buck up to the implementer or designer, then they aren't convinced enough of it to be using it.
Because, like all tech, it is a tool.
As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.
Horizon: Zero Dawn, here we come.
It won’t be nearly as interesting or fun (as Horizon) I don’t think.
Remember: There is no such thing as an “evil” AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
Evil humans also manipulated weights and programming of other humans who weren’t evil before.
Very important philosophical issue you stumbled upon here.
Saw a video where the military was testing a "war robot". The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way toward the robot).
Apart from that, this is the stupidest idea I have ever heard of.
Didn’t they literally hide under a cardboard box like MGS? haha
You’re right. They also hid under a cardboard box.
Doesn't AI go into the landmine category, then?
any intelligent creature, artificial or not, recognizes the pentagon as the thing that needs to be stopped first
Makes me think of this great short movie Slaughterbots
Amazing movie that everyone should watch.
For everyone who's against this, just remember that we can't put the genie back in the bottle. Like the A-bomb, this will be a fact of life in the near future.
All one can do is adapt to it.
There is a key difference though.
The A-bomb wasn't a technology that, as the arms race advanced far enough, would develop the capacity to be anything from a conscientious objector to a usurper.
There’s a prisoner’s dilemma to arms races that in this case is going to lead to world powers effectively paving the path to their own obsolescence.
In many ways, that’s going to be uncharted territory for us all (though not necessarily a bad thing).
The only fair approach would be to start with the police instead of the army.
Why test this on everybody else except your own? On top of that, AI might even do a better job than the US police
It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.
Cries in Screamers.
Well that’s a terrifying thought. You guys bunkered up?
It’s not terrifying whatsoever. In an active combat zone there are two kinds of people - enemy combatants and allies.
You throw an RFID chip on allies and boom, you're done.
Civilians? Never heard of 'em!
The vast majority of war zones have 0 civilians.
Perhaps your mind is too caught up in the Iraq/Afghanistan occupations.
Really? Like where are you thinking about?
The entire Ukrainian front.
I think you’re forgetting a very important third category of people…
I am not. Turns out you can pick and choose where and when to use drones.
Preeeetty sure you are. And if you can, you should probably let the US military know they can do that, because they haven’t bothered to so far.
They know. It is not important to them.
These are very different drones. The drones you're thinking of have pilots. They also minimize casualties, civilian and non-civilian, so you're not really mad at the drones but at the policy behind their use. Specifically, when air strikes can and cannot be authorized.
So now you acknowledge that third type of person lol. And that's the thing about new drones: it's not great that they can authorize themselves lol.
And that’s the thing about new drones, it’s not great that they can authorize themselves lol
I very strongly disagree with this statement. I believe a drone “controller” attached to every unit is a fantastic idea, and that drones having a minimal capability to engage hostile enemies without direction is going to be hugely impactful.
I'm sorry, I can't get past the "autonomous AI weapons killing humans" part.
That’s fucking terrifying.
I’m sorry but I just don’t see why a drone is scarier than a missile strike.
Inshallah
And that's how you guarantee conflict for generations to come!
Netflix has a documentary about it, it’s quite good. I watched it yesterday, but forgot its name.
Black Mirror?
Unknown: Killer Robots ?
Yes, that was it. Quite shocking to watch. I think these things will be very real in maybe ten years. I'm quite afraid of it.