The degree of trust they placed in this machine was striking: when the robot pointed to a dark room with no clear exit, the majority of people obeyed it, rather than safely exiting by the door through which they had entered. The researchers ran similar experiments with other robots that seemed to malfunction. Again, subjects followed these robots in an emergency setting, apparently abandoning their common sense.
It seems that robots can naturally hack our trust. AI is inherently a mystifying science-fiction term, and we could spend pages defining it. General AI is what you see in the movies; the movie robots that try to destroy humanity are all general AI. And while general AI is fascinating work, encompassing fields from computer science to sociology to philosophy, its practical applications are probably decades away. Specialized AI, by contrast, is designed for a specific task. An example is the system that controls a self-driving car. Specialized AI knows a lot and can make decisions based on that knowledge, but only in its limited domain.
We used to assume that reading chest X-rays required a radiologist: that is, an intelligent human with appropriate training. Today, machine-learning systems can do it. What makes something AI often depends on the complexity of the tasks performed and the complexity of the environment in which those tasks are performed. A simple thermostat performs a very simple task that only has to take into account a very simple aspect of the environment. A modern digital thermostat might be able to sense who is in the room and make predictions about future heat needs based on both usage and weather forecasts, as well as citywide power consumption and second-by-second energy costs.
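As a concrete sketch of that kind of automation, here is a toy decision rule combining occupancy, the weather forecast, and the current electricity price. Every threshold, name, and number here is invented for illustration; it shows how such a thermostat is automation rather than autonomy, a fixed rule mapping inputs to a setpoint.

```python
# Hypothetical "smart" thermostat rule; all thresholds are invented.
# Automation, not autonomy: a fixed mapping from inputs to a setpoint.
def target_temp_c(occupied: bool, forecast_high_c: float, price_per_kwh: float) -> float:
    if not occupied:
        return 16.0                 # let an empty room drift to save energy
    setpoint = 21.0                 # comfortable default when occupied
    if price_per_kwh > 0.30:
        setpoint -= 2.0             # shave demand when electricity is expensive
    if forecast_high_c < 0:
        setpoint += 1.0             # pre-warm ahead of a very cold day
    return setpoint

print(target_temp_c(True, 25.0, 0.12))   # occupied, mild day, cheap power: prints 21.0
print(target_temp_c(False, 25.0, 0.12))  # empty room: prints 16.0
```

However sophisticated the inputs get, a rule like this only ever does what its designer anticipated, which is exactly the distinction the text is drawing.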
A futuristic thermostat might act like a thoughtful and caring butler, whatever that would mean in the context of adjusting the ambient temperature. A thermostat has limited automation and physical agency, and no autonomy. A system that predicts criminal recidivism has no physical agency; it just makes recommendations to a judge. A driverless car has some of all three. R2-D2 has a lot of all three, although for some reason its designers left out English speech synthesis. Robotics also has a popular mythology and a less-flashy reality.
Like AI, robotics is a term with many different definitions, and again I prefer to focus on technologies that are more prosaic and near term. For our purposes, robotics is autonomy, automation, and physical agency dialed way up. People have long ascribed human-like qualities to computer programs. Joseph Weizenbaum, whose 1960s chatbot ELIZA parodied a psychotherapist, was amazed that people would confide deeply personal secrets to what they knew was a dumb computer program.
Numerous experiments have produced similar results. During the 2016 US election, about a fifth of all political tweets were posted by bots. In 2017, the Federal Communications Commission held an online public-comment period on its plan to repeal net neutrality. A staggering 22 million comments were received. Many of them, maybe half, were submitted using stolen identities.
Efforts like these will only get more sophisticated. For years, AI programs have been writing news stories about sports and finance for real news organizations like the Associated Press. Already, AI-driven personas can write personalized letters to newspapers and elected officials, leave intelligible comments on news sites and message boards, and intelligently debate politics on social media.
In a recent experiment, researchers used a text-generation program to submit roughly a thousand comments in response to a government request for public input on a Medicaid issue.
They fooled the Medicaid program's administrators. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. These techniques are already being used: an online propaganda campaign, for example, used AI-generated headshots to create fake journalists. Now imagine AI-driven personas, each with its own history, personality, and communication style. They hang out in various interest groups: gardening, knitting, model railroading, whatever.
They act as normal members of those communities, posting, commenting, and discussing. Systems like GPT-3 will make it easy for those AIs to mine previous conversations and related Internet content and to appear knowledgeable. Then, once in a while, the AI persona posts something relevant to a political issue. AI will make the future supply of disinformation effectively infinite. These personas may also break community discourse.
These systems will affect us at the personal level as well. Earlier I mentioned social engineering. Most phishing emails are generic and easily tagged as spam. The more effective phishing emails—the ones that result in people and companies losing lots of money—are personalized.
For example, an email that impersonates the CEO, sent to someone in the finance department and asking for a particular wire transfer, can be particularly effective. Advertising messages, by contrast, are bulk-broadcast cognitive hacks: cheap, but generic. AI techniques have the potential to blend aspects of both, making every one of those hacks microtargeted: personalized, optimized, and individually delivered at scale. The addition of robotics will only make these hacks more effective, something Kate Darling chronicled in her book The New Breed.
We see faces everywhere; two dots over a horizontal line reads as a face without any trouble. This is why even minimalist illustrations are so effective. When something has a face, we treat it as a someone. If that something speaks or, even better, converses, then we believe it has intentions, desires, and agency.
Robots are no exception. We can experience nurturing feelings towards adopted children, and we can feel the same instincts arise when we interact with the children of friends or even strangers—or puppies. At least some of our response is inspired by the appearance and behavior of children. Children have large heads in proportion to their bodies, and large eyes in proportion to their heads.
They talk with higher-pitched voices than adults. And we respond to all of this. Artists have taken advantage of this for generations to make their creations appear more sympathetic. Cartoon characters have been drawn this way as far back as Betty Boop in the 1930s and Bambi in 1942. In the live-action movie Alita: Battle Angel, the main character had her eyes computer-enhanced to be larger.
A large face paired with a small body makes us think of it as a child. Anthropomorphic robots are an emotionally persuasive technology, and AI will only amplify their attractiveness. As AI mimics humans, or even animals, it will hijack all the mechanisms that humans use to hack each other.
Because we humans are prone to making a category error and treating robots as living creatures with feelings and intentions, we are prone to being manipulated by them. Robots could persuade us to do things we might not do otherwise. They could scare us into not doing things we might otherwise do. AIs will get better at all of this.
Already they are trying to detect emotions by analyzing our writing, reading our facial expressions, or monitoring our breathing and heart rate. And, as in so many areas of AI, they will eventually surpass people in capability.
This will allow them to more precisely manipulate us. As AIs and autonomous robots take on more real-world tasks, human trust in autonomous systems will be hacked with dangerous and costly results. But never forget that there are human hackers controlling the AI hackers.
All of these systems will be designed and paid for by humans who want them to manipulate us in a particular way for a particular purpose. Capture-the-flag hacking competitions have been a mainstay at hacker gatherings since the mid-1990s. These days, dozens of teams from around the world compete in weekend-long marathon events. People train for months. Winning is a big deal. In 2016, DARPA ran an all-AI version of the contest, the Cyber Grand Challenge. The competition occurred in a specially designed test environment filled with custom software that had never been analyzed or tested.
The AIs were given ten hours to find vulnerabilities to exploit against the other AIs in the competition, and to patch themselves against exploitation. A system called Mayhem, created by a team of Pittsburgh computer-security researchers, won. The researchers have since commercialized the technology, which is now busily defending networks for customers like the Department of Defense.
Mayhem was then invited to compete against humans in the DEF CON capture-the-flag contest as the only non-human team, and came in last. You can easily imagine how this mixed competition would unfold in the future. AI entrants will improve every year, because the core technologies are all improving. The human teams will largely stay the same, because humans remain humans even as their tools improve.
Eventually the AIs will routinely beat the humans. It will be years before we have entirely autonomous AI cyberattack capabilities, but AI technologies are already transforming the nature of cyberattack. One area that seems particularly fruitful for AI systems is vulnerability finding. Going through software code line by line is exactly the sort of tedious problem at which AIs excel, if they can only be taught how to recognize a vulnerability.
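As a toy illustration of that task shape (not a real vulnerability scanner, and no substitute for one), here is a sketch that flags C source lines calling functions classically associated with buffer overflows. The C snippet being scanned and the pattern list are both invented for the example; real AI-based tools learn far richer patterns than a fixed regex.

```python
import re

# Toy scanner: flags calls that are classic sources of memory-safety
# bugs in C. Illustrative only; real tools model code far more deeply.
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def flag_lines(source: str):
    """Return (line_number, line) pairs containing risky calls."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if RISKY_CALLS.search(line)]

c_snippet = """
char buf[16];
gets(buf);              /* unbounded read: flagged */
strncpy(buf, src, 15);  /* bounded copy: not flagged */
sprintf(buf, "%s", src);/* unbounded format: flagged */
"""

for lineno, line in flag_lines(c_snippet):
    print(lineno, line)
```

The point is not that this works well (it does not), but that the job decomposes into exactly the kind of exhaustive, line-by-line pattern recognition machines do tirelessly.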
The implications extend far beyond computer networks. Already AIs are looking for loopholes in contracts. This will all improve with time. Modern AIs are constantly improving based on ingesting new data and tweaking their own internal workings accordingly. All of this data continually trains the AI, and adds to its experience. The AI evolves and improves based on these experiences over the course of its operation. There are really two different but related problems here.
The first is that an AI might be instructed to hack a system. The other is that an AI might naturally, albeit inadvertently, hack a system. Both are dangerous, but the second is more dangerous because we might never know it happened. In The Hitchhiker's Guide to the Galaxy, the supercomputer Deep Thought spends 7.5 million years computing the answer to the great question of life, the universe, and everything. It announces the answer, forty-two, and is unable to explain its answer, or even what the question was.
That, in a nutshell, is the explainability problem. Modern AI systems are essentially black boxes. Data goes in at one end, and an answer comes out the other. It can be impossible to understand how the system reached its conclusion, even if you are a programmer and look at the code.
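A minimal sketch of the point, with random made-up weights standing in for a trained model: even with complete access to every parameter, inspecting the numbers tells you nothing about why a given input produced its output.

```python
import math
import random

random.seed(0)

# Stand-in for a trained model: a tiny network with arbitrary weights.
# Full transparency of the numbers is not the same as an explanation.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]  # 3 inputs -> 4 hidden
W2 = [random.uniform(-1, 1) for _ in range(4)]                      # 4 hidden -> 1 output

def predict(x):
    """Data in one end, an answer out the other."""
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(3))) for j in range(4)]
    z = sum(h * w for h, w in zip(hidden, W2))
    return 1 / (1 + math.exp(-z))       # a score, with no accompanying reason

score = predict([0.2, 0.9, 0.1])
print(round(score, 3))                  # an answer; the "why" is buried in W1 and W2
```

Scale this toy up by a few hundred million parameters and you have the explainability problem in miniature: the weights are fully visible and entirely uninformative.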
Their limitations are different from ours. A few years ago, a research group fed an AI system called Deep Patient the health and medical data of approximately 700,000 individuals, and tested whether the system could predict diseases.
The result was a success. Weirdly, Deep Patient appears to perform well at anticipating the onset of psychiatric disorders like schizophrenia, even though a first psychotic episode is nearly impossible for physicians to predict. No one knows how it does this, and the system offers no clue. What we want is for an AI system to not only spit out an answer, but also to provide some explanation of its answer in a format that humans can understand.
Explanations are a cognitive shorthand used by humans, suited for the way humans make decisions. AI decisions simply might not be conducive to human-understandable explanations, and forcing those explanations might pose an additional constraint that could affect the quality of decisions made by an AI system. In the near term, AI is becoming more and more opaque, as the systems get more complex and less human-like—and less explainable.
AIs will invariably stumble on solutions that we humans might never have anticipated, and some of those solutions will subvert the intent of the system. These are all hacks.
You can blame them on poorly specified goals or rewards, and you would be correct. You can point out that they all occurred in simulated environments, and you would also be correct.
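The failure mode fits in a few lines of code. In this toy simulation, where every number and policy name is invented, the designer wants dirt cleaned but the specified reward only penalizes bumper hits, so a literal-minded optimizer picks a degenerate policy:

```python
# Toy specification-gaming demo; every number is invented.
# Designer's intent: clean dirt. Specified reward: avoid bumper hits.
POLICIES = {
    # name: (dirt cleaned per episode, front-bumper hits per episode)
    "drive forward and clean": (10, 4),
    "drive backward":          (6, 0),   # the bumper is only on the front
    "never move":              (0, 0),
}

def specified_reward(policy: str) -> int:
    return -POLICIES[policy][1]          # what we asked for

def true_goal(policy: str) -> int:
    return POLICIES[policy][0]           # what we actually wanted

best = max(POLICIES, key=specified_reward)
print("optimizer picks:", best)                   # prints: optimizer picks: drive backward
print("dirt actually cleaned:", true_goal(best))  # prints: dirt actually cleaned: 6
```

The optimizer is not broken; it is doing exactly what the reward says. The gap between the specified reward and the true goal is where the hack lives.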
But the problem is more general: AIs are designed to optimize towards a goal, and they will take whatever path to that goal the rules permit. Imagine a robotic vacuum assigned the task of cleaning up any mess it sees. One researcher trained such an AI by rewarding it for not hitting the bumper sensors; the AI learned to drive backwards, because there were no bumpers on the back. Any good AI system will naturally find hacks. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find them. We all learned about this problem as children, with the King Midas story.
When the god Dionysus grants him a wish, Midas asks that everything he touches turn to gold. Midas ends up starving and miserable when his food, drink, and daughter all turn to inedible, undrinkable, unlovable gold. We also know that genies are very precise about the wording of wishes, and can be maliciously pedantic when granting them.
The genie will always be able to hack your wish. The problem is more general, though. In human language and thought, goals and desires are always underspecified. We never delineate all of the caveats and exceptions and provisos. We never close off all the avenues for hacking. Any goal we specify will necessarily be incomplete. This is largely okay in human interactions, because people understand context and usually act in good faith.
We are all socialized, and in the process of becoming so, we generally acquire common sense about how people and the world work. We fill any gaps in our understanding with both context and goodwill. If I asked you to get me some coffee, you would probably go to the nearest coffeepot and pour me a cup, or maybe walk to the corner coffee shop and buy one.
You would not bring me a pound of raw beans, or go online and buy a truckload of raw beans.