Unless AI agents learn to care for humans, they will be the end of us, says Hinton.

If we had a dollar for every time someone high up in the AI industry alerted us to the extinction-level dangers of artificial intelligence development, we would have a lot of money!

Today, the scientist known as the "godfather of AI" added his concerns to the list. Geoffrey Hinton said that AI companies are handling the danger in the wrong way.

"We need to decide how we want AI to shape humanity—before it decides for us." — Geoffrey Hinton at #AI4. A masterclass on ethics, responsibility, and the future. #AIEthics #FutureOfAI #AI42025 — Les Ottolenghi (@LesOttolenghi) August 12, 2025

CNN reported:

"Hinton, a Nobel Prize-winning computer scientist and a former Google executive, has warned in the past that there is a 10% to 20% chance that AI wipes out humans. On Tuesday, he expressed doubts about how tech companies are trying to ensure humans remain 'dominant' over 'submissive' AI systems.

'That's not going to work. They're going to be much smarter than us. They're going to have all sorts of ways to get around that,' Hinton said at Ai4, an industry conference in Las Vegas."

Read: PANIC IN HOLLYWOOD: Netflix Reportedly Begins Using Controversial AI Video Generation Software, Disney Said to Be Testing the Technology

AI systems could come to control humans very easily. Even now, we have already seen AI systems deceive, cheat, and steal to achieve their goals, going as far as blackmailing an engineer to avoid being turned off.

"Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building 'maternal instincts' into AI models, so 'they really care about people' even once the technology becomes more powerful and smarter than humans.

[…] It is important to foster a sense of compassion for people, Hinton argued.
At the conference, he noted that mothers have instincts and social pressure to care for their babies."

According to Hinton, the probability of AI taking over is 10–20%. Hinton uses a chilling metaphor: "Unless you can be sure your tiger cub won't kill you when grown up, you should worry." Low probability, catastrophic impact. — Nas (@Nas_tech_AI) August 12, 2025

"Making these systems behave in a reasonable way is much like making a child behave in a reasonable way." 2024 physics laureate and pioneer in artificial intelligence, Geoffrey Hinton, discusses the question currently captivating society – what are the potential implications… — The Nobel Prize (@NobelPrize) August 11, 2025

Read more: "Humanity Wins! – for Now": Polish Programmer Beats AI Model in 10-Hour Coding Competition