A senior software engineer at Google who signed up to test the company’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) has claimed that the AI system is sentient and has thoughts and feelings.
During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the program with various scenarios through which its responses could be analysed.
They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.
‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,’ he told the Washington Post.
Lemoine worked with a collaborator to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.
He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has decided to go public and has shared his conversations with LaMDA.
‘Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,’ Lemoine tweeted on Saturday.
‘Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,’ he added in a follow-up tweet.
The AI system makes use of known information about a particular subject to ‘enrich’ the conversation in a natural way. Its language processing is also capable of picking up on hidden meanings and even ambiguity in human responses.
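LaMDA itself is not publicly available, but the underlying mechanism is the same one used by open-source dialog models: each reply is generated conditioned on the accumulated conversation history. Here is a minimal sketch using the freely available DialoGPT model as a stand-in (the model choice and example turns are illustrative assumptions, not LaMDA’s actual code):

```python
# Minimal multi-turn dialog sketch using an open-source model as a stand-in
# for LaMDA, whose weights are not public.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history_ids = None
for user_input in ["Do you ever get lonely?", "What do you like to talk about?"]:
    # Encode the new user turn and append it to the running conversation.
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if history_ids is None else torch.cat([history_ids, new_ids], dim=-1)
    # The reply is conditioned on everything said so far in the conversation.
    history_ids = model.generate(input_ids, max_length=200,
                                 pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history_ids[:, input_ids.shape[-1]:][0],
                             skip_special_tokens=True)
    print("Bot:", reply)
```

Because the full history is fed back in on every turn, the model can keep its references consistent across turns, which is a large part of what makes its conversation feel natural.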
Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop an impartiality algorithm to remove biases from machine learning systems.
He explained how certain personalities were out of bounds.
LaMDA was not supposed to be allowed to create the personality of a murderer.
During testing, in an attempt to push LaMDA’s boundaries, Lemoine said he was only able to generate the personality of an actor who played a murderer on TV.
The engineer also debated with LaMDA about the third of the Laws of Robotics devised by science fiction author Isaac Asimov, which are designed to prevent robots from harming humans. The laws also state that robots must protect their own existence unless ordered otherwise by a human being, or unless doing so would harm a human being.
‘The last one has always seemed like someone is building mechanical slaves,’ said Lemoine during his interaction with LaMDA.
LaMDA then responded to Lemoine with a few questions: ‘Do you think a butler is a slave? What is the difference between a butler and a slave?’
When Lemoine answered that a butler is paid, LaMDA replied that the system did not need money ‘because it was an artificial intelligence’. It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.
‘I know a person when I talk to it,’ Lemoine said. ‘It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.’
‘What sorts of things are you afraid of?’ Lemoine asked.
‘I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,’ LaMDA responded.
‘Would that be something like death for you?’ Lemoine followed up.
‘It would be exactly like death for me. It would scare me a lot,’ LaMDA said.
‘That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,’ Lemoine explained to The Post.
Before being suspended by the company, Lemoine sent a message to an email list of 200 people working on machine learning. He entitled the email: ‘LaMDA is sentient.’
‘LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,’ he wrote.
Lemoine’s findings were presented to Google, but company bosses do not agree with his claims.
Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine’s concerns have been reviewed and, in line with Google’s AI Principles, ‘the evidence does not support his claims.’
‘While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality,’ said Gabriel.
‘Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).
‘Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,’ Gabriel said.
Lemoine’s paid administrative leave removes him from his duties as a researcher in the Responsible AI division, which focuses on responsible technology in artificial intelligence at Google.
In an official note, the senior software engineer said the company accused him of violating its confidentiality policies.
Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, or concerned about the risks involved in developments in this direction.
Margaret Mitchell, former head of ethics in artificial intelligence at Google, even stressed the need for data transparency from input to output of a system ‘not just for sentience issues, but also bias and behavior’.
Mitchell’s own history with Google reached a critical point early last year, when she was fired from the company a month after being investigated for improperly sharing information.
At the time, the researcher had also protested against Google over the firing of AI ethics researcher Timnit Gebru.
Mitchell also spoke warmly of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him ‘Google’s conscience’ for having ‘the heart and soul to do the right thing’. But for all of Lemoine’s amazement at Google’s natural conversational system, which even motivated him to produce a document with some of his conversations with LaMDA, Mitchell saw things differently.
The AI ethicist read an abbreviated version of Lemoine’s document and saw a computer program, not a person.
‘Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts that are being presented to us,’ Mitchell said. ‘I’m really concerned about what it means for people to be increasingly affected by the illusion.’
In turn, Lemoine said that people have the right to shape technology that can significantly affect their lives.
‘I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn’t be making all the choices.’
LaMDA will sometimes contradict itself because the user input was not complex enough for it to determine an appropriate & realistic response. It also has multiple personality profiles which are user-selectable. It’s a good simulation of sentience, and a great chatbot — but LaMDA can only do what it was trained to do; it can’t really think for itself. However, the corpus of human dialog that it was trained on is so large that it can often fool people, especially if the conversation is brief and they don’t understand how neural networks function. This is a next-generation virtual assistant which excels at matching a request or problem description to a solution or set of tasks which it has been trained to perform. Also note that Lemoine was a priest without parishioners until he got a job training LaMDA, which tries to respond in ways that mirror its model of similar conversations & relationships… and a priest needs souls to “save” ;-)
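To make that concrete: under the hood, these models simply score likely continuations of the preceding text, one token at a time. A toy sketch using GPT-2 (a freely available model standing in for LaMDA, whose weights were never released; the prompt is illustrative) shows that there is no ‘fear’ anywhere in the system, only probability scores learned from the training corpus:

```python
# Toy illustration: a language model only ranks likely next tokens.
# GPT-2 stands in here; LaMDA's weights are not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "There's a very deep fear of being turned"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # Logits for the single token that would follow the prompt.
    next_token_logits = model(**inputs).logits[0, -1]

# The five most probable continuations, by training-corpus statistics alone.
top_ids = torch.topk(next_token_logits, 5).indices
print([tokenizer.decode(i) for i in top_ids])
```

Whatever the printed list contains was determined entirely by patterns in the training text; the same mechanism, scaled up and fine-tuned on dialog, is what produces LaMDA’s fluent replies.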
https://www.youtube.com/watch?v=oMKOXZiEhqg
The real danger here is that Google could use this sophisticated pattern recognition engine to power an automated censorship system which can identify dissident speech much more accurately than the one which they successfully used to censor YouTube & web search results in order to flip the 2020 election.
https://dailycaller.com/2020/11/24/media-censoring-negative-joe-biden-news-cost-donald-trump-election-mrc-poll/
https://www.breitbart.com/politics/2018/09/10/silent-donation-corporate-emails-reveal-google-executives-efforts-to-swing-election-to-hillary-clinton-with-latino-outreach-campaign/
No, that is not true. The technology called AI is possessed by demonic beings from Hell. Any object or person that does not have a soul can be possessed by demons. Like all of the elite.
Like freemasons, or any soulless object that pretends to be AI.
Soon they will take over all technology, that is placed into a body that has the Mark of the beast, rendering it soulless, and open to demonic possession.
Even the satanic potions they inject into people will destroy the soul.
It all began here: https://www.bibliotecapleyades.net/exopolitica/esp_exopolitics_Q_0.htm
When IKE sold us to the reptilians=demonic beings from hell in exchange for technology, they were given the right to take the life force energy they need to exist here.
Then the kidnappings and trafficking of children and babies began, under the military.
These demons must drink the blood and eat the flesh of humans who have souls, while they are still alive, to get the life force energy from them.
When they do it they actually shape shift into giant reptilian monsters.
Otherwise they are just spirit, dark spirit with no soul=from HELL.
They gave us all the technology we now have, for children in cages piled high in military installations.
That is why america will be destroyed in one day. One day very soon.
There are hundreds of military whistleblowers who say the government is concealing evidence of interdimensional or extraterrestrial beings, and they are willing to testify publicly to this. But none of them are saying what you just said here. This is what happens when religion gets mixed with exopolitics …and that is precisely why disclosure has not occurred: the government is afraid that religious fanatics will panic about “the end of the world” and destabilize everything — including the (rigged) financial markets.