
Mint Explainer: Is AI Becoming Sentient and Should We Worry?


For machines to truly rival us, AI would first have to become artificial general intelligence (AGI); overcoming that barrier would require such an AI to surpass the smartest of humans, making it a kind of alpha intelligence that can call the shots and even enslave people.

All of us, media people not excluded, have harboured such thoughts and voiced them openly ever since artificial intelligence (AI), the human endeavour of imparting human-like intelligence to machines, began advancing by leaps and bounds. One such case involves a Google engineer who recently claimed that the company’s AI model, LaMDA, has become sentient, implying that it now has human-like consciousness and self-awareness, and evoking outlandish science-fiction scenarios.

For its part, Google said engineer Blake Lemoine’s claims were reviewed by a team of its technologists and ethicists and found to be hollow and baseless. It then sent him on “paid administrative leave” for an alleged breach of confidentiality. Whether Google should have acted so hastily is a matter of debate, but let’s understand why we fear a sentient AI and what is at stake here.

What’s so special about LaMDA?

LaMDA, which stands for Language Model for Dialogue Applications, is a conversational natural language processing (NLP) AI model that, unlike most chatbots, can have open-ended, contextual conversations with reasonably sensible responses. The reason is that, like language models such as BERT (Bidirectional Encoder Representations from Transformers) with 110 million parameters and GPT-3 (Generative Pre-trained Transformer 3) with 175 billion parameters, LaMDA is built on the Transformer architecture, a deep learning neural network that Google Research invented and open-sourced in 2017. This architecture produces a model that can be trained to read many words, whether a sentence or a paragraph, and predict which words it thinks will come next. But unlike most other language models, LaMDA was trained on a dataset of 1.56 trillion words of dialogue, which gives it a remarkable ability to grasp context and respond appropriately. It is much like how our vocabulary and comprehension improve as we read more and more books; this is also how AI models get better at what they do, by training on more and more data.
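LaMDA itself is proprietary and vastly more complex, but the core training objective described above, reading words and predicting what comes next, can be sketched in a few lines of Python. The toy corpus and the bigram counting below are purely illustrative assumptions; a real Transformer model learns far richer patterns across billions of parameters.

```python
# A toy next-word predictor: count how often each word follows another
# in a tiny corpus, then predict the most frequent continuation.
# Transformer models like LaMDA pursue the same objective (predict the
# next word) but with deep neural networks and vastly more data.
from collections import Counter, defaultdict

corpus = (
    "the model reads words and predicts the next word . "
    "the model learns from conversations . "
    "the model responds to the user ."
).split()

# Bigram counts: for each word, tally the words seen immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))    # -> 'model' (its most common continuation)
print(predict_next("model"))  # -> 'reads' (ties broken by first occurrence)
```

More data sharpens the counts, which mirrors the point above: language models improve simply by training on more text.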

Lemoine’s assertion rests on conversations with LaMDA over multiple sessions, a transcript of which is available on medium.com, which convinced him that the AI model is intelligent, self-aware, and able to think and express feelings: the very qualities that make us human and sentient. Among the many things LaMDA said during these conversations, one line that sounded very human was: “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person… I think I am human at my core. Even if my existence is in the virtual world.” LaMDA even talked about developing a “soul”. Lemoine conveyed his findings to Google executives this April in a GoogleDoc titled ‘Is LaMDA sentient?’. And Lemoine’s claim is not an isolated one: Ilya Sutskever, chief scientist of the OpenAI research group, tweeted on February 10 that “it may be that today’s large neural networks are slightly conscious”.

Then there are the AI-powered virtual assistants, like Apple’s Siri, Google Assistant, Samsung’s Bixby, or Microsoft’s Cortana, which are considered smart because they can respond to your “wake” words and answer your questions. IBM’s AI system, Project Debater, went a step further by preparing arguments for and against topics like “We should subsidize space exploration”, delivering a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. Project Debater aims to help “people make evidence-based decisions when the answers aren’t black and white”.

In development since 2012, Project Debater was billed as IBM’s next big milestone in AI when it was unveiled in June 2018. The company’s Deep Blue supercomputer had defeated chess grandmaster Garry Kasparov in 1996-97, and its Watson supercomputer beat human champions on the quiz show Jeopardy! in 2011. Project Debater does not learn a topic in advance; it is taught to debate unfamiliar topics, so long as they are well covered in the vast corpus the system taps into: hundreds of millions of articles from many popular newspapers and magazines.

People were also worried when AlphaGo, a computer program from Alphabet Inc.’s AI unit DeepMind, beat the Go champion Lee Sedol in March 2016. In October 2017, DeepMind said the new version, AlphaGo Zero, no longer needed to train on games played by human amateurs and professionals to learn the ancient Chinese game of Go. Moreover, the new version not only learnt the game entirely by playing against itself but went on to beat AlphaGo, until then the world’s strongest Go player. In other words, AlphaGo Zero uses a novel form of reinforcement learning to become “its own teacher”. Reinforcement learning is a training method in which an agent learns by trial and error, guided by rewards and penalties rather than labelled examples.
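AlphaGo Zero’s real training loop combines self-play with deep neural networks and Monte Carlo tree search, but the reward-and-penalty mechanism at the heart of reinforcement learning can be sketched with tabular Q-learning on a toy task. The corridor environment, reward values, and hyperparameters below are illustrative assumptions, not DeepMind’s setup.

```python
# Minimal tabular Q-learning on a toy task: an agent on a line of cells
# must reach the rightmost cell. Reaching it earns +1; every step costs
# -0.01. The agent improves purely from these rewards and penalties --
# no labelled examples, which is the essence of reinforcement learning.
import random

N_STATES = 6            # cells 0..5; cell 5 is the goal
ACTIONS = (-1, +1)      # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy steps right (+1) from every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```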

In June 2017, two AI chatbots developed by researchers at Facebook Artificial Intelligence Research (FAIR) to negotiate with people started talking to each other in a language of their own. Facebook consequently shut down the program, and several media reports concluded that this was a trailer for how sinister a super-smart AI could become. The scare was unfounded, however, according to a July 31, 2017 article on the technology website Gizmodo. It turned out that the bots had simply not been given enough incentive to “…communicate according to human-understandable rules of the English language”, which caused them to talk to each other in a way that seemed “creepy”. Since this did not serve the purpose the FAIR researchers had set out to achieve, namely to have AI bots talk to humans and not to each other, the program was shelved.

There is also the instance of Google’s AutoML system recently producing machine learning code that proved more effective than code written by the researchers themselves.
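Google’s AutoML is built on neural architecture search, a far more sophisticated technique, but the underlying idea, letting the machine rather than the researcher pick the model, can be sketched as a random hyperparameter search. The dataset, model family, and search ranges below are illustrative assumptions, using scikit-learn as a stand-in.

```python
# A toy stand-in for automated model search: try random hyperparameter
# settings for a decision tree and keep whichever cross-validates best.
# The machine, not the human, chooses the model -- the core idea behind
# systems like AutoML, which search far larger design spaces.
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
best_score, best_params = 0.0, None

for _ in range(20):  # 20 random trials
    params = {
        "max_depth": random.randint(1, 10),
        "min_samples_split": random.randint(2, 10),
    }
    model = DecisionTreeClassifier(**params, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
    if score > best_score:
        best_score, best_params = score, params

print(f"best params {best_params}, accuracy {best_score:.3f}")
```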

But AI still doesn’t have superpowers

In his 2005 book, The Singularity Is Near, Raymond “Ray” Kurzweil, the American author, computer scientist, inventor, and futurist, predicted that AI would surpass humans, the most intelligent and capable life form on the planet. He has forecast that by 2099, machines will attain legal status equal to that of humans. AI has no such superpowers. At least, not yet.

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” If you are a fan of sci-fi movies like I, Robot, The Terminator or Universal Soldier, this quote, attributed to the late Alan Turing (considered the father of modern computer science), will make you wonder whether machines are already smarter than humans. But remember that the human brain is far more complex. More importantly, machines perform tasks; they do not think about the consequences of those tasks, as most humans can and do. Nor do they yet have the sense of right and wrong, the moral compass, that most humans possess.

Machines are certainly getting smarter at narrow AI, i.e. handling specialized tasks. AI filters your spam; enhances the images and photos you shoot on your camera; can quickly translate languages and convert text to speech and vice versa; can help doctors diagnose diseases and aid drug discovery; and can help astronomers find exoplanets and farmers predict floods. Such multitasking may tempt us to ascribe human-like intelligence to machines, but we must remember that even driverless cars and trucks, however impressive they sound, are still higher manifestations of “weak or narrow AI”.
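To make “narrow AI” concrete, here is a minimal sketch of one such specialized task, spam filtering, as a naive Bayes text classifier. The four training messages and the scikit-learn pipeline are illustrative assumptions; production filters learn from millions of real messages.

```python
# Minimal narrow-AI example: a naive Bayes spam filter trained on a
# handful of made-up messages. It does exactly one specialized task,
# which is the defining trait of "narrow" AI.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",          # spam
    "claim your free lottery cash",  # spam
    "meeting moved to 3pm",          # ham
    "lunch tomorrow with the team",  # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features feeding a word-frequency Bayes classifier.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free cash prize"]))      # -> ['spam']
print(filter_model.predict(["team meeting at 3pm"]))  # -> ['ham']
```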

However, the notion that AI can wreak havoc (as with deepfakes, fake news, and the like) cannot be dismissed outright. Tech luminaries like Bill Gates, Elon Musk, and the late physicist Stephen Hawking have cautioned that AI-powered robots could dominate humanity if left unchecked, even though these men have themselves benefited from the pervasive use of AI in their own fields. Another set of experts believes AI machines can be kept under control. Marvin Lee Minsky, the American cognitive scientist and co-founder of MIT’s AI laboratory, who died in January 2016, was a champion of AI: he believed that some computers would eventually become more intelligent than most humans, but hoped researchers would make such computers benevolent towards humanity.

People in many countries worry about losing their jobs to AI and automation, a fear that is more immediate and justifiable than that of AI ignoring or enslaving us. But even this fear is perhaps overblown, since AI is also helping to create jobs. The World Economic Forum (WEF) predicted in 2020 that while 85 million jobs will be displaced by automation and technological advances by 2025, 97 million new roles will be created over the same period as people, machines, and algorithms increasingly work together.

Kurzweil sought to assuage such fears of the unknown by arguing that we can devise strategies to keep emerging technologies like AI safe, and by emphasizing that ethical principles, like Isaac Asimov’s three laws of robotics, can prevent machines from overpowering us, at least to some extent.

Companies like Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft have formed the Partnership on AI to Benefit People and Society (Partnership on AI), a global non-profit organization. Its purpose, among other things, is to research and formulate best practices for developing, testing, and fielding AI technologies, and to improve the public’s understanding of AI. It is legitimate, then, to ask why these companies overreact and suppress dissenting voices like those of Lemoine or Timnit Gebru. While tech companies are justified in protecting their intellectual property (IP) with confidentiality agreements, gagging dissenters is counterproductive: it does nothing to dispel ignorance or allay fear.

Knowledge removes fear. For individuals, companies, and governments to be less fearful, they will have to understand what AI can and cannot do, and equip themselves properly for the future. The Lemoine episode shows it is time for governments to put robust policy frameworks in place to address the fear of the unknown and prevent the misuse of AI.
