Explained: What is generative AI, the technology behind ChatGPT, and the role of Google, Microsoft and more
Generative artificial intelligence has become a buzzword this year, capturing the public's imagination and setting off a scramble across the tech industry. Microsoft and Alphabet are launching products built on technology they believe will change the nature of work. Here's everything you need to know about this technology.
What is generative AI?
Like other forms of artificial intelligence, generative AI learns how to act from past data. What sets it apart is that it creates entirely new content, such as text, images and even computer code, based on that training, rather than simply classifying or identifying data as other forms of AI do.
The most famous generative AI application is ChatGPT, a chatbot released late last year by Microsoft-backed OpenAI. The AI powering it is known as a large language model because it takes a text prompt and writes a human-like response from it.
GPT-4, a newer model that OpenAI announced this week, is "multimodal" because it can perceive images as well as text. OpenAI's president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up of a website he wanted to build and generate a real working website from it.
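To make the prompt-and-response idea above concrete, here is a minimal Python sketch of the kind of request an application assembles before sending it to a chat model's API. The message format follows the general shape of OpenAI's chat API, but the model name and prompt here are illustrative placeholders, not details taken from this article; consult the provider's documentation for the real interface.

```python
import json


def build_chat_request(prompt, model="gpt-4"):
    """Build the JSON payload an app would POST to a chat-completion API.

    A large language model receives a list of messages (a text prompt plus
    optional context) and returns a human-like text response. The exact
    field names here mirror OpenAI's chat format but should be treated as
    an illustrative sketch, not a definitive client implementation.
    """
    return json.dumps({
        "model": model,
        "messages": [
            # A system message sets the assistant's behavior.
            {"role": "system", "content": "You are a helpful assistant."},
            # The user message carries the actual text prompt.
            {"role": "user", "content": prompt},
        ],
    })


payload = build_chat_request("Summarize these customer reviews in one sentence.")
print(payload)
```

In a real application this payload would be sent over HTTPS with an API key, and the model's reply would come back as text in the response body.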
What is it good for?
Demonstrations aside, businesses have already put generative AI to work. The technology is helpful for creating a first draft of marketing copy, for instance, though it may require cleanup because it isn't perfect. One example comes from CarMax Inc, which has used a version of OpenAI's technology to summarize thousands of customer reviews and help shoppers decide which used car to buy.
Generative AI can likewise take notes during a virtual meeting, draft and personalize emails, and create slide presentations. Microsoft Corp and Alphabet Inc's Google each demonstrated these features in product announcements this week.
What’s wrong with that?
Nothing, although there are concerns about the technology's potential for abuse. School systems have fretted about students submitting AI-drafted essays, undermining the hard work of studying. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before.
At the same time, the technology itself is prone to mistakes. Factual inaccuracies delivered confidently by the AI, known as "hallucinations," and seemingly erratic responses such as professing love to a user are all reasons why companies have aimed to test the technology before making it widely available.
Is this just about Google and Microsoft?
These two companies are at the forefront of research into and investment in large language models, and they are the biggest players bringing generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Big companies such as Salesforce Inc, as well as smaller ones like Adept AI Labs, are either creating their own competing AI or packaging technology from others to give users new powers through software.
How involved is Elon Musk?
He is one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and AI research being carried out by Tesla Inc, the electric vehicle maker he leads.
Musk has expressed concerns about the future of AI and has called for regulation to ensure the technology's development serves the public interest.
“It’s a pretty dangerous technology. I’m afraid I may have done some work to speed it up,” he said at the end of Tesla Inc’s Investment Day event earlier this month.
“Tesla is doing really well in AI, I don’t know, this stresses me out, don’t know what else to say about it.”