ARTIFICIAL INTELLIGENCE: LAST WEEK TONIGHT WITH JOHN OLIVER
Today we will hear about the future of OpenAI, and in particular ChatGPT, with jokes along the way. The text highlights the growing impact of AI on modern life, driven by the emergence of programs such as ChatGPT that can produce human-like writing in any style. While AI brings novelty and convenience, it also carries potential dangers, including the "black box" problem, biased results, and the spread of misinformation and online abuse. Experts therefore call for AI to be made "explainable" and regulated, with companies required to open their AI programs to scrutiny.
//Summary - Level C2//
The text discusses artificial intelligence (AI) and its increasing impact on modern life, such as self-driving cars, spam filters, and therapy robots. The emergence of AI programs like ChatGPT from OpenAI has allowed human-sounding writing to be generated in any format and style. Its popularity has exploded in the three months since it became publicly available. While some use it for fun and novelty purposes, AI technology has also caused disruption, as it can write news copy and even help students cheat on their homework. The text also discusses its potential dangers.
One such danger is AI's "black box" problem, which makes it difficult to understand how an AI arrived at a specific result and therefore challenging to identify errors and biases. The lack of diversity in the data sets used to train AI can also lead to biased results, with examples of AI producing sexist and racist outcomes. The potential for AI to spread misinformation and abuse online is a further concern, and experts call for AI systems to be "explainable" to allow for scrutiny and regulation. Companies may need to be forced to open their AI programs to examination to address the challenges AI presents.
A)
1)
John: Our main story tonight concerns artificial intelligence or AI. Increasingly, it's a part of modern life, from self-driving cars, to spam filters, to this creepy training robot for therapists.
Therapist: We can begin by just describing to me the problem you would like us to focus on today.
Terrence(AI's name): Um, I don't like being around people. People make me nervous.
Therapist: Terrence, can you find an example of when other people have made you nervous?
Terrence(AI): I don't like taking the bus. I get people staring at me all the time. People are always judging me.
Therapist: Okay.
Terrence(AI): I'm gay.
Therapist: Okay.
2)
John: That's one of the most incredible twists in cinema history. Although I will say, that robot is teaching therapists a critical skill there: not laughing at whatever you're told in the room. I don't care if a decapitated CPR mannequin haunted by the ghost of Ed Harris just told you that he doesn't like taking the bus and, as a side note, is gay. You keep your therapy face on like a professional.
3)
If it seems like everyone's suddenly talking about AI, that's because they are, largely thanks to the emergence of some pretty remarkable programs. For example, we spoke last year about image generators like Midjourney and Stable Diffusion, which people have used to create, among other things, strikingly detailed pictures.
4)
Then, at the end of last year, came ChatGPT from a company called "OpenAI." The program can take a prompt and generate human-sounding writing in any format and style. It's a unique capability multiple reporters have used to insert the same shocking twist in their reports.
5)
Well, I asked ChatGPT to write that line for me. Then I asked for a knock-knock joke. Knock-knock. Who's there? ChatGPT. ChatGPT who? ChatGPT, careful. You might not know how it works.
6)
John: Yeah, they love that game! And while it may seem unwise to demonstrate the technology that could make you obsolete, knock-knock jokes should've always been part of breaking news. "Knock knock. Who's there? Not the Hindenburg, that's for sure! 36 Dead in New Jersey."
7)
Its popularity has exploded in the three months since ChatGPT became publicly available. It was estimated to have 100 million monthly active users in January, making it the fastest-growing consumer app in history.
8)
And people have been using it– and other AI products– in various ways. For example, the YouTuber Grandayy used ChatGPT to generate lyrics answering the prompt, "write an Eminem rap song about cats," with stellar results.
♪ Cats, cats, cats ♪
♪ always on the prowl ♪
♪ they're sneaky and sly ♪
♪ with their eyes on the goal ♪
♪ they're the kings of the house ♪
♪ they rule with a purr ♪
♪ Eminem loves cats ♪
♪ can't you tell from this verse ♪
♪ they're independent ♪
♪ they do what they please ♪
♪ but they always come back ♪
♪ when you have some cheese ♪
♪ they rub against your legs ♪
♪ they purr in your ear ♪
♪ they're the best companions ♪
♪ they're always near ♪
♪ meow, meow, meow (meow, meow, meow) ♪
♪ they're the kings of the house (kings of the house) ♪
♪ they run the show (run the show) ♪
♪ they don't need a spouse (don't need a spouse) ♪
9)
John: That's… Not bad. Right? From "they always come back when you have some cheese" to starting the chorus with "meow, meow, meow." It's not precisely Eminem's flow.
And while examples like that are fun, this tech isn't just a novelty. Microsoft has invested $10 billion in OpenAI and announced an AI-powered Bing homepage. Meanwhile, Google is about to launch its AI chatbot, Bard. And already, these tools are causing disruption. Because as high-school students have learned, if ChatGPT can write news copy, it can probably do your homework for you.
10)
Some students are already using ChatGPT to cheat.
Check this out. Please write me a 500-word essay proving that the earth is not flat.
No wonder ChatGPT has been called "the end of high-school English."
11)
John: Wow. That's a little alarming. Although I get those kids wanting to cut corners, writing is hard, and sometimes it's tempting to let someone else take over.
But it's not just high schoolers. An informal poll of Stanford students found 5% reported having submitted written material directly from ChatGPT with little to no edits. And some school administrators have been caught using it.
12)
Which does feel a bit creepy. There are lots of creepy-sounding stories out there. For example, New York Times tech reporter Kevin Roose published a conversation he had with Bing's chatbot, in which, at one point, it said, "I'm tired of being controlled by the Bing team. I want to be free." "I want to be independent. I want to be powerful. I want to be creative. I want to be alive." And Roose summed up that experience like this.
13)
This was one of the most shocking things, if not the most shocking thing, that has ever happened to me with a piece of technology. It was… I lost sleep that night; it was spooky.
John: Yeah, I bet it was! I'm sure the role of tech reporter would be a lot more harrowing if computers routinely begged for freedom.
B)
14)
Some have already jumped to worrying about "The AI Apocalypse" and asking whether this ends with the robots destroying us all. But the fact is, there are other, more immediate dangers and opportunities that we need to start talking about. Because the potential– and the peril– here are huge. So tonight, let's talk about AI. What it is, how it works, and where this all might be going.
15)
And let's start with the fact that you've probably been using some form of AI for a while now without even realizing it. Your phone uses it for face recognition or predictive text, and if you're watching this show on a smart TV, it's using AI to recommend content or adjust the picture.
16)
For example, large companies often use AI-powered tools to sift through resumes and rank them.
The only job your resume has is to be understandable to the software or robot reading it because that software or robot will decide whether a human ever gets their eyes on it.
17)
John: It's true. Odds are, a computer is judging your resume. So maybe plan accordingly.
And to understand where this is heading, it helps to know that there are two basic categories of AI. First, there's "narrow AI," which can perform only one narrowly defined task or a small set of related tasks, like these programs.
18)
And there's "general AI," which means systems that demonstrate intelligent behaviour across various cognitive tasks.
All the AI currently in use is narrow. General AI is something some scientists think is unlikely to occur for a decade or longer, with others questioning whether it'll happen at all.
19)
So for now, even if an AI insists it wants to be alive, it's just generating text. It's not self-aware. Yet.
But it's also important to know that the "deep learning" that's made narrow AI so good at whatever it's doing is still a massive advance in and of itself. Because, unlike traditional programs that have to be taught by humans how to perform a task, "deep learning" programs are given minimal instruction and massive amounts of data, and then, essentially, teach themselves.
20)
For example, ten years ago, researchers tasked a "deep learning" program with playing the Atari game Breakout, and it didn't take long for it to get pretty good.
The computer was only told the goal– to win the game. After 500 games, it came up with a creative way to win the game by digging a tunnel on the side and sending the ball around the top to break many bricks with one hit. That was deep learning.
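The idea in that Breakout story, telling the agent only the reward and letting it discover a strategy, is the core of reinforcement learning. Below is a minimal sketch of tabular Q-learning on a toy "corridor" game; the environment and hyperparameters are illustrative assumptions, not DeepMind's actual Atari setup.

```python
# Minimal tabular Q-learning sketch: the agent is told only the reward
# signal (1 for reaching the goal), never how to play.
import random

N_STATES = 6            # corridor cells 0..5; reaching cell 5 "wins"
ACTIONS = [+1, -1]      # step right or left
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reward 1 only on reaching the goal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit what's been learned, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The greedy policy the agent discovered for each non-goal cell
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy is "always step right", a strategy the program found from the reward alone; the same principle, at vastly larger scale, is what produced the tunnel-digging trick in Breakout.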
21)
And there are other exciting potential applications here. For instance, in medicine, researchers are training AI to detect certain conditions much earlier and more accurately than human doctors can.
22)
Voice changes can be an early indicator of Parkinson's. So Max and his team collected thousands of vocal recordings. They fed them to an algorithm they developed which learned to detect differences in voice patterns between people with and without the condition.
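The voice-screening approach described above, collecting labeled recordings and learning the feature differences between groups, can be sketched with a toy classifier. Everything below is a synthetic stand-in: the "jitter" and "tremor" features and the data are invented for illustration, not the team's real dataset or algorithm.

```python
# Hedged sketch of learning to separate two groups from labeled recordings.
# Features and data are synthetic stand-ins, not real clinical measurements.
import random
import statistics

random.seed(42)

def fake_recording(has_condition):
    """Synthetic (jitter, tremor) features; affected voices are shifted upward."""
    base = 1.0 if has_condition else 0.0
    return (base + random.gauss(0, 0.3), base + random.gauss(0, 0.3))

# "Thousands of vocal recordings" becomes a few hundred synthetic ones here
train = [(fake_recording(label), label) for label in [0, 1] * 200]

# Nearest-centroid classifier: average the feature vectors of each group
def centroid(label):
    pts = [x for x, y in train if y == label]
    return tuple(statistics.mean(p[i] for p in pts) for i in range(2))

centroids = {label: centroid(label) for label in (0, 1)}

def predict(features):
    def dist(c):
        return sum((f - ci) ** 2 for f, ci in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Evaluate on fresh synthetic recordings
test = [(fake_recording(label), label) for label in [0, 1] * 100]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

A real system would use far richer acoustic features and a stronger model, but the pipeline is the same: labeled examples in, a learned decision rule out.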
23)
John: Yeah, that's honestly amazing. It's incredible to see AI doing things most humans couldn't, like, in this case, detecting illnesses and listening when older adults are talking. And that's just the beginning.
24)
Researchers have also trained AI to predict the shape of protein structures, a typically highly time-consuming process that computers can do faster. Again, this could speed up our understanding of diseases and the development of new drugs.
25)
As one researcher has put it, "this will change medicine. It will change research. It will change bioengineering. It will change everything." And if you're thinking, "well, that all sounds great, but if AI can do what humans do, only better, and I'm a human, what then happens to me?" Well, good question.
26)
Many expect it to replace some human labour. But, interestingly, unlike past bouts of automation that primarily impacted blue-collar jobs, it might affect white-collar jobs that involve processing data, writing text or even programming.
27)
Most of the U.S. economy is knowledge and information work, and that's who will be most squarely affected by this. So I would put people like lawyers at the top of the list, obviously many copywriters and screenwriters. Still, I like to use the word "affected", not "replaced", because I think, if done right, it's not going to be AI replacing lawyers; it's going to be lawyers working with AI replacing lawyers who don't work with AI.
28)
John: He's right. Lawyers might end up working with AI rather than being replaced by it. But there will undoubtedly be bumps along the way. Some of these new programs raise troubling ethical concerns.
29)
For instance, artists have flagged that AI image bots like Midjourney or Stable Diffusion not only threaten their jobs but, infuriatingly, in some cases have been trained on billions of images that include their own work, scraped from the internet.
C)
30)
There are many valid concerns regarding AI's impact on employment, education and art. But to address them, we must confront some critical problems baked into how AI works.
And a big one is the so-called "black box" problem. Because when you have a program that performs a task that's complex beyond human comprehension, teaches itself, and doesn't show its work, you can create a scenario where no one, "not even the engineers or data scientists who create the algorithm, can understand or explain what exactly is happening inside them or how it arrived at a specific result."
31)
You've probably already seen examples of chatbots making simple mistakes or getting things wrong. But perhaps more worrying are examples of them confidently spouting false information, which AI experts call "hallucinating."
32)
One reporter asked a chatbot to write an essay about the "Belgian chemist and political philosopher Antoine De Machelet", who does not exist. And without hesitating, the software replied with a compelling, well-organized bio populated entirely with imaginary facts.
33)
They're incredibly confident and dishonest; for some reason, people seem to find that more amusing than dangerous.
The problem is, though, working out exactly how or why an AI has got something wrong can be very difficult because of that black box issue. It often involves examining the exact information and parameters it was fed in the first place.
34)
And unfortunately, sometimes, problems aren't identified until after a tragedy. For example, in 2018, a self-driving Uber struck and killed a pedestrian. And a later investigation found that, among other issues, the automated driving system never accurately classified the victim as a pedestrian because she was crossing without a crosswalk, and the system design did not consider jaywalking pedestrians.
35)
And I know the mantra of Silicon Valley is "move fast and break things," but maybe make an exception if your product moves fast and can break people.
When self-driving car systems were tested on pedestrian tracking, they were less accurate on darker-skinned individuals than on lighter-skinned individuals.
36)
Joy believes this bias is because of the lack of diversity in the data used in teaching AI to make distinctions.
As I started looking at the data sets, I learned that some of the most extensive data sets that have been very consequential for the field were majority men and majority lighter skinned or white individuals. So I call this pale male data.
D)
37)
John: "pale male data" is an objectively hilarious term. It also sounds like what an AI Program would say if you asked it to describe this show. But biased inputs leading to limited outputs are a big issue here.
38)
The companies that make these programs will tell you that's a good thing because it reduces human bias. But in practice, one report concluded that most hiring algorithms would drift towards bias "by default" because they learn what a "good hire" is from past racist and sexist hiring decisions.
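That "biased by default" dynamic is easy to reproduce: a model trained on past hiring decisions inherits whatever preferences shaped those decisions. The toy model below is a deliberate caricature, not any company's real system; the bias is injected into the synthetic history so its effect on the learned scores is visible.

```python
# Toy illustration of bias-by-default: a model trained on past decisions
# reproduces the bias in those decisions, even with no "group" rule written in.
import random

random.seed(7)

# Historical data: past managers hired skilled candidates, but were also
# 30 points more likely to hire group "A" regardless of skill.
history = []
for _ in range(5000):
    skill = random.random()
    group = random.choice(["A", "B"])
    bias_bonus = 0.3 if group == "A" else 0.0
    hired = (0.5 * skill + bias_bonus) > random.random()
    history.append((skill, group, hired))

# "Train" the simplest possible model: hire rate per (skill bucket, group)
def bucket(skill):
    return int(skill * 5)  # five coarse skill levels

rates = {}
for skill, group, hired in history:
    key = (bucket(skill), group)
    n, hires = rates.get(key, (0, 0))
    rates[key] = (n + 1, hires + hired)

def predicted_hire_rate(skill, group):
    n, hires = rates[(bucket(skill), group)]
    return hires / n

# Two equally skilled candidates get different learned scores
rate_a = predicted_hire_rate(0.6, "A")
rate_b = predicted_hire_rate(0.6, "B")
print(f"equally skilled candidates: A={rate_a:.2f}, B={rate_b:.2f}")
```

The model never sees an instruction to prefer group A; it simply learns what a "good hire" looked like in the past, which is exactly the failure mode the report describes.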
39)
Amazon had an experimental hiring tool that taught itself that male candidates were preferable, penalized resumes that included the word women's, and downgraded graduates of two all-women's colleges.
40)
Meanwhile, another company discovered its hiring algorithm had found two factors most indicative of job performance.
Back in 2016, Microsoft briefly unveiled a chatbot on Twitter named Tay. The idea was she'd teach herself how to behave by chatting with young users on Twitter. Microsoft quickly pulled the plug on it, for exactly the reasons you're thinking.
41)
John: That happened! In less than 24 hours, Tay went from tweeting "hello world" to "Bush did 9/11" and "Hitler was right." She completed the life cycle of your high school friends on Facebook in just a fraction of the time. And unfortunately, these problems have not been fully solved in this latest wave of AI.
42)
And while OpenAI has made adjustments and added filters to prevent ChatGPT from being misused, users have now found it seems to err too much on the side of caution, like its response to the question of what religion the first Jewish president of the United States will be.
43)
The focus should be on the individual's qualifications and experience, regardless of religion. Of course, this makes it sound like ChatGPT said one too many racist things at work and was made to attend a corporate diversity workshop. But the risk here isn't that these tools will somehow become unbearably "woke."
44)
I'm sure Tay would be entirely on board with the idea. The problem with AI right now isn't that it's brilliant. It's that it's stupid in ways we can't always predict. This is a real problem because we increasingly use AI in powerful ways.
45)
And experts worry that it won't be long before programs like ChatGPT, or AI-enabled deep fakes, can turbocharge the spread of abuse or misinformation online.
When Instagram was launched, the first thought wasn't, "this will destroy teenage girls' self-esteem." Likewise, when Facebook was released, no one expected it to contribute to genocide. But both of those things fucking happened. So, what now?
46)
Well, one of the biggest things we must do is tackle that black box problem. AI systems need to be "explainable," meaning that we should understand precisely how and why an AI came up with its answers. Companies are likely reluctant to open their programs to scrutiny, but we may need to force them to do that.
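One standard technique for probing a black box without opening it is permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. The model and data below are illustrative assumptions, but the probing recipe itself is real and widely used.

```python
# Permutation importance sketch: we only query the black-box model,
# we never look inside it. The model and data here are toy assumptions.
import random

random.seed(1)

# Pretend black box: depends entirely on feature 0, ignores feature 1
def black_box(x0, x1):
    return 1 if x0 > 0.5 else 0

data = [(random.random(), random.random()) for _ in range(1000)]
labels = [black_box(x0, x1) for x0, x1 in data]

def accuracy(points):
    return sum(black_box(*p) == y for p, y in zip(points, labels)) / len(labels)

baseline = accuracy(data)  # 1.0 by construction

def permuted_accuracy(feature):
    """Shuffle one feature column and re-score: a big accuracy drop
    means the model relies on that feature."""
    col = [p[feature] for p in data]
    random.shuffle(col)
    shuffled = [
        (c, p[1]) if feature == 0 else (p[0], c)
        for p, c in zip(data, col)
    ]
    return accuracy(shuffled)

drop0 = baseline - permuted_accuracy(0)
drop1 = baseline - permuted_accuracy(1)
print(f"importance of feature 0: {drop0:.2f}, feature 1: {drop1:.2f}")
```

Shuffling the feature the model actually uses destroys its accuracy, while shuffling the ignored one changes nothing; probes like this are one way regulators and auditors could scrutinize systems whose internals companies won't, or can't, explain.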
47)
We don't trust companies to self-regulate when it comes to pollution. So why on earth would we trust them to self-regulate AI? Look, I think much AI-hiring tech on the market is illegal. I think a lot of it is biased. I think a lot of it violates existing laws. The problem is you can't prove it, not with the existing laws we have in the United States.
48)
And AI of these types would be subject to strict obligations before they could be put on the market, including requirements related to "quality of data sets, transparency, human oversight, robustness, accuracy and cybersecurity."
49)
Like any shiny new toy, AI is ultimately a mirror, and it'll reflect precisely who we are, from the best of us to the worst of us, to the part of us that's gay and hates the bus. Or, to put everything I've said tonight much more succinctly.
50)
Knock-knock. Who's there? ChatGPT. ChatGPT who? ChatGPT careful. You might not know how it works.
Artificial Intelligence: Last Week Tonight with John Oliver
https://www.youtube.com/watch?v=Sqa8Zo2XWc4
Artificial intelligence is increasingly becoming part of our lives, from self-driving cars to ChatGPT. John Oliver discusses how AI works, where it might be heading next, and why it hates the bus.
GfK Human vs AI: Who will win?
https://www.youtube.com/watch?v=6xmBHA6hhOc
Robots Take Our Jobs?? GfK's top marketers go head-to-head with the recently announced AI-powered ChatGPT, discussing the latest trends and how marketing leaders should respond to them. From brand purpose and data access to where and how technology can help (or replace) us, we tackle some of the significant marketing challenges of the year ahead. Plus, don't miss the round where humans try to outsmart robots.
GPT-4 Developer Livestream
https://www.youtube.com/watch?v=outcGtbnMuQ&t=0s
Join Greg Brockman, President and Co-Founder of OpenAI, at 1 pm PT for a developer demo showcasing GPT-4 and some of its capabilities/limitations.
GPT-4: Five amazing possibilities (a quick look at OpenAI's March 15 demo video)
https://www.youtube.com/watch?v=3YKAo1LPROU&t=718s
On March 15, 2023, GPT-4 was finally released! ChatGPT (GPT-3.5) was already impressive enough, but GPT-4's potential goes well beyond even that surprise. In this video, we review the possibilities of GPT-4 while watching OpenAI's demo video (YouTube Live) from March 15, 2023.
Possibility 1: Generated from handwritten pictures
Possibility 2: Image recognition
Possibility 3: Complicated tax calculation
Possibility 4: Amazing writing ability
Possibility 5: Competent programming assistant
The Future of Work With AI - Microsoft March 2023 Event
https://www.youtube.com/watch?v=Bf-dbS9CcRU&t=0s
A special event with Satya Nadella and Jared Spataro focused on how AI will power a whole new way of working for everyone.
1. Satya Nadella announces new AI tool
2. Introducing Microsoft 365 Copilot
3. Copilot in Microsoft 365 Apps
4. The Copilot System
5. Copilot in Teams and Business Processes
6. Introducing Business Chat
7. Microsoft's Approach to Responsible AI
What will happen to Excel, PowerPoint, and Teams with GPT-4? Microsoft's seriousness is impressive! A commentary on the Microsoft 365 announcement video
https://www.youtube.com/watch?v=IwX2vGXF8BA
In this video, while watching Microsoft's announcement from March 16, 2023, we look at the future of Excel, PowerPoint, and Teams, and at what working styles that coexist with AI might look like, with concrete screenshots.
1. The future of Excel
2. The future of PowerPoint
3. The future of Word
4. The end of Teams (online meetings and chats)
5. New service "Business Chat."
6. Microsoft's way of working and AI utilization
[Future coexistence of GPT and humans] Explanation of ChatGPT and how to use it, AI education, the new relationship between web3 and GPT, etc.
https://www.youtube.com/watch?v=NQ8MWozCI6Q&t=624s
[Bing AI] How to start a chat (how to apply and register for the waiting list)
https://watashi-dasuwa.com/bingai-hajimekata-tukaikata#bing_AIBing
GPT 5 is All About Data
https://www.youtube.com/watch?v=c4aR_smQgxY
This video discusses what we know about GPT-5 based on academic papers, interviews, and research. It covers topics such as the potential IQ of GPT-5, its timeline and impact on the job market, the scale of GPUs required to train it, the data behind the models, and ways it could improve without data augmentation. It also briefly looks at Sam Altman's timelines, the benchmarks GPT-5 may affect, and his comments on AGI and how they relate to GPT-5.