Artificial Intelligence: Last Week Tonight with John Oliver (HBO)

Artificial intelligence is increasingly becoming part of our lives, from self-driving cars to ChatGPT. John Oliver discusses how AI works, where it might be heading next, and, of course, why it hates the bus.

Connect with Last Week Tonight online…

Subscribe to the Last Week Tonight YouTube channel for more almost news as it almost happens: www.youtube.com/lastweektonight

Find Last Week Tonight on Facebook like your mom would: www.facebook.com/lastweektonight

Follow us on Twitter for news about jokes and jokes about news: www.twitter.com/lastweektonight

Visit our official site for all that other stuff at once: www.hbo.com/lastweektonight

Join the Conversation

19 Comments

  1. Exactly! The biggest problem is "unknown" unknowns, meaning we cannot anticipate what would go wrong or when, and by the time we do, it would be too late.

  2. The bit about the Jewish president, on second thought, is a bit more complex. "Jewish" is not just a religious term; it is also a culture/heritage term. Someone who is, for lack of a better phrase, "born" Jewish might still be described as Jewish without actually believing in the faith. So the AI might be right…

    On the note of self-driving: as always, a clarification/correction needs to be made. It is not meant to be 100% safe; nothing ever is. It only needs to be safer than human drivers, which is a surprisingly low bar to clear. People, for some reason, severely overestimate the capability of humans. The number of people who die in car accidents each year is close to the total of US military deaths in the entirety of the Iraq war. Granted, the risk distribution isn't equal for everyone; there are groups at significantly higher risk, such as intoxicated drivers, so the "average" and "median" might differ significantly. But ultimately, I'm not sure we are all that far from AI being better than the average driver. The real risk with self-driving is hacking more than anything else.

  3. My cousin just had a birthday. He is a computer programmer/tech bro working for Google. He says the timeline is greatly misjudged. The world will be disrupted; our economy, our jobs, our health, our technology, everything WILL be completely different by 2035.

  4. While I suppose that some regulation of machine learning may be necessary, it's not as simple as "opening up the AI" to understand how it's coming to its conclusions, especially when you consider that the results are basically calculated by a mathematical function. Try guessing the shape of, say, x^3 – 3 * y^8 + z^-5 = 24, and then think about a couple of thousand dimensions (up to 32,000 for GPT-4 Turbo). That will give only a slight idea of the complexity of understanding the "thought process" of machine learning. (A rough illustration of this point follows after the comments.)

  5. Those completely reasonable problems are precisely why we'll be prevented from having self-driving vehicles: people won't accept anything less than 99.9% safety in something like that, even though self-driving cars would save more lives. If drivers cause 80,000 accident deaths a year and self-driving cars would only cause 5,000 deaths in limited scenarios like these, we'll still choose to accept those extra 75,000 deaths. That's stupid and short-sighted.

    That woman's "pale male data" argument is disingenuous because she's accusing the researchers of purposely choosing certain inputs, and that's pure insanity. That's saying they are racists purposely self-sabotaging their own projects and preventing them from succeeding simply because they want to focus on white men, when it's FAR more rational to realize they use the data they have available. The US is still 3/4 white people, so the data is going to be 3/4 white people. It's just that simple.

  6. There's no chance in hell AI will replace screenwriters. A robot cannot do what I do. The human experience is what I write about, and no robot can do that.

  7. Does AI have access to all the information on Google and Wikipedia? Can I ask it what to do in a weather event?

  8. High school English is stupid; try that shit in university, with proper referencing and an academic tone, and ChatGPT won't do shit for you.

  9. This situation pisses me off because corrupt leaders are pulling the strings to line their own pockets.

  10. I can’t believe that despite every movie having the gall to make fun of people from the past by making them time travel to the future and think the TV is a box full of tiny people, the moment we make a program meant to imitate how humans speak, reporters go apeshit thinking we’ve created Frankenstein’s monster and there are invisible people in the code.

  11. I heard about the girl who was using AI for her video script, but the AI was using data and words from her alleged stalker, so it made him think she was interested because of specific things the AI picked up from his conversations.

  12. While AI technology is really promising, you must remember that it is also extremely unstable and unreliable.

  13. AI is just statistical analysis. There’s no black box. It’s like a computer doing a calculation in a second that would take me a lifetime. The hype around AI is exactly the same as the hype there once was about how fast computers are with calculations. You can also flip a coin to decide whether or not to drop a WMD.

  14. The biggest lesson of AI is one we've faced many times: humans always run right into unknown things with very little concern about where they could go, and things going badly doesn't make us stop.
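
To illustrate the point in comment 4 about why "opening up the AI" is hard: a trained model is just an enormous mathematical function, and looking at its raw parameters does not reveal anything like a thought process. Below is a minimal, hypothetical sketch in Python (NumPy only); the layer sizes and random weights are invented for illustration and are many orders of magnitude smaller than anything like GPT-4.

```python
# Hypothetical toy example: a tiny 3-layer network with made-up sizes.
# Even at this scale, the "reasoning" is nothing but nested arithmetic
# over hundreds of thousands of opaque numbers.
import numpy as np

rng = np.random.default_rng(0)

# 1,000 inputs -> 512 hidden -> 512 hidden -> 1 output
W1, b1 = rng.standard_normal((512, 1000)), rng.standard_normal(512)
W2, b2 = rng.standard_normal((512, 512)), rng.standard_normal(512)
W3, b3 = rng.standard_normal((1, 512)), rng.standard_normal(1)

def model(x):
    h1 = np.maximum(0.0, W1 @ x + b1)   # layer 1 with ReLU
    h2 = np.maximum(0.0, W2 @ h1 + b2)  # layer 2 with ReLU
    return W3 @ h2 + b3                 # single scalar output

n_params = sum(a.size for a in (W1, b1, W2, b2, W3, b3))
print(f"parameters in this toy model: {n_params:,}")  # ~775,000

y = model(rng.standard_normal(1000))
print("output:", y)  # a number, with no human-readable explanation attached
```

Inspecting W1, W2, and W3 directly tells you about as much about why a particular output came out the way it did as guessing the shape of the equation in comment 4 would; interpretability research exists precisely because this is not straightforward.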
