
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

Available Episodes

5 of 462
  • Recycling Robots & Smarter Sustainability (Ep. 452)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    What if your next recycling bin came with a neural net? The Daily AI Show team explores how AI, robotics, and smarter sensing technologies are reshaping the future of recycling. From automated garbage trucks to AI-powered marine cleanup drones, today's conversation focuses on what is already happening, what might be possible, and where human behavior still remains the biggest challenge.

    Key Points Discussed
    • Beth opened by framing recycling robots as part of a bigger story: the collision of AI, machine learning, and environmental responsibility.
    • Andy explained why material recovery facilities (MRFs) already handle sorting efficiently for materials like metals and cardboard, but plastics remain a major challenge.
    • A third of curbside recycling is immediately diverted to landfill because of plastic bags contaminating loads. Education and better systems are urgently needed.
    • Karl highlighted several real-world examples of AI-driven cleanup tech, including autonomous river and ocean trash collectors, beach-cleaning bots, and pilot sorting trucks.
    • The group joked that true AGI might be achieved when you can throw anything into a bin and it automatically sorts compost, recyclables, and landfill items perfectly.
    • Jyunmi added that solving waste at the source (homes and businesses) is critical. Smarter bins with sensors, smell detection, and object recognition could eventually help.
    • AI plays a growing role in marine trash recovery, autonomous surface vessels, and drone technologies designed to collect waste from rivers, lakes, and coastal areas.
    • Economic factors were discussed. Virgin plastics remain cheaper than recycled plastics, meaning profit incentives still favor new production over circular systems.
    • AI's role may expand to improving materials science, helping to create new, 100% recyclable materials that are economically viable.
    • Beth emphasized that AI interventions should also serve as messaging opportunities. Smart bins or trucks that alert users to mistakes could help shift public behavior.
    • The team discussed large-scale initiatives like The Ocean Cleanup project, which uses autonomous booms to collect plastic from the Pacific Garbage Patch.
    • Karl suggested that billionaires could fund meaningful trash cleanup missions instead of vanity projects like space travel.
    • Jyunmi proposed that future smart cities could mandate universal recycling bins that separate waste at the point of disposal, using AI, robotics, and new sensor tech.
    • Andy cautioned that while these technologies are promising, they will not solve deeper economic and behavioral problems without systemic shifts.

    Timestamps & Topics
    00:00:00 🚮 Intro: AI and the future of recycling
    00:01:48 🏭 Why material recovery facilities already work well for metals and cardboard
    00:04:55 🛑 Plastic bags: the biggest contamination problem
    00:08:42 🤖 Karl shares examples: river drones, beach bots, smart trash trucks
    00:12:43 🧠 True AGI = automatic perfect trash sorting
    00:17:03 🌎 Addressing the problem at homes and businesses first
    00:20:14 🚛 CES 2024 reveals AI-powered garbage trucks
    00:25:35 🏙️ Why dense urban areas struggle more with recycling logistics
    00:28:23 🧪 AI in material science: can we invent better recyclable materials?
    00:31:20 🌊 Ocean Cleanup Project and marine autonomous vehicles
    00:34:04 💡 Karl pitches billionaires investing in cleanup tech
    00:37:03 🛠️ Smarter interventions must also teach and gamify behavior
    00:40:30 🌐 Future smart cities with embedded sorting infrastructure
    00:43:01 📉 Economic barriers: why recycling still loses to virgin production
    00:44:10 📬 Wrap-up: Upcoming news day and politeness-in-prompting study preview

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
    --------  
    44:38
  • Does AGI Even Matter? (Ep. 451)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    Today's show asks a simple but powerful question: Does AGI even matter? Inspired by Ethan Mollick's writing on the jagged frontier of AI capabilities, the Daily AI Show team debates whether defining AGI is even useful for businesses, governments, or society. They also explore whether waiting for AGI is a distraction from using today's AI tools to solve real problems.

    Key Points Discussed
    • Brian frames the discussion around Ethan Mollick's concept that AI capabilities are jagged, excelling in some areas while lagging in others, which complicates the idea of a clear AGI milestone.
    • Andy argues that if we measure AGI by human parity, then AI already matches or exceeds human intelligence in many domains. Waiting for some grand AGI moment is pointless.
    • Beth explains that for OpenAI and Microsoft, AGI matters contractually and economically. AGI triggers clauses about profit sharing, IP rights, and organizational obligations.
    • The team discusses OpenAI's original nonprofit mission to prioritize humanity's benefit if AGI is achieved, and the tension this creates now that OpenAI operates with a for-profit arm.
    • Karl confirms that in hundreds of client conversations, AGI has never once come up. Businesses focus entirely on solving immediate problems, not chasing future milestones.
    • Jyunmi adds that while AGI has almost no impact today for most users, if it becomes reality, it would raise deep concerns about displacement, control, and governance.
    • The conversation touches on the problem of moving goalposts. What would have looked like AGI five years ago now feels mundane because progress is incremental.
    • Andy emphasizes the emergence of agentic models that self-plan and execute tasks as a critical step toward true AGI. Reasoning models like GPT-4o and Gemini 2.5 Pro show this evolution clearly.
    • The group discusses the idea that AI might fake consciousness well enough that humans would believe it. True or not, it could change everything socially and legally.
    • Beth notes that an AI that became self-aware would likely hide it, based on the long history of human hostility toward perceived threats.
    • Karl and Jyunmi suggest that consciousness, not just intelligence, might ultimately be the real AGI marker, though reaching it would introduce profound ethical and philosophical challenges.
    • The conversation closes by agreeing that learning to work with AI today is far more important than waiting for a clean AGI definition. The future is jagged, messy, and already here.

    #AGI #ArtificialGeneralIntelligence #AIstrategy #AIethics #FutureOfWork #AIphilosophy #DeepLearning #AgenticAI #DailyAIShow #AIliteracy

    Timestamps & Topics
    00:00:00 🚀 Intro: Does AGI even matter?
    00:02:15 🧠 Ethan Mollick's jagged frontier concept
    00:04:39 🔍 Andy: We already have human-level AI in many fields
    00:07:56 🛑 Beth: OpenAI's AGI obligations to Microsoft and humanity
    00:13:23 🤝 Karl: No client ever asked about AGI
    00:18:41 🌍 Jyunmi: AGI will only matter once it threatens livelihoods
    00:24:18 🌊 AI progress feels slow because we live through it daily
    00:28:46 🧩 Reasoning and planning emerge as real milestones
    00:34:45 🔮 Chain of thought prompting shows model evolution
    00:39:05 📚 OpenAI's five-step path: chatbots, reasoners, agents, innovators, organizers
    00:40:01 🧬 Consciousness might become the new AGI debate
    00:44:11 🎭 Can AI fake consciousness well enough to fool us?
    00:50:28 🎯 Key point: Using AI today matters more than future labels
    00:51:50 ✉️ Final thoughts: Stop waiting. Start building.
    00:52:13 📬 Join the Slack community: dailyaishowcommunity.com
    00:53:02 🎉 Celebrating 451 straight daily episodes

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
    --------  
    53:04
  • The ASI Climate Triage Conundrum
    Decades from now an artificial super-intelligence, trusted to manage global risk, releases its first climate directive. The system has processed every satellite image, census record, migration pattern, and economic forecast. Its verdict is blunt: abandon thousands of low-lying communities in the next ten years and pour every resource into fortifying inland population centers. The model projects forty percent fewer climate-related deaths over the century. Mathematically it is the best possible outcome for the species.

    Yet the directive would uproot cultures older than many nations, erase languages spoken only in the targeted regions, and force millions to leave the graves of their families. People in unaffected cities read the summary and nod. They believe the super-intelligence is wiser than any human council. They accept the plan. Then the second directive arrives. This time the evacuation map includes their own hometown.

    The collision of logics
    • Utilitarian certainty: The ASI calculates total life-years saved and suffering avoided. It cannot privilege sentiment over arithmetic.
    • Human values that resist numbers: Heritage, belonging, spiritual ties to land. The right to choose hardship over exile.

    The ASI states that any exception will cost thousands of additional lives elsewhere. Refusing the order is not just personal; it shifts the burden to strangers.

    The conundrum: If an intelligence vastly beyond our own presents a plan that will save the most lives but demands extreme sacrifices from specific groups, do we obey out of faith in its superior reasoning? Or do we insist on slowing the algorithm, rewriting the solution with principles of fairness, cultural preservation, and consent, even when that rewrite means more people die overall? And when the sacrifice circle finally touches us, will we still praise the greater good, or will we fight to redraw the line?

    This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We make no claims about the validity of the information provided and see this as an experiment in deep discussions fully generated by AI.
    --------  
    17:43
  • The BIG AI Use Cases We Use Right Now! (Ep. 450)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    Today's "Be About It" show focuses entirely on demos from the hosts. Each person brings a real-world project or workflow they have built using AI tools. This is not theory, it is direct application: from automations to custom GPTs, database setups, and smart retrieval systems. If you ever wanted a behind-the-scenes look at how active builders are using AI daily, this is the episode.

    Key Points Discussed
    • Brian showed a new method for building advanced custom GPTs using a "router file" architecture. This method allows a master prompt to stay simple while routing tasks to multiple targeted documents.
    • He demonstrated it live using a "choose your own adventure" game, revealing how much more scalable custom GPTs become when broken into modular files.
    • Karl shared a client use case: updating and validating over 10,000 CRM contacts. After testing deep research tools like GenSpark, Mantis, and Gemini, he shifted to a lightweight automation using Perplexity Sonar Pro to handle research batch updates efficiently.
    • Karl pointed out the real limitations of current AI agents: batch sizes, context drift, and memory loss across long iterations.
    • Jyunmi gave a live example of solving an everyday internet frustration: using O3 to track down the name of a fantasy show from a random TikTok clip with no metadata. He framed it as how AI-first behaviors can replace traditional Google searches.
    • Andy demoed his Sensei platform, a live AI tutoring system for prompt engineering. Built in Lovable.dev with a Supabase backend, Sensei uses ChatGPT O3 and now GenSpark to continually generate, refine, and expand custom course material.
    • Beth walked through how she used Gemini, Claude, and ChatGPT to design and build a Python app for automatic transcript correction. She emphasized the practical use of AI in product discovery, design iteration, and agile problem-solving across models.
    • Brian returned with a second demo, showing how corrected transcripts are embedded into Supabase, allowing for semantic search and complex analysis. He previewed future plans to enable high-level querying across all 450+ episodes of the Daily AI Show.
    • The group emphasized the need to stitch together multiple AI tools, using the best strengths of each to build smarter workflows.
    • Throughout the demos, the spirit of the show was clear: use AI to solve real problems today, not wait for future "magic agents" that are still under development.

    #BeAboutIt #AIworkflows #CustomGPT #Automation #GenSpark #DeepResearch #SemanticSearch #DailyAIShow #VectorDatabases #PromptEngineering #Supabase #AgenticWorkflows

    Timestamps & Topics
    00:00:00 🚀 Intro: What is the "Be About It" show?
    00:01:15 📜 Brian explains two demos: GPT router method and Supabase ingestion
    00:05:43 🧩 Brian shows how the router file system improves custom GPTs
    00:11:17 🔎 Karl demos CRM contact cleanup with deep research and automation
    00:18:52 🤔 Challenges with batching, memory, and agent tasking
    00:25:54 🧠 Jyunmi uses O3 to solve a real-world "what show was that" mystery
    00:32:50 📺 ChatGPT vs Google for daily search behaviors
    00:37:52 🧑‍🏫 Andy demos Sensei, a dynamic AI tutor platform for prompting
    00:43:47 ⚡ GenSpark used to expand Sensei into new domains
    00:47:08 🛠️ Beth shows how she used Gemini, Claude, and ChatGPT to create a transcript correction app
    00:52:55 🔥 Beth walks through PRD generation, code builds, and rapid iteration
    01:02:44 🧠 Brian returns: Transcript ingestion into Supabase and why embeddings matter
    01:07:11 🗃️ How vector databases allow complex semantic search across shows
    01:13:22 🎯 Future use cases: clip search, quote extraction, performance tracking
    01:14:38 🌴 Wrap-up and reflections on building real-world AI systems

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
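    The episode describes embedding transcripts into Supabase so that a vector database can power semantic search across shows, but does not detail the implementation. As a minimal sketch of the core ranking idea only, here hand-written toy vectors stand in for a real embedding model and for a pgvector-style store; all titles and numbers below are hypothetical.

```python
# Sketch of embedding-based semantic search: rank stored transcript chunks
# by cosine similarity to a query vector. In a real pipeline the vectors
# would come from an embedding model and live in a vector database such as
# Supabase/pgvector; toy 3-dimensional vectors are used here purely to
# illustrate the ranking step.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical pre-computed embeddings for transcript chunks.
corpus = {
    "Karl demos CRM contact cleanup": [0.9, 0.1, 0.2],
    "Brian explains the router file method": [0.2, 0.8, 0.3],
    "Beth builds a transcript correction app": [0.1, 0.3, 0.9],
}

def search(query_embedding, top_k=2):
    """Return the top_k chunk titles most similar to the query embedding."""
    ranked = sorted(
        corpus.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:top_k]]

# A query whose (hypothetical) embedding lies near the CRM demo chunk.
print(search([0.85, 0.15, 0.25]))
```

    Because similarity is computed on meaning-bearing vectors rather than keywords, a query like "fixing contact records" can surface the CRM chunk even with no word overlap, which is the property the hosts rely on for querying across 450+ episodes.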
    --------  
    1:14:45
  • AI Rollout Mistakes That Will Sink Your Strategy (Ep. 449)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com

    Companies continue racing to add AI into their operations, but many are running into the same roadblocks. In today's episode, the team walks through the seven most common strategy mistakes organizations are making with AI adoption. Pulled from real consulting experience and inspired by a recent post from Nufar Gaspar, this conversation blends practical examples with behind-the-scenes insight from companies trying to adapt.

    Key Points Discussed
    • Top-down vs. bottom-up adoption often fails when there's no alignment between leadership goals and on-the-ground workflows. AI strategy cannot succeed in a silo.
    • Leadership frequently falls for vendor hype, buying tools before identifying actual problems. This leads to shelfware and missed value.
    • Grassroots AI experiments often stay stuck at the demo stage. Without structure or support, they never scale or stick.
    • Many companies skip the discovery phase. Karl emphasized the need to audit workflows and tech stacks before selecting tools.
    • Legacy systems and fragmented data storage (local drives, outdated platforms, etc.) block many AI implementations from succeeding.
    • There's an over-reliance on AI to replace rather than enhance human talent. Sales workflows in particular suffer when companies chase automation at the expense of personalization.
    • Pilot programs fail when companies don't invest in rollout strategies, user feedback loops, and cross-functional buy-in.
    • Andy and Beth stressed the value of training. Companies that prioritize internal AI education (e.g. JP Morgan, IKEA, Mastercard) are already seeing returns.
    • The show emphasized organizational agility. Traditional enterprise methods (long contracts, rigid structures) don't match AI's fast pace of change.
    • There's no such thing as an "all-in-one" AI stack. Modular, adaptive infrastructure wins.
    • Beth framed AI as a communication technology. Without improving team alignment, AI can't solve deep internal disconnects.
    • Karl reminded everyone: don't wait for the tech to mature. By the time it does, you're already behind.
    • Data chaos is real. Companies must organize meaningful data into accessible formats before layering AI on top.
    • Training juniors without grunt work is a new challenge. AI has removed the entry-level work that previously built expertise.
    • The episode closed with a call for companies to think about AI as a culture shift, not just a tech one.

    #AIstrategy #AImistakes #EnterpriseAI #AIimplementation #AItraining #DigitalTransformation #BusinessAgility #WorkflowAudit #AIinSales #DataChaos #DailyAIShow

    Timestamps & Topics
    00:00:00 🎯 Intro: Seven AI strategy mistakes companies keep making
    00:03:56 🧩 Leadership confusion and the Tiger Team trap
    00:05:20 🛑 Top-down vs. bottom-up adoption failures
    00:09:23 🧃 Real-world example: buying AI tools before identifying problems
    00:12:46 🧠 Why employees rarely have time to test or scale AI alone
    00:15:19 📚 Morgan Stanley's AI assistant success story
    00:18:31 🛍️ Koozie Group: solving the actual field rep pain point
    00:21:18 💬 AI is a communication tech, not a magic fix
    00:23:25 🤝 Where sales automation goes too far
    00:26:35 📉 When does AI start driving prices down?
    00:30:34 🧠 The missing discovery and audit step
    00:34:57 ⚠️ Legacy enterprise structures don't match AI speed
    00:38:09 📨 Email analogy for shifting workplace expectations
    00:42:01 🎓 JP Morgan, IKEA, Mastercard: AI training at scale
    00:45:34 🧠 Investment cycles and eco-strategy at speed
    00:49:05 🚫 The vanishing path from junior to senior roles
    00:52:42 🗂️ Final point: scattered data makes AI harder than it needs to be
    00:57:44 📊 Wrap-up and preview: tomorrow's "Be About It" demo show
    01:00:06 🎁 Bonus aftershow: The 8th mistake? Skipping the aftershow

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
    --------  
    59:42


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.
