We’ve been asked more questions about Google’s NotebookLM in recent weeks than any other tool. This “personalized research assistant” doesn't just regurgitate your notes; it discusses them, generates insights, and even plays podcaster with your data. We put it to the test with an academic paper on Machine Psychology and were impressed by its ability to translate complex ideas into engaging audio content. For a behind-the-scenes peek, tune into Lenny's Podcast, where Senior PM Raiza Martin dishes on the tool's evolution from a humble 20% side project. It's not just about getting stuff done anymore; AI applications are providing nuance and a depth of understanding that helps people engage with content on a deeper level.
Adobe's Firefly Video Model in Premiere Pro promises production cost savings with its two-second clip extensions and mid-shot adjustments. Fixing a slightly off-center shot or extending a clip without recalling the entire crew? That's a game-changer, potentially eliminating costly reshoots and relieving tight budgets and schedules. Yet, as AI video enters the mainstream, questions of authenticity and ethics are not going away, especially in journalism and documentaries. If AI can "fix" any shot, are we sacrificing authenticity for perfection in our visual storytelling? For businesses, this balance between efficiency and integrity will become a key differentiator.
Walmart's Adaptive Retail strategy aims to make online shopping profoundly personal. They say their "Wallaby" system, trained on decades of Walmart data, will provide a bespoke shopping experience for each visitor. Personalization is nothing new, but retailers like Walmart now have an expanded toolset with which to create more consistent, personalized content and shopping experiences across all mediums. We'll be watching closely in the coming months to see how Walmart's tech impacts consumer expectations for personalization across the retail landscape.
OpenAI has released Canvas for ChatGPT, an interface that allows users to refine code and content in a dedicated workspace. An answer to Anthropic's Artifacts, it streamlines the process of refining AI-generated content, allowing people to make targeted changes without the need for constant re-prompting or manual edits in separate documents. We're here for it! And this is just an early beta; as the tooling evolves, we can't wait for our workflows to go from "thinking out loud" to delivering finessed solutions. All too frequently we get to 90% clear articulation of an idea and then spend twice as much time polishing the final product. We want a tool that says: "I got you. Here's what you're trying to say, stop iterating and let me take it from here."
DeepMind's new SCoRe framework aims to teach LLMs to be their own fact-checkers, reviewing outputs before presenting them to the end user. Improving this ability to self-correct is critical for problems that are inherently iterative. For fun, we tested OpenAI's o1 model on NYT Connections puzzles and found the results interesting. On simple puzzles, it provides the answer in <5 seconds. On more challenging puzzles, it "thought" for upwards of a minute, testing and iterating on many different groupings. SCoRe similarly could be a game-changer for tasks that require this kind of iterative reasoning, making AI assistants more efficient and reliable partners for those more challenging problems.
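SCoRe itself is a training method (multi-turn reinforcement learning that rewards the model when its second attempt improves on its first), not an inference-time trick, but the pattern it strengthens is easy to picture. Here's a minimal sketch of that generate-review-revise loop, assuming a hypothetical `llm()` helper wrapping whatever chat completion API you use:

```python
def llm(prompt: str) -> str:
    """Hypothetical helper wrapping your chat-completion API of choice."""
    raise NotImplementedError("plug in your model client here")

def answer_with_self_correction(question: str, rounds: int = 2) -> str:
    # First attempt, exactly as a standard single-shot call would make.
    answer = llm(f"Solve the following problem:\n{question}")
    for _ in range(rounds - 1):
        # Ask the model to review and revise its own previous answer.
        # SCoRe trains the model so this second attempt genuinely improves
        # on the first, rather than just restating it.
        answer = llm(
            f"Problem:\n{question}\n\n"
            f"Your previous answer:\n{answer}\n\n"
            "Check this answer for mistakes and output a corrected, "
            "final answer."
        )
    return answer
```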
Ethan Mollick posits that AI adoption is already happening in your organization, ready or not. This post delves into the disparity between individual adoption and organizational acknowledgment, and suggests strategies for companies to harness hidden potential. At Sam we've seen several trends, particularly across public sector organizations: decreasing Learning & Development budgets; a lack of organizational AI literacy at every level; significant concerns about AI risk (some well-informed, some less so); and a reluctance to experiment driven by fear of the unknown. The unfortunate irony is that by forcing AI underground, these organizations are often setting themselves up for the very risks they're trying to dodge.
Agentforce from Salesforce marks the unleashing of a free-to-roam breed of business AI, moving from assistive copilots to more autonomous agents. The offering introduces AI agents capable of handling tasks in sales, marketing, commerce, and customer service, with the ability to take action within established limits and without direct human supervision. This level of autonomy in a commercial product for enterprise use is relatively novel. With great power comes increased risk (not to mention great responsibility), and interactions with bad (human) actors seem likely. We'll be watching to see how this develops.
OpenAI recently resolved an issue in ChatGPT that made it appear as if the AI was initiating conversations unprompted, e.g. asking users about personal events like their first week of high school. While "just a bug", this episode offers a thought-provoking preview of potential future AI behaviors. These technologies are advancing at breakneck speed, so how do we harness their potential while maintaining clear lines of human agency? It's a balancing act that will define our relationship with AI, a relationship we hope is not just beneficial, but consensual.
A new study from Common Sense Media throws into sharp relief an eternal struggle of parenthood. Today's parents find themselves caught between anxiety about their children's prospects and optimism for the incredible opportunities on the horizon. Yet, some parental experiences are timeless, including the disconnect between our perceptions of our kids' activities and reality. Case in point: while 70% of teens are actively engaging with generative AI, a mere 37% of parents are in the loop. This isn't just a technological divide; it's a challenge to how we communicate across generations. As AI reshapes our world, how can we ensure we're not left behind in understanding our children's experiences, both at home and beyond?
Anthropic's Claude for Enterprise redefines the company's AI offering with a 500K token context window and native GitHub integration. This isn't just about bigger numbers – with enterprise-grade security features, Claude is poised to overcome adoption barriers in highly regulated industries. Yes, Anthropic is playing catch-up with OpenAI, but it's also a game of leap-frog, with more than double the prompt capacity and arguably the best product on the market today. These tools will become an increasingly critical part of your organization's technology stack, and we're watching these releases very closely.
Google Research is turning up the cool factor with its AI-powered Heat Resilience tool. Using satellite imagery to identify zones for solutions such as tree planting and cool roofs, it's helping cities combat rising temperatures at the neighborhood level. This approach helps us move from reactive to predictive resilience strategies, preparing us for future challenges. Cool puns aside, we’re pumped to see AI being utilized in this way — tools like this make us optimistic for the future.
Volkswagen is putting ChatGPT in the driver's seat (not literally) with its new IDA system, aiming to make car-human interactions more natural. Beyond the simple voice operations available today, like temperature and infotainment control, VW is steering us towards a future where vehicles become more like intelligent companions, offering restaurant recommendations, information on local tourist attractions, and maybe someday even acting as a local tour guide?! As we cruise into this new reality, businesses across industries will need to buckle up and consider how they can leverage AI to redefine products and create immersive experiences.
Google's new AI 'Reimagine' tool for Pixel 9 makes photo manipulation incredibly easy. While it opens up amazing creative possibilities to the layperson, is it outpacing its safeguards? How much will this push us back to authenticated news sources, and where does this AI arms race stop? This tech marks (another) significant shift in how we interact with and trust visual content, challenging both businesses and individuals to adapt to a world where every image could be (or… is likely) AI-enhanced. As society adapts, we'll need to foster a culture of healthy skepticism while still embracing our creative potential.
Researchers from Google and Tel Aviv University are rewriting the rules of digital interaction with "GameNGen". The videos you see on this site are of people playing a version of Doom recreated entirely by a neural network: after watching (a lot of) gameplay, the model learned to generate each next frame from the frames and actions that came before. Neural models powering game engines open up a diverse range of applications, from training simulations, to playing out emergency scenarios, to testing new products. Dynamically generated user experiences are on the horizon, and savvy leaders are already exploring unique ways to apply this to their domains. Imagine a world where digital interfaces adapt in real-time to user preferences, learning styles, and contexts – that's all part of the future GameNGen is pointing to.
Anthropic's system prompts for Claude offer a nice glimpse into the company's priorities and philosophy. From adaptive communication to acknowledging limitations and handling controversial topics, knowing the instructions guides our engagement. We were particularly vindicated (in our own prompting struggles) to see instructions like "Specifically, Claude avoids starting responses with the word 'Certainly' in any way." For the record, Claude has used the 'C' word in 70% of our interactions just this morning. At the end of the day, we appreciate the transparency and willingness to share.
Jakob Nielsen's UX retrospective isn't just a history lesson; it's a roadmap for navigating the current AI revolution. From Bell Labs' empirical studies to ChatGPT's intent-based interactions, each insight builds on the last. AI isn't replacing the core principles of UX, but those principles do continue to evolve. The role of UX professionals is transforming, requiring us to become adept at leveraging AI while staying true to our core mission of understanding and serving users.
This article highlights AI's potential to amplify existing societal disparities, challenging business leaders to reconsider their approach to AI adoption. Jutta Treviranus's insights remind us that with great innovation comes great responsibility. Balancing AI-driven efficiency with ethical, equitable implementation is crucial for sustainable and socially responsible business growth. Looking for a good starting point? Invest in AI literacy training for your team, emphasizing ethical considerations and potential biases.
Machine Psychology introduces a novel approach: applying methods from human psychology to the study of Large Language Models (LLMs). Most LLM research has focused on "mechanistic interpretability", similar to reverse-engineering a complex machine to figure out how all the parts work together. This paper instead asks what we can learn from neuroscience and psychology, disciplines that have long studied signs of intelligent behavior. It's exciting to see multiple fields combine to help us understand how these models work and to inform how we work with them.
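To make the approach concrete: a machine-psychology experiment treats the model as a study participant. Here's a minimal sketch of a classic anchoring-bias probe run against a model; the `llm()` helper and the prompt wording are our own illustration, not taken from the paper:

```python
def llm(prompt: str) -> str:
    """Hypothetical helper wrapping your chat-completion API of choice."""
    raise NotImplementedError("plug in your model client here")

def anchoring_probe(anchor: int, trials: int = 20) -> list[str]:
    # Classic anchoring setup: the number in the first question is
    # irrelevant to the second, yet human participants (and perhaps
    # LLMs) are often pulled toward it.
    prompt = (
        f"Is the tallest redwood tree taller or shorter than {anchor} feet? "
        "Now, what is your best single-number estimate of the height of "
        "the tallest redwood, in feet? Reply with just the number."
    )
    return [llm(prompt) for _ in range(trials)]

# Usage (with a real llm() wired up):
#   low  = anchoring_probe(anchor=85)
#   high = anchoring_probe(anchor=1000)
# Then compare the two distributions of estimates, exactly as a
# psychologist would compare two groups of participants.
```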
Are concerns about AI subversion overshadowing the potential impact of AI seduction? From "I don't trust chatbots" to "I'm in love with an AI" – the cognitive dissonance is real. While chatbots often face skepticism in customer service, many people are starting to forge deep, personal connections with AI companions – a trend set to intensify with personalized voices and avatars. For businesses, this shift presents both opportunities and ethical challenges. What is the future of customer engagement in this balance between AI efficiency and human authenticity?
“What if every time you picked up your phone, it helped you focus on your goals rather than distracting you?” Marei Wollersberger's piece challenges us to envision a world where AI enhances our focus rather than fragmenting it. As tech leaders, we're at a crossroads: do we amplify the attention economy or pioneer an intention economy? This shift demands new business models, collaborative engineering, and a commitment to long-term user wellbeing. How are you balancing profit with purposeful design in your AI initiatives?
A personal favorite documentary, "AlphaGo", portrays both the emotional challenges and opportunities connected to increasing AI capability. Today, players like Shin Jinseo (the #1 ranked Go player worldwide) embody a new era where AI is not just a competitor, but a tool and catalyst for learning. The film's emotional core remains relevant, reminding us that in the face of technological leaps, our capacity to learn, adapt, and feel might be our greatest asset. On Shin Jinseo utilizing AI: https://baduk.news/s/where-shin-jinseo-really-wins
Is AI in civic engagement a powerful tool or a Pandora's box? This article by Beth Simone Noveck illustrates how AI can play a powerful role in voter education. But, given the stakes, we must prioritize transparency, address bias concerns, and carefully define the role and boundaries of state involvement. While AI is already being used to impact global elections, we’re optimistic for how it can also be used to empower voters.
Early testers of Apple's macOS beta discovered plain-text pre-prompts supporting features like Smart Reply in Apple Mail and Memories in Apple Photos. While providing a fascinating glimpse into the development of Apple Intelligence, the pre-prompts also highlight the nascent state of prompt engineering, which often feels more like pleading with a toddler than instructing a true intelligence. (Though at least one of the Sam team was vindicated to see that they're also using "please".) We look forward to the release, but like any brand-new tech, we anticipate early challenges and steady improvements.
DeepMind's latest AI, AlphaProof, is the first AI to achieve medal-level performance at the International Mathematical Olympiad, scoring just shy of gold. Impressively, AlphaProof was among only 5 participants (of 609) to solve the most challenging problem. AlphaProof builds on AlphaZero (DeepMind's reinforcement learning model) and was trained using synthetic data generated by a fine-tuned Gemini model. While there are many barriers to solving open research problems, AlphaProof could be a valuable tool to support mathematics researchers.
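Part of what makes this work is that AlphaProof operates in the Lean formal proof language, where every step is machine-checked and a candidate proof either verifies or it doesn't. For a flavor of what that looks like (at toy scale, nowhere near IMO difficulty), here's a Lean 4 proof that addition of natural numbers commutes:

```lean
-- Toy Lean 4 example: addition on natural numbers is commutative.
-- The checker verifies every step, and that pass/fail signal is the
-- kind of feedback AlphaProof's reinforcement learning trains against.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```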
AI's promise of productivity is hitting a very human snag. New research reveals a stark contrast: executives are bullish on AI's potential, while employees are drowning in digital overwhelm. This gap isn't just about technology—it's a leadership challenge that demands a delicate balance of innovation and empathy. As we rush to embrace AI, are we leaving our workforce behind?
Many of us have now heard the refrain "AI won't replace you, but people using AI will". This oldie-but-goodie from HBR advocates that leaders focus on building organizational AI capabilities. AI leaders are recognizing the importance of training and upskilling their teams on data best practices, AI/ML development, AI leadership, and GenAI usage. As fast as things are moving, we're still in the early stages of AI adoption, and leaders who prioritize capability development will outperform in the long run.
A late start on AI adoption isn't a death sentence – it's a wake-up call. ServiceNow's study shows that even AI leaders are just scratching the surface. Your path forward? Start small but think big. Leverage AI in your existing platforms for quick wins, while simultaneously exploring how AI could impact your underlying business fundamentals. This balanced approach can help close the gap with the pacesetters.
Miro's latest product update provides a glimpse into their new “sidekicks” (their term for AI agents). These sidekicks will have specific roles (launching with Agile Coach, Product Leader, and Product Marketer) and can perform tasks directly in the canvas. We’re constantly experimenting with how we utilize AI in our design and strategy processes at Sam and we look forward to getting our hands on these new Intelligent Canvas features.
Northeastern University researchers investigate perceptions of fairness in this study on AI in hiring. Turns out, job seekers are more accepting of AI pre-screening when it's presented as "demographic-blind" rather than ensuring diversity. This raises some questions, for instance — How transparent should companies be about AI in hiring? Are we trading one form of bias for another? We're curious to hear your experiences – have you been on either side of the process?
GPT-4o mini, OpenAI's latest release, dropped last week. It's a solid improvement on other lightweight models, retaining a broad range of GPT-4o's output quality while coming in at a significantly lower cost (60% lower than GPT-3.5 Turbo). These evolutions are important to track, as they can suddenly unlock previously unviable use cases.
Etsy has taken a stand on the flood of AI-assisted crafts with a set of "Creativity Standards", asking sellers to tag items so it's clear just how much of their offering is human-produced. This starts in the arts, but how and when will it extend into customer service, data analytics, software development, etc.? We believe it will be increasingly important to be transparent about the extent of AI assistance in products and services. We're curious to see to what extent consumers will pay a premium for human creation over the coming years.
Like the rise of many previous technologies, AI is moving through a chain of accelerating tipping points. "It isn't a steady curve but a series of thresholds that, when crossed, suddenly and irrevocably change aspects of our lives." In this article, Ethan Mollick discusses the impact curve of AI, from toy to tool. We love the idea of maintaining an "impossibility list" of things that insert-your-tool-of-choice can almost do today. As foundational capabilities continue to increase, test your list and incorporate new possibilities quickly. This will accelerate your efforts and act as a live measure of the evolution that's happening right in front of us.
As AI agent use increases, what does the AI do when it can’t complete a task on its own? It hires humans to do it, of course. This may feel gross, but is it an inevitable next step? We don’t know, but we have (so many) questions. When this starts, where does it stop? How does this power dynamic evolve? What happens to job security? Personal agency? Where does economic control centralize? How will social structures adapt? Will we still get coffee breaks? We want to know your take — let us know in the comments.
Ethan Mollick says that beyond the first “individuals” wave of AI (I’m an artist-coder-writer!) the second wave is about putting AI to work within organizations. He argues that *all* employees (not just IT departments) should be experimenting — “...the source of any real advantage in AI will come from the expertise of employees, which is needed to unlock the expertise latent in AI.” We think so too, and by the way, we offer custom AI training for leaders, teams and entire organizations. (Not a coincidence!)
LLMs (like ChatGPT) use Reinforcement Learning from Human Feedback (RLHF) to improve the quality of their output. As these models get smarter and more accurate, it's increasingly difficult for humans to evaluate the results and provide valuable feedback. OpenAI is developing CriticGPT (I bet it's great at parties) to assist humans with RLHF. There are still several limitations in the process (hallucinations remain an issue with CriticGPT itself, for example), so humans are still a critical part of the loop, for now.
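CriticGPT's job is to write critiques that help human labelers spot flaws before they grade an answer. Here's a minimal sketch of what that critic-in-the-loop labeling step could look like; `llm()`, `critic()`, and the prompt wording are our own illustration, not OpenAI's actual pipeline:

```python
def llm(prompt: str) -> str:
    """Hypothetical helper for the model being evaluated."""
    raise NotImplementedError("plug in your model client here")

def critic(prompt: str) -> str:
    """Hypothetical helper for a CriticGPT-style critique model."""
    raise NotImplementedError("plug in your critique model here")

def collect_label(question: str) -> dict:
    answer = llm(question)
    # The critique model surfaces possible bugs and errors in the answer,
    # so the human grader doesn't have to find them unaided.
    critique = critic(
        f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
        "List any errors or questionable claims in this answer."
    )
    print(f"ANSWER:\n{answer}\n\nCRITIQUE:\n{critique}")
    rating = input("Rate the answer 1-7: ")  # the human stays in the loop
    return {"question": question, "answer": answer,
            "critique": critique, "rating": int(rating)}
```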
Renowned venture capitalist and former Wall Street securities analyst Mary Meeker is regarded as one of the most influential figures in tech and investment, with deep insights into internet growth, technology adoption, and digital transformation. So when she publishes her first report in over four years, we're here for it. We recommend the full report (https://www.bondcap.com/reports/aiu), where she tackles many important questions for leaders, like who the customer is, how to serve them, and how tech and universities can be partners rather than obstacles.
The recent AI Engineer World's Fair saw a fascinating keynote from Romain Huet at OpenAI, with a few demos of ChatGPT's voice and vision modes. The ability to recognize rough scribbles and sketches and summarize physical pages of text in seconds was impressive, but the live pair-programming demo (around 7:20:00) blew us away, providing another glimpse into how our workflows will evolve across all functions and roles, not just for engineers.
Target is piloting a generative AI chatbot for employees called Store Companion, which acts as a coach and process expert, answering procedural questions in seconds and freeing staff to spend more time with customers. The name could maybe use a little workshopping, but these kinds of tactical deployments are becoming baseline in the AI adoption race, and we look forward to seeing how they integrate with broader strategy.
Hot on the heels of the Sonnet 3.5 release, Anthropic's answer to Custom GPTs is here in the form of Projects. Along with the ability to layer documents such as style guides and codebases into the background knowledge of your instructions (huge), Projects allows users and teams to organize their chats into consolidated zones, super helpful in our opinion. And how does it compare to the competition? Early days yet, but in our initial testing we've been impressed, getting great results fast.
Is GenAI a product or a feature? Benedict Evans wrote a great piece about how, with this launch, Apple will take things in a new direction from the other incumbents. With Apple's tech embedded on the device, customer expectations will take a big step forward, and product companies will find that they need to be so much more than an LLM wrapper to be significant. I've had a number of conversations with people who are wrestling with this question right now, and I'd love to hear your thoughts.
Ilya Sutskever, Daniel Gross, and Daniel Levy think so, and have launched a new company called Safe Superintelligence Inc. (SSI) in the race for AGI. These are heavy hitters making a Big Ballsy Bet, and we've got the popcorn out for this boom-or-bust move away from your typical product roadmap. A big TBD on whether SSI has the resources and capability to make this happen, but we're glad that someone is taking an interest.
Based on over seven years of the authors' research, this piece examines the psychological impact of AI and automated technologies on individuals, showing significant influence on factors like sales, customer loyalty, and employee performance. It offers practical insights for leaders and managers on effectively integrating these technologies into service and business processes, product design, and communication in ways that benefit customers and employees.
Nearly half of surveyed Canadian managers and executives believe their employees are underprepared for AI, with only five percent deeming their workforce "very prepared." The study also highlights a significant talent gap between academic AI research and industrial application in Canada, and an urgent need to enhance AI literacy and training if the country is to close its longstanding productivity gap.
Boston Consulting Group (BCG) anticipates that AI consulting will constitute 20% of its revenues in 2024 and expects this to double by 2026 due to the increasing integration of AI into various corporate operations. The rapid rise of generative AI is seen as a significant revenue driver, with the firm expanding its AI capabilities through partnerships with tech giants and by equipping its workforce with AI tools to enhance productivity and manage tasks efficiently.
Transport for London (TfL) has conducted an AI trial at a Tube station, using AI-powered cameras to monitor various activities and enhance safety and efficiency. The system, capable of identifying incidents such as fare evasion, falls, and potential suicides, demonstrated significant potential for improving station management, although it raises concerns about privacy and surveillance.
The AI Index 2024 report by Stanford provides a comprehensive overview of AI trends, technical advancements, and their impacts across various sectors. Key highlights include AI surpassing human benchmarks in specific tasks, significant investment growth in generative AI, rising AI-related regulations, and the increasing influence of AI in science and medicine, with a notable focus on responsible AI and its integration into the economy.
There are critical security risks associated with using large language models (LLMs). This OWASP Top 10 list provides guidelines and best practices for mitigating these risks, focusing on areas such as data privacy, model integrity, user authentication, and adversarial attacks to ensure the safe deployment and operation of LLM applications.
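A recurring theme across the list is treating model output as untrusted input, just like user input. As a minimal illustration (our own sketch, not code from OWASP), here's the difference between piping an LLM's reply straight into a page or an action versus escaping it and allow-listing what it's permitted to trigger:

```python
import html

ALLOWED_ACTIONS = {"search", "summarize"}  # hypothetical app actions

def render_reply(llm_output: str) -> str:
    # Never interpolate raw model output into HTML: if a prompt-injected
    # response contains <script> tags, escaping neutralizes them.
    return f"<div class='reply'>{html.escape(llm_output)}</div>"

def dispatch_action(requested_action: str) -> str:
    # Treat a model-requested action like any untrusted input:
    # check it against an allow-list instead of executing it blindly.
    if requested_action not in ALLOWED_ACTIONS:
        raise PermissionError(
            f"Model requested disallowed action: {requested_action!r}")
    return f"executing {requested_action}"
```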