Tag: technology

  • The January Paradox: Why Digital Habits Matter More Than Resolutions

    As we emerge from the festive blur into January’s stark reality, a telling pattern appears in our digital behaviour. While many of us instinctively retreat into endless scrolling to escape the winter blues, research suggests this might be exactly when we most need to reshape our relationship with technology.

    The conventional wisdom about January is fascinating. We’re told to make grand resolutions, start ambitious programmes, transform ourselves entirely. Yet psychological research reveals something counterintuitive: it’s precisely when we’re at our most vulnerable that tiny, consistent actions have the most profound impact.

    Through our work at Duck Score, we’ve observed something remarkable. Users who maintain simple daily check-ins during January report significantly different experiences from those who attempt dramatic lifestyle changes. It’s not about the magnitude of the action, but its consistency. A two-minute reflection each day creates more lasting change than hours of sporadic self-improvement efforts.

    This insight aligns with recent neuroscience findings about habit formation in challenging conditions. When we’re stressed or depleted – as many are in January – our brains actually become more receptive to micro-habits. It’s as if the mind, seeking stability, latches onto small, manageable patterns.

    But here’s where it gets really interesting: the role of AI in this process isn’t what you might expect. While many tech companies push AI as a replacement for human connection during difficult times, our research suggests technology works best when it creates bridges rather than substitutes. AI can help us recognise patterns in our emotional landscape, but it’s the small, human connections that actually shift our experience.

    Consider this: in our beta testing, users who combined daily personal check-ins with small group sharing showed remarkable resilience during challenging periods. The AI provided insights, but the human connections, even minimal ones, provided the emotional anchor.

    This challenges the traditional January narrative. Instead of grand transformations, what if we focused on tiny, consistent digital habits that support our well-being? Instead of trying to overhaul our lives, what if we simply committed to better understanding ourselves through small, daily practices?

    As we navigate these cold, dark days, perhaps the answer isn’t in dramatic changes or digital escapism, but in small, intentional moments of reflection and connection. Technology, when thoughtfully designed, can help create these moments without demanding more energy than we have to give.

    This January, ask yourself: what if the path through winter’s challenges isn’t about changing everything, but about understanding yourself a little better each day? What if the most powerful resolution isn’t a dramatic transformation, but a commitment to small, daily moments of self-awareness?

    In a world that demands constant evolution, sometimes the most revolutionary act is simply paying attention to where you are right now.

    Would love to hear your thoughts on how you’re approaching January. Are you focusing on grand changes, or finding power in smaller habits?

  • The Uncanny Valley Paradox: When AI Gets Too Close to Human

    Something strange is happening in our relationship with AI. Back in 1970, roboticist Masahiro Mori described the ‘uncanny valley’ – that unsettling feeling when something appears almost, but not quite, human. Think of ultra-realistic CGI characters whose slight imperfections make them more disturbing than cartoons. As AI becomes more sophisticated, you might expect this effect to fade. Yet the opposite appears to be true: our sensitivity to artificial humanity seems to be increasing.

    I first encountered this concept in Barcelona back in 2016. I was invited to do some work with Oliver Harrison, MD MPH, on a health moonshot at Telefonica, trying to imagine what the future of healthcare might look like. Mental health emerged as our greatest challenge, but we couldn’t have predicted how AI would transform this landscape.

    A personal tragedy during that time drove home the desperate need for better mental health support. It wasn’t just about accessibility anymore – it was about creating solutions that could genuinely help people while feeling authentic and trustworthy.

    Fast forward to today, and recent research reveals something fascinating. Studies from early 2024 show that even as we casually chat with ChatGPT and let AI help write our emails, we’re becoming more, not less, sensitive to artificial attempts at human connection. AI news anchors still trigger feelings of eeriness, while AI-generated forecasts face persistent scepticism compared to human predictions.

    This creates a real tension in mental health technology. How do we harness AI’s remarkable capability for pattern recognition and insight generation without triggering that uncanny valley response? The answer might lie in not trying to make AI more human-like at all.

    Through our work at Duck Score, we’re discovering that technology works best when it knows its role. AI excels at spotting patterns and generating insights from data – things the human brain isn’t optimised for. Meanwhile, real human connection happens best in small, trusted groups where people can be their authentic selves. It’s about creating distinct spaces where each can play to its strengths.

    It’s like we’ve been trying to teach fish to climb trees when they could be showing us new ways to swim. The real opportunity isn’t in making AI more human-like, but in using it to enhance genuine human connections. Perhaps the uncanny valley isn’t an obstacle to overcome, but a guide showing us where to draw the line between artificial and human interaction.

    Think about a therapist’s office. The value isn’t just in the therapist’s insights, but in the human connection – the empathy, understanding, and trust built over time. Similarly, our closest relationships aren’t built on pattern recognition or data analysis, but on shared experiences and emotional bonds. What if technology could support both these elements without trying to replicate them?

    This shift in thinking has profound implications. Instead of asking “How do we make AI feel more human?”, we should be asking “How do we use AI to support and enhance human connection?” It’s about creating digital spaces where AI can provide valuable insights while preserving authentic human interaction.

    Looking back at those early discussions in Barcelona, I realise we were asking the wrong questions. The future of digital mental health support doesn’t lie in crossing the uncanny valley, but in building bridges over it – creating environments where AI and human interaction each serve their distinct purpose in supporting mental wellbeing.

    The uncanny valley might just be showing us the way forward – not by trying to make technology more human, but by letting it be brilliantly, unashamedly artificial in service of genuine human connection.