Chatbots and You: How AI helps surgeons spot brain tumors


When I came back to work this week, my inbox was full of AI news. What caught my eye was how much money and effort are going into making chatbots that act just like people.

At Meta, CEO Mark Zuckerberg introduced new AI characters that the company’s 3 billion users can chat with on platforms like Facebook, Instagram, Messenger, and WhatsApp. These characters are inspired by real celebrities such as Snoop Dogg, Kylie Jenner, Tom Brady, Naomi Osaka, Paris Hilton, and Jane Austen.

According to Meta executives, these AI characters are designed to bring fun, learning, and social interaction to users, allowing them to chat about various topics like recipes and more. Zuckerberg mentioned that people prefer interacting with a variety of AI characters rather than a single super-intelligent one.

However, let’s be clear: these virtual friends aren’t just about connecting you with loved ones; they’re also about profit. Tech companies including Meta, OpenAI, Microsoft, and Google are racing to dominate the AI landscape. As The New York Times noted, Meta aims to boost user engagement across its apps, which rely primarily on ads for revenue. The more time users spend on Meta’s platforms, the more ads they see, and the more money the company makes.

Meta wasn’t the first to try to humanize AI chatbots; that idea dates back to ELIZA in the mid-1960s. Nor is it the first to put famous personalities on them, an approach that has already proven successful for Character.ai, a two-year-old platform that lets users interact with chatbots based on celebrities like Taylor Swift and fictional characters such as Super Mario. The company is seeking funding that could value it at between $5 billion and $6 billion, and it recently introduced a feature called Character Group Chat that lets users chat with multiple AI characters at once.

However, using celebrities in AI raises ethical concerns, particularly when those individuals are unaware of, or uncompensated for, the use of their likeness. Actor Tom Hanks recently warned his followers about a dental ad that used an AI-generated version of him without his permission. Hanks has previously discussed the dangers of AI and deepfake technology, noting that they make it possible to create a digital replica of a person at any age. Legal discussions are under way in industries such as Hollywood to establish intellectual property rights over actors’ faces and voices. The Writers Guild of America recently resolved its AI-related concerns for film and TV, but actors, represented by SAG-AFTRA, continue to negotiate, particularly over the use of “digital replicas.”

The following are other AI developments worth your attention.

For a price, ChatGPT can now hear and speak

OpenAI has introduced new voice and image features in ChatGPT, allowing users to engage in voice conversations and share images for discussion. These capabilities are accessible to ChatGPT Plus subscribers, priced at $20 per month. With this update, users can snap pictures of various things, like landmarks during travel, fridge contents for meal planning, or math problems for assistance.
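
This news is about the consumer app, but if you’re curious what image prompting looks like on the developer side, here’s a minimal sketch using OpenAI’s Python SDK. It assumes the newer openai client, a vision-capable model, and a placeholder image URL; none of these details come from OpenAI’s announcement.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask a vision-capable model about a photo, roughly the "what's in my fridge?"
# use case described above. The model name and image URL are placeholders.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What could I cook with the ingredients in this photo?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```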


According to Joanna Stern from The Wall Street Journal, conversing with ChatGPT feels remarkably human-like, akin to the interactions in the movie “Her,” where the protagonist falls in love with an AI named Samantha. The chatbot’s natural voice, conversational tone, and articulate responses often make it difficult to distinguish from a human. However, there are occasional challenges, such as slow response times and connection issues. In some instances, the conversation abruptly ends, a behavior Stern humorously compares to rudeness typically associated with humans.

A rude AI? Maybe the chatbots are getting more human after all. 

The end of human focus groups; hello, synthetic ones?

A company called Fantasy is creating “synthetic humans” for clients such as Ford, Google, LG, and Spotify, helping them learn about audiences, brainstorm product concepts, and generate new ideas. Wired reported that Fantasy uses the same kind of machine learning technology that powers chatbots like OpenAI’s ChatGPT and Google’s Bard to create these synthetic humans.

These AI agents are endowed with characteristics derived from real people through ethnographic research and are integrated into large language models like OpenAI’s GPT and Anthropic’s Claude. Furthermore, the agents can be programmed with knowledge about a client’s existing products or services, enabling them to engage in meaningful conversations about the client’s offerings.

Interestingly, Fantasy has also taken a hybrid approach, running focus groups that mix real people with synthetic humans. For oil and gas company BP, for example, these mixed groups discussed product ideas. According to Roger Rohatgi, BP’s global head of design, the synthetic participants keep offering varied responses long after human participants tire or run out of answers.
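
Fantasy hasn’t published details of its system, but the general pattern it describes (persona-conditioned language model agents answering questions in character) is easy to sketch. The Python example below is a speculative illustration using OpenAI’s chat API; the personas, model name, and question are invented for the example and aren’t drawn from Fantasy’s or BP’s work.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; Fantasy's real stack is not public

# Hypothetical personas of the kind ethnographic research might produce.
PERSONAS = {
    "Maya": "You are Maya, a 34-year-old urban commuter who cares about charging speed and price.",
    "Raj": "You are Raj, a 52-year-old fleet manager focused on reliability and total cost of ownership.",
}

def ask_panel(question: str) -> dict:
    """Put one focus-group question to every synthetic participant."""
    answers = {}
    for name, persona in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model would do
            messages=[
                {"role": "system",
                 "content": persona + " Answer in one or two sentences, staying in character."},
                {"role": "user", "content": question},
            ],
        )
        answers[name] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    panel = ask_panel("What would make you switch to an EV charging subscription?")
    for name, answer in panel.items():
        print(f"{name}: {answer}")
```

A production system would presumably add memory, product knowledge, and moderation on top, but the persona-as-system-prompt idea is the core of it.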

The ultimate goal might indeed be to have AI bots engage in conversations with each other. However, there’s a significant challenge: training AI characters is a complex task. Wired spoke with Michael Bernstein, an associate professor at Stanford University involved in creating a community of chatbots called Smallville. He emphasized the need to question how accurately language models mirror real human behavior.

Bernstein pointed out that AI-generated characters, while impressive, are not as intricate or intelligent as real people. They might tend to exhibit stereotypes and lack the diversity seen in real populations. Making these models more faithful reflections of reality is, according to Bernstein, still an unresolved research question.


Trust and ethics are important

Deloitte’s 2023 report on the “State of Ethics and Trust in Technology” emphasizes the crucial role humans play in shaping the development, deployment, and use of AI tools and systems. The report advocates for organizations to establish trustworthy and ethical principles for emerging technologies. It also stresses the importance of collaborative efforts between businesses, government agencies, and industry leaders to create consistent and ethically robust regulations for these technologies.

Failure to adhere to ethical standards can lead to severe consequences, including reputational damage, harm to individuals, and regulatory penalties. Deloitte’s research highlights the connection between financial losses and employee dissatisfaction due to ethical breaches, indicating that companies engaged in such behavior may struggle to attract and retain talent. One study even revealed that employees in companies involved in ethical breaches experienced a 50% decrease in cumulative earnings over the subsequent decade compared to their counterparts in other companies.

Working on the margins for brain surgery

Surgeons face a difficult judgment call when removing brain tumors: how much of the surrounding tissue to take out so that all the cancerous cells are excised without causing neurological damage. A recent study published in Nature describes a promising advance: a deep-learning system called Sturgeon, developed by scientists in the Netherlands, that helps diagnose tumors while the operation is under way.

Sturgeon employs deep learning to assist surgeons in striking the delicate balance between maximizing tumor removal and minimizing the risk of neurological harm. During surgery, the AI system scans segments of a tumor’s DNA, identifying specific chemical modifications that offer a detailed diagnosis of the tumor type and subtype. This real-time diagnosis helps surgeons decide the appropriate level of aggression for the operation.

In tests on frozen tumor samples from previous brain cancer surgeries, Sturgeon accurately diagnosed 45 of 50 cases within 40 minutes of starting DNA sequencing. In 25 live brain surgeries, including operations on children, it delivered 18 correct diagnoses. Not every brain tumor can be identified from the chemical modifications this method analyzes, but the results point to a promising future for AI-assisted brain tumor surgery.
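
The study’s actual model, training data, and class list live in the Nature paper, but the underlying idea, a neural network classifying tumor type from sparse, partially sequenced methylation-style signals, can be sketched briefly. The PyTorch example below is purely illustrative: the number of CpG sites, the class count, the layer sizes, and the random “reads” are assumptions, not figures from the paper.

```python
import torch
import torch.nn as nn

# Illustrative only: the real Sturgeon model, its input encoding, and the tumor
# class list come from the Nature study; every size below is made up.
N_CPG_SITES = 10_000     # methylation sites the sequencer might report on (assumed)
N_TUMOR_CLASSES = 90     # number of tumor types and subtypes (assumed)

class MethylationClassifier(nn.Module):
    """Tiny feed-forward net mapping sparse methylation calls to tumor classes."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_CPG_SITES, 256),
            nn.ReLU(),
            nn.Linear(256, N_TUMOR_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = MethylationClassifier()

# Mid-surgery, only a fraction of sites have been sequenced, so the input is
# mostly zeros (unobserved), with 1.0 marking sites reported as methylated.
sparse_reads = torch.zeros(1, N_CPG_SITES)
observed = torch.randint(0, N_CPG_SITES, (500,))   # fake partial reads
sparse_reads[0, observed] = 1.0

probs = torch.softmax(model(sparse_reads), dim=-1)
top = torch.topk(probs, k=3)
print("Top candidate tumor classes:", top.indices.tolist())
print("Probabilities:", [round(p, 3) for p in top.values[0].tolist()])
```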

AI word of the week: AI

Given how much of this week’s news is about making chatbots seem human, the obvious pick would be “anthropomorphism”: the attribution of human characteristics, emotions, or behaviors to non-human entities, including AI systems and robots. It’s a concept that continues to shape discussions about the ethics and societal implications of AI, but the definition more or less speaks for itself.


So instead, I offer up the Council of Europe’s definition of “artificial intelligence”:

AI encompasses a broad range of sciences, theories, and techniques that aim to reproduce human cognitive abilities in machines. Current developments strive to entrust machines with complex tasks previously carried out by humans.

Within the AI community, there’s a distinction between “strong” AI, which possesses the capability to independently contextualize various specialized problems, and “weak” or “moderate” AI, which excels within specific domains it has been trained for. Some experts argue that achieving “strong” AI would necessitate not only enhancing the performance of existing systems but also making significant progress in fundamental research to model the entire world comprehensively. This differentiation highlights the ongoing debate and challenges in the field of artificial intelligence.


For comparison, here’s the US State Department quoting the National Artificial Intelligence Act of 2020: AI refers to a machine-based system that can make predictions, recommendations, or decisions affecting real-world or virtual environments based on human-defined objectives.
