The Innovator's Radar newsletter enables you to stay on top of the latest business innovations. Enjoy this week's edition.

Jennifer L. Schenker, The Innovator's Founder and Editor-in-Chief
- N E W S I N C O N T E X T -
An AI-powered transcription tool built on OpenAI's Whisper model and widely used in the medical field to help doctors communicate with their patients touts itself as having "human-level robustness and accuracy." But the tool sometimes invents things that no one ever said, posing potential risks to patient safety, and deletes the underlying audio from which the transcriptions are generated, leaving medical staff no way to verify their accuracy, AP News reported on October 26.

Meanwhile, Wired reported on October 24 that AI-enhanced search engines from Google, Microsoft, and Perplexity have been surfacing debunked and racist research claiming the genetic superiority of white people over other racial groups. The finding, uncovered through investigative work by Hope Not Hate, a UK-based anti-racism organization, has added to concerns about racial bias and radicalization in AI-powered search.

Both stories illustrate the gulf between AI hype and reality, as well as the dangers of overestimating the technology. Read on to get the key takeaways from these stories and learn about this week's other important technology news impacting business.
Stay on top of the latest business innovations and support quality journalism. Subscribe to get unlimited access to all of The Innovator's independently reported articles.
While much attention is being given to AI and quantum computing, there are an estimated 200 critical and emerging technologies shaping today’s technological landscape, each with its own cybersecurity implications. “There is a Pandora’s box full of new technologies coming to market,” warns Dr. Hoda Al Khzaimi, director and founder of the Center for Emerging Technology Accelerated Research (EMARATSEC) and associate vice provost for research translation and entrepreneurship at New York University Abu Dhabi (NYUAD), United Arab Emirates. She is a co-author of a recent World Economic Forum report, Navigating Cyber Resilience in the Age of Emerging Technologies.

Indeed, the rapid growth in investments in emerging technologies, from approximately $4 billion in 2018 to more than $3.2 trillion today, demonstrates a significant surge in global interest and development, underscoring the need for a broad, inclusive approach to technology assessment and strategy development, says the report.

In the face of this complex and evolving threat landscape, a traditional mindset of “security by design,” which focuses on embedding security features into new technologies from the outset, is no longer sufficient, says Al Khzaimi, who is also co-chair of the Forum’s Global Future Council for Cybersecurity and director emeritus of NYUAD’s Centre for Cybersecurity. Instead, there is a pressing need to adopt a “resilience by design” approach, which ensures that systems can withstand and recover from the inevitable attacks that will occur as these technologies proliferate, she says.
- I N T E R V I E W O F T H E W E E K -
Who: Professor Renée Cummings, listed among the world’s top 100 women in AI ethics, is an AI, data and tech ethicist and the first data activist-in-residence at the University of Virginia’s School of Data Science, where she was named Professor of Practice in Data Science. She is also the inaugural senior fellow in AI, Data and Policy at All Tech Is Human, a leading international think tank; a nonresident senior fellow at The Brookings Institution and co-director of Brookings’ Equity Lab; and a member of the World Economic Forum’s Data Equity Council and AI Governance Alliance.

Topic: Reimagining equity in the age of AI.

Quote: "We can’t have equitable AI without equitable data. The data we have, which is being used to make critical decisions from the C-Suite to Main Street, was created with historical inequities and biases, so the equity challenge with AI is a data equity challenge. AI governance and data governance must walk hand in hand."
- S T A R T U P O F T H E W E E K -
AI’s large language models (LLMs) lack transparency. Nobody, including the people who create them, knows why an LLM gives the exact answer it gives or why it makes a particular decision. That’s a problem for organizations in high-impact regulated industries like financial services or healthcare, for both legal and ethical reasons. The issue is not limited to LLMs: most AI models are based on opaque statistics that cannot easily be understood. “You can’t trust what you can’t control, and you can’t control something you don’t understand,” says Angelo Dalli, CTO and Chief Scientist of UMNAI, a UK-based startup. UMNAI is trying to tackle this issue by marrying neural networks and LLMs with neuro-symbolic AI, which represents knowledge through logic, reasoning and an understanding of cause and effect rather than pure statistical prediction and association, and uses rule-based systems and logical inference to derive conclusions. So how does that work with real-world business applications? Read on to find out.
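For intuition, here is a minimal sketch of the neuro-symbolic pattern in Python. It is an illustration only, not UMNAI's actual architecture or API: the Applicant fields, the weights in neural_score, and the lending rules are all hypothetical. What it shows is the division of labor: an opaque statistical component proposes a risk score, and an explicit, auditable rule layer derives the final decision together with a human-readable trace.

```python
# Toy neuro-symbolic decision pipeline (illustrative only, not UMNAI's system):
# a statistical scorer proposes, an explicit rule layer decides and explains.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Applicant:
    income: float          # annual income in EUR (hypothetical feature)
    debt_ratio: float      # debt payments divided by income
    years_employed: float  # years in current employment


def neural_score(a: Applicant) -> float:
    """Stand-in for an opaque statistical model (e.g. a neural network).

    Returns a default-risk score in [0, 1]; the weights are made up."""
    raw = 0.4 - 0.000004 * a.income + 0.8 * a.debt_ratio - 0.02 * a.years_employed
    return min(1.0, max(0.0, raw))


# Symbolic layer: named rules, so every decision carries a human-readable
# justification -- the rule-based, logical-inference half of the hybrid.
Rule = Tuple[str, Callable[[Applicant, float], bool]]

RULES: List[Rule] = [
    ("debt ratio must be below 40%", lambda a, s: a.debt_ratio < 0.40),
    ("risk score must be below 0.5", lambda a, s: s < 0.5),
    ("at least one year of employment", lambda a, s: a.years_employed >= 1.0),
]


def decide(a: Applicant) -> Tuple[bool, List[str]]:
    """Combine the statistical score with rule-based inference.

    Returns the decision plus a trace explaining exactly why."""
    score = neural_score(a)
    failed = [name for name, check in RULES if not check(a, score)]
    if failed:
        return False, ["violated: " + name for name in failed]
    return True, [f"all rules passed (risk score {score:.2f})"]


if __name__ == "__main__":
    applicant = Applicant(income=55_000, debt_ratio=0.25, years_employed=4)
    approved, trace = decide(applicant)
    print("approved" if approved else "declined", trace)
```

Because every outcome is justified by named rules rather than raw model weights, the trace is something a compliance team in a regulated industry can actually inspect and challenge, which is the kind of transparency pure statistical models cannot offer.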
- N U M B E R O F T H E W E E K -
€1.4 billion: the amount the European Innovation Council (EIC) said this week it will invest in breakthrough innovation across Europe in 2025. The funding, part of Horizon Europe, targets deep tech and strategic technologies essential for Europe’s competitiveness.
|