The Innovator's Radar newsletter enables you to stay on top of the latest business innovations. Enjoy this week's issue. Jennifer L. Schenker, Founder and Editor-in-Chief of The Innovator
- N E W S I N C O N T E X T -
Two reports out this week look at how large language models (LLMs) - deep learning algorithms that can recognize, summarize, translate, predict, and generate content using very large datasets - will impact the future of work. A World Economic Forum white paper says that LLMs could be a boon for jobs that require critical thinking, complex problem-solving skills, and creativity, including those in engineering, mathematics, and scientific analysis. These tools could benefit workers by increasing the productivity of routine tasks and making their roles more rewarding and focused on higher added value.

But a Boston Consulting Group (BCG) report cautions that GenAI is a double-edged sword. When GenAI has mastered a type of task, virtually everyone (about 90% of people) will see a boost in performance, says the BCG report. The trouble is that people find it difficult to determine when GenAI has achieved this mastery. This is critical, according to the report, because GenAI can create significant value (a 40% performance improvement over not using GPT-4) for creative product innovation, but can also erode value by as much as 23% when used for business problem solving. The BCG report found that in some cases GenAI can be good enough to deliver final drafts, but even when used for the right task GenAI may hurt a company's overall creativity. Read on to learn more about this story and the week's most important technology news impacting business.
Stay on top of the latest business innovations and support quality journalism. Subscribe to get unlimited access to all of The Innovator's independently reported articles.
In July, the Securities and Exchange Commission (SEC) adopted rules requiring public companies operating in the U.S. to disclose cybersecurity incidents and annually report information regarding their cybersecurity risk management, strategy, and governance. The new rules should serve as a wake-up call to all companies, anywhere in the world, regardless of their size or sector. Ignorance is not an excuse anymore. I first talked about this back in 2015 at the World Economic Forum's annual meeting in Davos, where I said, "There are only two kinds of companies: those that have been hacked, and those that don't know they have been hacked." This was considered shocking news then, but has since become a given, with the situation getting exponentially worse every year, and artificial intelligence will further accelerate these concerns. Bad actors are already using AI to make remarkably effective copies of people's voices and deepfake videos. It will be a big issue in the year ahead, not just because fraudsters can copy people's voices to gain access to their bank accounts and personal information, but also because the technology can be used to influence political campaigns and gain access to defense systems and critical infrastructure. The U.S. government is recognizing this now, too, with the National Security Agency, the Federal Bureau of Investigation, and the Cybersecurity and Infrastructure Security Agency issuing a cybersecurity information sheet that warns how "synthetic media" - fake information and communications spanning text, video, audio, and images - poses a growing threat to corporations. This is an excerpt from an exclusive column by former Cisco Executive Chairman John Chambers for The Innovator. Paying subscribers can access the full column.
- I N T E R V I E W O F T H E W E E K -
Who: De Kai, who invented and built the world's first global-scale online language translator, is a professor of computer science and engineering at Hong Kong University of Science and Technology (HKUST) and Distinguished Research Scholar at Berkeley's International Computer Science Institute. He serves on the board of AI ethics think tank The Future Society and was one of eight inaugural members of Google's AI ethics council.
Topic: Responsible AI
Quote: "The issues around AI governance are still not being elevated high enough in companies. AI is going to be everywhere. This is not an area where corporations will want to be playing catch-up."
- S T A R T U P O F T H E W E E K -
South Korean startup Mind AI is developing an artificial intelligence engine that converts natural language into a new data structure to perform human-like reasoning, offering what it says are significant advantages over large language models (LLMs) provided by Big Tech companies like OpenAI, Microsoft, and Google. The Mind AI engine takes natural language inputs and transforms them into internationally patented data structures, which it calls canonicals. Once language is encoded in this form, Mind AI says its engine can make connections, perform logical reasoning, and generate intelligent responses without the need for massive amounts of data and computing power. The company calls it a new form of AI. Differentiators include transparency (the ability to trace back precisely how a conclusion was reached), a dramatically higher rate of accuracy (relative to LLM-based systems), context hopping, and an ability to handle more diverse languages natively than other types of AI systems. The company, formed by two serial entrepreneurs, has attracted strategic investments from early-stage VCs, family offices of major business conglomerates, and high-net-worth individuals in South Korea, the Philippines, Thailand, and Canada. In addition to those countries, Mind AI has a presence in India, with plans to target Europe and North America. The mission is to translate intelligence into a machine and build the most advanced natural language reasoning AI. "We want to be the CPU [Central Processing Unit] of AI - the reasoning algorithm that anyone can use," says co-founder and CEO Paul Lee, M.D., a serial entrepreneur and a medical doctor by training.
- N U M B E R O F T H E W E E K -
|
Percentage of executives responding to a recent McKinsey survey who said their company's current business model would need to change "moderately to completely" in order to remain economically viable by 2025. Innovation is central to survival, says McKinsey, and innovative companies are leveraging tech to get ahead.
|
|
|