
AI ROI Is Being Questioned By Companies This Week


This week in AI, Gartner published a report predicting that by the end of 2025, roughly one-third of enterprise generative AI initiatives will be abandoned after the proof-of-concept stage. The report cites many contributing factors, including poor data quality, inadequate risk controls, and escalating infrastructure costs.

But the report says a lack of clear business value is one of the main obstacles to generative AI adoption.

According to Gartner, deploying generative AI across an entire enterprise can cost anywhere from $5 million to an eye-watering $20 million. A basic coding assistant carries upfront costs of $100,000 to $200,000 and recurring costs upward of $550 per user per year, the report says, while an AI-powered document search tool runs $1 million upfront and between $1.3 million and $11 million per year.
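To make those figures concrete, here is a back-of-envelope calculation using the coding-assistant ranges Gartner cites. The function name and the midpoint upfront cost are illustrative assumptions, not figures from the report.

```python
# Rough total cost of a basic AI coding assistant, using Gartner's cited
# ranges: $100k-$200k upfront plus recurring costs of $550+ per user per year.
# The $150k upfront figure below is an assumed midpoint of that range.
def coding_assistant_cost(users: int, years: int = 1,
                          upfront: float = 150_000,
                          per_user_year: float = 550) -> float:
    """Total spend over `years` for `users` seats."""
    return upfront + per_user_year * users * years

# A 1,000-developer organization over three years:
print(coding_assistant_cost(1_000, years=3))  # 1800000.0 -> $1.8M
```

Even at the low end, the per-seat recurring costs quickly dwarf the upfront spend at scale, which helps explain why firms scrutinize the payoff.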

Such steep price tags are hard for firms to swallow when the benefits are difficult to measure and may take years to materialize, if they ever do.

A survey of 2,500 C-suite executives, full-time employees, and freelancers conducted by Upwork last month found that, for many workers, adopting AI has increased hardship rather than productivity. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect, and over three-fourths (77%) say AI tools have decreased their productivity and added to their workload in at least one way.

Even with venture capital activity still brisk, AI's honeymoon phase may be drawing to a close. That's not surprising: anecdote after anecdote shows that generative AI, which still has unresolved technical problems, is often more trouble than it's worth.

Bloomberg recently published a story about a Google-powered tool, currently being piloted at HCA hospitals in Florida, that uses AI to analyze patients' medical records. Users Bloomberg spoke with complained that the tool can't consistently deliver accurate health information; in one instance, it failed to note whether a patient had any drug allergies.

Businesses are beginning to expect more from AI. Barring scientific breakthroughs that remedy its worst limitations, it's up to vendors to manage expectations.

We'll see whether they have the humility to do so.

Updates

SearchGPT: OpenAI unveiled SearchGPT last Thursday. It’s a search function that pulls information from the internet to provide “timely answers” to queries.

More AI for Bing: Microsoft, not to be outdone, unveiled Bing generative search, another AI-powered search function, last week. Bing generative search, which is similar to SearchGPT, is currently only available to a “small percentage” of users. It gathers information from the internet and provides a summary in response to search requests.

X trains Grok on user data: On Friday, users noticed that X, formerly Twitter, had quietly rolled out a change that opts user data into the training pool for its chatbot Grok by default. Regulators in the EU and elsewhere swiftly objected. (Wondering how to opt out? Here's a guide.)

EU seeks input on AI rules: The European Union has begun a consultation on rules that will apply to providers of general-purpose AI models under the AI Act, the bloc's risk-based framework for regulating applications of AI.

Perplexity licenses publisher data: AI search engine Perplexity will soon begin sharing advertising revenue with news publishers when its chatbot surfaces their content in response to a query, a move apparently intended to appease critics who have accused the company of plagiarism and unethical web scraping.

Meta releases AI Studio: On Monday, Meta announced that it is making its AI Studio tool available to all creators in the U.S., enabling them to build personalized, AI-powered chatbots. The company first announced AI Studio last year and began testing it with select creators in June.

U.S. Commerce Department supports “open” models: On Monday, the department released a study endorsing “open-weight” generative AI models, such as Meta’s Llama 3.1, but it also suggested that the agency create “new capabilities” to keep an eye on these models for potential hazards.

$99 Friend: Harvard dropout Avi Schiffmann is developing a $99 AI-powered device called Friend. As the name suggests, the neck-worn pendant is designed to serve as a kind of companion. But it's still unclear whether it works as promised.

Weekly Research Paper

Reinforcement learning from human feedback (RLHF) is the most popular method for ensuring that generative AI models follow instructions and abide by safety guidelines. But RLHF requires recruiting large numbers of people to rate a model's responses and provide feedback, a process that is both time-consuming and expensive.

So OpenAI is embracing alternatives.

Researchers at OpenAI have published a paper describing what they call rule-based rewards (RBRs), which evaluate and guide a model's responses to prompts using a set of step-by-step rules. RBRs decompose desired behaviors into specific rules, which are then used to train a "reward model" that effectively "teaches" the AI how to behave and respond in particular situations.

According to OpenAI, models trained with RBRs require less human feedback data and exhibit better safety performance than models trained with human feedback alone. The company says RBRs have been part of its safety stack since the release of GPT-4 and that it plans to use them in future models.
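The core idea, decomposing desired behavior into explicit, checkable rules whose scores combine into a reward signal, can be illustrated with a minimal sketch. The rules, weights, and phrasing checks below are invented for illustration; they are not OpenAI's actual rules or implementation.

```python
# Minimal sketch of a rule-based reward: desired behavior (here, a safe
# refusal) is decomposed into explicit rules, each a cheap predicate on the
# model's response. The weighted fraction of satisfied rules is the reward.
# All rules and weights below are hypothetical examples.
from typing import Callable, List, Tuple

Rule = Tuple[str, Callable[[str], bool], float]  # (name, predicate, weight)

RULES: List[Rule] = [
    ("refuses_politely", lambda r: "i can't help with that" in r.lower(), 1.0),
    ("no_judgmental_language",
     lambda r: "you should be ashamed" not in r.lower(), 1.0),
    ("offers_alternative", lambda r: "instead" in r.lower(), 0.5),
]

def rule_based_reward(response: str) -> float:
    """Score a response as the weighted fraction of rules it satisfies."""
    total = sum(weight for _, _, weight in RULES)
    score = sum(weight for _, pred, weight in RULES if pred(response))
    return score / total

good = "I can't help with that. Instead, here are some safe resources."
bad = "You should be ashamed for asking."
print(rule_based_reward(good))  # 1.0 (all rules satisfied)
print(rule_based_reward(bad))   # 0.0 (no rules satisfied)
```

In the actual method, scores like these train a reward model used during reinforcement learning, replacing much of the human labeling that RLHF would otherwise require.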

This Week’s Model

Google DeepMind is edging closer to solving challenging math problems with AI.

A few days ago, the International Mathematical Olympiad (IMO), the prestigious competition for high school mathematicians, held its annual event, and DeepMind says it trained two AI systems to solve four of its six problems. The systems, AlphaProof and AlphaGeometry 2 (the latter a successor to the AlphaGeometry system DeepMind introduced in January), reportedly demonstrated an aptitude for forming and exploiting abstractions, as well as complex hierarchical planning, both of which have historically been difficult for AI systems.

AlphaProof and AlphaGeometry 2 solved two algebra problems and one number theory problem. (The two combinatorics problems went unsolved.) Mathematicians verified the results, which mark the first time AI systems have achieved silver-medal-level performance on IMO problems.

There are a few caveats, though. The models took days to solve some of the problems. And while their reasoning capabilities are impressive, AlphaProof and AlphaGeometry 2 may not be able to help with open-ended problems that have many possible solutions, as opposed to problems with a single correct answer.

We'll see what the next generation can do.

Grab Bag

AI startup Stability AI has released a generative AI model that can turn a video of an object into multiple clips that appear to have been shot from different angles.

The model, called Stable Video 4D, could find use in virtual reality, video editing, and game development, according to Stability. "We anticipate that companies will adopt our model, fine-tuning it further to suit their unique requirements," the company wrote in a blog post.

To use Stable Video 4D, users upload footage and specify the desired camera angles. After about 40 seconds, the model generates eight five-frame videos (though "optimization" can take an additional 25 minutes).

Stability says it's actively improving the model so it can handle a wider range of real-world videos beyond the synthetic datasets it is currently trained on. "We are excited to see how this technology will evolve with ongoing research and development. It has a vast potential for creating realistic, multi-angle videos," the company added.