
2024’s Top Ten Machine Learning And AI Trends


The release of ChatGPT in November 2022 set off a sea change in artificial intelligence that played out across 2023, a year that brought a string of exciting innovations, including complex multimodal models and a thriving open source landscape.

However, as corporations shift their focus from experimentation to real-world initiatives, attitudes are becoming more nuanced and mature, even as generative AI continues to enthrall the tech world. This year's trends reflect AI development and deployment strategies that are more sophisticated and cautious, with closer attention to safety, ethics, and the changing legal environment.

The top ten machine learning and artificial intelligence trends for 2024 are listed here.

1. Multimodal AI

By analyzing several input formats, including text, graphics, and sound, multimodal AI goes beyond conventional single-mode data processing and gets closer to simulating humans’ capacity to process a variety of sensory inputs.

“The world’s interfaces are multimodal,” declared Mark Chen, OpenAI’s head of frontiers research, during a presentation at the EmTech MIT conference in November 2023. “We want our models to see what we see and hear what we hear, and we want them to also generate content that appeals to more than one of our senses.”

OpenAI's GPT-4 model has multimodal capabilities that let it respond to both visual and audio input. During his presentation, Chen gave the example of photographing the contents of a refrigerator and asking ChatGPT to suggest a recipe using the ingredients it identifies. If the question is asked aloud in ChatGPT's voice mode, the exchange can even include audio.
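As a rough sketch of what such a request looks like in code, the example below uses the OpenAI Python SDK's chat completions endpoint, which accepts mixed text and image content. The model name, prompt, and image URL are illustrative placeholders, not details from Chen's demo.

```python
# Minimal sketch of a multimodal request via the OpenAI Python SDK.
# Model name, prompt, and image URL are illustrative; requires the
# OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable GPT-4-class model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Suggest a dish I could make with these ingredients."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```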

[Image: Mark Chen, head of frontiers research at OpenAI, speaks onstage at EmTech MIT beside a slide reading "Multimodal brings us closer to AGI." Credit: Lev Craig/TechTarget]

Although most generative AI initiatives today are text-based, "the real power of these capabilities is going to be when you can marry up text and conversation with images and video, cross-pollinate all three of those, and apply those to a variety of businesses," said Matt Barrington, Americas emerging technologies leader at EY.

The real-world applications of multimodal AI are numerous and growing. In healthcare, for instance, multimodal models can evaluate medical images in the context of a patient's history and genetic information to improve diagnostic accuracy. At the job-function level, they can expand the range of tasks employees can perform by giving those without formal training in design or coding access to basic design and coding capabilities.

Barrington remarked, “I can’t draw to save my life.” “Well, now I can. I’m decent with language, so … I can plug into a capability like [image generation], and some of those ideas that were in my head that I could never physically draw, I can have AI do.”

Multimodal capabilities could also improve models by providing them with fresh data to work with. “As our models get better and better at modeling language and start to hit the limits of what they can learn from language, we want to provide the models with raw inputs from the world so that they can perceive the world on their own and draw their own inferences from things like video or audio data,” Chen stated.

2. Agentic AI

With the advent of agentic AI, reactive AI is gradually giving way to proactive AI. AI agents are sophisticated systems capable of autonomy, proactivity, and independent action. Unlike traditional AI systems, which mainly respond to user inputs and follow preset programming, AI agents are designed to understand their environment, set goals, and act to achieve those goals without direct human involvement.

In environmental monitoring, for instance, an AI agent could be trained to gather data, identify patterns, and initiate preventive measures in response to hazards such as the early signs of a forest fire. Similarly, an AI financial agent could actively manage an investment portfolio using adaptive strategies that react in real time to shifting market conditions.
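Stripped to its essentials, such an agent is a sense-plan-act loop. The sketch below is a purely hypothetical illustration of that pattern for the forest fire scenario; the sensor readings and the scoring rule are toy placeholders standing in for real instruments and a trained model.

```python
# Hypothetical sense-plan-act loop for an environmental-monitoring agent.
# All data and thresholds are toy placeholders, not a real monitoring API.
import time

def read_sensors() -> dict:
    """Perceive the environment (stand-in for real instrument feeds)."""
    return {"temperature_c": 41.0, "smoke_ppm": 180.0, "humidity_pct": 12.0}

def assess_risk(obs: dict) -> float:
    """Toy scoring rule standing in for a trained pattern-detection model."""
    score = 0.0
    if obs["smoke_ppm"] > 150:
        score += 0.5
    if obs["temperature_c"] > 40 and obs["humidity_pct"] < 15:
        score += 0.4
    return score

def act(risk: float) -> None:
    """Initiate preventive measures without waiting for human input."""
    if risk >= 0.8:
        print("ALERT: dispatch drone survey and notify fire service")
    elif risk >= 0.5:
        print("Raise sampling rate and flag the region for review")

def run_agent(poll_seconds: float, steps: int = 3) -> None:
    for _ in range(steps):       # bounded loop for demonstration
        act(assess_risk(read_sensors()))
        time.sleep(poll_seconds)

run_agent(poll_seconds=0.1)
```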

"2023 was the year of being able to chat with an AI," computer scientist Peter Norvig, a fellow at Stanford's Human-Centered AI Institute, wrote in a recent blog post. "In 2024, we'll see the ability for agents to get stuff done for you. Make reservations, plan a trip, connect to other services."

Combining agentic and multimodal AI could also open up new possibilities. In the presentation mentioned above, Chen gave the example of an application designed to identify the contents of an uploaded image. Previously, building such an application would have meant training and then deploying a custom image recognition model. With multimodal, agentic models, however, all of this could be accomplished through natural language prompts.

“I really think that multimodal together with GPTs will open up the no-code development of computer vision applications, just in the same way that prompting opened up the no-code development of a lot of text-based applications,” Chen stated.

3. Open Source AI

Building large language models and other powerful generative AI systems is expensive, demanding vast amounts of compute and data. By building on top of others' work through an open source paradigm, however, developers can cut costs and broaden access to AI. Open source AI is made publicly available, usually for free, allowing researchers and organizations to contribute to and build upon existing code.

GitHub data from the past year shows a notable surge in developer interest in AI, especially generative AI. In 2023, generative AI projects entered the top 10 most popular projects on the code hosting platform for the first time, with projects such as Stable Diffusion and AutoGPT attracting thousands of new contributors.

At the start of last year, few open source generative models were available, and those that existed often lagged behind proprietary offerings such as ChatGPT. Over the course of 2023, however, the field expanded substantially to include strong open source contenders such as Meta's Llama 2 and Mistral AI's Mixtral models. This could shift the AI landscape in 2024 by giving smaller, less resourced enterprises access to sophisticated AI models and tools that were previously out of reach.
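To give a sense of how accessible these open models have become, the sketch below loads an open source instruct model with the Hugging Face transformers library. It assumes the model's license has been accepted on the Hugging Face Hub and that substantial GPU memory is available; Mixtral in particular is a large download.

```python
# Sketch: running an open source instruct model locally with Hugging Face
# transformers. Assumes the model license has been accepted on the Hub,
# the accelerate package is installed, and ample GPU memory is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # or a Llama 2 variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Summarize the benefits of open source AI. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```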

“It gives everyone easy, fairly democratized access, and it’s great for experimentation and exploration,” said Barrington.

Open source methodologies can also foster transparency and ethical development, since greater code visibility raises the likelihood that biases, errors, and security vulnerabilities will be caught. However, researchers have voiced concerns about the misuse of open source AI to produce harmful content such as misinformation. Moreover, building and maintaining open source software is challenging even for conventional code, let alone for complex, computationally demanding AI models.

4. Retrieval-Augmented Generation

Although generative AI tools were widely adopted in 2023, they remain beset by hallucinations: responses to users' queries that sound plausible but are factually incorrect. This shortcoming has been a barrier to enterprise adoption, where hallucinations in customer-facing or business-critical scenarios could be catastrophic. Retrieval-augmented generation (RAG) has gained traction as a technique for reducing hallucinations, with potentially significant implications for enterprise AI deployment.

RAG blends text generation with information retrieval to improve the accuracy and relevance of AI-generated content. It gives LLMs access to external knowledge, helping them produce responses that are more precise and context-aware. Removing the need to store all knowledge directly in the LLM also shrinks model size, which increases speed and reduces costs.
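The retrieve-then-generate pattern is simple enough to sketch in a few lines. The example below embeds a handful of made-up documents with the sentence-transformers library, retrieves the passages closest to a query by cosine similarity, and assembles a grounded prompt; the final generation call is left as a placeholder for whatever LLM is in use.

```python
# Schematic RAG pipeline: embed documents, retrieve the most relevant ones,
# and ground the prompt in them. The sample documents are made up, and the
# final LLM call is a placeholder.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product equals cosine on normalized vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long do customers have to return an item?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = generate(prompt)  # placeholder: call the LLM of your choice
print(prompt)
```

Because the model answers from retrieved context rather than from memory alone, its output can be checked against the source documents, which is what makes the technique attractive for hallucination-sensitive enterprise settings.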

“You can use RAG to go gather a ton of unstructured information, documents, etc., [and] feed it into a model without having to fine-tune or custom-train a model,” Barrington stated.

These advantages are especially attractive for enterprise applications where up-to-date factual knowledge is critical. For example, companies can use RAG with foundation models to build more efficient and informative chatbots and virtual assistants.

5. Bespoke Generative AI Models For Businesses

Big, versatile tools such as ChatGPT and Midjourney have drawn the most attention from users exploring generative AI. For business use cases, however, smaller, more focused models may prove to have the most staying power, given the growing demand for AI systems that can meet specialized needs.

Although possible, building a new model from scratch is too resource-intensive and costly for most organizations. Instead, most customize existing AI models, for example by tweaking their architecture or fine-tuning on a domain-specific data set. This can be cheaper than either building a new model from the ground up or relying on API calls to a public LLM.
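One common low-cost route to customization is parameter-efficient fine-tuning, which trains only small adapter matrices rather than the full model. The sketch below shows the general shape of a LoRA setup using the Hugging Face peft library; the base model and hyperparameters are illustrative, and the actual training loop is omitted.

```python
# Sketch: parameter-efficient fine-tuning (LoRA) of an existing model with
# the Hugging Face peft library. Base model choice and hyperparameters are
# illustrative; Llama 2 weights are gated and require license acceptance.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_cfg = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction is trainable
# ...then train on a domain-specific dataset, e.g., with transformers' Trainer.
```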

"Calls to GPT-4 as an API, just as an example, are very expensive, both in terms of cost and in terms of latency — how long it can actually take to return a result," said Shane Luke, vice president of AI and machine learning at Workday. "We are working a lot … on optimizing so that we have the same capability, but it's very targeted and specific. And so it can be a much smaller model that's more manageable."

The main benefit of customized generative AI models is their ability to serve niche markets and specific user needs. Tailored generative AI tools can be built for a wide range of applications, from customer service to supply chain management to document review. This is especially relevant for industries such as healthcare, finance, and law, with their highly specialized terminology and processes.

For many business use cases, the largest LLMs are overkill. Although ChatGPT might be ideal for a consumer-facing chatbot that must field questions on any topic, "it's not the state of the art for smaller enterprise applications," Luke remarked.

Barrington anticipates that as AI developers’ capabilities start to converge, businesses will be investigating a wider variety of models in the upcoming year. “We’re expecting, over the next year or two, for there to be a much higher degree of parity across the models — and that’s a good thing,” he stated.

Luke has seen a similar dynamic play out on a smaller scale at Workday, which gives teams access to a range of AI technologies for internal experimentation. Although employees initially gravitated toward OpenAI's services, he noted, he has gradually seen a shift toward a mix of models from other providers, such as Google and AWS.

Because building a bespoke model gives companies more control over their data, it often improves privacy and security compared with off-the-shelf public tools. Luke cited the example of building a model for Workday tasks that involve sensitive personal information, such as a person's medical history or disability status. "Those aren't things that we're going to want to send out to a third party," he stated. "Our customers generally wouldn't be comfortable with that."

Gillian Crossan, risk advisory principal and global technology sector leader at Deloitte, noted that in light of these privacy and security benefits, stronger AI regulation in the coming years could push firms to concentrate on proprietary models.

“It’s going to encourage enterprises to focus more on private models that are proprietary, that are domain-specific, rather than focus on these large language models that are trained with data from all over the internet and everything that that brings with it,” she stated.

6. Demand For AI And Machine Learning Talent

Designing, training, testing, and deploying a machine learning model is hard enough, let alone keeping it running in a complex corporate IT environment once it's in production. It should come as no surprise, then, that demand for AI and machine learning talent will persist into 2024 and beyond.

“The market is still really hot around talent,” Luke stated. “It’s very easy to get a job in this space.”

In particular, as AI and machine learning are woven into business operations, there is growing demand for professionals who can bridge the gap between theory and practice. This requires expertise in deploying, monitoring, and maintaining AI systems in real-world settings, a discipline commonly known as machine learning operations, or MLOps.

According to a recent O'Reilly survey, the top three skills respondents' firms needed for generative AI projects were AI programming, data analysis and statistics, and AI and machine learning operations. But such skills are in short supply. "That's going to be one of the challenges around AI — to be able to have the talent readily available," Crossan stated.

Expect organizations of all kinds, not just big tech companies, to seek out people with these skills in 2024. With AI initiatives gaining momentum and IT and data now woven into nearly every business activity, building internal AI and machine learning capability looks set to be the next stage of digital transformation.

Additionally, Crossan stressed the value of diversity in AI programs at all levels, from the board to technical teams developing models. “One of the big issues with AI and the public models is the amount of bias that exists in the training data,” she stated. “And unless you have that diverse team within your organization that is challenging the results and challenging what you see, you are going to potentially end up in a worse place than you were before AI.”

7. Shadow AI

The use of AI within an organization without explicit approval or oversight from the IT department is known as shadow AI, and it is a problem firms are confronting as individuals across a variety of job functions take an interest in generative AI. As AI becomes more accessible and even nontechnical workers can use it on their own, the trend is becoming increasingly common.

Shadow AI typically arises when employees need quick fixes for problems or want to explore new technology faster than official channels allow. This is especially common with user-friendly AI chatbots, which employees can easily try out in their web browsers without going through IT review and approval processes.

On the positive side, exploring uses for these new technologies shows initiative and inventiveness. But there is also risk, because end users often lack the necessary knowledge of security, data privacy, and compliance. A user might, for instance, feed trade secrets into a publicly available LLM without realizing that doing so exposes that sensitive information to third parties.

“Once something gets out into these public models, you cannot pull it back,” Barrington stated. “So there’s a bit of a fear factor and risk angle that’s appropriate for most enterprises, regardless of sector, to think through.”

Shadow AI is just one part of the broader shadow IT problem, whose drawbacks for businesses include higher costs, greater risk, inconsistent practices across departments, and a lack of control by the IT function.

In 2024, organizations will need to manage shadow AI with governance frameworks that balance fostering innovation with safeguarding security and privacy. That could mean setting clear policies on acceptable AI use, providing approved platforms, and encouraging dialogue between business and IT leaders about how different departments intend to use AI.

"The truth is, everyone uses it," Barrington said, referring to a recent EY survey that found 90% of respondents use AI at work. "Whether you like it or not, your people are using it today, so you should figure out how to align them to ethical and responsible use of it."

8. A Reality Check On Generative AI

In 2024, organizations will probably experience a reality check as they move from the early excitement surrounding generative AI to actual adoption and integration. This period is sometimes referred to as the “trough of disillusionment” in the Gartner Hype Cycle.

“We’re definitely seeing a rapid shift from what we’ve been calling this experimentation phase into [asking], ‘How do I run this at scale across my enterprise?’” Barrington stated.

As the initial excitement fades, organizations are coming to terms with generative AI's drawbacks: problems with output quality, security and ethics concerns, and the difficulty of integrating with existing workflows and systems. The complexity of scaling and operating AI in a corporate setting is commonly underestimated, and tasks such as maintaining AI systems in production, training models, and ensuring data quality can prove harder than expected.

“It’s not very easy to build a generative AI application and put it into production in a real product setting,” Luke stated.

The bright side is that, despite being uncomfortable now, these growing pains may eventually lead to a more balanced and healthy perspective. Setting reasonable expectations for AI and gaining a more sophisticated knowledge of what it can and cannot achieve will be necessary to get past this stage of development. Projects involving AI should have a clear framework in place for measuring results and be closely connected to business objectives and real-world use cases.

“If you have very loose use cases that are not clearly defined, that’s probably what’s going to hold you up the most,” Crossan stated.

9. A Greater Focus On AI Security And Ethics

The spread of deepfakes and sophisticated AI-generated content is raising alarms about the potential for misinformation and manipulation in politics and the media, as well as identity theft and other kinds of fraud. AI can also increase the effectiveness of ransomware and phishing attacks, making them more convincing, more adaptable, and harder to detect.

Efforts are under way to develop technology that can detect AI-generated content, but this remains a hard problem. Existing AI detection tools are prone to false positives, and current AI watermarking techniques are relatively easy to circumvent.

The growing ubiquity of AI also underscores the need to ensure systems are transparent and fair, for instance by rigorously auditing training data and algorithms for bias. Crossan emphasized that these ethics and compliance considerations should be woven into an AI strategy from the start.
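One concrete form such an audit can take is computing simple group fairness metrics over a model's decisions. The sketch below calculates demographic parity difference, the gap in positive-outcome rates between two groups, on synthetic data; a real audit would combine many metrics and a vetted toolkit rather than this single number.

```python
# Illustrative fairness check: demographic parity difference, the gap in
# positive-outcome rates between groups. Data is synthetic; a real audit
# would use multiple metrics and domain review, not this one figure.
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# Model decisions (1 = favorable outcome) split by a protected attribute.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Demographic parity difference: {gap:.2f}")  # far from 0 warrants scrutiny
```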

“You have to be thinking about, as an enterprise … implementing AI, what are the controls that you’re going to need?” she stated. “And that starts to help you plan a bit for the regulation so that you’re doing it together. You’re not doing all of this experimentation with AI and then [realizing], ‘Oh, now we need to think about the controls.’ You do it at the same time.”

Luke noted that looking at smaller, more narrowly focused models can also be motivated by ethical and safety considerations. “These smaller, tuned, domain-specific models are just far less capable than the really big ones — and we want that,” he stated. “They’re less likely to be able to output something that you don’t want because they’re just not capable of as many things.”

10. Changing Regulation Of AI

Given these ethical and security issues, it should come as no surprise that 2024 is shaping up to be a critical year for AI regulation, with laws, rules, and industry frameworks fast changing both domestically and internationally. In the upcoming year, organizations will need to remain aware and flexible because changing compliance regulations may have a big impact on international operations and AI development plans.

The EU's AI Act, on which members of the European Parliament and Council recently reached a provisional agreement, is the world's first comprehensive AI law. If enacted, it will ban certain uses of AI, impose obligations on developers of high-risk AI systems, and require transparency from companies using generative AI, with violations potentially drawing multimillion-dollar fines. And it isn't only new laws that could shape AI in 2024.

“Interestingly enough, the regulatory issue that I see could have the biggest impact is GDPR — good old-fashioned GDPR — because of the need for rectification and erasure, the right to be forgotten, with public large language models,” Crossan stated. “How do you control that when they’re learning from massive amounts of data, and how can you assure that you’ve been forgotten?”

The AI Act, when combined with the GDPR, may establish the EU as a global AI regulator, thereby impacting AI development and usage standards around the globe. “They’re certainly ahead of where we are in the U.S. from an AI regulatory perspective,” Crossan stated.

Although the U.S. does not yet have comprehensive federal legislation comparable to the EU's AI Act, experts advise firms to start thinking about compliance before formal requirements arrive. "We're engaging with our clients to get ahead of it," said EY's Barrington. Otherwise, companies may find themselves scrambling to catch up when rules do take effect.

In addition to the ripple effects of European legislation, recent moves by the U.S. executive branch hint at the shape of domestic AI regulation. President Joe Biden's October executive order introduced new rules, including a requirement that AI developers share safety test results with the U.S. government and safeguards against the risk of AI being used to engineer dangerous biological materials. Several federal agencies have also issued industry-specific guidance, such as NIST's AI Risk Management Framework and the Federal Trade Commission's statement warning businesses against making false claims about their products' use of AI.

Complicating matters further, 2024 is a U.S. election year, and the current presidential field holds a wide range of positions on tech policy. A future administration could change the executive branch's approach to AI oversight by rolling back or amending Biden's executive order and nonbinding agency guidance.