Clicky chatsimple

Enkrypt Launches A Seed Round To Build A “Control Layer”

Category :

AI

Posted On :

Share This :

The Boston-based firm Enkrypt AI, which provides a control layer for the responsible application of generative AI, announced today that Boldcap led a $2.35 million seed round of funding for the company.

The product provided by Enkrypt is pretty intriguing as it helps ensure the private, secure, and compliant deployment of generative AI models, even when the number isn’t as large as AI businesses raise these days. Sahil Agarwal and Prashanth Harshangi, two Yale PhDs who created the startup, assert that their technology can easily overcome these obstacles and accelerate the adoption of gen AI for enterprises by up to 10 times.

“We are advocating for a paradigm that combines trust and innovation, allowing the implementation of AI technologies with the assurance that they are both innovative and safe,” Harshangi stated in a release.

In addition to Boldcap, the seed round was also attended by Berkeley SkyDeck, Kubera VC, Arka VC, Veredas Partners, Builders Fund, and numerous additional angel investors in the AI, healthcare, and enterprise domains.

What Is Available With Enkrypt AI?

The use of generative AI is growing rapidly, and practically every business is considering the technology to improve productivity and optimize processes. Nevertheless, there are a number of safety obstacles to overcome while adjusting and adopting foundation models across applications. It is imperative to ensure that data privacy is maintained throughout all phases, that no security risks exist, and that the model remains dependable and compliant (with both internal and external requirements) even after it has been deployed.

Today, the majority of businesses manually overcome these obstacles with the aid of internal teams or outside experts. Although the method works well, it is quite time-consuming and can cause delays of up to two years in AI projects. This might easily result in the company missing out on possibilities.

Sentry, a comprehensive all-in-one solution from Enkrypt, which was founded in 2023, fills this gap by providing visibility and oversight of LLM usage and performance across business functions, safeguarding sensitive data from security threats and managing compliance through stringent access controls and automated monitoring.

Sentry is an enterprise-level secure gateway that facilitates data privacy, model security, and model access controls. It is implemented between users and models. The CEO of the firm, Agarwal, told VentureBeat that “the gateway routes any LLM interaction (from external models, or internally hosted models) through our proprietary guardrails for data privacy and LLM security, as well as controls and tasks for regulations to ensure no breach occurs.”

Sentry guardrails, for example, rely on the company’s unique models to avoid prompt injection attacks, which can takeover programs, or stop jailbreaking in order to assure security. It can even anonymize sensitive personal data and cleanse model data for privacy. In other scenarios, the system may monitor and filter out hazardous and off-topic content, as well as test generative AI APIs for special circumstances.

“From executives to individual engineers, Chief Information Security Officers (CISOs) and product leaders have full visibility into every generative AI project. The organization may use our patented guardrails to make the use of LLMs safe, secure, and reliable thanks to this granularity. In the end, this lowers financial, reputational, and regulatory risk,” the CEO continued.

Decrease In Generative AI Vulnerabilities Significantly

The company notes that mid- to large-sized businesses in regulated sectors like finance and life sciences are testing the Sentry technology, even though it is still in the pre-revenue stage and cannot disclose any significant growth statistics.

Sentry discovered that the Llama2-7B model used by a Fortune 500 company had 6% of the time jailbreak vulnerabilities. They were able to reduce that number tenfold to 0.6%. This made it possible for LLMs to be adopted for even more use cases throughout the company’s departments more quickly—in weeks as opposed to years.

Agarwal stated that the comprehensive nature of the solution sets it apart in the market. “The specific request (from enterprises) was to get a comprehensive solution, instead of multiple point-solutions for sensitive data leak, PII leaks, prompt-injection attacks, hallucinations, manage different compliance requirements, and finally make the use-cases ethical and responsible,” Agarwal said.

As a future step, Enkrypt intends to develop this solution and implement it in other businesses, showcasing its adaptability to various models and settings. Succeeding in this will be crucial for the startup as, instead of being a best practice, safety is now required for any business using or building generative AI.

“To further develop the product, we are presently collaborating with design partners. The largest rival of ours is Protect AI. The CEO stated, “They intend to build a comprehensive security and compliance product as well with their recent acquisition of Laiyer AI.”

In addition, an AI safety consortium including more than 200 companies was formed earlier this month by the US government’s NIST standards authority with the goal of “focusing on establishing the foundations for a new measurement science in AI safety.”