From onboarding client data to processing payroll, data integration is an essential part of many workflows, but for many data sets the process is slow and manual. Moving data between systems is hard because it is scattered across databases and SaaS apps, each of which stores it in a different format. Lume uses AI to try to fix that.
Lume's system uses AI and algorithms to automate data mapping: it pulls data out of its silos and "normalizes" it, transforming it into a standardized format that is easier to transfer or integrate into other workflows. When a data integration fails, which happens often, Lume alerts customers and attempts to fix the problem with AI. The company also offers a web platform and an API so customers can embed Lume directly into their workflows.
Lume differs from previous data-mapping tools in that it focuses on complex nested data formats like JSON rather than pulling data from spreadsheets or PDFs. Nicolas Machado, co-founder and CEO of Lume, told TechCrunch that Lume can help businesses with difficult math, taxonomy, and text-manipulation tasks in their mappings. He added that by concentrating on this, businesses can save time and money compared with outsourcing big data projects.
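To make the normalization idea concrete, here is a minimal illustrative sketch, not Lume's actual code: the vendor formats, field names, and functions below are all invented. It shows what mapping two differently structured, nested payroll records into one canonical schema looks like in practice.

```python
# Hypothetical example: two vendors describe the same employee record
# with different nesting and units. Normalization maps both into one
# canonical schema so downstream systems only deal with one format.

def normalize_vendor_a(record: dict) -> dict:
    # Vendor A nests the employee's name and pay inside sub-objects.
    return {
        "employee_id": record["id"],
        "full_name": f'{record["name"]["first"]} {record["name"]["last"]}',
        "annual_salary_usd": record["compensation"]["salary"]["amount"],
    }

def normalize_vendor_b(record: dict) -> dict:
    # Vendor B flattens the name and reports pay monthly, so the
    # mapping also has to do arithmetic, not just rename fields.
    return {
        "employee_id": record["employeeId"],
        "full_name": record["displayName"],
        "annual_salary_usd": record["monthlyPayUsd"] * 12,
    }

a = {"id": 7, "name": {"first": "Ada", "last": "Lovelace"},
     "compensation": {"salary": {"amount": 120000}}}
b = {"employeeId": 7, "displayName": "Ada Lovelace", "monthlyPayUsd": 10000}

# Both records normalize to the same canonical dict.
assert normalize_vendor_a(a) == normalize_vendor_b(b)
```

The point of tools in this space is that these per-vendor mapping functions, which engineers otherwise write and maintain by hand for every integration, are what gets generated and repaired automatically.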
“One of the main issues that we observed is that data movement, like really seamless data transfer between systems, is entirely manual and has been for literally 60 years,” Machado said. “What prevents this from being automated? Why couldn’t it happen earlier? The reason is that each system’s data is distinct. Every organization, vendor, and integration defines data differently. They have their own way of structuring data. They have their own interpretation of the data.”
Machado and his co-founders, Nebyou Zewde and Robert Ross, know the problem well. The three met as Stanford freshmen studying AI and computer science, then went on to jobs at tech companies like Apple and OpenDoor, where they all worked on data integration projects. When the founders saw AI improving in 2022, they decided it was time to try to solve the data integration problem.
“Every engineer has encountered this issue,” Machado said. “It is a requirement for all engineers. We started by getting together at my co-founder Robert’s place and working on it in the evenings.”
Lume was founded in January 2023, went through Y Combinator’s W23 batch, and released its first product in March 2023. According to Machado, the business has gained dozens of clients so far and has seen high inbound demand ever since. Although he declined to provide specifics, Machado said Lume’s clientele ranges from startups to Fortune 500 companies.
Lume has raised a $4.2 million seed round led by General Catalyst, with participation from Khosla Ventures, Floodgate, Y Combinator, and angel investors.
“They were hooked on this problem because they truly understand it,” Machado said. “That’s why they are thrilled about it. They were operators. They were like, ‘Wait, this was an issue when I was a CEO 30 years ago. Is this still an issue? That is crazy.’”
According to Machado, the round will be used for hiring, as the company hopes to grow its workforce from five to 10 by the start of next year, and for ongoing technical development.
Lume is not the only business trying to solve data integration. SnapLogic, for one, has raised $371 million in venture capital, and Osmos also aims to help businesses with this problem. Competition is likely to increase as more engineers tackle it. Machado said he is not concerned, as he believes Lume will stand out because of its algorithms and the way its API embeds Lume into a business’s existing processes.
Lume aspires to be the glue that connects any two data systems in the future, allowing data to move between them with ease.
“We all love data and we’re big believers in how important data is,” Machado said. “The analogy we use is oil, which historically required processing before it could be used to power machinery and other devices. That is precisely what data is and has been.”