AI for VCs

Autopopulation and semantic search

Since Sevanta Dealflow's launch in 2010, its most-used and probably most important feature has been the ease with which users can forward emails to create new deals. A lot of magic has always happened with no user effort: creating a deal folder and saving attachments into it, scanning the thread for people and creating contacts linked to the deal, autofilling fields from Crunchbase, and more. It seems like obvious stuff, but we've always executed the details better than any other software out there.

Most initial emails that users send to Sevanta have a pitch deck attached, and until now the most important information about the deal was locked inside that file. No longer! Thanks to our self-hosted LLM, we can extract information from pitch decks and fill many more fields than were ever possible before. We also ask the AI to summarize the deck (technology details, target customers and market size, background of the founders, etc.) and add its comments to the deal. The depth and quality of autopopulation this achieves is really impressive; deal capture is easier and better than ever!
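To give a flavor of how LLM-based field extraction can work, here is a minimal Python sketch. The prompt, field names, and the stubbed model call are all illustrative assumptions, not Sevanta's actual schema or pipeline:

```python
import json

# Illustrative prompt; a production system would use a much richer schema.
EXTRACTION_PROMPT = (
    "Extract the following fields from this pitch deck text and reply "
    "with a single JSON object: company, sector, raise_amount, stage. "
    "Use null for anything not stated.\n\n"
)

def extract_deal_fields(deck_text, llm):
    """Ask an LLM for deal fields as JSON, then drop fields it left empty."""
    raw = llm(EXTRACTION_PROMPT + deck_text)
    fields = json.loads(raw)
    return {k: v for k, v in fields.items() if v is not None}

def fake_llm(prompt):
    # Stand-in for a call to a self-hosted model endpoint.
    return ('{"company": "Acme Bio", "sector": "diabetes care", '
            '"raise_amount": "$5M", "stage": null}')

print(extract_deal_fields("Acme Bio is raising $5M for diabetes care...", fake_llm))
```

The key design point is asking the model for structured JSON rather than free text, so its answer can be mapped directly onto database fields.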

We also use vector embeddings to power semantic search: search for "insulin" and the system can surface deals that never mention that word but do cover related concepts like diabetes. The same embeddings let the system suggest comparable deals, both from your own database and from Crunchbase.
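Under the hood, semantic search of this kind typically embeds both the query and each deal's text as vectors and ranks deals by cosine similarity. A toy sketch (the tiny 3-dimensional vectors and deal names are made up for illustration; a real system would get vectors from a sentence-embedding model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical precomputed deal embeddings.
deal_vectors = {
    "GlucoTrack (diabetes monitoring)": [0.9, 0.1, 0.0],
    "RoadFreight (logistics)": [0.0, 0.2, 0.9],
}

# Embedding of the query "insulin" lands near diabetes-related deals.
query_vector = [0.85, 0.15, 0.05]

ranked = sorted(deal_vectors,
                key=lambda d: cosine(query_vector, deal_vectors[d]),
                reverse=True)
print(ranked[0])  # the diabetes deal ranks first, with no keyword match
```

Because ranking is by vector proximity rather than keyword overlap, the diabetes deal surfaces for an "insulin" query even though the word never appears in it.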

For more details, see this intro video:

Thanks to our collaboration with Knap, specialists in private/local AI, all of this runs on Sevanta's own servers, so client data never leaves our infrastructure. Despite promises from the big AI providers, we don't trust them not to use client data for training or other purposes, so self-hosting was a critical requirement. It also gives us stability in a volatile marketplace where third-party providers may come and go, or may change APIs or performance in ways that break functionality. Now that we have our own in-house LLM, there's a lot more we plan to do with it in the coming months.