It has been more than ten years since Vidrio first launched to the market, but what many people don’t know is that the concept of Vidrio first emerged two decades ago. In this Q&A with our CEO, Mazen Jabban, we discuss the unique complexities of managing alternatives allocations across public and private fund managers.
EA: Vidrio was born of your own frustration as a former Fund of Funds manager suddenly playing the all-consuming role of data manager. Describe the challenges that led to your development of Vidrio.
MJ: At the time of Vidrio’s inception, I was at the helm of a Fund of Funds, Focus Investments, and feeling increasingly overwhelmed by the sheer volume of data required to manage our alternatives portfolios across multiple external managers. We were doing what we needed to do, rolling up our sleeves across the team to collect manager documents, performance estimates, portfolio holdings, exposure profiles – you name it. Needless to say, it was an incredible distraction and drain on resources.
EA: I’m sure most allocators and LPs can attest to that frustration. Collecting and extracting information from across multiple external fund managers, custodians, service providers – definitely not for the faint of heart. I imagine the data management process was not just a resource drain, but also created quite a bit of operational risk?
MJ: If the data all came through a single feed into a nice clean file, then sure, easy. But that’s not how it works for allocators. It is not a clean flow of information like you might get with live market data, for example. Fund manager data trickles in drop by drop – multiple data points from multiple sources at any given point in time. Take performance data, for example. It’s not as simple as, “Ok, we’ll just collect all manager letters, extract performance numbers and feed them into the appropriate field.” Performance data does not necessarily come from manager letters alone; it might arrive in an email or on a website.
EA: And I assume that’s the case with all data points – not just the performance data example?
MJ: Exactly. There are myriad document types to be collected from across multiple fund managers, and then you parse through the content to extract only the useful data from across those sources of information and store it in a central location that is accessible and fit-for-purpose across the investment management functions – from research and risk management to portfolio management, compliance and reporting.
EA: Ok, so your investment team was doing double duty as data managers – collecting a manager letter here, a performance report there – then extracting that data and making it available in meaningful ways for their investment workflows - no doubt a very distracting and disruptive process with a high risk of missing some of the aforementioned drops?
MJ: Yes. And obviously as the amount of data increased, the process became more and more complicated. It became clear that we would need more stringent processes to efficiently collect and extract the data correctly - meaning in a systematized, timely fashion and without impacting the productivity of the investment team. We would need a solution with pipelines to external data sources that could also extract the right information into our database, create a single source of record and then distribute that information across our research, risk, portfolio management and reporting workflows.
We shopped around for potential solutions, and while there were bespoke software platforms for managing various aspects of the asset allocation process, there was no complete end-to-end solution, and certainly not one that solved the data management challenge.
EA: It sounds like even with a more sophisticated software solution and/or database, the business of collecting and extracting the data would still sit with your investment team – based on the solutions you assessed at the time?
MJ: Yes. As we discovered the hard way, software without data services just would not solve the problem. Ultimately, in order to let our investment team get back to their day jobs, we created a completely separate data team and built technology for that team to collect and disseminate, in a timely manner, the information contained in those documents that we were collecting. And that is the foundation upon which Vidrio was built – by bringing together human expertise and innovative technology to help simplify and overcome investment management hurdles unique to alternatives allocating.
EA: Since Vidrio’s inception, data complexity has exploded, creating ever more nuanced challenges for allocators seeking to transform that complex data into better business insights. How has this helped drive demand for solutions like Vidrio?
MJ: Right now, I see more and more organizations, namely asset owners, coming to the realization that data management, particularly the specialized capabilities necessary to support alternative asset class investments, is simply not a function that should have to concern the investment team. It is a nuanced, all-consuming set of responsibilities with high potential for error when not done in a systematized manner by a dedicated, fully responsible team.
EA: Can you dive a bit deeper into some of those nuances you’ve described?
MJ: As I mentioned, data collection from across multiple sources is just the very first step. Step two is then extraction of the useful data. Finally, you must make all of this information work together in order to create a complete picture across all portfolios, funds and managers – and ensure that data is fit for purpose for different teams to leverage in different, meaningful ways. The following summary, while a bit over-simplified, highlights all of the moving parts.
- Collection: Who (managers, custodians, etc.)? What (manager letters, performance reports, investment statements, etc.)? Where (emails, websites, PDFs)?
- Extraction: What information do you want to parse from the sources you’ve collected? For example, maybe a particular email has just the one performance number you need and nothing else that is relevant or of value. In other words, you must be able to extract the valuable content from the noise to filter in only what is needed.
- Analysis: As each relevant data point is extracted from each source of information, those data points must then be plugged into a system where analytics can be performed on the raw data. This is how the aforementioned picture gets formed.
- Visualization: And finally, once you have all the data collected and extracted into a central data hub, and then translated into meaningful analytics, the final picture can be generated via applications that display the information in valuable ways for different functions, workflows, use cases. You end up with a harmonious picture for due diligence, valuation, accounting, research, risk, portfolio management - all the meaningful information needed to enable sound allocations.
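The four steps above can be sketched as a minimal data pipeline. This is purely an illustrative sketch – every source, field name, and pattern below is a hypothetical example, not Vidrio’s actual implementation:

```python
import re
from statistics import mean

# Illustrative sketch of the collect -> extract -> analyze -> visualize flow.
# All sources, field names, and patterns here are hypothetical examples.

def collect(sources):
    """Step 1: gather raw documents from heterogeneous sources."""
    return [doc for source in sources for doc in source]

def extract(documents):
    """Step 2: pull the useful data points out of the noise.
    Here: a monthly return quoted as, e.g., 'net return: 1.25%'."""
    pattern = re.compile(r"net return:\s*(-?\d+(?:\.\d+)?)%")
    points = []
    for doc in documents:
        match = pattern.search(doc["text"])
        if match:
            points.append({"fund": doc["fund"], "return_pct": float(match.group(1))})
    return points

def analyze(points):
    """Step 3: roll raw data points up into fund-level analytics."""
    by_fund = {}
    for p in points:
        by_fund.setdefault(p["fund"], []).append(p["return_pct"])
    return {fund: mean(returns) for fund, returns in by_fund.items()}

def visualize(analytics):
    """Step 4: render the analytics for downstream workflows."""
    return "\n".join(f"{fund}: {avg:+.2f}%" for fund, avg in sorted(analytics.items()))

# Usage: two hypothetical sources – an email inbox and a scraped manager letter.
emails = [{"fund": "Fund A", "text": "FYI net return: 1.25% for March"}]
letters = [{"fund": "Fund B", "text": "Dear LP, net return: -0.40% this month"}]
print(visualize(analyze(extract(collect([emails, letters])))))
```

In practice each step is far messier – PDFs, inconsistent formats, late or revised numbers – which is exactly why a dedicated team and purpose-built tooling are needed, but the shape of the pipeline is the same.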
EA: I’m sure at this point, readers are wondering how Vidrio has evolved from being a DIY solution that you built during your Fund of Funds days to the global provider you are today – how have you been able to scale all of the above best practices to support the global institutional alternative allocators that we currently serve?
MJ: Automation. Since the beginning, our managed services have been central to providing a truly differentiated, value-added total information solution to our clients. Vidrio has made and continues to make a significant investment in our innovation stack, ensuring that we can continue to lead in the ongoing evolution of Data Management, Automation and Acquisition Capabilities (DMAC). To do this we are leveraging the latest innovations in AI and Machine Learning, while establishing key collaborations with leading industry partners. This investment has already drastically increased our systematized data automation, leading to unparalleled speed and scalability for the Vidrio Service team. Combining these capabilities now with enhanced “smart” tools will further enable us to deliver high-value managed data solutions directly to our clients, while also unbundling our offering to provide any combination of data services, analytics and applications to a broader range of allocators and LPs. This is the Vidrio vision: enhanced smart solutions and speed, enabled by our internal DMAC enhancements, that will let us deliver unmatched managed data services to each and every client, however they want it.
EA: Here’s to the future of asset allocation management!
We bring together human expertise and innovative technology to help allocators take control of their complex multi-asset class investment management hurdles. Born from the first-hand experience of allocating to hedge fund managers for nearly two decades, the Vidrio team is unique and adept at producing solutions that solve the multi-asset class data collection, aggregation, and reporting challenges that LPs and allocators face daily. We have lived in allocators’ shoes and still do.
Vidrio Enterprise, our flagship solution, offers an all-in-one centralized, client-specific database that serves as the information hub, or chassis, for all of your investment data. Our team works with you to collect and extract all necessary external data in a timely, systematic fashion, create pipelines to external data sources, and compile it into your database to create a single source of record. We offer flexible solutions to help manage your qualitative and quantitative information, so that you can rely on consistent and broader insight into all aspects of your allocation operations.
If you are looking for a more flexible, unbundled set of services and software, Vidrio One allows our core software and services to be implemented in a targeted configuration to address specific pain points. In this way, you can lighten your load by allocating part of the work to us – any combination of data collection, extraction, analysis, workflow application, etc.