Why Current Research Valuation Processes Are Biased Toward Large Brokers


The following is a guest article by Paul Warme, founder of Content Street, a research management system. It is based on the white paper, Provider Analytics: A Guide for Asset Managers, which can be found here

Even though MiFID II rules have been in place for a few years, most asset managers still lack a complete picture of their research relationships. Research consumption data supplied by brokers is inherently biased, favoring large research providers.  Unfortunately, most asset managers continue to rely on deficient research valuation processes, constrained by inadequate, biased data and ineffective analytics tools.

Flawed data

Research consumption data currently available to asset managers is incomplete and of low quality.  Most asset managers still rely on deficient ‘click’ and ‘count’ data supplied by their research providers.  There are several problems with this.

First, click data is notoriously noisy.  Email scanning services generate false positives, indicating that emails have been opened when in fact they have not.  And even if a user clicks on a report, that doesn't mean they actually read it.

Second, not all research is created equal.  This sounds obvious, but the fact is that asset managers typically lack a good way of reflecting in their data the difference between high-quality research and poor-quality research.  Just because you read a report doesn’t mean that it actually added value.  This means that there is an inherent ‘positive’ bias in the data used, as ‘clicks’ and ‘counts’ are by default all assumed to be value-additive.

Finally, brokers withhold data on the huge volume of research sent but not consumed, making it impossible to normalize the data.  In absolute click counts, consuming less than 10% of a large broker's output can outweigh consuming 90% of a specialized boutique's.  The result is an inherent bias toward large research providers, even when only a small fraction of the research they send is actually consumed.
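The arithmetic behind this bias is easy to sketch. The figures below are made up for illustration, but they show why raw click counts and volume-normalized click rates can point in opposite directions:

```python
def click_rate(clicks, reports_sent):
    """Clicks normalized by the total volume of research sent."""
    return clicks / reports_sent

# A large broker sends 5,000 reports; the desk clicks on 400 of them.
large_broker = click_rate(400, 5000)   # 8% of output consumed

# A boutique sends 100 reports; the desk clicks on 90 of them.
boutique = click_rate(90, 100)         # 90% of output consumed

# Raw click counts favor the large broker (400 vs. 90 clicks)...
assert 400 > 90
# ...but the normalized rate tells the opposite story.
assert boutique > large_broker
print(f"large broker: {large_broker:.0%}, boutique: {boutique:.0%}")
```

Without the denominator, the volume of research sent but not consumed, the normalized comparison is impossible, which is exactly the data brokers withhold.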

Similar issues surround interactions data.  Most asset managers know how many analyst calls they had, and how many conferences they attended, but they don’t know how many of these interactions added value and how many did not.

Upgrading data quality

Ideally, research valuation processes should be based on high-quality, timely data that is both quantitative and qualitative in nature.  This data should be deeply tagged (by report and interaction type, sector, company, regions, country, etc.), allowing for more detailed category analysis.  In addition, the data should be tagged in such a way that allows asset managers to identify who is consuming the service and who is delivering it.
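As a concrete sketch of what "deeply tagged" data might look like, the record below captures the dimensions listed above. The field names and rating scale are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ResearchEvent:
    """One consumption event, tagged along the dimensions described above.
    Field names are illustrative, not an industry-standard schema."""
    event_type: str            # e.g. "report_read", "analyst_call", "conference"
    provider: str              # who is delivering the service
    consumer: str              # who is consuming the service
    sector: str
    company: str
    region: str
    country: str
    event_date: date
    rating: Optional[int] = None   # optional qualitative score, e.g. 1-5

# Hypothetical example record
event = ResearchEvent(
    event_type="report_read",
    provider="Boutique Research Co",
    consumer="EM equities desk",
    sector="Financials",
    company="Example Bank",
    region="EMEA",
    country="Poland",
    event_date=date(2024, 3, 1),
    rating=4,
)
print(event.provider, event.rating)
```

Tagging each event with both a provider and a consumer is what enables the category-level analysis the article describes: the same records can be sliced by sector, region, individual analyst, or desk.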


For research reports (depending on the tools available), this would include not just clicks, but unique reports clicked, reads, unique reads, bookmarks, shares and ratings, all of which signal that some value was added. Clicks should be normalized for the overall volume of research sent (including non-clicks), so that brokers that spam out large volumes of low-quality reports are not rewarded.  For research services (interactions), the data would ideally include service level (number of interactions), quality of service (rating or vote), and the cost of the service ($).
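One way to combine these signals is a simple composite score per provider. The weights and field names below are assumptions for illustration, not a prescribed methodology; the key design choice is that engagement is divided by volume sent, so spamming is not rewarded:

```python
def provider_score(unique_reads, bookmarks, shares, avg_rating, reports_sent):
    """Blend volume-normalized engagement with a qualitative rating (0-5 scale).
    Weights (0.6 / 0.4) and signal multipliers are illustrative assumptions."""
    if reports_sent == 0:
        return 0.0
    # Bookmarks and shares are stronger value signals than reads, so weight
    # them more heavily; cap engagement at 1.0 so it stays comparable.
    engagement = min((unique_reads + 2 * (bookmarks + shares)) / reports_sent, 1.0)
    return 0.6 * engagement + 0.4 * (avg_rating / 5)

# Large broker: heavy volume, thin engagement, middling ratings.
big = provider_score(unique_reads=400, bookmarks=10, shares=5,
                     avg_rating=3.0, reports_sent=5000)

# Boutique: small volume, deep engagement, strong ratings.
small = provider_score(unique_reads=85, bookmarks=20, shares=10,
                       avg_rating=4.5, reports_sent=100)

# Normalization and quality weighting reverse the raw-click ranking.
assert small > big
```

A real implementation would tune the weights to the firm's priorities and add the cost dimension, but even a crude score like this corrects the "all clicks are value-additive" bias in the raw data.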

Weak analytics

Notwithstanding the lack of quality data, asset managers might still uncover more value from the data that they do have by employing a more disciplined and sophisticated data analytics process.  Vendor analytics programs are common practice in other industries, and asset managers could certainly benefit from this as well.

Excel is still the go-to analysis tool within most organizations.  However, a commercial Business Intelligence (BI) tool has more advanced capabilities and can take an analytical process to the next level.  There are dozens of BI tools on the market today, and the best ones are both affordable and relatively easy to adopt.


Asset managers will continue to feel pressure to maximize the value of their research relationships, and to do so, the evaluation process will have to continue to evolve and improve.  At the very least, asset managers should consider putting in place a more data-intensive, analytical process to correct for the inherent bias in the data supplied by research providers. A better approach would be to reduce reliance on externally supplied data by considering market solutions that help collect more high-quality data internally.  Taken together, these process improvements should help asset managers uncover valuable insights and get more value out of their research relationships.


About Author

Paul Warme is founder and head of ContentStreet, a Research Management System that specializes in Provider Analytics and helps asset managers maximize the value of their research relationships. Prior to this, he co-founded Lusight, a best-in-class independent research firm, where he served as CEO for 14 years. Lusight was also an innovator in the research technology space, and pioneered a models-database solution that still has wide application in the market today. Before founding Lusight, he spent many years as an analyst in New York covering emerging markets.
