Revenge of the Humans II: A New Blueprint For Discretionary Management


The following is a guest article written by Estimize Founder and CEO Leigh Drogen. His previous three-part series, which examined the rise of quantitative investing and how it is transforming discretionary asset management, can be found here, here, and here.

“If investing is entertaining, if you’re having fun, you’re probably not making any money. Good investing is boring.” – George Soros

In early May I published a thought piece titled, “Revenge of the Humans: How Discretionary Managers Can Crush Systematics”. It was the culmination of my thinking over the past few years regarding the seismic shift taking place in the asset management industry towards quantitative processes, and what’s next for all of us.

To be honest, the response to the piece was surprising given what I felt was a pretty high level overview. It was widely disseminated through the industry, and republished all over the place. Maybe when you’re so deeply embedded in something it’s hard to see where the rest of the industry stands on the learning curve. Or maybe it just struck a chord and put a bunch of things together in a coherent way that made sense for people.

The piece was published a few weeks before the Estimize L2Q (Learn to Quant) Conference held on June 20th. The point of the conference was to teach discretionary managers a 101 class on the quantitative research process: using factor models, being factor aware, and working with new, unique data sets. I want to thank the few hundred attendees, and especially the Jefferies and WorldQuant teams, who sponsored, taught sections of the day, and invited their clients to what was a standing room only affair.

After the conference I spent some time talking to the attendees, especially the discretionary PMs who made up most of the list, and the quants at their firms attempting to help them make this shift. Given the nature of the Q&A sessions during the event, I wasn't surprised by their largest piece of feedback. While they found the rest of the day useful and interesting, what really struck a chord was the discussion of how discretionary firms will actually make this shift: how they will structure their teams, how they'll design their workflows and stock selection processes, and how they'll overcome the ego issues associated with taking decision-making power away from the PM and distributing it more widely. Frankly, we only touched on these things in a cursory way, yet they drew the strongest response.

For that reason, I wanted to take the time to follow up on the first piece with a second, a deeper dive into how I see the best ways to structure this process having met with hundreds of firms over the last few years to bring them onto the Estimize platform. We’ve seen what’s out there, what’s working, and what’s not.

AlphaGo and the Human

While the recent string of articles (some very poorly researched and written, some dead on point) by the Wall Street Journal, Bloomberg and the FT would have you believe Skynet is going to exterminate every last human from investing, those pieces are obviously hyperbolic and far from the truth. Two things are very important to remember. First, even systematic quantitative trading begins with humans doing research and making ex-ante assumptions regarding an economic rationale for why one thing should lead to another (when a company beats its consensus estimate by a lot, the stock goes up?). Yes, it's possible that true artificial intelligence might make a dent in this at some point, but don't hold your breath. Until then, humans need to make creative, thoughtful decisions about why two things might be correlated in some way. And second, we already have quite a bit of evidence that humans with access to intelligent machines can consistently beat machines alone.

The best example of this is the recent performance of Google’s AlphaGo model which ran the table on the best Go player in the world relatively easily. But when the AlphaGo system was given to a mediocre Go player, that player was able to consistently beat the AlphaGo model on its own. Why? Because humans are incredibly intelligent and astute at picking up on data that doesn’t fit. Even the most nonlinear intelligent AI models we have today are still based mostly on pattern matching, not necessarily building mental models of things, which is the basis of true intelligence. When something is slightly out of whack, we’re pretty good at sensing it. It’s based in our evolutionary biology stemming from needing to avoid being eaten on the African Savanna or track a wild animal over the course of many days. Of course, the advantage of the AlphaGo system, or any quantitative model for that matter, is that it never oversleeps and gets eaten by the lion. Algos don’t get too drunk the night before, they don’t go through a divorce, and they don’t suffer from the heuristics and biases that are hardcoded into us from birth that we must overcome to operate well in markets.

In our specific industry, this translates into a very important distinction between fully systematic models and the human/algo hybrid. Even the best, most sophisticated quantitative systematic funds cannot significantly overweight positions in their book based on a very specific understanding of the minutiae of what drives a specific stock in a specific industry. It's simply too much work to do that across 1,000 names, or 3,000 names. You can't have models for a zillion different variables; intuitively, you're likely to come out with nothing good in that situation versus having a dozen variables that work, but only work okay, across all 1,000 names. And for them that's fine, because diversification pays. But it means they hit for singles only, all day.

Discretionary managers can do better though. They can do the deep dive into the quantitative research to find the variables that matter for their specific 50 names, and build the models that sit beside the human to guide decision making 97% of the time. And it’s that other 3% where humans can decide to size up when they’ve “seen this pitch” 1,000 other times, it’s a hanging curveball, and it’s time to swing for the fences.

There Are No Shortcuts

While the majority of discretionary firms have woken up to the existential issue they face, almost every single one of them is going about solving the problem in the wrong way. In essence, they are either unaware of or unwilling to tackle the true issue at the root of their performance, and have attempted to take a shortcut which will not solve anything. The core problem is not getting access to unique data sources, it's not figuring out how to build a "data science" team, and it's not figuring out how to build a "data lake". These are necessary but not sufficient steps. The true problem is more deeply embedded in the culture of discretionary firms; it is a uniquely human problem. The real problem that firms must solve before they tackle any of the above is how to structure their teams and the investment process itself to be more quantitative and systematic.

In fact, solving those other problems before solving the real one will only make things worse. When performance doesn't improve because PMs are not listening to the output of those processes, what do you think everyone is going to say when reviewing resource allocation? The "quant and data stuff" is going to be the first thing thrown under the bus as adding no value. And after being burned, firms will take two steps back after attempting that first step forward. This isn't a thing that you just bolt onto your firm by hiring a few quants or a data science team. This isn't a thing that you can just purchase your way to success in. God knows there are big firms out there that have built big teams and bought a lot of data, only to have the firm at large declare it all useless. That happens because no one did the really hard work of rethinking the core investment process from the ground up, with these things included from the beginning.

I say this to the detriment of my own ability to quickly sell more Estimize data to these discretionary firms, which certainly could benefit from it. But I've seen what happens when firms that don't have a process to use things buy them; it doesn't work out well for either party in the end. I'm involved in no fewer than a dozen deals right now where the quant who was hired by a discretionary firm to review new data sources is banging his head against the wall trying to figure out how to incorporate any of the stuff he looks at into the PM's decision making. It's not just our stuff, it's everyone else's as well. Eventually these guys are going to get fed up and leave their firms. What quant wants to fight through interpersonal, social, and process issues all day just to have their work mean anything?

Tear It Down To The Studs

A few weeks ago I was sitting in the conference room of a $20B asset manager who wanted to hear more about the Estimize data set and how they might use it. I normally start these meetings by asking them to explain how their investment process works, who makes decisions, and on what timeframes. The senior management explained that they had 17 analysts across four main sectors, shared by four PMs. Their books consist of 60-70 names at any given time, and they are shooting for holding periods of 1-2 years on the long side (believe me, this never actually happens, but all firms like to tell you they are longer-term investors; frankly, I don't understand why. Because it sounds good?). When I asked how stock selection took place, they all kind of looked at each other like no one wanted to answer the question (there was a PM sitting in the room). The basic answer that the CIO eventually managed to produce, like Sylvester coughing up Tweety Bird, was that it was at the analyst's discretion to bring ideas to the PM and at the PM's discretion to use them…or not. And of course, the analysts' calls were based on their "unique understanding" of each stock, whatever that means. I asked if they measured the efficacy of the analysts' historical picks. No. I asked if they measured the accuracy of the analysts' fundamental estimates. No.

At this point everyone around the table got it, and they began asking me questions about the percentage of firms I had met with that were attempting the transformation at different levels. I remarked that I wasn't there to tell them how to structure their investment process; it wasn't my place to do that. They looked at me and said, half jokingly, "maybe you should". It was slightly awkward. You could see it on their faces: they had brought me in to talk about how they might leverage our specific data set because their "alternative data committee" had been tasked with that, but they were realizing that the more they dug into this, the more it was going to reveal the real issues they faced.

While these guys should get a lot of credit for being openly introspective, the amount of ego damage some very rich people will have to absorb in order to make this shift is going to be massive. To my surprise, the guys in that room (no women?) were sober about it and seemed to accept the conclusions they themselves had drawn about the situation they are in. But they certainly understood how difficult it would be to make this shift given the institutional inertia. It is not enough for executives to want something to happen in our industry; the PMs who truly wield the power must be on board as well, and this is a threat not only to their ego but, if done correctly, to their position within the firm.

Here’s What Needs To Happen

Let's transition away from the problem now, which everyone should be sober about the difficulty of solving, and talk about some solutions. I will preface all of this with the fact that what I'm about to propose has never been fully implemented. Some funds I've met with have pieces of this process; other funds have other pieces. Most in-house software built to support pieces of this process is super shitty. That's what you get when financial people build software: they forsake user experience, and then no one uses it. I would sincerely advise firms to steer away from attempting to build software meant for use by non-quantitative individuals. For far fewer dollars, I guarantee you can find a fintech company or startup that is willing to work with you to build the feature set you're looking for, and they will have a far greater incentive to see that it works. And if you believe that your edge will be in building this glorious piece of software that no other fund has, well, good luck with that.

  1. Core beliefs and universe
  2. Developing differentiated forward looking views
  3. Structuring the unstructured
  4. Developing factor models
  5. Measurement of historical accuracy
  6. List review and debate
  7. Stock selection, position sizing and market timing
  8. Organizational Structure
  9. The Software

Core Beliefs and Universe

I ask some funds what their investment philosophy is, and many can't answer that simple question. All teams need to know what their north star is. What are the variables, timeframes, market caps and sectors that you believe your team has an advantage in analyzing better than everyone else? Almost all funds lie about turnover: how often they trade, and why they trade. Frankly, I find it interesting how poor most firms are at elucidating this set of constraints, given that LPs like to bucket their investments in funds this way and look at the exposure to these variables. Any fund looking to market itself well should have a good grasp on this.

The important aspect of seriously sitting down to lay out your focus here is that many of the next steps depend on your bent. Many firms will figure out they don't have the right people to execute the strategy they actually want to run. The same type of people who will successfully run a momentum book will fail miserably at picking value stocks; it's a completely different set of emotional and cognitive abilities. There's no such thing as a "generalist". When someone tells me they are a generalist, I think: oh, that's awesome, this must have been the first job you were offered.

Let’s say that we’re building a fund in which our investment universe is tech and consumer names above $300M market cap. We’re a long/short shop and our target investment horizon is one year (more on this later). We believe in growth and momentum so we’re focused on names that are growing top line numbers quickly and show relative strength vs their peers and the market.

Developing Differentiated Forward Looking Views

At its most basic level, fundamental investing is a very simple algorithm. Stock XYZ (which is part of your universe) currently trades at 5x trailing 12-month revenue. Revenue is the variable most causally related and correlated to the trailing one-year performance of the stock. Your expectation for forward 12-month revenue is $1B. You believe that if the company hits that mark, the multiple will go from 5x trailing to 6x trailing (for various reasons). This would give the company a $6B valuation. The "market's consensus" (more on that later) is for the company to produce $900M in revenue, which at the current multiple implies a $4.5B valuation ($900M * 5). Your alpha is the difference between the valuation the market expects and the valuation you expect; you can be right about the revenue, the multiple, or both.
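The arithmetic in that paragraph can be written out in a few lines. This is a sketch using only the hypothetical numbers from the text, not real data:

```python
# A minimal sketch of the fundamental-investing "algorithm" described above.
# All numbers are the hypothetical ones from the text.

def implied_valuation(forward_revenue: float, forward_multiple: float) -> float:
    """Valuation implied by a revenue estimate and a price/sales multiple."""
    return forward_revenue * forward_multiple

# Your view: $1B forward revenue, multiple re-rates from 5x to 6x trailing sales.
my_value = implied_valuation(1_000_000_000, 6)

# Market consensus: $900M forward revenue at the current 5x multiple.
consensus_value = implied_valuation(900_000_000, 5)

# Your "alpha" is the gap between the two implied valuations.
alpha_pct = my_value / consensus_value - 1

print(f"my value: ${my_value/1e9:.1f}B, consensus: ${consensus_value/1e9:.1f}B, "
      f"edge: {alpha_pct:.0%}")
```

The point of spelling it out this way is that every input is an explicit, measurable number, which is exactly what makes the process auditable later.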

In order to generate real alpha, you must have differentiated views from the market consensus; otherwise you're just playing with beta. And as I outlined in the previous piece, it's very obvious that the number of LPs willing to invest in managers who are simply leveraging beta exposure is dwindling.

In today’s world analysts at hedge funds and asset management firms do obviously make estimates and have price targets. But there are a few huge issues around that.

First off, these estimates almost exclusively reside within disparate Excel models. Excel is a great tool for modeling, but it's a really bad tool for running processes. Yes, macros exist, but in my humble opinion anything being run via a macro in Excel should probably be a piece of external software. It's not that hard to get data out of Excel. These estimates should be collected centrally so that they can be processed centrally. There are a small handful of firms doing this, none of them well. I've seen the software; it sucks, and the analysts hate it.

Second, analysts don’t update their models at very regular intervals. It’s normally an ad-hoc process, which severely limits how useful the models are.

Third, because there’s no central repository, there’s no ex-post analysis of the results of those estimates and certainly no good Moneyball being played by looping the learnings back into the investment decision process.

Fourth, analysts only tend to share the models with the PMs for stock selection when they feel, for whatever reason, it’s warranted. The entire idea that analysts should be “pitching” ideas to PMs HAS TO END. Just think for a second how many crazy heuristics and outside variables are associated with this being how ideas bubble up to the person who puts things in the book. While the analyst is doing really good work to analyze a few dozen or more companies, has models, has estimates, maybe has a really good feel for supply/demand for the stock, and other variables, he could decide not to take an outlier view that he thinks is a winner to the PM because he had a fight with his wife this morning and doesn’t feel like getting into another difficult argument again today. Maybe the last few ideas he brought the PM didn’t go well and so he wants to play it safe with the next one. Maybe he’s been bringing all his ideas to the PM but the PM doesn’t listen to him and now he’s pissed so he’s gonna keep a personal record of them but not give the good ones to the PM because he hates his boss and wants to change firms. I’ve seen it all! The fights between analysts to get their ideas into the book, the back stabbing, the bro-ing out with your PM to become better friends. Who thought any of this was a good idea?!

Fifth, no one is measuring the efficacy of any of the estimates being made, and it's rare that the performance of the analyst is tracked to any meaningful degree. And if it is, it's rarely, if ever, used to inform future decisions. We're dealing with very structured data here that we can measure extremely easily at regular intervals. There is no reason we should be measuring analysts subjectively in dumb ways, like how well they get along with the PM. This isn't a hockey team; chemistry isn't necessary. It's about having a differentiated view and being correct. Yes, you need to be able to talk through how you got there and your confidence intervals, but man, that should be tertiary at best on the totem pole of analyst skill sets. Give me Michael Burry over the hedge fund bro who can talk your ear off any day.

"But if we don't have a long history of someone's track record, a small sample of their estimates won't be statistically significant," you say. Yes, true, but you're not dealing with one analyst; you're dealing with a dozen at a decent sized fund. We can quickly get to significance in aggregate, and then, as track records build for individuals, we can start to narrow down our selection.

How do analysts develop differentiated views? Among all the other methods, which I'm not going to get into here, it can and should include access to new, unique data sources. As I said previously, firms are mostly doing this whole thing backwards, but once they get this process in place, this is where a lot of the company-level data fits in. It should inform the analyst's beliefs about forward earnings, revenue, EBITDA, same store sales, monthly active users, ARPU, or whatever other variable is important to that company. And then it should also inform their views on the multiple. It is really at the analyst level that a lot of the company-specific information should be used. But at the end of the day, it needs to be used to inform a structured estimate of future expectations and placed into the rest of the stock selection process just like any other input.

An analyst told me a story about his fund in which the data science team had gotten some good data suggesting that Netflix's international subscriber additions that quarter were going to be well above what everyone was expecting. They made a recommendation to the PM to buy the stock. The PM looked at it and, for whatever reason, went the other way and shorted the stock. Netflix was up 20% the next day. The head of the firm then put it on the "no trade" list.

None of this is going to matter if you don’t have a process to use it in.

Here's how our theoretical hedge fund is going to work to solve some of these issues. Analysts are going to be required to update their models, and thus their estimates, the day of or after each earnings release, as well as 45 days into each company's fiscal quarter, and again three days before each company reports. Analysts are required to make a full forward year of quarterly estimates for EPS, revenue, and the 2-3 key performance indicators specific to each company (same store sales, bookings, iPhones, etc.). Analysts will put an expected multiple on the aggregated full year of estimates to imply a price target. Let's assume we have 10 analysts, each covering 30 names, so we'll have 300 to work with.
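As a sketch, the structured estimate record described above might look something like the following. Every class, field, and trigger name here is hypothetical, invented to illustrate the shape of the data, not an actual Estimize or fund schema:

```python
# Hypothetical schema for a centrally collected analyst estimate.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class UpdateTrigger(Enum):
    """The three mandatory update points described in the text."""
    POST_EARNINGS = "day of/after earnings release"
    MID_QUARTER = "45 days into the fiscal quarter"
    PRE_EARNINGS = "three days before the report"

@dataclass
class QuarterEstimate:
    fiscal_quarter: str                        # e.g. "2017Q3"
    eps: float
    revenue: float
    kpis: dict = field(default_factory=dict)   # e.g. {"same_store_sales": 0.04}

@dataclass
class AnalystEstimate:
    analyst: str
    ticker: str
    as_of: date
    trigger: UpdateTrigger
    quarters: list                 # four QuarterEstimate objects: one forward year
    expected_multiple: float       # applied to the aggregated full-year numbers

    def implied_target(self) -> float:
        """Valuation implied by full-year revenue times the expected multiple."""
        return self.expected_multiple * sum(q.revenue for q in self.quarters)
```

Because every update is a record rather than a cell in someone's spreadsheet, the ex-post accuracy analysis discussed later falls out almost for free.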

Structuring the Unstructured

Stock selection obviously is not just about structured forward looking estimates. There are a range of other variables that have to be taken into consideration, variables associated with risk factors that have to be accounted for, or catalysts that could affect the fundamentals or the multiple of the stock. Theoretically these unstructured variables should all play into either the fundamental expectation of the analyst or the multiple she puts on it, but realistically they won’t. So we need to capture them.

There are a few decent software tools for teams to capture this type of information and share it, and firms have done a much better job at this task than on the structured estimate side. FactSet acquired Code Red a little while back; Advent acquired Tamale from my friend John Fawcett, who now runs the great Quantopian platform. The problem with both products is that the input is vastly unstructured. Good luck getting anyone to systematically review the notes the analysts input. And if you think getting analysts to make structured estimates on regular timeframes is difficult, try convincing them to put notes in there regularly. There are some firms now trying to run semantic analysis on their own internal research notes to structure the sentiment embedded in them, but I haven't seen anyone doing this effectively or getting anything out of it.

There's a better way to do this. Analysts need to pick the 10 or so unstructured variables that they believe will affect the stock's performance on a regular basis and give structured sentiment for them. I'm talking about stuff like management quality, probability of getting acquired, disruption risk, etc. (each company will share some variables with the others and have its own unique drivers). The key here is to turn those research notes into data by asking the analyst to rate each variable on a 1-10 scale or give a binary yes/no. There are two uses for this. One, when we get to the latter part of the stock selection process where the PM gets involved, it's much easier for her to look at structured variables than at a whole lot of words. Two, we can see the historical change in an analyst's sentiment for these variables and ask them questions about that change, and we can run correlations between these variables and stock performance, or between these variables and the estimates.

We’re simply trying to collect as much useful data at regular intervals as we can, because machines are really good at using data, they aren’t so great at dealing with a bunch of words entered at random intervals.

Our analysts are going to fill out these surveys of otherwise unstructured variables at the same intervals they make their estimates, or at any point in between.
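To make the payoff concrete: once the 1-10 ratings exist as data, checking whether a variable like "management quality" carries signal becomes a one-function job. The ratings and returns below are invented purely for illustration:

```python
# Toy example: correlating an analyst's structured 1-10 ratings with returns.
# All numbers are fabricated to show the mechanics, nothing more.

def pearson(xs, ys):
    """Plain Pearson correlation, no third-party dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One analyst's quarterly "management quality" ratings for a name (1-10 scale)
mgmt_quality = [7, 7, 6, 5, 4, 3]
# The stock's forward-quarter returns over the same periods
fwd_returns = [0.05, 0.02, 0.01, -0.03, -0.06, -0.10]

# A strongly positive value suggests the rating is worth tracking systematically.
print(f"correlation: {pearson(mgmt_quality, fwd_returns):+.2f}")
```

With free-form notes, this analysis is impossible; with ratings, it's trivial, which is the whole argument for structuring the unstructured.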

Developing Factor Models

In a minute we’re going to discuss how our analysts’ forward looking estimates are going to be the backbone of our stock selection process. But before we get to that, I want to talk a little bit about the other piece of the puzzle which we will later merge with the former.

Up until now, quants really haven’t played a role at all in our process. Quants aren’t really the people who are going to help our analysts get a hold of unique data sets that help them make better estimates, that really falls into the realm of data engineers and data analysts (more on that later). But we’re now going to pull them into developing factor models which will help us with both our stock selection and risk management (market timing, position sizing, etc.).

A factor model, for the uninitiated, is basically a Z-score of some variable across the stocks in your universe. Some factors represent betas (value, momentum, growth) and some can be true alphas, like the ones we provide to clients at Estimize (post earnings drift, pre earnings consensus trend, historical surprise). Our goal in this part of the process is to develop alpha-generating factor models; at the very end I'll talk briefly about portfolio construction and how to limit factor exposure to betas.

Alpha-generating factor models are developed by quants using the scientific method. We start with a hypothesis, usually focused on a specific data set, for why variable A would be causally related and correlated to the outcome in the price of a set of stocks. For example, maybe we believe that the tone of the words the CEO uses on the earnings call is associated with six-month stock performance. Either we've developed the tools internally to perform this analysis, or a 3rd party has and can deliver an output our quants can leverage. In either case, our goal is basically to rank the sentiment from the calls from best to worst. We turn that into a numeric score for each company, indexed between -100 and +100. Once we have our score, we take the first half of the time series of our factor score data set and look at the correlation between scores and stock movement. If there is a positive relationship and the top decile of scores significantly outperforms the bottom decile, we've completed step one. Then we take the second half of our time series and run the same analysis. We then analyze how similar the results were between the first half (in sample) and the second half (out of sample). If they are similar, we can be confident that we have a model that will inform us well in the future.

Warning: the above paragraph is probably the quickest dirtiest review of quant research and factor models ever written, but it should get the basic point across.
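In the same quick-and-dirty spirit, the in-sample/out-of-sample decile test can be sketched in a few lines. Everything here is a stand-in: the scores and returns are randomly generated, and a real test would use actual factor scores and forward returns per stock per period:

```python
# Compressed sketch of the in-sample / out-of-sample factor test.
# Scores and returns are random stand-ins for real data.
import random

def decile_spread(scores_returns):
    """Top-decile average return minus bottom-decile average return."""
    ranked = sorted(scores_returns, key=lambda sr: sr[0])
    n = max(1, len(ranked) // 10)
    bottom = sum(r for _, r in ranked[:n]) / n
    top = sum(r for _, r in ranked[-n:]) / n
    return top - bottom

random.seed(42)
# Fake history: a score in [-100, +100] plus a noisy forward return tied to it
history = [(s := random.uniform(-100, 100), s / 1000 + random.gauss(0, 0.02))
           for _ in range(400)]

in_sample, out_of_sample = history[:200], history[200:]
spread_is = decile_spread(in_sample)
spread_oos = decile_spread(out_of_sample)

# If both spreads are positive and similar, the factor may generalize.
print(f"in-sample spread: {spread_is:+.3f}, out-of-sample: {spread_oos:+.3f}")
```

The fake data is constructed so the factor "works" by design; on real data, the interesting and common outcome is the opposite, which is exactly what the out-of-sample half is there to catch.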

We're not just going to develop one factor model; we're going to build a whole set of them. And each company may have different data sets that give us insight into how its stock will perform relative to others. We're basically attempting to algorithmically rank our universe from best to worst on a range of different factors.

Quants are going to play a big role in this process, but so will the data analyst who works with the fundamental analysts to understand their hypotheses for what data might drive their names. You cannot do the quant research without the data analyst or the fundamental analysts; this is a team effort, because someone has to supply the economic rationale for why a variable is causally related to under/over performance among their names.

The quant research pipeline is a tricky one. There are always limited resources, and data companies calling left and right to get you to test their data sets. Data analysts need to be attending conferences and talking to new data vendors constantly, while working with the analysts and PM to figure out what variables might make the most sense for their names. A simple weekly meeting where the data analyst, PM, and analysts all come with a forcerank of the data sets on the board for future testing makes sense. Results from ongoing testing will also be shared at this meeting. The PM should ultimately be the one who sets the schedule for future testing, not because they are the most knowledgeable, but because it will engender the sense of ownership over the process and the outcome that the PM needs to have; why this matters will become clearer later.

Measurement of Historical Accuracy

While having all this structured data from our analysts is great, it only gets us so far if we don’t measure and use it for a few different purposes.

It's not that hard to run basic statistics on how accurate our analysts are. We want to know which stocks they are best at estimating, and whether they were correct because of their fundamental estimates or their multiple estimates. When are they most accurate: when they are super aggressive, or when they are more conservative? Obviously, the larger the delta to consensus, the greater the chance they are going to be wrong, so we need to understand their weighted score.

As I said above, we’re trying to play Moneyball here. We’re looking for the analyst predictions with the greatest delta (opportunity) which we have confidence in.
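One hedged sketch of such a "weighted score": accuracy alone rewards hugging consensus, so we can scale each call's accuracy edge by how bold it was relative to consensus. The exact weighting scheme below is invented for illustration, not a prescription:

```python
# Illustrative scoring of a single analyst call: bold AND right scores high,
# bold and wrong scores very negative, consensus-hugging scores zero.

def weighted_call_score(estimate, consensus, actual):
    """Reward calls that deviated from consensus and landed closer to actual."""
    boldness = abs(estimate - consensus)
    if boldness == 0:
        return 0.0                  # no differentiated view, no credit
    # Positive if the estimate beat consensus on accuracy, negative otherwise
    edge = abs(consensus - actual) - abs(estimate - actual)
    return edge * boldness / abs(consensus)

# Bold and right: consensus $900M, analyst $1B, actual came in at $990M
print(weighted_call_score(1_000, 900, 990))   # positive score
# Bold and wrong: analyst $1B, actual came in at $850M
print(weighted_call_score(1_000, 900, 850))   # negative score
```

Averaged over an analyst's history, a score shaped like this surfaces exactly the Moneyball profile the text describes: people whose big deltas to consensus tend to be right.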

We can also look at the unstructured variables for which we had the analysts give us their ratings. Were they able to predict names that would be taken out, or management teams that would get caught committing fraud? There's only one way to know: measure! This can all be done easily with software.

Being able to measure all of this should not only inform which analysts you trust, but who should be compensated more or less, or fired. Let's not pretend our industry is based on love and kindness; analysts are mercenaries who should gravitate towards the firms that treat them the best (comp, resources, environment, etc.). If I were a great analyst, I'd want to work in a truly meritocratic environment where I didn't have to play politics with my PM. I'd just do my job as a great analyst and have the track record to prove it if they didn't ultimately listen to me.

This is also going to have an impact on whom firms hire, namely, women. I'm no culture warrior, but guys, you are destroying your chance at building great firms by continuing to lock an entire gender out because of these cultural issues. Women are more analytical, less prone to irrational bets, and frankly, in my experience, they cause fewer issues in the workplace because they have less ego. There are academic papers backing up the fact that female PMs are simply better at their jobs.

List Review and Debate

So let’s bring this all together now. We have our universe and investment philosophy. We have our analysts’ expectations. And we have our factor models. Now comes the hard part where all the egos come out to play.

The software that sits at the center of this process is going to produce an analysis of where our analysts’ implied price targets differ from the market consensus, and the confidence interval the system has in those differentiated expectations being correct. All of our names are going to be sorted top to bottom, both on the long and the short side, by the delta in those expectations so that we can run down the list at a regular interval. On this screen we’ll be able to dig into all of the underlying statistics about our analysts’ expectations, their historical accuracy and all of the unstructured attributes they’ve given 1-10 scores for.
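At its core, the ranked screen described above is a small sorting job. A toy sketch, with invented tickers and numbers standing in for the analysts' implied targets and market prices:

```python
# Toy version of the delta-ranked screen: sort the universe by the gap between
# each analyst's implied target and the market's price, split long/short.

def build_screen(views):
    """views: list of (ticker, analyst_target, market_price) tuples."""
    rows = [(t, tgt / px - 1) for t, tgt, px in views]     # expected % edge
    longs = sorted((r for r in rows if r[1] > 0), key=lambda r: -r[1])
    shorts = sorted((r for r in rows if r[1] < 0), key=lambda r: r[1])
    return longs, shorts

views = [("AAAA", 120, 100),   # +20% edge: strong long candidate
         ("BBBB", 80, 100),    # -20% edge: short candidate
         ("CCCC", 103, 100),   # +3% edge: bottom of the long list
         ("DDDD", 60, 100)]    # -40% edge: top short

longs, shorts = build_screen(views)
print("longs:", longs)
print("shorts:", shorts)
```

The real system would also attach the confidence statistics and 1-10 attribute scores to each row, but the ordering logic the PM runs down every day is exactly this simple.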

Theoretically, this list should be the basis for the PM's stock selection. These are her analysts' best ideas, and it is now her job to dig further in. In the case of our fictional fund, the PM should be reviewing this list every day, since changes in expectations after recent earnings reports can and should significantly change our analysts' future expectations and affect which names are long and short targets. For longer-term funds this review might be weekly, or even monthly or quarterly.

The PM then needs to go back to each analyst and ask questions. Dig deeper into their thesis. Bring up issues and have debates about their underlying assumptions. Why did they expect the multiple to increase? What was the catalyst for that? Do we really believe the market will see things the way we do? Why would you be wrong about growth going from 20% to 30% YoY? Could this short name get squeezed, or bought?

This is where the PM earns her job. Theoretically, the alpha she creates is the difference between a portfolio that simply rebalances regularly into the top and bottom ideas and the portfolio she actually holds after weeding out names where her experience and intuition say to deviate from the model. We can directly measure this. It's not the analyst's job to fight for their position to be in the book; it's the PM's job to weed it out. I hope you appreciate the difference between those two things and why it makes such a huge impact.
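That "directly measurable" claim reduces to simple arithmetic: run the naive model book as a paper portfolio alongside the PM's actual book, and the PM's value added is the cumulative-return gap between the two. A minimal sketch, with hypothetical return series:

```python
def pm_value_added(model_returns, actual_returns):
    """Cumulative return of the PM's actual book minus the naive
    model book (top/bottom ideas rebalanced mechanically).

    Both inputs are lists of simple per-period returns, e.g. 0.01 == 1%.
    """
    def cumulative(rets):
        total = 1.0
        for r in rets:
            total *= 1.0 + r
        return total - 1.0
    return cumulative(actual_returns) - cumulative(model_returns)

# If the PM's weeding out of names helped, the gap is positive:
value = pm_value_added(model_returns=[0.02, -0.01, 0.03],
                       actual_returns=[0.02, 0.00, 0.03])
```

The design choice worth noting: the benchmark is not an index but the firm's own analyst-driven model portfolio, so the measurement isolates the PM's discretion specifically.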

In the case of our fictional fund, the PM will review this list daily and have conversations with analysts as needed. Because our universe is 300 names, our book is going to be roughly 40 names, and we’re going to run it beta neutral.

Stock Selection and Portfolio Review

While the PM may review the list daily, that doesn’t mean they’ll be trading daily and it doesn’t mean whatever is at the top and bottom will get into the book. This is where we now weave in the factor models we built earlier.

Our software will place the -100 to +100 factor model scores for each of our factors next to the stocks in simple color-coded boxes and then give an overall factor score. The idea is that we want to match the fully quantitative view against our analysts' predictions and see where the two line up. There may be cases where our analysts believe in the fundamental thesis but our factor models simply say it's not the right time, so we'll pass.
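The overall factor score can be as simple as a weighted average of the per-factor scores, which keeps the result on the same -100 to +100 scale. This is a sketch of one plausible aggregation; the factor names and the equal-weight default are assumptions for illustration, not the article's actual models.

```python
def overall_factor_score(scores, weights=None):
    """Weighted average of per-factor scores, each in [-100, +100].

    scores: dict of factor name -> score. Equal weights by default;
    because it's a weighted average, the result stays in [-100, +100].
    """
    factors = list(scores)
    if weights is None:
        weights = {f: 1.0 for f in factors}
    total_w = sum(weights[f] for f in factors)
    return sum(scores[f] * weights[f] for f in factors) / total_w

score = overall_factor_score({"value": 60, "momentum": -20, "post_earnings_drift": 80})
# (60 - 20 + 80) / 3 = 40.0
```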

Where this gets really useful is when the factor model scores line up with our analysts' assumptions. This is where discretionary firms employing this process can size up and hit home runs. This is why discretionary managers can beat the systematic quants: they can take bigger swings when a fat pitch comes right down the center of the plate.

PMs need to be able to put their egos away here and listen to what the factor models are saying. Don't try to bet against the Moneyball system; more often than not you're going to lose.

Some of these factor scores will be stock-selection based (historical surprise, relative strength, etc.), but some will be more timing based and shorter term, like a few of the factor models we've built at Estimize around pre-earnings, through-earnings, and post-earnings drift. These models are helpful for risk management and position sizing around earnings and can help you collect alpha in the few weeks around each report. They can also keep you from holding a name through a report where the odds are poor and a negative move is expected. We can use these and other timing models to manage risk better and limit the number of names that blow up on us.

Our goal is to bat around 50% but slug .600, and to limit our losses on the downside by admitting when we're wrong and exiting trades. There's nothing that says we can't re-enter them later if the fundamental outlook still holds. But if multiples move opposite our thesis, we need to reevaluate why we were wrong about that side of the estimate.
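The bat-50%-but-slug framing is just expectancy arithmetic: with a roughly even hit rate, the book only makes money if the average winner is meaningfully larger than the average loser, which is exactly what cutting losers early buys you. A tiny illustration with made-up numbers:

```python
def expectancy(hit_rate, avg_win, avg_loss):
    """Per-trade expected return given a hit rate and the average
    size of winners vs losers (both as positive fractions)."""
    return hit_rate * avg_win - (1.0 - hit_rate) * avg_loss

# A 50% hit rate works only with an asymmetric payoff: letting winners
# run to ~12% while cutting losers at ~5% still nets ~3.5% per trade.
e = expectancy(hit_rate=0.5, avg_win=0.12, avg_loss=0.05)
```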

The names in our book need to be evaluated regularly as well. Do they still fall at the top of the list? Do our factor models still favor them? Just because our target timeframe for our trades is a year does not mean that we need to hold names that long or that we can’t hold them longer. It means that we’re basing our book on assumptions made about stock prices a year out.

In terms of position sizing and portfolio construction, it's very important that we're aware of which betas we're exposed to and attempt to limit that exposure. Software like OmegaPoint does a good job of helping to optimize position sizing and portfolio construction for this purpose. Many firms have no clue that they are generating real alpha but producing negative returns because they are exposed to the wrong betas at the wrong times. A great example: Tesla and Microsoft have both done well recently, but simply being long both did not generate the same amount of alpha. Most of Microsoft's move can be explained by a few betas; Tesla's cannot. It was independent of the market, so to speak, true alpha. We're looking for true alpha, or at least to very selectively leverage beta at opportune times. Betas like momentum and value tend to oscillate in performance, and we would be smart to pay attention to when we should limit our exposure to them in relation to sizable positive moves in our P&L.
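The standard way to separate "explained by betas" from "true alpha" is to regress a position's returns on the factor return series; the intercept is the part the betas can't explain. A minimal sketch using ordinary least squares (the single-factor toy data is made up for illustration):

```python
import numpy as np

def alpha_beta(stock_returns, factor_returns):
    """Regress a stock's returns on one or more factor (beta) return
    series. The intercept is the return the betas can't explain,
    i.e. the alpha in the sense used above.

    stock_returns: shape (T,); factor_returns: shape (T, K).
    Returns (alpha, betas).
    """
    T = len(stock_returns)
    X = np.column_stack([np.ones(T), factor_returns])  # prepend intercept column
    coefs, *_ = np.linalg.lstsq(X, stock_returns, rcond=None)
    return coefs[0], coefs[1:]

# A stock that moves exactly 1.5x the market has no alpha at all:
market = np.array([0.01, -0.02, 0.015, 0.005])
stock = 1.5 * market
alpha, betas = alpha_beta(stock, market.reshape(-1, 1))
```

In the Microsoft/Tesla framing above, a name like Microsoft would show most of its return loading onto the betas, while a true-alpha name would show a large residual intercept.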

Organizational Structure

One of the biggest problems today is that while some firms are begrudgingly trying to make the switch to being more quantitative, they have not altered their organizational structures at all. The quant or data science teams they've hired sit almost entirely outside the process itself. CIOs have no clue where to put these people, and they don't want to tell the PMs to deal with them, so they just stick them in another room. Huge $200B asset management firms have hired a bunch of quants, usually in the risk management department, to do research, simply to say they have quants working there. Even the most senior quants and heads of data science have zero power to implement anything, because the PMs who run each book are still in charge. And these firms rarely want to build out fully systematic books because that's not where the AUM is; they don't really have, nor do they want, products set up for that.

I want to walk through the different types of people in the front and middle office, how firms should think about hiring these people and giving them authority, and, most of all, how CIOs should think about demanding cooperation among them.

Fundamental Analyst – Gone are the days of simply hiring the kid coming out of her second year of investment banking at Goldman. That's fine if you just want your analysts to build models and run numbers, but it's useless if you need them to have an informed and differentiated view on an industry. Analysts are actually going to be older, most likely mid-30s or greater, with experience working in the industry they cover. If you're going to make estimates for the future of semiconductors, you had better have worked in that industry, because technologies move quickly and there are both hugely cyclical and secular trends taking place at the same time. My enterprise tech analyst had better come from a Silicon Valley startup background, someone with relationships in that world, where companies like IBM are getting disrupted faster than ever by startups that can scale extremely quickly. There is massive opportunity in those relationships and in knowing the competitive landscape first hand. Analysts do not need to possess technical quantitative knowledge, but they have to be open to using new unique data sources as inputs to their estimates and their sentiment about the multiple. Overall, I want smart people with deep industry experience willing to make bold calls.

Trader – Traders are becoming less and less relevant as algos take over, but those who are left are going to be far more quantitative than they were before.

Data Engineer – Engineering is an important part of leveraging data, and having someone dedicated to working with data feeds is important. A quantitative background is great but not necessary; more than anything else, I want someone who works quickly and accurately. This is a data wrangling job: matching things up, cleaning them up, dealing with vendor sales engineers.

Quantitative Engineer – These are the really expensive engineers, the ones who have to turn the quant models into code (the factor model). They need quant backgrounds but also very strong engineering skills. They are usually the lower-level PhD students, though not the really awesome ones doing astrophysics.

Quantitative Researcher – This is your pure quant, the one doing the research into new factor models and looking at the efficacy of new data sets as an indicator of future fundamental performance of a company. You can grab them out of Chicago Booth and other academic institutions.

Data Analyst – Ah, the data analyst, the rare bird. This is a hard role to fill and one that many firms have stupidly skipped over. This is the person who works between the analysts/PM and the quant/quant engineer/data engineer. This person needs technical knowledge, but also the ability to dive into the analysts' models and see what data could help them. This person is going to be at a lot of conferences looking for new data sets and even working with the data engineer to quickly take a look at new things. Interpersonal skills are a high priority here. It's a hard mix of skills to find in one person, which is why I've seen firms pay so much money for these people.

Portfolio Manager – Of all the roles, this is where I think things really need to change in terms of who sits in the seat. It can no longer be the hedge fund bros; they simply won't survive here. Nor will the pure gunslingers and tape readers. Gone. And you certainly don't want the pure quants sitting in this seat either. PMs of the future are going to be far more interpersonal and process driven. This person does not need to be the smartest person on the desk; in fact, I think it's probably a detriment to the team's success if she is. This is a cross-functional role, and one that needs to be filled based on the behavioral attributes of the person more than anything else. An MBA may be useful here, but I would even say that experience at the early stages of a startup as a CEO can add a lot. I'm waiting for someone to build a firm that leverages psychometric testing for different investment strategies, so that we can identify people tuned for momentum vs value. You're talking about completely different psychologies between those two people, and it's imperative you choose the person correctly. The PM needs to be able to keep the pace like a conductor and have enough general knowledge about the sectors they trade to go deep with the analysts. PMs should have some training in statistical and quantitative methods so they can talk intelligently with the quants and trust the factor models. Without that trust there's simply no point in having the models, and you'll only gain that trust by understanding how they are built. Should a PM know how to code? No. Should they understand what the code does and why? Absolutely. Basic data science classes can provide this knowledge; a quantitative research methods 101 course in college is a requirement.

I believe compensation structures for the PM need to change. This is no longer "his book." He is another player on the team with a specific role: to coordinate the dance. In many ways he will have less impact on the alpha generated by the book than the analysts or the quants who create the factor models. The PM is now the offensive coordinator calling the plays, not the quarterback on the field scrambling around and throwing touchdowns. We can now compensate analysts accurately for the efficacy of their calls, and the PM for how much alpha she adds above them. The rest of the team should be bonused based on the performance of the book.

At pod based firms, the PM, trader, analyst, data analyst, quant and quant engineer are going to be members of the individual pod. Outside each pod will sit a centralized data infrastructure group made up of the firm’s CTO, a central data analyst, and centralized data engineers. Do not place true quants in this group because they will have little to no impact on anything used by the PMs if they are not part of their actual group.

The Software

I’ve talked a lot about this software that will sit at the center of this process. As I said before, firms should not attempt to build it, they will fail.

For the past 6 years I've been building a company called Estimize that does much of the first part of the process: collecting and analyzing the forward-looking fundamental assumptions of analysts. Over 55,000 people now contribute their estimates to our platform, including a broad swath of the buy side, making it the largest estimates data set in the world. It produces consensus estimates that are more accurate than Thomson Reuters or Bloomberg 70% of the time and 15% more accurate relative to the company's print. Crowdsourcing works, plain and simple. Quant and discretionary firms purchase our data feeds, and some use them effectively to generate alpha.

The next step we are about to take, and this shouldn't be a surprise to anyone, is to add the multiple expectation in order to derive a price target at those discrete intervals. This will give us the market's consensus multiple, and with it the market's expectation for the stock price.

We are now beginning to partner with discretionary firms looking to implement this type of process to provide them with the internal dashboard which will do what I’ve outlined above. This is the next 3 years of my firm, working with funds to solve this process issue and become more quantitative in their decision making. I believe good software can produce positive behavioral results, as we have seen already with the pure fundamental estimates we provide to the market.

I’m excited to be a part of this massive shift in our industry and sincerely believe we can help build better firms that will be successful in competing for both capital and alpha against the systematic quants. My team and I are always here to chat about how you can effectively make that shift.


About Author

Leigh Drogen is the founder and CEO of Estimize, an open financial estimates platform which facilitates the aggregation of fundamental estimates from independent, buy-side, and sell-side analysts. Prior to founding Estimize, Leigh ran Surfview Capital, a New York based quantitative investment management firm trading medium frequency momentum strategies. He was also an early member of the team at StockTwits where he worked on product and business development. When he's not staring at rectangular lightboxes, Leigh can be found on the ice rink playing hockey, behind a grill, or off in search of waves to surf around the world.
