Expert Conversations


NEW YORK – In the traditional expert network business model, investors are connected to an individual expert to speak one-on-one about a question. The expert network selects the most appropriate expert for the query based on an analysis of their qualifications and, in some cases, previous customer feedback. However, the problem with this method is that expert network firms often do a relatively mediocre job of selecting experts. Even at the best expert networks, we have heard from clients that there is a high probability that the expert provided will not be best suited to answering the question. This happens because there is a certain level of information asymmetry regarding an expert’s ability when the selection is based entirely on the seeming ‘relevance’ of an expert’s credentials and experience; for example, someone with a great deal of relevant experience may still be a very poor communicator of that experience.

Some expert networks rely heavily on customer feedback to sort the good experts from the bad; however, there is an incentive problem with this process when one is dealing with investment research clients. If an investor finds an expert particularly useful, he has an incentive to leave less-than-excellent feedback for that expert, because he will not wish others to share the ‘unique’ source of insight he has found. Paradoxically, then, customer feedback data may not be a reliable indicator of customer satisfaction at all.

Another way to deal with the expert-selection problem is to use the community of experts itself to rate and promote the best answers. This is the model that Techdirt has adopted (NOTE: please see the correction from Techdirt’s CEO, Mike Masnick, below in the comments). Felix Salmon, in a post from earlier this year, points out some problems with Techdirt’s implementation:

For reasons which make no sense to me whatsoever, even registered members aren’t allowed to read the open questions unless and until they’ve answered that question themselves first, at some non-negligible length. It’s yet another barrier to conversation and collaboration: first you have to go through the registration process, then you have to answer the question yourself, and only then are you considered worthy of reading what your fellow bloggers have to say on any given subject. You might well find that everything you said is redundant, and has been written already by many other bloggers, and all the effort you put into answering the question was a waste of time. In which case, too bad.

After registering as a Techdirt community member, I looked through a few closed cases. They generally featured precious little discussion; the vast majority are simply a concatenation of unrelated stand-alone answers, as you’d expect if the bloggers weren’t allowed to see what their peers had written… I think the problem is that the money poisons many if not all the great aspects of blogging. Rather than linking joyously to someone else with an excellent insight, the bloggers are incentivized to treat that person as a competitor.

This is not meant to single out Techdirt for criticism – the incentive problems Mr. Salmon describes are probably applicable to all expert communities that are trying to monetize their content. Other expert networks have tried to get around these limitations by offering surveys or panels. Surveys may be good as indicators of general sentiment, but they hardly give an expert any opportunity to share his insights, so we feel they are an inadequate substitute for an actual conversation with the entire community. Panels come closer to simulating a real conversation, in that they invite three or four experts to discuss a topic, with the transcript and notes of their discussions provided to the investors. However, in all expert networks that we know of, the members of the panel are pre-selected by the expert network, and thus we encounter once again the probability that the expert networks may not know who the best experts for a given question are.

One possible answer is for an expert network to allow truly open conversations within the community, wherein both the questions and the previous answers can be seen by all registered experts. Inviting the entire community to participate in this kind of conversation will draw in many more responses; undoubtedly some of the responses will be merely chaff, but exposing the answers to the whole community may actually improve the overall quality of answers, as experts will be less inclined to comment on something they are not knowledgeable about when such efforts might be seen by many of their peers. Investors would be better off as well – they would receive a higher volume of answers, and with the help of community ratings, would ultimately get better value for their dollars than if they had only received answers from experts pre-selected by the expert network. The fact that experts would be conversing with one another would also reduce redundancy and stimulate more sophisticated insights.
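To make the community-ratings idea concrete, here is a minimal Python sketch of how answers in such an open conversation might be ranked for clients. The data shape and field names are our own assumptions for illustration, not a description of any existing network’s system:

```python
def rank_answers(answers):
    """Order answers by average peer rating, breaking ties in favour
    of answers that have been vetted by more community members."""
    def score(answer):
        ratings = answer["peer_ratings"]  # e.g. 1-5 scores from other experts
        avg = sum(ratings) / len(ratings) if ratings else 0.0
        return (avg, len(ratings))
    return sorted(answers, key=score, reverse=True)

# Hypothetical usage: the best-rated answers surface first for the client.
answers = [
    {"expert": "A", "text": "...", "peer_ratings": [5, 4, 5]},
    {"expert": "B", "text": "...", "peer_ratings": [2, 3]},
]
for answer in rank_answers(answers):
    print(answer["expert"])
```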

Some incentive problems do remain. Community members may still wish to rate their own answers up and those of their “competitors” down. This can be solved by paying a separate group of experts (perhaps randomly selected as is done on Slashdot) a modest amount of money to act as moderators on topics where they are knowledgeable. These moderators would not be able to contribute answers on any topic where they are also moderating responses, thereby limiting conflicts of interest. Clients would also, obviously, have the ability to select which experts should receive payment for their work.
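As a rough sketch of that moderator-assignment idea (again with hypothetical names and data structures, not an actual implementation), Slashdot-style random selection combined with the conflict-of-interest rule could look like this:

```python
import random

def assign_moderators(experts, topic, num_moderators=3, seed=None):
    """Randomly select paid moderators for a topic, Slashdot-style.

    Experts who have already answered the topic, or who are not
    qualified on it, are excluded from the moderator pool."""
    rng = random.Random(seed)
    eligible = [
        e for e in experts
        if topic in e["qualified_topics"] and topic not in e["answered_topics"]
    ]
    if len(eligible) < num_moderators:
        raise ValueError("not enough conflict-free experts for this topic")
    moderators = rng.sample(eligible, num_moderators)
    for moderator in moderators:
        # A moderator may not also contribute answers on this topic.
        moderator["answer_blocked_topics"].add(topic)
    return moderators
```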

The fact is that those who are experts on a topic enjoy discussing it with others, and are stimulated to greater insights when they work collaboratively. On practically any conceivable topic, there are discussion forums and blogs which host such open, free-ranging discussions. However, it is tricky for investors to use such sites, because there is typically no screening of respondents whatsoever, and any questions or answers are visible to outsiders. Restricting access to registered experts prevents the free-rider problem posed by throwing viewership open to the general public or to other clients. Restricting membership to those who possess some minimum qualifications, as is done by Techdirt and Sermo, ensures an overall high level of discussion. With these restrictions in place, however, open conversations that involve the entire community of experts may well be preferable to the current expert network model, which typically causes experts to compete with one another at the expense of collaboration.


3 Comments

  1. Hi,

    As President and CEO of Techdirt, I wanted first to thank you for this write-up, but also to respond to a few inaccuracies in the report.

    First of all, it’s incorrect to say that Techdirt relies on experts rating each other. Currently, we do not do that. Ratings are based on customer reviews or Techdirt staff reviews — not other experts. While we may experiment with peer reviews at some point in the future, they will always be a separate ranking mechanism, rather than the key mechanism.

    Second, while Felix Salmon did point out some issues with our implementation back in February, this was based on an early beta release, and we have continued to update and improve the system. Also, the system was not designed to do some of what Salmon expected it to do, which led to some of the critique. We have a variety of different options in how the system can be used, some of which are designed to enable a broader conversation, and some of which are simply designed to get more detailed expertise to the company. We recently started offering more open conversations on some cases, while leaving others more closed. It’s really based on what the customer needs.

    While Salmon claimed that there was “no sense” in the way we implemented it, he failed to note the explanation I had given him in an earlier conversation: many of our customers wanted to see the initial insights of the experts before they were influenced by the wider discussion. The reasoning was to avoid groupthink — and the system has been rather successful in doing so. That said, as mentioned above, we are also now starting to offer more open conversations as well.

    Finally, I’ll note that Salmon did not speak to any of our customers, who have been quite satisfied with the results of the Techdirt Insight Community and keep coming back for more. We most certainly recognize that the system is not for everyone, but we have many happy customers as well as experts.

    That said, we’re still in beta, and rapidly adding new features and services to what we offer. Stay tuned…

    Thanks again for the write-up, and please don’t hesitate to contact me with any questions.

    Mike Masnick
    CEO, Techdirt Insight Community

  2. Ronit Bhattacharyya

    Thank you for your response, Mike; we regret the error in our description of Techdirt’s ratings system. Do you have any thoughts on whether peer reviews might get around the information asymmetry and incentive problems?

    Currently, as far as we know, all expert networks can be quite hit-or-miss when it comes to selecting experts. Firms which specialize in only one industry, such as yours, are presumably better able to maintain strong in-house analysts who can separate the wheat from the chaff; nevertheless, the in-house analyst-based selection process seems like it might run up against some serious problems once the business starts to scale in volume.

  3. Hi Ronit,

    Well… I have some thoughts, some of which I’m not yet ready to share. 🙂 Keep an eye on what we’re doing, though.

    I certainly agree that in-house analyst-based selection is not viable. There are much more effective solutions on the way. There are some companies, not in the expert network business, that have done very interesting work in this field and whose models are quite enlightening.
