In the movie business, we used to anticipate the value of content (movies) we were interested in financing and producing by looking at “comps” (or comparables). These were movies of the same genre and budget range as the movie we were considering financing that were released in the previous ten years. We averaged out their budgets, domestic and international box office revenue, and TV syndication revenues in order to determine the potential value of that movie throughout a twenty-year life cycle. This also helped us determine the value of our fund’s movie library and, ultimately, the value of the company.
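The comps calculation above is simple enough to sketch in code. This is an illustrative toy, not a real studio model: the field names and dollar figures below are made up, and a real model would discount each revenue stream over the twenty-year life cycle rather than simply summing it.

```python
# Toy "comps" valuation: average the lifetime revenues of comparable
# movies (same genre, similar budget range) to estimate a new film's
# potential value. All numbers and field names are hypothetical.

def comp_value(comps):
    """Average total lifetime revenue across a list of comparable movies."""
    totals = [
        c["domestic_box_office"]
        + c["international_box_office"]
        + c["tv_syndication"]
        for c in comps
    ]
    return sum(totals) / len(totals)

# Two hypothetical comparables in the $40-45M budget range.
comps = [
    {"budget": 40e6, "domestic_box_office": 55e6,
     "international_box_office": 70e6, "tv_syndication": 15e6},
    {"budget": 45e6, "domestic_box_office": 60e6,
     "international_box_office": 80e6, "tv_syndication": 20e6},
]

print(comp_value(comps))  # 150000000.0 — average projected lifetime revenue
```

Averaging many such comparables is also what makes it possible to value an entire library: sum the per-title estimates and you have a rough value for the fund's catalog.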
TV networks and publishers, I imagine, have similar modeling systems, but ones that include assumptions for subscription and advertising revenue. I also imagine that the more content a company produces, the harder it becomes to value an individual piece of content versus a library of content. For example, how difficult would it be for The New York Times to value a single article when it’s churning out tons of content every day? Context also matters: news is real-time, so how valuable is news content a week, a day, even an hour after the news has broken?
Add the commoditization of content – spearheaded by low production costs, the democratization of distribution (anyone can now produce and distribute content through today’s social web), and the rise of aggregation (publishers like The Huffington Post and Business Insider often offer two paragraphs and a link to another publisher’s content as a new piece of content) – and we now produce as much information/content in two days as was produced from the dawn of civilization through 2003.
How can we update the revenue model so that today’s publishers and brands can appropriately price content? The answer, I imagine, will be through social curation. I’m going to investigate this further.