Microsoft’s $1.2 billion purchase of enterprise social tool Yammer caught many by surprise. Three quarters later, Microsoft is publicly trumpeting the price that it paid, right next to performance metrics for its new subsidiary. The implication is simple: Yammer was a good buy.
Today in a blog post, Microsoft detailed that in its fiscal third quarter – the most recent quarter – Yammer grew its sales by 259% year over year. In the quarter, its best yet for “user engagement,” Yammer added 312 new clients.
Juan Carlos Perez at CITEworld clarified the sales metric with Microsoft. He reports that: “[Microsoft] specified that to calculate the revenue spike, Microsoft considered only sales of standalone Yammer licenses, and left out revenue coming in from Enterprise Agreement volume licensing deals involving the ESN product.”
Put another way, the 259% growth figure applies only to standalone Yammer sales, and nothing external to them.
It’s mildly frustrating that Microsoft will only report a percentage-ratio figure regarding sales, but it isn’t out of character for the firm. In its most recent quarterly report, Microsoft, by way of a single example, detailed that Windows Phone revenue rose by $249 million. But it declined to disclose the aggregate number, only releasing the increase.
Amazon is infamous for releasing ratio data as well.
It should be noted that Yammer integration across Microsoft’s enterprise and corporate products, particularly SharePoint, remains nascent. A final point: Microsoft is exceptionally fond of announcing when a new business segment reaches a run rate of $1 billion per year, which breaks down to $250 million per quarter. As it hasn’t said as much about Yammer, we can assume that the subsidiary hasn’t reached that size, which gives you at least an upper bound on its scale.
Top Image Credit: Håkan Dahlström
Originally posted here: Microsoft touts Yammer’s growth: Sales up 259% YoY, 312 new customers in most recent quarter
Editor’s note: Richard Price is founder and CEO of Academia.edu, a platform for academics to share research papers. You can follow Richard on Twitter @richardprice100.
Aaron Swartz was determined to free up access to academic articles. He perceived an injustice in which scientific research lies behind expensive paywalls despite being funded by the taxpayer. The taxpayer ends up paying twice for the same research: once to fund it and a second time to read it.
The heart of the problem lies in the reputation system, which encourages scientists to put their work behind paywalls. The way out of this mess is to build new reputation metrics. The changes to reputation metrics in science that are underway are reflective of how reputation is measured online: Twitter has followers and retweets; GitHub has followers and forks; StackOverflow has reputation; Facebook has likes and comments; YouTube has view counts. An ecosystem of startups is working on building these new reputation metrics in science, including my startup Academia.edu, as well as Mendeley and ResearchGate (other important players in the space are PLoS and Google Scholar). All three platforms have passed 2 million users and are growing fast. In three to four years, all the world’s scientists will be on one or all of these platforms.
Scientists need to build their reputations, and the primary reputation metric in science is being published in prestigious journals, such as Nature, Science, and The Lancet. When scientists apply for a grant or a job, they know that there are 200 other people applying for the same grant, and that the grant committee scans resumes looking for such journal titles.
Journal publishers use their ownership of the reputation system to their advantage. When a scientist is looking to be published, the journal requires the scientist to transfer the copyright of their paper. In this transaction, the scientists who wrote the paper are not paid and receive no royalties from the paywall revenues. The peer reviewers who review the paper for the journal are not paid, nor are the taxpayers who provided between $20K and $160K to fund the research behind the paper.
Because of its ownership of the reputation system in science, the journal industry is able to acquire the copyright to the world’s peer-reviewed scientific output for free. It then charges the public who funded the research — and the scientific community who authored and peer-reviewed it — $8 billion a year to access it. Effectively, the scientific community provides the product to the journal industry (the papers and the peer reviews), and then has to pay, along with the public, to get it back.
The tragedy of the commons is that individually rational decisions, namely scientists handing over the copyright of their papers to collect reputation metrics, lead to an outcome that is bad for the public at large: Because of paywalls, the majority of the world ends up being unable to access the scientific literature that it has funded.
To break out of the tragedy of the commons, a number of startups have developed new reputation metrics that incentivize scientists to share their research openly, rather than to put it behind a paywall. Scientists are adopting them to better stand out from the crowd when applying for jobs. Examples of these new reputation metrics include inbound citation counts, readership metrics and follower counts.
Inbound citation metrics. A few years ago, Google Scholar started displaying inbound citation counts for papers – counts of how often a given paper was cited by other papers. Scientists have started to see these inbound citation counts as a way to demonstrate the impact of their work, and are increasingly including them in their job and grant applications. In some fields, such as physics, scientists are more proud of their inbound citation counts than they are of the journal titles on their resumes.
Readership metrics. Academia.edu, Mendeley and ResearchGate are helping scientists to understand readership metrics around their research. These sites tell academics how many people are reading their work, as well as some demographic data about those readers. Increasingly these readership metrics are helping to influence hiring decisions by tenure committees.
Follower counts. Scientists are increasingly wanting direct, unmediated relationships with their audiences. Twitter, Facebook and other sites have put content creators directly in touch with their audiences. Scientists are saying ‘I want that direct relationship with my audience too!’ The personal brands of scientists are starting to eclipse those of journals, and follower counts help a scientist understand the growth of their personal brand.
In the pre-web era, scientists printed out papers and read them in their labs in non-trackable ways. Increasingly, scientists are reading and sharing papers online. The reputation metrics described above are derived from this online activity, and metrics built on commenting and recommending will emerge next.
To distinguish between mere popularity and genuine impact, these metrics will take into account the reputation of the scientists doing the commenting and recommending. The metrics will be recursive in the way that Google’s PageRank algorithm is: it looks at the quality of the linking sites, not just their quantity.
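A recursive reputation metric of this kind can be sketched with PageRank-style power iteration. The endorsement graph below is entirely hypothetical; edges point from the endorsing scientist to the scientist being endorsed, so an endorsement from a well-regarded scientist carries more weight than many endorsements from unknown ones.

```python
# Minimal sketch of a PageRank-style recursive reputation score.
# Endorsements map each scientist to the scientists they endorse.
def reputation_scores(endorsements, damping=0.85, iterations=50):
    nodes = set(endorsements)
    for targets in endorsements.values():
        nodes.update(targets)
    n = len(nodes)
    score = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1 - damping) / n for node in nodes}
        # each scientist passes a share of their own reputation along
        for src, targets in endorsements.items():
            if targets:
                share = damping * score[src] / len(targets)
                for t in targets:
                    new[t] += share
        # scientists who endorse no one redistribute their weight evenly
        dangling = sum(score[node] for node in nodes
                       if not endorsements.get(node))
        for node in nodes:
            new[node] += damping * dangling / n
        score = new
    return score

# Hypothetical graph: alice and bob both endorse carol; carol endorses dave.
graph = {"alice": ["carol"], "bob": ["carol"], "carol": ["dave"]}
scores = reputation_scores(graph)
```

Carol, endorsed by two peers, ends up with more reputation than either of her endorsers, and the scores stay normalized to sum to one, mirroring how PageRank weighs link quality recursively rather than just counting links.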
As I mentioned, the journal title has historically accounted for close to 100 percent of a scientist’s public reputation. That figure is probably now at 90 percent, with 10 percent for the new reputation metrics mentioned above. As new reputation metrics emerge, the journal title will decline in relative significance. Soon we will get to a point where the journal title contributes less than 10 percent of a scientist’s reputation, and the bulk of the scientist’s reputation metrics are coming from other sources.
The costs of publishing a paper via a journal are significant, both in impact and money. Journals take a long time to publish research. There is an average time lag of 12 months between submitting a paper to a journal, and the journal publishing it. This is 12 months of lost impact for the scientist.
Journals mostly put papers behind paywalls, which further limits the audience and impact of the paper. Some journals now make the paper accessible to readers for free, but the author typically has to pay $1,000-$3,000 to remove the paywall around their research.
Increasingly it will be seen as perverse to submit a paper to a journal and wait 12 months for comments from two scientists, instead of sharing it on a platform like Academia.edu and getting comments from hundreds of scientists in two weeks.
The first journals to disappear will be the ones whose titles offer the least reputation boost – the second- and third-tier journals. Shortly afterwards, Nature, Science and the top-tier journals will disappear. Scientists will be sharing their work on multiple platforms, and their reputations will be based on a constellation of metrics. And as journals lose their significance, the dream of open access will be realized: a villager in India will have the same access to the world’s scientific literature as a professor at Harvard.
In addition to incentivizing scientists to share their work openly, new reputation metrics will also play a role in changing science in a number of ways:
Better peer review. Right now the peer-review system takes 12 months to complete, and surfaces the opinions of only two scientists – scientists who may be biased, uninformed about the subject matter, or just in a bad mood when writing the review. Reputation metrics will bring about a system where opinions are surfaced from the entire scientific community, and in real time. A mathematician who sees an incorrect theorem in a paper they are reading will be racing to get their refutation out by 6 p.m. in order to collect the glory and the reputation metrics that will follow from that insight.
Instant distribution. Reputation metrics will incentivize scientists to share their work instantly, rather than let their work be held back in 12-month publication time lags.
Data sets and other content formats. Historically, papers have been shared because the journal title has been the only reputation metric, and journals publish only papers. Journals don’t publish data sets, code, videos, and other aspects of a scientist’s output. Seventy-five percent of the world’s scientific data isn’t shared because the incentives aren’t there for scientists to share it. New reputation metrics will provide those incentives.
Platforms like Facebook, Twitter, YouTube, and others don’t charge users to share or consume content. The costs of the platforms are low enough for them to be able to monetize via ancillary services such as advertising.
We are moving towards a science where scientists and the general public will not be paying to share and consume research. The business models that will emerge in science will be as diverse as the ones on the web at large. There will be advertising businesses; freemium models; and enterprise sales models.
$1 trillion a year is spent on R&D, and as scientific activity moves online and becomes trackable, it is going to be possible to build tools that help that R&D capital be spent more efficiently.
Every innovation in medicine and technology in the world has its roots in a science paper, and speeding up science will change the rate of innovation. The startups looking to help facilitate this, such as those mentioned above and Science Exchange, Figshare, Microryza, Quartzy, Altmetric and ImpactStory, are engineering-driven and need engineers and designers to aid in the effort. If you are interested in joining, there is a list of startups looking to accelerate science here.
[Richard recently appeared on "In The Studio" with TechCrunch's Semil Shah. Watch him discuss Academia.edu and his plan to help scientists break out of the tragedy of the commons.]
The data in this post is fresh, raw, possibly in flux*, and interesting. If you can put on your big kid pants and read on with that knowledge, let’s have some fun.
According to new data from NetApplications for the week of November 11th, here are the market share percentages of the Windows brand in the global market:
For comparison, the same data pegs Linux at just above Windows 8, with some 1.4%. That should provide you with a bit of context. Underneath all of this is but a single data point: Microsoft claimed to sell 4 million copies of Windows 8 in the first 3 days of its formal launch.
The issue with that data was that it included copies sold preemptively to retailers, along with copies of the code sold to consumers. Put another way, Microsoft managed to obfuscate the data in such a way that it is all but impossible to tell how the damn operating system is doing.
We have, however, two more rather scurrilous points of information from NetApplications:
Note: As Windows 8 RT Touch is ranked above Windows ME, it is likely that its market share simply didn’t manage to round up to 0.01%. So while it appears to be at zero, it is perhaps not utterly nil; the Surface would account for that squinch of market share.
The best metric to take from all of this is that Windows 8 is already polling at 19.19% of Vista’s market share. For further comparison, at the very end of October, Windows 8 controlled less than 0.5% of the market.
So, how is Windows 8 doing? Doubling in two weeks or so is decent, TNW thinks. Again, Windows 8 is hardly blowing the walls down, but at this rate the product will grow at a passable rate. Whether we will see a spike of Christmas sales is the next question that will be answered in short order.
Go here to see the original: Early, raw data says Windows 8 has 1% of desktop market, all but zero tablet market share
Today two apps have been released into the Windows Phone Store that will raise the profile of its application collection, and likely give users of the mobile platform something to smile about: Angry Birds Space and Cut The Rope are now available.
Angry Birds Space will set you back a mere $0.99, as will Cut The Rope. The titles feature 150 and 300 levels, respectively, so you are going to get quite a lot of game for your buck.
Windows Phone’s Store is chock full of applications, sporting more than 120,000. However, the platform has long suffered from a lack of ‘top tier’ applications – those that could be deemed ‘must haves.’ Quantity is but one way to measure the health of a platform’s application ecosystem.
App density – the average quality of the higher-end applications present in a single marketplace – is a non-numeric metric that is a key element of user experience.
And so, two more today in the bag for Windows Phone. Two good ones, I’d say, given the popularity of the titles on other platforms. Microsoft has been banging on about Cut The Rope for so long I almost want to give it a try.
As an aside, yesterday TNW calculated that Windows 8 has passed 13,000 applications, or just over 10% of Windows Phone’s total. It’s a start.
Top Image Credit: thethreesisters
See the original post here: Angry Birds Space and Cut The Rope have both landed in the Windows Phone Store
Robert J. Moore is the co-founder of RJMetrics, a company whose software helps online businesses make smarter decisions using their own data. He also previously served on the Investment Team of Insight Venture Partners.
One of the fun things that happens when you start a company is that you get opportunities to share what you’ve learned with other technology leaders. In the past year, I’ve been fortunate to present “best practices” sessions to a number of groups, including the portfolios of First Round Capital, Insight Venture Partners and FirstMark Capital.
These presentations have been informed by my years working with online businesses as a venture investor and as the CEO of RJMetrics. They share a common theme: how to build value by making data-driven decisions. In today’s post, I detail five key steps to ensuring that data can be used effectively at your company:
Whether you are a two-person startup or a Fortune 500 company, these steps are critical to building a data organization that enables action and drives results.
Everyone on your team should be able to pull data from the exact same source in a systematic way that will always yield consistent results.
How much revenue did you have yesterday? This sounds like an easy question but, more often than not, two random people from an organization will give two different answers to that question. Is that net of returns? Are you including shipping and handling fees in your revenue number? How about gift certificates? Did you pull from the billing system or the ERP system? What time zone are you talking about when you say “yesterday?” The list goes on.
To combat this phenomenon, establish a clear set of definitions for the key metrics that drive your business. You can do this any way you choose: build an internal wiki, paint it on the walls or use a third-party tool to centralize your data. What’s important is that you make it easy to access and leave no doubt in anyone’s mind about how key metrics are calculated.
Centralizing these rules allows everyone to compare their analyses apples-to-apples. In a data-driven organization, this is critical.
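One way to centralize such a rule is to encode the metric definition itself in code, so every team computes revenue the same way. This is a minimal sketch under hypothetical assumptions: the field names (`amount`, `returned`, `shipping`, `gift_certificate`) and the business rules (net of returns, excluding shipping and gift certificates, UTC calendar day) are illustrative, not from any particular billing system.

```python
from datetime import date

def daily_net_revenue(orders, day):
    """One canonical answer to "how much revenue did we have on `day`?":
    gross sales minus returns, excluding shipping fees and
    gift-certificate purchases, for a single UTC calendar day."""
    total = 0.0
    for order in orders:
        # skip other days and gift-certificate purchases outright
        if order["date"] != day or order["type"] == "gift_certificate":
            continue
        total += order["amount"] - order.get("returned", 0.0)
        total -= order.get("shipping", 0.0)
    return round(total, 2)

# Hypothetical raw orders for one day.
orders = [
    {"date": date(2013, 5, 1), "type": "sale", "amount": 120.0, "shipping": 10.0},
    {"date": date(2013, 5, 1), "type": "sale", "amount": 80.0, "returned": 80.0},
    {"date": date(2013, 5, 1), "type": "gift_certificate", "amount": 50.0},
]
revenue = daily_net_revenue(orders, date(2013, 5, 1))
```

With every ambiguity (returns, shipping, gift certificates, time zone) resolved inside one function, two people asking “how much revenue did we have yesterday?” can no longer arrive at two different numbers.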
The data has to be correct. And auditable.
In the early days of RJMetrics, we would sometimes get support tickets alleging that our data was “wrong.” Most often, after an investigation, we realized that the data was totally correct. What was happening was that our customer’s assumptions about their data were being challenged by reality.
It is extremely common for members of an organization to challenge the accuracy of an analysis that disagrees with their assumptions or priorities. (I’ll admit that I’ve caught myself doing this at times.) This go-to excuse speaks to the widespread difficulties most organizations have with data accuracy and consistency.
The solution to this issue is an auditability chain back to raw data that is universally accepted as accurate. If everyone is on board with the fact that the data in your raw database represents reality, provide the means for team members to audit calculated metrics by showing the steps that transformed the raw data into the metric in question. Eventually, with these controls in place, the data’s accuracy won’t be the first thing in question when someone’s assumptions are challenged.
You need to have this auditability chain, no matter what system you have in place. The minute someone questions the accuracy of the data you’re presenting, the credibility of the entire decision-making system begins to decay.
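Such an auditability chain can be sketched as a pipeline that records every intermediate result alongside the final metric, so a skeptic can trace the number back to the raw rows. The step names and fields below are hypothetical, not from any particular system.

```python
def audited_metric(raw_rows, steps):
    """Apply named transformation steps to raw rows, recording each
    intermediate result so the final metric can be audited."""
    trail = [("raw", list(raw_rows))]
    data = list(raw_rows)
    for name, transform in steps:
        data = transform(data)
        trail.append((name, data))
    return data, trail

# Hypothetical pipeline: filter out test accounts, then sum revenue.
steps = [
    ("drop_test_accounts", lambda rows: [r for r in rows if not r["test"]]),
    ("sum_revenue", lambda rows: sum(r["amount"] for r in rows)),
]
rows = [
    {"amount": 100.0, "test": False},
    {"amount": 999.0, "test": True},
    {"amount": 50.0, "test": False},
]
metric, trail = audited_metric(rows, steps)
```

When someone claims the number is “wrong,” the trail shows exactly which step produced which intermediate result, turning an argument about assumptions into an inspection of agreed-upon raw data.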
With those first two prerequisites out of the way, things start to get interesting very fast. The good news is that we can start focusing on the metrics that matter. The bad news is that there are literally hundreds of metrics that any given business could try to optimize. If every team in your organization is optimizing for different self-defined metrics, you may find teams doing counter-productive work.
There needs to be a clear set of Key Performance Indicators (KPIs) communicated from the top of your company’s leadership. These KPIs may vary from company to company, but what’s important is the fact that they exist and are clearly communicated. Everyone in your organization needs to unify around a central mission to optimize these KPIs.
Within sub-teams in your organization, additional KPIs can be established that are more tactical and context-specific. However, they should all be selected based on the fact that they are contributors to the company-wide KPIs.
KPIs are actionable when they clearly point you to a decision or next step. Some examples of actionable KPIs are split tests, per-customer metrics and cohort analysis. Numbers like total revenue, active user count or number of page views are good for making the company sound big, but they are not as useful for guiding decisions.
This alignment on specific data-based goals is critical to ensuring that independent teams are contributing in ways that complement each other. If you need help thinking about your own KPIs, Brad Feld’s “Three Magic Numbers” approach or Dave McClure’s Startup Metrics for Pirates are great places to get started.
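Cohort analysis, one of the actionable KPIs mentioned above, can be sketched in a few lines: group users by signup period and measure what fraction of each cohort is still active N periods later. The data shapes here are hypothetical, with months represented as simple integers.

```python
from collections import defaultdict

def cohort_retention(users):
    """users: (signup_month, [months in which the user was active]).
    Returns, per cohort, the fraction still active at each month offset."""
    active = defaultdict(lambda: defaultdict(int))
    sizes = defaultdict(int)
    for signup_month, active_months in users:
        sizes[signup_month] += 1
        for month in active_months:
            active[signup_month][month - signup_month] += 1
    return {
        cohort: {offset: count / sizes[cohort]
                 for offset, count in sorted(offsets.items())}
        for cohort, offsets in active.items()
    }

# Hypothetical user activity: cohort 0 signed up in month 0, cohort 1 in month 1.
users = [
    (0, [0, 1, 2]),  # stayed active for three months
    (0, [0]),        # churned right after signup
    (1, [1, 2]),     # still active one month in
]
retention = cohort_retention(users)
```

Unlike a raw active-user count, a table like this points to a decision: if month-one retention drops for a new cohort, something changed for those users specifically, and you know where to look.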
If data is the lifeline of your business, downtime is death.
Even if you’re lucky enough to have an analyst or data scientist on staff, the number of people in your organization who know how your metrics get calculated should be higher than one. This minimizes the risk of outages or downtime, and creating a sense of ownership around data knowledge helps a data-driven culture “catch on.”
Decisions are often time-sensitive, and if data isn’t available at all times, some people might be forced to make decisions without it. That can make it seem like it’s acceptable to let decisions like these slide through from time to time (or always). Before you know it, your investment in your data can slip away.
Sometimes, companies invest unreasonable amounts of time in using data to drive inconsequential decisions or to study things that cannot be quantified. In these cases, “analysis paralysis” can slow down the pace of progress.
Data is not the holy grail. A data-driven business is not guaranteed to succeed, and data should not be used to answer every one of your company’s questions.
The key to building a data-driven business is employing data in the aspects of your business that require consistent, quantifiable inputs and generate consistent, measurable results. Things like customer acquisition, retention and engagement tactics are great examples. These are critical components that can cause major swings in company growth if they are optimized well.
Every day, I meet companies who want to “do big data.” To me, this enthusiasm about data-driven decisions is both exciting and terrifying. Without a thoughtful data strategy, entrepreneurs run the risk of wasting time or doing more harm than good.
Successful online businesses focus on KPIs that are actionable, practical, transparent and well-communicated. I hope these lessons can help many more join the pack.
See the original post: Lessons for Data Driven Businesses