Google today announced that its cloud platform has received a new ISO 27001 certificate and completed its latest SOC 2 and SOC 3 Type II audits. Before you start yawning, it’s worth remembering that these reports certify Google’s compliance with standard security practices that are meant to keep the data on its platform safe. That covers not just the Cloud Platform itself, but also Google Apps for Business and Education.
The new reports and certificates now cover Google+ and Hangouts, which is nice, but the real news here is that Google is making both its ISO 27001 certificate and SOC 3 audit report easily available to anybody who wants to take a look. The SOC 3 report is a roughly 10-page document that summarizes the audit’s findings and lists the services the auditors inspected. By default, this report is meant to be made public. The SOC 2 report is significantly more in-depth and runs a few hundred pages, but sadly Google isn’t making that one public.
As Eran Feigenbaum, Google’s director of security for Google Apps, told me, this is all about transparency and gaining trust. “Security, privacy and ultimately trust is one of the key points people still have with the cloud,” he said. “When you give your data to a vendor in the cloud, you want to understand what they do with it. A key point for gaining that trust is transparency.”
Until now, you could only get your hands on these reports after you went through a number of formalities and signed a non-disclosure agreement. Even with all of this bureaucracy, Google handed out “hundreds” of copies of its SOC 2 report every year — but only to its own customers.
Still, as Feigenbaum noted, that meant that if you were using App Engine for your product, you couldn’t give the report to any of your own customers because you were under NDA and your customers couldn’t get it because they didn’t work with Google directly.
It’s worth noting that Google isn’t the only company to make these documents public. Amazon publishes its SOC 3 report, for example, as does Microsoft (though I was only able to track down a copy from 2012).
See more here: Google’s Security Compliance Audit Report Is Now Public
Amazon today announced that it’s making Zocalo, its secure document storage and sharing service designed for enterprise use, generally available. The news comes, not coincidentally, on a day when cloud storage competitor Dropbox announced lowered pricing and storage increases for its Pro customers.
Zocalo, which is Spanish for town square, launched into a limited preview just last month with very aggressive pricing: for $5 per user per month, users receive 200 GB of storage. They can use that storage for all manner of files, comment on and within files, share them with others, upload new versions and more, all from any device, including PCs and Macs as well as Android and iOS devices.
Meanwhile, IT admins are able to manage Zocalo, integrating it with existing corporate directories, including Active Directory, which allows users to sign in with their existing Active Directory credentials. IT can also apply the appropriate permissions for users, making sure they only have access to the documents they’re meant to see.
The Zocalo service is now open to all AWS customers, says Amazon this morning in a blog post, and includes a 30-day free trial, as previously announced.
While Zocalo is aimed at the enterprise crowd, many of whom are still paying for legacy, on-premises solutions, it is to some extent a competitor with consumer-first services like Dropbox, which is now trying to stretch itself further into the “Pro” and business markets where it’s up against other cloud storage rivals like Box and Google Drive.
It’s also not the first cloud storage service from Amazon – the company offers a consumer-grade service called Amazon Cloud Drive, a Google Drive competitor whose biggest advantage may be its integration with the company’s own Fire phone. (Fire phone users have unlimited photo storage for their smartphone photos in Cloud Drive.)
Along with today’s public launch, Amazon notes that AWS CloudTrail, a web service that records AWS API calls and delivers log files to you, is also now integrated with Zocalo. CloudTrail will now record calls made with the Zocalo API, which is currently internal, but is planned to be made public in the future, says Amazon.
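To give a sense of what those CloudTrail log files look like to a developer, here is a minimal sketch of filtering one for calls to a given service. The top-level field names (“Records”, “eventSource”, “eventName”) follow CloudTrail’s documented JSON log format, but the sample record and the Zocalo event source shown here are invented for illustration, since the Zocalo API itself is still internal:

```python
import json

# A tiny fabricated CloudTrail-style log file. Field names follow
# CloudTrail's documented format; the record contents are illustrative only.
sample_log = json.dumps({
    "Records": [
        {"eventSource": "zocalo.amazonaws.com", "eventName": "DescribeFolderContents"},
        {"eventSource": "ec2.amazonaws.com", "eventName": "RunInstances"},
    ]
})

def events_for_service(log_text, service):
    """Return the names of all recorded API calls made to one service."""
    records = json.loads(log_text).get("Records", [])
    return [r["eventName"] for r in records if r.get("eventSource") == service]

print(events_for_service(sample_log, "zocalo.amazonaws.com"))  # ['DescribeFolderContents']
```

In practice, CloudTrail delivers files like this to an S3 bucket, where audit tooling can scan them in exactly this fashion.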
Originally posted here: Amazon Opens Up Its Enterprise Cloud Storage Service Zocalo To All
Google and Mesosphere today announced a partnership that brings support for Mesos clusters to Google’s Compute Engine platform. While the Mesos project and Mesosphere aren’t quite household names yet, they are quickly becoming important tools for companies that want to be able to easily scale their applications, no matter whether that’s in their own data centers, in a public cloud service, or as a hybrid deployment.
With this collaboration between Google and Mesosphere, Cloud Platform users will now be able to set up a Mesosphere cluster on Google’s servers in less than 10 minutes. Developers get to choose between two basic installs: a development cluster with four instances, eight virtual CPUs and 30GB of memory for prototyping their applications, or a production-ready install with 18 instances, 36 virtual CPUs and 136GB of memory. If those two options don’t fit, they can also create their own custom clusters.
By default, those clusters include the Mesos kernel, Zookeeper, Marathon and OpenVPN. Once the cluster is up and running, Mesosphere offers a straightforward web-based dashboard for managing these clusters that can be accessed right from the Google dashboard.
As Florian Leibert, the co-founder and CEO of Mesosphere told me earlier this week, the main idea behind Mesosphere has always been to allow developers to treat a data center like a single computer — with Mesos and other software packages abstracting much of the basic devops work away. Some companies that currently use Mesos are Leibert’s former employers Twitter and Airbnb, which he introduced to the open-source Mesos project.
Mesosphere essentially creates a layer on top of your hardware that handles all of the servers, virtual machines and cloud instances in the background and lets an application draw from a single pool of resources like CPU power and memory. By default, Mesosphere’s service does not really care what operating system you run or what cloud you are using. The team tells me, however, that it worked with Google to optimize its offerings for its cloud to take full advantage of the environment (you can read a bit more about Mesosphere and its tools here).
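The “single pool of resources” idea can be sketched in a few lines of Python. This is a conceptual toy under invented machine specs, not how Mesos itself is implemented: the point is that once individual machines are collapsed into one pool, a task can draw more resources than any single machine offers.

```python
# Toy illustration of "data center as a single computer": machines are
# collapsed into one pool of CPU and memory, and tasks draw from the pool
# without caring which machine they land on. Machine specs are invented.
machines = [
    {"name": "node-1", "cpus": 8, "mem_gb": 30},
    {"name": "node-2", "cpus": 8, "mem_gb": 30},
    {"name": "node-3", "cpus": 4, "mem_gb": 16},
]

pool = {
    "cpus": sum(m["cpus"] for m in machines),      # 20 total vCPUs
    "mem_gb": sum(m["mem_gb"] for m in machines),  # 76 GB total
}

def launch(task_cpus, task_mem_gb):
    """Grant a task resources from the shared pool if enough remain."""
    if pool["cpus"] >= task_cpus and pool["mem_gb"] >= task_mem_gb:
        pool["cpus"] -= task_cpus
        pool["mem_gb"] -= task_mem_gb
        return True
    return False

# This task needs more than any one machine has, but fits in the pool:
print(launch(16, 60))  # True
```

Real schedulers, of course, also have to place tasks on concrete machines, handle failures, and isolate workloads; the abstraction is what lets developers stop thinking about those machines individually.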
As part of the partnership with Google, Mesosphere also today announced that it is integrating Kubernetes, Google’s recently launched open source project for managing Docker containers, right into Mesosphere. The company says this will make it easier to manage the deployment of Docker workloads. It’s worth noting that this is not just for running Mesosphere on the Google Cloud Platform. As Leibert notes in today’s announcement, “our combined compute fabric can run anywhere, whether on Google Cloud Platform, your own datacenter, or another cloud provider.”
Google’s Craig McLuckie, its lead product manager for next-generation cloud computing products like Kubernetes, also told me that what Google wanted to do with Kubernetes was to bring many of the core concepts it has developed for managing its own datacenters to users outside the company. He believes that what Mesosphere and Google are working on is “very complementary,” and that Google can bring some of the concepts it developed into Mesos, too.
Mesosphere’s senior VP Matt Trifiro (a former Heroku CMO) told me he believes that projects like Kubernetes and Mesos can bring some of the “rarefied air concepts” behind these technologies to everybody. So far, he argues, “the tooling hasn’t kept up with being accessible for companies that need to get to web scale.” But now, with the expertise from Google and Mesos, the company can make these concepts consumable for developers so that they can operate at a new abstraction level that frees them from directly dealing with much of the infrastructure that powers their applications.
“We look forward to working with Google to make Cloud Platform the best place to run traditional Mesosphere workloads, such as Marathon, Chronos, Hadoop, or Spark—or newer Kubernetes workloads,” Leibert writes today.
It’s probably not too early to start thinking about whether Mesosphere could become an acquisition target for Google given how close the two companies worked together on this project. For now that’s just speculation, of course, but if it ever happens, remember you read it here first.
Go here to see the original: Mesosphere Comes To The Google Cloud Platform, Integrates Google’s Open Source Kubernetes Project
PagerDuty, a 5-year old IT incidents management startup, announced $27.2M in Series B funding today led by Bessemer Venture Partners with help from early round investors Andreessen Horowitz and Baseline Metal. The Series B money brings the total money raised to date to $39.8M. As part of the deal, Trevor Oelschig, a partner at Bessemer Venture Partners, will join PagerDuty’s board of directors.
The money was a huge financial boost for the cloud-based startup, which up until now had raised $1.9M in its 2010 seed round, then an additional $10.7M in Series A funding in January 2013.
Alex Solomon, co-founder and CEO of PagerDuty, says the company name stems from the IT folks who are on call overnight to take care of any problems that pop up in company IT systems. Even today, many IT pros work with pagers and get beeped when there’s a problem; hence the name.
PagerDuty allows companies to pull all of their incident reporting tools into a single interface and send an alert when an incident occurs. That’s where they are today, but Solomon says the plan is to use the money to expand the product in a big way: not only reporting incidents and bringing in an IT pro as quickly as possible to solve major problems, but also adding more intelligence to incident reports and even offering ways to resolve issues automatically, without intervention from a sleepy human in the middle of the night.
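To make the “single interface” idea concrete, here is a sketch of the kind of alert payload a monitoring tool might hand to an incident hub. It is loosely modeled on PagerDuty’s generic events API, but the exact field names and the sample incident here are assumptions for illustration, not the product’s documented contract:

```python
import json

def build_trigger_event(service_key, description, details=None):
    """Build a JSON alert payload for an incident-management hub.

    Loosely modeled on PagerDuty's generic events API; field names here
    are illustrative assumptions, not the official schema.
    """
    return json.dumps({
        "service_key": service_key,  # identifies which service/team to alert
        "event_type": "trigger",     # open a new incident
        "description": description,  # human-readable summary for the on-call
        "details": details or {},    # arbitrary context from the monitoring tool
    })

# Example: a disk-usage monitor (hypothetical) raising an incident.
event = build_trigger_event(
    "EXAMPLE_KEY",
    "Disk usage above 95% on db-01",
    {"host": "db-01", "metric": "disk_used_pct", "value": 96.3},
)
print(json.loads(event)["event_type"])  # trigger
```

The value of the hub model is that every monitoring tool emits a payload like this, and routing, escalation and on-call scheduling live in one place.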
He says that, too often in the past, reporting tools have flagged problems that turned out to be minor or nothing at all. PagerDuty aims to minimize those false positives as much as possible, so that individuals on pager duty are not pulled in unless something is seriously wrong.
While this might sound like what New Relic, AppDynamics or Splunk is doing already, Solomon says it’s different because these companies are looking at the application performance layer and their products plug into PagerDuty, which can look at the entire IT infrastructure incident reporting picture, regardless of the system doing the reporting.
He says it also works with tools like ScienceLogic, which provides insight into IT management whether the tool is in the cloud or in a private data center. ScienceLogic uses a similar plug-in architecture to monitor these systems, but Solomon says the difference is that while ScienceLogic monitors those systems continuously, it passes information to PagerDuty only when an incident occurs that requires the attention of an IT pro.
That’s because PagerDuty is designed to pull in these incidents across systems to be a central reporting hub.
Solomon says before a product like PagerDuty came along, companies tended to cobble together their own incident management programs. What his company allows them to do is plug in whatever monitoring systems they are using and manage the incident reporting from a single interface.
All of these products belong to a category of tools designed to simplify the life of IT managers. They target different parts of the stack, but each is designed to give visibility into the health of a company’s IT assets.
PagerDuty says it’s gaining traction across a variety of verticals, including 30 percent of the Fortune 100. Its customers include Nike, Adobe, Intuit, Panasonic and Evernote, among others.
Solomon says the primary objective with the new money will be to invest aggressively in engineering, product development and product management, and to scale up the team, which currently numbers 90 employees, with headquarters in San Francisco and a programming office in Toronto.
“We have a huge vision of building a new category. That takes a lot of work, a lot of moving parts and different components,” he explained.
Cloud hosting company DigitalOcean announced its third expansion into Europe today with a new data center in London. The company added two facilities in the Amsterdam region earlier this year. This new center will be located on the outskirts of London proper to meet the growing developer demand in the area.
London’s tech scene has been bubbling up for the past couple of years. A recent report from Bloomberg shows tech jobs have accounted for 30% of all new job growth in the city since 2009. According to London.gov, the city now has 32 accelerators and incubators for start-up companies, and more than 340 London-based tech companies have attracted investment of over £1.47 billion (or U.S. $2.9 billion). DigitalOcean estimates over 10,000 developers currently work in London.
This puts the city on the map for key user growth, but it also helps DigitalOcean with government regulations. Europeans may be a bit nervous about an American data center after Edward Snowden’s revelations about the NSA mining our data, and the European Union’s Data Protection Directive currently makes it difficult for data to be moved outside of the region. A local data center also helps with latency: any lag can cause a loss of users and potential revenue. According to this KissMetrics infographic, even a 1-second delay can result in a 7% reduction in conversions.
The new London location will also run IPv6 support on all “Droplets” – the company’s branded term for cloud servers. IPv6 is the latest version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It can also be added to existing Droplets without the need for a reboot.
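A quick way to see why IPv6 matters for a cloud host is to compare address-space sizes: even a single IPv6 subnet dwarfs the entire IPv4 internet, so every server can get its own globally routable address. This illustration uses Python’s standard ipaddress module; the 2001:db8::/32 prefix is reserved specifically for documentation examples.

```python
import ipaddress

# Compare a typical IPv4 block with a single IPv6 subnet.
v4 = ipaddress.ip_network("203.0.113.0/24")  # a documentation IPv4 block
v6 = ipaddress.ip_network("2001:db8::/64")   # one standard IPv6 subnet

print(v4.num_addresses)  # 256
print(v6.num_addresses)  # 18446744073709551616 (2**64 addresses)

# IPv6 addresses are 128 bits and written in hex groups:
print(ipaddress.ip_address("2001:db8::1").version)  # 6
```

That abundance is what makes it practical to hand each cloud server its own public IPv6 address rather than sharing scarce IPv4 space.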
DigitalOcean raised $37.2 million from Andreessen Horowitz just a few months ago. Part of that money will now be used to expand to more data centers globally, including London. The European market is important to the company: according to CEO Ben Uretsky, about 20% of the company’s presence is outside the U.S. There are already two data centers in operation around Amsterdam. Headquartered in New York, DigitalOcean has data centers in San Francisco, Singapore and now the UK as well.
Here is the original post: DigitalOcean Expands To London
Amazon Web Services is known for many things, but all of those have to do with developer services like cloud computing instances, databases and storage. Lately, however, AWS is slowly getting more into productivity tools that are meant for end users.
Amazon‘s first attempt to get into this market was Amazon Cloud Drive. It launched back in 2011, but while there are no exact numbers about its usage, I doubt all that many consumers ever signed up for it. Now — maybe in the wake of its Fire Phone launch — it feels like the company is starting to reboot its efforts, and it is doing so for enterprise users under the AWS label.
After Cloud Drive, things got pretty quiet in this space for Amazon, but last year, it launched an invite-only beta of Amazon WorkSpaces, a virtual desktop for enterprises that launched to the public in March. With WorkSpaces, an admin still has to go into the AWS Management Console and provision it, but for the user, the experience is pretty straightforward.
That project, of course, was more about virtualization than about an actual web application. With Zocalo, however, Amazon launched a full-featured competitor to Google Drive for Work and Dropbox, complete with a web-based interface. The focus here is still mostly on enterprises, and there is no free tier for consumers (though the regular price of $5 per user/month is extremely aggressive). But once it’s out of preview, it’s hard to imagine that Amazon would only allow businesses to sign up.
While Amazon itself has long offered some kinds of web apps for its e-book and music service, for example (and one could probably argue that Amazon.com is also a web app), Zocalo is a step in a very new direction for AWS. It’s also one that startups should be worried about. Dropbox started out on AWS, for example. But what if Amazon now wants a piece of this market for itself, too?
With Fire OS, the company has shown that it can do design, and it’s probably no coincidence that Zocalo takes some of its design cues from Fire OS.
While it isn’t for consumers, AWS’s new mobile app analytics service similarly puts Amazon into competition with other analytics services that were likely built on top of its infrastructure. Its feature set doesn’t seem to be quite on par with the likes of Flurry’s analytics service just yet, but it has a pretty generous free allowance and may be enough for many developers.
At this point, AWS offers pretty much everything developers need to build their applications, whether that’s for mobile or web apps. While it continues to roll out new features for its services at a rapid clip, most of them are now very incremental updates. It makes sense that the company is looking at how it can expand AWS into new areas (or at least new for Amazon), and many of those involve going beyond developer services and APIs.
Amazon is nothing if not a very ambitious company and that’s on display right now with the launch of the Fire Phone and these new web services. That may irk some of its competitors in these spaces, but that’s probably not something Amazon is all that worried about.
See more here: Amazon Web Services Moves Beyond Developer Tools
Microsoft today announced a number of new features for its Azure cloud computing platform ahead of its Worldwide Partner Conference next week. There is quite a bit that’s new in this update, but the highlights are two new Azure regions for the U.S. (US Central in Iowa and US East 2 in Virginia) that will go live next week, as well as the launch of Microsoft’s newest Azure StorSimple hybrid storage arrays for enterprise customers.
Microsoft says bringing two new regions online will help it continue to double its Azure capacity every six to nine months. The company hasn’t yet announced which services will be available in these new regions or what the pricing will look like. There has always been a bit of disparity between Microsoft’s different data centers, but it’s probably a fair guess that its second Virginia data center will look a lot like its current one in the area, and the Iowa location will have slightly fewer services available and will be on par with the current US North Central and South Central locations. The two new regions will join Microsoft’s four existing regions in the U.S. later next week.
StorSimple is likely a somewhat obscure service for many, but Microsoft has long offered this storage solution for large enterprise customers like Mazda, SK Telecom and GF Health Products. The new 8000 series arrays are more powerful than Microsoft’s existing 5000 and 7000 series StorSimple arrays (hence the higher number). The twist here is that these new arrays can use Azure Storage as a hybrid cloud tier on top of the existing HDDs and SSDs in the system for capacity expansion and off-site data warehousing whenever necessary.
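The hybrid-tiering idea behind an array like this can be sketched very simply: hot data stays on fast local media, colder data moves down a tier, and the coldest spills to cloud storage. The thresholds and tier names below are invented for illustration, not StorSimple’s actual policy, which Microsoft has not detailed here:

```python
# Toy sketch of hybrid storage tiering: data is placed by how recently it
# was accessed. Thresholds and tier names are illustrative assumptions.
def assign_tier(days_since_access):
    """Pick a storage tier based on data temperature."""
    if days_since_access <= 7:
        return "ssd"    # hot: local flash
    if days_since_access <= 90:
        return "hdd"    # warm: local spinning disk
    return "cloud"      # cold: off-site cloud tier (e.g., Azure Storage)

print([assign_tier(d) for d in (1, 30, 365)])  # ['ssd', 'hdd', 'cloud']
```

The appeal for enterprises is that capacity expansion and off-site warehousing happen automatically at the bottom tier, without the array’s local disks ever filling up.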
IT can manage all of this from a new dashboard that consolidates all of these features and allows administrators to control all of the storage and data management services included in the service.
Microsoft has long been betting that large enterprises will opt for hybrid cloud deployments for the time being. StorSimple 8000 handles the storage aspect of this for large enterprises, but businesses who don’t quite need the full power of the 8000 series can still opt for the 5000/7000 series, too.
As part of this focus on hybrid clouds, Microsoft also today announced that it will expand access to Azure through ExpressRoute, which allows for private connections between Azure and on-premises infrastructure, to six new locations around the world (up from the three in the U.S. and Europe that were available at launch).
But there is more: Azure’s Machine Learning service for big data modeling, which was announced earlier this year, will be available for public preview next week; the Azure Government Cloud is adding more partners and customer solutions, and the Azure Preview Portal — Microsoft’s new central management dashboard for all things Azure — is getting a number of new features, including support for Azure SQL Database.
The idea for subscription billing startup Zuora was born in Marc Benioff’s office. In 2006, K.V. Rao, then a WebEx senior engineer, was meeting with Benioff and Salesforce CMO Tien Tzuo. Tzuo made a comment that subscription billing was a hard problem for Salesforce, and Rao agreed that WebEx also felt the same challenge. He left the meeting with the feeling that this problem was something he could solve.
Rao researched ideas for the next few months and recruited fellow WebEx engineer Cheng Zou to work on the fledgling startup. The next step was to raise money. By then it was 2007, and it wasn’t easy to raise money, he says, especially for a company that wasn’t consumer-focused. Rao and Zou scored a meeting with Benchmark’s newest partner at the time, Peter Fenton.
As Rao tells it, he totally bombed the meeting. “[Fenton] told me that it was one of the worst presentations he’d seen in VC history.” Fenton did see a potential opportunity with the idea, but saw deficiency in the team. Not long after seeing (and passing on) Zuora, Fenton had breakfast with Tzuo and told him about Rao and his idea, with the subtext that this could potentially be Tzuo’s next step after Salesforce. Fenton always believed Tzuo would be a great CEO, and saw the potential to apply his Salesforce learnings to Zuora.
Zuora’s premise was around a cloud-based billings platform that would alleviate the need for online businesses to develop their own billing systems, especially to handle recurring payments like those associated with subscriptions. The company wanted to build a platform that would automate metering, pricing and billing for products, bundles and configurations.
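One small example of why “automated pricing and billing” is harder than it sounds: a mid-cycle plan change has to be prorated. The simple days-remaining rule below is an illustration of the category of problem, not Zuora’s actual logic; real engines also juggle currencies, taxes, credits and calendar quirks.

```python
from datetime import date

def prorated_charge(old_rate, new_rate, change_day, period_start, period_end):
    """Charge the rate difference for the unused portion of the period.

    A toy proration rule (days remaining / days in period); illustrative only.
    """
    days_in_period = (period_end - period_start).days
    days_remaining = (period_end - change_day).days
    return round((new_rate - old_rate) * days_remaining / days_in_period, 2)

# Upgrading from a $50 to an $80 plan halfway through a 30-day June cycle:
charge = prorated_charge(50.0, 80.0,
                         change_day=date(2014, 6, 16),
                         period_start=date(2014, 6, 1),
                         period_end=date(2014, 7, 1))
print(charge)  # 15.0 — half the $30 difference for the half-period remaining
```

Multiply this by every pricing model, bundle and configuration a customer might invent, and the scope of a general-purpose billing platform becomes clear.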
Rao and Tzuo started the standard co-founder “dating” ritual. They met for coffee, took to the whiteboard for strategy sessions, and had a few double dates with their wives. Tzuo got the feeling that Zuora was on to something and this was his next step. So he went to talk to Benioff to get his approval and perspective. As Tzuo recalls, Benioff always said he had to go to his then-boss Larry Ellison at Oracle three times before he got the approval to leave. Tzuo expected Benioff to be equally as hard on him — but Benioff believed in the idea, and as Tzuo explains, “is a big believer in building.”
Benioff was also a believer in karma: Ellison had put some of the first money into Salesforce, and he ended up putting $1 million in Zuora.
Tzuo started at Zuora in January 2008, and kicked off his new beginning by helping to raise the company’s first round of funding. Tzuo went back to Fenton to see if Benchmark was interested, and Fenton, who was on his honeymoon at the time, immediately lined up the partnership for a meeting.
“This is a company where there was a gut feeling with partnership that we should invest,” says Fenton. “There was a glaring need in the market for a billing system, and the thing that haunts billing is the complexity. Zuora changed that.”
Zuora ended up raising $6.5 million led by Benchmark, with Benioff and Tzuo both investing.
We now live in a world where, on the front end, paying for subscriptions is as easy as tapping a button and entering our payment information. But on the back end, subscription billing is as complicated as “designing an entire database,” says Zou. “It’s not something that a programmer can do because our ambitions were so broad. We wanted to create a billing system that covered any industry.”
As Tzuo explains, “This isn’t something two kids from Y Combinator can do….if you think of Salesforce as the CRM for every industry, and WebEx as web conferencing for any industry, we wanted to be cloud-based billing for everyone.”
It took the Zuora team six months to turn the prototype into a live cloud-based engine, and in July 2008, the company’s first customers, Coremetrics and Marketo, went live.
For most companies, billing is complicated and difficult to build in-house. Legacy systems are expensive and cumbersome. As SaaS started to become more of a buzzword in 2008, customers found Zuora through simple Google searches. “We walked into demand right away,” says Tzuo.
Early customers included then fledgling startup Box (which is still a customer today), and even Sun Microsystems, which remained a customer until Oracle bought the company. UK company Reed Business Information was one of Zuora’s earliest large deals.
Mary Collerton explains that in late 2008, Reed Business Information was looking to replace a system in-house to manage electronic subscription billing. “We preferred to buy before building; we found Zuora through a search on the web and were impressed with the functionality they could provide.” Even now, she says, the executive team checks in with her and her team to ensure integrations are going well.
From the start, Tzuo wanted to make sure that there was an element of customer centricity. At off-sites with the entire company, each employee is assigned a customer and has to walk through their billing challenges and present to the company how each customer should approach their billing situation.
“We really want everyone to understand what it means to be in our customers’ shoes. Every employee should have a deep understanding of this — not just our sales or implementation teams,” says Tzuo. He still interviews every single employee to ensure a good cultural fit.
In mid-summer of 2008, Zuora was at Benchmark giving the partners a product update, and Bill Gurley told the team to start raising soon. Lehman Brothers had not collapsed yet, but Gurley said he was a little nervous about the Q4 funding environment, and the Series B needed to be raised soon. In August, Zuora signed a term sheet for $20 million, which was led by Shasta Ventures, with Benchmark, Benioff and Tzuo all putting in more.
Benioff had given Tzuo advice to get to cash flow positive as soon as possible, and for the next year Zuora didn’t spend a lot, choosing to focus instead on serving customers. The economic downturn ended up being a blessing in some ways for organic sales — bigger companies saw the cloud as a way to save money and were more willing to bet on the smaller guys, says Tzuo.
In 2009, Zuora was able to triple revenue. “It took us a few years to get our product footprint broad enough so customers felt that they didn’t have to make big tradeoffs.”
Redpoint led a $20 million round in Zuora in 2010, in which the valuation doubled, says Tzuo. A year later, Index Ventures participated in the company’s Series D. Around the same time, Zuora started to place more people internationally, focusing first on Europe and Australia.
Index’s Mike Volpi led the round and joined the company’s board. As Volpi explains, Tzuo and his team have set themselves apart by taking an existing knowledge base around the challenges of billing, and extending this to an actual product. “This is very special and unusual,” says Volpi. He adds that there is no one who understands subscription services as well as Tzuo.
The company raised its last round, $50 million, in 2013 at just under a $1 billion valuation. Next World Capital and Paul Allen’s Vulcan Capital both participated in that round. Zuora is expecting $100 million in sales this year, we hear.
Despite building an impressive set of technologies used by companies like Dell, Zendesk, Pearson and Tata, Zuora has never fielded any serious acquisition offers. It’s surprising considering that some of the company’s contemporaries, particularly in the cloud SaaS space, like Zendesk and Box, were getting serious attention.
“It doesn’t really faze us to not have any acquisition interest,” he says. “And we believe SAP and Oracle are archaic — if they acquired us it wouldn’t be a good fit.”
Fenton says he is sometimes surprised that Zuora hasn’t fielded more acquisition interest, but at the end of the day, Zuora has built something that doesn’t have much competition and is the clear market leader.
One area where Zuora will need to focus its attention is integrations. Volpi believes that breaking down the boundaries between data will be key for further adoption. The company hasn’t made any acquisitions but is considering doing more in M&A in the near future, perhaps targeting startups whose technology could add to the existing product line.
Zuora’s seven-year journey is in stark contrast to the more common startup journey we see these days with two to three years of development, and then a multi-billion-dollar valuation or exit or acqui-hire. Fenton credits the perseverance of the leadership team in having an unwavering commitment to success, despite the hardships of running a startup.
“They signed up to solve a really hard problem, and they have been able to stay motivated to solve it, while many would have lost faith. Tzuo and his team are spiritually connected to this, which has allowed him to build a great team,” he says.
Fenton adds that he usually tells founders that it’s best not to focus on how long the road is and to stay in the moment. But Fenton also firmly sees Zuora as a public company.
Bankers are already courting Tzuo to see if he’s interested in taking the company public. While that’s the goal, he’s not focused on it at the moment. He’s already been through this rodeo after working through an IPO with Salesforce.
For now Tzuo doesn’t want to be distracted from Zuora’s vision, which is helping companies to find success in the subscription economy.
Original post: Zuora’s Journey To Managing The Subscription Economy
Today, the cloud infrastructure market is dominated by several big companies – Amazon, Google and Microsoft — but a public/business/academia partnership called the Massachusetts Open Cloud project is hoping to change that by creating an open computing marketplace where you can negotiate whatever services you need from multiple infrastructure vendors.
Peter Desnoyers, a professor at Northeastern University who helped launch the project, explained that while companies like Amazon offer useful services, they have limitations.
First of all, from an academic perspective, they are closed systems. Each company’s internal team has access to the system for research purposes, but outsiders, such as academics who want to study the system and present papers, are shut out. While they can go to company conferences and hear employees present papers, they can’t get deep inside the system, and that’s a real problem for him and his fellow academics.
The other limitation is that Amazon and other IaaS vendors offer what he calls the “Henry Ford” approach to IaaS: you can have any color you want, as long as it’s black. In other words, they have certain products they have packaged together. The trouble with this approach, Desnoyers explained, is that people often have very specialized requirements, and the way Amazon designs its products shuts those people out or makes it prohibitively expensive if they need specialized services.
Desnoyers says that the project hopes to create a marketplace where multiple vendors can come together and offer their services in an ad-hoc kind of way, so you might get your compute power from one vendor, your storage from a second and your memory from a third. The vendors seem to like this approach, and they include industry heavyweights Cisco, Juniper, Intel, Red Hat and others.
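The marketplace idea can be sketched as a simple per-resource selection: instead of taking one vendor’s fixed bundle, each resource goes to whichever provider offers it on the best terms. The vendors and prices below are invented for illustration; the MOC’s actual marketplace mechanics are not yet specified.

```python
# Toy sketch of a multi-vendor cloud marketplace: each resource is sourced
# independently from the cheapest offer. Vendors and prices are invented.
offers = {
    "compute": {"vendor-a": 0.12, "vendor-b": 0.10},  # $/vCPU-hour
    "storage": {"vendor-a": 0.03, "vendor-c": 0.02},  # $/GB-month
}

def cheapest(resource):
    """Pick the vendor with the lowest price for one resource type."""
    vendor, _price = min(offers[resource].items(), key=lambda kv: kv[1])
    return vendor

print({r: cheapest(r) for r in offers})  # {'compute': 'vendor-b', 'storage': 'vendor-c'}
```

Real selection would weigh more than price (SLAs, locality, trust), but the contrast with a single packaged provider is the point.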
The colleges involved include Harvard, MIT, UMass Amherst, Boston University and Northeastern.
The Commonwealth of Massachusetts is also involved and the project will be housed at the Massachusetts Green High Performance Computing Center in Holyoke, Mass.
Vendors will contribute equipment and engineering talent and the goal of the project is to create a commercial project based on open source tools.
One vendor involved with the MOC project is Red Hat, and Jan Mark Holtzer, a senior consulting engineer in the company's CTO office, says Red Hat can learn a lot from a project like this.
“For us I would see the key opportunities we see around MOC is operational access, understanding large scale cloud infrastructure, and growing skills [around these areas]. We will rotate resources from support and consulting organizations so they can get first hand experience.”
Holtzer says the initial use case for the project probably involves getting vast computing resources for a short period of time to meet a specific need. “Clearly currently the initial use case we see and MOC sees is probably driven by [high performance computing] and MOC would give customers the capability of harvesting large amount of resources and then releasing them quickly,” he said.
He says, however, that before the project becomes a viable commercial entity for vendors like Red Hat, he sees its potential as an incubation space for innovation, where participants can experiment with different business models and service-level agreements (SLAs).
But perhaps the biggest advantage of being involved in a project like this from a vendor perspective is very similar to the academic one. They can get real data about how large-scale systems like this work. “Probably the very interesting use case is the ability to get the operational data from such a large scale environment. A lot of cloud services are black boxes. We work with these vendors, but we don’t have the ability to get as much information from inside a large scale infrastructure,” he said.
Holtzer added that there is a huge advantage in making the MOC project operational data transparent and visible.
The fact is there are lots of cloud infrastructure options out there, but no open marketplace where customers can negotiate pricing and buy different pieces of the infrastructure from different vendors. A project like this is at least a starting point for a more open way of selling infrastructure services moving forward.
For now it’s experimental, but if it works, it has the potential to change the way enterprise customers interact with and deal with IaaS vendors and that’s significant in itself.
Prepare another entry into your File Of No Surprise: Microsoft is moving ahead with its efforts to bring the highly lucrative Office franchise to Android tablets.
A full Office suite for Android tablets is roughly as surprising as San Francisco morning fog. Microsoft confirmed that it was building the native suite earlier this year, and rumor followed that the Android apps would beat a touch-first build of Office for Windows out of the gate.
To see Microsoft begin to ramp up testing is hardly surprising.
Office for iPad has been a material success for Microsoft. Despite some market doubt that the apps were too late to make an impact, or that users wouldn’t use them due to Office 365-related restrictions, Microsoft’s latest sally into iOS has gone well. Android may be no different.
The mystery that I can’t unravel is why touch Office for Windows tablets is so damned late.
The above is merely another plank in Microsoft's current effort to make its corporate focus both mobile-first and cloud-first. Office, of course, is now heavily based on OneDrive, Microsoft's cloud storage service. What will be interesting to gauge is the market response to Office for Android, and whether it can match the prior response to the iOS suite: Microsoft saw 27 million downloads of its iOS Office apps in 46 days.
Microsoft declined to comment.
See the article here: Microsoft Presses Ahead With Office For Android