May 23, 2019
Category archive: open source software

Microsoft open-sources a crucial algorithm behind its Bing Search services


Microsoft today announced that it has open-sourced a key piece of what makes its Bing search services able to quickly return search results to its users. By making this technology open, the company hopes that developers will be able to build similar experiences for their users in other domains where users search through vast data troves, including in retail, though in this age of abundant data, chances are developers will find plenty of other enterprise and consumer use cases, too.

The piece of software the company open-sourced today is a library Microsoft developed to make better use of all the data it collected and the AI models it built for Bing.

“Only a few years ago, web search was simple. Users typed a few words and waded through pages of results,” the company notes in today’s announcement. “Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

With the Space Partition Tree and Graph (SPTAG) algorithm that is at the core of the open-sourced Python library, Microsoft is able to search through billions of pieces of information in milliseconds.

Vector search itself isn’t a new idea, of course. What Microsoft has done, though, is apply this concept to working with deep learning models. First, the team takes a pre-trained model and encodes that data into vectors, where every vector represents a word or pixel. Using the new SPTAG library, it then generates a vector index. As queries come in, the deep learning model translates that text or image into a vector and the library finds the most related vectors in that index.
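The idea behind SPTAG's index is approximate nearest-neighbor search over vectors. SPTAG's real API builds tree-and-graph structures to avoid comparing a query against every vector; as a minimal sketch of the underlying concept only (brute-force cosine similarity, with made-up 8-dimensional embeddings, not SPTAG's actual interface), vector search can look like this:

```python
import numpy as np

# Toy "index": one embedding vector per indexed item (assumed 8-dim).
rng = np.random.default_rng(0)
index_vectors = rng.standard_normal((1000, 8)).astype(np.float32)

def search(query, k=3):
    """Return indices of the k index vectors most similar to `query`.

    Uses exhaustive cosine similarity; SPTAG replaces this linear scan
    with space-partition trees plus a neighborhood graph so the same
    lookup stays fast at billions of vectors.
    """
    q = query / np.linalg.norm(query)
    v = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
    scores = v @ q                  # cosine similarity against every item
    return np.argsort(-scores)[:k]  # indices of the top-k most related vectors

# A query vector close to item 42 should retrieve item 42 near the top.
query = index_vectors[42] + 0.01 * rng.standard_normal(8).astype(np.float32)
print(search(query))
```

In the production pipeline described above, the deep learning model produces `query` from the user's text or image, and the library performs the top-k lookup against the prebuilt index.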

“With Bing search, the vectorizing effort has extended to over 150 billion pieces of data indexed by the search engine to bring improvement over traditional keyword matching,” Microsoft says. “These include single words, characters, web page snippets, full queries and other media. Once a user searches, Bing can scan the indexed vectors and deliver the best match.”

The library is now available under the MIT license and provides all of the tools to build and search these distributed vector indexes. You can find more details about how to get started with using this library — as well as application samples — here.


How open source software took over the world


It was just 5 years ago that there was an ample dose of skepticism from investors about the viability of open source as a business model. The common thesis was that Redhat was a snowflake and that no other open source company would be significant in the software universe.

Fast forward to today and we’ve witnessed the growing excitement in the space: Redhat is being acquired by IBM for $32 billion (3x its market cap from 2014); Mulesoft was acquired after going public for $6.5 billion; MongoDB is now worth north of $4 billion; Elastic’s IPO now values the company at $6 billion; and, through the merger of Cloudera and Hortonworks, a new company with a market cap north of $4 billion will emerge. In addition, there’s a growing cohort of impressive OSS companies working their way through the growth stages of their evolution: Confluent, HashiCorp, DataBricks, Kong, Cockroach Labs and many others. Given the relative multiples that Wall Street and private investors are assigning to these open source companies, it seems pretty clear that something special is happening.

So, why did this movement that once represented the bleeding edge of software become the hot place to be? There are a number of fundamental changes that have advanced open source businesses and their prospects in the market.

David Paul Morris/Bloomberg via Getty Images

From Open Source to Open Core to SaaS

The original open source projects were not really businesses, they were revolutions against the unfair profits that closed-source software companies were reaping. Microsoft, Oracle, SAP and others were extracting monopoly-like “rents” for software, which the top developers of the time didn’t believe was world class. So, beginning with the most broadly used components of software – operating systems and databases – progressive developers collaborated, often asynchronously, to author great pieces of software. Everyone could not only see the software in the open, but through a loosely-knit governance model, they added, improved and enhanced it.

The software was originally created by and for developers, which meant that at first it wasn’t the most user-friendly. But it was performant, robust and flexible. These merits gradually percolated across the software world and, over a decade, Linux became the second most popular OS for servers (next to Windows); MySQL mirrored that feat by eating away at Oracle’s dominance.

The first entrepreneurial ventures attempted to capitalize on this adoption by offering “enterprise-grade” support subscriptions for these software distributions. Redhat emerged as the winner in the Linux race, and MySQL (the company) for databases. These businesses had some obvious limitations – it was harder to monetize software with just support services – but the market size for OS’s and databases was so large that, in spite of more challenged business models, sizeable companies could be built.

The successful adoption of Linux and MySQL laid the foundation for the second generation of open source companies – the poster children of this generation were Cloudera and Hortonworks. These open source projects and businesses were fundamentally different from the first generation on two dimensions. First, the software was principally developed within an existing company and not by a broad, unaffiliated community (in the case of Hadoop, the software took shape within Yahoo!). Second, these businesses were based on the model that only parts of the software in the project were licensed for free, so they could charge customers for use of some of the software under a commercial license. The commercial aspects were specifically built for enterprise production use and thus easier to monetize. These companies, therefore, had the ability to capture more revenue even if the market for their product didn’t have quite as much appeal as operating systems and databases.

However, there were downsides to this second generation model of open source business. The first was that no company singularly held ‘moral authority’ over the software – and therefore the contenders competed for profits by offering increasing parts of their software for free. Second, these companies often balkanized the evolution of the software in an attempt to differentiate themselves. To make matters more difficult, these businesses were not built with a cloud service in mind. Therefore, cloud providers were able to use the open source software to create SaaS businesses of the same software base. Amazon’s EMR is a great example of this.

The latest evolution came when entrepreneurial developers grasped the business model challenges existent in the first two generations – Gen 1 and Gen 2 – of open source companies, and evolved the projects with two important elements. The first is that the open source software is now developed largely within the confines of businesses. Often, more than 90% of the lines of code in these projects are written by the employees of the company that commercialized the software. Second, these businesses offer their own software as a cloud service from very early on. In a sense, these are Open Core / Cloud service hybrid businesses with multiple pathways to monetize their product. By offering the products as SaaS, these businesses can interweave open source software with commercial software so customers no longer have to worry about which license they should be taking. Companies like Elastic, Mongo, and Confluent with services like Elastic Cloud, Confluent Cloud, and MongoDB Atlas are examples of this Gen 3.  The implications of this evolution are that open source software companies now have the opportunity to become the dominant business model for software infrastructure.

The Role of the Community

While the products of these Gen 3 companies are definitely more tightly controlled by the host companies, the open source community still plays a pivotal role in the creation and development of the open source projects. For one, the community still discovers the most innovative and relevant projects. They star the projects on Github, download the software in order to try it, and evangelize what they perceive to be the better project so that others can benefit from great software. Much like how a good blog post or a tweet spreads virally, great open source software leverages network effects. It is the community that is the source of promotion for that virality.

The community also ends up effectively being the “product manager” for these projects. It asks for enhancements and improvements; it points out the shortcomings of the software. The feature requests are not in a product requirements document, but on Github, comments threads and Hacker News. And, if an open source project diligently responds to the community, it will shape itself to the features and capabilities that developers want.

The community also acts as the QA department for open source software. It will identify bugs and shortcomings in the software; test 0.x versions diligently; and give the companies feedback on what is working or what is not.  The community will also reward great software with positive feedback, which will encourage broader use.

What has changed though, is that the community is not as involved as it used to be in the actual coding of the software projects. While that is a drawback relative to Gen 1 and Gen 2 companies, it is also one of the inevitable realities of the evolving business model.

Linus Torvalds was the designer of the open-source operating system Linux.

Rise of the Developer

It is also important to realize the increasing importance of the developer for these open source projects. The traditional go-to-market model of closed source software targeted IT as the purchasing center of software. While IT still plays a role, the real customers of open source are the developers who often discover the software, and then download and integrate it into the prototype versions of the projects that they are working on. Once “infected” by open source software, these projects work their way through the development cycles of organizations from design, to prototyping, to development, to integration and testing, to staging, and finally to production. By the time the open source software gets to production it is rarely, if ever, displaced. Fundamentally, the software is never “sold”; it is adopted by the developers who appreciate the software more because they can see it and use it themselves rather than being subject to it based on executive decisions.

In other words, open source software permeates itself through the true experts, and makes the selection process much more grassroots than it has ever been historically. The developers basically vote with their feet. This is in stark contrast to how software has traditionally been sold.

Virtues of the Open Source Business Model

The resulting business model of an open source company looks quite different than a traditional software business. First of all, the revenue line is different. Side-by-side, a closed source software company will generally be able to charge more per unit than an open source company. Even today, customers do have some level of resistance to paying a high price per unit for software that is theoretically “free.” But, even though open source software is lower cost per unit, it makes up for that in total market size by leveraging the price elasticity of the market. When something is cheaper, more people buy it. That’s why open source companies have such massive and rapid adoption when they achieve product-market fit.

Another great advantage of open source companies is their far more efficient and viral go-to-market motion. The first and most obvious benefit is that a user is already a “customer” before she even pays for it. Because so much of the initial adoption of open source software comes from developers organically downloading and using the software, the companies themselves can often bypass both the marketing pitch and the proof-of-concept stage of the sales cycle. The sales pitch is more along the lines of, “you already use 500 instances of our software in your environment, wouldn’t you like to upgrade to the enterprise edition and get these additional features?”  This translates to much shorter sales cycles, the need for far fewer sales engineers per account executive, and much quicker payback periods of the cost of selling. In fact, in an ideal situation, open source companies can operate with favorable Account Executives to Systems Engineer ratios and can go from sales qualified lead (SQL) to closed sales within one quarter.

This virality allows open source software businesses to be far more efficient than traditional software businesses on a cash-consumption basis. Some of the best open source companies have been able to grow their business at triple-digit growth rates well into their life while maintaining moderate cash burn rates. This is hard to imagine in a traditional software company. Needless to say, less cash consumption equals less dilution for the founders.

Photo courtesy of Getty Images

Open Source to Freemium

One last aspect of the changing open source business that is worth elaborating on is the gradual movement from true open source to community-assisted freemium. As mentioned above, the early open source projects leveraged the community as key contributors to the software base. In addition, even for slight elements of commercially-licensed software, there was significant pushback from the community. These days the community and the customer base are much more knowledgeable about the open source business model, and there is an appreciation for the fact that open source companies deserve to have a “paywall” so that they can continue to build and innovate.

In fact, from a customer perspective the two value propositions of open source software are that you can a) read the code and b) treat it as freemium. The notion of freemium is that you can basically use it for free until it’s deployed in production or at some degree of scale. Companies like Elastic and Cockroach Labs have gone as far as actually open sourcing all their software but applying a commercial license to parts of the software base. The rationale is that real enterprise customers would pay whether the software is open or closed, and they are more incentivized to use commercial software if they can actually read the code. Indeed, there is a risk that someone could read the code, modify it slightly, and fork the distribution. But in developed economies – where much of the rents exist anyway – it’s unlikely that enterprise companies will elect the copycat as a supplier.

A key enabler to this movement has been the more modern software licenses that companies have either originally embraced or migrated to over time. Mongo’s new license, as well as those of Elastic and Cockroach, are good examples of these. Unlike the Apache-incubated license – often the starting point for open source projects a decade ago – these licenses are far more business-friendly, and most model open source businesses are adopting them.

The Future

When we originally penned this article on open source four years ago, we aspirationally hoped that we would see the birth of iconic open source companies. At a time where there was only one model – Redhat – we believed that there would be many more. Today, we see a healthy cohort of open source businesses, which is quite exciting. I believe we are just scratching the surface of the kind of iconic companies that we will see emerge from the open source gene pool. From one perspective, these companies valued in the billions are a testament to the power of the model. What is clear is that open source is no longer a fringe approach to software. When top companies around the world are polled, few of them intend to have their core software systems be anything but open source. And if the Fortune 5000 migrate their spend on closed source software to open source, we will see the emergence of a whole new landscape of software companies, with the leaders of this new cohort valued in the tens of billions of dollars.

Clearly, that day is not tomorrow. These open source companies will need to grow and mature and develop their products and organization in the coming decade. But the trend is undeniable and here at Index we’re honored to have been here for the early days of this journey.


The crusade against open-source abuse


There’s a dark cloud on the horizon. The behavior of cloud infrastructure providers, such as Amazon, threatens the viability of open source. I first wrote about this problem in a prior piece on TechCrunch. In 2018, thankfully, several leaders have mobilized (amid controversy) to propose multiple solutions to the problem. Here’s what’s happened in the last month.

The Problem

Go to Amazon Web Services (AWS) and hover over the Products menu at the top. You will see numerous open-source projects that Amazon did not create, but runs as a service. These provide Amazon with billions of dollars of revenue per year. To be clear, this is not illegal. But it is not conducive to sustainable open-source communities, and especially commercial open-source innovation.

Two Solutions

In early 2018, I gathered together the creators, CEOs or general counsels of two dozen at-scale open-source companies, along with respected open source lawyer Heather Meeker, to talk about what to do.

We wished to define a license that prevents cloud infrastructure providers from running certain software as a commercial service, while at the same time making that software effectively open source for everyone else, i.e., everyone not running that software as a commercial service.

With our first proposal, Commons Clause, we took the most straightforward approach: we constructed one clause, which can be added to any liberal open source license, preventing the licensee from “Selling” the software — where “Selling” includes running it as a commercial service. (Selling other software made with Commons Clause software is allowed, of course.) Applying Commons Clause transitions a project from open source to source-available.

We also love the proposal being spearheaded by another participant, MongoDB, called the Server Side Public License (SSPL). Rather than prohibit the software from being run as a service, SSPL requires that you open-source all programs that you use to make the software available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service. This is known as a “copyleft.”

These two licenses are two different solutions to exactly the same problem. Heather Meeker wrote both solutions, supported by feedback organized by FOSSA.

The initial uproar and accusations that these efforts were trying to “trick” the community fortunately gave way to understanding and acknowledgement from the open source community that there is a real problem to be solved here, that it is time for the open source community to get real, and that it is time for the net giants to pay fairly for the open source on which they depend.

In October, one of the board members of the Apache Software Foundation (ASF) reached out to me and suggested working together to create a modern open source license that solves the industry’s needs.

Kudos to MongoDB

Further kudos are owed to MongoDB for definitively stating that they will be using SSPL, submitting SSPL in parallel to an organization called the Open Source Initiative (OSI) for endorsement as an open source license, but not waiting for OSI’s endorsement to start releasing software under the SSPL.

OSI, which has somehow anointed itself as the body that will “decide” whether a license is open source, has a habit of myopically debating what’s open source and what’s not. With the submission of SSPL to OSI, MongoDB has put the ball in OSI’s court to either step up and help solve an industry problem, or put their heads back in the sand.

In fact, MongoDB has done OSI a huge favor. MongoDB has gone and solved the problem and handed a perfectly serviceable open source license to OSI on a silver platter.

Open Source Sausage

The public archives of OSI’s debate over SSPL are at times informative and at times amusing, bordering on comical. After MongoDB’s original submission, there were rah-rah rally cries amongst the members to find reasons to deem SSPL not an open source license, followed by some +1’s. Member John Cowan reminded the group that just because OSI does not endorse a license as open source, does not mean that it is not open source:

As far as I know (which is pretty far), the OSI doesn’t do that. They have never publicly said “License X is not open source.” People on various mailing lists have done so, but not the OSI as such. And they certainly don’t say “Any license not on our OSI Certified ™ list is not open source”, because that would be false. It’s easy to write a license that is obviously open source that the OSI would never certify for any of a variety of reasons.

Eliot Horowitz (CTO and co-founder of MongoDB) responded cogently to questions, comments and objections, concluding with:

In short, we believe that in today’s world, linking has been superseded by the provision of programs as services and the connection of programs over networks as the main form of program combination. It is unclear whether existing copyleft licenses clearly apply to this form of program combination, and we intend the SSPL to be an option for developers to address this uncertainty.

Much discussion ensued about the purpose, role and relevance of OSI. Various sundry legal issues were raised or addressed by Van Lindberg, McCoy Smith, and Bruce Perens.

Heather Meeker (the lawyer who drafted both Commons Clause and SSPL) stepped in and completely addressed the legal issues that had been raised thus far. Various other clarifications were made by Eliot Horowitz, and he also conveyed willingness to change the wording of the license if it would help.

Discussion amongst the members continued about the role, relevance and purpose of OSI, with one member astutely noting that there were a lot of “free software” wonks in the group, attempting to bastardize open source to advocate their own agenda:

If, instead, OSI has decided that they are now a Free Software organization, and that Free Software is what “we” do, and that “our” focus is on “Free software” then, then let’s change the name to the Free Software Initiative and open the gates for some other entity, who is all about Open Source, to take on that job, and do it proudly. 🙂

There was debate over whether SSPL discriminates against types of users, which would disqualify it from being open source. Eliot Horowitz provided a convincing explanation that it did not, which seemed to quiet the crowd.

Heather Meeker dropped some more legal knowledge on the group, which seemed to sufficiently address the outstanding issues. Bruce Perens, the author of item 6 of the so-called open source definition, acknowledged that SSPL does not violate item 6 or item 9 of the definition, and subsequently suggested revising item 9 such that SSPL would violate it:

We’re not falling on our swords because of this. And we can fix OSD #9 with a two word addition “or performed” as soon as the board can meet. But it’s annoying.

Kyle Mitchell, himself an accomplished open source lawyer, opposed such a tactic. Larry Rosen pointed out that some members’ assertion (that “it is fundamental to open source that everyone can use a program for any purpose”) is untrue. Still more entertaining discussion ensued about the purpose of OSI and the meaning of open source.

Carlos Piana succinctly stated why SSPL was indeed open source. Kyle Mitchell added that if licenses were to be judged in the manner that the group was judging SSPL, then GPL v2 was not open source either.


Meanwhile Dor Lior, the founder of database company ScyllaDB, compared SSPL and AGPL side-by-side and argued that “MongoDB would have been better off with Commons Clause or just swallowed a hard pill and stayed with AGPL.” Player.FM released their service based on Commons Clause-licensed RediSearch, after in-memory database company Redis Labs placed RediSearch and four other specific add-on modules (but not Redis itself) under Commons Clause, and graph database company Neo4j placed its entire codebase under Commons Clause and raised an $80M Series E.

Then Michael DeHaan, creator of Red Hat Ansible, chose Commons Clause for his new project. When asked why he did not choose any of the existing licenses that OSI has “endorsed” to be open source, he said:

This groundswell in 2018 should be ample indication that there is an industry problem that needs to be fixed.

Eliot Horowitz summarized and addressed all the issues, dropped the mic, and left for a while. When it seemed like SSPL was indeed following all the rules of open source licenses, and was garnering support of the members, Brad Kuhn put forward a clumsy argument for why OSI should change the rules as necessary to prevent SSPL from being deemed open source, concluding:

It’s likely the entire “license evaluation process” that we use is inherently flawed.

Mitchell clinched the argument that SSPL is open source with definitive points. Horowitz thanked the members for their comments and offered to address any concerns in a revision, and returned a few days later with a revised SSPL.

OSI has 60 days from MongoDB’s new submission to make a choice:

  1. Wake up and realize that SSPL, perhaps with some edits, is indeed an open source license, OR
  2. Effectively signal to the world that OSI does not wish to help solve the industry’s problems, and that they’d rather be policy wonks and have theoretical debates.

“Wonk” here is meant in the best possible way.

Importantly, MongoDB is proceeding to use the SSPL regardless. If MongoDB were going to wait until OSI’s decision, or if OSI were more relevant, we might wait with bated breath to hear whether OSI would endorse SSPL as an open source license.

As it stands, OSI’s decision is more important to OSI itself, than to the industry. It signals whether OSI wants to remain relevant and help solve industry problems or whether it has become too myopic to be useful. Fearful of the latter, we looked to other groups for leadership and engaged with the Apache Software Foundation (ASF) when they reached out in the hopes of creating a modern open source license that solves the industry’s needs.

OSI should realize that while it would be nice if they deemed SSPL to be open source, it is not critical. Again in the words of John Cowan, just because OSI has not endorsed a license as open source, does not mean it’s not open source. While we greatly respect almost all members of industry associations and the work they do in their fields, it is becoming difficult to respect the purpose and process of any group that anoints itself as the body that will “decide” whether a license is open source — it is arbitrary and obsolete.


In my zest to urge the industry to solve this problem, in an earlier piece, I had said that “if one takes open source software that someone else has built and offers it verbatim as a commercial service for one’s own profit” (as cloud infrastructure providers do) that’s “not in the spirit” of open source. That’s an overstatement and thus, frankly, incorrect. Open source policy wonks pointed this out. I obviously don’t mind rattling their cages, but I should have stayed away from making statements about “what’s in the spirit” so as to not detract from my main argument.


The behavior of cloud infrastructure providers poses an existential threat to open source. Cloud infrastructure providers are not evil. Current open source licenses allow them to take open source software verbatim and offer it as a commercial service without giving back to the open source projects or their commercial shepherds. The problem is that developers do not have open source licensing alternatives that prevent cloud infrastructure providers from doing so. Open source standards groups should help, rather than get in the way. We must ensure that authors of open source software can not only survive, but thrive. And if that means taking a stronger stance against cloud infrastructure providers, then authors should have licenses available to allow for that. The open source community should make this an urgent priority.


I have not invested directly or indirectly in MongoDB. I have invested directly or indirectly in the companies behind the open source projects Spring, Mule, DynaTrace, Ruby Rails, Groovy Grails, Maven, Gradle, Chef, Redis, SysDig, Prometheus, Hazelcast, Akka, Scala, Cassandra, Spinnaker, FOSSA, and… in Amazon.


Facebook open sources library to enhance latest Transport Layer Security protocol


For several years, the Internet Engineering Task Force (IETF) has been working to improve the Transport Layer Security (TLS) protocol, which is designed to help developers protect data as it moves around the internet. Facebook created an API library called Fizz to enhance the latest version, TLS 1.3, on Facebook’s networks. Today, it announced it’s open sourcing Fizz and placing it on GitHub for anyone to access and use.

Facebook is currently running more than 50 percent of its traffic through TLS 1.3 and Fizz, which it believes is the largest implementation of TLS 1.3 to date.

All of this is referring to how traffic moves around the internet and how servers communicate with one another in a secure way. This is particularly important because as Facebook points out, in modern internet server architecture, it’s not uncommon to have different key pieces of the process spread out across the world. This raises challenges around reducing latency as data moves from server to server.

One of the major issues involved writing data to a huge chunk of memory, which increased resource overhead and reduced speed. To get around this issue, Facebook decided to divide the data into smaller chunks as it moved into memory and then encrypt it in place, a process called scatter/gather I/O. This provides a more efficient way of processing data in memory, reducing the overhead required to process it, while increasing the processing speed.
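The benefit of processing chunks in place is that no second, full-size output buffer ever has to be allocated. As a minimal sketch of that idea only (a toy XOR keystream for illustration, not Fizz's actual AES-GCM record encryption or its C++ API):

```python
import hashlib

def keystream(key, nonce, length):
    """Toy keystream built from SHA-256 in counter mode (NOT real crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_in_place(buf, key, nonce, chunk_size=4096):
    """Encrypt a mutable buffer chunk by chunk, without copying it whole.

    Mirrors the scatter/gather idea: each chunk is XORed with keystream
    bytes in place, so memory overhead stays bounded by chunk_size
    instead of the full message length.
    """
    for off in range(0, len(buf), chunk_size):
        chunk = buf[off:off + chunk_size]
        ks = keystream(key, nonce + off.to_bytes(8, "big"), len(chunk))
        buf[off:off + chunk_size] = bytes(a ^ b for a, b in zip(chunk, ks))

data = bytearray(b"hello fizz " * 1000)
original = bytes(data)
encrypt_in_place(data, b"key", b"nonce")
assert bytes(data) != original
encrypt_in_place(data, b"key", b"nonce")  # XOR stream: same op decrypts
assert bytes(data) == original
```

The real Fizz implementation does this with vectors of I/O buffers and authenticated encryption, but the memory-layout point is the same: transform each piece where it already lives rather than gathering everything into one contiguous allocation.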

Instead of encrypting a single chunk of data, Fizz uses scatter/gather I/O to break it into discrete pieces and encrypt each one. Diagram: Facebook.
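Fizz itself is a C++ library that uses real AEAD ciphers; the Python sketch below only illustrates the scatter/gather idea described above — walking a buffer chunk by chunk and transforming each piece where it sits, rather than copying everything into one large contiguous allocation first. The chunk size and the toy XOR keystream are illustrative assumptions, not Fizz's API or TLS's record format.

```python
# Illustrative sketch of scatter/gather-style in-place encryption.
# A real stack (like Fizz) uses AEAD ciphers such as AES-GCM; here a
# toy XOR keystream stands in for the cipher to stay dependency-free.
import hashlib

CHUNK = 16  # illustrative chunk size in bytes

def keystream(key: bytes, index: int, length: int) -> bytes:
    # Derive a deterministic pseudo-random keystream per chunk
    # (illustrative only; not a secure construction).
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()[:length]

def encrypt_in_place(buf: bytearray, key: bytes) -> None:
    # Walk the buffer in chunks and encrypt each piece in place,
    # avoiding an extra copy of the data.
    view = memoryview(buf)
    for i, start in enumerate(range(0, len(buf), CHUNK)):
        chunk = view[start:start + CHUNK]
        ks = keystream(key, i, len(chunk))
        for j in range(len(chunk)):
            chunk[j] ^= ks[j]

data = bytearray(b"hello scatter/gather world, chunked in place")
original = bytes(data)
key = b"example-key"
encrypt_in_place(data, key)   # ciphertext now occupies the same buffer
encrypt_in_place(data, key)   # XOR keystream: applying it again decrypts
assert bytes(data) == original
```

Because the toy cipher is a pure XOR keystream, applying it twice restores the plaintext, which makes the in-place round trip easy to verify.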

TLS 1.3 introduced a concept called “early data” (also known as zero round trip data or 0-RTT data), which has helped reduce latency. According to the IETF, it does this by “allowing a client to send data to a server in the first round trip of a connection, without waiting for the TLS handshake to complete if the client has spoken to the same server recently.” The problem is that this concept can be insecure, so Fizz includes APIs that support this concept and builds on it by reducing the known vulnerabilities.
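The latency benefit of early data can be sketched with a toy round-trip model. The RTT figure and the `connect` helper below are hypothetical, added purely to make the arithmetic concrete; real 0-RTT behavior is governed by the TLS 1.3 handshake and the server's resumption state.

```python
# Toy model contrasting a fresh TLS 1.3 connection with 0-RTT
# resumption using a cached pre-shared key (PSK). Illustrative only.
RTT_MS = 50  # assumed network round-trip time

def connect(has_cached_psk: bool) -> int:
    """Return round trips before the first request is answered."""
    if has_cached_psk:
        # 0-RTT: ClientHello plus early data go in the first flight,
        # so the server can answer within a single round trip.
        return 1
    # Full handshake first (1 RTT), then the request (1 RTT).
    return 2

print("first visit:", connect(False) * RTT_MS, "ms")  # 100 ms
print("resumption: ", connect(True) * RTT_MS, "ms")   # 50 ms
```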

The company has been working with IETF because it has unique needs due to the sheer number of transactions it processes on a daily basis. According to Facebook, TLS 1.3, “incorporates several new features that make internet traffic more secure, including encrypting handshake messages to keep certificates private, redesigning the way secret keys are derived, and a zero round-trip connection setup, which makes certain requests faster than TLS 1.2.”

As for Fizz, “In addition to the enhancements that come with TLS 1.3, Fizz offers an improved solution for middlebox handshake failures, supports asynchronous I/O by default, and can handle scatter/gather I/O to eliminate the need for extra copies of data,” Facebook wrote in the blog post announcing it was open sourcing the library.

Fizz improves the newest version of the Transport Layer Security protocol, and by making it open source, Facebook is sharing this technology with the community at large where others can take advantage of and build upon the work Facebook has done.


After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud


As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security is also becoming a major issue, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, after which the customer was obligated to pay an additional 20 percent each year in support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
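Benioff's hardware example in point 3 translates into simple per-seat arithmetic. The purchase price and user count below are the figures cited above; the tenant count is a hypothetical number added only to show how multi-tenancy amortizes the same hardware across customers.

```python
# Back-of-the-envelope math from the Siebel-era figures cited above.
siebel_hardware = 385_000   # USD, typical enterprise hardware purchase
users = 200                 # end-users served by that purchase
per_user = siebel_hardware / users
print(f"1999 hardware cost per end-user: ${per_user:,.0f}")  # $1,925

# Multi-tenancy spreads that same capacity across many customers.
tenants = 50  # hypothetical number of customers sharing the stack
print(f"shared across {tenants} tenants: ${per_user / tenants:,.2f} per user")
```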

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-late ‘00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

Source: Getty Images

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe Suite long ago made the switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) has long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage has been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packed as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
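Expressed as ratios, the price declines cited above are stark. Both sets of price points are taken directly from the text (1999 figures from Benioff's original rationale, current figures from the bullet above).

```python
# Hardware price points cited in the text: 1999 vs. today.
ram_1999, ram_now = 1_000, 5        # USD per gigabyte of RAM
disk_1999, disk_now = 30_000, 30    # USD per terabyte of disk
print(f"RAM is {ram_1999 // ram_now}x cheaper")    # RAM is 200x cheaper
print(f"Disk is {disk_1999 // disk_now}x cheaper") # Disk is 1000x cheaper
```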

Source: Getty Images/ERHUI1979

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
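The attack-surface arithmetic in point 3 above is worth making explicit; both inputs are the averages cited in the text, so the result is a rough upper-bound illustration rather than a measured figure.

```python
# Rough attack-surface estimate from the figures cited above.
saas_apps = 1_000            # SaaS applications at a large enterprise
employees_per_vendor = 250   # average headcount per SaaS vendor
entry_points = saas_apps * employees_per_vendor
print(f"{entry_points:,} additional possible points of entry")  # 250,000
```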

Source: Getty Images/MIKIEKWOODS

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.

