
Timesdelhi.com

June 16, 2019
Category archive

open source

Solo.io wants to bring order to service meshes with centralized management hub


As containers and microservices have proliferated, a new kind of tool called the service mesh has developed to help manage and understand interactions between services. While Kubernetes has emerged as the clear container orchestration tool of choice, there is much less certainty in the service mesh market. Solo.io announced a new open source tool called Service Mesh Hub today, designed to help companies manage multiple service meshes in a single interface.

It is early days for the service mesh concept, but there are already multiple offerings, including Istio, Linkerd (pronounced Linker-Dee) and Convoy. While the market sorts itself out, it requires a new set of tools, a management layer, so that developers and operations can monitor and understand what’s happening inside the various service meshes they are running.

Idit Levine, founder and CEO at Solo, says she formed the company because she saw an opportunity to develop a set of tooling for a nascent market. Since founding the company in 2017, it has developed several open source tools to fill that service mesh tool vacuum.

Levine says that she recognized that companies would be using multiple service meshes for multiple situations and that not every company would have the technical capabilities to manage this. That is where the idea for the Service Mesh Hub was born.

It’s a centralized place for companies to add the different service mesh tools they are using, understand the interactions happening within the mesh and add extensions to each one from a kind of extension app store. Solo wants to make adding these tools a simple matter of pointing and clicking. While it obviously still requires a certain level of knowledge about how these tools work, it removes some of the complexity around managing them.

Solo.io Service Mesh Hub. Screenshot: Solo.io

“The reason we created this is because we believe service mesh is something big, and we want people to use it, and we feel it’s hard to adopt right now. We believe by creating that kind of framework or platform, it will make it easier for people to actually use it,” Levine told TechCrunch.

The vision is that eventually companies will be able to add extensions to the store for free, or even at some point for a fee, and it is through these paid extensions that the company will be able to make money. She recognized that some companies will be creating extensions for internal use only, and in those cases, they can add them to the hub and mark them as private and only that company can see them.

For every abstraction, it seems, there is a new set of problems to solve. The service mesh is a response to the problem of managing multiple services. It solves three key issues, according to Levine: it lets a company route traffic between microservices, gain visibility into them through the mesh’s logs and metrics, and control which services are allowed to talk to each other.
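The three functions Levine lists can be sketched as a toy model. This is purely illustrative Python, not the API of any real mesh (Istio, Linkerd or otherwise); the service names and tables are made up for the example.

```python
# Toy model of the three service-mesh functions: routing, visibility
# (metrics), and security (service-to-service access control).

ROUTES = {"checkout": "checkout-v2"}   # routing: send traffic to a chosen version
ALLOWED = {("frontend", "checkout")}   # security: which pairs may talk
metrics = {}                           # visibility: per-service call counts

def call(src, dst):
    if (src, dst) not in ALLOWED:              # deny unlisted service pairs
        return "denied"
    target = ROUTES.get(dst, dst)              # resolve the routed destination
    metrics[target] = metrics.get(target, 0) + 1  # record the call
    return f"{src} -> {target}"

print(call("frontend", "checkout"))  # frontend -> checkout-v2
print(call("frontend", "billing"))   # denied
```

In a real mesh these tables live in sidecar proxies and a control plane rather than in application code, which is exactly the complexity a management layer like Service Mesh Hub aims to surface in one place.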

Levine’s company is a response to the issues that have developed around understanding and managing the service meshes themselves. She says she doesn’t worry about a big company coming in and undermining her mission, because those companies are too focused on their own tools to create an uber-management tool like this (though that doesn’t mean the company wouldn’t be an attractive acquisition target).

So far, the company has raised over $13 million in funding, according to Crunchbase data.

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects


Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its first Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today, after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point to two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well aligned with what we would love to see more of in the open source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like; the result is a production-ready Kubernetes cluster deployed by OpenStack’s Helm tool, though without any other dependencies on OpenStack.

AT&T’s assistant vice president of Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult and that while it’s relatively easy to manage the lifecycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own lifecycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the lifecycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to telcos, though it’s no secret that OpenStack is quite popular among them, and unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will also showcase OpenStack’s bare metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare metal tools now manage over a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.

Google Cloud Run brings serverless and containers together


Two of the biggest trends in application development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.
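The billing granularity Teich describes, charging only for usage rounded up to 100 ms increments, can be sketched as follows. This is an illustration of the rounding rule only; actual Cloud Run pricing depends on the CPU and memory configured, and the rates are not modeled here.

```python
import math

def billable_ms(used_ms, increment_ms=100):
    """Round request processing time up to the next billing increment."""
    return math.ceil(used_ms / increment_ms) * increment_ms

print(billable_ms(230))  # 300: a 230 ms request is billed as three 100 ms slices
print(billable_ms(100))  # 100: exact multiples are not rounded up further
```

The practical upshot is that a container that serves no traffic costs nothing, which is the serverless half of the Cloud Run pitch.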

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have the gcloud command line, the same UI and our console — and they can, with one click, target the destination they want,” he said.

All of this is made possible through yet another open source project the company introduced last year, called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.

Serverless, as you probably know by now, is a bit of a misnomer. It’s not really taking away servers, but it is eliminating the need for developers to worry about them. Instead of loading their application on a particular virtual machine, the cloud provider, in this case, Google, provisions the exact level of resources required to run an operation. Once that’s done, these resources go away, so you only pay for what you use at any given moment.

Chef goes 100% open source


Chef, the popular automation service, today announced that it is open sourcing all of its software under the Apache 2 license. Until now, Chef used an open core model with a number of proprietary products that complemented its open-source tools. Most of these proprietary tools focused on enterprise users and their security and deployment needs. Now, all of these tools, which represent somewhere between a third and half of Chef’s total code base, are open source, too.

“We’re moving away from our open core model,” Chef SVP of products and engineering Corey Scobie told me. “We’re now moving to exclusively open source software development.”

He added that this also includes open product development. Going forward, the company plans to share far more details about its roadmap, feature backlogs and other product development details. All of Chef’s commercial offerings will also be built from the same open source code that everybody now has access to.

Scobie noted that there are a number of reasons why the company is doing this. He believes, for example, that the best way to build software is to collaborate in public with those who are actually using it.

“With that philosophy in mind, it was really easy to justify how we’d take the remainder of the software that we produce and make it open source,” Scobie said. “We believe that that’s the best way to build software that works for people — real people in the real world.”

Another reason, Scobie said, is that it was becoming increasingly difficult for Chef to explain which parts of the software were open source and which were not. “We wanted to make that conversation easier, to be perfectly honest.”

Chef’s decision comes during a bit of a tumultuous time in the open source world. A number of companies, like Redis, MongoDB and Elastic, have recently moved to licenses that explicitly disallow the commercial use of their open source products by large cloud vendors like AWS unless they also buy a commercial license.

But here is Chef, open sourcing everything. Chef co-founder and board member Adam Jacob doesn’t think that’s a problem. “In the open core model, you’re saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that’s incorrect,” he said. “I think, in fact, the value was always in the totality of the product.”

Jacob also argues that those companies that are moving to these new, more restrictive licenses are only hurting themselves. “It turns out that the product was what mattered in the first place,” he said. “They continue to produce great enterprise software for their customers and their customers continue to be happy and continue to buy it, which is what they always would’ve done.” He also noted that he doesn’t think AWS will ever be better at running Elasticsearch than Elastic or, for that matter, better at running Chef than Chef itself.

It’s worth noting that Chef also today announced the launch of its Enterprise Automation Stack, which brings together all of Chef’s tools (Chef Automate, Infra, InSpec, Habitat and Workstation) under a unified umbrella.

“Chef is fully committed to enabling organizations to eliminate friction across the lifecycle of all of their applications, ensuring that, whether they build their solutions from our open source code or license our commercial distribution, they can benefit from collaboration as code,” said Chef CEO Barry Crist. “Chef Enterprise Automation Stack lets teams establish and maintain a consistent path to production for any application, in order to increase velocity and improve efficiency, so deployment and updates of mission-critical software become easier, move faster and work flawlessly.”

Microsoft open sources its data compression algorithm and hardware for the cloud


The amount of data that the big cloud computing providers now store is staggering, so it’s no surprise that most store all of this information as compressed data in some form or another — just like you used to zip your files back in the days of floppy disks, CD-ROMs and low-bandwidth connections. Typically, those systems are closely guarded secrets, but today, Microsoft open sourced the algorithm, hardware specification and Verilog source code for how it compresses data in its Azure cloud. The company is contributing all of this to the Open Compute Project (OCP).

Project Zipline, as Microsoft calls this project, can achieve 2x higher compression ratios compared to the standard Zlib-L4 64KB model. To do this, the algorithm — and its hardware implementation — were specifically tuned for the kind of large data sets Microsoft sees in its cloud. Because the system works at the systems level, there is virtually no overhead and Microsoft says that it is actually able to manage higher throughput rates and lower latency than other algorithms are currently able to achieve.
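For a point of reference on what Zipline is being measured against, here is the standard zlib compression it is compared to, sketched in Python. The sample data and the resulting ratio are illustrative only; they say nothing about Microsoft’s benchmark corpus, and Zipline’s claimed 2x gain is over Zlib-L4 on its own cloud data sets, not on a toy string like this.

```python
import zlib

# Repetitive, log-like data of the kind cloud providers store in bulk.
data = b"cloud log line: GET /index.html 200\n" * 1000

compressed = zlib.compress(data, 6)        # zlib at its default-ish level
ratio = len(data) / len(compressed)        # compression ratio for this sample
print(f"{len(data)} -> {len(compressed)} bytes, ratio {ratio:.1f}x")
```

Zipline’s pitch is that by tuning the algorithm and implementing it in hardware, it beats this kind of software baseline on both ratio and throughput for the data shapes Azure actually sees.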

Microsoft stresses that it is also contributing the Verilog source code for register transfer language (RTL) — that is, the low-level code that makes this all work. “Contributing RTL at this level of detail as open source to OCP is industry leading,” Kushagra Vaid, the general manager for Azure hardware infrastructure, writes. “It sets a new precedent for driving frictionless collaboration in the OCP ecosystem for new technologies and opening the doors for hardware innovation at the silicon level.”

Microsoft is currently using this system in its own Azure cloud, but it is now also partnering with others in the Open Compute Project. Among these partners are Intel, AMD, Ampere, Arm, Marvell, SiFive, Broadcom, Fungible, Mellanox, NGD Systems, Pure Storage, Synopsys and Cadence.

“Over time, we anticipate Project Zipline compression technology will make its way into several market segments and usage models such as network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessor, IoT, and edge devices,” writes Vaid.
