Thanks to containers and microservices, the way we build software is quickly changing. But as with all change, these new models also introduce new problems. You probably still want to know who actually built a given container and what’s running in it. To get a handle on this, Google, JFrog, Red Hat, IBM, Black Duck, Twistlock, Aqua Security and CoreOS today announced Grafeas (“scribe” in Greek), a new joint open source project that provides users with a standardized way to audit and govern their software supply chain.
In addition, Google also launched another new project, Kritis (“judge” in Greek, because after the success of Kubernetes, it would surely be bad luck to pick names in any other language for new Google open source projects). Kritis allows businesses to enforce certain container properties at deploy time for Kubernetes clusters.
Grafeas basically defines an API that collects all of the metadata around code deployments and build pipelines. This means keeping a record of authorship and code provenance, recording the deployment of each piece of code, marking whether code passed a security scan, noting which components it uses (and whether those have known vulnerabilities) and whether QA signed off on it. So before a new piece of code is deployed, the system can check all of the info about it through the Grafeas API and, if it’s certified and free of vulnerabilities (at least to the best knowledge of the system), it can be pushed into production.
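The deploy-time gate described above can be sketched as a simple policy check over the vulnerability occurrences Grafeas records for an image. The field names below mirror the shape of Grafeas occurrence records, but the severity ordering and the policy threshold are illustrative assumptions, not part of the announcement.

```python
# Minimal sketch of a Grafeas-style deploy gate: given the occurrences
# recorded for a container image, decide whether it may ship to production.
# The severity ranking and threshold are illustrative assumptions.

SEVERITY_RANK = {"MINIMAL": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def is_deployable(occurrences, max_allowed="MEDIUM"):
    """Return True if no vulnerability occurrence exceeds the allowed severity."""
    limit = SEVERITY_RANK[max_allowed]
    for occ in occurrences:
        if occ.get("kind") != "VULNERABILITY":
            continue  # skip build, deployment and attestation records
        severity = occ["vulnerability"]["severity"]
        # Unknown severities are treated as CRITICAL, i.e. fail closed.
        if SEVERITY_RANK.get(severity, SEVERITY_RANK["CRITICAL"]) > limit:
            return False
    return True

# Occurrences as a Grafeas server might return them, trimmed to the
# fields this check needs.
occurrences = [
    {"kind": "BUILD"},
    {"kind": "VULNERABILITY", "vulnerability": {"severity": "LOW"}},
    {"kind": "VULNERABILITY", "vulnerability": {"severity": "CRITICAL"}},
]

print(is_deployable(occurrences))      # the CRITICAL finding blocks the deploy
print(is_deployable(occurrences[:2]))  # only a LOW finding remains, so it ships
```

In a real pipeline this check would run against metadata fetched from the Grafeas API rather than a hardcoded list; the point is that every tool writes occurrences in one agreed-upon format, so one gate can reason over all of them.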
At first glance, this all may seem rather bland, but there’s a real need for projects like this. With the advent of continuous integration, decentralization, microservices, an increasing number of toolsets and every other buzzworthy technology, enterprises are struggling to keep tabs on what’s actually happening in their data centers. It’s pretty hard to stick to your security and governance policies if you don’t exactly know what software you’re actually running. Currently, all of the different tools that developers use can record their own data, of course, but Grafeas represents an agreed-upon way for collecting and accessing this data across tools.
Like so many of Google’s open source projects, Grafeas basically mimics how Google itself handles these issues. Thanks to its massive scale and early adoption of containers and microservices, Google, after all, saw many of these problems long before they became an issue for the industry at large. As Google notes in today’s announcement, the basic tenets of Grafeas reflect the best practices that Google itself developed for its build systems.
All of the various partners involved here are bringing different pieces to the table. JFrog, for example, will implement this system in its Xray API. Red Hat will use it to enhance the security and automation features in OpenShift (its container platform) and CoreOS will integrate it into its Tectonic Kubernetes platform.
One of the early testers of Grafeas is Shopify, which currently builds about 6,000 containers per day and keeps 330,000 images in its primary container registry. With Grafeas, it can now tell, for example, whether a given container is currently being used in production, when it was downloaded from the registry, what packages are running in it and whether any of the components in the container include known security vulnerabilities.
“Using Grafeas as the central source of truth for container metadata has allowed the security team to answer these questions and flesh out appropriate auditing and lifecycling strategies for the software we deliver to users at Shopify,” the company writes in today’s announcement.
News Source = techcrunch.com
Sumo Logic brings data analysis to containers
Sumo Logic has long held the goal to help customers understand their data wherever it lives. As we move into the era of containers, that goal becomes more challenging because containers by their nature are ephemeral. The company announced a product enhancement today designed to instrument containerized applications in spite of that.
Sumo’s CEO Ramin Sayer says containers have begun to take hold over the last 12-18 months with Docker and Kubernetes emerging as tools of choice. Given their popularity, Sumo wants to be able to work with them. “[Docker and Kubernetes] are by far the most standard things that have developed in any new shop, or any existing shop that wants to build a brand new modern app or wants to lift and shift an app from on prem [to the cloud], or have the ability to migrate workloads from Vendor A platform to Vendor B,” he said.
He’s not wrong of course. Containers and Kubernetes have been taking off in a big way over the last 18 months and developers and operations alike have struggled to instrument these apps to understand how they behave.
“But as that standardization of adoption of that technology has come about, it makes it easier for us to understand how to instrument, collect, analyze, and more importantly, start to provide industry benchmarks,” Sayer explained.
They do this by avoiding the use of agents. Regardless of how you run your application, whether in a VM or a container, Sumo is able to capture the data and give you feedback you might otherwise have trouble retrieving.
The company has built in native support for Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS). It also supports the open source tool Prometheus favored by Kubernetes users to extract metrics and metadata. The goal of the Sumo tool is to help customers fix issues faster and reduce downtime.
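Prometheus support of the kind described above typically means consuming the standard Prometheus scrape pipeline. The fragment below is a generic `prometheus.yml` sketch for discovering metrics from annotated Kubernetes pods; the job name and annotation convention are common community practice, not Sumo Logic configuration.

```yaml
# Illustrative prometheus.yml fragment: discover and scrape Kubernetes
# pods that opt in via the widely used prometheus.io/scrape annotation.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only keep pods that are annotated prometheus.io/scrape: "true".
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

A collector that understands this format can pull the same metrics and metadata Prometheus would, which is what lets Sumo ingest container telemetry without installing its own agent in each workload.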
As they work with this technology, they can begin to understand norms and pass that information onto customers. “We can guide them and give them best practices and tips, not just on what they’ve done, but how they compare to other users on Sumo,” he said.
News Source = techcrunch.com
Kubernetes stands at an important inflection point
Last week at KubeCon and CloudNativeCon in Copenhagen, we saw an open source community coming together, full of vim and vigor and radiating positive energy as it recognized its growing clout in the enterprise world. Kubernetes, which came out of Google just a few years ago, has gained acceptance and popularity astonishingly rapidly — and that has raised both a sense of possibility and a boatload of questions.
At this year’s European version of the conference, the community seemed to be coming to grips with that rapid growth as large corporate organizations like Red Hat, IBM, Google, AWS and VMware all came together with developers and startups trying to figure out exactly what they had here with this new thing they found.
The project has been gaining acceptance as the de facto container orchestration tool, and as that happened, it was no longer about simply getting a project off the ground and proving that it could work in production. It now required a greater level of tooling and maturity that previously wasn’t necessary because it was simply too soon.
As this has happened, the various members who make up this growing group of users need to figure out, mostly on the fly, how to make it all work when it is no longer just a couple of developers and a laptop. There are now big boy and big girl implementations, and they require a new level of sophistication to make them work.
Against this backdrop, we saw a project that appeared to be at an inflection point. Much like a startup that realizes it actually achieved the product-market fit it had hypothesized, the Kubernetes community has to figure out how to take this to the next level — and that reality presents some serious challenges and enormous opportunities.
A community in transition
The Kubernetes project falls under the auspices of the Cloud Native Computing Foundation (or CNCF for short). At the opening keynote, CNCF director Dan Kohn was brimming with enthusiasm, proudly rattling off numbers to a packed audience, showing the enormous growth of the project.
If you wanted proof of Kubernetes’ (and by extension cloud native computing’s) rapid ascension, consider that attendance at KubeCon in Copenhagen last week numbered 4,300 registered participants, triple the attendance in Berlin just last year.
The hotel and conference center were buzzing with conversation. Every corner and hallway, every bar stool in the hotel’s open lobby bar, at breakfast in the large breakfast room, by the many coffee machines scattered throughout the venue, and even throughout the city, people chatted, debated and discussed Kubernetes and the energy was palpable.
David Aronchick, who now runs the open source Kubeflow Kubernetes machine learning project at Google, was running Kubernetes in the early days (way back in 2015) and he was certainly surprised to see how big it has become in such a short time.
“I couldn’t have predicted it would be like this. I joined in January 2015 and took on project management for Google Kubernetes. I was stunned at the pent-up demand for this kind of thing,” he said.
Yet there was great demand, and with each leap forward and each new level of maturity came a new set of problems to solve, which in turn has created opportunities for new services and startups to fill in the many gaps. As Aparna Sinha, who is the Kubernetes group product manager at Google, said in her conference keynote, enterprise companies want some level of certainty that earlier adopters were willing to forgo to take a plunge into the new and exciting world of containers.
As she pointed out, for others to be pulled along and for this to truly reach another level of adoption, it’s going to require some enterprise-level features and that includes security, a higher level of application tooling and a better overall application development experience. All these types of features are coming, whether from Google or from the myriad of service providers who have popped up around the project to make it easier to build, deliver and manage Kubernetes applications.
Sinha says that one of the reasons the project has been able to take off as quickly as it has is that its roots lie in a container orchestration tool called Borg, which the company has been using internally for years. While that evolved into what we know today as Kubernetes, it certainly required some significant repackaging to work outside of Google. Yet that early refinement at Google gave it an enormous head start over the average open source project — which could account for its meteoric rise.
“When you take something so well established and proven in a global environment like Google and put it out there, it’s not just like any open source project invented from scratch when there isn’t much known and things are being developed in real time,” she said.
For every action
One thing everyone seemed to recognize at KubeCon was that in spite of the head start and early successes, there remains much work to be done and many issues to resolve. The companies using it today mostly still fall under the early adopter moniker. This remains true even though there are some full-blown enterprise implementations, like CERN, the European physics organization, which has spun up 210 Kubernetes clusters, or JD.com, the Chinese internet shopping giant, which has 20,000 servers running Kubernetes, with the largest cluster consisting of over 5,000 servers. Still, it’s fair to say that most companies aren’t that far along yet.
But the strength of an enthusiastic open source community like Kubernetes and cloud native computing in general, means that there are companies, some new and some established, trying to solve these problems, and the multitude of new ones that seem to pop up with each new milestone and each solved issue.
As Abbie Kearns, who runs another open source project, the Cloud Foundry Foundation, put it in her keynote, part of the beauty of open source is all those eyeballs on it to solve the scads of problems that are inevitably going to pop up as projects expand beyond their initial scope.
“Open source gives us the opportunity to do things we could never do on our own. Diversity of thought and participation is what makes open source so powerful and so innovative,” she said.
It’s worth noting that several speakers pointed out that diversity of thought also required actual diversity of membership to truly expand ideas to other ways of thinking and other life experiences. That too remains a challenge, as it does in technology and society at large.
In spite of this, Kubernetes has grown and developed rapidly, while benefiting from a community which so enthusiastically supports it. The challenge ahead is to take that early enthusiasm and translate it into more actual business use cases. That is the inflection point where the project finds itself, and the question is whether it will be able to take that next step toward broader adoption or whether it will peak and fall back.
News Source = techcrunch.com
DigitalOcean launches its container platform
DigitalOcean is getting into the container game. While it’s still best known for its affordable virtual private server hosting, the company’s ambition is to become a major player in the cloud computing space. Hosting was just the first part of that plan, and with its Spaces storage services, for example, it signaled its future plans.
The new container service is now in early preview and the company plans to make it widely available later this year.
“We’ve always been devoted to providing simple solutions for developers — starting with our cloud servers, Droplets,” said DigitalOcean VP of Product Shiven Ramji. “This product is no exception, allowing developers to focus on successfully shipping their applications while not being burdened by the complexity involved with creating and running a highly scalable and secure cluster across multiple apps.”
DigitalOcean Kubernetes, as the service is called, will allow developers to deploy and manage their container workloads on the DigitalOcean platform. Like competing products from virtually every major cloud computing provider, DigitalOcean’s offering will abstract away a lot of the underlying complexity of running Kubernetes. Users will get their own isolated Kubernetes cluster with full access to the Kubernetes API if they need it. The service integrates with the company’s existing storage service, firewall tools and other key features. Developers can choose whether to run their containers on standard DigitalOcean nodes or on more high-powered compute-optimized nodes. There also is support for access control through a new “teams” feature, as well as all of the usual metrics and logging features you’d expect from a service like this.
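Because the service exposes the full Kubernetes API, workloads on it would be described with standard manifests. The sketch below is a generic Kubernetes Deployment, not DigitalOcean-specific configuration; the names and image are placeholders.

```yaml
# Generic Kubernetes Deployment manifest; it applies unchanged to any
# conformant cluster, a managed DigitalOcean cluster included.
# The name "web" and the nginx image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` against the cluster’s endpoint is the same workflow as on any other Kubernetes provider, which is the portability argument managed offerings like this one trade on.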
News Source = techcrunch.com