By Tim Hinrichs

One risk in deploying fleets of powerful, flexible clusters on constantly changing infrastructure like Kubernetes is that mistakes happen. Even minute manual errors that slip past review can have substantial impacts on the health and security of your clusters. Such mistakes, in the form of misconfigurations, are reportedly the leading cause of cloud breaches. And with everything that can happen in the containerized world, these kinds of mistakes are virtually guaranteed to occur.

The question, then, is how developers and platform engineers, working under today's accelerated development timelines, can minimize these errors, if not eliminate them entirely for the most common cases.
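
By way of illustration only (this is my own sketch, not the approach the article describes), the kind of automated guardrail in question can be as simple as a script that scans a Kubernetes manifest for a couple of well-known misconfigurations before it is ever applied to a cluster. The specific checks and the use of PyYAML are assumptions for the example:

```python
# Minimal sketch of a pre-deployment misconfiguration check (illustrative only).
# Flags privileged containers and missing resource limits in Deployment manifests.
import sys
import yaml  # PyYAML


def check_manifest(path):
    findings = []
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") != "Deployment":
                continue
            name = doc.get("metadata", {}).get("name", "<unnamed>")
            pod_spec = doc.get("spec", {}).get("template", {}).get("spec", {})
            for container in pod_spec.get("containers", []):
                cname = container.get("name", "<unnamed>")
                security = container.get("securityContext") or {}
                if security.get("privileged"):
                    findings.append(f"{name}/{cname}: container runs privileged")
                if not (container.get("resources") or {}).get("limits"):
                    findings.append(f"{name}/{cname}: no resource limits set")
    return findings


if __name__ == "__main__":
    problems = check_manifest(sys.argv[1])
    for p in problems:
        print("MISCONFIGURATION:", p)
    sys.exit(1 if problems else 0)
```

In practice, teams usually enforce checks like these with a purpose-built policy engine rather than an ad hoc script, but the principle is the same: catch the mistake in review or CI, before it reaches a cluster.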

By David Linthicum

The two most-cited advantages of cloud-based platforms are pay-per-use billing and the ability to scale up to an almost unlimited pool of resources. No more buying ahead of demand and guessing how much physical hardware and software you'll need.

But enterprise IT needs to understand that scale and cost are coupled concepts in cloud computing. The more resources you use, whether you scale them yourself or rely on auto-scaling, the more you pay. How much you pay may depend as much on your architecture patterns as on the cost of the resources themselves. Here's why.

In building cloud-based systems, I've found that cloud architecture really comes down to making a long series of right decisions. Those who make bad decisions are not punished; their systems are just underoptimized. The fact that everything works conceals that you may be paying twice as much as you would with an architecture fully optimized for scaling and cost.
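
That "twice as much" point is easy to see with back-of-the-envelope arithmetic. The figures below are invented for illustration (the instance price and the hourly demand curve are not from the article); the sketch simply compares a fleet permanently sized for peak load with one that scales to actual demand:

```python
# Hypothetical cost comparison: fixed peak-sized fleet vs. auto-scaled fleet.
# All numbers are made up for illustration.

HOURLY_RATE = 0.10       # assumed cost per instance-hour
PEAK_INSTANCES = 20      # fixed fleet sized for the busiest hour

# Assumed demand: instances actually needed for each hour of one day.
hourly_demand = [2] * 8 + [12] * 8 + [20] * 4 + [6] * 4

fixed_cost = PEAK_INSTANCES * HOURLY_RATE * len(hourly_demand)
scaled_cost = sum(n * HOURLY_RATE for n in hourly_demand)

print(f"Fixed fleet per day:  ${fixed_cost:.2f}")
print(f"Auto-scaled per day:  ${scaled_cost:.2f}")
print(f"Overspend factor:     {fixed_cost / scaled_cost:.1f}x")
```

Run against these made-up numbers, the fixed fleet costs a bit more than twice as much for the same work, which is exactly the kind of quiet overspend an underoptimized architecture hides.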

By Simon Bisson

Microservices are at the heart of many cloud-native architectures, using tools such as Kubernetes to manage service scaling on demand. Microsoft has been at the forefront of much of this movement, with a deep commitment to the Cloud Native Computing Foundation and with Kubernetes underpinning both its hyperscale Azure cloud and its on-premises hybrid Azure Stack.

Part of that commitment comes from its tools, with a range of different platforms and services to support cloud-native microservice development. One of those tools is Dapr, the Distributed Application Runtime, an event-driven runtime that supports creating and managing service elements using best practices. It’s designed to be platform agnostic, so you can use your choice of target environments (local, Kubernetes, or any other environment with Dapr support) and your choice of languages and frameworks.
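
To give a flavor of what that platform-agnostic model looks like from application code, here is a minimal sketch (my own, not from the article) that talks to a local Dapr sidecar over its HTTP state API; the port, app id, and "statestore" component name follow Dapr's local-development defaults and should be treated as assumptions:

```python
# Minimal sketch of using Dapr's sidecar HTTP API for state management.
# Assumes `dapr init` has been run and the app was launched via something like
# `dapr run --app-id myapp --dapr-http-port 3500 -- python app.py`, with a
# state component named "statestore" (the local-development default).
import requests

DAPR_URL = "http://localhost:3500/v1.0"

# Save a key/value pair through the runtime; Dapr decides which backing
# store actually holds it, so the application code stays platform agnostic.
requests.post(
    f"{DAPR_URL}/state/statestore",
    json=[{"key": "order-42", "value": {"status": "shipped"}}],
).raise_for_status()

# Read the value back through the same sidecar API.
resp = requests.get(f"{DAPR_URL}/state/statestore/order-42")
print(resp.json())  # {'status': 'shipped'}
```

Because the application only ever talks to the sidecar, swapping the local store for a cloud-hosted one is a component configuration change rather than a code change, which is the point of the runtime.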

By Scott Carey

Google Cloud Platform (GCP) is the most performant public cloud infrastructure-as-a-service (IaaS) provider for running online transactional processing (OLTP) workloads, but Amazon Web Services (AWS) remains the best value for the money.

That’s according to the 2021 Cloud Report from Cockroach Labs, the company behind the open source CockroachDB database, which recently raised a $160 million mega-round of funding.

“A winner was much harder to declare than in years past,” according to the authors of the report, as the gap on most metrics was razor thin.

By David Linthicum

A new study from Cloudreach and IDC titled “Cloud Trends 2021” (registration required) surveyed more than 200 CIOs. Questions focused on the COVID-19 pandemic’s effect on the use of cloud computing and digital transformation. Keep in mind that the sponsor has a dog in this hunt: They sell the technology.

Of course, it’s the usual “cloud is good,” “cloud is important” stuff you find in most other analyst reports. However, the number I found interesting is that 27.5 percent of respondents stated that large-scale migration to the public cloud was “essential for survival.” Hop in a time machine: Just five years ago, most enterprises considered cloud an option for consuming technology such as storage and compute, but not really essential. What changed?

By James Kobielus

Traditional physical offices have been disappearing from our work lives for many years. With pandemic-wracked 2020 receding into history, many sectors of the global economy have now experienced the pleasures and frustrations of working from home.

Emergence of hybrid physical-virtual work environments

We’ve now seen practically every big technology company from Google to VMware give up trying to bring employees back to traditional offices for the indefinite future. According to a recent enterprise survey by 451 Research, the emerging technology research unit of S&P Global Market Intelligence, 80 percent of companies have implemented or expanded universal work-from-home policies, and 67 percent plan to keep at least some work-from-home policies in place long term or permanently.
