Making a success of combining cloud and data center

Corporate data center infrastructure may have evolved only modestly over the past 10 or 20 years, but the way it is used has changed far more. Cloud services have shaken up expectations: ease of provisioning, on-demand resource management and pay-as-you-go pricing are now the norm. With the right tools, data centers should become more flexible and fluid in the future, as companies look to balance on-premises infrastructure and cloud resources to get the best of both.

New, more flexible management tools are making cloud and on-premises computing resources partially interchangeable. “On-premises computing has evolved as quickly as cloud services,” says Tony Lock, analyst at Freeform Dynamics. Previously it was fairly static, with infrastructure dedicated to specific applications.

“IT has been transformed in 10 years: it is now very easy to scale most IT platforms.”

Tony Lock, Analyst, Freeform Dynamics

“IT has been transformed in 10 years: it is now very easy to scale most IT platforms. We no longer need to shut everything down over a weekend to install new equipment. All you have to do is bring the new hardware into the data center and plug it in for it to work.”

Another change seen in the data center is virtualization. Users can easily move applications from one physical server to another, which greatly improves portability, especially with the spread of virtual networks, or SDN (software-defined networking), over the past five to ten years, says Tony Lock.

The rapid development of automation tools that manage both on-premises and cloud resources is turning the idea of pooling the two types of resources into a practical reality.

In June, HashiCorp announced Terraform version 1.0, signaling that its infrastructure management platform was mature and stable enough for production use, although many customers had already deployed it without waiting for that milestone.

With this infrastructure-as-code (IaC) tool, users describe their infrastructure in declarative configuration files that define its target state. These files act as blueprints from which Terraform can provision the infrastructure for a given application or service efficiently and repeatably.

It is also possible to automate complex infrastructure changes with little human interaction, simply by updating the configuration files. Terraform’s strength is that, in addition to on-premises infrastructure, it can manage resources spread across multiple cloud providers, including AWS, Azure and Google Cloud Platform.

Since Terraform configurations are not tied to a specific cloud, they define the same application environment everywhere. You can move or copy the application very easily.
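To make this concrete, here is a minimal sketch of what such a declarative description can look like. Terraform configurations are normally written in HCL, but the tool also accepts JSON-syntax files (*.tf.json), so the example below generates one from a short Python script; the provider, region and bucket name are placeholders chosen for illustration, not details from the article.

```python
# Minimal sketch: emitting a declarative Terraform configuration as JSON.
# Terraform reads *.tf.json files with the same structure as HCL, so this
# script simply writes out a "target state" description: one AWS provider
# and one (hypothetical) S3 bucket.
import json

config = {
    "terraform": {
        "required_providers": {
            "aws": {"source": "hashicorp/aws", "version": "~> 5.0"}
        }
    },
    "provider": {"aws": {"region": "eu-west-1"}},   # placeholder region
    "resource": {
        "aws_s3_bucket": {
            "project_archive": {"bucket": "example-project-archive"}
        }
    },
}

# Write the configuration; `terraform init` and `terraform apply` would then
# reconcile the real infrastructure with this declared state.
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```

Swapping the provider block for another cloud’s resources, while keeping the same declarative workflow, is what makes the approach portable in the way described above.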

“The idea of a programmable infrastructure is not without appeal,” says Tony Lock. “It is developing, but it still has some way to go before it matures. It is part of a much broader push toward automation: IT is becoming more and more automated. Freed from the repetitive, redundant tasks that software now handles well, IT teams can focus on other areas of greater value to the business.”

Cloud-native storage

Storage has also become more flexible, at least for software-defined storage (SDS) systems designed to run on a cluster of servers rather than on proprietary hardware. In the past, applications were often tied to dedicated storage networks.

SDS storage is easy to expand: it is usually enough to add nodes to the storage cluster.

Because this type of system is software-driven, it is easy to provision and manage through APIs or with infrastructure tools such as Terraform.

The sophistication and flexibility of SDS is illustrated by WekaIO’s Limitless Data Platform, which has been deployed in several supercomputing projects. The WekaIO platform presents a unified namespace to applications and can be deployed on dedicated storage servers or in the cloud.

If necessary, organizations can move data from their on-premises cluster to the public cloud and provision a Weka cluster there. According to WekaIO, any file-based application can then run in the cloud without further modification.

One of WekaIO’s key functions is the ability to take a snapshot of the entire environment, including all the data and metadata associated with the file system, and transfer it to an object store such as Amazon S3.

A company can thus build and use a storage system for a specific project, then take a snapshot at the end of the project and keep it in the cloud, freeing the hosting infrastructure for other purposes. If the project resumes, the file system can be recreated as it was from the snapshot, WekaIO explains.
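As a simple illustration of this snapshot-to-object pattern, and not of WekaIO’s actual tooling (which handles the snapshot and transfer natively), a sketch using the AWS SDK for Python might look like the following; the bucket and file names are hypothetical.

```python
# Illustrative sketch of the snapshot-to-object pattern: a packaged snapshot
# is pushed to an S3 bucket when a project is paused and pulled back when it
# resumes. This stands in for the object-store side of the workflow only.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-snapshot-archive"   # hypothetical bucket name


def archive_snapshot(local_path: str, key: str) -> None:
    """Upload a packaged snapshot so the on-premises capacity can be freed."""
    s3.upload_file(local_path, BUCKET, key)


def restore_snapshot(key: str, local_path: str) -> None:
    """Pull the snapshot back down when the project resumes."""
    s3.download_file(BUCKET, key, local_path)


if __name__ == "__main__":
    archive_snapshot("project-alpha-snapshot.tar", "snapshots/project-alpha.tar")
    # ... later, when the project restarts ...
    restore_snapshot("snapshots/project-alpha.tar", "project-alpha-snapshot.tar")
```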

“The very low headline prices some cloud platforms charge for storage alone are often offset by fairly high egress fees.”

Tony Lock, Analyst, Freeform Dynamics

But this scenario has a major downside: the potential cost, not of storing data in the cloud, but of accessing it. Large cloud providers such as AWS charge a fee for retrieving data.

According to Tony Lock, “The very low headline prices some cloud platforms charge for storage alone are often offset by fairly high egress fees. Extracting the data to examine and use it can be very expensive. Keeping data stored costs you very little, but scanning and using it quickly becomes expensive. Some plans include an active archive with no egress fees, but at a higher price.”
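As a rough, hypothetical illustration of the effect Tony Lock describes, assume round-number prices of $0.023 per GB per month for storage and $0.09 per GB for egress; these are placeholders for the sake of the arithmetic, not any provider’s actual tariff.

```python
# Back-of-the-envelope comparison of storage versus egress cost for 100 TB,
# using assumed round-number prices (not any provider's actual tariff).
STORAGE_PER_GB_MONTH = 0.023   # assumed $/GB per month stored
EGRESS_PER_GB = 0.09           # assumed $/GB retrieved

data_gb = 100 * 1000           # 100 TB expressed in GB

monthly_storage = data_gb * STORAGE_PER_GB_MONTH
one_full_retrieval = data_gb * EGRESS_PER_GB

print(f"Monthly storage bill: ${monthly_storage:,.0f}")    # ~$2,300
print(f"One full retrieval:   ${one_full_retrieval:,.0f}")  # ~$9,000
```

On those assumed figures, reading the whole data set back just once costs roughly four months’ worth of storage, which is exactly the trade-off that active-archive plans with no egress fees are meant to address.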

Wasabi Technologies has moved away from this model and offers alternative pricing schemes, including a flat monthly fee per terabyte.

Integrated management

If IT infrastructure continues to become more flexible, resilient and adaptable, companies may no longer need to keep expanding their data center capacity. With the right management and automation tools, they will be able to manage their infrastructure dynamically and efficiently, repurposing on-premises IT for other uses and drawing on cloud services to extend their resources.

To get to that point, one capability still needs to improve: pinpointing the source of the problem when an application slows down or crashes, a task that can be difficult in a complex distributed system. This will come as no surprise to organizations running a microservices architecture. Tony Lock believes that new techniques based on machine learning could help.

“If you can move everything all the time, how do you maintain good data governance and run only the right things in the right places, with the right security?”

Tony Lock, Analyst, Freeform Dynamics

He continues: “Monitoring has improved a great deal; the question now is how to surface what matters in the telemetry. That is where machine learning starts to pay off. Root cause analysis is one of the great IT challenges, and machine learning simplifies it considerably.”

Another difficulty concerns data management: how do you ensure that the governance and security policies attached to data follow workloads as they move, and remain in force? “If you can move everything all the time, how do you maintain good data governance and run only the right things in the right places, with the right security?” asks Tony Lock.

The tools exist, including the open source Apache Atlas project, positioned as a single solution for all phases of data and metadata management. Atlas was initially designed for Hadoop data ecosystems but integrates with other environments. For businesses, the dream of mixing on-premises and cloud assets, then moving them back and forth without restriction, finally seems to be becoming a reality.
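As a hedged sketch of how such a governance catalog can be driven programmatically, the snippet below queries Apache Atlas’s v2 REST API (basic search) for entities carrying a given classification; the host, credentials and the “PII” classification name are assumptions made for illustration, not details from the article.

```python
# Sketch: asking Apache Atlas which entities carry a given classification,
# via its v2 REST API (basic search). Host, credentials and classification
# name below are placeholders for a default-style Atlas deployment.
import requests

ATLAS_URL = "http://atlas.example.com:21000/api/atlas/v2"  # assumed host/port
AUTH = ("admin", "admin")                                   # placeholder credentials


def find_classified_entities(classification: str, limit: int = 25) -> list:
    """Return entity headers tagged with the given classification."""
    resp = requests.get(
        f"{ATLAS_URL}/search/basic",
        params={"classification": classification, "limit": limit},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])


if __name__ == "__main__":
    # List entities tagged with a hypothetical "PII" classification.
    for entity in find_classified_entities("PII"):
        print(entity.get("typeName"), entity.get("displayText"))
```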
