
Changes, Part 2 - So, What Do We Do With That?

Feb 26, 2016

Last week, in part 1 of this three-part series, we discussed how Digitization* is driving change. We also covered how Changes (cue** David Bowie) are manifesting themselves in the shifting shape of Applications, which are the raison d'être for the Data Center.

Deconstructing Apps: Scale

As traditional apps are deconstructed into multiple microservices running on containers, those containers represent a significant increase in new addressable endpoints on the network. As this deconstruction progresses across dozens, hundreds or even thousands of traditional apps in a given enterprise, the scaling challenges pile up (a quick back-of-the-envelope sketch below puts rough numbers on it). Yeah, ok, so what do we do with that?
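
To put a rough number on that endpoint growth, here's a back-of-the-envelope sketch in Python. All of the per-app figures are hypothetical assumptions chosen only to illustrate the shape of the problem, not data from this post:

```python
# Hypothetical figures, purely to illustrate the endpoint-scale point.
apps = 500                       # traditional apps in a given enterprise (assumed)
vms_per_app = 3                  # a typical monolith footprint today (assumed)
microservices_per_app = 15       # services after deconstruction (assumed)
containers_per_service = 4       # replicas per microservice (assumed)

endpoints_before = apps * vms_per_app
endpoints_after = apps * microservices_per_app * containers_per_service

print(f"addressable endpoints before: {endpoints_before:,}")
print(f"addressable endpoints after:  {endpoints_after:,}")
print(f"growth factor:                {endpoints_after / endpoints_before:.0f}x")
```

Even with conservative assumptions like these, the network goes from tracking a few thousand endpoints to tens of thousands, each of which needs addressing, policy and visibility.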

Ghosts and Invisibility Cloaks: Telemetry

Let's think some more about the implications beyond scaling issues. As the traditional application is decomposed into multiple microservices, containers can come up and go away very rapidly. They're faster than VMs. Consider an event, such as a microservice starting, stopping or moving, or a security attack on one of those mobile endpoints, for example. If sampling occurs every 5 milliseconds (or packets, or whatever...pick an incremental unit), but an event starts at 2 and stops at 4, what happens?

It is undetectable.

It's a ghost.

It does not exist.

How do you troubleshoot or manage a ghost you can't see or measure?  You cannot.

This type of event, which is very real in terms of the problems it may create, is essentially invisible as far as existing tools are concerned. In this scenario, sampling ain't gonna cut it. Consequently, we need telemetry that tells us what's occurring continuously, in real time.
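
To make the ghost concrete, here's a tiny Python sketch of that 5-millisecond example. It's a toy model, not any particular telemetry product: a periodic poller looks at the system every 5 ms, while an event-driven (streaming) source reports state changes as they happen.

```python
# Toy model of the 5 ms example above; all times are in milliseconds.
SAMPLE_PERIOD_MS = 5
event_start_ms, event_end_ms = 2, 4          # the "ghost": starts at 2, gone by 4

def sample_instants(horizon_ms, period_ms):
    """Instants at which a periodic poller looks at the system."""
    return range(0, horizon_ms + 1, period_ms)

def seen_by_sampler(start, end, period_ms, horizon_ms=20):
    """True only if some polling instant happens to land inside the event."""
    return any(start <= t <= end for t in sample_instants(horizon_ms, period_ms))

def seen_by_streaming(start, end):
    """Event-driven telemetry: the endpoint pushes every state change,
    so nothing can fall between polls."""
    return True

print("periodic sampling sees the event:  ", seen_by_sampler(event_start_ms, event_end_ms, SAMPLE_PERIOD_MS))
print("streaming telemetry sees the event:", seen_by_streaming(event_start_ms, event_end_ms))
```

The poller looks at t = 0, 5, 10... and the event lives entirely between two looks, so it prints False. That's the ghost.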

Ok, so what do we do with that?

Get On The Bus: Capacity

Remember the phrase "the network is the computer"? Over the years, there have been many arguments as to why that vision is becoming realized, and we now see it taking yet another step. As processes become distributed across containers, some traffic that was once perhaps contained within the OS itself will now be distributed across the network. What was once only in the server is now truly distributed across blades, racks, aisles, data centers, as well as private and public clouds. The network essentially becomes the IPC (inter-process communication) bus of the computer. (Check here to see what a really fast bus looks like in action.) These concepts layer on top of the increasing endpoint density discussed above to place additional requirements on the network in terms of speed, performance and capacity.
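
A quick way to feel what "the network as IPC bus" means is to compare a plain in-process function call with the same request made over a socket. The sketch below is deliberately minimal: it uses only the loopback interface (so it understates the cost of real fabric hops), and the port number and call counts are arbitrary assumptions made for the example.

```python
import socket
import threading
import time

def in_process_call(x):
    """The 'monolith' case: a plain function call inside one OS process."""
    return x + 1

def echo_server(port):
    """A toy service reached over TCP, standing in for a remote microservice."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(64)
        if not data:
            break
        conn.sendall(data)          # echo the request straight back
    conn.close()
    srv.close()

PORT = 50007                        # arbitrary local port, chosen for this sketch
threading.Thread(target=echo_server, args=(PORT,), daemon=True).start()
time.sleep(0.2)                     # give the server a moment to start listening

N = 10_000

t0 = time.perf_counter()
for i in range(N):
    in_process_call(i)
t_local = time.perf_counter() - t0

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", PORT))
t0 = time.perf_counter()
for _ in range(N):
    cli.sendall(b"ping")
    cli.recv(64)                    # wait for the echo before sending the next request
t_net = time.perf_counter() - t0
cli.close()

print(f"{N} in-process calls:     {t_local * 1e3:.2f} ms")
print(f"{N} loopback round trips: {t_net * 1e3:.2f} ms")
```

Even over loopback, the round trips are orders of magnitude slower than the in-process calls, which is exactly why speed, latency and capacity in the fabric become first-order concerns once that traffic leaves the server.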

Oh, and let's not forget that with all that increased data comes the need to store it and move it, and to do so, increasingly, with distributed storage technology. This means far more data moving between compute nodes over Ethernet/IP (in addition to Fibre Channel). Another thing: all those IoT devices that are popping up like mushrooms, or are reproducing like rabbits, or are multiplying like rabbits on 'shrooms (?), are creating data that needs to be stored somewhere as well. All of this combines to generate significant storage growth, much of which will run over the network, also driving the need for additional bandwidth. (Additionally, once that data is collected and stored, somebody probably wants to do some analysis and glean some useful info from it, so we've got yet more activity in the form of Big Data...)
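
As a rough illustration of how distributed storage turns application writes into east-west network traffic, here is one more back-of-the-envelope calculation. The IOPS, write size and replication factor are all hypothetical assumptions, not figures from the post:

```python
# Hypothetical figures, purely for illustration.
writes_per_sec = 50_000          # application write IOPS across the cluster (assumed)
avg_write_kb = 16                # average write size in KB (assumed)
replication_factor = 3           # each write is stored on 3 nodes (assumed)

ingest_mbps = writes_per_sec * avg_write_kb * 8 / 1_000    # clients -> primary nodes
replica_mbps = ingest_mbps * (replication_factor - 1)      # primaries -> replicas (east-west)

print(f"client ingest:       {ingest_mbps:,.0f} Mb/s")
print(f"replication traffic: {replica_mbps:,.0f} Mb/s across the fabric")
```

In this made-up example the fabric carries twice as much replication traffic as the applications actually wrote, before any IoT or analytics traffic is layered on top.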

So, what do we do with that?

Supercalifragilisticonvergence: Intelligence

So, at a high level, we see compute, network and storage meld into more of a singular thing (umm, though no, not a singularity) that can be set up and torn down rapidly through self-service. For that to happen effectively, we need to ensure, at a more tactical level, that we are mindful of how discrete technologies, as they converge upon one another, impact one another's requirements, whether we're talking about hyperconvergence or whatever.

As an example, as more storage moves across the network, that network must be low latency and deterministic if it is to deliver lossless transmission of storage traffic. So there must be a higher level of intelligence in the network, in things like buffering algorithms and flow control, if this is to be done in a cost-effective manner that works (a toy sketch of the idea follows below). Again, an appropriate response is: So, what do we do with that?
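
Here is that toy illustration of why buffering and flow-control intelligence matters. It's a deliberately simplified, single-queue simulation, not how a real switch implements flow control; it just shows that pausing a bursty sender before the buffer fills is what makes lossless transport possible. All the numbers are assumptions chosen for the example.

```python
def simulate(buffer_cap, pause_threshold=None, steps=200):
    """Time-stepped model of one switch buffer fed by a bursty sender."""
    buffer_level = 0
    dropped = 0
    paused = False
    drain_rate = 2                               # packets the switch forwards per step
    for t in range(steps):
        burst = 5 if (t // 20) % 2 == 0 else 0   # on/off bursty sender
        if pause_threshold is not None:
            if buffer_level >= pause_threshold:
                paused = True                    # tell the sender to stop
            elif buffer_level <= pause_threshold // 2:
                paused = False                   # resume once the queue drains
            offered = 0 if paused else burst
        else:
            offered = burst                      # no flow control: sender never slows down
        space = buffer_cap - buffer_level
        accepted = min(offered, space)
        dropped += offered - accepted            # anything beyond the buffer is lost
        buffer_level += accepted
        buffer_level = max(0, buffer_level - drain_rate)
    return dropped

print("drops without flow control:", simulate(buffer_cap=20))
print("drops with pause threshold:", simulate(buffer_cap=20, pause_threshold=15))
```

Without the pause threshold, the bursts overflow the buffer and packets are dropped; with it, the sender is throttled in time and nothing is lost, which is the behavior lossless storage traffic requires.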

I've called out several trends and challenges associated with them, but haven't yet provided any answers in terms of how to address them. So, return to this blog next week for #3 in the series, where I will share with you what we do with that.

* BTW, here is a nice summary and white paper of interesting research, not just conjecture, on digitization.

** Yes, I know this is a technology blog but I did mean 'cue' not 'queue'.

Photo credits: commons.wikimedia.com, en.wikipedia.org, youtube (see bus link)

Tags: Big Data, Cisco ACI, Data Center, Cloud, SDN, Analysis, Switching, Cisco Nexus, Containers
