Hyperscale, Hyper-V, and OpenStack are my topics today! And of course SDE!
You have not heard about SDE?
Well, as a start, read Steve Duplessie's comments here!
Now, with that put into perspective, let's look at some recent exciting news in the realm of cloud storage: The OpenStack Open Source Cloud Mission:
"... to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable."
IBM is very actively participating and contributing to that initiative and if you can spend half an hour, listen to this presentation delivered at the recent OpenStack Summit conference!
Now, hyperscale storage is a slightly different beast; the term is probably best used to describe the architectures and challenges faced by the companies that actually provide these cloudy storage services: "The term hyperscale storage is coming into vogue to describe systems capable of rapid, efficient expansion to handle massive quantities of data from Web-serving, database, data analysis, high-performance computing and other especially busy applications."
Please read the complete article in searchstorage here!
And obviously, part of the hyperscale storage requirements will increasingly be addressed by high-performance flash-based systems. After IBM rolled out a family of NAND memory-based systems last month (see my previous post), there was an interesting launch of Pure Storage last week!
And two more things: reminiscent of IBM's DFSMS (Data Facility System Managed Storage), first introduced in 1989 (!!), Windows now implements a similar concept to enable intelligent, OS-based management of back-end storage resources: Windows Storage Spaces! They even use some of the same terminology, Storage Pools being one example!
Lastly, around the topics of backup and archiving (which get confused all the time), I found this short ESG video blog to be very enlightening and easy to remember: backup is about RECOVERY, archive is about DISCOVERY...see for yourself here!
Many clients have moved to disk-based backup solutions using deduplication technologies to optimize/minimize the use of rotating disk for backup data. Deduplication uses hash-based signatures to mark files or segments of data as unique, and there are often discussions about the likelihood of so-called hash collisions, i.e., two different segments/files producing the same signature value. Read this article here about the underlying math and likelihood of such collisions!
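If you want to play with the math yourself, here is a minimal Python sketch of the standard "birthday bound" approximation for collision probability. This is my own illustration, not taken from the article, and the segment counts and digest sizes are hypothetical examples:

```python
from math import expm1

def collision_probability(n_segments: int, hash_bits: int) -> float:
    """Birthday-bound approximation: probability that at least two of
    n_segments distinct segments share the same hash value,
    p ~= 1 - exp(-n(n-1) / 2^(bits+1)).
    Uses expm1 to stay accurate when the probability is tiny."""
    exponent = -n_segments * (n_segments - 1) / 2.0 ** (hash_bits + 1)
    return -expm1(exponent)

# Hypothetical dedup store with one billion unique segments:
# with a 256-bit digest (e.g. SHA-256) the collision risk is
# astronomically small; with a 64-bit digest it is already a
# few percent -- which is why dedup products use strong hashes.
print(f"256-bit hash: {collision_probability(10**9, 256):.3e}")
print(f" 64-bit hash: {collision_probability(10**9, 64):.3e}")
```

The takeaway matches the article's point: with a cryptographic-strength digest, the chance of an accidental collision is far below the chance of an undetected disk error.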
Off-topic: we have not seen the sun here for weeks, almost months, struggling through the worst spring on record for many parts of central Europe, including Switzerland. So it feels good to be reminded that my country was recently voted as the #1 place to be born.