Sunday, November 20, 2011

Storage Efficiency Key To Managing Fast-Growing Data

While pondering the advantages of my products (http://www.linkedin.com/pub/lucy-y-l/24/510/aa5), this article came into my sight: http://www.crn.com/news/storage/231902774/storage-efficiency-key-to-managing-fast-growing-data.htm


Perfect! That is exactly the problem my product is trying to solve: the same data in less storage, through more efficient use of the storage, which also means less maintenance.

Tuesday, November 1, 2011

Object Storage

Obviously, every company in the storage space is pursuing some kind of solution in the "object storage" area. I view this as an unavoidable trend. The question is: how do you do it, and how do you gain market share?

First, start with: what is the problem?

Then: how do you solve it?

To proceed, start by reading some articles in this area. A good reference is here: http://www.zdnet.co.uk/blogs/back-office-10012454/object-storage-inches-closer-to-mainstream-10024505/

Monday, September 12, 2011

Designing Big Data Storage Infrastructures

http://www.storage-switzerland.com/Articles/Entries/2011/6/16_Designing_Big_Data_Storage_Infrastructures.html

Friday, June 10, 2011

War between x86 and ARM

So, is there a war? Who is the winner? While pondering this, an article drew my attention: http://www.brighthub.com/computing/hardware/articles/107133.aspx.

Several factors to consider:

- Foremost, the instruction sets. That is really what the difference between the two comes down to; ARM is simpler. Does "simpler" ring a bell for cost? Cheaper? Some specifics: an ARM chip at 1 GHz can fit into a 2-watt TDP, while Atom struggles to dip below a 5-watt TDP. However, 64-bit is not available on ARM yet. ARM is also limited to Android, iOS, and other embedded operating systems, while x86 is widespread across Windows, OS X, and Linux.

- Existing playgrounds. On one side, the Intel- or AMD-based boxes; on the other, smartphones and tablets.

- Licensing, and hence popularity. x86 is Intel's, period, aside from the old licenses held by AMD and VIA; ARM is a totally different story, licensing its designs broadly. As a direct result, the architecture is widespread.

Winner? Not yet. Historically the two have focused on different market sectors. Things can change, though: the new Atom can run Android, while NVIDIA is making laptop chips based on ARM.
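As a quick aside, here is a minimal sketch of how you might check which of the two architecture families a given machine runs, using only Python's standard library (the family classification below is my own rough grouping, not an exhaustive one):

```python
import platform

def describe_arch():
    """Report the CPU architecture this interpreter is running on."""
    machine = platform.machine()  # e.g. 'x86_64', 'i686', 'armv7l', 'aarch64'
    if machine.lower().startswith(("arm", "aarch")):
        family = "ARM"
    elif machine in ("x86_64", "AMD64", "i386", "i686"):
        family = "x86"
    else:
        family = "unknown"
    return machine, family
```

Running `describe_arch()` on a 2011-era netbook would typically report x86, while a smartphone would report ARM, which is exactly the market split described above.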

We will see.

Sunday, June 5, 2011

Six IT Challenges

1. Virtualization is a challenge
2. Large Home Directory challenge
3. Retention / Archive challenge
4. Making Big Storage Simple and Easy to Use
5. Making Big Storage Reliable
6. Making Big Storage Affordable


Tuesday, February 8, 2011

Isilon OneFS

http://www.isilon.com/onefs-operating-system

Detailed specs: http://www.scaleoutnas.com/scale_out_nas_faq.html

Summary:


Rob Anderson is Isilon's consultancy manager for EMEA. He has come to this position after being involved in writing Isilon's operating software and, indeed, holds a couple of software patents, as a result of this work. He knows, as we might say, whereof he speaks.
Isilon's idea of a highly-performant NAS system is to cluster NAS nodes together using Infiniband and to have nodes function as peers in producing a virtualised and automatically load-balancing single pool of NAS storage and processing resource.
The company reckons it has built the first industrial-strength clustered storage system in existence. In comparison, IBM's GFS is slow, has performance issues, lacks industrial-strength reliability, and is hardly ever seen in the marketplace.
Isilon puts some of its performance advantage down to its use of Infiniband for clustering nodes together. All back-end traffic between the nodes is Infiniband-based. It has a latency one-twentieth of Ethernet, and Infiniband's bandwidth roadmap is going to deliver bandwidth increases much faster than Isilon nodes (currently maxed out at 88 per cluster) can soak up.
Clearly, inter-nodal traffic could choke Infiniband if there were enough nodes in an Isilon cluster. This is because one node deals with a single file request from an accessing server. Files and associated parity data are striped across nodes. So, to deliver a file, the requested node has to fetch the pieces, the stripes, from the other nodes. Have 88 nodes each fetching stripes from up to nine nodes simultaneously, and you have a lot of Infiniband traffic. But, Rob Anderson says, nowhere near enough to even begin to throttle performance. Isilon clusters could handle, he asserts, "up to 300,000 concurrent dial-up connections."
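The striping described above can be sketched as follows. The fixed stripe size and round-robin placement here are illustrative assumptions, not Isilon's actual on-disk layout:

```python
def stripe(data: bytes, num_nodes: int, stripe_size: int = 4):
    """Split data into fixed-size stripes and assign them round-robin to nodes."""
    placement = {n: [] for n in range(num_nodes)}
    for i in range(0, len(data), stripe_size):
        chunk = data[i:i + stripe_size]
        placement[(i // stripe_size) % num_nodes].append(chunk)
    return placement

def reassemble(placement: dict) -> bytes:
    """Rebuild the file by fetching stripes back from the nodes in round-robin order."""
    queues = {n: list(chunks) for n, chunks in placement.items()}
    out, i = [], 0
    while any(queues.values()):
        q = queues[i % len(queues)]
        if q:
            out.append(q.pop(0))
        i += 1
    return b"".join(out)
```

The point of the article's traffic argument is visible in `reassemble`: every read touches many nodes, so all of that fetch traffic rides the Infiniband back end.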
"The Infiniband bandwidth roadmap is orders of magnitude bigger than what you would get from an EMC SAN or whatever."
Gigabit Ethernet clustering just doesn't give you the performance and future headroom that Infiniband does.
Anderson says: "NetApp uses Infiniband to cluster two nodes. When NetApp bought Spinnaker it then made a mistake. It tried to add features out of the Spinnaker product into ONTAP. But clustering can't be done that way; it has to be in the DNA of the system. NetApp's approach didn't work. Two years ago NetApp reversed direction. Dave Hitz (NetApp CEO) announced that Data ONTAP GX is a Spinnaker foundation with NetApp features added to it."
Anderson added this comment: "(Data ONTAP GX) is namespace organisation. It's not clustering. It's RAID behind the veil and can still take eight hours to rebuild a disk. There'll be performance problems downstream. It's a bandaid. It's a total kluge."
With Isilon, file data and parity data are striped across up to nine nodes. A failed disk can be rebuilt in 30 minutes to an hour. In effect, Isilon's striping technology renders RAID redundant.
Anderson says suppliers like Acopia 'do it in the switch layer. It's not rich, it's lightweight.' Again there will be performance problems downstream.
A virtualised pool of NAS resource requires the NAS nodes to be clustered for smooth performance scaling. It also requires N + 2 protection so that the system can recover from two failed disks and not just one. (NetApp's RAID DP provides protection against two disk failures.)
Isilon is working on N + 3 and N + 4 protection. The N + 1 and N + 2 protection schemes can apply to nodes, to folders, even to individual files.
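To make the protection idea concrete, here is a minimal N + 1 sketch using XOR parity. Real N + 2 and higher schemes use Reed-Solomon-style erasure codes; this deliberately simplifies to the single-failure case:

```python
def xor_parity(stripes):
    """Compute a parity stripe as the byte-wise XOR of all data stripes."""
    parity = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def rebuild(stripes, parity, lost_index):
    """Recover one lost stripe by XOR-ing the parity with every surviving stripe."""
    recovered = bytearray(parity)
    for idx, s in enumerate(stripes):
        if idx == lost_index:
            continue  # this stripe is the one that was lost; use survivors only
        for i, b in enumerate(s):
            recovered[i] ^= b
    return bytes(recovered)
```

Because rebuild reads only the surviving stripes plus parity, the work spreads across all nodes holding pieces of the file, which is why a rebuild can finish in well under the hours a single-disk RAID rebuild takes.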
In the Isilon scheme we can conceive of nodes as water tanks connected by a pipe (Infiniband). When a new (and empty) node is added to the cluster then the water finds a fresh level across all the tanks. In the same way the data stored on the existing nodes is spread out across the now-expanded cluster so that all nodes have the same data occupancy utilisation; automatic load-balancing of data storage.
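The water-tank analogy can be sketched as a rebalancing step. The even-spread policy below is an illustrative assumption about the goal, not Isilon's actual placement algorithm:

```python
def rebalance(nodes):
    """Even out block counts across nodes, as when an empty node joins the cluster.

    `nodes` maps node name -> list of blocks; the lists are modified in place.
    """
    total = sum(len(blocks) for blocks in nodes.values())
    target = total // len(nodes)
    for blocks in nodes.values():
        while len(blocks) < target:
            # pour a block from the currently fullest node into this one
            fullest = max(nodes.values(), key=len)
            blocks.append(fullest.pop())
    return nodes
```

After adding an empty node and calling `rebalance`, every node holds roughly the same number of blocks, just as the water finds a fresh level across all the tanks.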
Isilon also says that adding nodes increases I/O performance in the same way as adding lanes to a motorway or runways to an airport does. For added processing performance you can add processor-only nodes. For added capacity scaling you can add disk expansion units.
In Isilon's view it is a simple and clean design that is simple and clean to manage and very reliable and performant in use.





Sunday, February 6, 2011

What does cloud computing really mean?

Frankly, no one has the answer yet. Several years have passed, and everyone claims to be doing a "cloud" project. But what exactly is it? This article gives some actual examples: http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031?page=0,0.

However, what about the storage vendors? The internet equipment vendors? The service providers? The supply chain vendors? It is going to be a big war, and with several winners at the end, consumers will feel both the benefit and the pain.

I think whoever can provide the easiest-to-use and most secure cloud will be the winner. But again, what do "secure" and "easiest to use" actually mean?

Monday, January 10, 2011

Cloud OS

This is the trend, no question. Having worked with Solaris, Linux, QNX, and Cisco IOS, I have come to believe it.

Good reads on this:

      



The Future of the Operating System in Cloud Computing


Steve Jin, VMware R&D
This entry was reposted from DoubleCloud.org, a blog for architects and developers on virtualization and cloud computing.
An operating system (OS) is a piece of software. It manages the computer hardware and provides common services for various applications. With the rise of cloud computing, people may wonder whether the OS is still relevant and what role it will play in the future cloud.
Key Components of OS
There are different flavors of operating systems, from real-time OSes and desktop OSes all the way to mainframe OSes. The most recent flavor is the cloud OS.
In general, every OS has these common components:
- The kernel, which manages memory, processes, etc.
- Device drivers, which drive different hardware from different vendors.
- User interfaces, including command line shell and Window system.
- File system, which provides a hierarchical way to persist data.
- Security, which authenticates users and protects information.
Depending on the type of OS, some of these components may be missing or there may be extras. For example, an embedded OS may have no user interface, with everything controlled remotely. A desktop OS may have extra commonly used applications such as a calculator, a calendar, a browser, and so on.
Squeezed Sandwich
With virtualization, operating systems moved up in the software stack. The role of managing and abstracting hardware is relinquished to hypervisors underneath the virtual machines. So the OS is squeezed by virtualization from the bottom.
Long before virtualization, there have been concepts of software middleware including database, messaging, etc. These provide a higher level platform for applications, and result in a higher quality of products and higher developer productivity. Also, the rise of software virtual machines like JVM and .Net, have abstracted the OS services to a higher level of programming APIs. So the OS is less important from the application development point of view. It has been squeezed from the top as well.
In being squeezed, there is less “meat” in the sandwich today. As VMware CEO Paul Maritz pointed out at VMworld 2010, “The innovation in how hardware is coordinated today and the innovation in how services are provided to applications is no longer happening inside the operating system." In other words, the innovation happens below (virtualization) and above (middleware) today.
Why an OS Is Still Important in Cloud Computing
As I have said before, cloud computing is more an evolution than a revolution. Keeping the traditions is important for adoption. One of these traditions is the operating system.
Technically, an OS owns a very important asset: IP addresses. An IP address has two functions in networking: 1) helping route traffic in and out, and 2) identifying a particular operating system.
Hypervisors such as ESX do own IP addresses, but for management purposes not for computing workloads. Middleware and applications do not own IP addresses, but rather bind themselves with a particular service port as needed.
The second function, the IP as an identity, is critical. It's like a public interface. Unless we can move the IP up to the middleware or application layer, the OS will continue to be a critical component.
With IPv4, it's not practical to assign an IP to each application due to the limited number of IPv4 addresses available. With IPv6, it's all possible because there are plenty of addresses. But then the question becomes whether it's really needed, or whether the change buys you anything. For one thing, moving the IP attribute up to the middleware or application will totally change people's perceptions, which may hurt adoption.
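The point about middleware binding to a service port on an OS-owned IP, rather than owning an address itself, looks like this in a minimal sketch:

```python
import socket

def bind_service(host="127.0.0.1", port=0):
    """Bind a listening TCP socket; port=0 lets the OS pick a free port.

    The IP address belongs to the OS's network stack; the application
    merely claims a port on it, as the article describes.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, port))
    sock.listen(1)
    return sock
```

Calling `bind_service()` and inspecting `sock.getsockname()` shows the OS-assigned port; the application is reachable only through the OS's IP plus that port, which is exactly why the OS keeps its place in the stack.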
Is JEOS Enough?
Coming along with virtualization, Just Enough Operating System (JEOS) and virtual appliance (VA) concepts were born. The idea is that you can trim down an OS to the minimum required to support an application. In this comparison, think of the virtual machine as an application.
For one thing, you can significantly reduce the size of an operating system. The challenge is how to keep it as convenient as before; I personally haven't seen a perfect solution. In general, most people still aren't comfortable thinking of a virtual appliance as an application. They still look at it as an OS and have the same expectations (unless a new set of tools can help manage them in a similar way to applications). It may lead to a new wave of innovation as one of the next big things.
Diverging of OSes
Because the OS is used in different environments for different purposes, we can’t predict that they’ll all evolve in the same manner.  In the context of cloud computing, I think the future of OSes will diverge based on their purposes.
First, there is the OS for enterprises. As VMware CEO Paul Maritz said, the OS will continue to be a component in the stack. It is still an important and stable component in supporting the cloud infrastructure. Depending on the type of cloud, you may not need to know about the OS but it’s still critical for operations. Instead of innovating new features, the imperative work there is to make the OS reliable, convenient, and secure, and at the same time keep up with new CPU architectures.
Second, there is the OS for end users. Although some people think all you need is a browser, I prefer the convenience of a full-fledged desktop OS for work and home. You could build a similar desktop user experience within a browser, like Google Chrome OS, but it's still a desktop with features I am already familiar with. These OSes can live in the cloud and serve end users remotely. End users will need OSes outside the cloud as well; these may be bundled tightly with computer hardware rather than sold as independent software. For better mobility and easy management of personal data across various desktops, netbooks, smartphones, and set-top boxes, we will need a better way to synchronize them.




Sunday, January 9, 2011

Leadership

"The first responsibility of a leader is to define reality. The last is to say 'thank you'. In between the two, the leader must become a servant and a debtor. That sums up the progress of an artful leader." --- Max Depree, author of "Leadership Is an Art".