Catbird, the pioneer in leading-edge security and compliance solutions for virtual and cloud infrastructure, today announced that Catbird vSecurity ranked among the top three vendors in the VirtualizationAdmin.com Readers’ Choice Awards Security category, behind VMware and Cisco. Catbird was the top virtualization security company selected by the readers of VirtualizationAdmin.com, a leading virtualization resource site.
“Our Readers’ Choice Awards give visitors to our site the opportunity to vote for the products they view as the very best in their respective category,” said Sean Buttigieg, VirtualizationAdmin.com manager. “VirtualizationAdmin.com users are specialists in their field who encounter various solutions for virtualization at the workplace. The award serves as a mark of excellence, providing the ultimate recognition from peers within the industry.”
VirtualizationAdmin.com conducts bi-monthly polls to discover which product administrators prefer in a particular category of third-party solutions for virtualization environments. The awards draw a huge response per category and are based entirely on visitors’ votes. VirtualizationAdmin.com users can submit their votes for the current Readers' Choice Award poll in the site’s left-hand bar.
Wednesday, December 23, 2009
2010 Will Require Rethinking in Data Center Technology
The industry is at a key inflection point in data center technology and in how vendors and enterprises react to these technology changes. Server virtualization kick-started nothing less than a complete rethinking of how computing workloads are provisioned, managed and moved. In 2010, everything surrounding the CPU resources (memory, storage, networking and security services) will require rethinking, too.
At the core of this shift is the notion of provisioning workloads from 'pools' of resources. This started with a single CPU core being virtualized, was extended to multi-core CPUs, and then to several multi-core CPUs in a cluster, where workloads can be initiated, moved and torn down on any CPU within the cluster. In the near future, this will be extended to larger clusters, to an entire data center and eventually across multiple data centers.
This fundamental change in how workloads are provisioned also impacts the rest of the physical infrastructure surrounding the servers: the networks, storage, memory and security services. Not all applications are equal, nor do they require the same amount or type of resources. So how do you account for this in a highly mobile, flexible and non-deterministic virtualized environment? One choice is to statically over-provision physical resources to account for all possible peaks from any workload. For networks, this would mean dedicating many individual connections and large amounts of bandwidth to each server so that it can handle any workload thrown at it. This is not feasible because of cost, complexity and, perhaps most importantly, the inability to predict what resources will be needed in the future. As VM density increases beyond 30-to-1 and traffic fluctuates over time, especially across resource-intensive applications such as database and messaging systems, it becomes increasingly difficult to account for all possible workload scenarios.
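To make the over-provisioning problem concrete, here is a small back-of-the-envelope sketch in Python. The host count and per-VM bandwidth figures are illustrative assumptions, not numbers from the article; the point is simply how quickly worst-case sizing outruns what a pooled approach would need.

```python
# Back-of-the-envelope comparison of static over-provisioning versus
# pooled, on-demand allocation of network bandwidth.
# All inputs are illustrative assumptions.

HOSTS = 40                 # physical servers in the cluster (assumed)
VMS_PER_HOST = 30          # the 30-to-1 density mentioned above
PEAK_GBPS_PER_VM = 1.0     # worst-case burst a single VM might demand (assumed)
AVG_GBPS_PER_VM = 0.1      # typical sustained demand per VM (assumed)

def static_overprovision():
    """Size every host for the worst case of every VM bursting at once."""
    per_host = VMS_PER_HOST * PEAK_GBPS_PER_VM
    return HOSTS * per_host

def pooled_provision(headroom=1.5):
    """Size a shared pool for aggregate typical demand plus headroom,
    relying on the fact that all VMs rarely peak at the same time."""
    aggregate = HOSTS * VMS_PER_HOST * AVG_GBPS_PER_VM
    return aggregate * headroom

if __name__ == "__main__":
    print(f"Static worst-case provisioning: {static_overprovision():.0f} Gb/s")
    print(f"Pooled provisioning:            {pooled_provision():.0f} Gb/s")
```

Under these assumptions the worst-case sizing demands well over six times the aggregate bandwidth of the pooled approach, and the gap widens as density grows.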
So let's consider making the underlying physical infrastructure completely flexible, so that resources can be allocated from a 'pool' where and when necessary, much as CPU is treated today. The industry is approaching this idea in several ways, and vendors are working on pieces of the overall solution: for example, combining distributed local disk drives into a common pool of storage, or extending addressable memory within a given server or across clusters of servers. Networking companies are developing standards that allow distributed virtual switches to act as if they were part of one large, flat, layer 2 network. Others are working on delivering I/O (network and storage connections) as virtual functions from a common resource pool, making that I/O, including its bandwidth, identity and policy, flexible and mobile along with the VMs it serves.
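The sketch below models the last idea in the paragraph above: I/O carved out of a shared pool that follows a VM when it moves. The class names, fields and methods are hypothetical, chosen only to illustrate that bandwidth, identity and policy belong to the allocation rather than to a fixed physical adapter.

```python
# Illustrative model of I/O served as virtual functions from a shared pool.
# Every name here is hypothetical; the point is that bandwidth, identity
# (e.g. a MAC address) and policy travel with the VM, not with a fixed NIC.
from dataclasses import dataclass

@dataclass
class VirtualIOFunction:
    vm: str                # VM this virtual NIC/HBA is attached to
    bandwidth_gbps: float  # share of the pool reserved for it
    identity: str          # e.g. MAC address or WWN, constant across moves
    policy: str            # e.g. QoS class or security zone
    host: str              # physical server currently serving the function

class IOPool:
    def __init__(self, total_gbps):
        self.total_gbps = total_gbps
        self.allocations = []

    def allocate(self, vm, gbps, identity, policy, host):
        used = sum(f.bandwidth_gbps for f in self.allocations)
        if used + gbps > self.total_gbps:
            raise RuntimeError("pool exhausted")
        fn = VirtualIOFunction(vm, gbps, identity, policy, host)
        self.allocations.append(fn)
        return fn

    def migrate(self, fn, new_host):
        # The allocation, identity and policy move with the VM;
        # only the physical attachment point changes.
        fn.host = new_host
        return fn

pool = IOPool(total_gbps=80)
vnic = pool.allocate("vm-42", gbps=2.0, identity="00:1c:42:aa:bb:cc",
                     policy="gold", host="server-01")
pool.migrate(vnic, "server-07")   # live-migration-style move: identity and policy intact
```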
Experts say this harks back to the mainframe days, and in many ways that is true. However, the game has changed in a fundamental way: all of this is now achievable on a $2,000 server. The next step is to allow these servers to connect to any network or storage resource on demand, from any vendor, across an entire data center, managed by a platform that is enhanced with downloadable plug-ins and applications providing visibility, control and automation of the available pools of resources.
About the Author
Craig Thompson, Vice President, Product Marketing
Craig brings diverse experience in corporate management, product marketing and engineering management to Aprius. He has held various senior marketing roles in data communications, telecommunications and broadcast video, most recently with Gennum Corporation, based in Toronto, Canada. Prior to that, Craig was Director of Marketing for Intel's Optical Platform Division, where he was responsible for the successful ramp of 10 Gb/s MSAs into the telecommunications market. Craig holds a Bachelor of Engineering with Honours from the University of New South Wales in Sydney, Australia, and a Master of Business Administration from the Massachusetts Institute of Technology.
DataCore Software Super-Sizes Virtual Disks with Its Latest Storage Virtualization Software Release
DataCore Software, a leading provider of storage virtualization, business continuity and disaster recovery software solutions, has once again responded quickly to market demands, this time by stretching the size of its virtual disks from 2 terabytes (TB) to 1 petabyte (PB).
"Rather than inch up to 4 or 16 TBs as others are considering, DataCore made the strategic design choice to blow the roof off the capacity ceiling with 1 Petabyte LUNs," commented Augie Gonzalez, Director of Product Marketing, DataCore Software. "But we're still frugal on the back-end, using thin-provisioning to minimize how much real capacity has to be in place day one."
Performance-wise, these immense virtual disks benefit from DataCore's 1 TB per node, 64-bit "mega-caches." "You can be big, and very fast too," added Gonzalez.
Please read more about this feature here: DataCore Supports up to a Terabyte (TB) of Cache.
Why so Big?
The huge virtual disk requirement arises from two industry trends. On the physical storage side, clients are eager to group multiple disk drives, each exceeding 1 TB, into Redundant Array of Inexpensive Disks (RAID) sets; a single RAID 5 set of eight 2 TB drives, for instance, already yields roughly 14 TB of usable capacity. With this release, DataCore storage virtualization nodes can control pools consisting of numerous RAID sets, each well over the previous 2 TB maximum.
Demand has also been strong from applications seeking to update and analyze very large datasets that in the foreseeable future will grow well past the 2 TB cap.
Extensible Architecture - Key to Rapid Response
The pace of DataCore's innovations reflects its uniquely extensible software architecture. Unencumbered by hardware dependencies, DataCore rapidly adapts its storage virtualization products to harness bigger, faster and cheaper equipment within weeks of the technology becoming generally available. That puts customers in the best position to take advantage of future hardware advancements.
The new software release is available immediately. DataCore customers under current maintenance contracts are eligible to receive the 1 PB software enhancements at no charge.
"Rather than inch up to 4 or 16 TBs as others are considering, DataCore made the strategic design choice to blow the roof off the capacity ceiling with 1 Petabyte LUNs," commented Augie Gonzalez, Director of Product Marketing, DataCore Software. "But we're still frugal on the back-end, using thin-provisioning to minimize how much real capacity has to be in place day one."
Performance-wise, these immense virtual disks benefit from DataCore's 1 TB per node, 64-bit "mega-caches. "You can be big, and very fast too," added Gonzalez.
Please read more about this feature here: DataCore Supports up to a Terabyte (TB) of Cache.
Why so Big?
The huge virtual disk requirement arises from two industry trends. As regards the physical storage pool, clients are eager to group multiple disk drives, each exceeding 1 TB, into Redundant Array of Inexpensive Disks (RAID) sets. With this release, DataCore storage virtualization nodes can control pools consisting of numerous RAID sets, each well over the previous 2 TB maximum.
Demand has also been strong from applications seeking to update and analyze very large datasets that in the foreseeable future will grow well past the 2 TB cap.
Extensible Architecture - Key to Rapid Response
The pace of DataCore's noteworthy innovations is indicative of its uniquely extensible software architecture. Unbridled by hardware, DataCore rapidly adapts its storage virtualization products to harness the power of bigger, faster, and cheaper equipment within weeks of the technology becoming generally available. That puts customers in the best position to take advantage of future hardware advancements.
The new software release is available immediately. DataCore customers under current maintenance contracts are eligible to receive the 1 PB software enhancements at no charge.