Monday, April 8, 2013

NAKIVO Provides NFR Licenses for VM Backup with File Recovery and Cloud Support to VMware Professionals

NAKIVO Inc. announced today that it has extended its NFR program through June 24th, allowing current VMUG members, VMware vExperts, VCPs, and VCIs to receive a two-socket Not For Resale license of NAKIVO Backup & Replication v3 free of charge.
"We launched the NFR program in Q1 with the mission to support the VMware community by providing VMware professionals with a free virtualization data protection solution for their home and non-production labs," said Bruce Talley, CEO and Founder of NAKIVO. "Hundreds of VMUG members, VMware vExperts, VCPs, and VCIs from all around the world have requested and received their free NFR licenses for NAKIVO Backup & Replication v2. Answering the persistent demand, we are happy to extend the program and offer free NFR licenses for NAKIVO Backup & Replication v3."
NAKIVO Backup & Replication offers a complete data protection feature set for virtualized environments, including an intuitive Web 2.0 UI, local and offsite VM backup and replication, file recovery from local, remote, and cloud backups, support for live applications & databases (Microsoft Active Directory, Microsoft Exchange, Microsoft SQL, Oracle, etc.), single-click integration with Amazon cloud, deduplication, compression, encryption, and advanced reporting.
"[NAKIVO's] fast VMware infrastructure discovery, advanced VM protection features, and extensive reporting capabilities deliver one of the highest levels of performance, reliability, and ease of use available in the VM backup and replication market today."
Eric Sloof, Blogger, www.ntpro.nl
"Nakivo has all the important features that are necessary to protect VMs in today’s virtualized environments."
Marco Broeken, Blogger, www.vclouds.nl
"NAKIVO Backup & Replication is loaded with new technology and capabilities to back up VMs locally, offsite, and to the cloud." And: "If you are a VCP, vExpert, VCI or even a VMUG member you have no excuse not to try out NAKIVO."
Mike Preston, Blogger, www.blog.mwpreston.net
The NFR license keys are available for non-production use only, including educational, lab testing, evaluation, training, and demonstration purposes. NFR licenses are available at www.nakivo.com/en/free_nfr_license.htm.

Video: The State of Converged Infrastructure

With all of the chatter around the promised benefits of Converged Infrastructure, it can still be tough to understand the real-world numbers and trends to determine whether or not it really makes sense for our own environments.

Zenoss ran a survey to research this topic, and the results were eye-opening. More businesses have already adopted Converged Infrastructure than we'd speculated, and many more are currently considering it. Current users also offered a wealth of insight from their experience selecting and implementing new systems (including which roles within the enterprise are responsible for managing them). 

This highly informative, hour-long webcast will reveal:
  • Which company sizes and industries have already adopted Converged Infrastructure and how they're integrating it with their existing systems, processes, and org charts 
  • Which key projects and requirements ultimately drove the business decision to deploy Converged Infrastructure and where their previous environments fell short
  • And important, first-hand advice shared by System Administrators, System Architects, IT Managers, and C-Level Leaders who have already made the switch
Watch as Stu Miniman, Senior Analyst at Wikibon, joins Zenoss VP of Engineering Alan Conley (formerly CTO of the Network Management Technology Group at Cisco) and Zenoss Director of Product Marketing Jen Darrouzet to report the full survey results and provide expert analysis of what it could all really mean for you.

Real-Time Processing Solutions for Big Data Application Stacks – Integration of GigaSpaces XAP and Cassandra DB

A Contributed Article By Yaron Parasol, Director of Products at GigaSpaces
GigaSpaces Technologies has developed infrastructure solutions for more than a decade and in recent years has been enabling Big Data solutions as well. The company's latest platform release - XAP 9.5 - helps organizations that need to process Big Data fast. XAP harnesses the power of in-memory computing to enable enterprise applications to function better, whether in terms of speed, reliability, scalability or other business-critical requirements. With the new version of XAP, increased focus has been placed on real-time processing of big data streams, through improved data grid performance, better manageability and end-user visibility, and integration with other parts of your Big Data stack - in this version, integration with Cassandra.
XAP-Cassandra Integration
To build a real-time Big Data application, you need to consider several factors.
First: can you process your Big Data in actual real time, in order to get instant, relevant business insights? Batch processing can take too long for transactional data. This doesn't mean you abandon batch processing; you still rely on it in many ways.
Second: can you preprocess and transform your data as it flows into the system, so that the relevant data is made digestible and routed to your batch processor, making batch more efficient as well? Finally, you also want to make sure the huge amounts of data you send to long-term storage are available for both batch processing and ad hoc querying, as needed.
XAP and Cassandra DB together can easily enable all of the above. With built-in event processing capabilities, full data consistency, and high-speed in-memory data access and local caching, XAP handles the real-time aspect with ease. Cassandra, meanwhile, is perfect for storing massive volumes of data, querying them ad hoc, and processing them offline.
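To make the division of labor concrete, here is a minimal sketch of the pattern described above: events are handled against an in-memory store (the XAP role), then handed to a background writer that archives them to long-term storage (the Cassandra role). All class and variable names here are illustrative, not part of either product's API.

```python
import queue
import threading

class RealTimeTier:
    """Toy model: hot-path updates hit an in-memory grid immediately;
    a background thread archives each event to a long-term store."""

    def __init__(self):
        self.grid = {}                  # in-memory data grid stand-in
        self.archive_queue = queue.Queue()
        self.archive = []               # long-term store stand-in
        threading.Thread(target=self._archiver, daemon=True).start()

    def process_event(self, key, value):
        # Hot path: immediate, consistent update in memory.
        self.grid[key] = value
        # Cold path: hand off for asynchronous archival.
        self.archive_queue.put((key, value))

    def _archiver(self):
        while True:
            item = self.archive_queue.get()
            self.archive.append(item)   # batch/ad hoc queries read here
            self.archive_queue.task_done()

store = RealTimeTier()
store.process_event("order-1", {"amount": 250})
store.archive_queue.join()              # wait for the background write
print(store.grid["order-1"])            # real-time read from memory
print(len(store.archive))               # same event, archived for offline use
```

The point of the split is that the hot path never waits on the durable store; the archive catches up asynchronously.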
Several hurdles had to be overcome to make the integration truly seamless and easy for end users: XAP's document-oriented model had to be mapped to Cassandra's columnar data model so that data can move between the two smoothly, and XAP's immediate consistency (without sacrificing performance) had to be reconciled with Cassandra's trade-off between performance and consistency. With Cassandra as the Big Data store behind XAP processing, both consistency and performance are maintained.
Together with the Cassandra integration, XAP offers further enhancements. These include: 
Data Grid Enhancements 
To further optimize your queries over the data grid, XAP now includes compound indices, which enable you to index multiple attributes. This way the grid scans one index instead of multiple indices to get query result candidates faster.
On the query side, new projections support enables you to query only for the attributes you're interested in instead of whole objects/documents. All of these optimizations dramatically reduce latency and increase the throughput of the data grid in common scenarios.
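A toy illustration of both ideas (this is not the XAP API, just the underlying concept): a compound index maps a tuple of attribute values to matching records, so a query on both attributes is a single lookup, and a projection returns only the requested attributes instead of whole objects.

```python
orders = [
    {"id": 1, "customer": "acme", "status": "open", "total": 120.0},
    {"id": 2, "customer": "acme", "status": "paid", "total": 80.0},
    {"id": 3, "customer": "globex", "status": "open", "total": 45.0},
]

# Build a compound index on (customer, status).
compound = {}
for o in orders:
    compound.setdefault((o["customer"], o["status"]), []).append(o)

# One index lookup answers the two-attribute query; no second index
# needs to be scanned or intersected.
hits = compound.get(("acme", "open"), [])

# Projection: return only the attributes the caller asked for.
projected = [{"id": o["id"], "total": o["total"]} for o in hits]
print(projected)  # -> [{'id': 1, 'total': 120.0}]
```

Less scanned index data and smaller result payloads are exactly where the latency and throughput gains come from.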
The enhanced change API includes the ability to change multiple objects using a SQL query or POJO template. Replication of change operations over the WAN has also been streamlined, and it now replicates only the change commands instead of whole objects. Finally, a hook in the Space Data Persister interface enables you to optimize your DB SQL statements or ORM configuration for partial updates.
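The benefit of replicating change commands rather than whole objects can be sketched in a few lines (names and structures are invented for illustration): the WAN carries a small (key, field, value) delta, and the replica applies it locally.

```python
primary = {"vm-7": {"cpu": 2, "ram_gb": 8, "tags": ["prod"]}}
replica = {"vm-7": {"cpu": 2, "ram_gb": 8, "tags": ["prod"]}}

def change(store, key, field, value):
    """Apply a change locally and return the change command to replicate."""
    store[key][field] = value
    return {"key": key, "field": field, "value": value}

def apply_change(store, cmd):
    """Replica side: apply only the delta, not the whole object."""
    store[cmd["key"]][cmd["field"]] = cmd["value"]

cmd = change(primary, "vm-7", "ram_gb", 16)   # local update
apply_change(replica, cmd)                    # only the delta crosses the WAN
print(replica["vm-7"]["ram_gb"])              # -> 16
```

For large objects that change one field at a time, shipping the command instead of the object shrinks replication traffic roughly in proportion to object size.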
Visibility and Manageability Enhancements
A new web UI gives XAP users deep visibility into important aspects of the data grid, including event containers, client-side caches, and multi-site replication gateways.
Managing a low latency, high throughput, distributed application is always a challenge due to the amount of moving parts. The new enhanced UI helps users to maintain agility when managing their application.
The result is a powerful platform that offers the best of all worlds, while maintaining ease of use and simplicity.

Avere Unveils Industry's First-of-its-Kind Hybrid Storage Appliance

Avere Systems, a leader in network-attached storage (NAS) optimization, today announced another industry-first in the innovation and development of the next generation of storage with the introduction of the Avere FXT 3800. This hybrid Edge filer contains both Flash/Solid State Drive (SSD) media and Serial Attached SCSI hard drives (SAS HDD) and delivers significant performance gains in benchmark testing.
With this new hybrid technology, Avere can now automatically tier data across four media types: RAM, SSD, SAS and SATA HDDs, delivering maximum performance for the hottest files, while at the same time moving cold data out of the performance tier and onto SATA to minimize costs and shrink the data storage footprint. Dynamic tiering assures that every block of file data is located in storage that matches its current level of activity. As a result, the new system is 40 percent faster than the FXT 3500, the company's previous top performer on the SPECsfs2008 NFS benchmark test, and is far less costly than flash-only solutions. 
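The tiering logic described above can be modeled as a simple placement rule: each block of file data lands in the fastest tier whose activity threshold it meets. The thresholds below are invented for illustration; Avere's actual policy is not public in this release.

```python
TIERS = [            # (tier name, minimum accesses per hour) -- illustrative
    ("RAM", 1000),   # hottest files
    ("SSD", 100),
    ("SAS", 10),
    ("SATA", 0),     # cold data lands here to minimize cost
]

def place(accesses_per_hour):
    """Return the fastest tier whose activity threshold the block meets."""
    for name, threshold in TIERS:
        if accesses_per_hour >= threshold:
            return name

print(place(5000))  # -> RAM
print(place(50))    # -> SAS
print(place(2))     # -> SATA
```

Re-evaluating this rule as access counts change is what "dynamic" tiering means: a block migrates up or down as its activity level crosses a threshold.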
"With the new FXT 3800, Avere continues to be on the cutting edge of file system storage innovation and gives companies a new way to think about the way they purchase data storage," said Benjamin Woo, analyst with Neuralytix. "Customers can now receive the greatest amount of flexibility and choice by leveraging all four media tiers of storage, while defining the performance and efficiency requirements based on the activity of the data."
The Avere FXT 3800 Edge filer contains 144GB of DRAM, 2GB NVRAM and 800GB of SSD to accelerate the read, write and metadata performance of the most active data. It contains 7.8TB of 10k SAS HDDs to store a large working set of recently active data. The FXT 3800's 2x 10GbE and 6x 1GbE ports allow connectivity to clients and servers for high-performance access to active data, and to core filers for infrequently accessed data. Each unit can be clustered with other FXT Edge filers, scaling up to 50 nodes for linear performance and high availability.
"The performance gains and cost benefits associated with our latest FXT Edge filer demonstrate the massive advantages of a hybrid approach that can precisely match the storage media to the data being accessed," said Ron Bianchini, President and CEO of Avere Systems. "And when deployed as part of our edge-core architecture, it also delivers the flexibility businesses need to locate storage where it makes most sense for the business."
Availability and Pricing
Available within 30 days, the new Avere FXT 3800 starts at $112,500.

Splunk Releases New Version of Splunk App for Windows

Splunk Inc., the leading software platform for real-time operational intelligence, today announced version 5.0 of the Splunk® App for Windows, which delivers enterprise-class monitoring for Microsoft Windows Server. The Splunk App for Windows enables users to monitor their end-to-end infrastructure to prevent outages and pinpoint performance issues in minutes. The Splunk App for Windows will be demonstrated this week at the Microsoft Management Summit in Las Vegas in the Splunk booth (#623). Download the Splunk App for Windows today.
“The update to the Splunk App for Windows, as well as the recent update to the Splunk App for VMware, reflect Splunk's commitment to creating a platform that will allow end users to analyze data from their strategic IT investments,” said Rachel Chalmers, research vice president - infrastructure, 451 Research. “Windows presents a monitoring dilemma for organizations of any size. The new version of the Splunk App for Windows should help simplify this monitoring challenge by incorporating Windows data with other data generated across the infrastructure.”
“IT organizations around the world rely on Microsoft Windows Server as the base of their business infrastructure,” said Manish Kalra, director of product marketing, Splunk. “The Splunk App for Windows helps to create visibility across the entire Windows infrastructure, monitoring everything from Windows Servers to the thousands of Windows-based laptops and PCs. This can help keep critical systems online by enabling users to identify performance issues in real time or before they happen.”
Today’s Windows environments are more complex than ever before and are often deployed at a massive global scale. The need for real-time monitoring of the Windows platform is key to ensuring compliance of critical updates and patches and avoiding performance bottlenecks and network lag. The Splunk App for Windows enables Windows operations teams to monitor and analyze components beyond basic infrastructure by collecting and indexing machine data from CPUs, memory, disk and network usage, Windows updates, event logs and more. The Splunk App for Windows facilitates getting more machine-generated data from more sources into a single platform, enabling Splunk Enterprise to deliver more value across the enterprise by helping:
  • Application teams better understand the baseline infrastructure that supports an application or service and avoid service degradation.
  • Windows administrators troubleshoot performance and use real-time metrics to plan for workload delivery.
  • Network administrators leverage real-time performance metrics to avoid service outages and increase performance.
  • CIOs gain real-time visibility of Windows Server infrastructure performance and service levels.
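The core mechanic behind all four of the use cases above is getting heterogeneous machine data into one searchable structure. A minimal sketch of that idea (the log format and field names below are invented, not Splunk's): parse perf-counter lines into structured events, then query across hosts.

```python
import re

# Fake Windows perf-counter lines in an invented key=value format.
raw = """\
2013-04-08T10:02:11 host=win-db01 counter=cpu_pct value=87
2013-04-08T10:02:11 host=win-db01 counter=mem_used_mb value=6120
2013-04-08T10:02:12 host=win-web02 counter=cpu_pct value=23
"""

pattern = re.compile(
    r"(?P<ts>\S+) host=(?P<host>\S+) counter=(?P<counter>\S+) value=(?P<value>\d+)"
)

# "Index" every line as a structured event.
index = []
for line in raw.splitlines():
    m = pattern.match(line)
    if m:
        event = m.groupdict()
        event["value"] = int(event["value"])
        index.append(event)

# A cross-host "search": CPU readings above 80%.
hot = [e for e in index if e["counter"] == "cpu_pct" and e["value"] > 80]
print([e["host"] for e in hot])  # -> ['win-db01']
```

Once events from servers, laptops, event logs and updates share one index, the per-role views in the bullets above are just different queries over the same data.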
Concur Technologies, Inc. is a leading global provider of integrated online and mobile business travel and expense management solutions. Concur uses Splunk Enterprise across its organization for a variety of business and IT use cases including IT operations, application management and security. Concur relies on the Splunk App for Windows to create a holistic view of its Windows Server environment.
“Windows Server is an important part of our business systems, and visibility into that infrastructure is critical to our performance as an enterprise,” said John Tharp, Software Configuration Engineer, Concur. “The Splunk App for Windows helps us troubleshoot our Windows Servers to speed recovery and prevent outages. Our admins now have the big picture of OS and application events at their fingertips and no longer need to aggregate results individually. This improves our ownership, helping us to keep these critical systems online.”
Concur also relies on several other apps from the Splunk community website Splunkbase to generate operational intelligence in the enterprise including the Splunk App for Enterprise Security, the Splunk App for Windows Server Active Directory and Splunk Deployment Monitor.
Go to the Splunk website to learn more about the Splunk App for Windows and Splunk Enterprise.

Hitachi Data Systems Announces Support for Microsoft Private Cloud Fast Track, Version 3

Hitachi Data Systems Corporation, a wholly owned subsidiary of Hitachi, Ltd. (TSE: 6501), today announced that Hitachi Unified Compute Platform (UCP) Select for Microsoft Private Cloud supports Microsoft Corp.’s Private Cloud Fast Track architecture, version 3. This updated converged solution adds support for Microsoft Windows Server 2012, System Center 2012 SP1 and the associated monitoring packs, PowerShell cmdlets (pronounced “command-lets”) and Orchestration integrations, and combines Hitachi compute and Hitachi storage with industry-leading virtualization capabilities and best-of-breed networking.
Hitachi Unified Compute Platform Select for Microsoft Private Cloud provides a simplified approach to delivering scalable, preconfigured, and validated infrastructure platforms for on-premises, private cloud implementations. With local control over data and operations, customers can dynamically pool, allocate, secure, and manage resources for agile infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS). Likewise, business units can deploy line-of-business applications with speed and consistency using self-provisioning and automated decommissioning of data center services in a virtualized environment.
Today’s announcement builds on the longstanding strategic relationship and close collaboration between Hitachi Data Systems and Microsoft. As an inaugural participant in the Fast Track Program, HDS has seen tremendous momentum and growth in demand for Microsoft’s private cloud solutions over the last three years. Hitachi Data Systems has seen strong adoption of its Microsoft Private Cloud Fast Track solutions in key vertical markets including telecommunications, hosted managed services, and the service provider space, as well as global 1000 customers in the retail and financial sectors. Rapid deployment of Microsoft applications such as SQL Server, Microsoft Exchange, Microsoft SharePoint and Microsoft Lync Server is driving customer adoption, demonstrated by the strong alignment between HDS and the Microsoft Fast Track team in Redmond and worldwide.
Hitachi Data Systems was one of the first OEMs to commit to the Microsoft Private Cloud Fast Track program as a Gold Sponsor at TechEd Berlin in November 2010. HDS introduced its first Microsoft Private Cloud solutions in early 2011, followed by version 2 solutions for Microsoft Private Cloud in 2012, and today’s announcement of version 3 solutions.
“In today’s dynamic market, companies require an IT infrastructure that can adapt and grow with their business while protecting their current investments,” said Mike Walkey, senior vice president, Global Partners and Alliances, Hitachi Data Systems. “We are proud to deliver innovative information solutions optimized for the latest version of Microsoft Private Cloud FastTrack architecture, providing the flexibility and scalability that our mutual customers demand. We are dedicated to continuing our investment in Microsoft private cloud solutions for our joint customers and continue to see these solutions driving a strong business impact for Hitachi Data Systems.”
The new Microsoft validated HDS reference architectures provide seamless integration with the Virtual Machine Manager component of System Center 2012 and transparent live migration of virtual machines (VMs) on same-site and multi-site deployments up to distances of 200 kilometers, or 125 miles. Hitachi Unified Compute Platform Select for Microsoft private clouds also supports full provisioning of compute and storage resources, allowing customers to quickly provision virtual machines with full support for the Orchestrator component of System Center 2012. With the heterogeneous virtualization capabilities of Hitachi Virtual Storage Platform and Hitachi Unified Storage VM, companies can extend investments in supported 3rd party storage.
“With Microsoft’s Cloud OS vision, customers gain groundbreaking technology to deploy cloud infrastructure using Windows Server 2012 and System Center 2012 SP1,” said Chris Phillips, Partner Director PM, Windows Server and System Center, Microsoft. “Strategic collaborations with innovative companies like Hitachi Data Systems help customers move forward with their deployments quickly and confidently. Hitachi Data Systems has been an active participant in the Microsoft Private Cloud Fast Track program since its inception. Their latest solution, the Hitachi Unified Compute Platform Select for Microsoft Private Cloud, version 3, builds on their experience and success with Microsoft private cloud implementations in the datacenter. Microsoft and Hitachi Data systems help our customers realize the benefits of our Cloud OS vision today, by providing a quick and efficient way for telecommunications companies, hosters, service providers, and enterprises to deploy robust Microsoft private cloud solutions.”
Hitachi Data Systems and Microsoft have a long-standing relationship that has delivered mission-critical solutions to customers around the world. Hitachi solutions that support Microsoft applications include Hitachi Compute Blade, Hitachi Virtual Storage Platform, the industry’s leading storage virtualization platform for enterprise data centers, Hitachi Unified Storage systems, and Hitachi Storage Cluster for Windows Server Hyper-V, an end-to-end server-to-storage virtualization solution.

HBGary Unveils First Deep Malware Analysis Solution for Virtual Desktop Infrastructures (VDI)

In a significant technical advancement to help organizations proactively and quickly detect zero-days, rootkits and other targeted malware in remote virtual environments, today HBGary, a subsidiary of ManTech International Corporation, unveiled Active Defense 1.3 to provide live, runtime memory analysis of concurrent Guest OS sessions with minimal impact on the shared physical resources of the underlying server.
With HBGary Active Defense 1.3, malware analysis is no longer reliant on a physical memory dump saved to disk, resulting in quicker results that do not tax valuable shared resources to attain it.
Remote desktop virtualization is one of the biggest trends in IT today because it addresses the mobility of users while at the same time reduces the costs traditionally associated with supporting the devices they use. By using application virtualization and user profile management, it enables the central management of the desktop session environment and achieves separation from the physical device used to run it.
Yet VDIs are not immune to cyberattacks: roaming profiles enable roaming access; centralizing assets on shared physical resources means an outage will have a greater impact; and hypervisor isolation will only remain secure for so long.
“The popularity of remote virtualized desktops has made them a prime target for today’s cyberattackers. Active Defense 1.3 provides live, runtime malware behavior analysis for these environments,” said Penny Leavy, Vice President & General Manager, HBGary. “More than five years ago, HBGary developed our revolutionary Digital DNA technology to find the bad guys in the one place that they cannot hide – physical memory. We are pleased to offer our customers the industry’s first deep malware analysis solution for Virtual Desktop Infrastructures.”
Active Defense 1.3: How It Works
Active Defense 1.3 scores thousands of software modules so cyber defenders, using the technology’s color-coded threat severity score, can quickly triage and respond to the most severe threats targeting their business environment.
“Runtime Digital DNA reads the pseudo-physical memory abstraction on the Guest operating system, making it ideal for quick scans that will have minimal impact on the usability of the host system managing the virtualization tasks. Unlike our traditional Digital DNA, it is no longer necessary to dump the memory to the disk prior to reassembling and analyzing its contents. When you consider the exponential impact of doing this a hundred plus times to analyze each Guest, it is not hard to exceed the physical resources of the host hardware,” said Jim Butterworth, CSO, HBGary. “Active Defense 1.3, with runtime Digital DNA, is almost 20x faster when compared to the traditional (Memdump) Digital DNA.”
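The score-then-triage workflow described above can be sketched in a few lines. The scores, thresholds, and color bands below are invented for illustration; they are not HBGary's actual Digital DNA scoring model.

```python
# Each scanned software module gets a severity score; a score maps to
# a color band so responders can sort the worst first.
modules = [
    {"name": "svchost.exe", "score": 12.4},
    {"name": "winlogon.exe", "score": 4.1},
    {"name": "unknown_drv.sys", "score": 92.7},
    {"name": "injected.dll", "score": 61.0},
]

def band(score):
    """Map a severity score to a triage color (thresholds illustrative)."""
    if score >= 80:
        return "red"      # likely malicious: respond first
    if score >= 40:
        return "orange"   # suspicious: investigate
    return "green"        # likely benign

# Triage: highest-severity modules surface at the top of the queue.
triaged = sorted(modules, key=lambda m: m["score"], reverse=True)
for m in triaged:
    print(f'{band(m["score"]):6} {m["score"]:6.1f} {m["name"]}')
```

With thousands of modules per Guest, this kind of severity-sorted view is what turns raw memory analysis into a response queue.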
Active Defense customers can choose to preserve memory using our traditional (Memdump) Digital DNA or opt for the memory–only, runtime Digital DNA version to adapt to the ever-changing threat environment while not adversely impacting their own resources.
In a live environment, the analysis of a memory dump file can involve a significant amount of disk I/O, which can impact usability of the system being scanned in heavily virtualized environments where multiple Guests will be sharing the same physical disk. “For those users who cannot accept any server downtime but still need to detect malware in the Guests, runtime Digital DNA is available,” added Butterworth.
Active Defense 1.3 Availability

Veeam Extends Award-Winning Innovations to Windows Server Hyper-V

Veeam Software, innovative provider of backup, replication and virtualization management solutions, today announced Virtual Lab for Windows Server Hyper-V. One of the many new features to be included in the upcoming release of Veeam Backup & Replication v7, Virtual Lab delivers powerful and easy-to-use capabilities that further ensure the protection, performance and availability of Hyper-V environments.
Virtual Lab works in conjunction with vPower, Veeam’s patented technology for running a virtual machine (VM) directly from a compressed, deduplicated backup file on regular backup storage. With vPower, a VM can run from any restore point, full or incremental, without any changes to the backup itself. Virtual Lab for Hyper-V follows the 2012 release of Instant VM Recovery for Hyper-V, which takes advantage of vPower to minimize downtime and disruption by enabling IT to run a VM directly from a backup while a full restore is underway.
New patented capabilities provided by the combination of vPower and Virtual Lab for Hyper-V include:
  • SureBackup: Eliminates risk through backup testing and recovery verification. The fully automated process lets users verify the recoverability of every backup of every VM, without additional hardware or administrative time and effort.
  • U-AIR (Universal Application-Item Recovery): Enables quick, agent-free recovery of individual objects from any virtualized application. When users accidentally delete important e-mails or run scripts that result in unintended changes, U-AIR helps administrators easily recover lost or damaged items.
  • On-Demand Sandbox: Runs VMs in the safety of isolated environments built from existing backups and storage. Instead of using production resources to restore backups or burdening production VMs with additional snapshots, virtual environments can be created “on the fly,” allowing users to simply power on required VMs at desired restore points.
Veeam Backup & Replication, the #1 VM backup solution, is built for virtualization from the ground up. Used by more than 60,000 organizations around the world, Veeam Backup & Replication is a powerful, easy-to-use and affordable solution for virtual environments of all sizes. Virtual Lab for Hyper-V continues Veeam’s strategy of helping organizations make the move to Modern Data Protection and reap the overwhelming cost and efficiency benefits of virtualization.
Veeam Backup & Replication v7 will become generally available in Q3 2013. For more information about Virtual Lab for Hyper-V and other features coming in v7, go to http://go.veeam.com/v7. Please also visit the Veeam booth (#332) at the Solutions Expo for Microsoft Management Summit 2013 in Las Vegas, NV from April 8-11. Veeam is a Gold Sponsor for MMS 2013 and Microsoft’s 2012 Management and Virtualization Partner of the Year.
Doug Hazelman, Vice President of Product Strategy, Veeam Software
“Modern data protection is critical to the success of Windows Server Hyper-V initiatives, and eliminating risk is a top priority for our customers. By providing an innovative and easy way to safely perform tasks like validating backups and testing software updates without impacting production environments, Virtual Lab for Hyper-V helps administrators protect their infrastructures, while also enhancing reliability and availability.”
Ratmir Timashev, President and CEO, Veeam Software
“Organizations with Hyper-V infrastructures need reliable data protection that is built specifically for virtual environments. In October 2012, Veeam was proud to be one of the first vendors to support Windows Server 2012. We’re excited now to be the first data protection vendor to bring Virtual Lab capabilities to Hyper-V, providing vital support and advanced backup and recovery features to this rapidly growing user base.”

VMware Joins the Cloud Credential Council to Help Accelerate Cloud Adoption Through Training and Certification

The Cloud Credential Council today announced that VMware has joined the council.
Cloud computing has significant benefits for organizations, ranging from improved cost efficiency to completely new business models that leverage the cloud, and training and certification play a key role in its successful adoption. Adopting the cloud involves many factors that must be taken into consideration to ensure success, one of them being the organizational skills gap related to cloud computing. The Cloud Credential Council (CCC) aims to minimize this skills gap through world-class cloud training and certification. The CCC does this by working with its members and cloud experts to define the required cloud competencies, and subsequently developing cloud courses and exams.
"We are excited to be a part of the Cloud Credential Council. Various VMware cloud experts have contributed to the development of the CCC syllabi in a collaborative way with cloud experts from other organizations, and we believe this joint effort will accelerate the overall industry adoption of cloud. The CCC certification cloud courses provide an excellent foundation for VMware courses, and we look forward to offering these initially as part of our Americas curriculum," said Michael Yakiemchuk, Director, Education and Training, VMware.
"The CCC appreciates VMware's involvement and thought leadership. Although the CCC certifications are vendor-neutral, they are complementary to vendor certifications. We always recommend candidates to choose a vendor-specific follow-on course/certification. We are currently in discussion with the majority of technology vendors and we welcome all to participate," said Marcel Heilijgers, Executive Director of the Cloud Credential Council.
The Cloud Credential Council will release five Professional level cloud certifications this year: CCC Professional Cloud Developer, Solutions Architect, Administrator, Service Manager, and Security/Governance. The syllabi are available for public review on the CCC website.

Tintri Introduces Next-Generation of VM-Aware Storage Functionality for the Software-Defined Data Center

Tintri, Inc., the leading producer of purpose-built, VM-aware storage platforms, today announced the release of version 2.0 of the Tintri Operating System (OS). Tintri OS 2.0 introduces advanced data management features that allow IT to more efficiently manage virtualized workloads across globally distributed environments, and provides new levels of VM agility to meet the dynamic resource requirements of the software-defined data center. A central component of Tintri OS 2.0 is the new ReplicateVM software, which enables per-VM replication. ReplicateVM is an industry first for per-VM array-side data replication and builds on Tintri snapshot and cloning technology.

Tintri's unique VM-aware approach to storage for virtualized environments eliminates the cost and complexity associated with legacy storage systems, which were built for physical – not virtualized – infrastructure. As IT organizations embrace the concepts of more software-defined infrastructure, they need storage platforms that understand and automatically adapt to application workloads. Extending beyond software-defined storage, the Tintri VM-aware file system operates natively at the VM or vDisk level, eliminating the need for LUNs, volumes, tiers, or other legacy storage constructs.

Tintri OS 2.0 enables a much greater degree of operational flexibility by removing the need for pre-planning and greatly increases VM mobility with highly efficient replication. With the click of a mouse, administrators can easily and efficiently replicate individual VMs between multiple Tintri VMstore systems, streamlining VM availability for data protection, development testing, business intelligence reporting and other use cases where access to VMs across multiple systems or locations is desirable.

The implementation of data replication functionality in the latest Tintri OS further demonstrates how the Tintri VM-centric architecture enables new levels of simplicity and efficiency over traditional legacy storage approaches. With granularity at an individual VM level, rather than at a LUN or volume level, only the exact required data is replicated. Data protection policies can be set on a per-VM basis, and individual VMs can be protected or restored with just a few mouse clicks. Tintri inline de-duplication and compression also apply to all data being replicated, greatly reducing WAN bandwidth consumption and improving overall performance.
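The WAN savings from deduplicated replication come from a simple idea, sketched below: only blocks whose content hash the remote side doesn't already hold are shipped. The block size and data structures here are invented for illustration, not Tintri's implementation.

```python
import hashlib

def blocks(data, size=4):
    """Split VM data into fixed-size blocks (tiny size for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def replicate(vm_data, remote_store):
    """Ship only blocks the remote store hasn't seen; return blocks sent."""
    sent = 0
    for blk in blocks(vm_data):
        digest = hashlib.sha256(blk).hexdigest()
        if digest not in remote_store:
            remote_store[digest] = blk   # this block crosses the WAN
            sent += 1
    return sent

remote = {}
first = replicate(b"AAAABBBBCCCCAAAA", remote)   # blocks AAAA,BBBB,CCCC,AAAA
second = replicate(b"AAAABBBBDDDD", remote)      # only DDDD is new
print(first, second)  # -> 3 1
```

The second replication ships one block instead of three: the duplicate data is already present remotely, so only new content consumes WAN bandwidth.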

Impact on Use Cases

Enterprises have widely deployed Tintri to support use cases including Virtual Desktop Infrastructure (VDI), workload consolidation, and test and development projects. Tintri OS 2.0 brings a number of new capabilities that will help customers meet an even broader set of solution requirements:

Enterprise Applications

Test & development – Tintri's per-VM replication capability enables efficient duplication of production applications or databases to test environments, where development engineers can test against production data without fear of disrupting operations. Replicated VM snapshots can be cloned on the second system, creating full-performance yet space-efficient clone VMs for testing.

Business analytics – Running detailed business analytics applications or extracting data from a large database for reporting purposes can impact production database workloads. Tintri's network-efficient, per-VM replication copies these VMs to a second environment, allowing analytics and reporting work to be conducted without any impact on the production environment.

Virtual Desktop Infrastructure (VDI)

Highly efficient replication of persistent desktops – Manual-pool or persistent desktops are often created from a few base image VMs and may require different protection policies depending on each user’s requirements. Tintri's per-VM data management enables administrators to quickly create space-efficient clones on remote VMstore systems.

Remote clone VM creation and distribution of VM images – Virtual environments need space-efficient clones for VDI, test and development, business intelligence, and many other uses. Tintri's per-VM data management capabilities enable the creation of VMs anywhere from a single source. Tintri remote cloning, which leverages native per-VM replication, allows VMs to be cloned across a WAN at high throughput while preserving space efficiency on remote VMstore systems.

Data Protection and Disaster Recovery

Data protection - As IT environments become increasingly virtualized, protecting application data contained in VMs is a critical part of any data center's data protection plan. However, data protection policies are unique to each application, requiring customization on a per-VM basis. Tintri per-VM snapshots combined with per-VM replication bring new levels of efficiency to data protection in virtual environments.

Disaster recovery - Enterprises have had to overhaul their disaster recovery plans in recent years as they adapt to new challenges posed by cloud-based and virtualized infrastructures. Storage is a big part of this equation, with a growing need to ensure that the most critical data is safe and accessible at all times. With Tintri, data centers can now efficiently protect their most critical VM data and quickly bring up VMs from any location on demand.

Supporting Quotes

“This release sets a new bar for VM-aware storage,” said Kieran Harty, Tintri CEO. “Today’s data center requires a far greater degree of agility when it comes to resource allocation, and we’re offering precisely that with this new release. This is a modern approach to a modern problem, and one that’s tailored to meet the complex demands of the software-defined data center and take the pain out of the data management process for virtualized environments.”

"Ease of use was one of the key factors in our decision to choose Tintri," said Jason Bourque, Director of Enterprise Systems at Kerzner International. "Tintri has consistently made everything easier and more efficient for us where storage is concerned. We were amazed at how easy it was to set up replication. Simply right-clicking to protect a VM, defining the protection policy, and then watching the replication in progress is awesome. Our VM template conversions went from about 20 minutes to less than 2. The cloning process was done in a matter of seconds. As a result, we have the option to convert several VMs at a time, which used to be a sequential process. A SQL query for a 30-day period used to time out on our legacy storage; now it's done in about a minute on Tintri. This is going to save us lots of time and further positions Tintri as our VMware storage platform of choice."

"After bringing us VM-aware storage, Tintri has done it again!" said Ryan Makamson, Systems Engineer at the School of Electrical Engineering and Computer Science at Washington State University. "It is awesome to be able to remove pre-planned resource allocation from the storage equation altogether. This capability adds to an already-strong product that has allowed me to deploy VDI at an extremely cost-effective price per workload."

“To speed the delivery of IT-enabled services to the business and support the shift toward cloud-based computing models, enterprises are transforming data centers into pools of dynamically allocatable compute, storage and networking resources,” according to Neil MacDonald, Gartner Vice President and Fellow Emeritus in his report, The Impact of Software-Defined Data Centers on Information Security. “At the heart of this transformation is a shift to software-based management and definition of IT services (the "software-defined data center") and a decoupling from the hardware underneath for services such as compute, networking and storage. The goal is agility and speed within enterprise data centers by enabling applications to be quickly and transparently provisioned, moved and scaled as business requirements require across network segments, across data centers and potentially into the cloud without rearchitecting the network.”

"As server virtualization adoption accelerates, customers must look for complete solutions that simplify data center operations," said Stu Miniman, Senior Analyst at Wikibon. "Tintri delivers storage that is unparalleled for managing virtualized applications, and they have continued to build solid functionality onto their hybrid flash array."