CiRBA, a
leading provider of capacity transformation and control software,
today announced a new API that enables organizations to connect their
cloud management platforms to CiRBA in order to optimize new
workload placements within internal clouds. CiRBA’s analytics
determine the optimal placement for VMs within cloud
infrastructure, both at the environment and server level, reducing
the risk of capacity shortfalls and driving up VM density by an
average of 48 percent. The new API also provides access to CiRBA’s
bookings functionality, allowing users to reserve capacity for
future needs using existing self-service portals.
Many
organizations building internal clouds are looking to rely on cloud
management platforms such as OpenStack, but face a challenge in
bringing together all of the required capabilities. Cloud
management platforms are designed to provision VMs, but do not
have the ability to analyze capacity in order to determine the
best environment to host a workload in or the best host within an
environment to start an instance on. As a result, hot-spots and
imbalances in resource utilization will occur in the
infrastructure, creating both performance issues and inefficient use
of capacity.
CiRBA’s new workload routing API
enables cloud management platforms to send placement requests to
CiRBA, and to receive an answer back that contains the best
possible environment and host-level placement for a new workload.
This answer is based on CiRBA’s industry-leading analytics, which
consider a broad set of factors including utilization patterns,
licensing requirements, capacity availability, policy constraints
and technical considerations. This brings a new level of
automation, enabling cloud management platforms to dynamically leverage
CiRBA’s analytics in order to intelligently process workload
placement requests, which is often one of the biggest gaps in
internal cloud implementations. Integrating CiRBA into this
process ensures high efficiency while reducing operational risks,
allowing more instances to fit into each environment while at the
same time making existing instances work better. This new
capability complements CiRBA’s standard control capabilities,
which continuously “auto-correct” cloud infrastructure through
ongoing rebalancing and instance right-sizing.
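Since this announcement does not publish the API’s wire format, the following is a minimal sketch of what a placement round-trip might look like, assuming a hypothetical REST-style endpoint and illustrative field names:

```python
import requests

# Hypothetical sketch of a placement request to CiRBA's workload routing
# API. The endpoint path, field names and response shape are illustrative
# assumptions; only the request/answer pattern is described by the vendor.
CIRBA_URL = "https://cirba.example.com/api/placements"

def request_placement(cpu_count, memory_gb, os_name, sla_tier):
    resp = requests.post(
        CIRBA_URL,
        json={
            "cpu": cpu_count,
            "memory_gb": memory_gb,
            "os": os_name,
            "sla": sla_tier,
        },
        timeout=30,
    )
    resp.raise_for_status()
    answer = resp.json()
    # The vendor states the answer identifies both the best environment
    # and the best host within that environment.
    return answer["environment"], answer["host"]

env, host = request_placement(2, 8, "rhel6", sla_tier="gold")
print(f"Place new VM in {env} on {host}")
```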
The new API
also enables capacity reservations to be made for new VMs through
CiRBA’s Bookings Management System. This ensures that capacity is held for a workload until it is ready to be deployed, even if it is not required immediately.
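A capacity reservation through the bookings functionality might look like the sketch below; again, the endpoint and fields are assumptions based only on the behavior described:

```python
import requests

# Hypothetical capacity-reservation sketch against CiRBA's bookings
# functionality. The endpoint, fields and response shape are assumptions;
# only the reserve-for-later behavior is described by the vendor.
booking = requests.post(
    "https://cirba.example.com/api/bookings",
    json={
        "environment": "prod-cluster-a",   # environment chosen earlier
        "cpu": 2,
        "memory_gb": 8,
        "deploy_by": "2013-07-01",         # when the workload is expected
    },
    timeout=30,
)
booking.raise_for_status()
print("Capacity held under booking", booking.json().get("id"))
```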
“We see increased focus on maturing
internal cloud operations in our customers,” said Andrew Hillier,
CiRBA CTO and co-founder. “For self-service requests, there is a
dire need for more intelligence in determining where these
workloads go, and how many resources must be assigned to them. As
internal cloud implementations scale and there are multiple
environments, SLA levels or internal customers, it becomes
unworkable to continue to place workloads based on simplistic or random
algorithms.”
Continues Hillier: “Even more
importantly, in enterprise environments, there is the recognition
that instantaneous provisioning may not be as important as the
ability to reserve capacity. Both are forms of self-service, but
the ability for users to reserve capacity in advance is much more
consistent with the way these enterprises work, where last-minute,
unplanned use of capacity is the exception, not the norm.”
CiRBA’s new API ships on June 14, 2013.
Tuesday, June 18, 2013
Bromium Introduces vSentry 2.0 for Endpoint Security
Bromium, Inc.,
a pioneer in trustworthy computing, today announced the general
availability of Bromium vSentry 2.0. Powered by its Xen-based
Bromium Microvisor, vSentry 2.0 makes endpoints secure – by design,
enabling enterprises to embrace key IT trends such as mobility and
collaboration, without risk of attack from insecure networks, the web
and malicious documents or media.
vSentry uses Intel CPU features for virtualization and security to invisibly hardware-isolate each Windows task that accesses the Internet or untrusted documents. Its architecture guarantees that all malware will be defeated and automatically discarded. In addition, vSentry automates live attack visualization and analysis – giving security operations teams unparalleled insight into attacks when they occur.
“The Intel 4th generation Core vPro platform offers enterprises a very secure endpoint architecture as well as a rich set of features that enhance endpoint security, including AES-NI, Data Execution Prevention (DEP) and Intel Platform Protection Technology with OS Guard,” said Rick Echevarria, vice president and general manager of Intel’s Business Client Platforms Division. “Bromium vSentry uses Intel VT-x, VT-d and EPT to hardware-isolate operating system tasks, and Intel AES-NI, DEP, and OS Guard to further protect the endpoint. Bromium vSentry advances endpoint security enabling enterprises to secure mobile endpoints and empowers employees to safely access networks and media.”
The enhancements in vSentry 2.0 focus on three important requirements for enterprise deployments: secure mobility, safe collaboration, and improved manageability. The new release also delivers improved overall performance and end-user experience.
Secure Mobility
Mobile users need to access enterprise applications and the web from untrusted networks that could be used to attack the endpoint. vSentry 2.0 hardware-isolates each user task that accesses an untrusted network, blocking all attacks from captive portals, the web and untrusted content. It guarantees the security of mobile endpoints that are used to remotely access enterprise SaaS and web applications, and virtual desktops. User credentials and application data delivered to the endpoint are secure at all times.
Safe Collaboration
Employees need to securely interact and collaborate with content originating from both within and outside the enterprise, requiring them to access untrustworthy content from removable media, the web, email and social applications. This places endpoint security in the user’s hands by making them remove security restrictions from, or “trust” content before interacting with it. If a user mistakenly trusts a malicious document, an attacker can compromise the endpoint. vSentry 2.0 lets users access and edit content without ever having to trust it, enabling them to be productive without risk.
Improved Manageability
The Bromium Management Server (BMS) that comes with vSentry now provides granular monitoring of deployment progress of vSentry endpoint agents, as well as automated gathering of critical information – such as missing software pre-requisites and installation progress. BMS delivers centralized policy management – and now includes simplified policy creation, editing, and distribution, event aggregation and reporting, as well as dashboards for monitoring key metrics. These improvements help simplify and accelerate enterprise-wide deployments of vSentry.
Bromium vSentry 2.0 secures both 32- and 64-bit versions of Windows 7, and virtual desktops delivered with Microsoft Remote Desktop Services (including Citrix XenDesktop and VMware View). It is deployed as a standard MSI package, and configured via simple policies using Microsoft Active Directory or using the Bromium Management Server. NYSE and BlackRock are among the growing number of enterprise customers planning to deploy vSentry enterprise-wide.
“vSentry 2.0 delivers on our goal to make endpoints fully protected from targeted attacks, by hardware-isolating all untrusted user tasks,” said Gaurav Banga, CEO and co-founder of Bromium Inc. “vSentry 2.0 addresses important use cases that further empower end users without compromising on enterprise security. It represents the industry’s most secure solution for enterprise mobility and gives users unparalleled flexibility and ease of use in collaborative environments.”
Interested parties can view a webcast covering the new features and functionality of vSentry 2.0 presented by Simon Crosby, CTO and co-founder of Bromium, at http://learn.bromium.com/newin2_register.html.
Bromium vSentry is licensed per-user, enterprise wide, and priced according to volume. For more information, contact sales@bromium.com.
VMUnify at HostingCon – Booth #616
VMUnify will be at HostingCon, June 17-19, in Austin, Texas. You can register for the conference at http://www.hostingcon.com/ and use the code VMUnify2013 for a discount.
At the conference, VMUnify will be unveiling a brand new, intuitive UI as well as extensive Reseller support.
VMUnify is a platform for delivering Infrastructure as a Service (IaaS) or Cloud Servers with Secure Virtual Data Centers and Unified Cloud Environments.
VMUnify's features include:
VMUnify's features include:
- Orchestration - Automation, Self Service, Workflows, Templates
- Secure Multi-tenancy - VLANs, Firewall, Tenant Identity
- Hypervisors - VMware, Hyper-V
- Public Clouds - Amazon, Azure
- Provisioning Systems - WHMCS, HostBill
- Billing - EBS, PayPal, BillDesk
- Reseller Support & White labeling - Easy to add resellers and re brand the product
- Cloud Broker - Jamcracker, Parallels Automation
- Federation - Unified Provisioning across datacenters
- Customization - For Service Providers
- Simplicity of Deployment - Based on Virtual Appliances
- API Interfaces - Interfaces to the product available via REST/ WS
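The announcement states only that REST/WS interfaces are available, so the following is a minimal sketch assuming a hypothetical endpoint, fields and token auth; consult the product documentation for the real API:

```python
import requests

# Hypothetical sketch: VMUnify exposes REST interfaces, but the endpoint
# names, fields and auth scheme below are illustrative assumptions, not
# documented API. Replace them with values from the actual product docs.
BASE_URL = "https://vmunify.example.com/api"
TOKEN = "YOUR_API_TOKEN"

def provision_vm(name: str, template: str, vlan_id: int) -> dict:
    """Request a new VM via a hypothetical provisioning endpoint."""
    resp = requests.post(
        f"{BASE_URL}/vms",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "template": template, "vlan": vlan_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    vm = provision_vm("web-01", "ubuntu-12.04", vlan_id=102)
    print(vm)
```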
Flexera Software Study Reveals Organisations Are Unprepared for Software License Compliance Risks Arising from Virtualisation
As the server virtualisation, desktop virtualisation and application virtualisation trends continue to take firm root within organisations globally, a new Flexera Software Survey, prepared jointly with IDC, has found that 43% of organisations do not have sufficient processes and automation in place to manage their virtual licenses, placing them at substantial risk of falling out of compliance with their software licenses.
“While server, desktop and application virtualisation provide tremendous operational efficiencies to organisations, each vendor has unique, evolving and frequently opaque licensing rules around virtualisation. If sufficient measures are not taken to manage and optimise those virtualised licenses, companies may be vulnerable to substantial ‘true-up’ penalties if they are audited by their software vendors,” said Amy Konary, Research Vice President - Software Licensing & Provisioning, IDC. “In one instance, I am aware of a global enterprise that saved $4 million in hardware through virtualisation, but it cost them $52 million in a resulting software license compliance issue.”
Growth in Virtualisation & Audits Suggests Noncompliance Windfall for Producers
The survey points to both the increasing penetration of virtualisation within organisations, and the increasing frequency of vendor software license audits, underscoring the vulnerabilities enterprises will face if they do not take additional steps to more strategically manage and optimise their virtualised applications. According to the survey, 56% of enterprises (up from 51% in 2011) report that 41% or more of their applications have been virtualised using server virtualisation, and 24% say that between 10-25% of their apps are delivered through desktop virtualisation (VDI).
The survey also reveals that application producers see virtualisation as a new revenue opportunity. 50% of producers indicated that over the next 18-24 months they will be changing their licensing models to accommodate virtualisation. When the producers were asked why they change licensing models, the overwhelming majority – 69% – said it was to generate more revenue.
Where will some of that additional revenue come from? According to the survey, 17% of producers currently rely upon trust-based licensing coupled with vendor-compliance audits. Looking 18-24 months into the future, this method of licensing and enforcement is expected to increase by 11% – suggesting an acceleration of the audit trend.
“Our customers have been reporting a major uptick in the frequency of vendor compliance audits, underscoring the strategic importance of continual compliance, and continual license optimisation in reducing financial risk,” said Jim Ryan, Chief Operating Officer of Flexera Software. “When organisations have the best practices and solutions in place to optimise their virtual licenses, they will know ahead of time the impact that virtualising their applications will have – allowing them to minimise their virtualisation costs, and risk.”
NetJapan Introduces vmGuardian, Backup and Disaster Recovery for VMware ESXi Virtual Environments
NetJapan, Inc.,
introduces vmGuardian backup and disaster recovery software for
VMware ESXi virtual environments. vmGuardian brings high performance and uncompromised protection to your virtual environments, featuring Inline Data Deduplication Compression and Selective File Restore.
A browser interface guides you through configuration settings, scheduling and backup management tasks, making it easy to back up and restore your virtual environments. vmGuardian supports live backup of virtual machines and virtual disks, so there is no downtime. Virtual host backups gain significant benefit from vmGuardian’s inline data deduplication compression because it allows nominally different data from each virtual machine to be coalesced into a single storage space. The resultant backups are compressed at remarkably high ratios, significantly reducing backup storage needs.
vmGuardian Features
- Easy-to-use, browser-based management interface.
- Restore all virtual machines, selected virtual machines or individual files, and folders.
- Agentless architecture means no more expensive agent licenses per virtual machine.
- Backup all host virtual machines or only individual virtual machines.
- Inline deduplication compression inspects large volumes of data during the backup and identifies large sections that are identical, in order to store only one copy of the data (see the sketch after this list).
- Smart Sector technology improves backup performance and reduces storage requirements by only backing up allocated space.
- Full and incremental backups.
- Flexible scheduling.
- Incremental backup file consolidation.
- Virtual machine backups are host independent; virtual machines can be restored to the same or different ESXi host.
- Smallest backup window in the industry.
- E-Mail Notifications.
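The inline deduplication feature above can be pictured as a content-addressed chunk store: identical chunks across virtual machines are stored once. The sketch below is a generic illustration of that technique under assumed chunk-size and hashing choices, not NetJapan's actual implementation.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks; an assumed, tunable value

class DedupStore:
    """Generic inline-dedup sketch: identical chunks are stored once."""

    def __init__(self):
        self.chunks = {}    # sha256 digest -> chunk bytes (stored once)
        self.backups = {}   # backup name -> ordered list of digests

    def backup(self, name: str, path: str) -> None:
        digests = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                # Store the chunk only if it has not been seen before.
                self.chunks.setdefault(digest, chunk)
                digests.append(digest)
        self.backups[name] = digests

    def restore(self, name: str, path: str) -> None:
        with open(path, "wb") as f:
            for digest in self.backups[name]:
                f.write(self.chunks[digest])
```

Because virtual machines cloned from the same template share most of their blocks, the chunk table stays small relative to the raw data, which is the coalescing effect described above.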
vmGuardian software and support is available in Japanese and U.S. English. NetJapan, Inc. distributes vmGuardian through authorized system integrators, business partners, distributors, online shops and direct via www.vmguardian.com.
DataCore Software Builds on its Software-Defined Storage Lead with Enhancements to its Proven SANsymphony-V Storage Virtualization Platform
Amid all the
talk and future-looking promises of software-defined storage from
hardware-biased manufacturers, DataCore Software has delivered real-world
solutions to thousands of customers worldwide. DataCore continues to advance and
evolve its device-independent storage management and virtualization software,
while maintaining focus on empowering IT users to take back control of their
storage infrastructure. To that end, the company announced today significant
enhancements to the comprehensive management capabilities within version R9 of
its SANsymphony-V storage virtualization platform.
New advancements in SANsymphony-V include:
- Wizards to provision multiple virtual disks from templates
- Group commands to manage storage for multiple application hosts
- Storage profiles for greater control and auto-tiering across multiple levels of flash, solid state (SSDs) and hard disk technologies
- A new database repository option for recording and analyzing performance history and trends
- Greater configurability and choices for incorporating high-performance "server-side" flash technology and cost-effective network attached storage (NAS) file serving capabilities
- Preferred snapshot pools to simplify and segregate snapshots from impacting production work
- Improved remote replication and connectivity optimizations for faster and more efficient performance
- Support for higher speed 16Gbit Fibre Channel networking and more.
Real-World Software-Defined Storage: Customer-driven Enhancements Overcome Challenges
Many of the new features which extend the scope and breadth of storage management would not even occur to companies just developing a software-defined package. They are the product of DataCore's 15 years of customer feedback and field-proven experience in broad scenarios across the globe.
The enhancements introduced in the latest version of SANsymphony-V take on major challenges faced by large scale IT organizations and more diverse mid-size data centers. Aside from confronting explosive storage growth (multi-petabyte disk farms), organizations are experiencing massive virtual machine (VM) sprawl where provisioning, partitioning and protecting disk space taxes both staff and budget. Problems are further aggravated by the insertion of flash technologies and SSDs used to speed up latency-sensitive workloads. The time and resource demands required to manage a broadening diversity of different storage models, disk devices and flash technologies - even when standardized with a single manufacturer - are a growing burden for organizations already struggling to meet application performance needs on limited budgets.
The bottom line is that companies are forced to confront many unknowns in terms of storage. With traditional storage systems, the conventional practice has been to oversize and overprovision storage with the hope that it will meet new and unpredictable demands, but this drives up costs and too often fails to meet performance objectives. As a result, companies have become smarter and have realized that it is no longer feasible or sensible to simply throw expensive, purpose-built hardware at the problem. Companies today are demanding a new level of software flexibility that endures over time and adds value over multiple generations and types of hardware devices. What organizations require is a strategic - rather than an ad hoc - approach to managing storage.
Notable Advances with SANsymphony-V Update 9.0.3
SANsymphony-V is a strategic productivity solution that works infrastructure-wide across many storage hardware brands and models. Its auto-tuning cache and auto-tiering software maximize the use of available CPU, memory and disk resources to dramatically increase overall storage performance, which translates into faster, more responsive applications. By more effectively leveraging existing disk storage investments, organizations can now cost-effectively add, and fully benefit from, the latest high-speed technologies like flash memory and SSDs.
DataCore's software makes it even easier to incorporate and optimize powerful "server-side" flash memory technologies. SANsymphony-V can operate directly with any flash and disk devices connected to application servers or can be used across all connected storage area networking (SAN) assets. New configuration flexibility and options have also been documented with the latest release of SANsymphony-V to simplify flash integration and maximize its utilization and performance.
DataCore offers an extensive set of management tools including "heat maps" to optimize performance and cost-effective tiering of storage. Auto-tiering prioritizes and applies the right storage to best fit application and workload needs. Storage profiles provide greater control, and because environments vary, more optimization to improve cost and performance. Profiles for virtual disks can be customized to govern how dynamic policies for auto-tiering, remote replication and synchronous mirror recovery are prioritized, while supplementing default policies built into the software. Virtual disk importance can be set to critical, high, normal, low or archive, controlling which volumes take precedence for shared resources. This ensures important applications benefit from more valuable resources, such as flash memory and SSDs, with less demanding tasks using lower cost, higher density storage.
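The profile-driven behavior described above can be sketched as a simple policy function; the tier names, thresholds and priority values below are illustrative assumptions, not DataCore's actual defaults.

```python
# Generic sketch of profile-driven auto-tiering: hot blocks migrate to
# faster tiers, cold blocks to denser ones. Tier names, access-rate
# thresholds and policy values are illustrative assumptions only.
TIERS = ["flash", "ssd", "15k_disk", "nearline"]  # fastest -> densest

PROFILE_PRIORITY = {"critical": 0, "high": 1, "normal": 2,
                    "low": 3, "archive": 3}

def choose_tier(accesses_per_hour: float, profile: str) -> str:
    """Map a virtual-disk block to a tier from its heat and its profile."""
    if accesses_per_hour > 1000:
        heat_tier = 0
    elif accesses_per_hour > 100:
        heat_tier = 1
    elif accesses_per_hour > 10:
        heat_tier = 2
    else:
        heat_tier = 3
    # A disk's profile caps how high it may climb: an "archive" volume
    # never displaces a "critical" one from flash.
    tier_index = max(heat_tier, PROFILE_PRIORITY[profile])
    return TIERS[tier_index]

print(choose_tier(5000, "critical"))  # -> flash
print(choose_tier(5000, "archive"))   # -> nearline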
Auto-regulating the best utilization of precious resources keeps them from unintentionally being consumed by lower priority demands, as often happens with backup snapshots and other replicas of line-of-business data. Instead, point-in-time copies are directed to tiers of storage more appropriate for their role, while SQL Server, Oracle, SAP, Exchange, SharePoint and other mission-critical apps are directed to higher speed resources.
DataCore customers can now enjoy cutting-edge recording, analysis and reporting for responsive, continuously available IT services. SANsymphony-V adds the ability to record historical performance for trend analysis. By displaying metrics gathered over time, workload spikes and potential bottlenecks can be easily addressed. There is also a greater emphasis on automating more nuanced aspects of provisioning and advanced storage services. The difficulty here has not been managing overall storage capacity as much as the total number of virtual hosts and volumes to be coordinated in a predictable, repeatable fashion. Admins now can kick off and manage these tasks effortlessly and visualize their state at a glance. Large-scale provisioning is made simple through the use of templates from which virtual disks can be instantiated with the same characteristics (size, profile, availability, etc.).
For business continuity, disaster recovery and off-site data protection, the new release offers faster asynchronous remote replication to meet stringent Recovery Time (RTO) and Recovery Point (RPO) Objectives. It also takes better advantage of lower speed/lower cost wide area networks. Safeguarding against regional catastrophes has raised the urgency for cost-effective remote replication solutions, even for smaller firms.
SANsymphony-V further leverages the Windows Server 2012 platform and Microsoft's latest clustering capabilities for faster, cost-effective unified NAS file serving and SAN disk services. This allows fully redundant, highly available configurations to scale out across multiple nodes and enables rapid switchover of Network File System (NFS) and Common Internet File System (CIFS/SMB) clients despite hardware and facility outages. This powerful combination makes SANsymphony-V a unified NAS/SAN storage platform that is an ideal and affordable choice to support Microsoft Clusters and more demanding Microsoft File Serving environments.
With regard to high-performance, low-latency needs, SANsymphony-V supports the newest 16Gbit Fibre Channel host bus adapters (HBAs) from QLogic. These can be mixed and matched with prior-generation HBAs, as well as iSCSI NICs employed in less demanding areas of the infrastructure. Fibre Channel is often the preferred interface between databases, high-speed apps and pools of hybrid and all-flash arrays virtualized by DataCore. To take advantage of the enhanced SANsymphony storage virtualization platform, please consult with your DataCore-authorized solution provider or your DataCore representative.
Differentiate Cloud Management and Virtualization Related to Cloud Computing
A Contributed Article by Deney Dentel, CEO at
Nordisk Systems
Cloud computing is the means through which users can get computing applications, power, infrastructure, personal data, business processes and anything else they need, wherever they want. It is the set of storage, network, hardware and interfaces combined to provide computing as a service. It can provide the user with software, storage and infrastructure as long as they are connected to the internet.
With cloud computing, the user does not have to buy any extra hardware or software as long as they have a computer and an internet connection. This technology saves people time and money.
Cloud management is the technology designed to operate and monitor applications, services and data residing in the cloud. It helps ensure that cloud computing works efficiently and optimally and is not experiencing problems.
Virtualization is the technology used to create a virtual version of a resource or device, such as a network, operating system or software. It divides a resource into one or more execution environments.
Let's see the differences between cloud management and server virtualization related to cloud computing.
Quick time to Deliver:
Virtualization offers quicker delivery than the cloud. With virtualization, all the hardware and software are right in front of the user, so files can be retrieved directly from local storage. In the cloud, the computer must first connect to the internet, and files are obtained through a browser. If there is a problem with the internet connection, files stored in the cloud cannot be retrieved.
Flexibility isn't all about On-Demand Provisioning:
Even though virtualization offers quick delivery, it creates problems in terms of flexibility. To use virtualization, all the software and hardware have to be with the user at all times, which is not flexible. In the cloud, there is no need for local software or hardware, because everything is stored in the cloud; all that is required is an internet connection.
Expect ROI in Days:
Virtualization can cost less over the long run, but it carries a huge initial cost for the purchase of hardware and software, plus some cost for IT services. In the case of the cloud, there is little initial cost because no hardware is required. But the more a company uses the cloud's services, the higher the costs become; at some point, the cloud may even cost more than owning the hardware.
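A quick back-of-the-envelope comparison makes that crossover point concrete; the figures below are hypothetical, chosen only to illustrate the trade-off:

```python
# Hypothetical break-even sketch: upfront virtualization hardware vs.
# pay-as-you-go cloud. All dollar figures are illustrative assumptions.
hardware_upfront = 50_000      # one-time servers, licenses, IT services
hardware_monthly = 500         # power, cooling, maintenance
cloud_monthly = 2_000          # recurring cloud service fees

for month in range(1, 61):
    on_prem = hardware_upfront + hardware_monthly * month
    cloud = cloud_monthly * month
    if cloud >= on_prem:
        print(f"Cloud spending overtakes on-premise at month {month}")
        break
else:
    print("Cloud stays cheaper for the full 5-year window")
```

With these assumed numbers, cumulative cloud spending passes the owned-hardware total at month 34, which is the effect the paragraph above describes.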
Competitive advantage through provisioning control:
With virtualization, because all the hardware and software are managed by the company itself, the company retains control over services, security and so on. In the cloud, the hardware and software are managed by the service provider, so the company has little control over them.
Managing Enterprise Applications on CloudPlatform
Businesses
have adopted cloud deployment for simple applications such as developer
tools and web servers, but the complexities of deploying and managing
multi-tier enterprise applications in the cloud are still very
cumbersome, time consuming and error-prone. This creates an obstacle for
businesses to run their enterprise application workloads in cloud
environments. AppStack provides a solution for automating and
simplifying the complex process of enterprise application provisioning
and ongoing lifecycle management. AppStack is a software platform that
leverages the infrastructure provisioning capabilities of Citrix
CloudPlatform to provide application services for Enterprises deploying
apps on private clouds, as well as for Service Providers who want to
enable app services for their end users.
To solve this, Appcara has created AppStack, an advanced software platform for provisioning and managing enterprise and distributed applications in public and private cloud computing environments. To automate these capabilities, AppStack uses data-model-driven technology to capture and assemble complex application Workloads in a dynamic configuration repository. This enables a high degree of automation of common tasks such as deploying Workloads, as well as the common change/update cycle that most enterprise applications undergo. AppStack changes the paradigm from tedious, manual server management to a much simpler "point and click" style of management at the Workload level.
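To make the data-model idea concrete, here is a minimal sketch of what a declarative, tiered workload definition might look like; the structure, field names and templates are assumptions for illustration, since Appcara does not publish its schema in this announcement:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a data-model-driven workload definition, in the
# spirit of AppStack's configuration repository. Field names, tiers and
# templates are illustrative assumptions, not Appcara's actual schema.
@dataclass
class Tier:
    name: str
    image: str
    count: int
    depends_on: list = field(default_factory=list)

@dataclass
class Workload:
    name: str
    tiers: list

    def deploy_order(self):
        """Yield tiers so dependencies start before their dependents."""
        done, pending = set(), list(self.tiers)
        while pending:
            tier = next(t for t in pending if set(t.depends_on) <= done)
            pending.remove(tier)
            done.add(tier.name)
            yield tier

app = Workload("crm", [
    Tier("web", "apache-template", count=2, depends_on=["app"]),
    Tier("app", "tomcat-template", count=2, depends_on=["db"]),
    Tier("db", "mysql-template", count=1),
])
for tier in app.deploy_order():
    print(f"provision {tier.count} x {tier.image} for tier '{tier.name}'")
```

Capturing the Workload as data rather than as manual steps is what makes the change/update cycle repeatable: editing the model and redeploying replaces hand-managing individual servers.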
In this session, Appcara and Citrix will demonstrate how AppStack provides simple cloud application management for Citrix CloudPlatform powered by Apache CloudStack:
- Point-and-click launching of application Workloads on to CloudPlatform clouds
- Portal-based active management of apps across their lifecycle
- Integrated App Marketplace with popular commercial and open-source apps that can be deployed into more complex Workloads
- Multi-cloud support with complete Workload portability across public and private clouds
- Paul Speciale (Chief Marketing Officer, Appcara)
- Geralyn Miller (Sr. Alliance Marketing Manager, Citrix)
- Gaurav Chhaunker (Technical Relationship Lead, Alliance Marketing, Citrix)
Condusiv Technologies' V-locity Server Software Drives Industry Initiative to Manage Increase in I/O from Virtualization, BYOD and Big Data—without Additional Hardware
Condusiv Technologies,
the leader in high-performance software optimizing technology,
people and businesses, today announced the results of rigorous, third-party benchmark testing of its newly released V-locity Server optimization software, designed for I/O-intensive applications like SQL Server and Exchange running on physical servers.
Condusiv’s V-locity software architecture, which contains transformational read and write optimization engines, represents the culmination of 31 years of research and development in optimizing and accelerating Windows environments for business.
Condusiv’s goal is to help broaden industry awareness of the benefits of V-locity’s unique approach to optimizing read/write performance at the source, addressing critical I/O performance barriers without adding storage or server hardware. Condusiv Technologies is presenting a sponsored Technology Spotlight by leading IT market research and advisory firm IDC, entitled “The Shift to I/O Optimization to Boost Virtual and Physical Server Performance.” In addition, Westborough, Massachusetts-based openBench Labs released a third party test report revealing that V-locity Server accelerated SQL performance by 55%. Access both papers at http://www.condusiv.com/business/v-locity/server/.
“Virtual environments, cloud services, mobile devices, and Big Data all contribute to the rise in digital information organizations must manage. All of this data not only must be stored, but utilized to drive value for an organization's competitive advantage,” said Jerry Baldwin, CEO of Condusiv Technologies. “As much as the I/O explosion needs to be managed, CIOs find themselves investing 80% of their annual IT budget on maintaining their existing infrastructure and services. This model is broken. V-locity customers typically see 50% or more performance gains on mission-critical applications like SQL and Exchange. That gain also comes with a very unique proposition—a savings of about 80% from their annual hardware capital expense budget.”
The Problem
The I/O problem stems, in part, from the fact that while the number of virtual machine shipments is growing at an average of 25% annually, the number of physical servers shipped is growing at a modest 2–3%. As more workloads are put on virtual servers and heavier workloads are placed on physical servers, this can triple or quadruple the amount of random I/O generated from a single server, burdening the compute infrastructure. Increasingly, the storage controller and disk architectures cannot keep pace with this growing random I/O.
When it comes to I/O and its impact on servers, storage and applications, there are two performance barriers: 1) Windows creating unnecessary I/O traffic by splitting files upon write, which also impacts subsequent reads, and 2) frequently accessed data unnecessarily traveling the full distance from server to storage and back.
These two behaviors create a surplus of I/O that prevents applications from performing at peak speeds. In today's enterprise, the problem is compounded as a multitude of random I/O traffic, from a mass of disassociated data access points, is making requests for storage blocks—random and sequential—to a shared storage system. All this unnecessary I/O leads to extra processing cycles that increase overhead and reduce application, network, and storage performance.
The I/O problem will continue to grow. IDC predicts the amount of information that needs to be managed by enterprises will increase 50 times in the next 10 years, and the number of files will increase 75 times. However, with Moore's law slowing from processor speeds doubling every 18 months to doubling every three years, processor performance will grow only by a factor of eight (three doublings in roughly a decade, 2³ = 8) and storage performance by a factor of four.
Condusiv’s V-locity Server optimization software addresses critical I/O issues by eliminating application bottlenecks without the need to add server or storage hardware. Condusiv's differentiator is that its software resides at the top of the technology stack, eliminating unnecessary I/O at the source, where it originates.
As a first step to I/O optimization, V-locity Server eliminates nearly all unnecessary I/O operations at the operating system level when writing a file, which in turn eliminates unnecessary I/O operations on subsequent reads. Second, V-locity Server caches frequently accessed data within available server memory, without contending with the application for resources, to keep read requests from traveling the full distance to storage and back.
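Conceptually, the caching half of that behavior resembles a read-through LRU cache in front of storage. The sketch below illustrates the general technique only; it is not Condusiv's IntelliMemory implementation, and the capacity is an arbitrary assumption.

```python
from collections import OrderedDict

# Generic read-through LRU cache sketch: frequently read blocks are
# served from memory instead of traveling to storage and back. This
# illustrates the concept only; it is not IntelliMemory itself.
class ReadCache:
    def __init__(self, capacity_blocks: int = 1024):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id, fetch_from_storage):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = fetch_from_storage(block_id)   # the expensive round trip
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data
```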
With V-locity at the top of the technology stack, optimizing I/O at the point of origin, only productive I/O is pushed through the server, network and storage. This approach to I/O optimization complements technologies that may already be running to promote IOPS or latency reduction, including SSDs, flash cards, and SAS, and provides tremendous benefit from the top down. And since I/O is optimized at the source, V-locity Server is network storage-agnostic, providing benefits to advanced storage features like snapshots, replication, thin provisioning and deduplication.
Solution: V-locity Server increased SQL Server 2012 transaction processing rate by 55% and improved response time by 33% without additional hardware.
openBench Labs tested the ability of V-locity Server to optimize I/O in a SQL Server environment. Using SQL Server 2012, openBench tested a mix of a high volume of light-weight SQL select transaction processing (TP) queries, combined with heavy-weight background update queries.
For the SQL Server benchmark testing, openBench simulated 1 to 32 daemon processes (each daemon generating the equivalent of 70 normally queued user processes, so 32 daemons approximate roughly 2,240 concurrent users) issuing queries non-stop. When a real application user interacts with SQL Server, there is lag between queries. In the test scenario, however, the daemon processes issued queries without lag—that is, with no think-time, type-time, or pause-time between query activity.
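A no-think-time load generator of the kind described can be sketched as follows; the query callable and thread counts are placeholders, since openBench's actual harness and query mix are not published here.

```python
import threading, time

# Illustrative no-think-time load generator in the spirit of the test
# described above. The query is a placeholder; openBench's actual
# harness, query mix and connection details are not published here.
def daemon_worker(run_query, stop_event, counter, lock):
    while not stop_event.is_set():
        run_query()                 # issue the next query immediately:
        with lock:                  # no think-time, type-time or pause
            counter[0] += 1

def measure_tps(run_query, daemons: int, seconds: float) -> float:
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=daemon_worker,
                                args=(run_query, stop, counter, lock))
               for _ in range(daemons)]
    for t in threads:
        t.start()
    time.sleep(seconds)
    stop.set()
    for t in threads:
        t.join()
    return counter[0] / seconds

# Example with a dummy query standing in for a real SQL call:
print(measure_tps(lambda: time.sleep(0.001), daemons=32, seconds=5))
```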
In a series of tests, openBench Labs measured the ability of V-locity Server’s IntelliMemory™ to offload I/O on read operations through dynamic caching in order to boost throughput and reduce latency. In addition, openBench examined the ability of IntelliWrite® technology to prevent unnecessary split I/Os, using its intelligence to extend current database files and create new log files as single, contiguous collections of logical blocks.
In a test of SQL Server query processing, openBench Labs benchmark findings revealed that V-locity, on a server running SQL Server, enabled higher transaction per second (TPS) rates and improved response time by reducing I/O processing on storage devices. What’s more, in a SAN- or NAS-based storage environment, V-locity Server reduced I/O stress on multiple systems sharing storage resources. Overall, V-locity Server can improve scalability by reducing average response time and enabling SQL Server to support more users.
“On a server running SQL Server 2012, V-locity Server created an environment that enabled up to 55% higher TPS rates, improved transaction response time by 33%, and enabled SQL to process 62% more transactions at peak transaction rates. As a result, IT has a powerful tool to maximize the ROI associated with any business application initiative driven by SQL Server at the back end,” said Dr. Jack Fegreus, founder of openBench labs.
Condusiv’s V-locity software architecture, which contains transformational read and write optimization engines, represents the culmination of 31 years of research and development in optimizing and accelerating Windows environments for business.
Condusiv’s goal is to help broaden industry awareness of the benefits of V-locity’s unique approach to optimizing read/write performance at the source, addressing critical I/O performance barriers without adding storage or server hardware. Condusiv Technologies is presenting a sponsored Technology Spotlight by leading IT market research and advisory firm IDC, entitled “The Shift to I/O Optimization to Boost Virtual and Physical Server Performance.” In addition, Westborough, Massachusetts-based openBench Labs released a third party test report revealing that V-locity Server accelerated SQL performance by 55%. Access both papers at http://www.condusiv.com/business/v-locity/server/.
“Virtual environments, cloud services, mobile devices, and Big Data all contribute to the rise in digital information organizations must manage. All of this data not only must be stored, but utilized to drive value for an organization's competitive advantage,” said Jerry Baldwin, CEO of Condusiv Technologies. “As much as the I/O explosion needs to be managed, CIO’s find themselves investing 80% of their annual IT budget on maintaining their existing infrastructure and services. This model is broken. V-locity customers typically see 50% or more performance gains on mission-critical applications like SQL and Exchange. That gain also comes with a very unique proposition—a savings of about 80% from their annual hardware capital expense budget.”
The Problem
The I/O problem stems, in part, from the fact that while the number of virtual machine shipments is growing at an average of 25% annually, the number of physical servers shipped is growing at a modest 2–3%. As more workloads are put on virtual servers and heavier workloads are placed on physical servers, this can triple or quadruple the amount of random I/O generated from a single server, burdening the compute infrastructure. Increasingly, the storage controller and disk architectures cannot keep pace with this growing random I/O.
When it comes to I/O and its impact on servers, storage and applications, there are two performance barriers: 1) Windows creating unnecessary I/O traffic by splitting files upon write, which also impacts subsequent reads, and 2) frequently accessed data unnecessarily traveling the full distance from server to storage and back.
These two behaviors create a surplus of I/O that prevents applications from performing at peak speeds. In today’s enterprise, the problem is compounded as I/O traffic from a mass of disassociated data access points issues requests for storage blocks, both random and sequential, against a shared storage system. All of this unnecessary I/O adds processing cycles that increase overhead and reduce application, network, and storage performance.
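To make the first barrier concrete, the sketch below contrasts the two file-growth patterns in plain Python. It illustrates the general principle only, not Condusiv’s implementation: the preallocation call shown (os.posix_fallocate) is POSIX-specific and merely stands in for the Windows-level allocation behavior a technology like IntelliWrite would use to keep database and log files contiguous.

import os

CHUNK = 64 * 1024          # 64 KiB application writes
TOTAL = 64 * 1024 * 1024   # 64 MiB target file size

def grow_incrementally(path):
    # Extending the file one small write at a time lets the filesystem
    # place each extension wherever free space happens to be, so a single
    # logical read later may be split across several scattered extents.
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(b"\0" * CHUNK)

def grow_preallocated(path):
    # Reserving the full size up front gives the filesystem a chance to
    # choose one contiguous run of blocks, so later sequential reads map
    # onto single, unsplit physical I/Os.
    with open(path, "wb") as f:
        os.posix_fallocate(f.fileno(), 0, TOTAL)  # POSIX-only stand-in
        for _ in range(TOTAL // CHUNK):
            f.write(b"\0" * CHUNK)

if __name__ == "__main__":
    grow_incrementally("fragmented.dat")
    grow_preallocated("contiguous.dat")

Fewer extents per file means fewer split I/Os for the same logical request, which is the write-side saving described above.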
The I/O problem will continue to grow. IDC predicts the amount of information enterprises must manage will increase 50 times over the next 10 years, and the number of files will increase 75 times. However, with Moore’s Law slowing from processor speeds doubling every 18 months to doubling every three years, processor performance will grow only by a factor of eight over that period, while storage performance will grow by a factor of four.
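As a back-of-the-envelope check on those figures (our arithmetic, not IDC’s model): growth under a fixed doubling period of T years compounds as 2^(N/T) over N years, so a factor of eight corresponds to three three-year doublings, and a factor of four implies roughly a five-year doubling period for storage.

# Compound growth under a fixed doubling period:
# growth over n years = 2 ** (n / doubling_period_years)
def growth(years, doubling_period):
    return 2 ** (years / doubling_period)

print(growth(9, 1.5))   # 64x: the historical 18-month doubling pace
print(growth(9, 3))     # 8x: processors now doubling every three years
print(growth(10, 5))    # 4x: consistent with the storage projection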
Condusiv’s V-locity Server optimization software addresses critical I/O issues by eliminating application bottlenecks without the need to add server or storage hardware. Condusiv's differentiator is that its software resides at the top of the technology stack, eliminating unnecessary I/O at the source, where it originates.
As a first step in I/O optimization, V-locity Server eliminates nearly all unnecessary I/O operations at the operating system level when writing a file, which in turn eliminates unnecessary I/O operations on subsequent reads. Second, V-locity Server caches frequently accessed data in available server memory, without creating resource contention for applications, to keep read requests from traveling the full distance to storage and back.
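The read-side idea can be sketched as an ordinary least-recently-used (LRU) block cache sitting in front of storage. The class below is a toy illustration of that general technique; the names and interface are invented for this example and are not Condusiv’s IntelliMemory API, which additionally manages cache sizing dynamically so it does not compete with applications for memory.

from collections import OrderedDict

class BlockReadCache:
    """Toy LRU cache for fixed-size block reads (illustrative only)."""

    def __init__(self, capacity_blocks, read_from_storage):
        self.capacity = capacity_blocks
        self.read_from_storage = read_from_storage  # fallback, e.g. an actual disk read
        self.cache = OrderedDict()                  # block_id -> bytes

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)     # mark block as recently used
            return self.cache[block_id]          # hit: served from RAM, no storage I/O
        data = self.read_from_storage(block_id)  # miss: full round trip to storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict the least recently used block
        return data

Every hit is a read that never reaches the SAN or NAS, which is the mechanism behind the reduced I/O stress on shared storage reported in the test results below.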
With V-locity at the top of the technology stack, optimizing I/O at its point of origin, only productive I/O is pushed through the server, network and storage. This approach complements technologies that may already be in place to increase IOPS or reduce latency, including SSDs, flash cards, and SAS, and delivers its benefit from the top down. And because I/O is optimized at the source, V-locity Server is network-storage-agnostic, benefiting advanced storage features such as snapshots, replication, thin provisioning and deduplication.
Solution: V-locity Server increased SQL Server 2012 transaction processing rate by 55% and improved response time by 33% without additional hardware.
openBench Labs tested the ability of V-locity Server to optimize I/O in a SQL Server environment. Using SQL Server 2012, openBench ran a high volume of lightweight SQL SELECT transaction-processing (TP) queries combined with heavyweight background update queries.
For the SQL Server benchmark testing, openBench simulated 1 to 32 daemon processes (each daemon generating the equivalent of 70 normally queued user processes) issuing queries non-stop. When a real application user interacts with SQL Server, there is a lag between successive queries. In the test scenario, however, the daemon processes issued queries without lag; that is, with no think-time, type-time, or pause-time between queries.
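A minimal closed-loop load generator in that spirit might look like the sketch below. openBench’s actual harness is not published here, so the structure, the thread counts, and the stand-in execute_query function are all assumptions for illustration; a real run would issue the benchmark’s SELECT mix against SQL Server (for example via pyodbc) inside execute_query.

import threading, time

def run_daemon(execute_query, stop, counter, lock):
    # Closed-loop driver: issue the next query the instant the previous
    # one returns -- no think-time, type-time, or pause-time.
    while not stop.is_set():
        execute_query()
        with lock:
            counter[0] += 1

def measure_tps(execute_query, daemons=32, seconds=60):
    stop, lock, counter = threading.Event(), threading.Lock(), [0]
    threads = [threading.Thread(target=run_daemon,
                                args=(execute_query, stop, counter, lock))
               for _ in range(daemons)]
    for t in threads:
        t.start()
    time.sleep(seconds)
    stop.set()
    for t in threads:
        t.join()
    return counter[0] / seconds  # completed transactions per second

if __name__ == "__main__":
    # Stand-in workload: replace with a real SQL Server query for an actual test.
    print(measure_tps(lambda: time.sleep(0.001), daemons=8, seconds=5))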
In a series of tests, openBench Labs measured the ability of V-locity Server’s IntelliMemory™ to offload I/O on read operations through dynamic caching in order to boost throughput and reduce latency. In addition, openBench examined the ability of IntelliWrite® technology to prevent unnecessary split I/Os, using its intelligence to extend current database files and create new log files as single, contiguous collections of logical blocks.
In the SQL Server query-processing test, openBench Labs’ benchmark findings revealed that V-locity, on a server running SQL Server, enabled higher transactions-per-second (TPS) rates and improved response time by reducing I/O processing on storage devices. What’s more, in a SAN- or NAS-based storage environment, V-locity Server reduced I/O stress on multiple systems sharing storage resources. Overall, V-locity Server can improve scalability by reducing average response time and enabling SQL Server to support more users.
“On a server running SQL Server 2012, V-locity Server created an environment that enabled up to 55% higher TPS rates, improved transaction response time by 33%, and enabled SQL to process 62% more transactions at peak transaction rates. As a result, IT has a powerful tool to maximize the ROI associated with any business application initiative driven by SQL Server at the back end,” said Dr. Jack Fegreus, founder of openBench Labs.
Cray Leverages Intel Hadoop For Big Data in HPC
Cray’s new offerings will combine its CS300 supercomputer clusters with Intel’s Hadoop distribution.
Cray officials are adding Intel’s Hadoop distribution to their growing list of supercomputing solutions for the burgeoning big data market. Later this month, Cray will launch cluster supercomputers for Hadoop applications that combine the vendor’s CS300 supercomputers with Intel’s Hadoop distribution, a Linux operating system and Cray’s Advanced Cluster Engine (ACE) management software. The result will be a turnkey computing infrastructure that enables organizations to better leverage Hadoop, according to Bill Blake, senior vice president and CTO at Cray.
“More and more organizations are expanding their usage of Hadoop software beyond just basic storage and reporting,” Blake said in a statement. “But while they’re developing increasingly complex algorithms and becoming more dependent on getting value out of Hadoop systems, they are also pushing the limits of their architectures.”
“Organizations can now focus on scaling their use of platform-independent Hadoop software, while gaining the benefits of important underlying architectural advantages from Cray and Intel,” Blake said.
Big data is a growing trend in the business world, with massive amounts of data being created by a wide range of connected devices, machines and sensors. Intel officials have said that every 11 seconds, a petabyte of data is created around the world. Hadoop, which comprises about a dozen open-source projects, is designed to let businesses more easily store huge amounts of data, analyze it and leverage it in ways that benefit both the organizations and their users. For example, businesses can use it to better understand what their customers want, medical researchers can more quickly discover life-saving drugs, and communities can improve their environments by better managing traffic patterns.
Intel in February unveiled the Intel Distribution for Apache Hadoop, its own distribution of the open-source technology. The giant chip maker had been working with Hadoop since 2009, but officials said it was important to offer a distribution optimized for features on its processors, such as Advanced Encryption Standard New Instructions (AES-NI) to accelerate encryption in the Hadoop Distributed File System. It’s also part of a larger effort by Intel to grow its role in the data center beyond server chips. Intel has been building up its software capabilities via in-house development and acquisitions, and while keeping parts of its Hadoop distribution open and interoperable with other Hadoop distributions, the company will keep some features, including management and monitoring capabilities, to itself. Intel will not open source such software as Intel Manager for Apache Hadoop, for configuration and deployment, or Active Tuner for Apache Hadoop, a tool for improving the performance of compute clusters running the distribution.
Cray officials, in announcing their new Hadoop clusters, noted the strengths of Intel’s distribution, including greater security, improved real-time handling of data, and enhanced performance throughout the storage architecture. Cray is adding support for InfiniBand and improved resource management, officials said.
The CS300 series of supercomputers, which Cray inherited when it bought rival Appro for $25 million in November 2012, comes with an integrated high-performance computing (HPC) software stack and software tools that are compatible with most open-source and commercial compilers. That will enable organizations to leverage Intel’s Hadoop distribution, according to Girish Juneja, CTO and general manager of Intel’s Big Data Software unit. “Combining these features with the highly innovative HPC technologies in Cray systems will create a compelling solution for organizations with the most demanding Hadoop requirements,” Juneja said in a statement.
Cray’s Hadoop supercomputer clusters, which offer energy-efficient air- or liquid-cooled architectures, are the latest move by the systems vendor to build out its portfolio of big data products. The company also offers Cray Sonexion storage systems and YarcData’s Urika appliance for graph analytics.