Sunday, December 15, 2013

Nokia Lumia 525 Arrives, Following World's Most Popular Windows Phone

The Nokia Lumia 525, a variant of the 520, costs $199, has swappable, candy-colored shells and may be the Moto G's challenger in emerging markets.
Nokia has quietly launched the Lumia 525, a low-cost, feature-rich Windows Phone 8 smartphone that may be the Motorola Moto G's closest competition.

The 525 is a variant—not a replacement, says Nokia—of the Lumia 520, the best-selling Windows Phone smartphone in the world, according to a Nov. 27 report from AdDuplex. It features a 4-inch super-sensitive touch screen (meaning it responds to fingernail taps and can be used with gloves on), a dual-core Snapdragon S4 processor and a 5-megapixel rear camera. Like the 520, it doesn't have flash but can shoot HD video at 720p. 

There's 1GB of RAM on board, 8GB of mass memory, Nokia's suite of HERE Maps, including for transit and driving, and a microSD card slot to support up to 64GB of memory. Further enhancing the memory situation, Microsoft is including 7GB of free SkyDrive cloud storage.

The phone, which has a swappable, candy-colored polycarbonate shell, comes in orange, yellow, white and black, and measures 4.7 by 2.5 by 0.39 inches.

Again like the 520, the 525 has access to apps including Instagram and Vine.

Nokia released the 525 out of Singapore priced, unlocked, at U.S. $199—same as the Lumia 520. Details about when and how quickly it will reach the rest of the world are still to come. (Nokia says it has a deal to sell the 525 to China Unicom and a 526 to China Mobile.)

The Windows Phone Market
Google's Android controlled 82 percent of the global smartphone OS market share as of the third quarter of 2013, up from 73 percent a year earlier, Gartner announced Nov. 14. But Windows Phone still grew its numbers, from a 2.3 percent share to 3.6 percent, on unit sales growth of 123 percent, earning Microsoft the title of "winner of this quarter," said Gartner analyst Anshul Gupta.

He added that Microsoft's purchase of Nokia's devices and services business is expected to "unify the effort and help drive appeal of the Windows ecosystem."

AdDuplex, which created its report around a sampling of 2,000-plus Windows Phone apps, found the Lumia 520 to account for nearly 27 percent of all active Windows Phone handsets. The Lumia 920 accounts for 8.8 percent and the Lumia 620 for 8.6 percent.

Nokia is, by far, the largest backer of Windows Phone handsets, selling 90 percent of the devices running the OS. HTC, in second place, has just a 7 percent share, followed by Samsung and Huawei, with less than 2 percent each.

In the United States, the leading Windows Phone smartphone is the Lumia 521; combined with the Lumia 520, they have a 30 percent share.

A Lumia 525 in the U.S. could be very good news for Nokia, which during the third quarter sold 63 million phones, thanks to its Lumia and Asha lines, proving its arrow is once again pointing in the right direction.

Low-Cost Phones for the Holidays
Beating the Lumia 525 to the U.S. market, Motorola announced Nov. 26 that the Moto G (or at least the GSM version) has come to town, weeks ahead of its promised early-January delivery.

A lower-cost follow-up to the Moto X, the Moto G has a 4.5-inch display with more pixels per inch than the Apple iPhone 5S; a processor that enables it to perform some tasks faster than the Samsung Galaxy S 4; a battery that can outlast any popular competitor (24 hours, says Motorola); and a price tag (starting at $179) that's one-third or less of the price of an iPhone or GS4.

"It's not fair that these people have to buy an old or underpowered phone," Motorola CEO Dennis Woodside said during his Nov. 13 introduction of the Moto G, speaking of the 500 million people expected to buy a sub-$200 device in 2014. "With Moto G, we're giving people around the world a better choice."

Tablet Growth Expected to Slow, With Android in the Lead: IDC

IDC forecasts that tablets running Google's Android operating system will constitute 60.8 percent of the worldwide market in 2013.
Worldwide tablet shipments are expected to reach 221.3 million units in 2013, down slightly from a previous forecast of 227.4 million but still 53.5 percent above 2012 levels, according to IT research firm IDC's Worldwide Quarterly Tablet Tracker.

The tracker, which provides market size, vendor share and forecasts for hundreds of technology markets in more than 100 countries around the globe, predicted tablet shipment growth will slow to 22.2 percent year over year in 2014, to a total of 270.5 million units.

The firm forecasted that tablets running Google's Android operating system will constitute 60.8 percent of the worldwide market in 2013, a figure that would slide back to 58.8 percent by 2017.

Apple's iOS tablets, including the iPad Air and the iPad mini, are projected to represent 35 percent of the market in 2013, and that share is forecast to fall to 30.6 percent by 2017.

"In some markets consumers are already making the choice to buy a large smartphone rather than buying a small tablet, and as a result we've lowered our long-term forecast," Tom Mainelli, research director for tablets at IDC, said in a statement. "Meanwhile, in mature markets like the U.S. where tablets have been shipping in large volumes since 2010 and are already well established, we're less concerned about big phones cannibalizing shipments and more worried about market saturation."

By 2017, annual market growth will slow to single-digit percentages and shipments will peak at 386.3 million units, down from the previous forecast of 407 million units. The report noted that Windows-based tablets are not expected to steal share from tablets running iOS and Android until the latter part of the forecast.

"For months, Microsoft and Intel have been promising more affordable Windows tablets and 2-in-1 devices," Jitesh Ubrani, research analyst for IDC's Worldwide Tablet Tracker, said in a statement. "This holiday season, we expect a huge push for these devices as both companies flex their marketing muscles; however we still don't expect them to gain much traction. We're already halfway through the holiday quarter, and though there have been some relatively high-profile launches from the likes of Dell, HP, and Lenovo, we've yet to see widespread availability of these devices, making it difficult for Windows to gain share during this crucial period."

The rise of large phones, often referred to as "phablets," could well push consumers back toward larger tablets as the difference between a 6-inch smartphone and a 7-inch tablet isn't great enough to warrant purchasing both.

Apple's launch of the iPad Air, a much thinner and lighter version of its 9.7-inch product, could herald another market transition back toward larger screens, reversing the trend toward smaller tablet devices the market has seen for the past two years.


ManageEngine to Launch Social Networking Software for IT Teams in Large Enterprises and Data Centers

ManageEngine, the real-time IT management company, today announced it is launching Social IT Plus, private social networking software for IT departments. Social IT Plus is on-premise software that helps IT shops at large enterprises and data centers collaborate in real time by establishing a one-stop, cascading wall for real-time display of IT infrastructure health. A live demo of Social IT Plus is available online at http://demo.socialitplus.com/.
Large enterprises and data centers expand their IT infrastructures rapidly, leveraging technologies such as virtualization, software-defined networking (SDN) and the software-defined data center (SDDC). This demands a communication platform superior to email: one that is dynamic and can pull valuable information from IT management tools in real time.
"We have bridged the long-standing gap between IT management tools and a communication platform with Social IT Plus," said Dev Anand, director of product management at ManageEngine. "Now, admins have a way to collaborate on issues in real time and improve the mean-time-to-repair."
Inside Social IT Plus
Social IT Plus is the downloadable, on-premise version of the company's proven, SaaS-based social networking service for IT teams. To prove the power of Social IT Plus, ManageEngine has integrated it with OpManager, the data center infrastructure and network monitoring software that can monitor 50K devices from a single installation. The integration lets IT teams share a particular page - such as a device snapshot page or alarm details page of OpManager - on the Social IT Plus wall to help the team discuss the device's performance, share troubleshooting steps and fix issues without wasting time.
Social IT Plus reduces communication barriers between IT team members. The social network is very simple to use and unlike email, provides a threaded, discussion-like UI that makes it easy to follow extended conversations that include multiple participants. IT staffers can start discussions, share videos and articles, and trigger a script to post its status via REST APIs offered by Social IT Plus. Leveraging these APIs, IT admins can integrate it with IT management solutions from HP, IBM, CA and Microsoft. Alarms and performance reports from these solutions can be shared on the wall.
Social IT Plus will be launched at the DCIM Meetup in Dallas, which many data center admins and IT admins from large enterprises will attend. The DCIM Meetup is a social event sponsored by ManageEngine that gives data center admins and IT admins an opportunity to socialize, discuss their day-to-day, IT-related problems and share best practices.
Pricing and Availability
Social IT Plus will be available for download beginning December 10, 2013. Social IT Plus prices start at $495 for two technicians per year. Additional technician licenses cost $95 per user per year. Users can register at http://www.manageengine.com/social-IT/download.html to receive the download link.
A six-month free license for Social IT Plus and OpManager Large Enterprise Edition will be provided for all the attendees of DCIM Meetup, Dallas.

Citrix Eyes Growth Opportunity for Virtualizing 3D CAD Apps

Citrix today announced new customer deployments of the market-leading Citrix XenDesktop® with HDX 3D Pro to deliver high-end 3D apps to designers, engineers and workers all along the product design chain. Citrix global customers Knightec AB and Wiha Werkzeuge GmbH are using XenDesktop to securely and centrally deliver CAD applications to dispersed design teams, with the stunning visual performance required for this type of intensive design and engineering work. Since the introduction of HDX 3D Pro graphics app acceleration technologies, Citrix has seen rapid customer adoption led by organizations across North America, Europe and Japan in the heavy manufacturing, global engineering and energy sectors, which have deployed XenDesktop to host and securely deliver design and engineering applications from the datacenter.
"Supporting the CAD application market with our desktop and app virtualization solutions represents a significant growth opportunity for Citrix. Today, there are nearly 15 million CAD users that design products and another 100 million users require access to design data to view and edit these designs, representing a significant new target user base for Citrix," said Calvin Hsu, vice president of product marketing, Desktop and Apps at Citrix.
In today's global business environment, design and manufacturing organizations require the ability to collaborate and manage design lifecycles effectively with offshore, mobile and remote employees across the globe. At the same time, they have to maintain tight security and control over valuable intellectual property as the workforce becomes more mobile and distributed. XenDesktop with HDX 3D Pro is the only solution that can support high-end designers, engineers and a broad range of employees working with 3D data, while delivering real-time, remote collaboration. No other solution can offer the same performance, scale and graphics compatibility. Competing solutions are architected such that they cannot keep up with the latest versions of OpenGL, OpenCL, CUDA and DirectX, which limits customers to certain applications and legacy products. In addition, these solutions offer only one method of GPU sharing, which does not leverage the full performance of GPU virtualization designed by graphics market leader NVIDIA. Professional graphics applications such as CAD, visualization and analysis rely on the parallel processing power of graphics processing units (GPUs) to visually render and manipulate large data models with pixel-perfect quality.
Global Organizations Virtualize CAD Apps
Knightec has a vision to become the Nordic region's leading engineering consulting firm in product and production development. The firm's 350 engineers collaborate from throughout Sweden, combining technical skill with business development expertise to create new solutions and increase profitability for its customers. Knightec is delivering product design projects faster and at a lower cost by empowering engineers to work and collaborate from anywhere using Citrix technologies. Prior to deploying XenDesktop, design data residing on local workstations was shared between engineers via email and USB thumb drives. Traditional remote access tools performed poorly with design apps, forcing users to make simple design edits only in the office.
"We needed to provide our engineers with access to powerful workstations with CAD abilities from anywhere in the world, without having to install and support CAD applications over our network. We evaluated solutions from VMware and Citrix. XenDesktop with HDX 3D Pro was selected because of its better performance with our design and simulation applications and shared GPU roadmap with NVIDIA. Now engineers in any location can use their virtual desktops to access CAD applications and collaborate around project files. This also allows greater efficiency and flexibility for the firm because we can easily assign people to additional projects, working with different colleagues in many other locations," said Jörgen Norman, head of IT at Knightec.
Germany's Wiha Premium Tools is one of the world's leading manufacturers of precision hand tools for use in industry and skilled trades. With more than 850 employees that produce over 4,000 styles of precision tools, the company has manufacturing facilities in Germany, Switzerland, Poland and Vietnam to meet the demands of customers. In order to collaborate and design across these multiple locations, Wiha uses XenDesktop with HDX 3D Pro technology to support remote locations securely, while enabling mobile workstyles for CAD developers. Prior to using XenDesktop with HDX 3D Pro, users requiring high computing power and rapid graphics performance were provided with high-performance workstations running Siemens Solid Edge and other developer tools running locally. Recently, a new challenge emerged with plans for a new development site two hours away.
"The development of a completely independent environment would have caused high costs and considerable administrative effort, and synchronizing data regularly with the head office would have taxed the network connection enormously," Wiha IT manager Siegfried Disch said.
"Citrix technology opens many new options for us," explained Disch. "For example, we can recruit CAD construction specialists who live in a different city or who want to work from their home office. Freelancers and development sites abroad can also be connected simply and securely. Wherever our users work, our invaluable know-how always remains secure within our datacenter."
Industry-leading Graphics Acceleration Technologies
HDX 3D Pro in XenDesktop is a set of graphics acceleration technologies designed to deliver graphics-intensive apps and desktops using deep compression technologies that significantly reduce bandwidth requirements. HDX 3D Pro leverages server-based GPUs specifically designed for achieving the smoothest visual performance on a dedicated GPU per virtual machine basis. New NVIDIA GRID virtual GPU (vGPU) technology introduced earlier this year provides a more cost-effective solution for delivering virtualized 3D professional graphics apps by supporting more users per host without sacrificing graphics performance. Citrix XenServer® is the first hypervisor to integrate this technology from NVIDIA, and it will be available on December 16, 2013.
"Remote users can now enjoy uncompromised virtualized desktops and graphics-intensive applications," said Jeff Brown, vice president, Professional Visualization and Design business, NVIDIA. "With NVIDIA GRID vGPU virtualization technology integrated into XenServer and XenDesktop, designers and engineers can work wherever they are, with the highest performance, stability and compatibility."

Ravello Systems Demonstrates up to 2x Increase in Application Performance on AWS With Nested Virtualization

Ravello Systems, the industry’s first Cloud Application Hypervisor provider, has demonstrated that with virtual machine consolidation in the cloud it can increase Amazon Web Services (AWS) performance up to 2x.
Virtualization today has become the de facto platform for running applications, with Forrester Research stating as many as six out of 10 workloads are running in virtual machines. The next generation of virtualization from Ravello uses nested virtualization to significantly improve enterprise agility. This includes encapsulation at the multi-VM application level with software defined networking and storage, as well as abstraction of the underlying cloud.
Ravello Demonstrates up to 2x Better AWS Performance with Nested VM Consolidation
Much like traditional hypervisors can consolidate multiple virtual machines on a single physical server, Ravello’s Cloud Application Hypervisor can consolidate multiple guest virtual machines on a larger host virtual machine. Additionally, Ravello has demonstrated that the performance of a multi-VM application can be increased by a factor of up to 2x in AWS with VM consolidation. This is primarily due to a combination of two factors, networking within the application and pooling of resources. The full details of this performance testing have been published here.
Nested Virtualization Allows for Agile Development and Test
Nested virtualization technology enables enterprises to create agile application development and test environments in the cloud. This is delivered through a hybrid cloud model that utilizes capacity on demand from private and public clouds without making any changes to the application. For example, a complex VMware workload can be run unmodified on AWS – with everything including the VMs and networking staying exactly the same. Using this technology, enterprises can now create accurate, representative test environments in the public cloud without having to modify their existing on-premise applications.
“Just like the early days of VMware which started with development and test servers going from physical to virtual, enterprises today are using Ravello HVX to take their development and test environments to the next level - from virtual to cloud,” said Alex Fishman, virtualization CTO, Ravello Systems. “The key to delivering high performance was to build a nested hypervisor specifically designed for the cloud since it’s a very different environment. Traditionally virtualization adds overhead but here at Ravello we are proving that with our new consolidation capabilities, adding another hypervisor may actually improve performance in the cloud for some workloads.”

DirectNetworks Selects ASG's CloudFactory to Advance its Clients to the Cloud

ASG Software Solutions today announced that DirectNetworks, Inc., a privately held, IT solution provider, has chosen ASG’s end-to-end cloud management suite, CloudFactory, to create its cloud-based workplace, DirectCloud Webtop.
“The CloudFactory solution was not only able to address all of our pain points, but, most importantly, everything was completed within our budget and roll-out timeframe of just three months,” said Colin Mehlum, partner and business development, DirectNetworks, Inc. “We recognized that cloud computing was quickly becoming an essential element of any successful, modern IT strategy, so the development of Webtop was crucial to both our customers and our survival as a company. But, without CloudFactory, Webtop simply would never have been possible. ASG really treated us like a partner in helping us reach our goals, and its innovative cloud suite has been a crucial element of our company’s vision and success.”
With CloudFactory, ASG was able to help DirectNetworks incorporate a single workspace for users to access and provision all of their applications and services, while advanced service orchestration and automated infrastructure lifecycle management is conducted in the background within DirectNetworks’ data centers. Now, DirectNetworks can provide its clients with all of the cost savings, mobility and productivity gains that are inherent to the cloud. And, by including cloud-based storage, servers and virtual desktops, DirectNetworks can help its clients future-proof their operations while increasing data accessibility and reducing costly hardware, software and in-house support.
“We are just as invested as DirectNetworks in helping them grow their business, so we are pleased to see the significant role CloudFactory has played in the success of Webtop,” said Victor Paul Fiss, vice president of cloud delivery, ASG Software Solutions. “DirectNetworks wants to shoulder its clients’ burden of trying to constantly adapt to increasing technology advancements. And, to know that CloudFactory played a role in that goal, and is helping advance companies’ computing infrastructure to the cloud, is an achievement in and of itself.”

Software Efficiency, Multi-Hypervisor and Software-Defined Challenges to Be Seen in 2014

In 2013, organizations turned their attention to the problem of over-provisioning, and explored the levers that enabled them to move to the next level of operational maturity and efficiency within virtualized infrastructures, and of course, the cloud.
In 2014, the drive to efficiency will continue, but will not just focus on reducing hardware spend.  Significant savings will also be had through increasing the density of expensive operating system and application licenses.  Many organizations have already adopted per-host or per-processor licensing models for this very reason, but have not yet optimized the density of the VMs to realize the savings.  Some forward-thinking organizations have already placed a big focus on this, but the coming year will see this happen on a much broader scale as it becomes clear that a lot of money can be saved by simply moving VMs around.
This software efficiency theme will also extend to the hypervisors themselves, which have long been a sore point from a cost perspective.  Many organizations will seek to reduce the unit cost of hosting workloads, and at the same time avoid vendor lock-in, by considering different hypervisor alternatives.  Hyper-V environments will become more commonplace, and KVM will start to gain traction as it rides in on the coat tail of cloud technologies like OpenStack.  As with any adoption cycle, these trends will start in dev/test environments, but will have an accelerated path to production as the ecosystems build out around them and management vendors throw more weight behind them.  Those organizations that remain flexible and utilize the automation tools that optimize workload placements will continue to see efficiency gains.
Additionally in 2014, as more and more infrastructure components become "software-defined," organizations will realize that the more degrees of freedom there are to define things through software, the more difficult it becomes to figure out how to define them.  We have had a glimpse of this already with virtualization, which is really another name for software-defined servers.  Although it created the ability to place VMs on different servers and flexibly define their resource allocations, it ended up causing a bit of a mess as VMs were inevitably put in the wrong places and made the wrong sizes.  In recent years management software has emerged to control this flexibility, and by analyzing all of the operational metrics and constraints it became possible to optimize placements and allocations, effectively driving up efficiency and reducing operational risk.  The need for this kind of approach will expand to a broader scale, and from it will emerge software to define the software-defined data center.

Tuesday, June 18, 2013

CiRBA's New API Adds VM Placement Brain to Cloud Management Platforms Such as OpenStack

CiRBA, a leading provider of capacity transformation and control software, today announced a new API that enables organizations to connect their cloud management platforms to CiRBA in order to optimize new workload placements within internal clouds. CiRBA’s analytics determine the optimal placement for VMs within cloud infrastructure, both at the environment and server level, reducing the risk of capacity shortfalls and driving up VM density by an average of 48 percent. The new API also provides access to CiRBA’s bookings functionality, allowing users to reserve capacity for future needs using existing self-service portals.
Many organizations building internal clouds are looking to rely on cloud management platforms such as OpenStack, but face a challenge in bringing together all of the required capabilities. Cloud management platforms are designed to provision VMs, but do not have the ability to analyze capacity in order to determine the best environment to host a workload in or the best host within an environment to start an instance on. As a result, hot-spots and imbalances in resource utilization will occur in the infrastructure, creating both performance issues and inefficient use of capacity.
CiRBA’s new workload routing API enables cloud management platforms to send placement requests to CiRBA, and to receive an answer back that contains the best possible environment and host-level placement for a new workload. This answer is based on CiRBA’s industry leading analytics, which consider a broad set of factors including utilization patterns, licensing requirements, capacity availability, policy constraints and technical considerations. This brings a new level of automation, enabling cloud management platforms to dynamically leverage CiRBA’s analytics in order to intelligently process workload placement requests, which is often one of the biggest gaps in internal cloud implementations. Integrating CiRBA into this process ensures high efficiency while reducing operational risks, allowing more instances to fit into each environment while at the same time making existing instances work better. This new capability complements CiRBA’s standard control capabilities, which continuously “auto-corrects” cloud infrastructure through ongoing rebalancing and instance right-sizing.
The new API also enables capacity reservations to be made for new VMs through CiRBA’s Bookings Management System. If capacity is not required immediately, this ensures it is held for the workload until it is ready to be deployed.
“We see increased focus on maturing internal cloud operations in our customers,” said Andrew Hillier, CiRBA CTO and co-founder. “For self-service requests, there is a dire need for more intelligence in determining where these workloads go, and how many resources must be assigned to them. As internal cloud implementations scale and there are multiple environments, SLA levels or internal customers, it becomes unworkable to continue to place workloads based on simplistic or random algorithms.”
Continues Hillier: “Even more importantly, in enterprise environments, there is the recognition that instantaneous provisioning may not be as important as the ability to reserve capacity. Both are forms of self-service, but the ability for users to reserve capacity in advance is much more consistent with the way these enterprises work, where last-minute, unplanned use of capacity is the exception, not the norm.”
CiRBA’s new API ships on June 14, 2013.

Bromium Introduces vSentry 2.0 for Endpoint Security

Bromium, Inc., a pioneer in trustworthy computing, today announced the general availability of Bromium vSentry 2.0. Powered by its Xen-based Bromium Microvisor, vSentry 2.0 makes endpoints secure by design, enabling enterprises to embrace key IT trends such as mobility and collaboration without risk of attack from insecure networks, the web, and malicious documents or media.
vSentry uses Intel CPU features for virtualization and security to invisibly hardware-isolate each Windows task that accesses the Internet or untrusted documents. Its architecture guarantees that all malware will be defeated and automatically discarded. In addition, vSentry automates live attack visualization and analysis – giving security operations teams unparalleled insight into attacks when they occur.
“The Intel 4th generation Core vPro platform offers enterprises a very secure endpoint architecture as well as a rich set of features that enhance endpoint security, including AES-NI, Data Execution Prevention (DEP) and Intel Platform Protection Technology with OS Guard,” said Rick Echevarria, vice president and general manager of Intel’s Business Client Platforms Division. “Bromium vSentry uses Intel VT-x, VT-d and EPT to hardware-isolate operating system tasks, and Intel AES-NI, DEP, and OS Guard to further protect the endpoint. Bromium vSentry advances endpoint security enabling enterprises to secure mobile endpoints and empowers employees to safely access networks and media.”
The enhancements in vSentry 2.0 focus on three important requirements for enterprise deployments – secure mobility, safe collaboration, and improved manageability. The new release also delivers improved overall performance and end-user experience.

Secure Mobility
Mobile users need to access enterprise applications and the web from untrusted networks that could be used to attack the endpoint. vSentry 2.0 hardware-isolates each user task that accesses an untrusted network, blocking all attacks from captive portals, the web and untrusted content. It guarantees the security of mobile endpoints that are used to remotely access enterprise SaaS and web applications, and virtual desktops. User credentials and application data delivered to the endpoint are secure at all times.

Safe Collaboration
Employees need to securely interact and collaborate with content originating both within and outside the enterprise, which requires them to access untrustworthy content from removable media, the web, email and social applications. Traditional approaches place endpoint security in the user’s hands by making them remove security restrictions from, or “trust,” content before interacting with it. If a user mistakenly trusts a malicious document, an attacker can compromise the endpoint. vSentry 2.0 lets users access and edit content without ever having to trust it, enabling them to be productive without risk.

Improved Manageability
The Bromium Management Server (BMS) that comes with vSentry now provides granular monitoring of deployment progress of vSentry endpoint agents, as well as automated gathering of critical information – such as missing software pre-requisites and installation progress. BMS delivers centralized policy management – and now includes simplified policy creation, editing, and distribution, event aggregation and reporting, as well as dashboards for monitoring key metrics. These improvements help simplify and accelerate enterprise-wide deployments of vSentry.
Bromium vSentry 2.0 secures both 32- and 64-bit versions of Windows 7, as well as virtual desktops delivered with Microsoft Remote Desktop Services, Citrix XenDesktop and VMware View. It is deployed as a standard MSI package and configured via simple policies using Microsoft Active Directory or the Bromium Management Server. NYSE and BlackRock are among the growing number of enterprise customers planning to deploy vSentry enterprise-wide.
“vSentry 2.0 delivers on our goal to make endpoints fully protected from targeted attacks, by hardware-isolating all untrusted user tasks,” said Gaurav Banga, CEO and co-founder of Bromium Inc. “vSentry 2.0 addresses important use cases that further empower end users without compromising on enterprise security. It represents the industry’s most secure solution for enterprise mobility and gives users unparalleled flexibility and ease of use in collaborative environments.”
Interested parties can view a webcast covering the new features and functionality of vSentry 2.0 presented by Simon Crosby, CTO and co-founder of Bromium, at http://learn.bromium.com/newin2_register.html.
Bromium vSentry is licensed per user, enterprise-wide, and priced according to volume. For more information, contact sales@bromium.com.

VMUnify at HostingCon – Booth #616

VMUnify will be at HostingCon, June 17–19 in Austin, Texas. You can register for the conference at http://www.hostingcon.com/ with the code VMUnify2013 for a discount.

At the conference, VMUnify will unveil a brand-new, intuitive UI as well as extensive reseller support.

VMUnify is a platform for delivering Infrastructure as a Service (IaaS) or Cloud Servers with Secure Virtual Data Centers and Unified Cloud Environments.

VMUnify's features include:

  • Orchestration - Automation, Self Service, Workflows, Templates
  • Secure Multi-tenancy - VLANs, Firewall, Tenant Identity
  • Hypervisors - VMware, Hyper-V
  • Public Clouds - Amazon, Azure
  • Provisioning Systems - WHMCS, HostBill
  • Billing - EBS, PayPal, BillDesk
  • Reseller Support & White-labeling - Easy to add resellers and rebrand the product
  • Cloud Broker - Jamcracker, Parallels Automation
  • Federation - Unified Provisioning across datacenters
  • Customization - For Service Providers
  • Simplicity of Deployment - Based on Virtual Appliances
  • API Interfaces - Interfaces to the product available via REST/WS
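As a rough illustration of what driving a platform like this over REST might look like, the snippet below assembles a provisioning request body. VMUnify's real endpoints and payload fields are not documented in this announcement, so the URL path and every field name are assumptions.

```python
# Illustrative only: VMUnify's actual REST endpoints and payload schema are
# not public. The field names below are invented to show the general shape
# of an IaaS provisioning call.
import json

def build_provision_request(tenant, template, vlan_id, cpu=2, mem_gb=4):
    """Assemble a JSON body for a hypothetical POST /api/v1/vms call."""
    return json.dumps({
        "tenant": tenant,               # multi-tenancy: which customer owns the VM
        "template": template,           # orchestration template to clone from
        "network": {"vlan": vlan_id},   # secure multi-tenancy via VLANs
        "resources": {"cpu": cpu, "memory_gb": mem_gb},
    })
```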

Flexera Software Study Reveals Organisations Are Unprepared for Software License Compliance Risks Arising from Virtualisation

As the server virtualisation, desktop virtualisation and application virtualisation trends continue to take firm root within organisations globally, a new Flexera Software survey, prepared jointly with IDC, has found that 43% of organisations do not have sufficient processes and automation in place to manage their virtual licenses, placing them at substantial risk of falling out of compliance with their software licenses.
“While server, desktop and application virtualisation provide tremendous operational efficiencies to organisations, each vendor has unique, evolving and frequently opaque licensing rules around virtualisation.  If sufficient measures are not taken to manage and optimise those virtualised licenses, companies may be vulnerable to substantial ‘true-up’ penalties if they are audited by their software vendors,” said Amy Konary, Research Vice President - Software Licensing & Provisioning, IDC.  “In one instance, I am aware of a global enterprise that saved $4 million in hardware through virtualisation, but it cost them $52 million in a resulting software license compliance issue.”

Growth in Virtualisation & Audits Suggests Noncompliance Windfall for Producers
The survey points to both the increasing penetration of virtualisation within organisations and the increasing frequency of vendor software license audits, underscoring the vulnerabilities enterprises will face if they do not take additional steps to more strategically manage and optimise their virtualised applications. According to the survey, 56% of enterprises (up from 51% in 2011) report that 41% or more of their applications have been virtualised using server virtualisation, and 24% say that between 10-25% of their apps are delivered through desktop virtualisation (VDI).
The survey also reveals that application producers see virtualisation as a new revenue opportunity. Fifty percent of producers indicated that over the next 18-24 months they will be changing their licensing models to accommodate virtualisation. When producers were asked why they are changing licensing models, the overwhelming majority – 69% – said it was to generate more revenue.
Where will some of that additional revenue come from? According to the survey, 17% of producers currently rely upon trust-based licensing coupled with vendor compliance audits. Over the next 18-24 months, this method of licensing and enforcement is expected to increase by 11% – suggesting an acceleration of the audit trend.
“Our customers have been reporting a major uptick in the frequency of vendor compliance audits, underscoring the strategic importance of continual compliance, and continual license optimisation in reducing financial risk,” said Jim Ryan, Chief Operating Officer of Flexera Software.  “When organisations have the best practices and solutions in place to optimise their virtual licenses, they will know ahead of time the impact that virtualising their applications will have – allowing them to minimise their virtualisation costs, and risk.”

NetJapan Introduces vmGuardian, Backup and Disaster Recovery for VMware ESXi Virtual Environments

NetJapan, Inc., introduces vmGuardian, backup and disaster recovery software for VMware ESXi virtual environments. vmGuardian brings high performance and uncompromised protection to virtual environments, featuring Inline Data Deduplication Compression and Selective File Restore.
A browser interface guides you through configuration, scheduling and backup management tasks, making it easy to back up and restore your virtual environments. vmGuardian supports live backup of virtual machines and virtual disks, so there is no downtime. Virtual host backups benefit significantly from vmGuardian’s inline data deduplication compression, which allows duplicate data across virtual machines to be coalesced into a single storage space. The resulting backups are compressed at remarkably high ratios, significantly reducing backup storage needs.
vmGuardian Features
  • Easy-to-use browser-based management interface.
  • Restore all virtual machines, selected virtual machines, or individual files and folders.
  • Agentless architecture means no more expensive agent licenses per virtual machine.
  • Back up all host virtual machines or only individual virtual machines.
  • Inline deduplication compression inspects large volumes of data during the backup and identifies large sections that are identical, in order to store only one copy of the data.
  • Smart Sector technology improves backup performance and reduces storage requirements by only backing up allocated space.
  • Full and incremental backups.
  • Flexible scheduling.
  • Incremental backup file consolidation.
  • Virtual machine backups are host independent; virtual machines can be restored to the same or different ESXi host.
  • Smallest backup window in the industry.
  • E-Mail Notifications.
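The inline deduplication idea in the feature list above can be sketched in a few lines: split the stream into fixed-size blocks, hash each block, and store identical blocks only once. NetJapan's actual chunking, hashing and compression scheme is not described in the release; this is a generic illustration of the technique.

```python
# Generic fixed-block deduplication sketch; NetJapan's real implementation
# (block size, hash, compression stage) is not documented in the release.
import hashlib

BLOCK = 4096  # assumed block size for illustration

def dedup_store(data, store):
    """Split data into blocks, keep each unique block once, return a recipe
    of block digests from which the original can be rebuilt."""
    recipe = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks stored only once
        recipe.append(digest)
    return recipe

def restore(recipe, store):
    """Reassemble the original byte stream from its recipe."""
    return b"".join(store[d] for d in recipe)
```

Because many VMs cloned from the same template share large runs of identical blocks, the `store` grows far more slowly than the raw data being backed up, which is the benefit the release describes.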
Availability
vmGuardian software and support is available in Japanese and U.S. English. NetJapan, Inc. distributes vmGuardian through authorized system integrators, business partners, distributors, online shops and direct via www.vmguardian.com.

DataCore Software Builds on its Software-Defined Storage Lead with Enhancements to its Proven SANsymphony-V Storage Virtualization Platform

Amid all the talk and future-looking promises of software-defined storage from hardware-biased manufacturers, DataCore Software has delivered real-world solutions to thousands of customers worldwide. DataCore continues to advance and evolve its device-independent storage management and virtualization software, while maintaining focus on empowering IT users to take back control of their storage infrastructure. To that end, the company announced today significant enhancements to the comprehensive management capabilities within version R9 of its SANsymphony-V storage virtualization platform.
New advancements in SANsymphony-V include:
  • Wizards to provision multiple virtual disks from templates
  • Group commands to manage storage for multiple application hosts
  • Storage profiles for greater control and auto-tiering across multiple levels of flash, solid state (SSDs) and hard disk technologies
  • A new database repository option for recording and analyzing performance history and trends
  • Greater configurability and choices for incorporating high-performance "server-side" flash technology and cost-effective network attached storage (NAS) file serving capabilities
  • Preferred snapshot pools to simplify and segregate snapshots from impacting production work
  • Improved remote replication and connectivity optimizations for faster and more efficient performance
  • Support for higher speed 16Gbit Fibre Channel networking and more.
"Storage is undergoing a sea change today and traditional hardware manufacturers are suffering because they are in catch-up mode to meet the ‘new world order' for software-defined storage where automation, fast flash technologies and hardware interchangeability are standard," said George Teixeira, co-founder, president and CEO of DataCore Software. "We have listened to our customers and stayed true to our vision. With the latest release of SANsymphony-V, we are well-positioned to help organizations manage growth and leverage existing investments, while making it simple to incorporate current and future innovations. Our software features and flexibility empower CIOs and IT admins to overcome the many storage challenges faced in a dynamic virtual world."
Real-World Software-Defined Storage: Customer-driven Enhancements Overcome Challenges
Many of the new features which extend the scope and breadth of storage management would not even occur to companies just developing a software-defined package. They are the product of DataCore's 15 years of customer feedback and field-proven experience in broad scenarios across the globe.
The enhancements introduced in the latest version of SANsymphony-V take on major challenges faced by large scale IT organizations and more diverse mid-size data centers. Aside from confronting explosive storage growth (multi-petabyte disk farms), organizations are experiencing massive virtual machine (VM) sprawl where provisioning, partitioning and protecting disk space taxes both staff and budget. Problems are further aggravated by the insertion of flash technologies and SSDs used to speed up latency-sensitive workloads. The time and resource demands required to manage a broadening diversity of different storage models, disk devices and flash technologies - even when standardized with a single manufacturer - are a growing burden for organizations already struggling to meet application performance needs on limited budgets.
The bottom line is that companies are forced to confront many unknowns in terms of storage. With traditional storage systems, the conventional practice has been to oversize and overprovision storage with the hope that it will meet new and unpredictable demands, but this drives up costs and too often fails to meet performance objectives. As a result, companies have become smarter and have realized that it is no longer feasible or sensible to simply throw expensive, purpose-built hardware at the problem. Companies today are demanding a new level of software flexibility that endures over time and adds value over multiple generations and types of hardware devices. What organizations require is a strategic - rather than an ad hoc - approach to managing storage.
Notable Advances with SANsymphony-V Update 9.0.3
SANsymphony-V is a strategic productivity solution that works infrastructure-wide across many storage hardware brands and models. Its auto-tuning cache and auto-tiering software maximize the use of available CPU, memory and disk resources to dramatically increase overall storage performance, which translates into faster, more responsive applications. By more effectively leveraging existing disk storage investments, organizations can now cost-effectively add, and fully benefit from, the latest high-speed technologies like flash memory and SSDs.
DataCore's software makes it even easier to incorporate and optimize powerful "server-side" flash memory technologies. SANsymphony-V can operate directly with any flash and disk devices connected to application servers or can be used across all connected storage area networking (SAN) assets. New configuration flexibility and options have also been documented with the latest release of SANsymphony-V to simplify flash integration and maximize its utilization and performance.
DataCore offers an extensive set of management tools including "heat maps" to optimize performance and cost-effective tiering of storage. Auto-tiering prioritizes and applies the right storage to best fit application and workload needs. Storage profiles provide greater control, and because environments vary, more optimization to improve cost and performance. Profiles for virtual disks can be customized to govern how dynamic policies for auto-tiering, remote replication and synchronous mirror recovery are prioritized, while supplementing default policies built into the software. Virtual disk importance can be set to critical, high, normal, low or archive, controlling which volumes take precedence for shared resources. This ensures important applications benefit from more valuable resources, such as flash memory and SSDs, with less demanding tasks using lower cost, higher density storage.
Auto-regulating the best utilization of precious resources keeps them from unintentionally being consumed by lower priority demands, as often happens with backup snapshots and other replicas of line-of-business data. Instead, point-in-time copies are directed to tiers of storage more appropriate for their role, while SQL Server, Oracle, SAP, Exchange, SharePoint and other mission-critical apps are directed to higher speed resources.
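The importance-driven precedence described above can be approximated with a simple greedy assignment: sort volumes by importance and fill the fastest tiers first. DataCore's real auto-tiering is far more dynamic (heat maps, block-level migration, policy overrides), so this is only a conceptual sketch with invented data structures.

```python
# Conceptual sketch of importance-based tier assignment. DataCore's actual
# auto-tiering operates continuously at the block level; this greedy,
# volume-level version only illustrates the precedence idea.
IMPORTANCE = ["critical", "high", "normal", "low", "archive"]

def assign_tiers(volumes, tiers):
    """Fill the fastest tiers with the most important volumes first.

    `tiers` is an ordered list of (name, capacity_gb), fastest first.
    Each volume is a dict with 'name', 'size_gb' and 'importance'.
    """
    order = sorted(volumes, key=lambda v: IMPORTANCE.index(v["importance"]))
    free = {name: cap for name, cap in tiers}
    tier_names = [name for name, _ in tiers]
    placement = {}
    for vol in order:
        for name in tier_names:          # fastest tier with room wins
            if free[name] >= vol["size_gb"]:
                placement[vol["name"]] = name
                free[name] -= vol["size_gb"]
                break
    return placement
```

Even this toy version shows the behavior the release describes: a critical database lands on flash while an archive copy is pushed to dense, low-cost disk.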
DataCore customers can now enjoy cutting-edge recording, analysis and reporting for responsive, continuously available IT services. SANsymphony-V adds the ability to record historical performance for trend analysis. By displaying metrics gathered over time, workload spikes and potential bottlenecks can be easily addressed. There is also a greater emphasis on automating more nuanced aspects of provisioning and advanced storage services. The difficulty here has not been managing overall storage capacity as much as the total number of virtual hosts and volumes to be coordinated in a predictable, repeatable fashion. Admins now can kick off and manage these tasks effortlessly and visualize their state at a glance. Large-scale provisioning is made simple through the use of templates from which virtual disks can be instantiated with the same characteristics (size, profile, availability, etc.).
For business continuity, disaster recovery and off-site data protection, the new release offers faster asynchronous remote replication to meet stringent Recovery Time (RTO) and Recovery Point (RPO) Objectives. It also takes better advantage of lower speed/lower cost wide area networks. Safeguarding against regional catastrophes has raised the urgency for cost-effective remote replication solutions, even for smaller firms.
SANsymphony-V further leverages the Windows Server 2012 platform and Microsoft's latest clustering capabilities for faster, cost-effective unified NAS file serving and SAN disk services. This allows fully redundant, highly available configurations to scale out across multiple nodes and enables rapid switchover of Network File System (NFS) and Common Internet File System (CIFS/SMB) clients despite hardware and facility outages. This powerful combination makes SANsymphony-V a unified NAS/SAN storage platform that is an ideal and affordable choice to support Microsoft Clusters and more demanding Microsoft File Serving environments.
With regards to high performance, low-latency needs, SANsymphony-V supports the newest 16Gbit Fibre Channel host bus adapters (HBAs) from QLogic. These can be mixed and matched with prior generation HBAs, as well as iSCSI NICs employed in less demanding areas of infrastructure. Fibre Channel is often the preferred interface between databases, high-speed apps and pools of hybrid and all flash arrays virtualized by DataCore. To take advantage of the enhanced SANsymphony storage virtualization platform, please consult with your DataCore-authorized solution provider or your DataCore representative.

Differentiating Cloud Management and Virtualization in Cloud Computing

A Contributed Article by Deney Dentel, CEO at Nordisk Systems
Cloud computing is the means through which users can get computing applications, power, infrastructure, personal data, business processes and anything else they need, wherever they are. It is the combination of storage, network, hardware and interfaces that together provide computing as a service. It can provide users with software, storage and infrastructure as long as they are connected to the internet.
With cloud computing, users do not have to buy extra hardware or software as long as they have a computer and an internet connection. This saves both time and money.
Cloud management is the technology designed to operate and monitor the applications, services and data residing in the cloud. It helps ensure that cloud resources are working efficiently and optimally and are not experiencing problems.
Virtualization is the technology used to create a virtual version of a resource or device, such as a network, operating system or storage volume, dividing a physical resource into one or more execution environments.
Let's look at the differences between cloud management and server virtualization as they relate to cloud computing.

Quick time to Deliver:
Virtualization delivers quickly. With virtualization, all the hardware and software sit right in front of the user, so files can be retrieved directly from local storage. With the cloud, the computer must first connect to the internet and files are retrieved through a browser; if the internet connection fails, files stored in the cloud cannot be reached.

Flexibility isn't all about On-Demand Provisioning:
Even though virtualization delivers quickly, it is less flexible: all the software and hardware must be with the user at all times. In the cloud, there is no need to carry hardware or software because everything is stored remotely; all that is required is an internet connection.

Expect ROI in Days:
Virtualization carries a large up-front cost for the purchase of hardware and software, plus IT services. The cloud has a lower initial cost because no hardware is required, but costs rise as the company consumes more cloud services and may eventually exceed the cost of owning the hardware.

Competitive advantage through provisioning control:
With virtualization, all the hardware and software are managed by the company itself, so the company has control over services, security and so on. In the cloud, the hardware and software are managed by the service provider, so the company has little control over them.

Managing Enterprise Applications on CloudPlatform

Businesses have adopted cloud deployment for simple applications such as developer tools and web servers, but deploying and managing multi-tier enterprise applications in the cloud remains cumbersome, time-consuming and error-prone. This creates an obstacle for businesses that want to run enterprise application workloads in cloud environments. AppStack provides a solution for automating and simplifying the complex process of enterprise application provisioning and ongoing lifecycle management. AppStack is a software platform that leverages the infrastructure provisioning capabilities of Citrix CloudPlatform to provide application services for enterprises deploying apps on private clouds, as well as for service providers who want to enable app services for their end users.
To solve this, Appcara has created AppStack, an advanced software platform for provisioning and managing enterprise and distributed applications in public and private cloud computing environments. To automate these capabilities, AppStack utilizes data-model-driven technology to capture and assemble complex application Workloads in a dynamic configuration repository. This enables a high degree of automation of common tasks such as deploying Workloads, as well as the common change/update cycle that most enterprise applications undergo. AppStack changes the paradigm from tedious, manual server management to a much simpler "point and click" style of management at the Workload level.
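A data-model-driven approach like the one described can be pictured as a declarative workload definition plus a diff over it. AppStack's actual repository schema is not public, so the structure and field names below are invented purely for illustration.

```python
# Speculative sketch of a data-model-driven workload definition; AppStack's
# real repository format is not documented here, so this schema is invented.
workload = {
    "name": "crm-app",
    "tiers": [
        {"role": "web", "image": "apache", "count": 2, "cpu": 2, "mem_gb": 4},
        {"role": "app", "image": "tomcat", "count": 2, "cpu": 4, "mem_gb": 8},
        {"role": "db",  "image": "mysql",  "count": 1, "cpu": 8, "mem_gb": 16},
    ],
}

def diff_workload(current, desired):
    """Compute per-tier scale changes: the change/update cycle as data.

    Returns {role: delta} for every tier whose instance count differs.
    """
    cur = {t["role"]: t["count"] for t in current["tiers"]}
    return {t["role"]: t["count"] - cur.get(t["role"], 0)
            for t in desired["tiers"]
            if t["count"] != cur.get(t["role"], 0)}
```

The point of modeling a workload as data is exactly this: an update becomes a computed delta the platform can apply, rather than a sequence of manual per-server steps.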
In this session, Appcara and Citrix will demonstrate how AppStack provides simple cloud application management for Citrix CloudPlatform powered by Apache CloudStack:
  • Point-and-click launching of application Workloads on to CloudPlatform clouds
  • Portal-based active management of apps across their lifecycle
  • Integrated App Marketplace with popular commercial and open-source apps that can be deployed into more complex Workloads
  • Multi-cloud support with complete Workload portability across public and private clouds
Speakers:

  • Paul Speciale (Chief Marketing Officer, Appcara)
  • Geralyn Miller (Sr. Alliance Marketing Manager, Citrix)
  • Gaurav Chhaunker (Technical Relationship Lead, Alliance Marketing, Citrix)

Condusiv Technologies' V-locity Server Software Drives Industry Initiative to Manage Increase in I/O from Virtualization, BYOD and Big Data—without Additional Hardware

Condusiv Technologies, the leader in high-performance software optimizing technology, people and businesses, today announced the results of strict, third-party benchmark testing of its newly released V-locity Server optimization software, designed for I/O-intensive applications like SQL Server and Exchange running on physical servers.
Condusiv’s V-locity software architecture, which contains transformational read and write optimization engines, represents the culmination of 31 years of research and development in optimizing and accelerating Windows environments for business.
Condusiv’s goal is to help broaden industry awareness of the benefits of V-locity’s unique approach to optimizing read/write performance at the source, addressing critical I/O performance barriers without adding storage or server hardware. Condusiv Technologies is presenting a sponsored Technology Spotlight by leading IT market research and advisory firm IDC, entitled “The Shift to I/O Optimization to Boost Virtual and Physical Server Performance.” In addition, Westborough, Massachusetts-based openBench Labs released a third party test report revealing that V-locity Server accelerated SQL performance by 55%. Access both papers at http://www.condusiv.com/business/v-locity/server/.
“Virtual environments, cloud services, mobile devices, and Big Data all contribute to the rise in digital information organizations must manage. All of this data not only must be stored, but utilized to drive value for an organization's competitive advantage,” said Jerry Baldwin, CEO of Condusiv Technologies. “As much as the I/O explosion needs to be managed, CIOs find themselves investing 80% of their annual IT budget on maintaining their existing infrastructure and services. This model is broken. V-locity customers typically see 50% or more performance gains on mission-critical applications like SQL and Exchange. That gain also comes with a unique proposition—a savings of about 80% from their annual hardware capital expense budget.”
The Problem
The I/O problem stems, in part, from the fact that while the number of virtual machine shipments is growing at an average of 25% annually, the number of physical servers shipped is growing at a modest 2–3%. As more workloads are put on virtual servers and heavier workloads are placed on physical servers, this can triple or quadruple the amount of random I/O generated from a single server, burdening the compute infrastructure. Increasingly, the storage controller and disk architectures cannot keep pace with this growing random I/O.
When it comes to I/O and its impact on servers, storage and applications, there are two performance barriers: 1) Windows creating unnecessary I/O traffic by splitting files upon write, which also impacts subsequent reads, and 2) frequently accessed data unnecessarily traveling the full distance from server to storage and back.
These two behaviors create a surplus of I/O that prevents applications from performing at peak speeds. In today's enterprise, the problem is compounded as a multitude of random I/O traffic, from a mass of disassociated data access points, is making requests for storage blocks—random and sequential—to a shared storage system. All this unnecessary I/O leads to extra processing cycles that increase overhead and reduce application, network, and storage performance.
The I/O problem will continue to grow. IDC predicts the amount of information that needs to be managed by enterprises will increase 50 times in the next 10 years, and the number of files will increase 75 times. However, with Moore's law slowing from processor speeds doubling every 18 months to doubling every three years, processor performance will grow only by a factor of eight and storage performance by a factor of four.
Condusiv’s V-locity Server optimization software addresses critical I/O issues by eliminating application bottlenecks without the need to add server or storage hardware. Condusiv's differentiator is that its software resides at the top of the technology stack, eliminating unnecessary I/O at the source, where it originates.
As a first step to I/O optimization, V-locity Server eliminates nearly all unnecessary I/O operations at the operating system level when writing a file, which in turn eliminates all unnecessary I/O operations on subsequent reads. Second, V-locity Server caches frequently accessed data within available server memory without resource contention to the application to keep read requests from traveling the full distance to storage and back.
With V-locity at the top of the technology stack, optimizing I/O at the point of origin, only productive I/O is pushed through the server, network and storage. This approach to I/O optimization complements technologies that may already be running to boost IOPS or reduce latency, including SSDs, flash cards and SAS, and provides tremendous benefit from the top down. And since I/O is optimized at the source, V-locity Server is network storage-agnostic, providing benefits to advanced storage features like snapshots, replication, thin provisioning and deduplication.
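The second optimization step, serving hot reads from server memory, is in essence a read cache. The generic LRU sketch below illustrates the technique; Condusiv's IntelliMemory policy is proprietary and undoubtedly more sophisticated, so this is only the textbook version of the idea.

```python
# Generic LRU read-cache sketch, the textbook form of the technique behind
# server-side caching products like IntelliMemory (whose actual policy is
# proprietary and not described in the release).
from collections import OrderedDict

class ReadCache:
    """Serve hot blocks from RAM instead of making the full trip to storage."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block_id -> data, oldest first
        self.hits = self.misses = 0

    def read(self, block_id, fetch_from_storage):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark most recently used
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1
        data = fetch_from_storage(block_id)    # full round trip to storage
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data
```

Every cache hit is one read request that never touches the storage network, which is precisely how such caching reduces I/O stress on shared SAN/NAS resources.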
Solution: V-locity Server increased SQL Server 2012 transaction processing rate by 55% and improved response time by 33% without additional hardware.
openBench Labs tested the ability of V-locity Server to optimize I/O in a SQL Server environment. Using SQL Server 2012, openBench ran a high volume of lightweight SQL select transaction-processing (TP) queries combined with heavyweight background update queries.
For the SQL Server benchmark testing, openBench simulated 1 to 32 daemon processes (1 daemon generating the equivalent of 70 normally-queued user processes) issuing queries non-stop. When a real application user interacts with SQL Server, there is lag between queries issued. In the test scenario, however, the daemon process issued queries without lag—that is, no think-time, type-time, or pause-time between query activity.
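A zero-think-time daemon of this kind amounts to a tight query loop. A minimal sketch of such a load generator, with `run_query` as a hypothetical stand-in for a real SQL round trip:

```python
import threading
import time

def run_load(run_query, n_daemons, duration_s):
    """Spawn daemon workers that issue queries back-to-back -- no
    think-time, type-time, or pause-time -- and count completed queries."""
    stop = threading.Event()
    lock = threading.Lock()
    completed = [0]

    def worker():
        while not stop.is_set():
            run_query()              # stand-in for an actual SQL round trip
            with lock:
                completed[0] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_daemons)]
    for t in threads:
        t.start()
    time.sleep(duration_s)           # let the daemons hammer the server
    stop.set()
    for t in threads:
        t.join()
    return completed[0]
```

Because each daemon issues its next query the instant the previous one returns, throughput is bounded only by server-side processing, which is exactly what makes the scenario a stress test rather than a realistic user simulation.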
In a series of tests, openBench Labs measured the ability of V-locity Server’s IntelliMemory™ to offload I/O on read operations through dynamic caching in order to boost throughput and reduce latency. In addition, openBench examined the ability of IntelliWrite® technology to prevent unnecessary split I/Os, using its intelligence to extend current database files and create new log files as single, contiguous collections of logical blocks.
In a test of SQL Server query processing, openBench Labs' benchmark findings revealed that V-locity, on a server running SQL Server, enabled higher transactions-per-second (TPS) rates and improved response time by reducing I/O processing on storage devices. What's more, in a SAN- or NAS-based storage environment, V-locity Server reduced I/O stress on multiple systems sharing storage resources. Overall, V-locity Server can improve scalability by reducing average response time and enabling SQL Server to support more users.
“On a server running SQL Server 2012, V-locity Server created an environment that enabled up to 55% higher TPS rates, improved transaction response time by 33%, and enabled SQL to process 62% more transactions at peak transaction rates. As a result, IT has a powerful tool to maximize the ROI associated with any business application initiative driven by SQL Server at the back end,” said Dr. Jack Fegreus, founder of openBench Labs.

Cray Leverages Intel Hadoop For Big Data in HPC

Cray’s new offerings will combine its CS300 supercomputer clusters with Intel’s Hadoop distribution.

Cray officials are adding Intel’s Hadoop distribution to their growing list of supercomputing solutions for the burgeoning big data market. Cray later this month will launch cluster supercomputers for Hadoop applications that will combine the vendor’s CS300 supercomputers with Intel’s Hadoop distribution, a Linux operating system and Cray’s Advanced Cluster Engine (ACE) management software, according to company officials. The result will be a turnkey computing infrastructure that will enable organizations to better leverage Hadoop, according to Bill Blake, senior vice president and CTO at Cray. "More and more organizations are expanding their usage of Hadoop software beyond just basic storage and reporting,” Blake said in a statement. “But while they're developing increasingly complex algorithms and becoming more dependent on getting value out of Hadoop systems, they are also pushing the limits of their architectures."

“Organizations can now focus on scaling their use of platform-independent Hadoop software, while gaining the benefits of important underlying architectural advantages from Cray and Intel," Blake said.
Big data is a growing trend in the business world, with massive amounts of data being created from the wide range of connected devices, machines and sensors. Intel officials have said that every 11 seconds, a petabyte of data is created around the world.

Hadoop, which includes about a dozen open-source projects, is designed to enable businesses to more easily store huge amounts of data, analyze it and leverage it in ways that benefit both the organizations and their users. For example, businesses can use it to gain a better understanding of what their customers want, while medical researchers can more quickly discover life-saving drugs and communities can improve their environments by better managing traffic patterns.

Intel in February unveiled the Intel Distribution for Apache Hadoop, its own distribution of the open-source technology. The giant chip maker had been working with Hadoop since 2009, but officials said it was important to offer a Hadoop distribution optimized to work with features on its processors, such as incorporating Advanced Encryption Standard New Instructions (AES-NI) for accelerating encryption into the Hadoop Distributed File System.

It’s also part of a larger effort by Intel to grow its role in the data center beyond server chips. Intel has been building up its software capabilities via in-house development and acquisitions, and while keeping open parts of its Hadoop distribution—making them interoperable with other Hadoop distributions—the company will keep some features, including management and monitoring capabilities, to itself. Intel will not open source such software as Intel Manager for Apache Hadoop—for configuration and deployment—or Active Tuner for Apache Hadoop, a tool for improving the performance of compute clusters running the distribution.
Cray officials, in announcing their new Hadoop clusters, noted the strengths of Intel’s distribution, including greater security, improved real-time handling of data and enhanced performance throughout the storage architecture. Cray is including support for InfiniBand and improved resource management, officials said.

The CS300 series of supercomputers—which Cray inherited when it bought rival Appro for $25 million in November 2012—comes with an integrated high-performance computing (HPC) software stack and software tools that are compatible with most open-source and commercial compilers. That will enable organizations to leverage Intel’s Hadoop distribution, according to Girish Juneja, CTO and general manager of Intel's Big Data Software unit. "Combining these features with the highly innovative HPC technologies in Cray systems will create a compelling solution for organizations with the most demanding Hadoop requirements," Juneja said in a statement.

Cray’s Hadoop supercomputer clusters, which offer energy-efficient air- or liquid-cooled architectures, are the latest move by the systems vendor to build out its portfolio of products for big data. The company also offers Cray Sonexion storage systems and YarcData’s Urika appliance for graph analytics.

Friday, May 17, 2013

PHD Virtual Provides Easy On-Ramp to Cloud DR and RaaS

PHD Virtual Technologies, a pioneer in virtual backup and infrastructure monitoring and an innovator in disaster recovery assurance solutions, announced today an easy on-ramp for cloud service providers that want to offer recovery-as-a-service (RaaS) solutions. Because IT organizations feel sustained pressure to support aggressive recovery time objectives (RTOs), they need assurance from their cloud service provider that their applications and business services can recover within a timeframe that meets business requirements.
“More and more companies are looking to leverage the cloud for disaster recovery and this is positively impacting growth in the RaaS space,” said Carlos Escapa, SVP and General Manager of Disaster Recovery products at PHD Virtual. “We are seeing exponential growth from cloud service providers who recognize that PHD’s ReliableDR provides the only automated way to prove that they can recover their customers’ IT environments within agreed SLAs. ReliableDR’s powerful reporting capabilities also provide auditable compliance proof of business continuity policies.”
“The partnership with PHD Virtual provided NSIS Systems the ability to provide cost effective Disaster Recovery as a Service for the VMware Platform,” said Stephan Buys, the Managing Director of NSIS Systems, a UK Cloud Services Provider. “PHD Virtual's ReliableDR is a great product which provides the end user with an easy to use interface to manage their disaster recovery. The continuous and automated testing of the recovery virtual machines ensures that the end users will have functional recovery points to fail over to. NSIS Systems is looking forward to a long term partnership with PHD Virtual.”
Gartner estimates that the worldwide RaaS market will grow to $1.2 billion by 2017, a CAGR of 21%. By 2014, 30 percent of midsize companies will have adopted recovery-in-the-cloud, also known as recovery-as-a-service (RaaS), to support IT operations recovery.
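The CAGR figure follows the standard compound-growth formula, future = present × (1 + rate)^years. Back-computing the implied base market size from the $1.2 billion 2017 figure, assuming a five-year horizon (which is not stated in the estimate), gives roughly $0.46 billion:

```python
# CAGR arithmetic: future = present * (1 + rate) ** years.
# Back-computing the implied base market from Gartner's $1.2B 2017 figure
# at a 21% CAGR over an assumed five years (2012-2017) -- illustrative only.
rate, years, future = 0.21, 5, 1.2e9
implied_base = future / (1 + rate) ** years
print(round(implied_base / 1e9, 2))  # prints: 0.46
```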

Get ReliableDR For Free!
PHD’s ReliableDR 3.1 is a disaster recovery assurance solution that dramatically reduces the cost of IT disaster recovery testing to support increasingly aggressive recovery service level agreements demanded by businesses and compliance auditors. Unlike legacy DR tests, which are typically performed once per year and can cost upwards of $30,000, ReliableDR enables disaster recovery (DR) exercises to be performed on a daily, or even hourly, basis and at a fraction of the cost. ReliableDR not only enforces Recovery Time Objectives (RTOs) / Recovery Point Objectives (RPOs), but actually delivers Recovery Time Actuals (RTAs) and automatically detects stale snapshots that are outside the RPO policy.
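Stale-snapshot detection of the kind described reduces to comparing snapshot age against the RPO window. A generic sketch of the idea, not ReliableDR's actual code:

```python
from datetime import datetime, timedelta

def stale_snapshots(snapshots, rpo, now=None):
    """Return the names of snapshots whose age exceeds the RPO window.

    snapshots: dict of name -> datetime when the snapshot was taken
    rpo: timedelta, e.g. timedelta(hours=1)
    """
    now = now or datetime.utcnow()
    return [name for name, taken in snapshots.items() if now - taken > rpo]
```

Run against the live snapshot catalog on every DR exercise, a check like this is what turns an RPO from a stated policy into something that is continuously enforced.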
“Clouds reconfigure virtualized resources all the time so testing for disaster recovery must be done frequently,” said Escapa. “ReliableDR fully automates the orchestration of the DR testing process and helps cloud service providers make sure that recovery procedures are current and in compliance with business continuity requirements and regulations.”
“PHD's ReliableDR enables cloud service providers to provide ironclad assurance that their customers’ systems will come up within SLAs,” said Dave Simpson, Senior Storage Analyst, The 451 Group. “This differentiates their cloud recovery services and allows them to better meet customers’ requirements.”

ServiceNow Transforms Cloud Provisioning

ServiceNow, the enterprise IT cloud company, today announced ServiceNow Cloud Provisioning, a new orchestration application that enables IT to automate the entire cloud management lifecycle. From self-service cloud selection to automated, standardized cloud creation and resource optimization, ServiceNow Cloud Provisioning speeds the deployment of cloud environments from weeks or months to just minutes. Without special training or manual intervention, cloud administration is transformed from day-to-day IT management overhead to an automated business self-service—all within parameters determined by IT.
Benefits of the ServiceNow Cloud Provisioning Application
  • Automated provisioning of cloud services. Together with the ServiceNow IT Service Automation Application Suite, Cloud Provisioning optimizes the management of heterogeneous cloud environments within a single, integrated system of record. Business users can now request cloud infrastructure through an intuitive self-service interface using the ServiceNow Service Catalog. Cloud Provisioning orchestrates a fully automated process to provision clouds based on Amazon EC2 or VMware in minutes. ServiceNow Change Management then allows the definition and enforcement of change policies for full change control.
  • Elimination of VM sprawl. Through lifecycle automation, resources can be leased, with retirement provisions or scheduled reviews to determine continued need, thereby eliminating the common problem of virtual machine sprawl and optimizing utilization of virtual infrastructure.
  • Visibility into cloud operations. The cloud operations portal provides a consolidated view of virtual asset status, the state of resource requests and provisioning exceptions.
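The lease-and-retire lifecycle described above can be sketched as follows. The record layout and function names are illustrative assumptions, not ServiceNow's data model:

```python
from datetime import datetime, timedelta

class VmLease:
    """Hypothetical lease record for a provisioned VM."""

    def __init__(self, name, provisioned_at, lease_days):
        self.name = name
        self.expires_at = provisioned_at + timedelta(days=lease_days)
        self.state = "running"

def retire_expired(leases, now):
    """Lifecycle automation step: retire VMs whose lease has lapsed,
    curbing VM sprawl by reclaiming unreviewed resources."""
    for lease in leases:
        if lease.state == "running" and now >= lease.expires_at:
            lease.state = "retired"  # in practice: deprovision, notify owner
    return [lease.name for lease in leases if lease.state == "retired"]
```

Scheduling a sweep like this is what converts "someone should clean up old VMs" into an enforced policy, which is the sprawl-elimination claim in the bullet above.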
“While cloud computing brings efficiency and agility to the enterprise, companies are using old means of management, resulting in tremendous waste of resources and slow responsiveness to cloud provisioning needs,” said Matt Schvimmer, vice president of product management, ServiceNow. “With ServiceNow Cloud Provisioning, IT can now effectively manage heterogeneous cloud environments through zero-touch automation thus accelerating business transformation.”
“IT organizations that implement extensive automation and orchestration technologies can significantly improve operational productivity and end-to-end service levels,” said Mary Johnson Turner, research vice president, Enterprise Systems Management Software, IDC. “A consistent user interface that is tightly integrated with the cloud provisioning workflow automation engine is a critical enabler in helping IT to improve business performance across the enterprise.”
This week at Knowledge13, the largest gathering of IT professionals using cloud services for enterprise IT service automation, ServiceNow is offering sessions, labs and demos dedicated to ServiceNow Cloud Provisioning and orchestration applications. In addition, ServiceNow expects to provision approximately 16,000 ServiceNow instances at the event using ServiceNow Orchestration.
Availability
The ServiceNow Cloud Provisioning application is available today. For more information, please go to http://www.servicenow.com/cloud-provisioning.do.

Devon IT to Feature New Thin Clients, Software at Citrix Synergy 2013

Devon IT, Inc., a leading provider of thin client and VDI hardware and software solutions, today announced it will feature its newest HDX Ready thin clients, thin client operating system and thin client management software in Devon IT booth #304. The event will be held at the Anaheim Convention Center in Anaheim, Calif., from May 22 – 24, 2013.

Citrix Synergy brings together thought leaders in desktop and server virtualization, server-based computing, cloud computing, and virtual desktop computing. Devon IT will demonstrate the recently available high-powered ARM-based Acer Veriton N2010G thin client running DeTOS and the Acer Veriton N2110G thin client built with AMD’s Dual Core G-T56N 1.6 GHz Processor with AMD Radeon HD 6320 Graphics processor.

The company will also feature its newest zero client, available for public demonstration for the first time. The zero client is expected to sell for under $100.
“There’s no question that the virtual desktop market is growing, a trend that is fueled by collaboration between IT thought leaders like Citrix and Devon IT,” says Joe Makoid, President, Devon IT. “New, more powerful thin client technology such as the N2010G thin client coupled with advanced software like Citrix HDX significantly expands the opportunities for IT administrators and organizations that need to do more with less. We are excited to share our ideas with other industry leaders at Synergy.”
The Devon IT team will be holding media-only demonstrations in their booth at various times throughout the event.

For more information about virtual desktop hardware and software solutions or to schedule a private demonstration, email info@devonit.com or call (610) 757-4220 or toll-free at (888) 524-9382. For more information about Devon IT DeTOS, Echo, and VDI Blaster, please visit www.devonit.com/software.

VMTurbo Launches Virtual Health Monitor – Free, Unlimited, and On Any Hypervisor

VMTurbo, the leading provider of software-defined control for cloud and virtualized environments, today announced the availability of its Virtual Health Monitor tool for free download at www.vmturbo.com/freehealthmonitor. This free tool is an evolution of the Community Edition of the company’s Operations Management product and adds insight into the risk and efficiency improvements that should be made in environments running it.

“As more organizations expand their use of virtualization to include different hypervisors and more applications being run in VMs, new performance challenges are arising. While this shared resource model improves utilization, it also increases the likelihood for interference and resource contention across workloads and applications,” said Derek Slayton, vice president of marketing at VMTurbo. “Monitoring and reporting are important capabilities to understand health and performance in the environment, and they should be fundamental and free on any hypervisor. Our tool focuses on delivering that in an unlimited fashion across any hypervisor with instant time-to-value – and providing unique insights to risk and efficiency improvements that should be made to our users.”

VMTurbo's Virtual Health Monitor provides full-featured monitoring and reporting in an unlimited fashion across vSphere, Hyper-V, XenServer and Red Hat Enterprise Virtualization. In addition, the solution collects metrics and, on a weekly basis, provides insight into risk and efficiency issues identified across the environment. VMTurbo's free monitoring and reporting tool is a stepping stone to its full-featured control system for cloud and virtualized data centers, VMTurbo Operations Manager. With Operations Manager, organizations significantly reduce the time spent reviewing and interpreting dashboard data, troubleshooting, and trial-and-error remediation of performance problems, since it automatically maintains the environment in a perpetually healthy state. The solution automates decisions for resource allocation and workload placement in software to ensure applications get the resources required while maximizing utilization of IT assets.
Key features of VMTurbo Virtual Health Monitor include:
  • Instant visibility to health and performance;
  • Unlimited use across virtual data centers of any size;
  • Free monitoring and reporting for any hypervisor;
  • Lowest total cost of ownership (TCO) due to innovative product architecture;
  • Weekly analysis of utilization rates and areas to improve efficiency and reduce risk.
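A weekly utilization analysis of this kind boils down to comparing observed metrics against thresholds. An illustrative sketch, with the thresholds and the CPU-only metric as assumptions rather than VMTurbo's actual heuristics:

```python
def classify_vms(samples, low=0.20, high=0.85):
    """Flag VMs as efficiency or risk candidates from average utilization.

    samples: dict of vm_name -> list of utilization samples (0.0 - 1.0)
    Returns (underutilized, overloaded) name lists.
    """
    under, over = [], []
    for vm, readings in samples.items():
        avg = sum(readings) / len(readings)
        if avg < low:
            under.append(vm)   # efficiency: candidate to downsize or reclaim
        elif avg > high:
            over.append(vm)    # risk: candidate for more resources
    return under, over
```

A real tool would fold in memory, storage and network contention as well, but the shape of the report (here is waste, here is risk) is the same.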
To download VMTurbo Virtual Health Monitor, visit www.vmturbo.com/freehealthmonitor.

Scale Computing Announces HC3x, Built for Private Cloud

Scale Computing, the industry-leading developer of hyperconverged solutions that seamlessly combine servers, storage and virtualization into a single system, today announced the HC3x extension of its award-winning HC3 platform.

HC3 Platform
The HC3x platform combines servers, storage and virtualization into a clustered, highly available solution analysts call hyperconverged. Built for midsize companies from the ground up, HC3x eliminates the need for expensive licenses and complex systems, such as VMware and SANs, from virtualized environments. As a result, deployment is radically simplified while delivering a single scale-out environment with built-in high availability. In a lab report, industry analyst firm Taneja Group reports, "we measured deployment and configuration of HC3 as 10 minutes to set up a hardware cluster, and 2 minutes to deploy the first VMs."
“HC3 is all about utilizing our ICOS technology to radically simplify infrastructure management. We deliver a self-healing platform that takes the headache out of infrastructure management,” said Jason Collier, the company’s CTO. “With the new HC3x platform, we’re able to extend the Scale HC3 experience into even the most demanding environments.”

HC3x
HC3x, an extension of the HC3 platform, is a powerful system with twice the memory per node, 50% more compute cores and fast SAS drive technology. It allows for roughly double the VMs per node over the existing HC3 line and is targeted at customers with 50-200 VMs in their environment, who are looking to realize the benefits and efficiencies of private cloud. It includes the latest release of ICOS 4.2, the operating system for the HC3 product lines. ICOS 4.2 adds support for VLANs and multiple virtual NICs per VM.

Affordably Priced
Starting at under $25,500 for a three-node cluster, the HC3 platform is ideal for organizations that want the efficiencies of virtualization without the complexity and cost of VMware. With no virtualization software to license or manage, no external storage to buy and a built-in hypervisor, HC3 lowers TCO and radically simplifies the infrastructure needed to keep applications running. HC3 makes the deployment and management of a highly available and scalable infrastructure as easy as managing a single server. HC3x prices start at under $37,500 for a three-node cluster.

HybridCluster 2.0 Self-Healing, Ultra-High Availability Capabilities Unveiled

Following the recent, highly successful launch of version 2.0 of its integrated suite of storage, replication and web clustering software, HybridCluster, an early-stage software solution provider to the cloud and hosting industry, today unveiled full details of the ultra-high availability platform within HybridCluster 2.0.
As standard functionality of HybridCluster 2.0, the "Self-Healing" ultra-high availability platform has been designed from the ground up to automatically recover when data centre hardware, software, networks or even entire regions fail. It does this by taking automatic backups and snapshots every few minutes and distributing them across all machines, combined with the automatic migration of websites and applications between machines.

"Our self-healing platform enables hosting organisations to launch highly profitable HA services to their clients at a fraction of the cost of competing solutions," said Luke Marsden, HybridCluster's CEO and founder. "We dramatically reduce both capex and opex: our clustering software utilises low-cost hardware driving capex down and we fully automate the recovery processes when a server or site goes down".
HybridCluster Self-Healing also transforms the normal concept of a backup/recovery regime for hosting businesses. Rather than taking daily or hourly backups with a separate backup application and using this as the primary means of recovery from a disk or server failure, HybridCluster self-healing continuously and automatically replicates data across machines and fully automates failover/recovery procedures between machines in a cluster and across clusters.
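The replicate-continuously, fail-over-automatically pattern can be sketched as follows. This is a generic illustration of the approach, not HybridCluster's code; node and snapshot identifiers are hypothetical:

```python
class Node:
    """A cluster machine holding replicated snapshots for hosted sites."""

    def __init__(self, name):
        self.name = name
        self.alive = True
        self.replicas = {}  # site -> latest replicated snapshot id

def replicate(peers, site, snapshot_id):
    """Continuous replication: push each new snapshot of a site to peers."""
    for peer in peers:
        peer.replicas[site] = snapshot_id

def failover(site, peers):
    """Automated recovery: promote a live peer from its latest replica."""
    for peer in peers:
        if peer.alive and site in peer.replicas:
            return peer.name, peer.replicas[site]  # new host, snapshot used
    raise RuntimeError("no live replica available for %s" % site)
```

Because every machine already holds a recent replica, recovery is a promotion rather than a restore from a separate backup application, which is what keeps failover in the seconds-to-minutes range.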

"Using HybridCluster self-healing HA clusters helps us recover from failures automatically and seamlessly, rather than relying on a patchwork of monitors, scripts and manual procedures," stated HybridCluster customer David Kirkham, managing director at Beyond Colour, a creative marketing agency based in York, UK, that has hosted its own dedicated servers, providing solutions for customers ranging from individuals to enterprises across Europe, for over 15 years.

HybridCluster Self-Healing provides fully automated recovery from site, server, network or disk failures, with recovery measured in seconds and minutes rather than hours and days.
"Cloud and web hosting companies live in fear of data centre outages, which frequently result in damaging their margins and reputation," stated Ben Kepes, technology evangelist, commentator and HybridCluster investor and advisor. "With HybridCluster's high-availability platform, hosters can provide ultra-high availability web applications, databases and email at a fraction of the cost and time of competitive systems."
HybridCluster 2.0 is available immediately to cloud and web hosting service providers. Cloud and hosting service providers can visit www.HybridCluster.com to sign up for a trial. Visitors to HostingCon in Austin, Texas, from June 17 to 19, can see HybridCluster in action on the exhibit floor at exhibit #723 as well as see HybridCluster's CEO, Luke Marsden, first up on the Tech Track on the conference agenda.

IGEL turbocharges best-selling UD5 and UD3 thin clients with dual-core processors, increasing performance by up to 40%

IGEL Technology today met desktop requirements for increased broadband efficiency and computing power head on with its latest generation of UD5 and UD3 thin clients.
The new UD5 Dual Core can readily handle resource-intensive tasks such as decoding multimedia content. At the same time, it's more energy efficient than ever. In keeping with the well-proven Universal Desktop concept, the new top-of-the-line model also supports all popular communications protocols such as Citrix HDX, PCoIP and Microsoft RemoteFX.

Based on Intel's Sandy Bridge chipset technology, which is also used in today's more powerful notebooks, an Intel Celeron 847 dual-core CPU is at the heart of the UD5 Dual Core model series. In addition, 1 GB of main memory (DDR3 RAM) and up to 2 GB of flash memory (in the form of a SATA SSD) ensure fast signal processing. The newly improved UD5 also boasts upgraded hardware (including two fast USB 3.0 ports) that will be particularly beneficial when it comes to desktop virtualization. For instance, to accelerate multimedia playback on the hardware side, flash animations and other video files can be redirected from the server over to the thin client, which then locally decodes the content and plays it back smoothly and seamlessly. This highly effective multimedia redirection allows the new UD5 to conserve server resources while also offering users the best-possible playback experience.

Old and New Strengths Combined
Just like its predecessor, the new UD5 Dual Core comes with a large number of standard ports for peripherals. Standard features include one PS/2 port, two serial ports, one PCIe slot and a total of six USB ports (2 x USB 3.0, 4 x USB 2.0). Furthermore, an integrated smartcard reader is optionally available, as are two versions of IGEL's connectivity foot: one with an integrated WLAN module and an additional parallel port, and another with an anti-theft USB port within the foot itself. The UD5 Dual Core also comes standard with dualview support, featuring a DVI port and a DisplayPort, allowing simultaneous use of two digital monitors.
With its excellent multimedia capabilities, there are practically no limits to where the new, high-performance UD5 can be deployed. The UD5 Dual Core is also exceptionally well suited for multi-screen workstations or as an end-user device in education, where it can easily meet the need for full-screen HD video playback or the delivery of complex graphical content, such as Aero interface effects, PowerPoint presentations or flash animations within efficient cloud environments.

IGEL UD3 Gets 20%-Plus Boost
In terms of performance, the fourth generation of the IGEL UD3 surpasses even the top-of-the-range UD5 single-core models. The VIA Eden X2 dual-core processor with the VIA VX900 chipset speeds up the ultra-compact all-round thin client by over 20%. The new CPU is backed by up to 2 GB of DDR3 RAM and up to 4 GB of flash storage in the form of a SATA SSD. Further highlights include dualview support for two digital monitors (max. resolution: 1,920 x 1,200 pixels) as well as USB 3.0 on two of the six USB ports. With the new UD3 dual core, IGEL is making the transition to the world of cloud-hosted applications and virtual desktops more attractive and future-proof than ever before, combining improved multimedia capabilities and greater energy efficiency at a low price.
As with all IGEL devices, both new models come with the IGEL Universal Management Suite (UMS), the leading remote management software system within the sector. The UMS allows standardized and secure remote management as well as fast rollouts of IGEL Universal Desktops, converted PCs and selected thin clients from other manufacturers.
"Our incorporation of the very latest hardware components makes our best-selling models even more powerful," said Simon Richards, IGEL's UK MD. "For our customers, this means greater freedom in deciding how to implement complex virtualization projects while, at the same time, being able to enjoy all the proven benefits of thin clients along with IGEL's outstanding quality."

Prices and Availability
The IGEL UD5-740 LX with IGEL Linux will cost £414; the IGEL UD5-740 W7 with Windows Embedded 7 will cost £475. Both devices will be officially available from May 16.
The IGEL UD3-740 LX with IGEL Linux is available for £329, while the IGEL UD3-740 W7 with Windows Embedded 7 costs £429 in the Advanced version. Both devices will be available from May 27.
All prices are net end-customer retail prices. IGEL offers a five-year warranty. The IGEL Universal Management Suite (UMS) remote management software comes supplied as standard.