DataCore Software, a leader in software-defined storage and desktop virtualization software solutions, has announced that it is now shipping DataCore VDS 2.1, software that makes it simple and cost-effective to deploy persistent ‘stateful’ virtual desktops. The latest release supports the Windows Server 2012 and Server 2012 R2 platforms, includes an updated 25-virtual-desktop entry-level offering with Hot Standby VDI server protection built in, and adds a number of significant enhancements that improve the ease of use of the virtual desktop server software platform. The release also supports single sign-on (SSO) access and Active Directory (AD), plus new wizards, templates and tools that achieve a higher degree of integration with Microsoft VDI and its remote desktop and delivery services. The new DataCore VDS 2.1 software release is now available to customers via authorized European solution providers who are trained and qualified to deliver simple-to-use, high-performance and cost-effective VDI.
"DataCore VDS
overcomes the major cost, complexity and performance roadblocks slowing
VDI adoption especially at small to mid-size companies. Unlike highly
complex VDI solutions that are cost-effective only for very large 1000
or more desktop projects, DataCore VDS 2.1 removes the many painful
implementation obstacles while significantly lowering the cost per
virtual desktop instance,” says Christian Hagen, vice president of
DataCore EMEA. “DataCore finally makes it practical and affordable for
small, medium and large enterprises to start small and scale their
virtual desktop infrastructures without compromising on performance.”
DataCore VDS delivers a ‘true’ desktop user experience
Small and medium businesses want to adopt desktop virtualization for the same reasons large enterprises do – to reduce desktop management costs, improve productivity and increase business agility – but they can't afford enterprise-class solutions, which cost a fortune and are overkill for smaller environments.
DataCore VDS makes it economically feasible to deploy and operate persistent 'stateful' virtual desktop environments, at a lower cost and with higher performance than the alternatives.
DataCore VDS serves complete virtual desktops that are ‘stateful’ (or persistent), delivering a user experience similar to working directly on a physical desktop or laptop PC. With ‘stateful’ virtual desktops, each end user is assigned a virtual machine for their own use – to browse, download files and run their personal applications. Turn the virtual desktop off and back on, and it retains the state of what you were doing, just as when you close and open a laptop. It functions like your own private desktop; it's just virtual.
DataCore VDS delivers powerful virtual desktops for less than the cost of a PC refresh
DataCore VDS is an easy and affordable virtual desktop solution designed for environments from 25 desktops up to large-scale deployments. It deploys on off-the-shelf x86 servers, accommodates all the popular disk and flash storage offerings, and includes the tools Windows® administrators need to rapidly deliver centrally managed virtual desktops to any user for less than the cost of new PCs.
DataCore VDS's highly efficient architecture eliminates up to 75 percent of VDI costs without compromising performance or, most importantly, the user experience.
Product Availability
DataCore VDS 2.1 solutions are now shipping and available exclusively in Europe through DataCore’s established network of trained and authorized EMEA-based solution providers.
Pricing for DataCore VDS software licenses starts at around EUR 2,500, enabling complete, centrally managed VDI systems to be deployed at a cost per user well below that of new PCs.
Thursday, March 13, 2014
You're Already Running Hybrid Cloud – Whether You Know It or Not
A Contributed Article by Vic Nyman, Co-Founder, BlueStripe Software
Too many times, buzzword reality doesn't match the hype. In IT, too many buzzword-driven technologies are never implemented – destined to invade our vocabulary but never make it to the raised floor. Fortunately, not only have Hybrid Cloud applications made it to the Data Center, they are changing the way IT Operations gets things done. In fact, the influence of Hybrid Cloud applications is so broad that your Operations team is probably running Hybrid Cloud apps already, even if they don't know it. There are several reasons Hybrid Cloud is starting to make a difference in IT Operations shops large and small.
First, a Hybrid Cloud environment can deliver the best of both worlds:
- The flexibility of on-demand systems in a Public Cloud, allowing scalable user access
- The security and control achieved by running back-end systems on premises in the data center
The hardest part of operating an application in a Hybrid Cloud environment isn't setting it up, or even connecting it to the back-end systems – it's managing service performance, especially when problems occur. Having application components both on premises and in the cloud wreaks havoc on conventional monitoring tools. Cloud management tools don't see inside the Data Center, and Data Center tools can't see into the Cloud.
The key is End-to-end Transaction Monitoring.
IT Operations teams must turn to Transaction Monitoring solutions that do more than simply monitor response times. To deal with the complexity of a Hybrid Cloud application, the monitoring tools must be able to trace transactions in the cloud and in the data center, while (this is the hard part) connecting them together for an end-to-end transaction view.
To help Operations teams understand these dependencies, application and transaction topology mapping is critical for both monitoring and problem solving. Finally, the tools must be able to adjust their maps and dependency knowledge in real time, because Cloud systems are so dynamic.
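To make the connecting step concrete, here is a minimal sketch, in Python, of how transaction segments observed separately in the cloud and in the data center might be joined into one end-to-end view. The record fields (correlation_id, tier, start, end) are illustrative assumptions for this example, not any particular monitoring product's schema.

```python
# Illustrative sketch only: joins transaction segments recorded in the cloud
# tier and in the data center into one end-to-end view. The record fields
# (correlation_id, tier, start, end) are assumptions, not a product schema.
from collections import defaultdict

def stitch(records):
    """Group segments by correlation id; return per-transaction path and latency."""
    by_txn = defaultdict(list)
    for rec in records:
        by_txn[rec["correlation_id"]].append(rec)
    views = {}
    for txn, segs in by_txn.items():
        segs.sort(key=lambda r: r["start"])
        views[txn] = {
            "path": [s["tier"] for s in segs],                    # transaction topology
            "latency_s": max(s["end"] for s in segs) - segs[0]["start"],
        }
    return views

records = [
    {"correlation_id": "t1", "tier": "cloud-web", "start": 0.00, "end": 0.12},
    {"correlation_id": "t1", "tier": "dc-database", "start": 0.05, "end": 0.30},
]
print(stitch(records))  # t1 spans cloud and data center; end-to-end 0.30 s
```

Because the maps must stay current as Cloud instances come and go, a real tool would rebuild these views continuously rather than from a static batch of records.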
To learn more, you might enjoy a presentation by Forrester Cloud Expert David Bartoletti at http://bluestripe.com/hybrid-cloud-webinar.
APAC Organizations Choose Zerto Virtual Replication
To protect virtualized mission-critical business applications, enterprises around the world are turning to hypervisor-based disaster recovery (DR) and business continuity (BC) solutions. The combination of ease of use and the ability for end users to protect workloads of all sizes in the public, private or hybrid cloud enables Zerto customers Seoul Daily News, EUKOR Car Carriers, Inc. and Comvita to cost-effectively automate data replication and recovery, and to bring critical business operations back online following a data loss or disaster. To meet the rapidly growing demand for effective BC/DR in the region, Zerto recently opened an office in Sydney, Australia.
Zerto Virtual Replication (ZVR) replicates data at the hypervisor, rather than in physical storage, extending the benefits of datacenter virtualization to disaster recovery. Zerto’s hypervisor-based replication is the only complete BC/DR solution, delivering robust replication with fully automated orchestration such as simple failover, failback and DR testing. It is an easy-to-implement, software-only solution that delivers the granularity required in today’s complex IT infrastructures, with the ability to protect at the VM level.
“The adoption of Zerto Virtual Replication by these companies demonstrates that – regardless of physical location, cultural considerations or languages spoken – disaster recovery is a top priority for any enterprise, particularly those that cannot tolerate datacenter downtime,” said Ziv Kedem, CEO, Zerto, Inc. “Zerto delivers on the promise of virtualization by extending the cost savings, flexibility and agility of virtualization to BC/DR.”
Seoul Daily News Delivers for Its Readership
As the oldest daily newspaper in Seoul, Korea, the Seoul Daily News is the primary source of information for more than 780,000 readers every day. The backbone of its IT infrastructure is Microsoft SQL Server, which enables the organization to deliver accurate news throughout the region. The Seoul Daily News turned to Zerto to protect its database, as well as its credibility. ZVR enables the organization to fail over and recover quickly in the event of a natural disaster or power outage, as well as to recover fast from any hacking incident that could occur. ZVR helped the Seoul Daily News meet its strict service levels – including a recovery point objective (RPO) of 20 seconds and a recovery time objective (RTO) of 60 minutes. Now, the organization is protected from any disaster, natural or otherwise.
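For readers unfamiliar with the two metrics, a minimal sketch with invented timestamps may help: RPO bounds how much data can be lost (the age of the newest recoverable point), while RTO bounds how long recovery may take. The targets below match the ones cited above.

```python
# Minimal sketch of the two service levels, with invented timestamps.
# RPO bounds data loss (age of the newest recoverable point);
# RTO bounds time to restore service.
from datetime import datetime, timedelta

RPO_TARGET = timedelta(seconds=20)
RTO_TARGET = timedelta(minutes=60)

def meets_rpo(last_checkpoint, now):
    """The newest replicated checkpoint must be no older than the RPO."""
    return now - last_checkpoint <= RPO_TARGET

def meets_rto(failover_started, services_restored):
    """Recovery must complete within the RTO."""
    return services_restored - failover_started <= RTO_TARGET

now = datetime(2014, 3, 13, 12, 0, 30)
print(meets_rpo(datetime(2014, 3, 13, 12, 0, 18), now))   # True: checkpoint 12 s old
print(meets_rto(now, datetime(2014, 3, 13, 12, 45, 0)))   # True: restored in 44.5 min
```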
“We have been doing live and test failovers with Zerto Virtual Replication within our environment and the results have exceeded our expectations,” said Bonyang Goo, Deputy Director, Seoul Daily News. “I am a very strong advocate for the solution as I work with it on a daily basis and find it to be not only a robust BC/DR solution, but also a very simple one. We evaluated many BC/DR products that have good features, however they did not meet all of the needs of our environment.”
Virtualized BC/DR Becomes the New Standard for EUKOR Car Carriers, Inc.
EUKOR Car Carriers, Inc., of Korea operates a large and modern fleet of vehicle carriers, with many of the world’s largest car brands as its customers. EUKOR relies on its IT infrastructure to manage its route network, minimize transit times, reduce costs and maintain high quality cargo-handling for its customers. While headquartered in Korea, EUKOR has operations and offices in 10 other countries, and maintaining aggressive IT service level agreements (SLAs) ensures that EUKOR continues to deliver superior customer service. The company virtualized its environment with the goal of improving its BC/DR processes, and though it had a high availability (HA) configuration, it lacked a true DR solution. After testing Zerto in a trial environment, the IT team was impressed with the power and simplicity of Zerto, and more importantly, Zerto Virtual Replication delivered an RPO and RTO that far exceeded the company’s requirements.
“Whenever we test, Zerto helps us rapidly recover the point that we choose. It is as excellent as it is simple. The efficiency and performance of Zerto is superior to its competitors,” said Mr. Up Jung, deputy general manager, EUKOR Car Carriers, Inc. “With Zerto, we took another step towards our goals to become a leading company in the shipping industry.”
Comvita Protects its Private Cloud Investment
Comvita is a natural health and beauty products company with a strong focus on developing innovative products backed by scientific research. Based in New Zealand, Comvita exports its products to more than 18 countries through a network of wholesale and third-party outlets and online. The company has offices in New Zealand, Australia, Hong Kong, Japan, Taiwan, South Korea and the United Kingdom. It implemented a private cloud configuration to maintain complete control over its IT capabilities while keeping a close eye on costs. Comvita was investigating a new approach to BC/DR and knew it did not want to install a storage area network (SAN) at a secondary site to provide array-based replication with its primary datacenter. The IT team saw that Zerto Virtual Replication was both easy to use and hardware-agnostic. Zerto Virtual Replication not only delivered an RPO and RTO within Comvita’s requirements, it also made both application and DR testing extremely simple.
“We were very happy to find a solution that did not require a duplicate array on the target site. Like everyone else in IT, we are trying to do more with less, and Zerto Virtual Replication delivered the features we were looking for at the right price,” said Andrew Read, ICT manager, Comvita.
Red Hat Announces Certification for Containerized Applications, Extends Customer Confidence and Trust to the Cloud
Red Hat, Inc., the world’s leading provider of open source
solutions, today announced the extension of its application
certification program to include containerized applications. The Red Hat
Container Certification ensures that application containers built
using Red Hat Enterprise Linux will operate seamlessly across
certified container hosts. Designed with the needs of independent
software vendors (ISVs), service providers and their enterprise
customers in mind, the certification extends the confidence customers have in Red Hat Enterprise Linux, which currently
supports thousands of certified applications, to certified
containers running on certified container hosts. The pending
release of Red Hat Enterprise Linux 7 and Red Hat’s OpenShift
Platform-as-a-Service (PaaS) offering will both be certified
container hosts, with Docker as a primary supported container format.
Linux Containers
Linux Containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux Containers because they simplify and accelerate application deployment, while many PaaS platforms are built around Linux container technology, including OpenShift by Red Hat. Red Hat Enterprise Linux implements Linux Containers using core technologies such as Control Groups (cGroups), Resource Management, SELinux, and network namespaces, enabling secure multi-tenancy and reducing the potential for security exploits.
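As a rough illustration of those kernel primitives, the sketch below uses util-linux's `unshare` tool to launch a process in fresh PID and mount namespaces, the same isolation mechanism container runtimes build on. It assumes a Linux host with util-linux installed and root privileges; it is not Red Hat's tooling.

```python
# Rough illustration of kernel namespace isolation, not Red Hat tooling:
# run a command in fresh PID and mount namespaces via util-linux `unshare`.
# Assumes a Linux host with util-linux installed; requires root privileges.
import subprocess

def run_isolated(cmd):
    """Run cmd with its own PID namespace and a private /proc mount."""
    return subprocess.run(
        ["unshare", "--pid", "--fork", "--mount-proc"] + cmd,
        check=True,
    )

if __name__ == "__main__":
    # Inside the namespace, `ps ax` sees only the new process tree (PID 1
    # onward): lightweight isolation without a full virtual machine image.
    run_isolated(["ps", "ax"])
```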
Building on the capabilities of Red Hat Enterprise Linux, application containers are deployed as software packages that include the application and all of its required runtime components. Benefits include:
- Instant portability: deploy the certified application container across any certified container host
- Minimal footprint: avoid the overhead of virtual machine images, which include a complete operating system
- Simplified maintenance: reduce effort and risk by patching applications along with all of their dependencies
- Lowered development costs: develop, test and certify applications against a single runtime environment
The Red Hat Container Certification is the latest addition to Red Hat’s industry-leading certification portfolio, which includes Red Hat Enterprise Linux and Red Hat Enterprise Linux OpenStack Platform. The certification is also part of a larger set of Red Hat technology initiatives that in turn support thousands of third-party technology partners within Red Hat’s global ecosystem.
To support the newly announced container certification, Red Hat is also announcing a container certification Partner Early Access Program (PEAP). The PEAP will run through late spring, giving ecosystem partners the opportunity to participate in early testing, integration and feedback on the tools and resources required for containerization prior to the official launch of the certification and partner program, currently planned for mid-2014. Beginning today, companies interested in being part of the PEAP can learn more by visiting https://engage.redhat.com/content/enterprise-early-access-se-201402101640.
“Twelve years ago, Red Hat shook up the enterprise status quo with the introduction of Red Hat Enterprise Linux, a safe, consistent Linux platform for the ISV ecosystem. Today, the trust that ISVs place in Red Hat Enterprise Linux can extend to containerized applications with the Red Hat Container Certification, ensuring that containers built on Red Hat Enterprise Linux will work as intended across certified hosts, delivering expanded application flexibility, lowered costs and simplified maintenance, all backed by the support and expertise expected from Red Hat.” -- Paul Cormier, president, Products and Technologies, Red Hat
"As the de facto standard in open source container technology, we are pleased to see Red Hat embracing the container movement as a means of application efficiency and consistency across computing footprints. By offering a container certification on Red Hat Enterprise Linux and Docker, Red Hat is providing the industry’s only standardized, stable platform for commercial container workloads, providing a safe, reliable way to deliver the enterprise applications of the future.” -- Ben Golub, CEO, Docker
“Consistent with its mission of high value and affordability, Red Hat continues this trend through a supported platform that enables consistent enterprise-grade application deployment. We’re excited to work with Red Hat to provide customers a seamless database environment that is consistent from development to deployment, regardless of where it resides within the open hybrid cloud.” -- Matt Asay, Vice President of Marketing and Business Development, MongoDB
Check Point Collaborates With VMware to Provide Network Security for the Private Cloud
Check Point Software Technologies Ltd.,
the worldwide leader in securing the Internet, today announced that the
company is collaborating with VMware to automate and simplify the
provisioning and deployment of network security in private clouds. Check
Point will make its security protection solutions interoperable with
VMware infrastructure.
As today’s data center moves quickly down the path of virtualization, it is important for security to keep pace with virtual environments. Virtualization has greatly increased the speed at which applications can be deployed, and delivers the added benefits of agility and mobility to meet changing business requirements. Network security for virtualized environments must match this speed and agility through automated security management and provisioning, while enabling seamless support for existing security operational processes and policy.
Check Point Virtual Edition (VE) is a unique security solution that provides comprehensive protections and rapid deployment of security services in virtualized environments. By closely aligning Check Point’s security with VMware vSphere® 5.5, organizations will benefit from a fully automated security solution for a VMware virtualized environment. Virtual machines will be safeguarded through the breadth of Check Point’s multi-layered security protections without changing network topology. Additionally, the interoperability of Check Point solutions and VMware vCloud® Suite will enable simplified and automatic virtual application protections.
“As enterprises deploy their network infrastructure in the cloud, having security services be provisioned quickly for these virtual environments is of utmost importance,” said Gabi Reish, vice president of product management at Check Point Software Technologies. “The collaboration with VMware enables strong protection for network security through the evolution of our customers’ virtual environments.”
Available in Q1 2014, the new Check Point VE is just one part of Check Point’s larger cloud and virtualization security strategy, which will bring further interoperability with VMware solutions.
“VMware is delivering virtualization solutions across compute, networking, storage and management designed to help customers accelerate the path to the software-defined data center,” said Hatem Naguib, vice president, networking and security at VMware. “The new release of Check Point Virtual Edition, when combined with the latest release of vSphere or our new VMware NSX™ network virtualization platform, will offer our customers a simple solution to secure, manage and automate their virtualization and cloud environments.”
Features of the Check Point VE include:
- Interoperability with VMware vSphere 5.1 and 5.5, vCenter Server™, vCloud Suite and VMware NSX™ network virtualization
- Centralized security management for all Check Point virtual and physical security gateways
- Inspection of Inter-VM traffic without changing network topology
- Automated deployment and provisioning of Check Point’s VE gateways
- Automated protection of new virtual machines
- Use of VMware objects in Check Point policy rules
- Constant protection of VMs, regardless of IP address changes
- VM migration without breaking application connectivity
For more information on Check Point VE and the collaboration with VMware, visit: http://www.checkpoint.com/products/security-gateway-virtual-edition/
Thursday, March 6, 2014
CloudByte Software Upgrade Improves User Visibility Into Storage Metrics, Offers New Opportunities for Cloud Service Providers
CloudByte, the first company to provide dynamically selectable
storage performance on industry-standard hardware, announced today
a new release of its award-winning storage software, ElastiStor
1.3. The CloudByte software provides improved end-user visibility
into storage metrics and the ability to view the real-time status
of the CloudByte environment. It integrates with major virtualization vendors and cloud stack environments and expands certification to more hardware platforms.
“Without metrics that matter providing visibility insight into your storage environment usage and activity, you are flying blind,” said Greg Schulz, founder and senior advisor, StorageIO. “By providing visibility and situational awareness, CloudByte helps their customers safely navigate and leverage their storage infrastructure in different conditions to avoid becoming a cloud performance, availability or economic barrier.”
Detailed storage reports and metrics that improve end-user visibility can be accessed from the CloudByte ElastiCenter console or, conveniently, from a mobile device, similar to the functionality of Amazon CloudWatch. Another visibility feature is the standardization of a key performance measure – IOPS – across hardware platforms for improved billing consistency.
Integration enhancements include a new REST API set that service providers can use to deliver virtual machine-level IOPS provisioning and management directly from VMware vCenter. VM snapshots and clones can also be driven from vCenter. A new plug-in provides application-consistent snapshots for Microsoft SQL Server environments.
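As an illustration of what such an integration might look like, here is a hedged sketch of a REST call that sets a per-VM IOPS guarantee. The announcement does not publish CloudByte's endpoint or command names, so the URL, command and parameter names below are hypothetical stand-ins, not the actual API.

```python
# Hypothetical sketch: the URL, command and parameters below are invented
# stand-ins for the kind of REST call a provider portal might make to set a
# per-VM IOPS guarantee on the volume backing that VM. Not CloudByte's API.
import requests

API_URL = "https://elasticenter.example.com/client/api"   # hypothetical host

def set_vm_iops(api_key, volume_id, iops):
    """Provision a guaranteed IOPS level for one VM's backing volume."""
    resp = requests.post(
        API_URL,
        params={
            "command": "updateVolumeIOPS",   # hypothetical command name
            "apikey": api_key,
            "id": volume_id,
            "iops": iops,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```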
Environment support for this release includes:
- OpenStack Cinder plugin support for the OpenStack Grizzly release
- A CloudByte IOPS management plugin for the CloudStack 4.3 release
- VMware block and file certifications
- CloudByte-certified hardware platforms: Dell PowerEdge Rx20 server series; HP ProLiant DL Rack Servers, Generation 8; and DataON Cluster-in-a-Box series
“This release is in direct response to our service provider customers’ requests for functionality that directly allows them to increase market share,” said Felix Xavier, CloudByte CTO and founder. “Better end-user visibility, better integration, and broader environment support increase our customers’ ability to capture new enterprises and share of wallet of existing enterprises.”
For current customers, CloudByte ElastiStor 1.3 is immediately available from the CloudByte support website. A version that includes a free perpetual license for four terabytes of storage capacity can be downloaded from the Free Trial page.
DataCore Unveils Industry's First Software-Defined Storage Benefits Calculator
DataCore, a leader in software-defined storage, today launched the
industry’s first software-defined storage benefits calculator as part of its revamped website. By providing a few details about an
organization’s storage environment, the benefits calculator
instantly provides a personalized savings report, which details
the benefits of using DataCore for storage virtualization,
covering six categories:
- Financial savings over five years
- Reduced downtime (hours)
- Centralized management
- Pooled tiered resources
- Reclaimed free space (TBs)
- Extended functional coverage
DataCore partnered with Alinean – the leading ROI and TCO experts – to develop the software-defined storage benefits calculator. The calculator estimates savings based on: industry estimates for costs associated with downtime (planned and unplanned); the reduction in downtime customers experience after introducing DataCore; industry averages for the cost of storage and the staff to manage it; the improvement in utilization once stranded capacity is pooled, provisioned, cached and auto-tiered using DataCore; and the extended functional coverage that DataCore’s software-defined storage platform brings through infrastructure-wide services.
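The arithmetic behind such a calculator can be sketched in a few lines. The model below is a toy illustration with invented coefficients, not the Alinean/DataCore model, but it shows how utilization gains and downtime reductions translate into a savings figure.

```python
# Toy illustration of the arithmetic behind a storage-savings calculator.
# Coefficients are invented for the example; the real DataCore/Alinean
# model draws on industry benchmark data for each input.
def estimate_savings(raw_tb, cost_per_tb, downtime_hours_per_year,
                     cost_per_downtime_hour, years=5,
                     utilization_gain=0.30, downtime_reduction=0.90):
    """Savings from reclaimed capacity plus avoided downtime over `years`."""
    capacity_savings = raw_tb * cost_per_tb * utilization_gain
    downtime_savings = (downtime_hours_per_year * cost_per_downtime_hour
                        * downtime_reduction * years)
    return capacity_savings + downtime_savings

# Example: 100 TB at $2,000/TB, 10 h/yr of downtime at $50,000/h.
print(estimate_savings(100, 2000, 10, 50000))  # 60000 + 2250000 = 2310000
```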
In addition to the benefits calculator, the DataCore website now features hundreds of real-world customer case studies from different industries and a number of new video testimonials demonstrating the real-life benefits that customers are experiencing as a result of using DataCore. The most recent video testimonial highlights how Englewood Hospital reduced downtime, increased performance, improved IT responsiveness and decreased storage-related spending. The site also features a two-minute video that exposes the secret that hardware vendors have been keeping from their customers for years – how software-defined storage will change the IT industry forever.
For more information and to view the revamped website, please visit www.datacore.com.
Nutanix Patent Ups the Ante in Software-Defined Storage Wars
Nutanix, the leading provider of next-generation datacenter infrastructure solutions, today announced that it has been granted a patent (US 8,601,473) from the United States Patent and Trademark Office recognizing the company’s fundamental storage architecture for virtualized datacenters, which not only renders 20-year-old storage area network (SAN) technology obsolete, but also challenges the entire industry to design a next-generation storage architecture with comparable performance and scalability. The patent provides clarity as to how software-defined storage solutions are optimally designed and implemented, while at the same time bringing the technology into mainstream enterprises around the world.
Leveraging these documented inventions, Nutanix brings simplification to enterprise data centers by delivering scale-out storage services via software running on off-the-shelf x86 servers. Datacenter managers are now freed from rigid storage constructs and burdensome administrative tasks that have frustrated virtualization teams and slowed important business initiatives. Nutanix customers benefit from advanced web-scale technologies without sacrificing project velocity, data management or the freedom to choose the right hypervisor for their organization.
This patent demonstrates the company’s commitment to advancing the state of the art in data center technology and to simplifying how enterprise IT infrastructures are designed, built and managed. Specifically, it details how a system of distributed nodes (servers) provides high-performance shared storage to virtual machines (VMs) by utilizing a “Controller VM” that runs on each node. The Controller VMs aggregate the local storage resources across all servers, including direct-attached flash, and deliver a shared pool of data storage that is functionally equivalent to a SAN or NAS array – but with greater scalability and higher performance. The architecture natively delivers enterprise-class storage capabilities, such as snapshots, clones, compression and deduplication, to run any production workload.
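A toy model may help visualize the pooling idea: each node contributes its local disks to one logical pool whose capacity grows as hosts are added. This is a conceptual sketch only, not Nutanix's implementation.

```python
# Conceptual sketch of the pooling idea, not Nutanix internals: a controller
# on each node contributes that node's local disks and flash to one shared
# pool, and capacity scales out as hypervisor hosts are added.
class Node:
    def __init__(self, name, local_gb):
        self.name = name
        self.local_gb = local_gb   # direct-attached disk + flash on this server

class StoragePool:
    """One global resource pool aggregated from every node's local storage."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        # Adding a host grows the pool; no separate SAN array to expand.
        self.nodes.append(node)

    def capacity_gb(self):
        return sum(n.local_gb for n in self.nodes)

pool = StoragePool([Node("node-1", 4000), Node("node-2", 4000)])
pool.add_node(Node("node-3", 4000))
print(pool.capacity_gb())   # 12000 GB visible to VMs on any node
```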
“Nutanix has pioneered a radically simpler and more scalable storage architecture for all virtualized environments,” said Dr. Willy Zwaenepoel, professor and former dean at Ecole Polytechnique Federale de Lausanne (EPFL) in Switzerland. “This patent recognizes their technical vision and leadership while raising the stakes on other companies in the web-scale market.”
The Nutanix architecture also provides a unified data fabric for building hybrid clouds. Enterprise IT managers can leverage public cloud resources on demand, while retaining the control and security of their private cloud infrastructure. This flexibility is made possible by a 100 percent software-driven architecture and is implemented independently of any specific virtualization technology or vendor.
In addition to publicly disclosing important intellectual property, the patent also details Nutanix’s vision for how data centers will evolve in the cloud era. It is this powerful vision, captured nearly three years ago, that has driven the company’s leadership in the fast-growing web-scale and converged infrastructure markets. Highlights include:
- The modern data center must “…virtualize all storage hardware as one global resource pool that is high in reliability, availability and performance…” to eliminate islands of storage that are difficult to manage, and nearly impossible to scale.
- Unlike traditional network-based storage, the Nutanix architecture “…permits local storage that is within or directly attached to the server and/or appliance to be managed as part of the storage pool.” This enables enterprises to fully leverage server-attached flash technology, and avoid the end-to-end latency incurred when flash is added to centralized, network-based storage systems.
- Datacenter scalability will be driven by a “…massively-parallel storage architecture that scales as and when hypervisor hosts are added…” Such massive scalability can only be achieved if the storage infrastructure can be scaled independent of any hypervisor constraints.
- When storage resources are virtualized, they should be able to be “…used in conjunction with any hypervisor from any virtualization vendor.”
Cloudinary Expands One-Stop Shop for Cloud-Based Image Management; Announces New Image Processing Add-Ons
Cloudinary, the leading cloud-based image management platform, today announced new and fully integrated image processing add-ons, leveraging technologies by Imagga, ReKognition, URL2PNG, Aspose, WebPurify and others. Used by nearly 20,000 developers from more than 100 countries, Cloudinary's new add-on offerings enable automatic image moderation, image categorization, smarter image cropping, improved image compression, advanced face attributes detection, website screenshot generation and more through a single-click integration – building on the company's comprehensive online solution for one-stop-shop image management.
“Today's websites are very image rich and require an increased focus on user experience and accessibility on a wide range of devices,” said Paul Burns, president, Neovise. “Managing the image pipeline that supports this trend – from an image's initial upload to its final delivery and viewing – is challenging, time consuming and costly, both from a product, R&D and IT perspective. We are seeing many organizations investing considerable efforts into building and supporting in-house solutions to manage their image pipeline.”
Cloudinary's cloud-based software-as-a-service (SaaS) solution, with its rich API, delivers complete end-to-end image management, including image uploads, cloud-based storage, online administration, on-the-fly manipulation and optimized delivery, enabling developers to focus on building innovative applications instead of supporting and managing image integration in-house. The newly available image processing add-ons can be used via simple API calls, without requiring separate manual integration with each external service, delivering a seamless, image-rich Web and mobile experience.
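Cloudinary's on-the-fly manipulation works by encoding transformations into the delivery URL itself. The sketch below builds such a URL by hand (the official SDKs wrap this); "demo" and "sample.jpg" are Cloudinary's public demo cloud and asset, and the three parameters shown are a small illustrative subset of the available transformations.

```python
# Builds a Cloudinary delivery URL by hand; the official SDKs wrap this.
# "demo"/"sample.jpg" are Cloudinary's public demo cloud and asset; the
# parameters shown are a small illustrative subset of the transformations.
def cloudinary_url(cloud_name, public_id, width=None, height=None, crop=None):
    parts = []
    if width:
        parts.append("w_%d" % width)
    if height:
        parts.append("h_%d" % height)
    if crop:
        parts.append("c_%s" % crop)
    segments = ["https://res.cloudinary.com", cloud_name, "image/upload"]
    if parts:
        segments.append(",".join(parts))   # e.g. "w_300,h_200,c_fill"
    segments.append(public_id)
    return "/".join(segments)

# One uploaded original, many on-the-fly variants:
print(cloudinary_url("demo", "sample.jpg", width=300, height=200, crop="fill"))
# https://res.cloudinary.com/demo/image/upload/w_300,h_200,c_fill/sample.jpg
```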
The Cloudinary Experience
Designed by developers for developers, Cloudinary offers a comprehensive, highly customizable, scalable and affordable approach to image management. Cloudinary's benefits include:
- One-Stop Shop: Designed to simplify the entire image management pipeline, Cloudinary offers a one-stop shop for developers. By including new add-on capabilities, Cloudinary is expanding on its image management vision and easing the hassle developers often face in creating a media-rich Web and mobile user interface.
- Rich Image Manipulations: Cloudinary follows a single-source imaging concept, in which a single uploaded original image can be manipulated on-the-fly to best fit different graphic designs and devices, including smartphones, tablets and desktop browsers.
- Straightforward Integration: Cloudinary offers integration libraries for most Web development platforms and SDKs for iPhone and Android, as well as modern PaaS solutions, making it easy to get started managing images in the cloud.
- Dynamic Scale: Cloudinary offers plans for companies of all sizes, and currently is used by a variety of companies – both large and small – across numerous verticals.
General Availability and Pricing
Cloudinary's add-on features are available today. Cloudinary offers a fully featured free plan, in addition to paid plans that start at $39 per month and go up to enterprise plans that support millions of daily images.
SolarWinds Makes Powerful Network Monitoring Accessible to Masses
SolarWinds, a leading provider of powerful and affordable IT management software, today announced further enhancements to SolarWinds Network Performance Monitor (NPM) that offer the robust network performance monitoring, alerting and reporting capabilities available in 'Big Four' enterprise software, but at an affordable price that is unique in the industry. The latest SolarWinds NPM demonstrates a continued commitment to simplifying network management in the datacenter amidst increasingly complex network challenges.
"SolarWinds focuses on developing products that solve the everyday needs of all IT Professionals -- most of whom don't have deep pockets or time to spare installing or configuring software," said Chris LaPoint, VP Product Management, SolarWinds. "And as technology advances and networks grow increasingly complex, SolarWinds must preserve our long-standing tradition of providing the powerful capabilities of enterprise-level software you might see in expensive 'Big Four' software, but without the need for professional services and at a price that nearly any organization can afford."
In keeping with the tradition of delivering advanced functionality at a market-differentiating price, SolarWinds NPM has added new baseline threshold calculating functionality, providing the deep visibility and advanced analytics not previously available to network administrators at such a price. Other features SolarWinds NPM was previously first to introduce to the market for a low cost include enterprise-level wireless support for LAN controllers and thin access points, monitoring for major routing protocols including OSPF and BGP, and a high-availability Failover Engine, all of which have differentiated SolarWinds NPM as a uniquely powerful, yet affordable, network monitoring solution.
New SolarWinds NPM Features
"thwack is second to none," said Ashley Cotter, applications engineer for Timico who evaluated an early version of SolarWinds NPM's latest features. "I voted for the new baseline threshold alerting and it's now included in the software. I can ask SolarWinds senior developers technical questions on a peer level and I know I'm actively impacting SolarWinds product development, and that's just not something I've seen anywhere else."
Pricing and Availability SolarWinds Network Performance Monitor pricing starts at $2,675 and includes the first year of maintenance. For more information, including a downloadable, free 30-day evaluation, visit the SolarWinds website or call 866.530.8100.
"SolarWinds focuses on developing products that solve the everyday needs of all IT Professionals -- most of whom don't have deep pockets or time to spare installing or configuring software," said Chris LaPoint, VP Product Management, SolarWinds. "And as technology advances and networks grow increasingly complex, SolarWinds must preserve our long-standing tradition of providing the powerful capabilities of enterprise-level software you might see in expensive 'Big Four' software, but without the need for professional services and at a price that nearly any organization can afford."
In keeping with this tradition of delivering advanced functionality at a market-differentiating price, SolarWinds NPM has added new baseline threshold calculation functionality, providing deep visibility and advanced analytics not previously available to network administrators at such a price. Other features that SolarWinds NPM was first to bring to market at low cost include enterprise-level wireless support for LAN controllers and thin access points, monitoring for major routing protocols including OSPF and BGP, and a high-availability Failover Engine, all of which have differentiated SolarWinds NPM as a uniquely powerful, yet affordable, network monitoring solution.
New SolarWinds NPM Features
- Dynamic baseline threshold calculation defines critical network device thresholds based on historical network performance data, enabling users to configure alerts simply and accurately. It automatically gathers baseline data for specified time intervals (weekly or daily) to determine normal operating parameters and calculates thresholds with relevant alert levels – good, warning, critical or down – which allow IT Pros to quickly assess whether a device is up but performing poorly or is about to fail (see the sketch after this list).
- Custom NOC views provide the ability to easily configure and display real-time fault, performance, and availability dashboards tailored to the needs of a network operations center (NOC) of any size.
- Dynamic customizable network mapping enables IT Pros to drag and drop network devices into network maps and automatically view real-time and color-coded link utilization for fully customized and detailed insight into network health beyond up-down status. The Network Weather Map's dynamic link utilization helps IT Pros identify when the network is approaching critical thresholds so they can prevent network errors and downtime.
- Enhancements to the universal device poller feature allow users to easily add CPU and memory utilization charts to the SolarWinds NPM dashboard for additional, less common devices, and include those new device statistics in monitoring and reporting.
- Support for additional routing protocols and added support for Ruckus and Motorola wireless controllers and devices enable broader network monitoring for IT Pros.
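To make the baselining idea concrete, here is a minimal Python sketch of the general technique the first bullet describes: derive per-device thresholds from historical samples, then classify current readings against them. The function names, sample data and sigma multipliers are illustrative assumptions, not SolarWinds' actual implementation.

```python
import statistics

# Illustrative sketch of dynamic baseline thresholding: derive
# "warning" and "critical" bands from historical samples instead of
# hard-coding static limits. The multipliers are assumptions, not
# SolarWinds' values.

def compute_baseline(samples):
    """Return (mean, stdev) for one device metric, e.g. hourly CPU %."""
    return statistics.mean(samples), statistics.stdev(samples)

def classify(value, mean, stdev, warn_sigmas=2, crit_sigmas=3):
    """Map a current reading onto good / warning / critical."""
    if value <= mean + warn_sigmas * stdev:
        return "good"
    if value <= mean + crit_sigmas * stdev:
        return "warning"
    return "critical"

# Fabricated history of CPU-utilization samples for one device.
history = [31, 35, 28, 40, 33, 37, 30, 36, 34, 32, 38, 29]
mean, stdev = compute_baseline(history)
print(classify(43, mean, stdev))  # "warning": above mean+2*stdev, below mean+3*stdev
```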
"thwack is second to none," said Ashley Cotter, applications engineer for Timico who evaluated an early version of SolarWinds NPM's latest features. "I voted for the new baseline threshold alerting and it's now included in the software. I can ask SolarWinds senior developers technical questions on a peer level and I know I'm actively impacting SolarWinds product development, and that's just not something I've seen anywhere else."
Pricing and Availability
SolarWinds Network Performance Monitor pricing starts at $2,675 and includes the first year of maintenance. For more information, including a downloadable, free 30-day evaluation, visit the SolarWinds website or call 866.530.8100.
Wednesday, February 26, 2014
Parallels New Solutions Uniquely Enable Business Transformation Through the Cloud
Parallels, a leading hosting and cloud services enablement provider, introduced its newest offerings to address the evolving cloud computing opportunities for small to medium businesses (SMBs) at Parallels Summit 2014.
According to Parallels' recently released SMB Cloud Insights™ research, the global market for SMB cloud services was $62 billion in 2013 and is projected to grow at a 26% CAGR to more than $125 billion by 2016. The growth is largely driven by SMBs' increasing use of cloud services, from an average of 5 services in 2013 to more than 9 by 2016.
Businesses are buying their services from many providers, which means more time and resources spent managing IT, while service providers fail to maximize their opportunities. To capture these opportunities, hosters and service providers need the right underlying platform to deliver a relevant, integrated and easy-to-use set of services to their customers. Parallels makes that possible.
Parallels Automation with APS 2 is the leading platform for SaaS delivery, with more than 500 services available through private, public and hybrid clouds. Now, Parallels Automation with APS 2 is also the leading platform for IaaS delivery. With Parallels Automation, a single platform manages and provisions IaaS based on the leading cloud management platforms and leading virtualization and storage solutions. This gives Parallels partners the choice they need to take advantage of market growth with the technologies their customers prefer, supporting a diversity of use cases including virtual data centers, business applications, enterprise applications, e-business hosting, test and development, batch computing, and SaaS applications.
Parallels Plesk is the leading web management solution designed for shared hosting businesses, designers, digital agencies, SMB IT professionals and application developers – with each edition optimized for how they use Plesk on a day-to-day basis.
“Right now, businesses everywhere are moving to the cloud faster than ever,” said Birger Steen, chief executive officer, Parallels. “For the first time in history, critical business technology is as accessible for the florist on Main Street as it is for the behemoths of Wall Street. As a result, the SMB cloud industry is growing 30-40% year over year. Service Providers working with Parallels are able to tap into this growth with a proven platform and approach to make the cloud work for all types of businesses, small and large.”
New Cloud Services Solutions
- Parallels Plesk 12 provides a new security core with Atomicorp ModSecurity rules that deliver powerful server-to-site security right out of the box, and a new WordPress toolkit that helps hosters mass-manage and secure multiple WordPress installations, including all plug-ins and themes, across their infrastructure. This provides a better and more secure customer experience.
- Windows Azure Pack APS Package enables service providers to rapidly deploy Microsoft’s Windows Azure Pack services through the APS for Parallels Automation. The Windows Azure Pack helps service providers to offer the advanced cloud capabilities of Windows Azure services based on their own infrastructure. These technologies include the ability to offer IaaS and hosted databases with virtual networks, high-density scalable web sites, as well as an enterprise service bus. The Parallels Automation environment allows for seamless provisioning of these new services, as well as the consolidated billing with other offerings in the service provider’s environment.
- Flexiant Cloud Orchestrator APS Package enables the rapid deployment of highly differentiated public infrastructure as a service (IaaS) cloud offerings based on the Flexiant Cloud Orchestrator (FCO) with automated billing, provisioning and customer management provided by Parallels Automation. When the Flexiant APS Package is deployed with Parallels Automation, FCO provides compute, network and storage orchestration; end user self-service control panels; APIs; and support for a full range of virtualization technologies including Parallels Cloud Server.
- APS-packaged IaaS offerings are now available from IBM/SoftLayer and Hostway. For service providers who want an IaaS solution with a low barrier to entry and fast time to market, both IBM/SoftLayer and Hostway have APS packages to enable syndication of their infrastructure through Parallels Automation.
- IDSync® APS package from InnerApps enables hybrid cloud scenarios by leveraging the APS 2 Service Bus. Utilizing the IDSync Open AD™ integration methods, ISVs can build on two-way Active Directory® data integration, advanced provisioning, and automated service discovery and configuration for any cloud offering which exploits the unique features of APS 2. This creates exciting new opportunities for service providers to offer unique upselling experiences based on discovery of compatible services, while at the same time providing unified login capabilities.
- Parallels Cloud Server, the only complete solution for cloud virtualization and cloud storage, is now integrated with Flexiant Cloud Orchestrator (FCO) to enable service providers using Flexiant Cloud Orchestrator to deploy cloud infrastructure based on high-performance, high-density Parallels Cloud Server virtualization and storage. Parallels Cloud Server combines Parallels Cloud Storage with Parallels Containers and Parallels Hypervisor to dramatically improve server performance, reliability and profitability.
Nexenta Delivers Next Generation Software Defined Storage Solutions
Nexenta (@Nexenta), the global leader in Software Defined Storage (SDS) (#softwaredefinedstorage) solutions, today announced the latest release of NexentaStor, the award-winning SDS solution and a key building block of the Software Defined Data Center. The announcement was made at Cloud Expo Europe 2014 in London. This release of NexentaStor will be available in Q2 2014.
NexentaStor has strategically impacted the storage industry by lowering TCO, and with the imminent release of NexentaStor 4.0 the business and technology benefits will extend beyond basic economics. NexentaStor is the leading commercial-grade SDS implementation, and SDS is a key component of the growing shift toward the Software Defined Data Center.
"The volume and type of data being collected by enterprises today is moving storage to the forefront of CIO and IT agendas," according to Donna Taylor, Research Director, IDC. "This data deluge is creating very real and very pressing issues to organizations struggling to keep up. In such environments, storage is becoming a strategic company-wide issue and companies are eager to find solutions that balance security, price, performance and reliability."
Today, Nexenta shares the continued innovation of its industry-leading NexentaStor open storage solution. Enhancements include:
- 50% reduction in HA failover times
- Embedded intelligence in the Fault Management Architecture (FMA) which detects and resolves issues around failing hardware to reduce performance impacts
- Improved problem isolation that provides better reliability and visibility
- Reduced latencies to end users with the addition of Server Message Block (SMB) 2.1 support
- Simplified administration with new wizard-driven storage pool configuration and deployment tools
- Dramatic improvement in performance with support for 512GB memory
- Significant replication improvements with a highly improved AutoSync service
- Numerous additional SNMP MIBs and Traps for increased observability and monitoring
- Code base migration to Illumos open-source operating system, for improvements in security, management and monitoring
- Improved stability and supportability
- Seamless upgrade from all existing NexentaStor versions
With NexentaStor 4.0, enterprises can continue to liberate themselves from the traditional hardware-centric storage approach with a fully featured, unified solution that scales to meet growing storage requirements on any platform. NexentaStor offers enterprise capabilities for data integrity and workload performance, giving customers the flexibility to select their storage operating system independently from the hardware on which it runs. Commodity storage hardware on its own, by contrast, tends to lack defenses against silent data corruption, is difficult to manage, and suffers degraded performance as storage needs grow.
Piston OpenStack 3.0: The Last OpenStack Product You'll Ever Try
Piston Cloud Computing, Inc., the enterprise OpenStack company, today announced, in conjunction with an Intelemage customer announcement, a new release of Piston OpenStack, a turn-key software product that uses advanced systems intelligence to orchestrate an entire private cloud environment using commodity hardware.
"Piston delivered the first commercial OpenStack product to market, and over the last two years, we've seen incredible customer success. To date, dozens of companies, from leading financial services organizations and medical research labs to global service providers and federal government agencies, have deployed Piston OpenStack, at scale, in diverse production environments," said Jim Morrisroe, CEO of Piston. "With Piston OpenStack 3.0, our customers can stop worrying about their IT infrastructure and spend more time growing their business."
Piston OpenStack 3.0: The Last OpenStack Product You'll Ever Try
Piston has taken the OpenStack framework and integrated it with best-of-breed technologies to create a complete private cloud solution. Including a highly-secure Micro Operating System, Piston OpenStack installs directly onto commodity hardware with no per-server configuration. Using a hyper-converged architecture, Piston delivers virtualized compute, storage, and network capabilities from every server, making it truly the most efficient model of private cloud delivery.
With Piston OpenStack, customers achieve the business agility of cloud computing, with the security of an on-premise solution, at less than a third of the cost of public cloud. New features and benefits of Piston OpenStack 3.0 include:
Availability, Professional Services and Pricing
Piston OpenStack 3.0 is available now, and may be downloaded free for 90 days at http://www.pistoncloud.com/start. After the 90-day free trial period, Piston OpenStack is available through an annual software subscription license, which includes an automated, online update service and access to 24×7 customer support.
"Piston delivered the first commercial OpenStack product to market, and over the last two years, we've seen incredible customer success. To date, dozens of companies, from leading financial services organizations and medical research labs to global service providers and federal government agencies, have deployed Piston OpenStack, at scale, in diverse production environments," said Jim Morrisroe, CEO of Piston. "With Piston OpenStack 3.0, our customers can stop worrying about their IT infrastructure and spend more time growing their business."
Piston OpenStack 3.0: The Last OpenStack Product You'll Ever Try
Piston has taken the OpenStack framework and integrated it with best-of-breed technologies to create a complete private cloud solution. Built on a highly secure Micro Operating System, Piston OpenStack installs directly onto commodity hardware with no per-server configuration. Using a hyper-converged architecture, Piston delivers virtualized compute, storage, and network capabilities from every server, making it an exceptionally efficient model of private cloud delivery.
With Piston OpenStack, customers achieve the business agility of cloud computing, with the security of an on-premise solution, at less than a third of the cost of public cloud. New features and benefits of Piston OpenStack 3.0 include:
- Enhanced Storage: Fine-grained configuration of multi-tier storage pools for improved performance and cost-efficiency.
- Enhanced Networking: A wide range of software-defined networking (SDN) options including Juniper Contrail, PLUMgrid, and VMware NSX.
- Moxie Runtime Environment (RTE) Access: Now customers can leverage Piston's distributed systems orchestration, Moxie RTE, to deploy third party services and run any infrastructure service as highly-available and disaster-proof.
- New Cluster Management APIs and Tools: Richer tools for node health monitoring and cluster management.
- Remote Technical Support Access: One-click secure access to customer support.
- Redesigned Dashboard: New user interface (UI) for improved usability.
- Multi-Region Management: New dashboard interface makes it easy for users to consume clouds of 10,000 servers distributed over multiple geo-locations through a single API.
Availability, Professional Services and Pricing
Piston OpenStack 3.0 is available now, and may be downloaded free for 90 days at http://www.pistoncloud.com/start. After the 90-day free trial period, Piston OpenStack is available through an annual software subscription license, which includes an automated, online update service and access to 24×7 customer support.
Tilera Launches Open Virtual Switch Solution (OVS) to Accelerate NFV and SDN
Tilera Corporation, the leader in high-performance, power-efficient computing, today announced the availability of TILE-OVS, an optimized Open vSwitch (OVS) offload solution for network functions virtualization (NFV) deployments in data center and telco networks. Powered by Tilera's TILE-Gx manycore processors and deployed in a PCI Express form factor, the solution delivers up to 80Gbps of OVS processing with additional headroom to spare for many other sophisticated networking applications such as deep packet inspection (DPI), network analytics and cyber security processing.
In software defined networking (SDN) and NFV paradigms, networking functions are deployed as virtual machines on x86 servers rather than running on purpose-built appliances. However, due to the computational complexity of networking functions, virtual switching, and the ever-increasing rate of IP traffic, these workloads consume a significant portion of the x86 server's compute capabilities. With TILE-OVS, high-throughput networking, switching, and security functions are handled by TILE-Gx processors with direct coupling to the Ethernet I/O, dramatically offloading the x86 server.
"Offloading Open vSwitch from x86 host CPUs provides several benefits for NFV," said Matthew Mattina, CTO, Tilera Corporation. "OVS is computationally demanding on x86 machines, especially when dealing with high throughput, high packet rates, and tunneling protocols like NVGRE that leave fewer cycles to run the desired virtual functions. By offloading OVS to a Tilera data-plane adapter, our customers are able to free up more cycles for host-side applications resulting in NFV capable servers that are power and cost efficient, and support OpenFlow controllers."
"At ZTE, we create solutions to address the needs of our customers deploying next-generation solutions using NFV," said Xiong Xiankui, ZTE Central Research Institute chief engineer. "The TILE-Gx processors from Tilera deliver the right combination of features, performance-per-watt and programmability to create application offload solutions such as Open vSwitch and other compute-intensive networking offloads."
The TILEncore-Gx Intelligent Application Adapters incorporate more than enough computing power for OVS at 20 to 80Gbps, with headroom to perform other NFV data plane offloads such as deep packet inspection (DPI) or intrusion detection and prevention (IDS/IPS) functions. Tilera's high-performance PCIe interface with SR-IOV (single root I/O virtualization) enables peak data transfer rates between the host and the TILEncore-Gx adapter. The OVS solution leverages open source software and Tilera's Multicore Development Environment (MDE) software tool suite, with a standards-based Linux and C/C++ programming environment, and scales with full software compatibility across the entire family of TILEncore-Gx adapters.
Tilera will demonstrate its TILE-OVS capabilities at RSA 2014 in San Francisco at the Moscone Convention Center, February 24-27, 2014 in South Expo booth #2320.
Catbird Partners With Trapezoid to Meet Control Requirements for Security, Privacy and Compliance in Enterprise Cloud Environments
Catbird, the leader in security policy automation and enforcement for private clouds and virtual infrastructures, announced today it has entered into a partnership with Trapezoid, an enterprise cloud cyber security company focused on the development and deployment of dynamic security and compliance postures based on platform integrity data, including Intel Trusted Execution Technology (TXT). The combined solution dramatically increases security and compliance in the key areas of asset inventory, logical segmentation, continuous monitoring, and policy enforcement.
"Customers have been asking for root-of-trust capabilities to meet dozens of requirements in FISMA, FedRAMP," said Edmundo Costa, CEO of Catbird. "Trapezoid integration delivers unique security and audit capabilities to monitor integrity, boundary and trust information to meet FISMA Controls such as SC-11: Trusted Path and PE-18: Location of Information Systems. Catbird and Trapezoid are the only vendors able to offer continuous monitoring and enforcement to FISMA, FedRAMP and other leading standards for virtual and private cloud infrastructure."
"I am very excited that we are bringing together Catbird's granular understanding of the virtual network environment with Trapezoid's Intelligent Views of the integrity and identity of cloud platform infrastructure," said Robert Rounsavall, President & Cofounder of Trapezoid, Inc. "Catbird's comprehensive security, compliance, and policy enforcement engine leverages Trapezoid's integrity, boundary, and trust events for real-time enforcement actions and alerts to policy violations creating mechanisms for potential remediation actions based on the integrity and trust degradation of the cloud platform that is hosting the assets Catbird is monitoring," continued Rounsavall. "Catbird users can now diagram, configure, and secure their Catbird TrustZones not only by data flow, but also by the trust, integrity, and location of all systems and networks in the cloud provided by the Trapezoid Trust Visibility Engine."
The combined solution will leverage Trapezoid Marker to meet twenty-four FISMA controls across eight control areas by enabling workloads to be securely migrated among hybrid cloud environments and distributed data centers. With it, Catbird can take into account platform attestation, safer hypervisor launch, and geo-location restrictions to enhance security and compliance enforcement. Trapezoid also enhances Catbird's visualization, automation, and reporting of virtual infrastructure by providing new integrity, boundary, and trust data.
For more information, please visit both Catbird and Trapezoid at the RSA Conference 2014, February 24-28 at Moscone Center in San Francisco. Catbird can be found in Booth #2505 and at the session "Shifting Roles in the SDDC"; Trapezoid can be found in Intel North Hall, Booth #3213, Kiosk #13, Demo Location #19.
"Customers have been asking for root-of-trust capabilities to meet dozens of requirements in FISMA, FedRAMP," said Edmundo Costa, CEO of Catbird. "Trapezoid integration delivers unique security and audit capabilities to monitor integrity, boundary and trust information to meet FISMA Controls such as SC-11: Trusted Path and PE-18: Location of Information Systems. Catbird and Trapezoid are the only vendors able to offer continuous monitoring and enforcement to FISMA, FedRAMP and other leading standards for virtual and private cloud infrastructure."
"I am very excited that we are bringing together Catbird's granular understanding of the virtual network environment with Trapezoid's Intelligent Views of the integrity and identity of cloud platform infrastructure," said Robert Rounsavall, President & Cofounder of Trapezoid, Inc. "Catbird's comprehensive security, compliance, and policy enforcement engine leverages Trapezoid's integrity, boundary, and trust events for real-time enforcement actions and alerts to policy violations creating mechanisms for potential remediation actions based on the integrity and trust degradation of the cloud platform that is hosting the assets Catbird is monitoring," continued Rounsavall. "Catbird users can now diagram, configure, and secure their Catbird TrustZones not only by data flow, but also by the trust, integrity, and location of all systems and networks in the cloud provided by the Trapezoid Trust Visibility Engine."
The combined solution will leverage Trapezoid Marker to meet twenty-four FISMA controls across eight control areas by enabling workloads to be securely migrated among hybrid cloud environments and distributed data centers. With it, Catbird can take into account platform attestation, safer hypervisor launch, and geo-location restrictions to enhance security and compliance enforcement. Trapezoid also enhances Catbird's visualization, automation, and reporting of virtual infrastructure by providing new integrity, boundary, and trust data.
For more information, please visit both Catbird and Trapezoid at the RSA Conference 2014, February 24-28 at Moscone Center in San Francisco. Catbird can be found in Booth #2505 and at the session "Shifting Roles in the SDDC"; Trapezoid can be found in Intel North Hall, Booth #3213, Kiosk #13, Demo Location #19.
CenturyLink Cloud delivers Hyperscale service for web-scale workloads, big data and cloud-native apps
CenturyLink, Inc. today announced the commercial availability of Hyperscale high-performance server instances offered through the CenturyLink Cloud platform. The new service is designed for web-scale workloads, big data and cloud-native applications.
CenturyLink Cloud includes dozens of self-service features that help enterprises run their mission-critical and business applications in the public cloud with ease. With Hyperscale, developers can now use the same platform to launch and manage advanced next-generation apps.
"New applications are crucial to delivering a competitive advantage for enterprises, and Hyperscale is the ideal service for these workloads," said Jared Wray, CenturyLink Cloud chief technology officer, CenturyLink Technology Solutions. "CenturyLink continues to bring developers and IT together with this new capability. Developers get self-service and lightning-fast performance for popular NoSQL platforms, and IT can easily use our cloud management platform for governance and billing."
Hyperscale combines CenturyLink Cloud's high-performance compute with 100-percent flash storage to deliver a superior experience for web-scale architectures built on Couchbase, MongoDB and other NoSQL technologies. Users of the service will consistently see performance at or above 15,000 input/output operations per second (IOPS) for a diverse range of workloads. Hyperscale also is ideal for big data scenarios and serves as a complementary offering to CenturyLink's existing Big Data Foundation Services.
Global Public Cloud Expansion Continues
CenturyLink today also announced expansion plans for the CenturyLink Cloud network of public cloud data centers, growing from nine to 13 locations in the first half of 2014. Starting in March, customers will be able to deploy virtual resources in Santa Clara, Calif., and Sterling, Va., with locations in Paris and the London metro market coming online in the second quarter of 2014. All CenturyLink Cloud services, including Hyperscale, will be available in these new locations.
This broad footprint offers more choice and flexibility for geo-targeted solutions and for boosting performance while ensuring data sovereignty. Customers can also deploy advanced IT configurations through the interoperability of public cloud, colocation, managed services and network solutions available through the CenturyLink Technology Solutions portfolio.
Toshiba Cloud Storage Array Service Offering Debuts In Japan, Based on the Zadara Storage as a Service (STaaS) Platform
Zadara Storage, Inc., provider of enterprise-class cloud storage as a service (STaaS), today announced that Toshiba is launching the Toshiba Cloud Storage Array Service offering in Japan, based on Zadara’s Virtual Private Storage Array (VPSA) platform, which provides service providers with elastic block and file storage billed on a flexible, pay-as-you-go basis. The deployment represents yet another expansion of Zadara’s award-winning offering – a solution proven over two years of worldwide deployments. The Toshiba Cloud Storage Array Service will be available March 1.
As service providers seek to attract more enterprise applications to the cloud, the Toshiba Cloud Storage Array Service offering represents a significant endorsement of Zadara’s unique approach: high Quality of Service (QoS), elastic, enterprise-grade cloud storage for live repository, archive, and disaster recovery needs, all without a hardware purchase.
“Demand remains exceptionally high in Japan for solutions to address the rapid increase in data volumes while avoiding the need for purchasing storage hardware and paying for all the attendant floor space, maintenance and ongoing costs associated with it,“ said Koichi Kagawa, General Manager, ICT Cloud Service Division at Toshiba. “As an investor in Zadara we have watched the company’s rapid growth, its large-scale deployments, and its capability managing Toshiba’s state-of-the-art HDDs and SSDs. We are so impressed with its performance that we knew it was a natural fit for our own strategic cloud initiatives.”
Zadara’s VPSA architecture allows cloud-delivered QoS to remain exceptional, even for massive data volumes. Zadara’s technology approach uniquely provides dedicated disks, virtual controllers, and separate networking to achieve single-tenant, predictable performance at a multitenant price. Cloud providers using the Toshiba Cloud Storage Array Service can offer each of their customers a private management interface from which to add or eliminate drives, switch from disk to SSDs and back, adjust controller performance, and fine tune other storage needs on the fly – allowing flexibility and control.
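As a purely hypothetical illustration of that kind of self-service control plane, the Python sketch below issues REST calls to add drives to a pool and retier a volume. The endpoint paths, resource names and credential header are our own inventions for illustration, not Zadara's documented API.

```python
import requests

# Hypothetical sketch of per-customer, on-the-fly storage control as
# described above. BASE, the paths, and the access-key header are
# placeholders invented for illustration, not Zadara's real API.
BASE = "https://vpsa.example.com/api"        # placeholder endpoint
HEADERS = {"X-Access-Key": "EXAMPLE-TOKEN"}  # placeholder credential

# Grow a storage pool by two drives...
requests.post(f"{BASE}/pools/pool-1/drives",
              json={"count": 2, "type": "SATA"},
              headers=HEADERS, timeout=30)

# ...then switch a volume from spinning disk to SSD.
requests.put(f"{BASE}/volumes/vol-7",
             json={"tier": "SSD"},
             headers=HEADERS, timeout=30)
```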
“We’re delighted that Toshiba, with its exacting standards of quality and performance, has embraced the Zadara VPSA storage-as-a-service approach as part of its commitment to innovation in enterprise data center solutions,” said Nelson Nahum, CEO of Zadara Storage. “Competitive hype obscures the fact that most cloud storage offerings today expose enterprises to performance degradation from ‘noisy neighbors’ and cannot support essential data storage features that are commonplace in on-premise deployments. Using our technology, the Toshiba Cloud Storage Array Service and its service provider customers can help more enterprises to achieve cloud economics, agility and scalability with truly private storage, highest QoS, and the flexibility of a pay as you go model.”
Tuesday, February 11, 2014
Android Open Source Project (AOSP) ROM Builder Now Available on AWS Marketplace
Code Creator, a leading Connecticut-based high-tech software and SaaS (software as a service) development company, announced that a new Android build server is now available on the AWS Marketplace, which provides a convenient, fast way to locate, purchase and immediately start using application services on the Amazon Web Services (AWS) cloud.
Android is the leading open source, Linux-based operating system that powers millions of devices including smartphones, tablets, home appliances, vehicle systems, and more. An Android "ROM" is a build of the Android operating system: the user interface (for example, Sense UI on HTC phones) plus the file system that holds data such as contacts. It is composed of a Linux kernel and various add-ons that provide specific functionality.
Compiling custom Android ROMs, modifying Android's core framework or enhancing the default user interface has never been easier and faster thanks to Code Creator's new cloud based ROM builder.
Code Creator's AOSP ROM Builder is a full-featured Android build environment, pre-configured with all the software needed to build the Android Open Source Project, create custom ROMs, or just build for one of the included devices. The server also contains the AWS Command Line Interface (CLI) tool, so a compiled AOSP image can easily be copied to Amazon S3. Harness the power of AWS by launching a high-CPU instance and build Android in record time.
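As a rough sketch of that final copy step, the snippet below invokes the bundled AWS CLI from Python. The bucket name and key are placeholders, and the image path assumes a typical AOSP build output location.

```python
import subprocess

# Sketch: use the bundled AWS CLI to copy a compiled AOSP image to S3.
# The bucket and key are placeholders; out/target/product/generic/ is
# a typical (assumed) AOSP output directory for a generic build.
subprocess.run(
    ["aws", "s3", "cp",
     "out/target/product/generic/system.img",
     "s3://my-aosp-builds/roms/system.img"],
    check=True,  # raise if the upload fails
)
```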
Fusion-io Launches ioVDI Software to Speed the Deployment of VMware Horizon View Persistent Virtual Desktops
Fusion-io today announced the general availability of Fusion ioVDI software for VMware Horizon View hosted virtual desktop environments. Fusion ioVDI software speeds the deployment of VMware Horizon View virtual desktops by intelligently combining the stateless cost economics of server flash performance with the manageability benefits of installed shared storage required for persistent desktops.
"We're committed to bringing a great user experience to our increasingly mobile workforce," said Paul Tradewell, senior systems engineer at National Marrow Donor Program. "Fusion ioVDI helped us solve this challenge with its cost-effective server-side approach that worked with our existing storage in a solution that our desktop team could manage."
Fusion ioVDI software is a virtual desktop solution that offers Write Vectoring, a patent-pending technology that monitors and directs session-based desktop writes uniquely to server-side flash. By limiting shared storage interaction to the small number of writes that persist between login sessions, Write Vectoring preserves the use of VMware value-added features such as vMotion, HA, DRS, and SRM that require shared storage while substantially reducing SAN or NAS performance dependencies.
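Conceptually, Write Vectoring amounts to classifying each desktop write by whether it must survive the login session. The following deliberately simplified Python sketch illustrates that routing idea under our own assumptions; it is not Fusion-io's implementation, and the path-prefix policy is a toy stand-in for the product's actual classification logic.

```python
# Toy sketch of the write-vectoring idea: route writes that only matter
# within a login session to fast server-side flash, and send only data
# that must persist between sessions to the shared SAN/NAS. The prefix
# list is an assumption for illustration.

SESSION_SCOPED_PREFIXES = ("/pagefile", "/tmp/", "/var/cache/")

class Tier:
    """Stand-in for a storage backend (server flash or shared storage)."""
    def __init__(self, name):
        self.name = name

    def write(self, path, data):
        print(f"{self.name} <- {path} ({len(data)} bytes)")

def vector_write(path, data, flash, shared_storage):
    """Send transient writes to server-side flash, durable ones to SAN."""
    if path.startswith(SESSION_SCOPED_PREFIXES):
        flash.write(path, data)           # discarded at session logout
    else:
        shared_storage.write(path, data)  # persists between sessions

flash, san = Tier("flash"), Tier("san")
vector_write("/tmp/browser-cache.dat", b"x" * 4096, flash, san)    # -> flash
vector_write("/users/alice/report.docx", b"y" * 2048, flash, san)  # -> san
```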
"Many customers would prefer the performance and flexibility of persistent desktops if they could solve the cost and complexity involved with shared storage," said John Webster, senior partner at Evaluator Group. "The idea of viewing a desktop operating system as a unique application opens up some intriguing possibilities for supporting shared storage without being held hostage to it."
Fusion ioVDI software also accelerates reboot times with a patent-pending technology called Transparent File Sharing, which allows many hosted virtual desktops to simultaneously share common files. Because VMware Horizon View Storage Accelerator can use at most 2GB of RAM, Transparent File Sharing complements it by rapidly sharing common files throughout the VDI infrastructure. Enterprise Strategy Group (ESG) testing of ioVDI found that this caching further offloads shared storage by up to 87 percent, so that VDI success no longer requires expensive storage or network upgrades.
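The underlying idea, serving identical files to many desktops from a single cached copy, can be sketched with a content-addressed cache. This is our own simplification for illustration, not the patented mechanism; a real implementation would work at the block or page level.

```python
import hashlib

# Toy sketch of transparent file sharing: identical files requested by
# many virtual desktops occupy one cache slot, keyed by content hash.

class SharedFileCache:
    def __init__(self):
        self._store = {}  # content hash -> single cached copy

    def add(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self._store.setdefault(digest, content)  # store once, share many times
        return digest

    def read(self, digest: str) -> bytes:
        return self._store[digest]

cache = SharedFileCache()
# 100 desktops booting from the same OS image share one cached copy.
for _ in range(100):
    cache.add(b"common-os-file-contents")
print(len(cache._store), "unique cached copies")  # -> 1
```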
"In many ways, virtual desktops are the litmus test of application acceleration expertise," said Lee Caswell, Fusion-io vice president, Virtualization Products Group. "Every desktop user is a hypercritical judge of performance, and every desktop administrator is looking to simplify the scaling of centralized desktops to minimize both capital and operating expenses."
The ESG Lab Validation testing of Fusion ioVDI also reported that the solution delivered consistent microsecond response times for workloads of one thousand data-intensive VDI desktops, as well as fast, consistent boot times averaging less than ten seconds. With its unique approach to integrating with existing shared storage, Fusion ioVDI software is complementary to any storage products and allows resellers to eliminate shared storage cost and complexity as a selling obstacle.
"Fusion ioVDI with VMware Horizon View™ can help simplify and accelerate the adoption of hosted virtual desktops," said Mason Uyeda, senior director, technical marketing, End-User Computing, VMware. "We are pleased to work with Fusion-io to help our customers speed their deployment times in a cost-effective manner."
Fusion ioVDI software is available now as stand-alone software and as an integrated solution featuring ioVDI software and Fusion ioMemory flash through Fusion-io channel partners at a manufacturer's list price of $50 USD per desktop. Fusion-io will be exhibiting in booth 409 during the VMware Partner Exchange Summit at the Moscone Center in San Francisco from February 10 – 13, 2014. Visit Fusion-io at the Partner Exchange Summit or on our website to learn more about transforming virtualization with flash-optimized performance.
"We're committed to bringing a great user experience to our increasingly mobile workforce," said Paul Tradewell, senior systems engineer at National Marrow Donor Program. "Fusion ioVDI helped us solve this challenge with its cost-effective server-side approach that worked with our existing storage in a solution that our desktop team could manage."
Fusion ioVDI software is a virtual desktop solution that offers Write Vectoring, a patent-pending technology that monitors and directs session-based desktop writes uniquely to server-side flash. By limiting shared storage interaction to the small number of writes that persist between login sessions, Write Vectoring preserves the use of VMware value-added features such as vMotion, HA, DRS, and SRM that require shared storage while substantially reducing SAN or NAS performance dependencies.
"Many customers would prefer the performance and flexibility of persistent desktops if they could solve the cost and complexity involved with shared storage," said John Webster, senior partner at Evaluator Group. "The idea of viewing a desktop operating system as a unique application opens up some intriguing possibilities for supporting shared storage without being held hostage to it."
Fusion ioVDI software also accelerates reboot times with a patent-pending technology called Transparent File Sharing, which allows many hosted virtual desktops to simultaneously share common files. As VMware Horizon View Storage Accelerator can use up to 2GB of RAM, Transparent File Sharing optimizes performance by rapidly sharing common files throughout the VDI infrastructure. Enterprise Strategy Group (ESG) testing of ioVDI found that caching further offloads shared storage by up to 87 percent so that VDI success no longer requires expensive storage or network upgrades.
"In many ways, virtual desktops are the litmus test of application acceleration expertise," said Lee Caswell, Fusion-io vice president, Virtualization Products Group. "Every desktop user is a hypercritical judge of performance, and every desktop administrator is looking to simplify the scaling of centralized desktops to minimize both capital and operating expenses."
The ESG Lab Validation testing of Fusion ioVDI also reported that the solution delivered consistent microsecond response times for workloads of one thousand data-intensive VDI desktops, as well as fast, consistent boot times averaging less than ten seconds. With its unique approach to integrating with existing shared storage, Fusion ioVDI software is complementary to any storage products and allows resellers to eliminate shared storage cost and complexity as a selling obstacle.
"Fusion ioVDI with VMware Horizon View™ can help simplify and accelerate the adoption of hosted virtual desktops," said Mason Uyeda, senior director, technical marketing, End-User Computing, VMware. "We are pleased to work with Fusion-io to help our customers speed their deployment times in a cost-effective manner."
Fusion ioVDI software is available now as stand-alone software and as an integrated solution featuring ioVDI software and Fusion ioMemory flash through Fusion-io channel partners at a manufacturer's list price of $50 USD per desktop. Fusion-io will be exhibiting in booth 409 during the VMware Partner Exchange Summit at the Moscone Center in San Francisco from February 10 – 13, 2014. Visit Fusion-io at the Partner Exchange Summit or on our website to learn more about transforming virtualization with flash-optimized performance.
Leading Healthcare Network Selects Nexenta Software Defined Storage and VMware Virtualization Solutions to Deliver Innovative Private Cloud
Nexenta (@Nexenta), the global leader in Software Defined Storage (SDS) (#softwaredefinedstorage), today announced that Yale New Haven Hospital, one of the nation's leading healthcare innovators, has selected NexentaConnect as the storage solution for its innovative private cloud deployment.
The VMware Horizon View, Cisco UCS and NexentaConnect solution provides Yale with one platform, one architecture and one solution for an innovative cloud deployment. This integrated offering has been validated in the Cisco IVT (Interoperability Verification Testing) Lab and demonstrated increased desktop density, eliminating crippling I/O bottlenecks. It also delivered a solution that met Yale's aggressive budget expectations.
Nexenta worked with ePlus, a leading provider of advanced technology solutions and a valued partner of both VMware and Cisco, to successfully deploy thousands of virtual desktops across multiple industries and organizations.
"We were faced with a significant challenge at Yale," said Dave Franco and Matt Openshaw, System Managers, Yale New Haven Hospital. "We needed a solution that was high performance, scalable and cost effective. With an Integrated Solution including NexentaConnect we were able to have a single architecture that met the needs of our users. We were very pleased with both the solution and the support we received from all the vendors in this project."
"Working with Nexenta to meet Yale's requirements has been a great experience," said Mason Uyeda, Senior Director, End-User Computing-Solution Marketing and Management, VMware. "Yale has a great reputation of being at the leading edge of technology adoption. Their implementation of Horizon View was a great endorsement of our solution and the NexentaConnect assures that storage challenges will not be an issue."
Jim Fitzgerald, Vice President of Business Development, Nexenta states, "NexentaConnect delivers on all the promise of Software Defined Storage. The NexentaConnect deployment tool allowed Yale to define their storage needs, which then enabled the software to intelligently synch the storage and desktops for deployment, and monitor the storage with real-time feedback to synch the entire environment. Software Defined Storage for desktop virtualization and all applications is the future and Yale is showing the way in the healthcare industry."
NexentaConnect provides a 38x physical SAN data retrieval reduction, a 2.5x SAN write I/O reduction and a 72 percent increase in VM density per host. It is able to do this by effectively utilizing compression, de-duplication and advanced caching methods that are core design characteristics of NexentaConnect. Additionally, NexentaConnect's application defined deployment tools are the industry's first implementation of true Software Defined Storage, allowing the IT manager to define the needs of the applications and then having NexentaConnect automatically deploy the storage for the maximum performance and benefit.
Dell Introduces Secure, Cost-Effective Thin Client and Infrastructure Support for Windows Server 2012 R2 and Dell vWorkspace
Dell today announced the expansion of its cloud client solutions to incorporate support for Windows Server 2012 R2 and Dell vWorkspace, combining existing infrastructure offerings for Windows Server 2012 R2 and for vWorkspace into a single, end-to-end Hyper-V based desktop virtualization solution. Starting at under $250 per seat including datacenter infrastructure, licensing and a three-year Dell ProSupport subscription, the new reference architecture is designed to provide organizations with integrated infrastructure options that deliver performance, agility and efficiency, while ensuring enterprise-level VDI features in one of the lowest price-per-seat solutions available today.
Dell Wyse Datacenter for Microsoft VDI and vWorkspace is a highly flexible and cost-effective solution that can scale from ten to tens of thousands of seats. It is ideal for organizations wishing to leverage their datacenter investment and existing expertise in Microsoft infrastructure to provide workers secure and flexible access to data and applications, while simplifying management and improving data security, control and compliance.
The solution also supports a greater range of audio and graphics applications, enabling IT departments to virtualize more applications, including unified communications on Microsoft Lync 2013 and virtualized shared graphics with AMD FirePro S9000, S7000 and W7000 cards. The centrally hosted virtualized environment means that users experience no difference between applications running locally or virtualized. IT managers are able to optimize IT resources while providing more effective application and system support and data security, all from a single management console.
Features include:
Dell enables a low cost per VDI seat through high-user density, optimized licensing price structure, data deduplication in Windows Server 2012 R2, reduced storage space requirements by up to 93 percent on persistent desktops and low IOPS requirements. With vWorkspace HyperCache and HyperDeploy, IT can provision 150 Windows 8.1 Virtual Machines in only 26 minutes, resulting in significant infrastructure savings while increasing speed of deployment.
Greatest flexibility and usability
The Wyse Datacenter offers support for end points running most operating systems, making it ideal for true BYOD. It provides a choice of application virtualization delivery mechanisms without the need for forklift conversion, as the solution works on Windows Servers 2003 to 2012 R2.
The Wyse Datacenter comprises Dell PowerEdge R720 servers, a choice of Dell EqualLogic 1G iSCSI PS4100E/PS6100E/PS6100XS storage arrays, Dell Networking S55 1G Top of Rack switch, Microsoft Windows Server 2008 R2 SP1/2012 R2 including Hyper-V, vWorkspace 8.0 MR1 virtualization software, Dell ProSupport Services for hardware and support for software, and highly secure Dell Wyse end points with new ThinOS 8 and qualified with RemoteFX capable models including Dell Wyse 3012-T10D, 5012-D10D (both highly secure with new ThinOS 8), D90D8, D90Q8, Z90D8, Z90Q8 and X90m7.
The optional vWorkspace component is highly recommended for increased scalability for deployments up to tens of thousands of seats. It allows customers to quickly troubleshoot VM problems with built-in diagnostics and monitoring functions (formerly Foglight for Virtual Desktops). It speeds VM deployments and reduces storage costs and comes with additional support for Windows Server 2003, Windows Server 2008, and Windows Server 2012 R2 with Hyper-V. It also adds support for extended end-point connectors and multi-tenancy with no AD Trust Requirements. Finally, vWorkspace installs seamlessly using the same hardware configuration used for Windows Server 2012 R2.
New powerful yet affordable dual-core Dell Wyse ThinOS thin client
Dell also announced the addition of the Dell Wyse thin client desktop 3012–T10D featuring a unique combination of high performance, security, and economy in a compact thin client certified for the new Windows Server and vWorkspace infrastructure. The new thin client delivers an outstanding Citrix, Microsoft, VMware and Dell vWorkspace VDI user experience and an instant-on experience from power-on to fully functioning desktop in less than 9 seconds. Featuring completely “hands-off” out-of-box automatic setup, configuration and management, the 3012-T10D runs Dell Wyse ThinOS, the premiere virus- and malware-immune thin client OS with AES disk encryption and zero attack surface.
The new 3012-T10D is the world’s first ARM SoC thin client that is Microsoft RemoteFX certified with support for RDP 8.0 features and delivers impressive performance in an affordable offering, with features including:
Dell Wyse Datacenter for Microsoft VDI and vWorkspace is a highly flexible and cost-effective solution that can scale from ten to tens of thousands of seats. It is ideal for organizations wishing to leverage their datacenter investment and existing expertise in Microsoft infrastructure to provide workers secure and flexible access to data and applications, while simplifying management and improving data security, control and compliance.
The solution also supports a greater range of audio and graphics applications enabling IT departments to virtualize more applications, including unified communications on Microsoft Lync 2013 and virtualized shared graphics with AMD FirePro S9000, S7000 and W7000 cards. The centrally hosted virtualized environment means that users experience no difference between applications running locally or virtualized. IT managers are able to optimize IT resources while providing a more effective application and system support and data security, all from a single management console.
Features include:
- Support for Microsoft Windows Server 2012 R2, Windows 8.1 and Intel Ivy Bridge processors delivers user density up to 225 users per R720 server, set up either as RDSH, pooled or persistent Windows 8.1 VMs
- Support for Microsoft RemoteFX (RDP), with support for a broad range of RDP 8.0 features
- Support for affordable personal desktops with a persistent profile, taking advantage of Windows Server 2012 R2 built-in active data deduplication feature
- Unified communications support with the Microsoft Lync 2013 VDI plug-in for Microsoft Windows 7 and Windows 8 end points increases user productivity without affecting server user density
- Virtualized shared graphics using AMD FirePro S9000, S7000 & W7000 server GPUs providing up to 85 Shared Graphics users per R720 server
- New VDI features for Windows Server 2012 R2 Remote Desktop Services including support for iOS, Mac OS and Android end points
- Enhancements through RemoteFX, DirectX, quick reconnect and visual application appearance
- Optional vWorkspace 8.0 MR1 desktop virtualization software for expanded client support, legacy applications and OSes, and performance monitoring
Dell enables a low cost per VDI seat through high user density, an optimized licensing price structure, data deduplication in Windows Server 2012 R2, storage space requirements reduced by up to 93 percent on persistent desktops, and low IOPS requirements. With vWorkspace HyperCache and HyperDeploy, IT can provision 150 Windows 8.1 virtual machines in only 26 minutes, resulting in significant infrastructure savings while increasing speed of deployment.
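To put the quoted deduplication figure in perspective, here is a rough back-of-envelope sketch in Python. The 150-desktop count matches the provisioning example above, but the per-desktop image size is an assumed figure for illustration, not one from Dell's announcement.

```python
# Rough effect of the quoted "up to 93 percent" deduplication savings on
# persistent desktops. The per-desktop image size is an assumption.
desktops = 150               # matches the HyperDeploy example above
gb_per_desktop = 40          # assumed persistent Windows 8.1 image size
dedup_savings = 0.93         # "up to 93 percent", per the reference architecture

raw_tb = desktops * gb_per_desktop / 1024
deduped_tb = raw_tb * (1 - dedup_savings)
print(f"Raw capacity needed:  {raw_tb:.2f} TB")     # ~5.86 TB
print(f"After deduplication:  {deduped_tb:.2f} TB")  # ~0.41 TB
```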
Greatest flexibility and usability
The Wyse Datacenter offers support for end points running most operating systems, making it ideal for true BYOD. It provides a choice of application virtualization delivery mechanisms without the need for forklift conversion, as the solution works on Windows Server 2003 through 2012 R2.
The Wyse Datacenter comprises Dell PowerEdge R720 servers, a choice of Dell EqualLogic 1G iSCSI PS4100E/PS6100E/PS6100XS storage arrays, a Dell Networking S55 1G Top of Rack switch, Microsoft Windows Server 2008 R2 SP1/2012 R2 including Hyper-V, vWorkspace 8.0 MR1 virtualization software, Dell ProSupport Services for hardware and software, and highly secure Dell Wyse end points qualified with RemoteFX-capable models including the Dell Wyse 3012-T10D and 5012-D10D (both running the new ThinOS 8), D90D8, D90Q8, Z90D8, Z90Q8 and X90m7.
The optional vWorkspace component is highly recommended for increased scalability in deployments up to tens of thousands of seats. It allows customers to quickly troubleshoot VM problems with built-in diagnostics and monitoring functions (formerly Foglight for Virtual Desktops). It speeds VM deployments, reduces storage costs, and adds support for Windows Server 2003, Windows Server 2008, and Windows Server 2012 R2 with Hyper-V. It also adds support for extended end-point connectors and multi-tenancy with no AD trust requirements. Finally, vWorkspace installs seamlessly using the same hardware configuration used for Windows Server 2012 R2.
New powerful yet affordable dual-core Dell Wyse ThinOS thin client
Dell also announced the addition of the Dell Wyse thin client desktop 3012-T10D, featuring a unique combination of high performance, security, and economy in a compact thin client certified for the new Windows Server and vWorkspace infrastructure. The new thin client delivers an outstanding Citrix, Microsoft, VMware and Dell vWorkspace VDI user experience and an instant-on experience from power-on to fully functioning desktop in less than 9 seconds. Featuring completely “hands-off” out-of-box automatic setup, configuration and management, the 3012-T10D runs Dell Wyse ThinOS, the premier virus- and malware-immune thin client OS with AES disk encryption and zero attack surface.
The new 3012-T10D is the world’s first ARM SoC thin client that is Microsoft RemoteFX certified with support for RDP 8.0 features and delivers impressive performance in an affordable offering, with features including:
- Marvell PXA2128 1.2 GHz Dual Core ARM SoC processor
- Certified for Microsoft RemoteFX (RDP), supporting a broad range of RDP 8.0 features
- Optimized Citrix HDX Receiver provides an excellent user experience, including for remote and work-from-home users
- 4GB Flash storage and 2GB DDR3 RAM
- Connections including dual DVI, four USB ports, Gigabit Ethernet and optional dual band Wi-Fi 802.11 A/B/G/N with up to 300 Mbps connection speeds
- Built-in media processor to deliver smooth multimedia, bi-directional audio and Flash playback.
- Multiple management options, including cloud-based management with remote imaging through the Dell Wyse Cloud Client Manager, enterprise management with the optional Dell Wyse Device Manager, and simple file-based configuration, provide robust and flexible management and configuration choices.
- Flexible mounting options enable vertical, horizontal, desk, wall or behind-display positioning, and a VESA stand is included as standard.
Microsoft: Power BI Office 365 Launch Means Analytics for All
Big data-driven power to the people? Microsoft's self-service business intelligence platform officially goes live.
Microsoft has officially released Power BI for Office 365, the company's answer to the big data skills shortage.
The product first debuted during the Microsoft Worldwide Partner Conference in July. It enables Office 365 users to explore data and derive potentially business-boosting insights in Excel. The draw, according to the company, is that everyone, not just data scientists or trained specialists, can perform big data analytics.
"Democratizing data availability to users" is Microsoft's goal, Julia White, general manager of Office product marketing, told eWEEK during an interview. Currently, few employees—an estimated 10 percent—have access to business analytics, but "a lot more can benefit from it," White said.
Microsoft is "bringing BI to a billion users," she said. The company boasts a billion-strong Office user base.
Power BI for Office 365 leverages cloud computing to help power its data analysis and visualization tools. Organizations can use its Data Management Gateway to link on-premises data sources, schedule refreshes and keep their workers current with the latest reports.
The cloud also powers BI Sites, the product's collaboration piece. The Web- and mobile-friendly dedicated workspaces and resource centers allow authorized users to save, share and search for reports, visualizations and data queries created in Excel's Power Query tool.
Capping off the user-friendly feature set is natural language support. Taking a cue from modern search engines, including the company's own Bing offering, Power BI for Office 365's Q&A features encourage users to "type questions they have of the data in natural language and the system will interpret the question and present answers in the form of interactive visualizations," said the company in a blog post.
During an online demo, Ari Schorr, Office product marketing manager, showed how a user can explore data and generate reports by simply asking Power BI. Using data from New York City's 311 non-emergency help and information service, he was able to quickly generate visualizations showing the number of calls and complaints logged by the system during the recent Super Bowl weekend.
Taking things further, Schorr overlaid the 311 data on an interactive map of the city to show how hotspots rose and waned over time—all without a lick of code or specialized software. These "geospatial insights" and other easily digestible interactive experiences are more engaging than "just looking at flat data," he added.
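As a rough illustration of the idea behind this kind of natural-language Q&A, a toy keyword-matching sketch over invented 311-style records might look like the following in Python. The records and the matching rules are made up for illustration; this is not a description of Power BI's actual Q&A engine, which is far richer.

```python
# Toy sketch of keyword-driven Q&A over tabular data: map question keywords
# onto an aggregation over the dataset. The data and rules are invented.
from collections import Counter

# Hypothetical 311-style records: (complaint_type, borough)
calls = [
    ("Noise", "Manhattan"), ("Noise", "Brooklyn"), ("Heating", "Bronx"),
    ("Noise", "Manhattan"), ("Heating", "Manhattan"), ("Parking", "Queens"),
]

def answer(question: str) -> Counter:
    """Pick a grouping column based on keywords found in the question."""
    q = question.lower()
    if "borough" in q or "where" in q:
        return Counter(borough for _, borough in calls)
    return Counter(ctype for ctype, _ in calls)  # default: by complaint type

print(answer("How many calls per borough?"))
print(answer("What were the top complaints this weekend?"))
```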
Easy-to-use self-service analytics and rich visualizations also make it more likely that employees will share their discoveries, said Schorr. Power BI for Office 365 lays the groundwork for employees "taking what they already know and extending it out in an organization," he asserted.
Practically any employee can leverage the technology to become a knowledge worker, White said. "I use Power BI and this experience on a daily basis," she said. Business intelligence and big data analytics are "not in the hands of data wonks anymore," she added.
Monday, February 3, 2014
Windows Server 2012 R2 Private Cloud Virtualization and Storage Poster and Mini-Posters
Microsoft is making available a number of posters that provide a visual reference for understanding key private cloud storage and virtualization technologies in Windows Server 2012 R2.
The posters focus on understanding storage architecture, virtual hard disks, cluster shared volumes, scale-out file servers, storage spaces, data deduplication, Hyper-V, Failover Clustering, and virtual hard disk sharing.
Note: There are multiple files available for this download. When you click Download, you will be prompted to select the files you want. The posters include:
Windows Server 2012 R2 Private Cloud Virtualization and Storage Poster (whole poster) and a set of mini-posters:
- Virtual Hard Disk and Cluster Shared Volumes Mini Poster
- Virtual Hard Disk Sharing Mini Poster
- Understanding Storage Architecture Mini Poster
- Storage Spaces and Deduplication Mini Poster
- Scale-Out and SMB Mini Poster
- Hyper-V and Failover Clustering Mini Poster
HPC Cloud Anatomy 101
High Performance Computing workloads are not web applications. This is why it's important that your cloud is designed to run them, rather than generic web services. Today we'll learn how an HPC Cloud is architected, and why.
Jobs versus Instances
It would be silly to dive into architecture without examining HPC workloads a bit more, and how they differ from other applications:
- HPC jobs process data (oftentimes, Big Data) and return results. In other types of clouds, Instances run when launched and listen for requests. Instances are later shut down after some time when they are no longer needed.
- HPC jobs "shut down" as soon as they finish. In a pay-per-use model, the end user need not worry about managing the infrastructure in order to save money, as the infrastructure charges the end user only for the processing cycles their jobs consume.
- Instances tend to be virtual machines with entire operating system stacks on them. HPC jobs run best on bare metal, where they can take full advantage of the high performance hardware underneath them without having to waste cycles dealing with an abstraction layer in a hypervisor. HPC jobs also spend far less time "starting" than instances do, again, due to their non-virtualized nature. In a pay-per-use model, this means less money spent on non-productive computing overhead.
HPC Cloud Job Scheduling
All clouds have "Cloud Controllers", no matter what type of work they do. At a high level, Cloud Controllers put resources to work, and often feature load balancing and metering capabilities. An HPC Cloud uses a Job Scheduler to assign work when requested. Basically, this puts work in queues for future execution on appropriate resources. If resources are available right away, jobs run right away.Queuing versus Oversubscription
When resources are not available, the cloud is busy, and the Cloud Controller has a decision to make. An HPC Cloud Controller will queue the work for later execution. This is also known as "batch queuing". Since the job has all the parameters and data it needs, there is no need for the user to "watch" it run. In fact some jobs take hours or days to run, even if resources are immediately available. The end user submits the request, and later gets notified with the results. The HPC Cloud Controller runs the job as soon as resources become available for it, without the user having to "resubmit" or even care.
In other clouds, Instances suffer a much less desirable fate when resources are not available: oversubscription. This is in fact one way "web services" clouds make money - by putting more jobs to work than their hardware can handle. When instances are virtualized, the end user has no visibility into how busy their resources actually are, other than drastically reduced performance. This is because overloaded hypervisors have to "time slice" between instances, since there is not enough hardware to run in real time. Depending on SLA, the cloud may even reject the instance altogether, asking the user to try again later!
An HPC Cloud, on the other hand, ensures deterministic, real-time performance for all work submitted, even if some of the jobs may queue until resources are available. A well designed HPC Cloud will alert operators of resource shortfalls ahead of time so they can anticipate and expand accordingly.
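To make the contrast concrete, here is a minimal sketch of the batch-queuing behavior described above, with invented class and method names. A job starts immediately when enough nodes are free; otherwise it waits in a FIFO queue and is started automatically once capacity is reclaimed, with no resubmission by the user.

```python
# Minimal sketch of HPC Cloud batch queuing: run now if nodes are free,
# otherwise queue and start automatically when capacity frees up.
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes_needed: int

class HpcCloudController:
    def __init__(self, total_nodes: int):
        self.free_nodes = total_nodes
        self.queue: deque[Job] = deque()

    def submit(self, job: Job) -> None:
        if job.nodes_needed <= self.free_nodes:
            self._start(job)            # resources available: run right away
        else:
            self.queue.append(job)      # busy: batch-queue, never reject
            print(f"{job.name}: queued")

    def _start(self, job: Job) -> None:
        self.free_nodes -= job.nodes_needed
        print(f"{job.name}: running on {job.nodes_needed} nodes")

    def job_finished(self, job: Job) -> None:
        self.free_nodes += job.nodes_needed   # reclaim, then drain the queue
        while self.queue and self.queue[0].nodes_needed <= self.free_nodes:
            self._start(self.queue.popleft())

ctl = HpcCloudController(total_nodes=8)
a, b = Job("render", 6), Job("simulate", 4)
ctl.submit(a)        # runs immediately
ctl.submit(b)        # queues: only 2 nodes free
ctl.job_finished(a)  # frees 6 nodes; "simulate" starts without resubmission
```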
Scalability and Elasticity
Elasticity is a key element of Cloud Computing as a way to scale applications for large scale processing. An HPC Cloud supports jobs that span across many physical nodes, without requiring that the job itself configure the infrastructure underneath. The Nimbix Cloud, for example, supports both distributed and parallel HPC application models, leveraging 56Gbps FDR InfiniBand technology. At this speed, applications can pass up to 137 million messages per second between parallel runs! Compare that to up to around 1 million messages per second on commodity web services clouds. Since parallel HPC applications may run millions of data processing iterations during a job, they must be able to communicate quickly to finish faster. Since you are paying for compute cycles, this makes a big difference to your bottom line. The faster the job runs, the less it costs, all other things being equal.
Commodity web service clouds also require end users to configure parallel or distributed communication themselves, since they don't typically offer a workload manager that orchestrates this automatically. That means more time spent configuring, less time spent doing productive work - and the end user pays the cloud provider regardless.
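A quick back-of-envelope calculation shows why that message rate matters in a pay-per-cycle model. The two rates below come from the paragraph above; the total message count per job is an assumed figure, chosen purely for illustration.

```python
# Communication time for the same message-bound job at the two rates quoted
# above. The total number of messages per job is an assumption.
messages_per_job = 50_000_000_000                 # assumed: 50 billion messages
rates = {
    "FDR InfiniBand (HPC cloud)": 137_000_000,    # ~137M msgs/sec, as quoted
    "Commodity web services cloud": 1_000_000,    # ~1M msgs/sec, as quoted
}
for label, rate in rates.items():
    hours = messages_per_job / rate / 3600
    print(f"{label}: ~{hours:,.1f} hours spent on messaging alone")
# Prints roughly 0.1 hours versus 13.9 hours: a ~137x difference in the
# communication-bound portion of the job, and in what you pay for it.
```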
API and Portal
Most clouds offer end users self-service through both a web portal and an API. An HPC Cloud offers a "processing API", where other clouds offer a "machine API". A processing API allows end users to submit jobs, parameters, and data. A machine API requires end users to start and stop instances, so they can later install applications inside of them to do work. Obviously a processing API is key for an HPC Cloud since users shouldn't be expected to configure their own infrastructure before they can even do work.
While the API allows programmatic orchestration of cloud resources for automation, the web portal gives end users a convenient way to submit work for processing. In HPC Cloud terms, this means kicking off complex jobs with just a few touches on your tablet or smartphone, as opposed to "spinning up" virtual servers, logging into them, and typing Linux commands.
Traditional HPC clusters offer neither APIs nor portals, instead requiring end users to write and submit "batch scripts". This is by no means "self-service" in the spirit of the NIST Cloud Computing Definition. Imagine writing a batch script on your smartphone, or having to learn batch scripting just to get work done. A real HPC Cloud requires both an API and a portal!
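The difference between the two API styles can be sketched in a few lines. Everything below is hypothetical: the endpoint paths, JSON fields and application name are invented to illustrate what a processing API looks like, and do not describe any particular provider's interface.

```python
# Sketch of a "processing API": submit a job with its application, parameters
# and data location, then poll for results. No VMs to start, configure or stop.
import json
import time
import urllib.request

BASE = "https://hpc-cloud.example.com/api"   # hypothetical endpoint

def submit_job(app: str, params: dict, data_url: str) -> str:
    body = json.dumps({"app": app, "params": params, "data": data_url})
    req = urllib.request.Request(f"{BASE}/jobs", data=body.encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]     # job queued or started immediately

def wait_for_results(job_id: str, poll_secs: int = 30) -> dict:
    while True:
        with urllib.request.urlopen(f"{BASE}/jobs/{job_id}") as resp:
            status = json.load(resp)
        if status["state"] in ("finished", "failed"):
            return status                    # billed only for cycles consumed
        time.sleep(poll_secs)

# With a "machine API", the user would instead start an instance, log in,
# install the application, run it, and remember to shut the instance down.
job = submit_job("cfd-solver", {"iterations": 10000}, "s3://bucket/mesh.dat")
print(wait_for_results(job)["state"])
```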
Summary
Not all clouds are created equal. Sure, they all have Cloud Controllers, which assign work to various types of resources. They all bill users for cycles consumed, and allow self-service. They all offer elastic scalability, through API and/or web portal. An HPC Cloud optimizes all this for data processing jobs, not "web services". As not all workloads are created equal either, why would you try to run your HPC applications on a "web services" cloud?