Sunday, April 11, 2010
Share your Citrix Education story and win a trip to Citrix Synergy 2010!
Share your story and enter to win an all expenses paid trip to either Synergy San Francisco or Berlin, depending on your geographic location. The grand prize includes a Synergy full conference pass, airfare, hotel, cash stipend and more! Plus, five runners-up will receive a self-paced online training course and a Citrix Education polo shirt.
Citrix will select the winning video based on authenticity, clarity of message, passion for Education and the degree to which your story will inspire others. Creativity encouraged! To enter, simply click on the ‘Register Now’ link to create an account and upload your video submission before April 16.
What’s your story?
Enter your video submission by clicking here.
Contest Requirements:
Contest Period: 12:00 AM PST, February 16, 2010 – 11:59 AM PDT, April 16, 2010
How will the winner be chosen: Internally judged
Number of entries per person: Unlimited unique entries. Please do not enter the same video twice.
Prize Draw Date: 11:59 AM PST – April 19, 2010
Prizes (Approximate retail value in parentheses)
Grand prize: Synergy Full Conference pass; round-trip airfare to Synergy 2010 (San Francisco or Berlin dependent upon winner’s geographic location); 3 night hotel stay; $500 stipend; recognition on Citrix Education website and other online marketing outlets; Citrix Education polo shirt ($3,700)
5 Runners-up: Citrix Education self-paced online training & Citrix Education polo shirt. ($2,000)
Eligibility: Citrix employees are not eligible to participate
AppZero Leads in Virtualizing and Deploying Server Applications in Datacenter and to Cloud
AppZero was named this month to the AlwaysOn list of top 100 private companies likely to disrupt existing markets with game-changing approaches and technologies.
Previously, AppZero was dubbed "Best Business Application" by Computerworld, a Demo show favorite by PCWorld and eWeek as well as "Best of Demo" by Next Big Thing blogger Don Dodge. AppZero has recently been named one of the "20 Coolest Cloud Computing Infrastructure Vendors" by CRN, as well as a "Top 25 Cloud Services Provider" by Virtualization Journal, and a "Top 7 Cloud Computing Acquisition Target" by Datamation.
AppZero shipped its initial product in March 2009 and is poised to ship a new version this month featuring support for new operating platforms and application services, plus new tools for even easier application mobility. The initial release gave enterprise applications, middleware and database systems, such as those from IBM, SAP, Microsoft and Oracle, their first opportunity to benefit from virtualization and virtual appliances as a way to reduce software deployment complexity and cost. With AppZero, these server-based applications could, for the first time, be easily turned into virtual application appliances (VAAs) and run wherever practical and cost-effective for the enterprise. By decoupling applications from the operating system, customers have achieved significant savings across the application lifecycle and in operating system licenses, along with large reductions in supporting infrastructure such as management and anti-virus tools.
As other vendors join the market for server application automation and deployment, AppZero is accelerating innovation and product expansion. In addition to AppZero 4.5, due this month, AppZero recently launched AppZero for SAP, designed to reduce SAP application and suite installation time from days to less than 15 minutes for instantaneous deployment across physical and virtual servers and to cloud computing environments.
AppZero is also growing its partner base. Among AppZero's technology partners are leading cloud computing infrastructure providers including GoGrid, where AppZero teams to make it easy for large enterprises to provision applications between internal and public clouds - a "hybrid" model popular among organizations getting started in the cloud. AppZero is a charter partner in Amazon's (AMZN) AWS Solution Provider Program, alongside industry leaders such as CA, Citrix, IBM and Oracle.
AppZero channel partners include VARs and solution providers that use AppZero to install, upgrade, patch and migrate enterprise applications, as well as to move them to the cloud.
According to AppZero CEO Greg O'Connor, "In just a year, we've seen a new industry focus on providing an increasingly comprehensive set of tools for delivering, managing and optimizing cloud computing as part of the overall enterprise IT environment. In 2010, we see our job as making server application mobility a core component of these services, a critical part of the overall mission of attacking IT complexity and costs."
Citrix Systems to Announce First Quarter 2010 Financial Results on Wednesday, April 21
The conference call may also be accessed by dialing:
(888) 799-0519 or (706) 634-0155
Using passcode: CITRIX
A replay of the webcast can be viewed for approximately 30 days by visiting the Investor Relations section of the Citrix corporate website at http://www.citrix.com/investors. In addition, an audio replay of the conference call will be available for approximately 30 days by dialing (800) 642-1687 or (706) 645-9291 (passcode required: 66478600).
Intel Xeon 7500 Takes Aim at IBM Power, Oracle SPARC
NEW YORK—The way Shannon Poulin sees it, about 95 percent of the servers shipped every year are x86 systems powered by chips from Intel or Advanced Micro Devices.
The other 5 percent, however, make up about 40 percent of worldwide server revenue.
“That’s a small number of units for a large part of the revenue,” Poulin, Xeon platform director at Intel, said during a meeting with reporters here April 6. “We’re trying to bring volume economics to that space.”
Intel is doing so with its newly released Xeon 7500 “Nehalem EX” processors, four- to eight-core chips that Intel officials say give enterprises a legitimate x86 alternative for workloads that traditionally have run on RISC (IBM’s Power and Sun Microsystems’—now Oracle’s—SPARC) platforms and mainframes.
Intel officials began the drumbeat last year when discussing Nehalem EX, and have continued it since the platform launched March 30.
During the meeting with reporters, Poulin touted the numbers: a threefold performance improvement over the previous generation, a 20-to-1 consolidation ratio, scalability from two- to 256-socket systems and four times the memory capacity. He also stressed the wide market potential for the Xeon 7500.
The chip platform is primarily aimed at servers with four or more sockets, and Poulin listed almost a dozen OEMs—including Hewlett-Packard, Oracle, Cray, IBM, NEC, SGI and Bull—that are looking to release eight-socket systems. At the same time, there are workloads that run on two-socket systems that need a lot of memory capacity that can run the Nehalem EX chips, he said.
Poulin also pointed to more than 20 new RAS (reliability, availability and serviceability) features that previously had been found only in high-end RISC and mainframe platforms. These aim to reduce downtime, protect data and increase system availability.
“A lot of these are things that have been in Itanium [Intel’s high-end non-x86 processor], things that have been in RISC-based products, or are things we didn’t have in Xeon before,” he said.
Throughout the meeting, Poulin was asked about the Xeon 7500’s impact on Itanium, the high-end chip that runs primarily in HP’s largest systems, including its Integrity, OpenVMS and NonStop lines. Those systems run workloads similar to those on IBM’s Power and Oracle’s SPARC platforms.
Like other Intel officials, Poulin said there is room at the top for both architectures, while acknowledging that even if Xeon were to steal business from Itanium, the revenue would still go to Intel.
“We’re not going to hold back Xeon in any way,” Poulin said. “We’re going to put as much as we can into both of those products.”
He also pointed out that Intel already has plans for the next two generations of Itanium, with platforms code-named “Poulson” and “Kittson.”
Itanium’s future has continued to be questioned as the performance and capabilities of x86 chips from both Intel and AMD have advanced. Those questions were raised again earlier this month, when Microsoft officials said they were going to end Itanium support in future versions of their server software.
The key reason cited was the increased capabilities of chips from Intel and AMD; the day before the Xeon 7500 launch, AMD released its eight- to 12-core Opteron 6000 “Magny-Cours” processors.
Poulin downplayed Microsoft’s move, pointing out that only about 5 percent of Itanium’s business came from Windows servers. The bulk of it comes from HP-UX systems, he said.
At least one analyst agreed. In a report issued April 7, Charles King, an analyst with Pund-IT Research, noted the impressive performance of the new Intel and AMD chips. However, King also pointed out that the market for RISC and Itanium systems was about $12 billion in 2009, and that didn’t include the market for IBM’s System z mainframes.
“No matter how dramatic Microsoft’s move might have seemed, its exit is not likely to significantly impact the company’s revenues or overall Itanium platform sales,” King wrote, noting that HP’s investment in Itanium makes it unlikely that the OEM will alter its plans.
Not everyone sees a long future for Itanium. In a report in February, Joe Clabby, president of Clabby Associates, said he sees the server market consolidating around three platforms—Xeon, Power and IBM’s mainframes.
"Intel's enhancements to Xeon processors now put that chip in competition with its own Itanium technologies," Clabby said. "We believe this is marginalizing Itanium, leading buyers to see the light and start moving off of the platform."
Juniper Buys Ankeena to Help Compete with Cisco
Juniper Networks is adding to its ability to handle the rapidly increasing amount of video running over networks with the acquisition of Ankeena Networks.
Juniper officials announced the deal April 8 and said Ankeena would form the foundation of Juniper’s new Content and Media Business Unit, which will be part of the Junos Ready Software group. Juniper said the deal cost less than $100 million.
The deal comes at a time when the amount of video running over networks is ramping up considerably, putting pressure on service provider infrastructures. The Ankeena acquisition not only gives Juniper more capabilities in this area, but also will help it better compete with rival Cisco Systems, which is aggressively addressing the rise of online video traffic.
“The time is now for networking companies to offer solutions that help service providers prioritize and deliver media solutions,” Manoj Leelanivas, executive vice president and general manager at Juniper, said in a blog post.
Leelanivas pointed out the research company Nielsen found that online video viewership in the United States grew 16 percent in 2009. In addition, Coda Research Consultancy is predicting that mobile handset data traffic in the United States will reach 327 petabytes a month this year, with the bulk of that traffic being video, he said.
Cisco officials, for their part, have predicted that video will account for 91 percent of all Internet traffic by 2013, up from about 30 percent today.
“Juniper’s acquisition of Ankeena reflects our commitment to transforming the experience and economics of networking—in this case by delivering an enhanced TV-like user experience of both fixed and mobile video traffic, while enabling crucial TCO reductions for operators,” Leelanivas said in a statement.
Through the deal, Juniper will get Ankeena’s Media Flow Director, which is designed to improve the user’s viewing experience through support of various streaming technologies. The technology gets rid of buffering and stuttering by detecting the available bandwidth and adapting the delivery bit-rate accordingly, according to Ankeena.
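Detecting available bandwidth and adapting the delivery bit-rate is, in essence, adaptive bitrate selection. A minimal sketch of that idea follows; the rate ladder, safety factor and function names are illustrative, not Ankeena's actual implementation.

```python
# Illustrative sketch of bandwidth-adaptive bitrate selection.
# The rate ladder and the 0.8 safety factor are hypothetical,
# not drawn from Ankeena's Media Flow Director internals.

BITRATE_LADDER_KBPS = [400, 800, 1500, 3000]  # available encodings

def select_bitrate(measured_throughput_kbps, safety_factor=0.8):
    """Pick the highest encoding that fits within a safety margin
    of the measured client throughput, to avoid buffer stalls."""
    budget = measured_throughput_kbps * safety_factor
    candidates = [r for r in BITRATE_LADDER_KBPS if r <= budget]
    # Fall back to the lowest rung if nothing fits the budget.
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(select_bitrate(2500))  # 1500: the 3000 rung exceeds 2500 * 0.8
print(select_bitrate(300))   # 400: lowest rung acts as a floor
```

Re-running the selection each time the throughput estimate changes is what eliminates the buffering and stuttering the article describes.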
The two-year-old company’s Media Optimized caching offering reduces the number of servers needed to deliver the same amount of media by a ratio of 10 to 1.
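A 10-to-1 server reduction corresponds to the cache absorbing roughly 90 percent of requests. The back-of-the-envelope arithmetic can be sketched as follows; the hit rates are illustrative, not Ankeena's published figures.

```python
# If a cache serves fraction h of requests, the origin servers only
# see (1 - h) of the load, so the origin fleet can shrink by a
# factor of 1 / (1 - h). Hit rates here are purely illustrative.

def origin_reduction(hit_rate):
    """Server-reduction ratio implied by a given cache hit rate."""
    return 1.0 / (1.0 - hit_rate)

print(origin_reduction(0.9))   # ~10: a 10-to-1 reduction
print(origin_reduction(0.5))   # 2: halving the hit rate is far weaker
```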
Juniper’s acquisition of Ankeena is the latest step in the relationship between the two companies. Juniper already sold Ankeena’s Media Flow Director (in February at Mobile World Congress, Juniper announced its Juniper Media Flow offering), and Ankeena was part of Juniper’s booth during the CTIA show in March.
“The rise of video traffic on the network (both fixed and mobile) is creating challenges for operators and Juniper believes it can better capitalize on this opportunity by acquiring Ankeena,” Brian White, an analyst with Ticonderoga Securities, said in an April 9 report. “Although this is not a large acquisition, we believe it adds more tools to Juniper's toolbox as it relates to the growing trend toward video over the network.”
Unified Threat Management Market Poised for Growth, Report Finds
During 2008, the market witnessed a growth rate of 32.2 percent; however, the growth rate dropped to 20.1 percent in 2009 due to the recession. As the market rebounds, Frost & Sullivan predicts growth rates will increase in 2010 and 2011 before leveling off. Vendors in this space are seeking to reinvigorate the market by expanding both regionally and into new target markets.
Analysis from Frost & Sullivan’s World UTM Products Market report found that the market earned revenues of over $1.97 billion in 2009 and estimated that this figure will reach approximately $7 billion in 2016. Company analysts said UTM's strongest value proposition lies in its ability to consolidate multiple security products. UTM seeks to combine essential and common security functions such as firewall/virtual private network (VPN), intrusion prevention system (IPS), anti-spam, antivirus, content filtering, and even data leakage protection (DLP) and vulnerability management capabilities.
The firm said despite these advantages, the best-of-breed functionality offered by competing point products continues to deter customers. Additionally, it is a challenge to enable multiple security functions on a single appliance without impeding network performance. These issues cause UTM vendors to experience difficulty in penetrating the enterprise market. Furthermore, customers are exercising caution while considering investment and are waiting for a more stable economic climate.
"This aligns well with the larger trend of convergence in the network security industry, as customer demand for a one-stop shop continues to increase," says Frost & Sullivan research analyst Chris Rodriguez. "Investing in several point products, including installation and support, is an expensive proposition. Consolidating these products reduces costs and accelerates green initiatives by reducing power consumption."
Rodriguez said further acceptance into the enterprise segment is crucial for sustained long-term growth. To do this, vendors looking to penetrate this market must strive to offer best-of-breed functionality. While many UTM solutions are based on open-source software security solutions, and although this helps to keep overhead costs down, uptake has been slow in the enterprise arena, he noted.
"UTM vendors have been working toward expediting functionality and performance through in-house research and development, strategic partnerships, and acquisitions. While UTM meets the needs of smaller organizations nicely, these solutions have previously been lacking in enterprise-level performance and features," said Rodriguez. "Vendors that can continue to innovate and successfully convey these improvements to customers will succeed in rapidly growing market share."
Frost & Sullivan’s report also predicted that small to medium-size business (SMB) and remote/branch office and small office/home office (ROBO/SOHO) products will remain a key segment of the UTM market, as these customers will continue to value broad product coverage and cost-effective solutions. Rodriguez said the market is largely vertical-independent, since UTM integrates security technologies that are in demand across all industries; even so, vendors must roll out solutions that address pertinent issues in different verticals, such as regulatory compliance, networking challenges or security threats.
Windows Service Packs, Project Pink Rumors Dominated Microsoft Week
News from Microsoft and other tech companies this week was largely eclipsed by Apple, whose iPad tablet dominated headlines during its first days in general release. Nonetheless, a number of rumors surfaced related to several large Microsoft projects, including a Windows 7 Service Pack 1 and “Project Pink,” the company’s long-speculated branded smartphone initiative.
After Microsoft sent out invitations to a San Francisco event April 12, with the title “It’s Time to Share,” various online pundits speculated that the company was on the verge of finally revealing Project Pink; the general expectation is that Microsoft will debut two smartphones, dubbed “Turtle” and “Pure” and supposedly aimed at a younger, social-networking-happy demographic. According to the Wall Street Journal, quoting “people familiar with the matter,” the phones’ hardware has been designed by Sharp.
If the rumors prove true, the debut of Pink would be a logical progression on months of rumors about an imminent unveiling, highlighted by a March report from Reuters that Verizon and Microsoft planned on teaming up to release social-networking-centric devices in either late spring or early summer. Also in March, Gizmodo posted spy images of what was purported to be the “Pure” phone.
While Project Pink is decidedly consumer-oriented, Microsoft made some key enterprise announcements this week; on April 7, Microsoft announced that it would offer a beta of its Exchange Server 2010 Service Pack 1 (SP1) for North American download in June, incorporating a number of changes to the user interface, integrated archiving and other areas.
“SP1 will include fixes and tweaks in areas you’ve helped us identify, including a roll-up of the roll-ups we’ve released to date,” team member Michael Atalla wrote in an April 7 posting on the Microsoft Exchange Team Blog. “I also wanted to flag some of the feature enhancements we’re excited to bring you with Sp1, including archiving and discovery enhancements, [OWA] Outlook Web App … improvements, mobile user and management improvements, and some highly sought-after additional UI for management tasks.”
The SP1 supposedly enhances Exchange Server’s archiving functionality, allowing administrators to “provision a user’s Personal Archive to a different mailbox database from their primary mailbox,” according to Atalla. In effect, this allows an IT administrator to implement tiered storage for certain types of e-mail, while importing historical e-mail data from .pst files directly into Exchange.
A new feature in the SP1 will also create Retention Policy Tags via the Exchange Management Console, automating e-mail archiving and deletion. New UI enhancements to the Exchange Management Console and Exchange Control Panel include the ability to configure Transport Rules and Journal Rules in ECP, in addition to provisioning and configuring the Personal Archive.
Other SP1 changes include tweaks to OWA. “With new work to prefetch message content, the OWA reading experience becomes faster,” Atalla wrote. “With delete, mark as read and categorize operations running asynchronously, these actions feel instantaneous to the user.” He added: “We’ve also made sure that certain long-running operations, such as attaching a very large file, will not block the rest of the OWA experience, protecting the user from irritating Web UI hang-ups. You’ll see a number of other UI improvements as well to declutter a bit.”
In the “Service Pack” category, rumors circulated this week of a Windows 7 Service Pack 1 in the works, with a purported build leaking onto a variety of Torrent Websites. That build had a compile date of March 27, along with the string “build 6.1.7601.16537.amd64fre.win7.100327-0053.” Screenshots quickly leaked onto sites such as GeekSmack, which described the download and installation process as “faster than the install process for service packs on Vista.”
Microsoft’s adjustments to its existing platforms extended to a newly released version of Microsoft Dynamics CRM customized for nonprofits and non-governmental organizations (NGOs), with additional tools such as donation and pledge management, basic membership management, basic volunteer tracking, support for online payment solutions and campaign management.
“Nonprofits and NGOs are always challenged with doing more with less,” Sarah Barnhart, senior program manager for community affairs at Microsoft, wrote in an April 7 statement. “We see technology as being a key enabler of helping nonprofits to reduce administration and focus their resources on where they can have the biggest impact. Microsoft Dynamics CRM for nonprofits and NGOs includes customized features that simplify administration and management for organizations of every size.” This customized version of Microsoft Dynamics CRM is apparently available for $9.99 per seat per month.
Expect Microsoft’s news next week to be dominated by Project Pink, if this week’s rumors pan out April 12.
HP Labs Outlines Breakthroughs in Memristor Chip Research
At the Flash Memory Summit in August 2009, updates on several new technologies involving NAND flash were presented to conference attendees. One of them was given by Stan Williams, Hewlett-Packard senior fellow and director of Quantum Science Research, and it involved something called the "memristor," a term condensed from "memory resistor."
On that day, Williams described the memristor this way: "This is sort of the missing element of the processor puzzle. It takes its place alongside the resistor, capacitor and inductor [as the fourth basic circuit element in chip engineering]. And it could change the way we do IT."
In summary, let's just say adding a memristor to a solid-state NAND flash drive can be like putting it on steroids.
Since flash media already owns the fastest I/O speeds known to IT science, increasing that speed tenfold or by a higher magnitude—HP's conservative estimate at this time—is certainly an intriguing proposition for processor engineers and IT systems makers.
On April 8, HP Labs published an update on advances in memristor research. These findings are also detailed in a paper published the same week in the journal "Nature" and written by Williams and five other researchers who work at HP's Information and Quantum Systems Laboratory, headquartered in Palo Alto, Calif.
HP Labs has six other locations around the world, in Bangalore, India; Beijing; Haifa, Israel; Bristol, England; St. Petersburg, Russia; and Fusionopolis, Singapore.
Following two years of research, Williams and his team discovered that the memristor has more capabilities than previously thought. The team said in its report that "in addition to being useful in storage devices, the memristor can perform logic, enabling computation to one day be performed in chips where data is stored, rather than on a specialized central processing unit."
Bringing the logic closer to the data is key
The idea of distributing logic directly into the chips where data is stored, instead of keeping it exclusively in a CPU located away from the data, is revolutionary. The best performance is always found where data and processors are physically close, as Google's home-grown systems have demonstrated for more than a decade.
"Memristive devices could change the standard paradigm of computing by enabling calculations to be performed in the chips where data is stored," Williams said. "Thus, we anticipate the ability to make more compact and power-efficient computing systems well into the future, even after it is no longer possible to make transistors smaller via the traditional Moore's Law approach."
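The behavior Williams describes builds on HP's published linear ion-drift model, in which the device's resistance blends an "on" and an "off" resistance according to a state variable that drifts in proportion to the current flowing through it. The sketch below simulates that model under a sinusoidal drive; all parameter values are illustrative, not those of HP's fabricated device.

```python
import math

# Linear ion-drift memristor model (after HP's published description):
# memristance is a mix of R_ON and R_OFF weighted by the normalized
# state variable w, and w moves in proportion to the current.
# Every parameter value below is illustrative.

R_ON, R_OFF = 100.0, 16000.0    # ohms
MU_D2 = 1e4                     # mobility/thickness term (arbitrary units)
DT = 1e-4                       # integration step (s)

def simulate(v_amp=1.0, freq=1.0, steps=20000, w=0.1):
    """Drive the device with a sine voltage and record (v, i) pairs."""
    trace = []
    for k in range(steps):
        v = v_amp * math.sin(2 * math.pi * freq * k * DT)
        m = R_ON * w + R_OFF * (1.0 - w)   # instantaneous memristance
        i = v / m
        # State drifts with current, clamped to the physical range [0, 1].
        w = min(1.0, max(0.0, w + MU_D2 * R_ON * i * DT))
        trace.append((v, i))
    return trace

trace = simulate()
# Plotting trace would show the pinched hysteresis loop (zero current
# at zero voltage, different branches elsewhere) that is the
# characteristic signature of a memristor.
```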
AMD Upcoming Phenom II Chip to Get Turbo Core Feature
Intel earlier this year brought its Turbo Boost technology to its Core i7 desktop processors.
The technology powers down idle cores in a processor and moves some of that power over to active cores, essentially overclocking them and improving the chip’s performance.
Now rival Advanced Micro Devices is reportedly ready to add a similar power-optimizing feature to its upcoming six-core Phenom II X6 “Thuban” line of desktop chips.
According to information released this week by AMD, the Turbo Core feature will increase speeds on active cores by up to 500MHz when three or more of the cores aren’t being used by the application. At that point, the chip will be in what AMD calls a “boost-eligible” state.
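AMD's description amounts to a simple rule that can be sketched as a toy model. The threshold and boost figures below come from this article, and the 2.8GHz base clock is used purely for illustration; the logic is a sketch, not AMD's actual power-management algorithm.

```python
# Toy model of the Turbo Core rule described above: on a six-core
# part, if three or more cores are idle, the chip is "boost-eligible"
# and active cores may gain up to 500 MHz. The threshold and boost
# figures come from the article; the logic is an illustration only.

TOTAL_CORES = 6
BOOST_MHZ = 500

def core_frequency_mhz(base_mhz, idle_cores):
    """Frequency of the active cores given how many cores are idle."""
    boost_eligible = idle_cores >= 3
    return base_mhz + BOOST_MHZ if boost_eligible else base_mhz

# With a hypothetical 2.8 GHz base clock:
print(core_frequency_mhz(2800, idle_cores=4))  # 3300: boost-eligible
print(core_frequency_mhz(2800, idle_cores=2))  # 2800: too many cores busy
```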
Turbo Core will be on the new Phenom chips and upcoming quad-core processors based on the Thuban chip design, according to reports.
AMD officials have yet to say when the Phenom II X6 chips will launch, only that it will be later this year. Few other details have been released.
AnandTech claims to have some more details on the upcoming Thuban line, though the writers there say the information was not given to them by AMD.
The chip maker has said that the new chips will fit in the current Socket AM3 and AM2+ boards, and that IT administrators will only need to update the BIOS to support the new chips.
Intel released its first six-core desktop chip, the Core i7 980X Extreme Edition, in March. The chip, codenamed “Gulftown,” is aimed at gamers and high-end multimedia professionals, and offers such features as Hyper Threading and Turbo Boost.
Apple iPad, Google Android to Grab 75% of Tablet Market in 2010
Apple and Google Android will grab 75 percent of the tablet market through 2010, as the iPad's immediate success has proven the consumer market is hungry for tablet computers, according to IMS Research.
IMS projects Apple will capture 51 percent of a tablet market that could be worth $3.6 billion through 2010, which leaves plenty of opportunity for others.
IMS analyst Anna Hunt said she expects Android to follow Apple with 24 percent of the market, with 10 percent going to Windows 7 and the remaining 15 percent going to some proprietary and Linux-based platforms for the home.
HP's Slate, for example, will run Windows 7 Home Premium. Nokia is reportedly building a Windows 7-based device.
For developers who want more choice and flexibility, Android stands to be the perfect alternative to iPad and Windows 7, with a handful of Android machines coming to the fore, some of which are already out and well known. The Android-based Archos 5 Internet Tablet is a strong seller in Europe.
Dell's Android 2.0-based Mini 5 tablet is expected in the U.S. this year, while several Android models will feature Nvidia's delayed Tegra chipset, including the Adam from Notion Ink. ViewSonic offers the VTablet 101.
Currently, iPad is king. The iPad went on sale April 3 and by April 8 Apple CEO Steve Jobs said the company had sold more than 450,000 units.
While Android is expected to command only half the tablet market share that the iPad will in 2010, Android tablet sales will be buoyed by a strong developer ecosystem and competitively priced content and services.
Hunt noted that many of the applications designed for the iPad platform are actually more expensive than apps for Apple's iPhone OS platform. This presents an opportunity for suppliers that can offer a tablet that is more price competitive for the hardware and the content.
Thursday, April 1, 2010
iPhone OS 4.0 Said to Support Expose-Like Multitasking
According to "people familiar with Apple's plans for the new firmware," a keystroke combination -- hitting the Home button twice -- will bring up the icons of currently running apps, allowing users to quickly choose the one they want to switch to. AppleInsider notes that this sounds more like the basic "Command + Tab" app switcher (similar to Microsoft Windows' "Alt + Tab" option) than Expose, which scales all open windows down to tiny versions of themselves, but its sources insist that the new iPhone multitasking will exhibit "several characteristics of the Expose brand."
Multitasking, or its absence, has long been an issue for iPhone -- and now, iPad -- users. AppleInsider points out that Apple's iPhone OS 3.X actually has no issues with multitasking; a number of bundled iPhone apps, including the phone and the iPod functions, are perfectly capable of running in the background while the user performs other tasks. It's third-party app multitasking that isn't supported. Currently, users must quit any third-party app they are using in order to run another third-party app.
Apple addressed the multitasking issue (sort of) in its iPhone OS 3.0 update, when it introduced push notifications. Push notifications notify users of changes in third-party apps (e-mail, instant messages, and so on), so that users can switch over to that app if necessary. Push notifications are implemented in two ways -- pop-up windows that appear on screen and give you the option of immediately switching over to that app, and as little number badges on the corners of third-party app icons (on the Home screen), so you know how many missed messages/e-mails/Scramble challenges you have waiting for you.
Currently, multiple smartphone operating systems support multitasking -- including Palm's WebOS, Google's Android OS, RIM's BlackBerry OS, and Windows Mobile (though the new Windows Phone 7 Series will reportedly not support multitasking) -- so Apple is a bit behind.
This is all just speculation, though, so I wouldn't hold my breath -- after all, multitasking capabilities were predicted for the iPhone OS 3.0 update, and those didn't come through.
Details of AMD's Six-core Phenom II Chips Leaked Online
AMD announced plans to ship a series of six-core desktop chips at the Cebit exhibition in Germany last month, saying the chips would be available during the second quarter, but it held back details, including clock speeds and cache sizes. However, copies of four AMD presentation slides containing details of the chips, dated March 2010 and marked "Confidential -- NDA Required" (referring to a non-disclosure agreement), were posted online by tech Web site VR-Zone and later removed from the site without explanation.
Removing the slides from VR-Zone didn't stop the information from spreading online as the slides were reposted on another site, Softpedia.
The slides, which appear to be authentic, show AMD plans to begin production of three six-core Phenom II X6 chips in April, with a fourth model to enter production during the third quarter.
The first three chips -- the 1090T, 1055T and 1035T -- will run at clock speeds of 3.2GHz, 2.8GHz and 2.6GHz, respectively, according to the slides. In Turbo mode, the chips run even faster, at 3.6GHz, 3.3GHz and 3.1GHz. All of the chips have 9MB of cache and are manufactured using a 45-nanometer process. No information on pricing was given.
The fourth chip, due during the third quarter, is the 1075T, which runs at a clock speed of 3GHz -- or 3.5GHz in Turbo mode -- and also has 9MB of cache.
Three of the four Phenom II X6 chips -- the 1090T, 1075T and 1055T -- have a thermal design power (TDP) of 125 watts, suggesting they are designed for high-end PCs, including gaming machines. The other chip, the 1035T, has a TDP of 95 watts, which matches AMD's lineup of mainstream desktop chips.
Asked to confirm the specifications, AMD first said they were "not accurate," but then appeared to back away from that statement when informed of the source of the information.
"I think you are referring to leaked information and a slide which was not published by AMD. Therefore, AMD can not confirm the accuracy of the details you have sent," wrote Jason Coates, a spokesman for AMD Asia-Pacific, in an e-mail.
Regardless of the exact specifications, users will soon see PCs based on six-core chips like the Phenom II X6 or Intel's upcoming six-core Gulftown processor. Intel has not given a timeframe for when the Gulftown chips will be available.
Exactly how users can expect to benefit from so many processor cores is unclear. Most desktop PC software doesn't take advantage of multiple cores, but that hasn't slowed the shift, first to quad-core chips and now to six-core versions.
How to Upgrade Your Laptop's Hard Drive to an SSD
Upgrading to a solid-state drive isn't as easy as buying a drive and throwing it in your PC. Here are a few tips for picking out the right model, making sure that it will work with your setup, carefully cloning your old drive, and keeping the install process clean and painless. (Don't forget to read "How to Switch to a Solid-State Drive" for more advice.)
SSD Basics
The main drawback of a solid-state drive is the cost: per gigabyte, SSDs are much more expensive than standard hard drives, which have come down dramatically in price in the past several years. While it's easy to find an inexpensive laptop with a 160GB, 250GB, or even 320GB hard drive, a high-quality 256GB SSD would likely set you back over $700. That's a high price to pay, particularly if your current laptop or netbook is a fairly low-cost unit.
SSDs come in two major types: SLC (single-level cell) and MLC (multi-level cell). An SLC SSD stores one bit per flash memory cell, while an MLC drive stores two or more bits per cell. As a result, MLC drives are less expensive than SLC drives at the same capacity point, since fewer physical flash memory components are needed to reach a given capacity.
The downside is that MLC drives are slower than SLC units, though usually still much faster than regular hard drives. You'll typically find SLC drives in data centers and workstation-class environments, where the greater cost is mitigated by gains in productivity and reliability.
Even MLC drives can be expensive, especially at capacities of 200GB or more. Somewhere in the middle are MLC drives with a capacity of 80GB to 120GB; these tend to run from $200 at the 80GB point to $400 at the high end of 120GB MLCs. You can find lower capacities--as small as 30GB--but for the upgrade described in the following pages, we chose a 120GB MLC drive. Drives at the 120GB or 128GB capacity point (depending on the flash supplier) deliver the best blend of price, performance, and capacity.
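The cost math above is easy to check. Here is a minimal sketch in Python, using only the prices quoted above and the standard SLC/MLC bits-per-cell definitions; nothing in it is a vendor-published specification.

```python
# Arithmetic behind the SLC/MLC trade-off and the per-gigabyte prices
# quoted above. Prices are the article's figures, not vendor specs.

def cells_needed(capacity_gb, bits_per_cell):
    """Flash cells required to store capacity_gb gigabytes (decimal GB)."""
    bits = capacity_gb * 8 * 10**9
    return bits // bits_per_cell

# MLC stores two bits per cell, so it needs half as many cells as SLC
# for the same capacity -- the root of its price advantage.
ratio = cells_needed(120, 1) // cells_needed(120, 2)
print(ratio)  # -> 2

# Price per gigabyte at the two ends of the quoted MLC range.
print(round(200 / 80, 2))   # -> 2.5  ($/GB for a $200, 80GB drive)
print(round(400 / 120, 2))  # -> 3.33 ($/GB for a $400, 120GB drive)
```

The same arithmetic explains why the 120GB-to-128GB point is the sweet spot: per-gigabyte cost falls only modestly above it, while total price climbs steeply.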
Is Your Laptop Ready?
Before jumping in and swapping drives willy-nilly, consider whether your laptop is well suited for a solid-state drive. Here are a few concerns to keep in mind.
* Does your laptop run Windows XP? If you have an older portable that shipped with Windows XP several years ago, dropping in an SSD is not a good idea. While SSDs can work with Windows XP, that OS isn't as well optimized for SSDs as Vista--and, more particularly, Windows 7. The newest Windows supports the TRIM command, which helps keep SSD performance optimized. We recommend not replacing your XP laptop's hard drive with an SSD.
* Does your laptop's BIOS support SSDs? The BIOS of some older laptops won't work properly with solid-state drives. Unfortunately, there's no easy rule of thumb to follow in this regard, so before you buy, try doing a Web search for your PC model and "SSD compatible" to see if other users have had upgrade issues.
* Can your laptop be physically upgraded? Some older laptops don't allow for easy upgrading of the hard drive. This is especially true for certain MacBook and MacBook Pro models. Make sure that upgrading won't void your warranty or require you to perform serious surgery on your laptop.
If you have any doubts, be cautious and check online forums and other resources before attempting a swap to a solid-state drive. The technology is still new enough that the kinks and the potential backward-compatibility issues haven't been completely ironed out.
You have a wide array of SSDs from which to choose, since more companies offer SSDs than standard drives today. Of those companies, however, many are simply rebadging drives manufactured by others as their own.
We recommend sticking with a manufacturer that makes the flash drive itself, or that has its own engineering team behind the drive's components. If you're looking for suggestions, consult PCW's Top 5 Solid-State Drives chart to see which models came out ahead.
In the following pages you'll see how we upgraded an ultralight notebook from a 250GB hard drive to an OCZ Apex 120GB solid-state drive. The OCZ drive is a midrange, MLC SSD that lacks TRIM support, but we've seen it speed up our test laptop in everyday use.
Brewery on desktop virtualization: Tastes great, costs less
None of the stereotypes include dual-monitor thin-clients on the manufacturing floor of a fast-growing regional brewery, running a high-end brewhouse with graphics showing every stage of brewing, filtering and packaging and letting brewmasters control the process via touchscreens.
Boulevard Brewing Co. is just lucky, according to Tony Lux, who, until he hired a full-time programmer this year, was the sole IT staff for a 91-person, 140,000-barrel-per-year Kansas City brewery, and who revels in the job title of "purveyor of technology."
Though Lux didn't intend to use desktop virtualization -- and had never heard of the vendor he ended up hiring -- he finished the company's migration to an IT-controlled brewing system by plugging in virtual-client hardware from Pano Logic, whose claim to fame is running native Windows applications, drivers and graphics entirely from the server, without requiring any processing power on the client at all.
[Desktop Virtualization: It's Microsoft vs. VMware in Cost Smackdown]
Pano Logic and competitors such as NComputing are attracting attention from some companies that would never have considered virtual desktops before, largely because virtualization has become common enough to be one of the standard short-list options for hardware upgrades, according to Mark Bowker, infrastructure analyst at Enterprise Strategy Group.
"We see more interest in VDI when security is the issue or when people say compliance is the issue that lets them sleep at night," Bowker says. "There are a lot of people looking at it on an application by application basis, though."
Customers understand the difference between computing hardware and computing resources and are perfectly happy to shift to virtual editions of one or the other if the performance and price are right, agrees Chris Wolf of the Burton Group.
There are enough thin-client implementations available that it's not hard to match one to a set of requirements, though there's no guarantee they'll work better than traditional versions, Wolf says.
Boulevard Brewing did move almost all its data center applications onto VMware ESX virtual servers, but it wasn't interested in virtual desktops any more than it was in new and unproven brewing technology, Lux says.
A Physical Move and a Virtual One
Three years ago, the 20-year-old company was operating out of a turn-of-the-century brick building in a historic part of the city, using what the company's promotional copy calls "a vintage Bavarian brewhouse," designed for the kind of artisanal brewing founder John McDonald had in mind. The building and the brewing equipment brought a lot of historic flavor to the product, but the company was topping out its beer capacity.
"We could only do 35 barrels per batch, so when we hit around 100,000 barrels, that was it," Lux says. "We had to run 365 days a year to do that."
So the company sold the old building and bought a pair of new ones that doubled its capacity, and upgraded its brewing equipment to new versions of the venerable stuff it had always used. "All the brewing stuff is from Germany; Krones, which is the Mercedes-Benz of brewing equipment," Lux says.
The IT migration wasn't simple, but it wasn't unusually difficult, either: Ethernet-over-fiber backbone links to the headquarters building eight miles away, off-site backup and recovery, new hardware for the data center, and a VMware ESX virtual-server infrastructure to run it all, Lux says.
The only persistent problem was how to run the automation controls that brewhouse workers use to control the mixing, fermentation, filtering and packaging.
The brewhouses run on manufacturing execution system software from Wonderware, which uses touch-control screens to give workers graphical links to the automation controls it maintains with the brewing equipment, as well as back-end connections to the ERP system that keeps tabs on inventory and shipping.
"The old brewhouse was less automated, so we only had to worry about one Windows 2000 machine," Lux says. "Here we were talking about letting the users see everything in the process, so we'd have to put in workstations everywhere, with dual monitors and touch controls, and put it all inside industrial cabinets to keep them away from the moisture and heavy equipment."
The company was also loath to put relatively expensive PCs in the manufacturing danger zone.
"We didn't want to put money out there with the guys driving forklifts," Lux says.
Pano Logic Hardware Fits Right In
The company looked at implementations from Citrix and hardware versions from Wyse and Jack PC, but couldn't overcome some of the practicalities.
Among other problems, the Wonderware clients didn't understand virtualization, so some implementations forced Lux to patch together two sessions running under different licenses. That worked, except that the cursor and some of the controls would function on one monitor and not the other.
A Wyse unit mounted in a waterproof washdown cabinet almost made the cut, but graphics performance was so poor that screen refreshes were visible to the naked eye, and each unit cost a disappointing $750.
Finally, at a VMware training session, Lux says, another customer mentioned to Lux that his own boss had been impressed by a Pano Logic demo at VMworld a few weeks before. Neither he nor Lux knew anything about Pano Logic except that it was supposed to be inexpensive.
"I said, 'OK, I'm cheap; I'll call,'" Lux says. "We got it in and it turned out to be a great product for us."
Dual-monitor Pano Logic machines cost $350 apiece, were less vulnerable to the environment, and performed much better than the Wyse machines, Lux says.
Boulevard ended up installing 15 Pano Logic machines and hiring a programmer who keeps customizing the Wonderware to make the brew controls more granular and easier to use. The only holdup was a refresh-frequency problem with one type of monitor, which Pano Logic fixed at no cost, Lux says.
Compare that with $2,000 per station for the same number of PCs, plus $1,500 each for protective cabinets: Boulevard saved roughly $47,000 on hardware, on a budget that runs around $60,000 per year for capital costs outside of special projects.
"We haven't had to replace anything yet, but if something does break or get killed, who cares, just throw it out," Lux says. "If something hangs, we just relaunch the session and we're right back up and running, so that's not an issue. The hardware is almost disposable."
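The hardware savings work out as follows, using only the figures quoted in the story: 15 stations, $350 per Pano Logic unit, and roughly $2,000 per PC plus $1,500 per protective cabinet.

```python
# Boulevard's thin-client vs. PC hardware math, using the article's
# figures. Assumes, as the story implies, that the Pano Logic units
# needed no protective cabinets.

stations = 15
pano_cost = stations * 350           # dual-monitor Pano Logic units
pc_cost = stations * (2000 + 1500)   # PCs plus industrial cabinets

savings = pc_cost - pano_cost
print(savings)  # -> 47250, in line with the ~$47,000 cited above
```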
Oracle eyes pharmaceuticals with new on-demand CRM app
New features in CRM on Demand Release 17 include tools for managing sales pipelines and performing forecasts of future business; a redesigned user interface; and added language support.
But one CRM industry observer flagged the Life Sciences product as a particular point of interest.
"Pharma sales is a different kettle of fish than almost anything else in CRM," Beagle Research analyst Denis Pombriant wrote in a blog post. "Sales reps never actually sell their wares to actual customers. They sell to the major recommender, the doctor, and even the MD doesn't buy anything. He or she simply writes a prescription. So you have this odd situation where the sales person is there simply to influence the recommender."
In addition, pharmaceutical sales representatives deal with rafts of paperwork and product samples, and therefore historically tend to leave their laptops in the car when pitching products to physicians and then perform data entry at the end of the day, said Anthony Lye, senior vice president of Oracle CRM. Depending totally on mental recall can be a "painful" process and also leads to inaccuracies, he added.
The new application is supposed to make this work easier for sales representatives so data is captured and entered into the system on an ongoing basis.
Oracle could potentially do big business with the new application. There are about 100,000 drug salespeople in the U.S. alone, according to Pombriant.
But down the road, pharmaceutical CRM application strategies may change, as pharmaceutical companies look to cut the significant costs associated with keeping salespeople on the road, Pombriant predicted.
"That means taking the call to the Web and with it, losing a significant number of jobs," he wrote. "A future pharmaceutical CRM product might be expected to offer a portal for each doctor the company targets. ... A drug maker would be able to provide all of the information usually associated with a [sales] call and more, such as custom-designed video and audio that the doctor or pharmacist could access when convenient, rather than in the middle of a busy day."
Oracle's own products haven't gone that far yet, but the notion is "a logical extension" of its current strategy, Lye said.
Oracle looks for updated Tuxedo to get mainframe users onto x86s
Tuxedo 11G, the version of the software released Wednesday, includes Oracle Tuxedo Application Runtime for CICS, an application programming interface-based emulator for running IBM's Customer Information Control System transaction server software for mainframes.
"Mainframes continue to be the backbone of business computing, but these systems tend to be expensive, rigid, and really hard to maintain," said Ajay Patel, vice president of Oracle's Fusion line of middleware. With the CICS runtime, "You can take your existing business applications running on the mainframe and migrate them to open system," he said.
The software also includes Batch 11g, for taking on mainframe-based batch processing, and Oracle Tuxedo Application Rehosting Workbench 11g, which provides some automation tools for migrating the mainframe data and code over to a distributed x86 architecture.
The company claims that moving to a distributed x86 environment could save as much as 50 percent in operational costs, with little or no reduction in performance. Patel said, "You can take a pool of servers and pull them together to get any kind of workflow," up to "several thousand MIPS," or million instructions per second, a commonly used metric for evaluating mainframe performance.
The company promises that CICS applications that serve as many as 100,000 users, and execute 50,000 transactions per second, can perform equally well in a Tuxedo-based distributed environment. Once an application is running on Tuxedo, an organization can also expose some of the functionality as Web services, Patel said.
Oracle acquired Tuxedo as part of the BEA Systems acquisition in 2008. Much like Oracle's (formerly BEA's) WebLogic Server handles Java applications, Tuxedo acts as an application server for components written in COBOL, C, and C++, all widely used languages for mainframe applications. It also has a service bus to allow such components to interoperate over a network, via messaging. Tuxedo instances can be clustered to provide high availability for applications.
The company promises that this new software will make the migration process easier, albeit not totally automated. Since the application will run in an emulator, the code itself will not need to be reworked, which will be a big time- and aggravation-saver. Some work will still be required to redefine the new database and file calls, though the Workbench software was designed to automate at least the most obvious of changes.
Stefan Ried, a senior analyst at Forrester Research, notes that even with these tools, migrating an application off a mainframe will still require some work. "One major challenge can't be neglected: A migration off the mainframe always involves manual steps and requires a deep understanding of the business logic," he wrote in a blog posting.
Oracle contends that this approach will still be far easier than other approaches. Currently, the task of migrating software from mainframes requires rewriting the software for the new environment. This tends to be a costly and error-prone approach, Patel contended.
In addition to the mainframe migration tools, Oracle has made a number of other changes to Tuxedo. Most notably, the software can now run programs written in Ruby and Python.
Verizon, IBM launch private cloud backup service
Warren Sirota, strategy and program development executive at IBM, said the base service, to be offered by both Verizon and IBM, provides remote electronic storage at an offsite data center.
Optional upgrades can include an on-site storage appliance, the ability to replicate data to an alternate vault and data backup tapes for long-term retention.
"The combination of Verizon's secure networking expertise and data center capabilities and IBM's long history of providing electronic data management and protection solutions yields a unique end-to-end backup and recovery solution that is integrated with an enterprise's existing wide-area network environment and fully managed in the cloud," said IDC analyst Melanie Posey in a statement.
The initial iteration of Managed Data Vault is aimed at users in the New York metropolitan region, with the primary data center located at IBM's Sterling Forest recovery center. The companies expect to expand the service later this year using both IBM and Verizon data centers, Sirota said.
The Managed Data Vault service's disk-based backup offers set recovery point objectives (RPO) and recovery time objectives (RTO).
Managed Data Vault also improves the RPO of database applications: its online database backup capability allows backups to be performed while the database is in use, improving database and application availability, according to Sirota.
"There is dramatic RTO improvement by eliminating the time necessary to recall and transport tapes for recovery. The planned for RTO is also more reliably met because Managed Data Vault lets users know that a successful backup has occurred," Sirota said.
The backup service is priced per gigabyte of data stored -- the specific price per gigabyte depends on which level of service is used and how long a user wants the data retained.
Sirota said the price "will be very competitive relative to Internet-based services that do not have consistent nor reliable throughput characteristics, with the added benefit of avoiding ongoing capital expense as data stores grow."
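Since no rates were published, the billing model can only be sketched. In the hypothetical illustration below, the $0.50-per-GB rate and the 1.5x long-retention factor are invented placeholders; only the structure -- price per gigabyte, scaled by service level and retention period -- comes from the article.

```python
# Sketch of the per-gigabyte backup pricing described above. The rate
# and retention multiplier are hypothetical -- Verizon and IBM did not
# publish prices.

def monthly_charge(stored_gb, rate_per_gb, retention_multiplier):
    """Charge scales with data stored, service level, and retention."""
    return stored_gb * rate_per_gb * retention_multiplier

# Hypothetical: 500GB at $0.50/GB with a 1.5x long-retention factor.
print(monthly_charge(500, 0.50, 1.5))  # -> 375.0
```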
IBM would not release the throughput rates associated with Managed Data Vault, saying instead that it is designed to back up large data stores (terabytes) within normal backup windows, and at LAN speed.
New Intel Xeon Processor Pushes Mission Critical into the Mainstream
Scalable from two to 256 chips per server, the new Intel Xeon processors deliver an average of three times the performance of Intel's existing Xeon 7400 series on common, leading enterprise benchmarks, and come equipped with more than 20 new reliability features.
Twenty Old Servers – To One New One
The combined scalable performance, advanced reliability and total-cost-of-ownership advantages of the Xeon 7500 series will further accelerate the shift from proprietary systems to industry-standard Intel processor-based servers. These new capabilities enable IT managers to consolidate up to 20 older single-core, 4-chip servers onto a single server based on Intel Xeon 7500 series processors while maintaining the same level of performance. In doing so, they could also see an estimated reduction in energy costs of up to 92 percent, and an estimated return on investment within one year due to reductions in power, cooling and licensing costs.
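Intel's energy claim is simple to model. In the back-of-the-envelope sketch below, the 500-watt per-server draw and $0.10/kWh electricity rate are illustrative assumptions, not Intel figures; only the 20:1 consolidation ratio and the 92 percent reduction come from the announcement.

```python
# Back-of-the-envelope version of the consolidation claim above.
# Per-server power draw and electricity price are assumptions.

old_servers = 20
watts_each = 500    # assumed draw of a legacy single-core, 4-chip server
kwh_price = 0.10    # assumed $/kWh
hours = 24 * 365

old_cost = old_servers * watts_each / 1000 * hours * kwh_price
new_cost = old_cost * (1 - 0.92)   # Intel's claimed 92 percent reduction

print(round(old_cost))  # -> 8760 (annual energy cost, 20 old servers)
print(round(new_cost))  # -> 701  (after consolidation)
```

Under these assumptions, energy alone recovers thousands of dollars a year; Intel's one-year ROI estimate also folds in cooling and software licensing savings not modeled here.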
“The Xeon 7500 brings mission critical capabilities to the mainstream by delivering the most significant leap in performance, scalability and reliability ever seen from Intel,” said Kirk Skaugen, vice president of the Intel architecture group and general manager of Intel’s data center group. “This combination will help users push to new levels of productivity, and accelerate the industry’s migration away from proprietary architectures. We are democratizing high-end computing.”
New Standards in Reliability and Scalability
Customers running mission-critical workloads that simply cannot afford unscheduled downtime, such as hospitals or stock exchanges, can take advantage of more than 20 new features that deliver a leap forward in reliability, availability and serviceability (RAS). These capabilities are designed to improve the protection of data integrity, increase availability and minimize planned downtime.
For example, this is the first Xeon processor to possess Machine Check Architecture (MCA) Recovery, a feature that allows the silicon to work with the operating system and virtual machine manager to recover from otherwise fatal system errors, a mechanism until now found only in the company's Intel® Itanium® processor family and RISC processors.
The Intel Xeon processor 7500 series offers unique scalability through modular building blocks enabled by the Intel® QuickPath Interconnect (QPI). With QPI, system makers can build cost-effective, highly scalable eight-processor servers that don't require specialized third-party node controller chips to "glue" the system together. Intel is also working with system vendors to deliver "ultra-scale" systems with 16 processors for the enterprise, and up to 256 processors and support for 16 terabytes (one terabyte is equal to 1,000 gigabytes) of memory for high-performance computing "super nodes" running bandwidth-demanding applications such as financial analysis, numerical weather prediction and genome sequencing.
Record-Shattering Performance
The Intel Xeon processor 7500 series represents the largest performance leap in Xeon family history, with the chip an average of three times faster across a range of benchmarks, setting more than 20 new world records, including stellar results from Cisco*, Dell*, Fujitsu*, IBM*, NEC* and SGI*.
| Model | No. of Processors | World Record Benchmark Claims |
| --- | --- | --- |
| SGI* Altix* UV 1000 | 64 | SPECint*_rate_base2006, SPECfp*_rate_base2006 |
| Fujitsu* PRIMEQUEST*1800E | 8 | SAP* Sales and Distribution (SD) two-tier, SPECjbb*2005, SPECfp*_rate_base2006, and SPECint*_rate_base2006 |
| NEC* Express*5800/A1080a-E | 8 | TPC Benchmark* E |
| Cisco* UCS C460 M1 | 4 | SPECint*_rate_base2006, LS-Dyna* car2car high-performance computing (HPC), SPECompL*_base2001 |
| Dell* PowerEdge* R910 | 4 | SPECjAppServer*2004 |
| Fujitsu* PRIMERGY* RX600-S5 | 4 | SAP* BI-Datamart |
| IBM* System x* 3850 X5 | 4 | VMmark*, TPC Benchmark* E, SAP* SD two-tier, SPECjEnterprise*2010, SPECfp*_rate_base2006, and SPECjbb*2005 |
| SGI* Altix* UV 10 | 4 | SPECompM*2001 |
| Dell* PowerEdge* R810 | 2 | SPECjbb*2005 |
| IBM* System x* 3950 X5 | 2 | SPECint*_rate_base2006 |
For detailed performance results and more information about all the world record claims, see the accompanying performance fact sheet and visit www.intel.com/performance/server/xeon_mp/summary.htm.
Large-Scale Virtualization
The Intel Xeon processor 7500 series meets the growing trend of IT organizations virtualizing large mission-critical workloads for applications such as enterprise resource planning. With up to eight times the memory bandwidth of the Intel Xeon processor 7400 series and four times the memory capacity, thanks to 16 memory slots per processor, the Xeon 7500 series can support one terabyte (1,000 gigabytes) of memory in a four-socket platform. Intel Virtualization Technologies, which include new I/O virtualization capabilities and Intel® Virtualization Technology (VT) FlexMigration, enable live VM migration across all Intel® Core™ microarchitecture-based platforms, protecting the investments of administrators who use pools of virtualized systems for failover, disaster recovery, load balancing, and server maintenance and downtime management.
Two-Chip and Cost Optimized Servers
New two-chip expandable-class platforms with large memory capacity, based on the Intel Xeon processor 7500 series, are ideal for memory-intensive databases and virtualization environments. The Intel Xeon processor 7500 series is available in quad-, six- and eight-core versions, with twice as many threads as cores thanks to Intel Hyper-Threading Technology. The Intel Xeon processor 6500 series provides a lower-cost solution for 2-chip servers with large memory requirements.
Product Details
The Intel Xeon processor 7500 series supports up to eight integrated cores and 16 threads per chip, and can scale up to 32 cores and 64 threads per 4-chip platform or 64 cores and 128 threads per 8-chip platform. It is available with frequencies up to 2.66 GHz, up to 24 MB of Intel® Smart Cache memory, four Intel QPI links and Intel Turbo Boost technology. Thermal design power (TDP) levels range from 95 watts to 130 watts.
The Intel Xeon processor X7560, with eight cores and 24MB cache size, is built for highly parallel, data demanding and mission-critical workloads, whereas the Intel Xeon processor X7542 is a frequency-optimized 6-core option at 2.66 GHz targeted for super node high-performance computing applications in science and financial services.
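The core and thread counts quoted above follow directly from the per-chip core count and Hyper-Threading's two threads per core:

```python
# How the quoted core/thread counts scale with chip count: eight cores
# per top-end Xeon 7500 chip, two threads per core via Hyper-Threading.

CORES_PER_CHIP = 8
THREADS_PER_CORE = 2

for chips in (1, 4, 8):
    cores = chips * CORES_PER_CHIP
    threads = cores * THREADS_PER_CORE
    print(f"{chips}-chip platform: {cores} cores, {threads} threads")
# -> 1-chip platform: 8 cores, 16 threads
# -> 4-chip platform: 32 cores, 64 threads
# -> 8-chip platform: 64 cores, 128 threads
```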
Pricing and Availability
The innovative modular scaling of the Xeon 7500 processor works with the Intel 7500 Chipset and Intel 7500 Scalable Memory Buffers to enable unique OEM system designs, bringing a range of socket, memory, I/O, form-factor and reliability options never before available to the mainstream server market. Enterprise software vendors expected to support the high-end features of Intel Xeon processor 7500-based platforms include Citrix*, IBM*, Microsoft*, Novell*, Oracle*, Red Hat*, SAP AG* and VMware*. System vendors are lining up to take advantage of the high-end Intel Xeon processor 7500 series capabilities and deliver highly innovative solutions at much lower cost than older proprietary systems. With more than double the number of designs versus the previous-generation Intel Xeon processor 7400 series, system manufacturers were expected to announce systems based on the Intel Xeon 7500/6500 processors starting today.
These manufacturers include Bull*, Cisco*, Cray*, Dell*, Fujitsu*, Hitachi*, HP*, IBM*, Inspur*, NEC*, Oracle*, SGI*, Supermicro* and Quanta*.
Symantec Workspace Virtualization Layer Definition Tool
The SWV Layer Definition Tool allows administrators to do two things:
1. Export virtual application (i.e., layer) information to a "Layer Definition File" (LDF file), and
2. Create or modify a virtual application layer using a layer definition file.
As an example, the below information discusses how to create a virtual application package for Internet Explorer using the new SWV Layer Definition Tool.
Symantec Workspace Virtualization provides the capability to capture an application installation to a redirected area (i.e. sub-layer) in which all of the files, folders and registry information that comprise the application are stored. After the application has been captured, it can be exported to a package (VSA file) that can be deployed to endpoints. While this is the typical case for creating a virtual application, there are scenarios where it might be useful, or even necessary, to have a mechanism to reliably create virtual applications where capturing the application installation is not possible.
For example, suppose you want to create a virtual application package for Internet Explorer 6 (IE6). Because IE6 is included with Windows XP, it is not well suited for the capture mechanism described above. To virtualize this application, the Layer Definition Tool lets us handcraft an IE6 layer using the SWV Administration tool and then export the definition to a layer definition file, which can later be used to re-create the virtual application layer in a reliable way. Some might ask why Symantec doesn't just package Internet Explorer 6 and make the package available for download. The answer is that an IE6 virtual package is composed of Microsoft system files that can't simply be packaged up and distributed to customers, due to file redistribution restrictions. The Layer Definition Tool was therefore created to ensure that customers could reliably create virtual applications where capturing the installation is not feasible.
To capture IE6, we need the ability to supply a layer definition file that contains all the information required to create an IE6 virtual application layer, without including the vendor-owned files. Enter the Layer Definition tool. Let’s take a closer look at how this tool can help us address this problem.
Virtualization's Third Dimension: Everything Channel Awards DataCore Software the 2010 Five-Star Partner Rating for Its Commitment to the Channel
DataCore Takes Virtualization to the Third Dimension
DataCore provides the essential storage dimension to virtualization, enabling solution providers to accelerate sales cycles and increase margins by making it easier to leverage their virtual server and virtual desktop practices and win more business from existing sales opportunities.
The Five-Star recognition is significant since DataCore has recently undergone a major push to make it easier for partners to reap the rewards and benefits of having DataCore storage virtualization solutions in their arsenal of product offerings. A new Solution Advisor Resource Center includes a wealth of sales training and positioning tools and materials to help resellers and solution advisors grow their virtualization business with DataCore's storage virtualization solutions - http://www.datacore.com/partner/resources/.
“We are pleased to deliver to the Channel an exclusive list of the leading vendor channel programs with details about key technologies, program opportunities and information on the hottest new program offerings,” said Kelley Damore, vice president, editorial director, Everything Channel. “The quality of a technology vendor's partner program determines how profitable its partners will be. For their commitment to their business partners and their efforts to build quality programs, we congratulate those recognized as this year’s Five-Star Partner Program winners for helping to drive greater revenue in the channel.”
The Partner Program Guide (PPG) list and those companies recognized as Five-Star appear in the March 29 issue of CRN magazine and online at www.Channelweb.com.
"We are very pleased that Everything Channel has recognized DataCore as a Five-Star Partner Program winner,” states Bill Ferara, director of channel sales for the Americas, DataCore Software. "The word is getting out that DataCore storage virtualization software overcomes the major storage-related obstacles and upheavals that slow down or stop virtualization projects from getting off the ground or being successful when deploying a virtual infrastructure. We are confident that resellers will profit from adding DataCore storage virtualization to their business portfolios, particularly since storage virtualization software has become more and more a strategic part of any virtualization deployment.”
Background on the Five-Star Partner Program
Research for the 2010 Everything Channel Partner Program Guide and the Five-Star Partner Program rating was conducted by Everything Channel's research department. Everything Channel analyzed 130 vendor programs, rating vendors' responses to in-depth questions about their partner programs across five elements: sales support, partner profitability, partner ecosystem development/management, partner communication/marketing and demand generation.
To ensure fair comparisons, companies were placed in one of four categories based on company size (enterprise, midsize, small and emerging). Questions from each section were scored individually and weighted appropriately. The Five-Star Partner Program rating recognizes the elite subset of Partner Program Guide vendors who give solution providers the best partnering elements in their channel programs. The Five-Star rating is bestowed on programs whose overall rating is among the elite based on company size.
Virtualization’s Third Dimension – Storage Virtualization
DataCore software provides the strategic, third dimension to value-added resellers’ and IT solution providers’ virtualization businesses. DataCore storage virtualization software eliminates storage-related disruptions, performance bottlenecks and funding roadblocks that jeopardize virtualization projects.
Capture the Storage Virtualization Opportunity
For information about reselling DataCore Software, please contact info@datacore.com or contact an authorized distributor.
DataCore’s North American distributors are: Alternative Technology, Ingram Micro, Lifeboat Distribution, and Tech Data Corporation.
VARs and solution providers that would like to partner with DataCore can also visit http://www.datacore.com/partners.
Thursday, March 11, 2010
Xangati Shatters Pricing Barrier to Live Enterprise-Wide Virtualization Management
Xangati, the first management solution to provide live, video-like visibility into all communications across both the virtual and physical worlds, today released two groundbreaking new products – Xangati for ESX and the Xangati Management Dashboard. Designed to decrease the management burdens of virtualization experts, Xangati’s new line of virtual appliances offers extensive monitoring and troubleshooting capabilities like no other solution on the market today and does so at a fraction of the cost. To take a free test drive and see what you’ve been missing, download Xangati for ESX at http://www.xangati.com/landing_page-software_download.php.
“Virtualization has proven to be a revolutionary force in today’s computing infrastructure,” said Zeus Kerravala, senior vice president of enterprise research at Yankee Group. “It has already driven down the cost of servers; but until now, the old-school management solutions have failed to adapt to the economics of virtualization. With the release of its new virtual appliances, Xangati proves leadership once again by riding the virtualization wave to provide comprehensive management at a price that virtualization managers themselves can pull the trigger on.”
Alan Robin, CEO of Xangati, said, “The inability to explicitly see virtual communications within a hypervisor using traditional management solutions is perhaps the most fundamental ‘blind spot’ in a virtual infrastructure. We believe that to effectively manage the virtual world, you have to be of the virtual world, which is why we’ve come to market with a suite of virtual appliances that solve this virtual communication void. Couple our innovative product introduction with an aggressive ‘try-before-buy’ pricing model that allows virtualization administrators to solve problems before even committing to a purchase, and Xangati not only provides a win-win for everyone but positions itself as a disruptive force for virtualization management as we know it.”
Xangati is specifically designed to complement VMware’s vCenter by adding 360-degree communication visibility into all the critical elements that it manages. Just minutes from installation to instant visibility, and for as little as $2 per server, Xangati’s new virtualization management solution provides an instantaneous video-like view of how each and every component of a virtual infrastructure is performing. The solution also offers virtualization administrators DVR-like recordings of actual activity to be replayed after-the-fact for better problem analysis and faster time to resolution. Xangati’s virtual appliance suite consists of the following products:
- Xangati for ESX – tightly integrates with VMware’s hypervisor to instantly summarize traffic traversing a vSwitch, providing valuable visibility that completely eliminates these once troublesome “blind spots.” The turnkey virtual appliance is offered via a 14-day high-value trial, with the true plug-and-play nature of the solution enabling it to be installed and yield value in less than 30 minutes.
- Xangati Management Dashboard – can track up to 5,000 identities (e.g. a hypervisor, virtual server, virtual desktop, VoIP phone), providing complete enterprise-wide visibility, alerting, reporting and recording for hundreds of applications. The product comes in both a Standard and Enterprise edition, depending on the size and scope of the IT infrastructure and budget. It too is plug-and-play, with ROI in less than two hours.
Xangati’s initial pilot program has already yielded incredible value and helped virtualization experts in the following ways:
- Prove that the virtual infrastructure is not the cause of emerging performance issues
- Foster better communication between the virtualization teams and the owners of the servers that they have created
- Ensure virtual desktop infrastructure (VDI) pilot success by showing both end-user experience with VDI and the root cause of the problem
- Catch misconfigurations with virtual IP storage and use the data to prevent performance or security issues
- Track the virtual server as it talks to an Oracle database in the physical world
“Countless organizations are struggling with ‘blind spots’ on their infrastructure and need real-time results to successfully get through their virtualization deployments – whether they are at the desktop, server or storage level,” continued Robin. “Xangati is unique in providing complete communication visibility across the enterprise to help identify and resolve performance issues right away.”
Davin Garcia, an IT manager at Innovative Technical Solutions, Inc. said, “Xangati is a powerful solution that now lets me see not only virtual-to-physical communications but virtual-to-virtual communications as well – eliminating critical ‘blind spots’ in this growing world of virtual infrastructures. The ability to have visibility into communication within the ESX host truly covers all bases and offers the most complete solution I’ve been able to find on the market today.”
Availability and Pricing
For hundreds to thousands of dollars less per managed object than traditional agent-based solutions, Xangati's aggressive new pricing model offers a transparent view of the entire host and all associated virtual machines, as well as recording and playback of any issues found.
Xangati’s new suite of virtual appliances is available now. Xangati for ESX can be downloaded directly from http://www.xangati.com/landing_page-software_download.php for a free 14-day trial or purchased immediately for $299. The Xangati Management Dashboard Standard edition is available for $4,999 and the Enterprise edition for $9,999. Xangati also offers a Starter Kit at a discounted rate of $9,999, which includes Xangati for ESX for up to 20 hosts and the Xangati Management Dashboard Enterprise edition.
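The Starter Kit discount can be checked with quick arithmetic from the list prices above:

```python
# List prices from the announcement.
esx_per_host = 299            # Xangati for ESX, per host
dashboard_enterprise = 9_999  # Management Dashboard, Enterprise edition
starter_kit = 9_999           # bundle: 20 ESX hosts + Enterprise dashboard

# Buying the same coverage a la carte:
a_la_carte = 20 * esx_per_host + dashboard_enterprise
print(a_la_carte)                # 15979
print(a_la_carte - starter_kit)  # 5980 saved with the Starter Kit
```

In other words, at 20 hosts the bundle effectively includes the per-host licenses at no charge.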
Xangati’s new virtual appliances support visibility into not just VMware environments, but heterogeneous virtualization environments including those also driven by Citrix and Microsoft, among others. To reach a sales representative for more information and to have a discussion about the solution that’s right for your organization, please contact sales@xangati.com.
SpringSource Unveils tc Server Spring Edition as the Best Place to Build and Run Spring Applications
SpringSource, a division of VMware, Inc. and the leader in Java application infrastructure and management, today introduced SpringSource tc Server Spring Edition, the best place to build and run Spring applications. tc Server Spring Edition provides a lightweight platform for running modern applications and is ideally suited for the virtualized datacenter as well as private and public cloud environments. tc Server Spring Edition bridges the divide between development teams and IT operations by equipping them with tools to manage the health, performance and quality of their Spring applications.
As part of the introduction of SpringSource tc Server Spring Edition, VMware is unveiling the “Spring on VMware” Promotion whereby licenses of tc Server are offered at no additional cost for a limited time with the sale of qualifying VMware products. The promotion marks the first time that SpringSource solutions are available from VMware channel partners.
Based on the popular Apache Tomcat server, tc Server Spring Edition delivers new levels of developer productivity, operational control and deployment flexibility for customers building and deploying Spring applications. By providing developers and operators with top-down insight into the performance and health of their Spring applications, customers can reduce the time it takes to deliver new applications to meet their business goals. Additionally, the lightweight footprint of tc Server helps ensure optimal resource utilization across virtual and cloud environments.
“Not only is tc Server Spring Edition the best place to run Spring apps but it is ideally architected for next-generation datacenter technologies,” said Shaun Connolly, vice president of product management for the SpringSource division of VMware. “Our customers have proven that to deploy applications properly in virtual environments, the application server must require a very small footprint. In this way, tc Server Spring Edition is ideal for virtual deployments which are the onramp to cloud computing.”
This new lightweight edition of tc Server makes it easy for customers currently running Spring applications on Java EE servers to migrate to tc Server Spring Edition. The lean architecture also allows customers running on Tomcat servers who want enterprise features and customer support to migrate to tc Server Spring Edition. The approach provided by tc Server is a foundation for a streamlined application lifecycle fitted to the evolving datacenter. The solution is part of SpringSource’s product line, which provides modern offerings for building, running and managing Java applications.
SpringSource’s approach is championed by many independent perspectives. As a September 2009 report by Forrester Research Inc., titled “Lean, the New Business Technology Imperative,” states: “Don’t buy and install more stuff than you need. When you evaluate software for any given project, make brevity and compactness a consideration. Too many clients spend far too much time and effort trying to find the products with the most features. Lean shops look for just enough, no more.”
“Without tc Server, I could not have deployed my web-based applications into the internal private cloud environment I created,” said Jon Brisbin, portal webmaster for NPC International. “Other solutions, including the previous application server I was using, are too process-heavy. SpringSource tc Server’s small footprint allows me to deploy a dozen app server instances on one physical box virtualized by VMware software, with plenty of capacity left over.”
Key Features of tc Server Spring Edition
SpringSource tc Server Spring Edition harmonizes the application lifecycle by providing a modern platform that delivers new levels of developer efficiency, operational control and deployment flexibility.
Developer Efficiency: Increase productivity and decrease time to deliver applications from hours to minutes with the following features:
- Deep visibility into the health and performance of Spring applications
- Enhanced build and deploy process through integration with Maven
- Agile Spring development experience with SpringSource Tool Suite designed to speed the code / test cycle and ability to find underperforming areas of your applications
Operational Control: Help ensure Spring applications meet business goals with the following features:
- Performance and SLA management of Spring applications provides the top-down insight required to help ensure applications are operating as expected
- Rich alert definition, workflows and flexible control actions enable application operators to proactively manage application issues
- Group availability and event dashboards make it easy to see the problem areas and possible root causes of issues
Deployment Flexibility: Future-proof applications by leveraging an agile platform that addresses needs from developer to datacenter to cloud with the following features:
- Template-driven server instance creation enables multiple application server instances per machine to be configured and deployed in a matter of seconds
- Integrated experience with VMware Workstation and VMware Lab Manager enables applications to be easily deployed and debugged within encapsulated virtualized environments
- Open, secure API for all operations ensures system administrators have access to the operational capabilities they need using the scripting tools they are familiar with
SpringSource tc Server Offerings and Availability
SpringSource tc Server 2.0 will be generally available in early April 2010 and offered at list price in the following three packages:
- Spring Edition – Starts at $750 per CPU
- Standard Edition – Starts at $500 per CPU
- Developer Edition – Free to download standalone or as part of SpringSource Tool Suite
“Spring on VMware” Special Promotion
Under the “Spring on VMware” Promotion, all customer orders fulfilled between March 8, 2010 and May 8, 2010 that include products (license only) from VMware vSphere™, the VMware vCenter™ family of products, VMware View™ or VMware ThinApp™ will receive 2 CPU licenses of tc Server Spring Edition and 60 days of evaluation support (collectively referred to as the “Spring on VMware Bundle”). For more information on VMware’s promotion visit: www.vmware.com/go/tcserver-promotion.
Upcoming tc Server Webinar
For further information on tc Server Spring Edition and the special VMware promotion, SpringSource will host a webinar titled, “SpringSource tc Server: The Best Place to Build and Run Spring Applications,” on March 23, 2010.
To register for the webinar click on one of the following links:
- Europe: 2:00PM Central European Time/1:00PM GMT https://vmwareevents.webex.com/vmwareevents/onstage/g.php?t=a&d=664353614
- North America: 11:00AM Pacific Time/2:00PM Eastern https://vmwareevents.webex.com/vmwareevents/onstage/g.php?t=a&d=668126377
Swedish Internet Leader to Standardize on Red Hat Enterprise Virtualization
Red Hat, Inc., the world's leading provider of open source solutions, today announced that Swedish Internet company Voddler has standardized its movie service on Red Hat Enterprise Virtualization. Voddler has also selected Red Hat Enterprise Linux and Red Hat Network Satellite as the basis for its new infrastructure, providing the company with a centralized and scalable server platform.
Voddler offers its customers a wide variety of movies on-demand, and Red Hat Enterprise Virtualization will enable the company to virtualize its web servers responsible for the movie service and the database servers containing all of its film titles.
“The powerful combination of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization made Red Hat the obvious solution to virtualize our key business-critical systems,” said Roger Kemmler, IT manager at Voddler. "Red Hat's subscription-based model will help us support our infrastructure as we grow and launch the service in new markets. For this we require a stable and secure platform that is cost-effective, centrally administered and extremely scalable. Red Hat Enterprise Virtualization meets all of those criteria.”
Voddler continually updates its catalogue of film titles and its growing customer base is using the service more frequently, which means the demand on web server and database capacity is continually increasing. A key requirement for Voddler is to have the flexibility to scale the server platform and redistribute resources to meet the demand for increased capacity. Red Hat Enterprise Virtualization allows Voddler to consolidate and increase the effectiveness of its physical infrastructure by using features such as live migration and dynamic resource scheduling to help ensure that the physical systems are used close to their maximum capacity.
Red Hat Network Satellite, a systems management tool, helps Voddler to quickly and easily configure and maintain the virtualized servers running on the new Red Hat platform.
Consultancy EnjoyIT, a Red Hat Ready Partner, is providing support for the project implementation and is also acting in an advisory role.
For more information about Red Hat Enterprise Virtualization, visit www.redhat.com/rhev.
For more information about Red Hat, visit www.redhat.com. For more news, more often, visit www.press.redhat.com.
Cal Net Technology Group and Microsoft Will Kick Off Hyper-V Campaign on St. Patrick's Day
Cal Net Technology Group, recently named one of the fastest growing private companies in the U.S. by Inc. Magazine, has announced the official launch of “Hyper-V Treasure Hunt: Uncover Hidden Cost Savings with Virtualization.” The campaign, in partnership with Microsoft, seeks to educate and inform current Cal Net customers about the benefits of Microsoft Hyper-V, including cost savings and energy efficiency.
The campaign will kick off with an invite-only open house on Wednesday, March 17, 2010 from 6pm – 9pm at Cal Net’s headquarters in Los Angeles. In addition to learning about Hyper-V and other complementary products, all attendees will have the opportunity to socialize with industry leaders, receive complimentary IT-related swag, and enter to win a copy of Windows 7.
“As budgets continue to dwindle, and small to mid-size businesses look for ways to cut costs and streamline processes, I believe that it’s part of our job as their IT consultant to consistently offer better products that meet current needs,” remarked Zack Schuler, CEO of Cal Net Technology Group. “The Hyper-V campaign is one in a series of many that we will be conducting with Microsoft over the next year, with the goal of staying ahead of the trends and steering our clients toward more efficient, cost-effective ways of conducting business.”
"As a VAR Champion Club partner, Cal Net Technology Group plays an integral role in educating consumers about Microsoft products and services,” added Ken Stone, Region Marketing Manager of Microsoft. “We look forward to working with partners like Cal Net Technology.”