Bob Gilbert gives a high level overview of Riverbed's products.
For this week's Federal IT Q&A with Steve Riley, we examine what agencies should consider when deploying desktop virtualization: the drivers, user behaviors, and applications, as well as how Riverbed solutions play a critical role in ensuring the best possible user experience.
To kick things off, Steve breaks down some of the drivers for VDI. Simply put, the consumerization of IT is high on the list. An agency can allow agents to bring in their own gear, or to purchase gear with an allotted budget, and then provide and manage a virtual desktop with applications securely. From an IT and budgetary perspective, desktop virtualization frees agencies from having to purchase, manage, and refresh devices.
Desktop virtualization is also truly enabling the dual-use personal-professional device. And, as you may expect, iPads and Android-based tablets are the devices of choice. But the beauty of VDI is that it is device independent.
So, what is the Riverbed play? How is Riverbed accelerating virtual desktop infrastructure (VDI)? Earlier this year, we announced continued and enhanced support for Citrix XenDesktop. At around the same time, we announced an optimization solution for Microsoft RemoteFX. And, at VMworld in the summer, we announced an upcoming partnership with Teradici, the innovator of the PC-over-IP protocol. Clearly a lot of developments around VDI with more to come.
If you have been keeping count, then you'll know that we're approaching the end of the Federal IT Q&A series with Riverbed technical leader Steve Riley. Next week, tune in for a recap and finale discussion on how everything we discussed (data center consolidation, cloud computing, data protection, mobility and teleworking, and desktop virtualization) is tied together.
But for now, watch the below video Q&A with Steve.
Riverbed will demonstrate its cloud performance solutions at AWS Gov Cloud Summit II, taking place October 18 at the Washington Marriott Metro Center in Washington, D.C. Amazon's AWS Gov Cloud Summit II will provide government IT leaders and agency executives with the information they need to succeed in their cloud computing projects. Attendees can visit the Riverbed station to learn about the company’s cloud performance solutions, which help government agencies meet mandates to consolidate data centers, reduce IT costs, and execute on the Cloud First policy. In addition, Riverbed technical leader Steve Riley will lead a discussion on cloud implementation.
Conference attendees can visit the Riverbed station to learn how the company’s application-aware network performance management (NPM) and wide area network (WAN) optimization solutions provide the network visibility and control that enable the migration of applications and data to cloud environments and accelerate their transmission.
Conference attendees can also learn more about cloud performance and cloud implementation in a panel session with Steve Riley.
What: Cloud Implementation Panel
Who: Steve Riley, technical leader, Riverbed
When: Tuesday, October 18, 4:15 PM – 5:00 PM
Where: Solutions Breakout area
Riverbed will demonstrate its IT performance leadership at Gartner Symposium/ITxpo 2011, taking place October 16-20 at the Walt Disney Dolphin in Orlando, Florida. Gartner Symposium/ITxpo is the IT industry's largest and most strategic conference, providing business leaders with a look at the future of IT. Attendees can visit Riverbed® (booth #309) to learn about the company’s IT performance solutions, which have been selected by more than 13,000 organizations to help consolidate IT infrastructure and reduce costs while increasing employee collaboration and productivity.
Conference attendees can learn about the Riverbed product families. In addition to its core WAN optimization solutions, Riverbed will showcase its wide range of IT performance solutions spanning application-aware network performance management (NPM), application delivery and web content optimization (WCO), and cloud data protection for backup, archive, and disaster recovery.
Teleworking - that is this week’s federal IT initiatives topic. And everyone I have spoken with has an opinion on the subject. One point of debate is whether government agents or employees should be allowed to work remotely. Regardless of your stance on the matter, I talked with Steve Riley about how to enable a teleworking model with the user experience in mind, should users be offered the option to work remotely.
As users work remotely or on the go, they move farther away from the data and applications. So naturally, the cloud seems like the perfect place to store data and applications for access anywhere, from any device, at any time. But therein lies the challenge.
Data center consolidation (moving the data and application farther away from the user) + cloud first policy (mandate) + telework (remote and on the go users) = Challenges
What does Riverbed offer that enables telework models? Steelhead. It works in data centers, in the cloud, and on mobile devices. Steelhead Mobile is based on the same technology as our Steelhead appliances, Cloud Steelhead, and Virtual Steelhead. The bottom line: federal IT leaders can give remote agents the best performance possible, as if they were working next door, even if they are located across the country.
Watch the video Q&A with Steve. As a heads up, Steve drops a stat, citing an IDC report claiming that an exabyte of new data is created every day. To put that in perspective, an exabyte is 10 to the 18th power bytes: a one followed by 18 zeros (1,000,000,000,000,000,000). That is one billion gigabytes of new data every day.
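For the unit-inclined, here is a quick sanity check of that arithmetic in Python, using decimal SI units (1 EB = 10^18 bytes, 1 GB = 10^9 bytes), not binary units:

```python
# Decimal (SI) units: 1 exabyte = 10**18 bytes, 1 gigabyte = 10**9 bytes.
exabyte = 10**18
gigabyte = 10**9

print(exabyte // gigabyte)  # 1000000000 -> one billion gigabytes per exabyte
```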
Where is all that data coming from? Users like you and me.
If you plan to attend Dell World from October 12–14 in Austin, Texas, then come and visit the Riverbed booth to learn about Riverbed and Dell solutions that enable customers to optimize the performance of their IT infrastructure.
As businesses, from SMB to large enterprises, continue to innovate the way they do business and adopt leading solutions for virtualization and cloud, CIOs are examining cost-effective technologies to satisfy their requirements. Moving data and applications to the cloud, data center to data center replication and disaster recovery are all key initiatives affected by virtualization and cloud computing.
Riverbed and Dell have been collaborating as strategic partners to bring IT performance solutions to customers. Specifically, Riverbed Steelhead products with Dell EqualLogic storage help optimize WAN-based iSCSI data replication. Read the solution brief here. And Riverbed Steelhead with Dell Compellent storage delivers accelerated disaster recovery operations and increased data protection. Read more in the Compellent and Riverbed solution brief.
What’s the status of standards for the cloud? What do agencies need to keep in mind as they develop their cloud strategies?
Typically, standards lag innovation. Cloud computing is one of the IT industry’s most rapidly evolving developments—providers add new features and services several times every month, it seems. Although standards around APIs are in their nascent stages, I’m not sure that cloud standards are mature enough yet to be a major part of the decision process for choosing a provider. More important, I think, is that a provider offer a graceful way to retrieve and remove your data and processing workloads should you ever decide to move elsewhere. Look for providers who clearly state that your data belongs to you, not to them. Avoid providers who won’t make this assurance.
What does moving to the cloud mean for an agency's IT resources? Will “regular” IT skills suffice, or is something else needed?
Technical challenges aside, the personnel issue is, in my observation, one of the biggest barriers to cloud adoption. No one ever publicly declares that they’re going to resist cloud for fear of losing their job, but I know from experience that such fears exist. IT staff will require new skills. Good IT staff will relish the opportunities—they can gain a better understanding of the agency’s business and provide greater value. Bernard Golden, CEO of Hyperstratus, regularly writes about how cloud computing will fundamentally alter the human element of IT. The entire history of technological advancement has affected every form of work ever devised. There are no more buggy-whip manufacturers in the United States; the good ones figured out how to build automobile starters.
At what point can you call a cloud-based IT project a success?
To call something a success sounds like it has to reach some kind of conclusion—a way to know that a project is finished. Not to sound evasive, but one intriguing aspect of using the cloud for IT projects is that they never truly have to be done. “Done” is a side-effect of old-style waterfall development methodologies, which began with an end state in mind. Agile development methodologies have largely replaced waterfall development, and cloud computing is the ideal platform for agile development. The cloud’s on-demand resource elasticity permits continuous updates and improvements. IT projects become iterative and can easily adapt to meet the ever evolving needs of agency business. “Done” is no longer a requirement; success comes from knowing that new functionality can be envisioned, developed, tested, and deployed quickly without disrupting existing operations.
What are going to be the major drivers in the government cloud space in the next 3-5 years? Is there anything else that could emerge that's not evident now?
I believe finding a champion to replace Vivek Kundra’s passion is absolutely essential. While ongoing financial pressures could conceivably be the primary (or even sole) driver for government compute consolidation, someone who can keep prodding all agencies with a grand vision is still important at this stage. Also, as IT staff members retire, I’d suggest that agencies look for replacements with some experience developing for and managing cloud resources. Such staff will already understand how to adapt their work skills and strategies as cloud computing continues its relentless evolution. As for predicting how the cloud space itself will evolve, well, today’s reality certainly looks different than predictions from three years ago! I’m certain, though, that the explosive growth of data we’ve seen over the past few years will continue apace. All that data has to go somewhere and the cloud is the best place for that.
What will be your company's strategy for the government cloud space over the next few years?
We’ll continue to strive to make the cloud easier and faster for agencies. We work closely with our Federal customers and partners to ensure we’re building the right products and creating useful guidance. We’ll continue to pursue appropriate certifications and compliance so that agencies can rely on Riverbed’s technology to safely accelerate their move to the cloud.
As I’ve been out talking to people about using the public cloud as a target for data protection, I continue to be surprised by how much pain many organizations regularly go through for backup and recovery, as well as the variety of methods used to protect company data. Of course, the old standby is tape, and even in my days at Data Domain, where the mantra “Tape Sucks” was like a religion, everyone was predicting the rapid demise of that 1928 invention’s role in IT. And yes, tape has lost some of its place in the market for data protection, but it continues to hang around, despite all of the pain that I hear from IT professionals about it.
Why? There are probably as many theories about that as there are about who shot Kennedy, but I think it is safe to say that tape holds on for a couple of reasons:
• Disk is still relatively expensive, even if deduped, and still complex to manage
• Some (though not most) regulatory requirements are best met by tape
• Tape is a known quantity, familiar, “better the devil you know” and all that
So people seem to make do, kludging together a patchwork of solutions to keep ahead of that dreaded backup window, often at the expense of any kind of DR planning. In fact, for most SMBs and SMEs, data protection is only a secondary part of someone’s IT job. So it doesn’t always get attacked with the same vigor and focus as other IT issues. Like I said, people make do.
But that is changing. I’ve been seeing people start to take a look at the potential of doing away with all the cost and hassle of standard data protection solutions and replacing them with the public cloud. I know all about the hype around “The Cloud,” but over the course of this year, the view of the cloud I’ve seen has become more measured, with people asking deeper questions about the implications of using it. For storage in particular, professionals are starting to see that not all storage lends itself as easily to the cloud. The performance implications and management difficulties of moving primary storage to the cloud have tripped up both trial customers and solution providers, and have strengthened the focus for cloud storage on functions such as backup and archiving, which are much better suited to the cloud in terms of performance requirements and storage methodologies. And the majority of people in that camp are looking to jettison the shackles of tape backup and adopt cloud storage.
Mainstream backup solutions are also promoting the extension of data protection to the public cloud. Last week, I wrote about IBM recently releasing a video showing how the Riverbed® Whitewater® cloud storage gateway enables Tivoli Storage Manager users to deploy a drop-in Whitewater appliance and essentially convert all the headaches of managing a backup infrastructure into freed-up capital and hours that can be spent on more pressing IT needs.
I’m sure there will be some data protection issues for which tape is a compelling solution, at least for the near future. But there’s a reason you don’t find 8-track or cassette players in cars anymore, or video tapes at movie rental outfits; indeed, it’s getting hard to find outfits that rent physical copies of movies at all. Even Netflix is separating off its DVD business and applying its golden brand name to its business based on cloud streaming of videos (BTW, Netflix uses Amazon’s Simple Storage Service (S3) for its own business).
Trends are unmistakably toward more and more use of cloud storage. As technologies like Whitewater address the difficulties and/or concerns about using the cloud, this trend can only accelerate. Will tape and disk disappear? No. But if a TSM user can drop a small box in their datacenter and essentially get access to fast, secure, infinitely scalable storage, the rules of the game have undoubtedly changed and cloud storage will command a big seat at the data protection table.
Thanks for tuning in for part three (of five) of the Federal IT initiatives Q&A video series with our illustrious technical leader Steve Riley. As projected by many industry research and analyst firms, data will continue to grow. This is not surprising. And as you may remember, with the Cloud First policy, agencies have a mandate to move data and applications to the cloud. So, for this week's video Q&A, we shift gears, and examine some of the considerations for agencies to protect their data in the cloud.
Steve answers the following:
1. How is data protected in the cloud?
2. What are the technical considerations and strategies for protecting data in the cloud?
3. How does Riverbed, specifically, help protect data in the cloud? Here is a hint – it has something to do with FIPS certification.
Next week, I'm taking a break from posting. But, tune in again October 11 for a Q&A video on teleworking and mobility. It would be appropriate to watch the video on a smart phone or tablet, outside of your workplace.
Today we resume with part two of my three-part series on government cloud computing. Be sure to read part one, in case you missed it.
The benefits of the cloud are supposedly self-evident, but how can agencies actually measure the ROI?
Curiously, in General Alexander’s testimony [see the last question in part 1], while he praised the capabilities of cloud security, he questioned some of the promised economic benefits. Many providers publish online calculators that allow you to compare the costs of a cloud deployment to the costs of running on-premise infrastructures. Frequently these fail to account for the personnel costs of installing and maintaining on-premise equipment, so in one sense they aren’t as good as they could be. However, trying to measure cloud ROI and comparing that to traditional infrastructure ROI ignores the cloud’s most important benefit: elasticity. The cloud allows you to add and remove resources according to demand. Traditional on-premise infrastructures are either under-utilized (and thus waste resources) or over-subscribed (and thus perform poorly). Applications designed to take advantage of the cloud’s elasticity largely eliminate the guesswork associated with predicting demand. A resource availability curve that always matches your demand curve appears a lot like perfect ROI.
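To make the elasticity point concrete, here is a toy cost comparison in Python; every number in it is invented for illustration and drawn from no provider's actual pricing:

```python
# Toy model: a fleet provisioned for peak demand vs. an elastic fleet that
# tracks the demand curve. All numbers are invented for illustration.
import math

# Servers needed each hour over a day, fluctuating between 2 and 18.
demand = [10 + 8 * math.sin(h / 24 * 2 * math.pi) for h in range(24)]
rate = 0.50  # assumed cost per server-hour

peak_provisioned = math.ceil(max(demand)) * rate * 24  # sized for the worst hour, always on
elastic = sum(math.ceil(d) * rate for d in demand)     # pay only for what each hour needs

print(f"peak-provisioned: ${peak_provisioned:.2f}/day, elastic: ${elastic:.2f}/day")
```

The spikier the demand curve, the wider the gap between the two numbers, which is exactly the guesswork elasticity eliminates.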
What does Riverbed bring to this space that sets you apart from others?
Over eight years Riverbed has built a reputation of making wide area networks feel like they’re right next door. As organizations consolidated dispersed branch office resources into fewer large data centers, our technology has helped eliminate the typical problems that arise from computing at a distance. The cloud is a natural next step for us, because in many ways the cloud is similar to a WAN. Users can be situated anywhere and we can apply the same optimization techniques to make applications feel local. With Steelheads of various flavors you can vastly accelerate the movement of data from on-premise to the cloud and back, and also between clouds—even if the providers are different. In our Whitewater appliance we’ve adapted our optimization technology to remove the drudgery from backups, allowing you to point your backup software to a target that in turn compresses, encrypts, and backs up to the cloud—no more tape. For cases where you aren’t able to deploy our flagship symmetric optimization technology, we offer application acceleration that you can add to your cloud-based applications through two recent acquisitions: Zeus and Aptimize. And soon, through our partnership with Akamai, you can accelerate third-party SaaS applications by optimizing the high-latency link between your current location and a point of presence topologically very close to your ultimate destination. Regardless of which cloud providers you choose and what technology they’re built on, we can make any cloud perform better.
Is the cloud necessarily a permanent solution? When does it make sense to use the cloud as a temporary resource?
This would seem to conflict with the “cloud first” mandate and the notion that the cloud is the new default. It can be tempting to consider the cloud as an extension of an existing data center. Unfortunately, such thinking imposes limits—you’re less free to build applications that incorporate full cloud functionality and you can’t move to a full scale-up/scale-down resource curve. Also, I think this can create a mindset where the cloud becomes that “extra” thing that ends up not being managed well, or at all.
Is moving to the cloud strictly an IT issue? What other stakeholders need to be included in the discussions, and why?
IT organizations that choose, on their own, to move production workloads to the cloud do so at their peril. Capacity planning and disaster recovery require input from the agency’s working units. Data location and portability require consultation with legal and compliance teams. Cloud provider procedures and certifications require review by internal audit groups. Budgetary changes require working with finance folks. Don’t allow cloud projects to become line items in some developer’s monthly expense report!
Agencies will need to develop applications and services for their specific needs. Does the cloud change how they do that?
There are fundamental differences in the way applications should be built to run on clouds. Probably one of the most shocking changes is that servers are now disposable horsepower. Infrastructure is code: when you need compute resources, you simply issue a few API calls and within a matter of minutes those resources are available. Vast amounts of distributed storage are also there, waiting for you to allocate and use. In many cases this storage incorporates automatic replication, so you no longer need to build that into your application. Cloud computing also simplifies the process of updating applications: you clone an existing application, add and test updates, then move users over to the new version. Cloud providers often publish detailed technical guidance for how to develop on their particular platforms.
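To give a flavor of what "infrastructure is code" looks like in practice, here is a minimal sketch using boto3, Amazon's Python SDK; the SDK choice, image ID, and instance type are illustrative assumptions, and every major provider offers an equivalent API:

```python
# Minimal sketch: provision compute resources with a few API calls.
# The image ID and instance type below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t3.micro",  # placeholder instance size
    MinCount=1,
    MaxCount=4,               # ask for up to four disposable servers in one call
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["State"]["Name"])
```

Within minutes those servers are running; when the work is done, a single terminate_instances call throws the horsepower away.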
Part three will follow two weeks from today.
IBM recently highlighted new options for data protection using Tivoli Storage Manager in a company Flash video about the benefits of cloud storage.
The video describes how to think through a cloud strategy and how Riverbed's Whitewater cloud storage gateway enables TSM users to replace tape and disk backup with cloud storage at significant cost and management savings, all without any changes to their TSM environment. Whitewater maximizes data transfer performance and secures data both locally and in the cloud while minimizing capacity requirements with deduplication and compression. Essentially, Whitewater looks and acts like you have the cloud as a backup disk target right in your datacenter.
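Riverbed doesn't publish Whitewater's internals, but the general technique of deduplication is easy to sketch. The toy Python below stores each unique chunk exactly once; it illustrates the concept only and is not Riverbed's implementation:

```python
# Toy content deduplication: store each unique chunk exactly once.
# Real products layer variable-size chunking, encryption, and cloud
# replication on top of an index like this.
import hashlib
import zlib

CHUNK_SIZE = 4096
store = {}  # fingerprint -> compressed chunk

def backup(data: bytes) -> list[str]:
    """Return the recipe (list of fingerprints) needed to restore `data`."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:                    # transfer and store only novel chunks
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    return recipe

def restore(recipe: list[str]) -> bytes:
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)
```

Because successive backups of mostly unchanged data produce mostly familiar fingerprints, each new backup sends only the chunks that actually changed.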
If you are one of the many TSM users that are tired of struggling with cumbersome tape or expensive disk backup systems, take a look at the video and see if Whitewater and the cloud can help address your data protection headaches.
For part two of our federal IT initiatives Q&A with Steve Riley, we focus on the Cloud First policy. Now, if you do not work in the federal IT space, the Cloud First policy is the federal government's strategy for cloud computing, which is part of the greater plan to reform federal IT. The general estimate is that $20 billion of the federal government's $80 billion in IT spending could be used for cloud computing.
In the enterprise, cloud computing is a trend that has been discussed, and migrated to, for many years. However, the push for cloud computing in the federal IT space was kicked off and championed by Vivek Kundra, the U.S. government's first CIO. And although Kundra left his post last month, former Microsoft executive and managing director at the Federal Communications Commission Steven VanRoekel has taken up the position and the reins, and plans to use Kundra's grand vision for IT reform as a foundation for even greater changes to federal IT.
Grab your coffee, tea or something stronger, and watch the below Q&A, which covers what spurred the Cloud First policy (cost reductions and collaboration among agencies), considerations (safety and security), as well as how Riverbed helps agencies to execute on the Cloud First policy.
Stay tuned. Next week, we'll talk about data protection.
It's the final countdown – not only for the 2011 federal fiscal year, but also for a major phase in the Federal Data Center Consolidation Initiative (FDCCI), which is integral to the 25-point plan for reforming federal IT. By September 30, officials at federal agencies are required to complete their data center consolidation plans and to-date progress reports. The following week, the plans will be posted to CIO.gov. And every quarter, agencies must post the data centers they plan to close, as well as provide an update on what has already been closed or consolidated. A list of the data centers planned for closure to date is available here.
If you're not familiar with the FDCCI, which was launched in February 2010, the high-level summary is that federal agencies are required to close 800 of the U.S. government’s 2,094 data centers by 2015. As a milestone toward that goal, 373 federal data centers will be closed by the end of 2012.
Why is there a push to consolidate data centers? One of the key objectives is to save $3 billion annually, mainly from gaining efficiencies in energy consumption, maintenance and management of data centers. Another objective is to gain IT efficiencies across agencies and foster greater collaboration. But, this should not be done at the cost of performance, especially to the user experience.
Today's video kicks off the first in a series of video interviews in which I ask Steve for his perspective on a federal IT initiative – FDCCI, Cloud First and cloud computing, data protection, telework and mobility, and desktop virtualization – what federal IT leads should take into consideration, and how Riverbed addresses challenges around IT performance to help ensure success.
Below is the first video interview, on data center consolidation. Steve discusses key considerations for determining which data centers to consolidate based on the applications and types of information they host. He also outlines the challenges associated with moving applications and information farther away from users, as well as how to ensure that the user experience is not degraded but optimized. The takeaway: when data centers are dispersed across great distances, yet users stay put and are accustomed to accessing applications that were located on-premises, the WAN becomes even more critical. To make FDCCI a success without impacting the user experience, federal IT leaders will want agents to feel like the application is hosted locally. In short, Riverbed accelerates the movement of data, information and applications, and eliminates the latency that is often associated with computing over great distances. Why is this a consideration? With all the strides to bring efficiencies and reduce costs, ensuring that an agent's productivity is not impacted should also be near the top of the list of considerations.
Be sure to tune in every week over the coming several weeks for more interviews with Steve.
One of Vivek Kundra's most significant contributions in his position as first CIO of the United States was to introduce a "cloud first" policy for government computing projects. Mr. Kundra's replacement, Steven VanRoekel, vows to continue this policy, which will help numerous government agencies streamline their missions and improve citizen services.
Recently I was interviewed as part of a series of technology provider perspectives on government cloud computing. I'd like to share that interview with you, our blog readers. I plan to post the questions and answers in a three-part series, the first of which follows here. As always, we welcome your thoughts and reactions.
Agencies are under a “cloud first” mandate for procuring IT services, so awareness of the cloud should be there. But what's the level of understanding about how agencies can benefit from it?
Cloud providers love to wax rhapsodic about the benefits of utility computing, and you can find plenty of appealing goodness on their marketing web pages. What’s missing, I think, is a way for agencies to translate the generic promises into specific benefits that they can then measure. Of course, this means you already need a fairly good understanding of what you have, what works well, and what doesn’t work well. From this you can then more easily evaluate the benefits of the cloud in general and also compare specific benefits of various providers. Unfortunately, if you don’t have a good idea of what you’re already doing, it’s difficult to truly know whether moving to the cloud will bring positive results.
Is moving to the cloud a “no brainer” for agencies, and they should just go ahead and do it? What process do they need to go through to decide if they are ready?
Assuming you can accurately translate the promises into measurable benefits, I’d say yes, agencies should adopt cloud computing as the new default deployment model for new projects and for existing projects that are planned to undergo a refresh cycle. I don’t like characterizing it as a “no brainer,” though. To wring maximum value from a cloud deployment requires a fair amount of brains: cloud architecture is fundamentally different from traditional on-premise architecture, and this is reflected in how you develop applications, where you locate data, how you plan for disaster recovery, and how you implement information security controls.
Are there any agency applications or services that should never move to the cloud, or is everything an agency does open to that move? In either case—why?
One way to influence change is to set new defaults. For example, in states where applicants for driver licenses have to opt in to organ donation, only 20% chose to do so—vastly limiting organ availability. Some states have reversed this; drivers are organ donors by default unless they opt out. 80% stick with the default, and all residents of these states benefit from the greater availability of organs. So the “cloud first” mandate along with the mental shift to cloud as default requires that an agency must obtain an exception if it wishes to deploy a project on premise. If you make the exception process sufficiently painful, this will discourage agencies from inventing convenient excuses to continue doing things the old (meaning familiar) way. Clearly there are certain exception criteria that will prevent some workloads from moving to shared infrastructures. But does each one need its own dedicated data center? Could, perhaps, all these workloads share a single private “top secret” cloud? I’d imagine so.
How can agencies decide which flavor of cloud—private, public, or hybrid—is right for them?
It doesn’t make much sense to choose a deployment model from the start and then attempt to force all workloads into that one model. Different workloads can use different models—that’s one of the neat things about cloud and emerging technologies that make it easy to port workloads between clouds. So I’d say that the decision of which deployment model to use for any particular workload is driven by the answer to the previous question and, of course, the following question.
Many potential agency users of the cloud believe it's not yet secure enough for their needs. Are they right?
Perhaps we should let General Keith Alexander, chief of the US Cyber Command, answer that one for us:
“This architecture would seem at first glance to be vulnerable to insider threats—indeed, no system that human beings use can be made immune to abuse—but we are convinced the controls and tools that will be built into the cloud will ensure that people cannot see any data beyond what they need for their jobs and will be swiftly identified if they make unauthorized attempts to access data... The idea is to reduce vulnerabilities inherent in the current architecture and to exploit the advantages of cloud computing and thin-client networks, moving the programs and the data that users need away from the thousands of desktops we now use—up to a centralized configuration that will give us wider availability of applications and data combined with tighter control over accesses and vulnerabilities and more timely mitigation of the latter.”
These are quotes from his testimony to Congress in March 2011. His statements reveal a remarkably keen understanding of where risk to information lies and how to mitigate those risks. If the world’s largest online retail company stores and retrieves its entire product catalog from the public cloud, if Treasury.gov, Recovery.gov, and NASA all use the public cloud, if major pharmaceutical manufacturers use public cloud resources for testing the protein folding sequences of trade-secret chemical compounds, if the world’s largest movie streaming/subscription service runs its whole business—front and back office plus its intellectual property—from the public cloud, then just who are these people who claim “oh, the cloud isn’t secure enough for me”? Cloud providers are under constant pressure to prevent their services from becoming attractive to bad guys and to make it exceptionally difficult for one customer to interfere with another. And they’re constantly striving to obtain ever more stringent certifications. That’s a lot of work, more work than most private or single-purpose data centers have the staff or budget to undertake. Now, having said all that, if your cloud provider refuses to be transparent about how they manage their security, I suggest you take your business elsewhere.
Part 2 will follow two weeks from today, and part 3 will follow two weeks after that.
Good Monday morning! Evan Marcus, esteemed caretaker of this here fine blog, recently updated our posting schedules. Since I rather enjoy writing, I volunteered for an every-other-Monday slot. Coincidentally, then, I've got something to share with you: our first interview on Cloud Cover TV.
I've known Jo Maitland, executive editor at SearchCloudComputing.com, ever since my days at Amazon Web Services. Recently I joined her at TechTarget's San Francisco studios for an in-person interview. We discussed a range of topics, covering the state of cloud security and Riverbed's broadening of its product portfolio. We also talked about why I felt the time was right to take what I learned during my 18 months at AWS and apply it to what I see as the next major barrier to cloud computing: performance. While security still tops all surveys asking respondents what's stopping them from moving to the cloud, I'm having trouble squaring that with what I've actually seen: large enterprises in verticals handling sensitive information are already running production workloads on various clouds. And for many of these customers, once they design proper security, performance is usually the next hurdle to tackle. Moving data fast is what we do best here at Riverbed. As companies migrate their IT into the cloud, I can't imagine a better place to work.
Anyway, please enjoy the interview. And let us know what you think; we love feedback!
Today's Guest Blogger is Mark Day, Riverbed's Chief Scientist.
Back when I was studying electrical engineering, I first learned about lumped circuit models. In a lumped model, we considered the wires to be ideal and just focused on the behavior of the connected components (resistors, capacitors, inductors). After we knew what we were doing (more or less) in that simplified world, we learned about transmission-line models, where we modeled the behavior of the wires. And we learned that for certain kinds of real-world problems like managing an electrical grid, a lumped model would give you hopelessly wrong answers.
A little later, when I was in graduate school for computer science, I read an entertaining rant about ideal wires vs. the reality of building a fast parallel computer. Today that item came back to mind as I was thinking about explaining WAN issues and cloud performance to people whose frame of reference (their model, if you will) might be mostly LANs.
I was happy to find that my memory had mostly served me correctly. Here’s a relevant excerpt from Danny Hillis’s book The Connection Machine (MIT Press, 1985):
“Fundamental to our old conception of computation was the idealized connection, the wire. A wire, as we once imagined it, was a marvelous thing. You put in data at one end and simultaneously it appears at any number of useful places throughout the machine. Wires are cheap, take up little room, and do not dissipate any power.
“Lately, we have become less enamored of wires. As switching components become smaller and less expensive, we begin to notice that most of our costs are in wires, most of our space is filled with wires, and most of our time is spent transmitting from one end of the wire to the other. We are discovering that it previously appeared as if we could connect a wire to as many places as we wanted, only because we did not yet want to connect to many places.”
I think you can see that some of the same reality-check critique applies to ideas about the performance of cloud computing and distributed computing across WANs, with the network taking the place of the wire. It sure is simple when the network’s behavior doesn’t matter, but unfortunately that isn’t always true.
At Riverbed it’s nice to have a variety of technologies that can be brought to bear on those network-related issues. One subtle problem is that because we think about it all the time, it’s easy for us to take for granted that “switch of models” that is sometimes harder for our customers. People sometimes have to shift from assuming that everything “just works” in ideal fashion, to actually thinking about the WAN as an element of the system.
Does anyone else miss writing those high school history papers? You know, the ones that all start “With the Industrial Revolution came widespread change across the economic, political and social fabric of western civilization…” Blah, blah, blah. Anyone? Anyone?
Not so much, eh? Okay, it wouldn’t be the first time I’ve flown solo in the history-nerd department.
But the reason the Industrial Revolution was such a great opener for many a high-school history paper was that it really did transform the economics of production, which had wide-reaching implications for modern civilization. Production went from small-scale, localized cottage industry to large-scale, concentrated production centers benefiting from economies of scale and scope. Various innovations, from the flying shuttle to the assembly line, were instrumental in creating the efficiencies of Industrial Revolution factories, but at the end of the day, creating more products more quickly wasn't worth much unless you had exposure to enough customers to buy them. In other words, you had to get all those products to market.
Enter the steam engine. With a steam-powered railway infrastructure, industrialized manufacturers could get their products to more markets, faster. Which is a good thing when you just churned out more pairs of pants in a year than everyone in a hundred mile radius could wear in their combined lifetimes.
So, why am I going off about steam engines and the Industrial Revolution? Well, here in IT land, we’re having a bit of an Industrial Revolution redux. Virtualization has enabled IT administrators to consolidate servers and gain economies of scale, and companies like Amazon, Rackspace and AT&T are beginning to offer basic IT services on-demand, passing on even GREATER economies of scale. But all the cheaper, on-demand compute and storage in the world isn’t worth much if you can’t get the product (applications) to market (users).
We need a steam engine for the cloud revolution. Oh, wait! WAN optimization has proven to accelerate network-based applications by reducing bandwidth consumption and the impact of latency. Choo-choo!! Layer on network performance monitoring (who’s keeping track of all these trains?), web content optimization (how are we loading these products on the trains?), and application delivery controllers (what train is going where, when?) and you have yourself the speed and intelligence for a high-performance cloud delivery system.
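Why do latency and round trips matter so much? A back-of-the-envelope sketch in Python makes the point; every number below is invented for illustration:

```python
# Toy model: a chatty protocol over a WAN. All numbers are invented.
rtt = 0.080              # 80 ms round trip, e.g. coast to coast
round_trips = 2000       # a chatty protocol making many small requests
link_bps = 10e6          # 10 Mbps of bandwidth
payload_bits = 50e6 * 8  # a 50 MB transfer

on_the_wire = payload_bits / link_bps  # serialization time: 40 s
waiting = round_trips * rtt            # time lost to latency: 160 s

print(f"wire: {on_the_wire:.0f}s, latency: {waiting:.0f}s")
# Doubling bandwidth halves only the 40 s; cutting round trips
# (what WAN optimization does) attacks the 160 s.
```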
Extra credit: Join Amazon Web Services Senior Evangelist, Jeff Barr, and me on August 17 for a webinar on how to optimize your cloud server deployments. Register here!
This has long been an area of attention at Riverbed; for years now we have been helping enterprises address and solve the challenges they've faced with business applications performing poorly across their private WANs. Riverbed's award-winning Steelhead family of WAN optimization appliances has held a leading position in the global market for the last several years, according to several leading industry analyst firms.
Now, in the era of cloud-based IT services, the performance problems created by the increased distance between users and their data, combined with the lack of QoS and unguaranteed Internet performance, are significantly worse than those faced within a structured and well-known corporate IT environment. Thus the need for performance optimization in cloud environments is even greater than in traditional, private corporate IT.
These requirements have prompted Riverbed to develop and offer a whole range of products and technologies to address the vast majority of cloud-based IT applications and environments. In his recent blog post, David mentioned only one Riverbed product in this context, the Steelhead appliance.
In addition, though, Riverbed has the following products available to address the acceleration and optimization needs of virtual and cloud environments:
Additionally with the recent acquisition of both Zeus and Aptimize, Riverbed now also has two new Single-Ended technologies - Application Delivery Controller and Web Content Optimization - to help accelerate both public and private cloud-based web content and applications.
So in summary, Riverbed really should be your first port of call for any cloud IT service acceleration & optimization requirements.
Riverbed Technology (NASDAQ: RVBD), the IT performance company, today announced that it has acquired Aptimize Limited, a privately-owned company that is a market leader in web content optimization. The Aptimize organization, based in Wellington, New Zealand, will become the new Web Content Optimization product group, led by the former CEO of Aptimize, Ed Robinson. Riverbed® also announced today the acquisition of Zeus Technology, a privately-owned company that delivers high-performance software-based load balancing and traffic management solutions for virtual and cloud environments. The acquisitions of the two companies will form the cornerstone of Riverbed’s asymmetric optimization strategy.
It's a natural transition for us for so many reasons to add these technologies to our portfolio. At the end of the day, customers have come to rely on Riverbed to solve their performance problems for any application over any network. They don't necessarily care what tool they use - a WOC, an ADC, or NPM - just so long as their businesses can operate the way they need to.
Zeus and Aptimize make sense because they have created software the way application owners and devops teams want to consume application delivery - built right into the application stack. They are designed to be deployed into modern public and private clouds, unlike many of the legacy hardware ADCs sold today.
With Zeus and Aptimize, customers will get faster, more reliable, more secure Web applications, regardless of whether they are consumer facing or behind the firewall. We look forward to sharing this new technology with you in the coming days!
Riverbed and EMC are coming to a city near you! Starting today, in Chicago, you can meet with WAN optimization and data protection experts from Riverbed at EMC Forum events, and learn how to effectively and efficiently protect your data in private, public and hybrid cloud environments. At these day-long events, you will experience keynotes, breakouts and exhibits – all focused on addressing performance for managing your data in the cloud. Riverbed executives will discuss how WAN optimization plays a critical role in a data backup cloud infrastructure, off-site data security, lowered data protection costs, optimized storage utilization and lowered TCO from automation, as well as demonstrate integrated Riverbed and EMC solutions for data protection and backup to the cloud.
Join Riverbed at the following EMC Forum events:
For additional information on the EMC Forum events, including event locations and registration, visit http://www.emc.com/campaign/global/forum2011/.
Also, learn more about how Riverbed and EMC work together — for your data protection needs — at http://www.riverbed.com/emc/.
The 4th of July, the Glorious Fourth, Independence Day, and the birthday of the United States of America are all names used to recognize the day the Declaration of Independence was adopted by the Continental Congress. This day marked the legal separation of the 13 colonies from Great Britain and affirmed that the people of the United States have certain unalienable rights.
The second sentence of the Declaration of Independence states: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.”
Much like the 13 colonies that declared their independence on July 4, 1776, knowledge workers should also have unalienable rights, including the liberty to access data anywhere, at any time, without having to deal with the bandwidth and latency issues that, we would all agree, would dampen our pursuit of happiness if left unaddressed!
So in tribute to the 4th let me suggest 4 ways to gain independence and performance happiness.
1. Know your applications, and choose a WAN optimization vendor that can give you application-specific acceleration. Riverbed prides itself on supporting the broadest range of layer-7 application-specific optimizations of any WAN optimization vendor. Chances are that we can improve the performance of the applications you are running.
2. Make sure that your choice supports all environments - data center, branch office, mobile workers, VDI, private cloud, public cloud, etc. Riverbed offers WAN optimization technology that delivers best-of-breed performance for all environments eliminating the need to have different vendor interfaces, integration, and performance standards.
3. Know where your performance issues are and have the right tools to quickly drill down to the root cause. Riverbed’s application-aware Network Performance Management (NPM) integrated architecture gives customers robust real-time network and application performance analytics, resulting in complete top-down visibility into their network and applications. Cascade is the only NPM solution on the market that fully consolidates real-time business-level performance views with packet-level analysis all in a single data set that users can seamlessly navigate in order to quickly and accurately diagnose and troubleshoot network and application performance problems.
4. Choose a vendor that has industry-leading support. In April of this year Riverbed was recognized and certified by J.D. Power and Associates and TSIA for excellence in Global Customer Service and Support. These certifications acknowledge excellence in delivering outstanding service and support on a worldwide basis to Riverbed customers. Riverbed is one of a select few companies to receive this distinction for global certification under both the J.D. Power and Associates CTSS and the TSIA Excellence in Service Operations programs in the same year.
A typical 4th of July tradition is to celebrate by hosting a barbecue, watching fireworks, or decorating in the colors of the American flag – red, white, and blue. We encourage our Riverbed customers to celebrate their performance independence in the spirit of the 4th.
So whether you enjoy a nice steak and Steelhead, have some Cascade and cocktails, or decide to parade the streets decorated in Riverbed orange, we wish you a very happy Independence day!
Las Vegas was abuzz last week as networking vendors showcased their latest and greatest product offerings. Taking a stroll around the Interop expo floor, it was obvious that cloud computing was the hot topic for vendors ranging from Intel to F5.
Vendors were demonstrating a variety of cloud technologies that essentially help to overcome many of the obstacles that companies face when leveraging the cloud for infrastructure, platforms, and software as a service. Much of the focus of the show was on security, management, and connectivity.
Riverbed, on the other hand, focused on another key cloud consideration: performance. As I blogged about last week, performance is a major consideration for organizations looking to take advantage of the benefits the cloud has to offer. Riverbed hammered home this point with a daily booth demonstration of its Cloud Steelhead product, which essentially provides LAN-like performance when accessing servers and applications hosted in the cloud.
To further demonstrate the cloud performance point, Riverbed held a contest where we asked folks to guess how long it would take to transfer (unoptimized and optimized) a 50MB file from the cloud to the Interop show floor.
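For anyone who wants to play along at home, the unoptimized math is simple, and data reduction changes it dramatically; the link speed and reduction ratio below are assumptions for illustration, not the contest's actual numbers:

```python
# Back-of-the-envelope transfer time for the 50 MB contest file.
# The link speed and data-reduction ratio are assumed for illustration.
file_bits = 50 * 10**6 * 8  # 50 MB expressed in bits
link_bps = 5 * 10**6        # assume a 5 Mbps path from cloud to show floor

unoptimized = file_bits / link_bps       # 80 seconds
optimized = file_bits * 0.10 / link_bps  # assume 90% data reduction: 8 seconds

print(f"unoptimized: {unoptimized:.0f}s, optimized: {optimized:.0f}s")
```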
I believe our very own Sr. Director of Marketing Miles Kelly summed it up nicely when he said that Interop demonstrated the cloud moving from hype to reality.
How good is your cloud? It's a hard question to answer. Part of it depends on how you define cloud, and part of it depends on how you define good. At Riverbed, we don't have plans to define the first one for you (plenty of other people are doing that); and the question of "good" always comes down to performance in our minds. Is it local-like performance? And do you get that performance with the reality of cloud-like cost efficiency? In the spirit of driving performance of the cloud higher, I'm excited to announce Riverbed's strategic partnership and product direction with Akamai, the leader in Internet performance optimization.
Earlier today, Akamai and Riverbed announced their intention to develop a joint application acceleration solution for hybrid cloud networks that leverages the combination of Internet optimization and WAN optimization. The planned solution will accelerate the broad array of cloud-based applications.
So what's the story? You already know that Riverbed has some pretty powerful cloud acceleration tools like Cloud Steelhead, Virtual Steelhead, and Whitewater. This new offering from Riverbed and Akamai will add to this toolset, giving enterprises a way to leverage their existing Riverbed Steelhead investment while combining it with the incredibly powerful Akamai Distributed Edge Platform.
This offering is designed for businesses and government organizations of all sizes that would like to use public cloud resources and receive the same local-like performance that their end users depend on to be productive with private cloud applications. The offering is targeted primarily at Software as a Service offerings. Examples of those offerings include Microsoft Office 365, Salesforce.com, NetSuite, SuccessFactors, and many, many more.
SaaS is becoming more and more important to businesses, but enterprises have a distinct lack of control in terms of distance (where the cloud data center is located), data (how much of it is required to go back and forth over long distances), and access (bandwidth, efficient routing, and even access to the data center itself to place performance optimization technologies). The end result? Customers needed a new way to accelerate SaaS applications that would address private WAN performance, address Internet performance, and operate across many SaaS applications without access to the SaaS data center itself.
There's more to this partnership than just a logo exchange - there's real technology integration going on behind the scenes. Riverbed will integrate Akamai Internet optimization software directly onto the Steelheads that live in enterprise data centers, essentially extending the edge of the Akamai footprint to the enterprise data center. At the same time, Akamai will integrate Riverbed Steelhead technology into the Akamai edge platform, extending the customers’ Steelhead footprint up to the doorstep of the SaaS provider.
The result is that the combined optimizations of Akamai and Riverbed will deliver end-to-end acceleration, from the front doorstep of the broad array of available SaaS applications all the way down to the branch office and mobile worker of the enterprise. This ability to accelerate end to end, from the front doorstep of the SaaS provider all the way to the branch or mobile enterprise user, solves the hybrid cloud network challenge.
The 'hybrid cloud network' deserves a little extra explanation. When you think of accessing an application, you typically do it in one of two ways. If it’s an application in your enterprise’s private cloud, you access it over the WAN. If it’s a general web site, or a partner portal, you access it over the Internet.
But things are changing and getting more complex. With SaaS applications, remote & branch enterprise users are typically backhauled across the private WAN and then go across the Internet to access an application – accessing an application over a hybrid network. While enterprises have tools to accelerate WANs and separate tools to deal with Internet performance limitations, they do not have an integrated solution that solves the problem end-to-end in a seamless fashion.
Combining the performance constraints of the Internet with the performance constraints of the private WAN makes for a big, hairy challenge. That's why there's no better combination than Riverbed and Akamai to solve it. Combine the largest and smartest Internet optimization platform with the smartest WAN optimization technology from the WOC market-share leader, and you've got both the brains and the brawn to solve the SaaS performance problem right.
Welcome to the Riverbed partner family, Akamai!
Having spent a large part of my career working in information security, both at Microsoft and Amazon Web Services, I tend to read a lot of security news -- especially when it invokes "cloud." (Indeed, at the New York Cloud Expo in June, I'm delivering an entire presentation on cloud security.) A very interesting bit of news crossed my RSS reader the other day. A couple quotes:
The idea is to reduce vulnerabilities inherent in the current architecture and to exploit the advantages of cloud computing and thin-client networks, moving the programs and the data that users need away from the thousands of desktops we now use -- up to a centralized configuration that will give us wider availability of applications and data combined with tighter control over accesses and vulnerabilities and more timely mitigation of the latter.
This architecture would seem at first glance to be vulnerable to insider threats -- indeed, no system that human beings use can be made immune to abuse -- but we are convinced the controls and tools that will be built into the cloud will ensure that people cannot see any data beyond what they need for their jobs and will be swiftly identified if they make unauthorized attempts to access data.
These words are from Gen. Keith Alexander, chief of the U.S. Cyber Command, in testimony to Congress during March. Especially notable is the chief's view that clouds can be built sufficiently secure, even though they have yet to prove their promised savings of manpower and money. Wow!
It reminded me of a regular mantra from my security talks: the answer to the question of "How much security?" is "Just enough." Of course, quantifying "just enough" takes a bit of work. Alas, with so many security checklists floating across the 'tubes, it's tempting to blindly follow someone else's advice. This is exactly the wrong thing to do.
A key point to remember is that many security decisions involve making some kind of tradeoff. Bruce Schneier describes this very well in the beginning of his TED talk.
To be secure in the cloud requires trading off one form of control for another. Traditional security controls are grounded in location: if you know where something is, and you can claim ownership of it, then it's probably secure. If you don't know where something is, and someone else appears to own it, then it's probably not secure.
In the cloud, location-based security as a concept falls apart. You can't pinpoint the exact location of your data (building, room, rack, unit, drive). Someone else is the steward of your data -- though cloud providers should be clear that you still retain full ownership. Does this mean that, to achieve the promised benefits of cloud, your tradeoff requires giving up all security?
No. The tradeoff you make is one of kinds. You give up the old model and instead adopt a new one. This model is built from service level agreements, auditable security standards, and encryption plus digital signatures. You can retain control of the data even though you don't have control of the infrastructure. In one respect, the model isn't so new: we use it already for connectivity. Where shared pipes (the Internet) have replaced dedicated pipes (leased lines), we rely on the three elements to keep data in transit secure. The model extends to compute and storage, as well.
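As a concrete taste of "encryption plus digital signatures" keeping you in control of data on infrastructure you don't control, here is a minimal sketch using Python's cryptography library; the library choice and workflow are illustrative assumptions, and Fernet bundles AES encryption with an HMAC integrity check:

```python
# Minimal sketch: encrypt data client-side before handing it to a provider.
# A Fernet token is AES-128-CBC encryption plus HMAC-SHA256 authentication.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stays with you; it never leaves your premises
f = Fernet(key)

ciphertext = f.encrypt(b"records the provider must never read")
# upload_to_cloud(ciphertext)  # hypothetical upload call; the provider
                               # stores only ciphertext it cannot decrypt

plaintext = f.decrypt(ciphertext)  # only the key holder can recover the data
                                   # and verify it was not tampered with
```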
More to Gen. Alexander's point, he hints at something I call a disinterested third party. Cloud providers don't know about the context of your data and how valuable it is to you. This can reduce insider threats a lot. Providers work to build massive scale with as much automation as possible: fewer humans means fewer errors and less risk. Fundamentally, "how much security?" isn't the right question. Instead, ask yourself "how much risk?" Security decisions guided by sound risk assessment always strike the right balance and make the right trade-offs.
You might be wondering why I chose this moment -- given the recent troubles experienced by cloud providers and other online services -- to write a positive article about cloud security. One could argue that there's never a good time, so why not write when cloud security is on everyone's minds? Cloud computing solves a lot of problems really well. And it's maturing -- compared to just a couple years ago, offerings are more diverse and flexible, coming from well-known and trusted companies. If cloud security is becoming good enough for all but the most sensitive workloads of the Department of Defense, it's probably becoming good enough for the rest of us, too.
A few days ago, Evan wrote about our plans for Interop 2011 in Las Vegas. The first Interop event I attended was waaaay back in 1995, during its Networld + Interop days in Atlanta. I was working for an electric utility company at the time, and my major project was connecting the corporate network to the Internet, configuring firewalls, and writing policies. The show's big deal then was for vendors to demonstrate interoperability; I recall being overwhelmed by the sheer amount of live network gear on the expo floor and (for the most part) it was all working very smoothly. The best part was the "behind the scenes" tour -- I learned things there that I could immediately apply to my work.
I hope to return the favor next week during the two panel discussions I'm participating in. The first panel, Optimizing Hybrid Cloud Communications, occurs on Monday 9 May at 3:35 PM in South Seas B, and is part of the larger private cloud day portion of the Enterprise Cloud Summit. As the cloud marketplace and its technologies begin to mature, we're seeing enterprises adopt a mixed strategy -- they're happy to move some workloads to public clouds, while they prefer to keep certain workloads on-premise. These on-premise workloads have characteristics that suit them to cloud-like development and deployment, and occasionally may even use public clouds for a subset of their tasks. "Stretched clouds," then, might actually become the prevalent model. A typical enterprise knowledge worker might be using a local resource at one moment, then a public cloud resource at another. As users switch among these resources, how can enterprises ensure a consistent experience, one that always feels local and keeps productivity high? We'll explore how to achieve this goal during the panel discussion.
The second panel, New Age of WAN Optimization, occurs on Thursday 12 May at 11:30 AM in Breakers L, and is part of the networking conference track. Each panelist will deliver a brief presentation -- mine illustrates some interesting examples of optimization to, from, and between clouds. Following the presentations we'll take questions from the audience.
I plan to lurk in our booth in the expo hall during the remainder of the event. Hope to see you next week!
Both are seminal achievements that are much easier to understand than they might otherwise seem. While the result of dragging Scotch tape through pencil shavings earned the two Russian-born scientists science's greatest honor, Cloud Steelhead has the ability to earn you a similar distinction within your organization.
With this release, Cloud Steelhead adds compatibility for ESX-based public cloud environments and extends its cloud partner ecosystem.
Cloud Steelhead now offers validated technical compatibility with a number of cloud service providers, including Terremark, ZettaServe and Xtium, as well as solution technology partners CloudSwitch and Media Platform. These companies join Amazon EC2 and VPC as part of the wide ecosystem of cloud services supported by Cloud Steelhead.
Just as Alfred Nobel made his name with a BANG, Cloud Steelhead can be just as impactful on your cloud infrastructure, speeding your migration to, and performance from, the cloud for all of the applications you run on Amazon Web Services or other ESX-based environments. Just think: all of these benefits, without that pesky trip to Sweden....
While some enterprises are still unsure of how the cloud will benefit them, conference organizers certainly don't share that trepidation: my calendar is packed with events. I wish I could say that people have agreed on what the cloud is, but no -- not only is cloud computing keeping the events industry afloat, it provides a platform (get it?) for endless debate and gasbaggery.
Since I always enjoy being in the middle of the fray, I'll take my turn. Fundamentally, the cloud is one great big WAN. And where there's a WAN, there's...a...(here it comes)...way! To eliminate inherent performance problems! To provide users that lush, LAN-like experience! (Hm, is that an orchestra swelling in the background somewhere?)
The performance challenges posed by cloud computing are really very similar to those posed by other forms of computing from a distance. Fortunately, you can solve them in much the same way you solved the general WAN performance problems you encountered during your data center consolidation project. I'm working on a paper about some of the cool things you can do with Cloud Steelhead and Amazon EC2. I'd like to float a few of my ideas here.
Many organizations get started with cloud computing by duplicating an important (but not critical) application in the cloud for a testing period. If the testing is successful, the on-premise deployment is decommissioned and the cloud deployment becomes primary. Steelhead can help this project succeed in two ways: by accelerating the bulk data transfer during the migration itself, and by optimizing user traffic once the cloud deployment takes over.
Distributed organizations can dispense with centralized IT altogether and instead deploy everything in the cloud. In some cases, venture capital funding even requires this. The economic and technical benefits are clear: applications can be written to take full advantage of the cloud's capabilities, and resources can be added or removed to match demand. Bandwidth usage in these scenarios can be quite high, though. This is an ideal opportunity to combine branch office Steelheads, mobile Steelheads, and Cloud Steelhead together to drive further cost reduction from your monthly telecommunications and cloud transfer charges.
Cloud Steelheads can communicate with each other, too. Global enterprises using AWS often need to transfer data from one region to another. Because AWS relies on public Internet connections for data transfer between regions, Cloud Steelhead can accelerate these transfers and reduce your bandwidth costs, often by a significant amount. When you configure a Cloud Steelhead in each region, the auto-discovery agent will detect the Steelheads and ensure that cross-region traffic is optimized.
You can take advantage of AWS's international peering agreements to accelerate traffic to other cloud providers that don't offer Steelhead as an option in their data centers or on virtual machines. As an example, assume you’re a business based in Australia and you wish to consume PaaS or SaaS style cloud services from a provider whose closest data centers are in Singapore. Even under the best of network conditions, latency between these locations can be unbearably high. It's likely, though, that AWS's Singapore region is topologically very close to your ultimate destination, perhaps with latencies as low as five milliseconds. You can install Steelhead appliances in your Australia office, deploy a few Cloud Steelhead instances in AWS Singapore, and route traffic to and from your ultimate destination via AWS. Users will experience performance almost like that of a LAN because the long-distance, high-latency links are carrying only optimized traffic.
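The arithmetic behind that claim is worth a moment. A single TCP connection can never move data faster than its window size divided by the round-trip time, no matter how much bandwidth is available. A quick sketch (the RTT figures are illustrative assumptions, not measurements):

```python
# Why latency, not bandwidth, caps long-haul performance: a TCP sender
# can only have one window of unacknowledged data in flight, so
# throughput <= window_size / RTT regardless of link capacity.

def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # a classic 64 KB TCP window

for label, rtt_ms in [("LAN", 1.0),
                      ("Australia to Singapore (assumed)", 150.0),
                      ("to a nearby AWS region (assumed)", 5.0)]:
    print(f"{label:35s} {max_tcp_throughput_mbps(WINDOW, rtt_ms):8.1f} Mbps")

# At 150 ms, a 64 KB window caps a single connection near 3.5 Mbps.
# Keep only optimized traffic on the high-latency hop, and the ceiling
# effectively disappears.
```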
If you're wondering whether Cloud Steelhead would benefit your particular project, drop me a note in the comments below and I'll reply. Cloud Steelhead is an ideal application for compute requirements on Amazon EC2 where users are separated from resources by long distances.
Today's guest blogger is Mark Lewis. Mark is the Senior Director of Marketing and Alliances for Riverbed in Europe, the Middle East, and Africa (EMEA). He is based in London, England.
I was talking with a customer recently about his storage needs, how they had grown over the years, and how they were likely to grow into the future. Truth is, he said, "It's difficult to know what to keep and what to dispose of, so we keep everything, just in case!"
It reminded me of the ‘man drawer’ sketch by comedian Michael McIntyre. A ‘man drawer’ is the kind of place you store batteries, even old ones you haven’t had a chance to throw away yet. You’ll store instruction manuals for appliances you no longer own, new and old light bulbs, keys from homes you don’t live in any more and, of course the most masculine device key of all, the radiator bleeding key.
Why do we keep all this? I have asked this question of many customers, and they all agree they are keeping more than they need to, but the reasons are very convincing. For some it's regulatory needs, though they admit not all documents are regulated; it's just 'too complicated to separate them'. For others, falling disk prices mean the cost of storing is always coming down (excluding the cost of managing that storage, of course).
But how are we going to protect all that data? Even with deduplication storage technologies, data will keep growing at a tremendous rate, and, more importantly, what happens at the point of recovery? There are going to be a lot of old batteries, bulbs and keys to sift through -- or rather their business document equivalents.
One reassuring bit of news is the new services being launched by organizations such as Amazon and AT&T, with many more to follow, offering backup options in a multi-tenanted environment. Some call this 'cloud storage'.
The great news is that your backup really can be someone else's problem, and with technology from organizations like Riverbed, with its Whitewater appliances, these third-party solutions can integrate seamlessly into any IT environment. If you want to learn more, look out for a number of online seminars and trade shows taking place. Or you can read more about these solutions here.
Psst. Got a sec? Yeah, you. See, there's this thing happening in IT Land. It's called cloud computing. It's kind of like the kitty box of technology: people are simultaneously drawn to it yet repelled by it. Guess what, though? It's here to stay. And it's completely changing relationships between groups of folks who spend a not insignificant amount of time avoiding each other socially and professionally. Well, that's about to change—starting Thursday 31 March. IEEE, Riverbed, Santa Clara University School of Engineering and Leavey School of Business invite you to an evening presentation: "Cloud Computing: A Multi-Disciplinary View From Technology, Business and Law."
Cloud computing requires the engagement of many disciplines across technology, business, and law. Contention among these disciplines is common, yet the success of cloud computing requires a new level of understanding of how these fields can work more closely with each other. The adage "together we can be greater than the sum of our parts" has never been truer. Four panelists representing two technical aspects (security and performance), the business benefits, and the legal implications will join together for a lively discussion from which every attendee—no matter one's specialty—is sure to learn something new and derive immediate benefit.
During the networking hour we invite you to enjoy wine with delicious food and mingle with like-minded professionals. Connect with the event sponsors (Riverbed and IEEE) and engage with the speakers:
I'd like to thank Sachin Desai, one of our kernel developers with a keen interest in the intersection of technology and society. It was his idea to bring together multiple disciplines and he deftly handled all the behind-the-scenes organization.
Make room on your agenda to join us from 5:45 PM to 8:30 PM on Thursday 31 March at Santa Clara University. Enroll today; space is limited!
Last week on TechTarget's Cloud Computing web site, Carl Brooks wrote a piece in which he interviewed IT people in financial services and other IT-hungry industries about their use of cloud computing. His conclusion, and theirs, is that cloud computing adoption will be limited by the performance that critical applications can expect when connecting to remote cloud-based resources.
In the article, he discusses other issues such as the quantity of applications in a typical enterprise IT organization and the infrequency with which older applications get retired. The migration of applications that were originally written for local environments to cloud environments is another area of discussion.
Obviously, application performance is a major success factor in pretty much any major IT project, including cloud migration. While Riverbed can't help with migrating applications or reducing their number, we can most definitely help with application performance when those applications are deployed across a WAN.
If users are in one or more locations remote to the data center where applications are deployed, then it's a call for Riverbed Steelhead appliances. We used to call that architecture distributed computing, but nowadays, it's called a Private Cloud. If you have physical access to both sides of the connection, then Steelhead appliances are the way to go.
If you have applications and other services within Amazon Web Services’ Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC) environments, then you can get the same kind of performance improvement with Cloud Steelhead. Since you can't physically enter Amazon's data center and install a Steelhead appliance, a different approach is required, and that's what Cloud Steelhead provides.
And if you've got a hybrid cloud (which is really the most common arrangement, after all), you'll need a hybrid solution. But the bottom line is that if you've got distant computing resources, private or public, then you need WAN Optimization to make sure that your users get the kind of performance that they need to be successful and happy.
Regardless of the architecture, Riverbed will make sure you get the most out of your remote applications.
Tom Trainer recently blogged in Network Computing about how EMC could become the Amdahl of cloud storage. Tom starts off by saying that there are some fundamental issues with how EMC sells its Atmos cloud storage platform to customers. The key issue he focuses on is that EMC charges for storage up front rather than for storage that is used. According to Tom, the utility aspect of cloud storage is missing from Atmos.
This is simply incorrect: EMC sells Atmos to service providers such as AT&T, who in turn sell cloud storage to their customers. AT&T's service is called Synaptic, and it is indeed based on a utility, pay-for-what-you-use model.
Personally, I believe the more productive cloud storage discussions are about how to address the roadblocks to deploying cloud storage. Specifically: how do you deploy cloud storage without changing your backup platform ecosystem? How do you ensure security when transferring data to the cloud and ultimately storing sensitive data there? And what about performance? If you move to a cloud storage environment, accessing that disk in the sky will be much slower than accessing your local area network.
I will end my rant with a shameless plug. Riverbed's recently launched Whitewater product addresses the cloud storage roadblocks and enables organizations to seamlessly deploy cloud storage and reap all the cloudy benefits that you would expect. This includes paying for only what you use.
You can watch a demo of Whitewater here:
A recent Harvard Business Review blog authored by Sinan Aral, Arun Sundararajan, and Mingdi Xin discusses the results from research they conducted on cloud computing and what the strategic implications were for firms considering cloud-based solutions.
Although the research sample was relatively small (about two dozen CIOs and senior IT managers), the research involved highly qualitative, in-depth interviews. The results found that although the cloud promises to create value on economic, technical, and strategic fronts, firms report a wide range of performance gains from adoption. Importantly, and not surprisingly, the firms that orchestrate a set of complementary capabilities report higher returns from their cloud adoption.
The research also found that agility is a plus when adopting cloud services. Firms that are structured to quickly increase or decrease their commitment to new applications and innovations were better suited to cloud-based solutions, which themselves allow rapid scaling of resources and thus lower the risk of organizational innovation.
Finally, the blog states that these findings suggest cloud-based models offer advantages for some applications and some clients, but not others. The authors are extending the research on this front and fielding a survey to hear what your company's experience has been with cloud services.
Riverbed's position continues to be that we focus on the performance aspect of cloud services. Once an organization gets past various cloud computing concerns ranging from security to vendor lock-in, Riverbed will assist with addressing the performance challenges.
Good day, everyone. Following Evan’s recent announcement about adding more voices to the Riverbed blog, I’d like to take a moment to introduce myself. I’m Steve Riley, recently joining Riverbed’s Strategic Technology Group—the same team to which Josh Tseng, a regular blogger here, belongs. I came to Riverbed from Amazon Web Services, where as a technical evangelist I helped customers understand how to solve the sometimes confusing problems around security, privacy, and compliance in the cloud. Before that I was at Microsoft, where for a time I worked in the telecommunications and security practices of Microsoft Consulting Services and mostly in the Trustworthy Computing Group as public speaker, customer advisor, author of several articles, and co-author of a Windows security book.
I’m thrilled to join the community of Riverbed employees, customers, and partners. Riverbed isn’t a new name to me; when people would ask for my opinions of how to squeeze the most out of their expensive network connections, Riverbed was the obvious choice and I’d recommend Steelhead consistently. While I thoroughly enjoyed my years of working to help others protect their networks and fight the bad guys, it feels good to return to my networking roots and be a part of the company truly revolutionizing the foundation of the 21st century: constant connectivity.
Among other fun and challenging things, part of my role at Riverbed is to assist you, our current (and future) customers, in deriving maximum value from virtualization and cloud computing. Enterprise adoption of the cloud will happen—indeed, for many enterprises, it is already happening. Cloud Steelhead, in conjunction with Steelhead appliances in regional offices and Steelhead Mobile for road warriors, enables businesses to fundamentally rebuild themselves. Location matters: if you can live and work closer to your customers yet still communicate with your colleagues as easily as if you were in the office, do you really need a massive central headquarters nowadays? (I have many ideas here and will share them with you over time.) Whitewater wipes out the drudgery of backups and gives you the assurance that all of your data is preserved multiple times in multiple distinct physical locations.
At Riverbed we love pushing networks to their limits. Let us take care of quickly and safely moving and storing your critical information so that you can concentrate on the business of your business. My role is customer-facing; I look forward to meeting many of you in person soon. In the meantime, or at any time, feel free to reach out to me directly at firstname.lastname@example.org.
Riverbed's Steelhead WAN optimization appliance has been arguably one of the most successful IT infrastructure products since it started shipping in May 2004. While the overwhelming market success of Steelhead is undeniable, many industry followers continued to wonder if Riverbed would be able to expand our market reach beyond our core WAN optimization competency.
My initial response to the industry followers is that if you are going to have a single product and market focus, what better market to focus on than the WAN optimization market? Riverbed's Steelhead family of WAN optimization products has a profound impact on a variety of IT initiatives ranging from application performance to infrastructure consolidation to disaster recovery and business continuity. Not only is there huge value with our WAN optimization products (ask our customers), there is also a large untapped market with future customers that have yet to deploy WAN optimization.
That being said, I am happy to say that the last two years have seen Riverbed expand beyond our core Steelhead family of WAN optimization products. The result is new technology, capabilities, and value for markets that extend beyond the core WAN optimization market.
The January 2009 acquisition of Mazu Networks, and ultimately the release of Cascade, was a clear signal that Riverbed was serious about expanding our solution set beyond core WAN optimization. Cascade combines advanced analytics with powerful drill-down capability to deliver an awesome solution for gaining visibility into network and application performance. With the recent acquisition of CACE Technologies, we have rounded out our network monitoring solution set with a robust suite of packet capture and analysis technology.
The recent launch of Whitewater, our groundbreaking solution for optimizing cloud storage, is yet another example of how Riverbed continues to innovate in areas adjacent to the WAN optimization market. Whitewater enables customers to leverage public cloud storage provider environments to perform data backup and recovery.
While I am excited about our new products and new market potential, I am also happy to say that we continue to focus on our foundation Steelhead WAN optimization family of products with innovative features and capabilities delivered with every major software update. Keep your eyes open for an upcoming announcement on this front.
Can we finally shed the one-trick-pony label?
Riverbed Whitewater Appliance is a new product that we introduced back in November. It optimizes backups into the cloud, using the same kinds of deduplication and optimization that Steelhead appliances use across the WAN. It brings tremendous security (SSL, 256-bit AES), and a huge reduction in the amount of data that flows into the cloud, while keeping a local copy of the backed up data on the appliance. And it works with most current backup applications (NetBackup, Backup Exec).
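For readers wondering how deduplication earns that kind of reduction, here's a toy Python sketch of the general technique: split the stream into chunks, index each chunk by a content hash, and transfer a chunk only the first time it's seen. This illustrates the concept only; it is not Whitewater's actual implementation.

```python
# Toy dedup store: chunks indexed by SHA-256 digest; a chunk is
# "transferred" only the first time its digest appears.
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking keeps the example simple

class DedupStore:
    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes (stands in for the cloud)
        self.sent = 0      # bytes actually transferred

    def backup(self, data: bytes) -> list:
        """Store data; return the recipe of chunk digests for restore."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:    # unseen chunk: transfer it
                self.chunks[digest] = chunk
                self.sent += len(chunk)
            recipe.append(digest)            # seen or not, record a reference
        return recipe

    def restore(self, recipe) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
monday = b"A" * 40960                 # initial full backup
tuesday = monday + b"B" * 4096        # next day: mostly unchanged
r1, r2 = store.backup(monday), store.backup(tuesday)
logical = len(monday) + len(tuesday)
print(f"logical {logical} B, transferred {store.sent} B "
      f"({logical / store.sent:.0f}x reduction)")
assert store.restore(r2) == tuesday
```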
Below you'll find four brand new videos of Riverbed customers saying great things about how easy Whitewater is to implement, and the value they've received.
Check them all out. Each speaker discusses different benefits that he and his organization have seen.
First up, Jeff Roundtree of Pump Solutions.
Next it's Mitchel Weinberger, IT Manager at GeoEngineers.
Third, we have Jeff Cummings from Lighthouse Document Technologies.
And finally, we've got Ben Bailey at Applied Voice and Speech Technology.
Riverbed customers have been using their Steelheads in private clouds for a number of years now. Since private cloud infrastructures are dedicated to a single user organization, it's usually not a big deal to physically install a Steelhead appliance behind the router or to configure WCCP in the switches supporting the private cloud.
However, the public cloud is a different matter. Public cloud users can install their own software applications in the cloud, but they lack access to the underlying network infrastructure supporting it. Without this access, it's not possible to deploy physical Steelheads into the public cloud network, or to configure WCCP in the switches to redirect traffic to Steelhead software running in virtual machines.
To address this challenge, Riverbed recently introduced Cloud Steelhead, the first and only solution specifically designed to deliver WAN optimization services in the public cloud. Cloud Steelhead is one of two new products introduced by Riverbed in the past month (the other being Riverbed Whitewater). Unlike other WAN optimization products, Cloud Steelhead can be deployed into a public cloud without configuration changes in the underlying network infrastructure supporting that cloud environment. Once deployed, these Cloud Steelheads can communicate with Steelhead appliances at the customer's location or Steelhead Mobile software clients installed on employee laptops to deliver fast, LAN-like performance for public cloud applications.
The Riverbed innovation that makes this possible is the Discovery Agent, which allows traffic to be transparently re-directed to Cloud Steelheads without WCCP, PBR, or physical in-path deployment of the Steelheads. The Discovery Agent also provides clustering, load balancing, and high availability/failover for WAN optimization services delivered by the Cloud Steelheads--the same capabilities that Riverbed customers enjoy in their private cloud and enterprise Steelhead deployments.
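To make the idea concrete, here's a purely hypothetical Python sketch of the general pattern such an agent can follow; none of this is Riverbed's Discovery Agent code, and every name in it is invented. The point is simply that an endpoint-side agent can steer traffic toward an optimizer, and fall back to a direct connection, without touching routers or switches.

```python
# Hypothetical endpoint-side redirection: try a nearby optimization
# instance first, fall back to a direct connection if none answers,
# so no WCCP, PBR, or in-path hardware is required.
import socket

OPTIMIZER_CANDIDATES = [("10.0.0.42", 7800)]  # assumed local optimizer peers

def connect(dest_host: str, dest_port: int, timeout: float = 0.5) -> socket.socket:
    for opt_host, opt_port in OPTIMIZER_CANDIDATES:
        try:
            s = socket.create_connection((opt_host, opt_port), timeout=timeout)
            # Hypothetical handshake: tell the optimizer the real destination.
            s.sendall(f"OPTIMIZE {dest_host}:{dest_port}\n".encode())
            return s                  # traffic now rides the optimized path
        except OSError:
            continue                  # optimizer unreachable: keep trying
    return socket.create_connection((dest_host, dest_port))  # direct fallback
```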
A final key component of Cloud Steelhead is the Riverbed Portal, an online resource used by Riverbed customers to deploy, manage, and monitor their Cloud Steelheads within the public cloud. After logging in to the Riverbed Portal, Riverbed customers activate or extend the Cloud Steelhead licenses that they have purchased from their Riverbed reseller. The Riverbed Portal also provides detailed central reporting information on the health and status of their Cloud Steelheads.
We are interested in hearing from you: was this worthwhile? Did you enjoy it? Did you watch the live stream, and did you stick around until the end? Please feel free to post comments below.
Room starting to fill up. More than 150 people expected in person, and more than 40 watching our web stream live. We'll get started in a couple of minutes...
More than 80 people watching on the video stream right now. Things get started with a brief video about Riverbed, briefly introducing Cloud Steelhead and Whitewater (two new products). Collaboration. Security. Speed. Imagine a world where the network isn't the barrier. Step into the cloud with Riverbed.
The presentation begins with Eric Wolford, Riverbed's Senior VP of Marketing and Business Development welcoming everyone and thanking them for coming. Eric will serve as the MC for the entire event.
Eric emphasizes the simultaneous launch going on in New York, London, and San Francisco. Same event at the same time; a big event for Riverbed. We have 6 different key players in delivering cloud services: AT&T, BT, Amazon, Orange, EMC, and Nirvanix. And 6 customers and two industry analysts. Runs through the complete agenda.
Safe Harbor Statement. (best part of the day!)
Cutting thru the cloud hype. 60% in survey say that cloud is part of their strategy. But 40% are confused about how to use it well. How do you sort through the morass and the hype?
Promise of Cloud is very efficient. Up to 300% more efficient, Elastic Capacity, Shift from CapEx to OpEx. Lets the enterprise focus their attention on other things. Like the business. Riverbed has dozens of customers who have asked for WAN Optimization in the Cloud; the cloud does not work without it. Large companies (including Fortune 50s) are moving all of their mail and collaborations into the cloud.
Service providers are maturing and offering new and better services. Analysts see a $40 to $70 billion market (Storage, Infrastructure, etc. as a Service).
Top 5 Challenges: Performance, Security, Bottlenecks, Availability, and Data/Vendor Lock-in. These are all opportunities for service providers.
Cloud Steelhead is targeted at applications that need to move into the cloud. Whitewater is targeted at storage in the cloud. These are not FUTURE products; we will deliver these two products this quarter.
What is Cloud Steelhead? Regular Steelheads sit at either end of a WAN and accelerate data and applications in the data center. (84 watching the live stream right now) Cloud Steelhead brings that technology to the Public Cloud. How is that different from traditional Steelhead appliances and from Virtual Steelhead? (Glad you asked!) 1) Simple portal-based management. 2) Seamless cloud integration. 3) Subscription pricing model. 4) Instant deployment. 5) Easy cloning.
Example of seamless integration. Cloud vendors who have signed up will have Steelhead acceleration available. We use a discovery agent that will be bundled into virtual servers that vendors will place into the cloud; essentially enables auto-discover between the client and the local Steelhead and remote Cloud Steelhead. (Details available separately) It means a minimum of "futzing around". Makes it easy and seamless to implement Cloud Steelhead.
Bob (The Orange Haired Evangelist) Gilbert is introduced to show a demo of Cloud Steelhead. Has Steelhead Mobile on his Mac laptop, and has a T1 link with 185ms of latency connecting him to Amazon EC2 Cloud Service. He has two servers set up there. First we ping, and see about 180ms of latency. Bob will demo Microsoft Sharepoint.
Accessing the Sharepoint repository at Amazon. Downloading the first document, a 10MB file; takes a couple of seconds on a LAN. Windows estimates 3 minutes. Bob continues to talk as the file downloads slowly. How can you leverage the cloud with performance like this? And since Amazon doesn't let you access the data center, it's hard to initialize storage devices from what's already there.
With just a couple of clicks of the mouse, you can turn on acceleration to the Amazon EC2 cloud in less than 5 minutes via the Riverbed Cloud Portal. This boots up a Cloud Steelhead at Amazon. Then you connect your servers and apps (via optimization groups) to the Cloud Steelhead via the simple GUI.
Eric comes back up on stage. He mentions and thanks some beta customers: Gensler, Razorfish, The International Justice Mission, and AVST. He invites up Tom Marcello, Director of IT Engineering at Razorfish.
Tom: Razorfish is an interactive marketing agency; 2nd largest according to Advertising Age. Over 2000 professionals in 21 cities across 10 countries. "Why are we here? Because we have fish in our name, and Riverbed likes that.... :)" They've been a Riverbed customer for 5 years.
When Razorfish was sold by Microsoft, they had to get out of the MSFT data centers, and decided to move into the cloud for their IT. But they had serious concerns about Cloud Performance. Their experience with Riverbed led them to believe that Riverbed could seriously help their concerns. Their customers want high-quality and high-speed analytics about their advertising (50GB/night). Amazon EC2 gives them the flexibility to upload lots of data, process it, and get it off. Performance is still a big problem for them though. When Riverbed approached them about Cloud Acceleration, they were immediately interested.
They had 3 use scenarios: DB queries, HTTP, File transfers. Cloud Steelhead gave them 300% faster performance, and 80% reduction in data traffic. Transfers went from 220 seconds down to about 72 seconds (3x faster). Result: Customers get data faster. Big cost savings. Very happy with the Cloud Steelhead.
Eric is back. Note that one Steelhead in the remote office can accelerate both regular and Cloud Steelhead traffic. No new hardware is required in the remote office.
Eric now introduces Whitewater (or the Cloud Storage Accelerator). Essentially storage is geographically distributed; more flexible and elastic. First time a WAN has come between a user and his data. Analysts estimate a multi-billion dollar market here.
Eric shows a continuum of storage types, going from Tier 1 (critical local data) to archival and backup. We see a lot more interest in this from the archival and backup space than from critical local data. We hope to evolve the product to Tiers 1, 2, and 3 over time, but for now we'll start with backup and archive. A quick history of backup: backups to tape... backups to disk... backups via replication... etc.
Backups to the cloud... Good news: elastic, protection, low cost. Bad news: security, performance, rewriting apps to work in cloud environment. Riverbed had the opportunity to introduce something in the middle to address these concerns.
Whitewater is an appliance, available both as a physical box and as a virtual appliance. It can be deployed in the local data center or at a remote site. Whitewater is asymmetric: there is a box on only one end, making it Riverbed's first and only asymmetric solution. It will make using the cloud for storage much easier because we'll take care of all of that. It will be low cost, secure, and fast.
Whitewater accelerates, using some of what we've done before and some that's new. Whitewater deduplicates data and stores it deduplicated in cloud storage, making storage utilization more efficient. AES-256 encryption ensures that data in transit, and at rest, is secure. The key is kept on YOUR premises; nobody else has access to it. Whitewater integrates easily with your existing infrastructure: all you need to do is point your apps to a new destination. (Slide showing lots of backup-type apps interfacing with a variety of cloud storage vendors.)
Unlike most 1.0 products, we are reusing a lot of technology from our existing products; the solution is proven. There will be 3 models. Whitewater is stateless; if a box blows up, a new Whitewater box in a new place can easily step in. Whitewater appliances can also be clustered. Eric now re-introduces Bob Gilbert to show a Whitewater demo.
To really show a Whitewater demo can take 10 hours or more, so Bob took 25 hours of demo time and compressed it down to just 7 minutes of video. The demo has an 80ms T1 connection to a cloud storage provider (AT&T Synaptic, powered by EMC Atmos). Bob also put a file server in the cloud for comparison. First: integration. How do you connect your existing backup app to the cloud when these apps want a filesystem destination? Whitewater presents a CIFS or NFS filesystem to the backup app, solving that problem.
Export the shares from Whitewater, and tell the backup app (Backup Exec) to point to the shares in Whitewater. Unoptimized new backup operation via Backup Exec. The 5GB Egger 2001 dataset. Performance is 8-9MB/minute. At that rate, backing up 5GB will take 9+ hours. With Whitewater in place, reported performance increases to 503MB/minute (60-70x faster), and time to completion drops from over 9 hours down to just 10 minutes. Now we'll try a restore operation. Whitewater restore was also 10 minutes. This is LAN-type performance.
Security is the 2nd issue. Customers are uncomfortable putting secure data in the cloud; how do you ensure that it's secure? Bob did a sniff of the datastream without Whitewater; data is easily readable in plaintext. Whitewater provides AES-256 encryption in transit and at rest, and the data is not readable.
Deduplication. Weekly full backups and daily incrementals. Deduping the Egger dataset the first time results in a 6x reduction in data via deduplication. The following week (the second time) sees a dedupe ratio of 11 to 1. On Day 14, the ratio improves to 16 to 1. The Day 21 full backup is reduced 21x. And since the data is stored in the public cloud, there is a HUGE reduction in cloud storage costs. Some customers have seen 30x data reduction. Back to Eric.
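For the record, the arithmetic on those ratios for the 5GB full backup (figures straight from the demo) works out as follows:

```python
# What each weekly 5 GB full backup actually adds to cloud storage,
# using the dedupe ratios quoted in the demo.
full_backup_gb = 5.0
ratios = {"first full": 6, "day 7": 11, "day 14": 16, "day 21": 21}

for label, ratio in ratios.items():
    stored_mb = full_backup_gb / ratio * 1024
    print(f"{label:10s} {ratio:2d}:1 dedupe -> about {stored_mb:4.0f} MB stored")
```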
Eric thanks customers: AVST (Applied Voice and Speech Technologies), GeoEngineers, SM Energy, and PEG Manufacturing. And introduces Ben Bailey, Director of IT from AVST. 15 million users for their product, offices in Irvine CA, London, Seattle and others. Uses Backup Exec 12.5 and ships tapes offsite. Data totals 3-4TB/week. They expect that to grow 5+ by the middle of next year; they will outgrow their weekend-long backup window. Considered many solutions; disk to disk and shipping disks is painful and expensive... not workable.
They decided to look at Whitewater. Ben totally agrees with Bob's details and statistics. They are seeing 10-15x data reduction, and that's a huge savings in bandwidth and storage. They are using AT&T Synaptic for cloud storage. Reduced admin overhead, OpEx, and dependency on tape; they hope to eliminate tape for backups entirely. Saves up to 5 hours/week in a 120-employee company. 40%+ reduction in backup window. Restore times improved by 25-30%. No more new tape libraries or annual support. Super-easy, streamlined deployment. It's the next-generation alternative to tape. If you can back up to disk, you can back up to Whitewater.
Eric is back. We know that customers will migrate slowly toward the cloud, and we support a hybrid implementation. Reviewing two announcements: Cloud Steelhead and Whitewater. We want to surround the WAN and remove it as a constraint. We want acceleration available anywhere: remote, virtual, data center, DR, public cloud, private cloud, etc.
Eric introduces Dave Russell from Gartner. He is a storage analyst (22 years) who is currently focusing on Cloud Storage. Seen a lot of trends come and go, especially since IT is, by nature, very risk averse. He has about 1000 conversations a year with end users. Helps identify trends and hype vs hope. Discusses Cloud Storage Hype vs Hope.
Cloud model requires a lot of trust in outside companies, resources, and people. SLAs come into play. Compliance comes into play, especially for some industries. Proving that data is retained, and that only the right people have access to it. Not that different from traditional storage, actually. SMB and Enterprise Cloud Storage Concerns survey reveals that Security dramatically leads all other concerns (performance, vendor viability, unpredictable costs, and lock-in).
Top projects for cloud deployment over the next 3 years: Backup and recovery, storage, DR. Data needs to be safe off in the cloud. 2005 was the year of Rolling Disasters; the hurricanes hit both Florida and the Texas/Louisiana Gulf Coast. Some companies were not prepared to be hit in both places at once. Some businesses lost their data for extended periods or permanently because trucks got stuck on the highway, not because of inadequate backup procedures and policies.
Deduplication. Hasn't it been done already? Survey: only 30% are doing some sort of deduplication today (70% aren't). Bob's 9-hour backup used a daily dataset, but in fact backups need to be done multiple times per day -- who wants to lose 23 hours of data? Deduplication is what makes that practical. Even so, it's the fastest-adopted storage technology since tape.
Survey of 400 CIOs, how do you plan to contain costs? He highlighted #2 (Data reduction techniques) and #6 (cloud computing and cloud services). #1 is Server Virtualization. Nearly all of these items revolve around the WAN, and Riverbed can help optimize them. Back to Eric.
Eric introduces Chris Costello, Asst VP of Product Management for Managed hosting and Cloud Services at AT&T. AT&T Synaptic Storage as a Service is fully integrated with Whitewater. Customers are using it as a replacement for tape backups. Costs can be reduced from dollars/GB down to cents/GB by putting storage into the cloud. Security is an important component. Is there redundancy? SLAs? How are charges computed? Deduplication and encryption is a valuable piece, and Whitewater provides that. AVST has been a wonderful joint customer for Riverbed and AT&T.
Virtual public cloud. It's a combination of traditional public and private clouds. Combines benefits of private cloud (security, QoS, bandwidth and speed), with scale and flexibility of public cloud. Back to Eric.
Eric introduces Mike Feinberg. Mike had worked with Riverbed back in 2002 and 2003 before we even had a product. Mike is GM and Sr VP for Cloud Infrastructure for EMC (and a former colleague of mine). Cloud is creating an IT Revolution. Cloud Computing is not a computing problem, it's a data problem. It can be solved by storage AND network technologies (not one or the other; both). The problem we're trying to solve is one of a scale that we've never really seen before. For a service provider, you're talking about Petabytes or Exabytes, accessible worldwide. Huge changes in capacity and proximity.
EMC Atmos was designed to solve these problems. It is designed for choice, giving architects the right technology for what they need. EMC is enabling a series of service providers (enterprise, secure, etc.) to provide cloud technologies, and Riverbed lets these providers address the distance problem. The whole ecosystem is about enabling choice to help customers embrace cloud -- both private AND public. There's a lot of interest in backup and archive, but there is more: content-rich applications from eBay and other vendors also need cloud storage. The ease of integration and deployment that Riverbed provides, with the choice of physical or virtual, is incredibly valuable to their customers.
Eric is back to introduce the final cloud provider speaker: Geoff Tudor, co-founder of Nirvanix and SVP of Business Development. Exponential storage growth is driving cloud storage in a big way. 5 key questions about cloud storage: 1) Can I use cloud storage with my legacy apps? 2) Can I integrate it seamlessly with my IT processes? 3) Can I do it without overloading my networks? 4) Can I get it into production quickly and reliably? 5) How can I maximize my cloud ROI? The right answers to these questions will lead customers to adopt cloud storage.
Riverbed provides a fast on-ramp to the Cloud. Enables integration with legacy apps. Integrated dedupe minimizes cloud storage and bandwidth costs. On board "cache" creates highly responsive and reliable customer experience. Consider cost of 1TB in cloud: $1800/year. Compare to cost of 1 new TB in data center: $3000-4000 just for storage. Add to that cost of cooling, power, maintenance, admin, support, etc. And de-duping can turn that 1TB into 100GB, further reducing costs. Whitewater is a truly disruptive technology that can really drive savings for IT organizations. Geoff quickly discussed some Riverbed/Nirvanix Use Cases.
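Running Geoff's numbers (the dollar figures are his; the 10x dedupe ratio is an illustrative assumption consistent with his "1TB into 100GB" example):

```python
# Rough cost comparison from the talk: 1 TB in the cloud at $1,800/year
# vs. $3,000-4,000/year for raw data center storage alone, with 10x
# deduplication shrinking the cloud footprint to roughly 100 GB.
cloud_per_tb_year = 1800.0
dc_per_tb_year = (3000.0 + 4000.0) / 2    # midpoint of the quoted range
dedupe_ratio = 10                          # 1 TB -> ~100 GB

print(f"Data center (storage only):  ${dc_per_tb_year:,.0f}/TB/year")
print(f"Cloud, no dedupe:            ${cloud_per_tb_year:,.0f}/TB/year")
print(f"Cloud with {dedupe_ratio}x dedupe:       ${cloud_per_tb_year / dedupe_ratio:,.0f}/TB/year")
```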
Eric is back again. This time he is introducing the final speaker Riverbed Co-Founder and CEO, Jerry Kennelly. "40 years in data processing, and this is a really exciting time. This is the ultimate vision for IT: a truly virtual world, where global computing for knowledge workers of the 21st century gives users real time LAN-speed access to everything. A cloud *is* white water, literally, thus the name. Thanks to everyone."
And the event is concluded. Thanks to everyone for following along.
In just a couple of hours, Riverbed will be hosting a global product announcement that we're calling Into The Cloud. It will be going on simultaneously in New York, London, and San Francisco.
I can't reveal any of the details yet, but make sure you are here at 1:00 Eastern Time (10:00 am Pacific) to see what all the hubbub is about.
I can show you a quick peek of the stage right now. Check back; this is gonna be cool.
I'm currently at the GigaOm Structure conference, listening to the CTO of Amazon. One of the more entertaining comments he made was that private clouds should be called false clouds. He certainly got a good chuckle from the audience, but what do you think?
A big day today in technology news. On the consumer side, Apple will be unveiling their new gadgets during their worldwide developer conference in San Francisco.
Not to be outdone, Riverbed is generating excitement today in the IT landscape with the launch of RiOS 6.1, a hefty software update to Riverbed's award-winning Steelhead appliance. RiOS 6.1 has new optimizations for Microsoft applications like Exchange 2010 and Microsoft Online Services, storage optimizations for SRDF and FCIP, and HA support for RSP. With this release, Riverbed delivers superior acceleration across a wide variety of infrastructure, thereby providing maximum performance for any cloud, architecture, or application.
Riverbed is announcing a lot of industry firsts with this software update. With RiOS 6.1, Riverbed is the first WAN optimization vendor to offer optimizations for the following:
Riverbed Services Platform (RSP) High Availability
Improved support for RSP in HA environments. RSP has been deployed successfully as a branch office box (BOB) solution across a wide array of customers. HA support for RSP allows customers the flexibility to consolidate branch office services without impacting availability.
Kenny Quan demonstrates Riverbed's flagship WAN optimization solution, the Steelhead appliance.
Bob Gilbert and Eric Wolford discuss cloud computing and Riverbed's recent announcement regarding a product direction for accelerating cloud environments.
Thomas Prokop, Manager of Information and Remote Services at Prokop, attended Riverbed's Cloud9 event in New York and answered the "what does the cloud mean to you?" question.
At Riverbed's Cloud9 party during Interop in New York, Michael Vassallo, Sr. Network Administrator at Dancker, Sellew, Douglas, offered his opinion on what the cloud means.