
Author Archive: russellrothstein


Russell has over 20 years of technical, marketing, and management experience in the software industry. He currently runs product marketing at OpTier, the leader in Business Transaction Management and performance management for cloud and other environments. Previously, Russell was AVP Product Marketing at OPNET Technologies (Nasdaq: OPNT), a provider of application performance management software. He ran marketing for Open Sesame, a Web 1.0 startup that was acquired by Bowne (NYSE: BNE). Russell began his career at Oracle, deploying Oracle Applications for Fortune 1000 companies. Russell has spoken at key industry events including Interop, CMG, and Red Herring. Russell received a BA in Computer Science from Harvard University, an MS in Technology and Policy from MIT, and an MS in Management from the MIT Sloan School of Management. You can follow Russell on Twitter at @russrothsteinit.

Posts:

 

Many companies are developing their strategy for migrating business applications to private and public clouds. During this critical stage, it is vital to ensure that service levels are not impacted by moving the application from dedicated to shared IT resources. It's no wonder that, according to analyst firm IDC, two of the top three concerns CIOs have about private clouds are performance and availability.

We see in the market that enterprises are forming new cloud teams and internal committees, with a diverse set of skills, to plan an effective organizational cloud strategy. One of their mandates in the organization's journey to the cloud is to plan how to monitor and manage the performance and behavior of applications after deployment. These organizations undoubtedly have a range of infrastructure monitors in the data center. And most cloud service providers, whether internal or external, will provide services for monitoring cloud resources. Yet these tools typically do not provide an accurate picture of what end users are truly experiencing, nor a way to quickly isolate and fix performance issues in application components inside or outside the cloud.

This blog entry points out seven key application performance challenges that you are likely to encounter when pursuing a cloud strategy, so that you can address them proactively. I hope that during my session at the Cloud Performance Summit at CloudConnect (Instrumenting Applications When Access Goes Away, on Monday, March 7) the esteemed panel will address some of these challenges from a variety of perspectives; it should be informative and thought-provoking!

1. How do you know if an application is ready for the cloud?

Not all applications are ready for “cloud time”, and sometimes one part of an application is cloud-ready while other components are not. You need to identify the best components for migration, as well as potential problems such as chattiness and latency that are amplified in the cloud.
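To make the latency-amplification point concrete, here is a minimal back-of-the-envelope sketch in Python. The round-trip counts and latencies are hypothetical, but they show how a chatty component that tolerates a sub-millisecond LAN hop can fall apart once that hop becomes a multi-millisecond cloud link.

```python
# A minimal sketch (hypothetical numbers) of why "chatty" components suffer when
# a LAN hop becomes a cloud hop: added delay scales with round trips per transaction.

def added_latency_ms(round_trips_per_txn: int, lan_rtt_ms: float, cloud_rtt_ms: float) -> float:
    """Extra response time per transaction after migrating one hop to the cloud."""
    return round_trips_per_txn * (cloud_rtt_ms - lan_rtt_ms)

# Example: an app tier that makes 200 small database calls per transaction.
# On a 0.5 ms LAN the chatter costs 100 ms; over a 20 ms cloud link it costs 4 s.
for rtt in (0.5, 20.0):
    print(f"RTT {rtt:5.1f} ms -> network time per txn: {200 * rtt:8.1f} ms")

print("added per txn:", added_latency_ms(200, 0.5, 20.0), "ms")
```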

2. How do you find server-related root causes when performance issues arise?

In fully dedicated environments, we sometimes use infrastructure metrics and events to diagnose performance issues. But inferring application performance from tier-based statistics becomes challenging, if not impossible, when applications share dynamically allocated resources. In the cloud, you must be able to understand application performance and its correlation with the underlying physical and virtual components.
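As an illustration of correlating application performance with shared infrastructure (the data and metric names below are invented, not any vendor's implementation), this sketch ranks candidate infrastructure metrics by how strongly they track transaction latency, rather than relying on per-tier averages alone.

```python
# A minimal sketch: rank shared-infrastructure metrics by how closely they
# track observed transaction latency, to surface likely root-cause suspects.

from statistics import correlation  # Python 3.10+

# Hypothetical samples collected at the same timestamps.
txn_latency_ms = [120, 135, 180, 400, 410, 150, 130, 390]
host_metrics = {
    "vm_cpu_pct":      [35, 40, 55, 92, 95, 45, 38, 90],
    "san_iops":        [800, 820, 790, 810, 805, 815, 795, 800],
    "net_retransmits": [1, 0, 2, 1, 0, 1, 2, 0],
}

ranked = sorted(
    ((name, correlation(txn_latency_ms, series)) for name, series in host_metrics.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, r in ranked:
    print(f"{name:15s} r={r:+.2f}")  # vm_cpu_pct should surface as the prime suspect
```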

3. How can you minimize the risk of change to the cloud infrastructure or the application?

In a shared environment, any change to the application, or to the infrastructure, is high risk. Cloud owners, operations staff and application teams must be able to test the impact of change on service delivery – whether that change is in an application before deployment, or in the cloud infrastructure.

4. How do you implement or verify chargeback?

Traditional application performance monitoring (APM) tools do not collect resource utilization per transaction to enable business-aligned costing and chargeback paradigms. For the cloud, you need a solution that monitors consumption for every service across multiple applications and tiers, so you can accurately cost services, decide on appropriate chargeback schemes, and tune applications and infrastructure for better resource utilization and lower cost.
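A minimal sketch of the chargeback idea, assuming per-transaction resource records are available; the field names and unit costs are made up for illustration, not taken from any product API.

```python
# A minimal sketch: turn per-transaction resource usage into a chargeback
# figure per business service (hypothetical records and assumed unit prices).

from collections import defaultdict

RATE = {"cpu_sec": 0.002, "db_io_mb": 0.0005}  # assumed unit costs in dollars

transactions = [
    {"service": "payments",  "cpu_sec": 0.8, "db_io_mb": 12.0},
    {"service": "payments",  "cpu_sec": 0.6, "db_io_mb": 10.5},
    {"service": "reporting", "cpu_sec": 4.2, "db_io_mb": 250.0},
]

bill = defaultdict(float)
for txn in transactions:
    bill[txn["service"]] += sum(txn[res] * price for res, price in RATE.items())

for service, cost in bill.items():
    print(f"{service:10s} ${cost:.4f}")
```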

5. How do you ensure that services are allocated according to business priority?

To ensure that SLAs in the cloud are met, you must be able to prioritize the allocation of resources based on measurements of real end user performance and an accurate view of where additional resources can truly alleviate SLA risks. To make that possible, you need a clear picture of resource consumption at the transaction level and business intelligence about the impact of each infrastructure tier on performance.
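The sketch below illustrates the prioritization logic under assumed SLA figures: combine business priority with how much of each service's SLA budget is already consumed, and direct the next increment of capacity accordingly.

```python
# A minimal sketch (hypothetical SLA data): score each service by business
# priority and by how close measured end-user latency is to its SLA, then
# give the next increment of capacity to the highest-scoring service.

services = [
    # name, business priority (higher = more important), measured p95 ms, SLA ms
    ("checkout", 3, 1900, 2000),
    ("search",   2,  300, 1000),
    ("batch",    1, 9000, 10000),
]

def sla_risk(p95: float, sla: float) -> float:
    """Fraction of the SLA budget already consumed (1.0 = at the limit)."""
    return p95 / sla

scored = [(prio * sla_risk(p95, sla), name, prio, sla_risk(p95, sla))
          for name, prio, p95, sla in services]
score, name, prio, risk = max(scored)
print(f"allocate next VM to {name} (priority {prio}, {risk:.0%} of SLA budget used)")
```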

6. How can you maintain a real-time up-to-date view of how each service flows through the cloud when VMs are moving around dynamically?

In the cloud more than ever, you need a real-time picture of service dependencies that does not need to be manually updated. The environment is simply too dynamic (e.g., so-called “VMotion sickness”, as VMs migrate between hosts) for manual models and static infrastructure dependency maps to stay up to date.
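One way to picture an always-current dependency map (a sketch under an assumed event format, not a product feature): derive edges from the hops that transactions actually traverse, and expire edges that stop being observed, so the map follows the VMs wherever they move.

```python
# A minimal sketch: keep a service-dependency map fresh from observed
# transaction hops, expiring edges that are no longer seen. (Python 3.10+)

import time
from collections import defaultdict

TTL_SECONDS = 300  # edges not re-observed within 5 minutes drop out
last_seen = defaultdict(float)  # (caller, callee) -> last observation time

def observe_hop(caller: str, callee: str, ts: float | None = None) -> None:
    last_seen[(caller, callee)] = ts if ts is not None else time.time()

def current_map(now: float | None = None) -> set[tuple[str, str]]:
    now = now if now is not None else time.time()
    return {edge for edge, ts in last_seen.items() if now - ts <= TTL_SECONDS}

# Example: after a VM migration, the web tier is observed calling a new app instance.
observe_hop("web", "app-vm-07")
observe_hop("app-vm-07", "db")
print(current_map())  # edges reflect what transactions actually traversed
```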

7. How can you right-size capacity and prevent over-provisioning that undercuts ROI?

In the cloud, a complete history of all transaction instances, including precise resource utilization metrics and SLAs, is essential for making intelligent decisions about provisioning. And with an accurate picture of resource consumption for each business transaction, cloud owners can plan future capacity requirements (e.g. servers, storage, VMs, databases) in the most cost-efficient manner possible.
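As a simple illustration of capacity planning from transaction history (the workload numbers and target utilization are assumptions), the sketch below converts peak transaction rates and per-transaction CPU cost into a provisioning estimate.

```python
# A minimal sketch: size capacity from transaction history by converting peak
# demand per transaction type into vCPUs, with headroom for SLA protection.

# Hypothetical history: (peak transactions per second, CPU-seconds per transaction)
workload = {
    "checkout":  (120, 0.04),
    "search":    (400, 0.01),
    "reporting": (5,   2.50),
}
TARGET_UTILIZATION = 0.65  # assumed headroom so SLAs survive bursts

demand_cpu = sum(rate * cpu_per_txn for rate, cpu_per_txn in workload.values())
vcpus_needed = demand_cpu / TARGET_UTILIZATION
print(f"steady-state demand: {demand_cpu:.1f} CPU-s/s -> provision ~{vcpus_needed:.0f} vCPUs")
```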

Russell Rothstein is Vice President of Product Marketing at OpTier, a supplier of software for cloud performance management, application performance management and business transaction management. Follow him on Twitter at @russrothsteinit.

 

 