IT Transformation with SOA

Friday, February 12, 2010

Performance Planning with Modeling & Simulation


SOA environments are characterized by an eclectic mix of components, service flows, processes and infrastructure systems (servers, local area networks, routers, gateways, etc.). This complexity makes it difficult to predict the capacity of the infrastructure you will need. Likewise, trying to evaluate the impact of changes in how services are routed or used is more an exercise in the art of divination than in the scientific method. Understanding the dynamics of an SOA system usually takes months of observation, and having to wait that long to optimize a system is rarely a good option.
An alternative approach is to create a model of the system in order to simulate its current and future performance. Depending upon the complexity of the model, you will be able to simulate the actual system latency and the predicted response times of a variety of service flows. 
Simulation can help identify potential bottlenecks and streamline processing times by pinpointing areas where resources can be best optimized.  Imagine knowing the answers to these questions in more concrete terms:
- What are the transaction response times?
- How many servers, databases or links do I actually need?

Without the ability to simulate, system designers and administrators are left with the choice of deploying what they believe to be the best system, praying, and then taking a reactive approach based on the on-going measurement of actual performance data via monitoring tools. By then, it might be too late or too expensive to fix the system.
In general, simulations fall into one of the following levels of detail:
Rapid Model (also known as “NapkinSim”). You’ve probably been simulating in this manner for quite some time. If a clerk takes 10 minutes to serve a customer, and on average two customers arrive every 20 minutes, what is the average wait time? “Simple,” you might say, “the answer is zero.” The answer is, of course, never simple, and it is not zero. It depends upon how the customers arrive. If two customers arrive at the same time, one of them will have to wait at least 10 minutes. When running a simulation tool you will soon come to realize how much influence inter-arrival distributions have on simulation results.
For all but the simplest SOA problems, you cannot predict the desired resource-requester relationship with simple napkin arithmetic.
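To see how much the arrival pattern matters, here is a minimal sketch of the clerk example in Python. One assumption on my part: I give the clerk an 8-minute service time, slightly below the 10-minute average arrival gap, so that the random-arrival case settles to a finite average instead of growing without bound.

```python
import random

def average_wait(interarrival_times, service_time=8.0):
    """Single-clerk FIFO queue: mean time customers spend waiting."""
    clock = 0.0        # running arrival time
    free_at = 0.0      # when the clerk next becomes free
    total_wait = 0.0
    for gap in interarrival_times:
        clock += gap
        total_wait += max(0.0, free_at - clock)
        free_at = max(free_at, clock) + service_time
    return total_wait / len(interarrival_times)

random.seed(7)
n = 100_000
# Perfectly even arrivals, one every 10 minutes: nobody ever waits.
evenly_spaced = average_wait([10.0] * n)
# Same average rate, but random (exponential) gaps: queues form.
random_gaps = average_wait([random.expovariate(1 / 10) for _ in range(n)])
print(evenly_spaced, random_gaps)
```

Same arrival rate, same clerk; only the inter-arrival distribution changed, yet the evenly spaced case waits zero minutes while the random case averages many minutes of waiting.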
Mathematical Analysis. Mathematics is not entirely helpful either. Significant work has been done to analyze the so-called M/M/1 problem (a single queue with exponential arrivals and service times), but most analytical approaches cannot satisfactorily cope with dynamic or transient effects, and they quickly become intractable in multi-server environments. In real life, most queuing problems cannot be solved with simple linear equations; the norm is for complexity to quickly drive the system toward non-linear behavior, which in turn requires advanced mathematics to solve reliably. What, then, is the alternative?
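To be fair to the mathematicians, the M/M/1 case does reduce to simple closed forms; a small sketch (the rates below are illustrative, not from any real system):

```python
def mm1(arrival_rate, service_rate):
    """Closed-form steady-state metrics for an M/M/1 queue."""
    rho = arrival_rate / service_rate            # server utilization
    if rho >= 1:
        raise ValueError("unstable: arrivals outpace the server")
    time_in_system = 1 / (service_rate - arrival_rate)
    time_in_queue = rho / (service_rate - arrival_rate)
    queue_length = arrival_rate * time_in_queue  # Little's law
    return rho, time_in_system, time_in_queue, queue_length

# One request every 12.5 s on average, each served in 10 s on average:
print(mm1(arrival_rate=0.08, service_rate=0.1))
```

The trouble is that the moment you add a second queue, a routing decision, or a non-exponential service time, these tidy formulas no longer apply, which is precisely why simulation earns its keep.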
Queuing Simulation. Regardless of the level of abstraction chosen for the system under study, you will want the most precise and reliable picture of its expected behavior. This is where queuing simulation is most helpful.
Queuing simulation is particularly suited to SOA because you can simulate almost any process in which a “client” requests a service and a “resource” provides that service. No doubt about it, queuing simulation is the most viable and obvious way to model and predict how an SOA system will behave. 
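As a sketch of the idea, the hedged example below simulates a pool of identical servers fed by random requests and sweeps the pool size, which is one way to attack the “how many servers do I actually need” question. The arrival and service rates are made up for illustration.

```python
import heapq
import random

def mean_response(servers, arrival_rate, service_rate, n=50_000, seed=1):
    """Event-driven simulation of a multi-server queue (M/M/c).

    Returns the mean response time (waiting + service) per request.
    """
    rng = random.Random(seed)
    free_at = [0.0] * servers            # when each server is next available
    clock = total = 0.0
    for _ in range(n):
        clock += rng.expovariate(arrival_rate)       # next request arrives
        start = max(clock, heapq.heappop(free_at))   # earliest free server
        finish = start + rng.expovariate(service_rate)
        heapq.heappush(free_at, finish)
        total += finish - clock
    return total / n

# Requests at 5/s, each taking 0.5 s on average: sweep the server count
# and watch the response time collapse once capacity exceeds demand.
for c in (1, 2, 3, 4):
    print(c, round(mean_response(c, arrival_rate=5.0, service_rate=2.0), 3))
```

With one or two servers the system is saturated and response times explode; at three servers it stabilizes, and a fourth buys only a modest further improvement. That is exactly the kind of capacity insight the napkin cannot give you.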

To be clear, the simulation approach is not a panacea. First of all, you have to learn the simulation tools. Secondly, detailed modeling can be time consuming; modeling should not be viewed as a quick way to get answers. Keep in mind, too, that simulations yield only approximate answers which, in many cases, are difficult to validate. In the end, simulation is merely a more precise way to venture a guess. Do not accept simulation results as gospel. It is easy to forget that the simulation is an abstraction of reality, not reality itself. Results must be thoroughly validated, especially before they are published, and simulations should be supported by careful experiment design, an understanding of the underlying assumptions, and reliable input data. Despite these caveats, you will find that simulation can be an invaluable tool in your day-to-day business activities.

While you could develop a simulation by writing a program yourself, you could also use one of the many simulation tools on the market. Today’s simulation tools are not as expensive as in the past, but they do demand the discipline to capture and create the base model and to keep the simulation model current for future simulation runs. A modern simulation tool for SOA should provide a visual interactive modeling and simulation tool for queuing systems that has the following attributes:
General purpose. You can simulate almost anything that involves a request, a queue and a service, whether that is a complex computer network or the line at a fast food counter. This capability gives you the option of simulating the SOA system at various levels of granularity, from the underlying packet-level communications layers up to the higher-level service flows.
Real-time. Unlike other costlier programs, you can view how the resources in your system behave as the simulation progresses.
Interactive. You can dynamically modify some essential parameters to adjust the behavior of the simulated components even as the simulation runs!
Visually oriented. Lets you enter the necessary information through a simple, intuitive user interface, removing the need to know a programming language. In addition to running the simulation, it also provides you with the information needed to fine-tune it.
Discrete oriented. Discrete-event systems change state at discrete points in time, as opposed to continuous systems, which change continuously over time.
Flexible. You can see the dynamic effects of the simulated system, or the accumulated averages representing the overall mean behavior of the system.
 

As a Valentine’s Day gift to the readers of this article, I am making Prophesy—A Complete Workflow Simulation System available for free!
Prophesy is a simulation product that I developed and marketed back in the roaring 90’s (when in retrospect I should have been putting my efforts into developing something for the exploding World Wide Web—but that’s another story). Prophesy meets the requirements listed above, but unfortunately, the product is aged. It’s no longer supported, and it will not run under Windows 7 (“thank” Microsoft for their lack of backward compatibility).
You can visit http://www.abstraction.com/prophesy to download it for free and hopefully to use as a learning tool.
Enjoy!


Friday, November 13, 2009

ESB and the SOA Fabric


A number of new needs have emerged with the advent of SOA. First of all, there was no standard way for an application to construct and deliver a service call.  Secondly, there was no standard way to ensure the service would be delivered.  Thirdly, it was not clear how this SOA environment could be managed and operated effectively. Fourthly . . .  well, you get the idea; the list goes on and on.
SOA demands the existence of an enabling infrastructure layer known as middleware. Middleware provides all the necessary services, independent of the underlying technical infrastructure. To satisfy this need, vendors began to define SOA architectures around a relatively abstract concept: the Enterprise Service Bus, or ESB. Now, there has never been disagreement about the need for a foundational layer to support common SOA functions—an enterprise bus of sorts. The problem is that each vendor took it upon itself to define the specific capabilities and mechanisms of its proprietary ESB, oftentimes by repackaging preexisting products and rebranding them to better fit its sales strategy.
As a result, depending on the vendor, the concept of Enterprise Service Bus encompasses an amalgamation of integration and transformation technologies covering any number of areas: Service Location, Service Invocation, Service Routing, Security, Mapping, Asynchronous and Event Driven Messaging, Service Orchestration, Testing Tools, Pattern Libraries, Monitoring and Management, etc. Unfortunately, when viewed as an all-or-nothing proposition, the ESB’s broad and fuzzy scope tends to make vendor offerings complex and potentially expensive.
The term ESB is now so generic and ill-defined that you should be careful not to get trapped into buying a cornucopia of vendor products that your specific SOA environment will never need. An ESB more closely resembles a Swiss Army knife with its many accessories, of which only a few will ever be used. Don’t be deceived: vendors will naturally try to sell you the complete superhighway, including rest stops, gas stations and the paint for the road signs, when all you really need is a quaint country road. You can be choosy and build your base SOA foundation gradually. Because of this, I am willfully avoiding the term “Enterprise Service Bus,” preferring instead the more neutral term, “SOA Fabric.”
Of all the bells and whistles provided by ESB vendors (data transformation, dynamic service location, etc.), the one key function the SOA Fabric should deliver is ensuring that the services and service delivery mechanisms are abstracted from the SOA clients.
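As an illustration only (the names `ServiceFabric` and `billing.quote` below are hypothetical, not any real product's API), that abstraction can be sketched as a registry that resolves logical service names on the client's behalf, so the client never depends on where or how a service runs:

```python
from typing import Callable, Dict

class ServiceFabric:
    """Toy registry hiding service location and delivery from callers."""

    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # In a real fabric this could map to a remote endpoint, a queue,
        # or a different protocol entirely; callers never know or care.
        self._routes[name] = handler

    def invoke(self, name: str, request: dict) -> dict:
        # Clients address services by logical name only.
        return self._routes[name](request)

fabric = ServiceFabric()
fabric.register("billing.quote", lambda req: {"total": req["qty"] * 9.99})
print(fabric.invoke("billing.quote", {"qty": 3}))
```

The point of the sketch is the shape of the dependency: the billing service can move hosts, change protocols, or be re-implemented, and the client code above does not change.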
A salient feature that vendors tell us ESBs are good for is their ability to integrate heterogeneous environments. However, if you think about it, since you are going through the process of transforming your company’s technology (the topic of this writing, after all!), you should really strive to introduce a standard protocol and eliminate as many legacy protocols as you can.
Ironically, a holistic transformation program should have the goal of deploying the most homogeneous SOA environment possible, thus obviating the need for most of the much-touted ESB transformation and mapping functions. In a new system, SOA can be based upon canonical formats and common protocols, minimizing the need for data and service format conversion. This goal is most feasible when applied to the message flows occurring in your internal ecosystem.
Now, you may still need some of those conversion functions for several other reasons, migration and integration with external systems being the most obvious cases. If the migration will be gradual, and therefore requires the interplay of new services with legacy services, go ahead and enable some of the protocol conversion features provided by ESBs. The question would then be how important this feature is to you, and whether you wouldn’t be better off following a non-ESB integration mechanism in the interim.  At least, knowing you will be using this particular ESB function only for migration purposes, you can try to negotiate a more generous license with the vendor.
There are cases where, even while striving for a homogeneous SOA environment, you may well conclude that your end-state architecture must integrate a number of systems under a federated view. Your end state in this case will be a mix of hybrid technologies servicing autonomous problem domains. Under this scenario, it is best to reframe the problem at hand from creating an SOA environment to applying Enterprise Application Integration (EAI) mechanisms. If your end state revolves more around integration, EAI is better suited to performing the boundary-level mapping and transformation work. In that case, go shop for a great EAI solution, not for an ESB.
If the vendor gives you the option of acquiring specific subsets of their ESB offering (at a reduced price), then that’s something worth considering. At the very least, you will need support for service deployment, routing, monitoring, and management, even if you won’t require many of the other functions in the ESB package. Just remember to focus on deploying the fabric that properly matches your SOA objectives, not the one that matches your vendor’s sales quota.
A quick word regarding Open Source ESBs . . . There are many, but the same caveats I’ve raised for commercial ESBs apply. Open Source ESBs are not yet as mature, and the quality of the functions they provide varies significantly from component to component. Use only those components you are confident will work in a reliable and stable manner, or those that are not critical to the system. Remember, you are putting in place components that will become part of the core fabric. Ask yourself: to save a few dollars, does it make sense to use a relatively unsupported ESB component for a critical role (Service Invocation or Messaging come to mind) rather than a more stable vendor solution?
In the end, if you are planning to use the protocol conversion features packaged in a vendor-provided or open source ESB, I suggest you use them on a discrete, case-by-case basis, and not as an inherent component of your SOA fabric. This way, even as you face integration problems caused by the lack of standards, at least you won’t be forced into drinking the Kool-Aid associated with a particular vendor’s view of the ESB!
