IT Transformation with SOA

Friday, December 4, 2009

Taming the SOA Complexities


Remember when I used to say, “Architect for Flexibility; Engineer for Performance”? Well, this is where we begin to worry about engineering for performance. This section, together with the following SOA Foundation section, represents the Level III architecture phase. Here we endeavor to solve the practical challenges associated with SOA architectures via the application of pragmatic development and engineering principles.


I wish SOA were as smooth as ice cream. However, I regret to inform you that it is anything but. In truth, SOA is not a panacea, and its use requires a fair dose of adult supervision. SOA is about flexibility, but flexibility also opens up different ways one can screw up (remember when you were in college and no longer had to follow a curfew?). Best practices should be followed when designing a system around SOA, but there are also some principles that may be counter-intuitive to the “normal” way of doing architecture. So, let me wear the proverbial devil’s advocate hat and give you a list from “The Proverbial Almanac of SOA Grievances & Other Such Things Thusly Worrisome & Utterly Confounding”:
·         SOA is inherently complex. Flexibility has its price. By their nature, distributed environments have more “moving” pieces, thereby increasing their overall complexity.
·         SOA can be very fragile. SOA has more moving parts, leading to increased component interdependencies. A loosely coupled system has potentially more points of failure.
·         It’s intrinsically inefficient. In SOA, computer optimization is not the goal. The goal is to more closely mirror actual business processes. The pursuit of this worthy objective comes at the price of SOA having to “squander” computational resources.
The way to deal with SOA’s intrinsic fragility and inefficiency is by increasing its robustness. Unfortunately, increasing robustness entails the inclusion of fault-tolerant designs that are inherently more complex. Why? Robustness implies the deployment of redundant elements. All this runs counter to platonic design principles, and to the way the Level I architecture is usually defined. There’s a natural tension because high-level architectures tend to be highly optimized, generic, and abstract, referencing only the minimum detail necessary to make the system operate. That is, high-level architectures are usually highly idealized—nothing wrong with that. Striving for an imperfect high-level architecture is something only Homer Simpson would do. But perfection is not a reasonable design goal when it comes to practical SOA implementations. In fact, perfection is not a reasonable design goal when it comes to anything.
Consider how Mother Nature operates.  Evolution’s undirected changes often result in non-optimal designs. Nature solves the problem by “favoring” a certain amount of redundancy to better respond to sudden changes and to better ensure the survival of the organism. “Perfect” designs are not very robust. A single layered roof, for example, will fail catastrophically if a single tile fails. A roof constructed with overlapping tiles can better withstand the failure of a single tile. 
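The overlapping-tiles idea translates directly into software: rather than depending on one perfectly tuned service endpoint, a robust consumer tries redundant endpoints with bounded retries. Here is a minimal Python sketch of that pattern; the endpoint names and the simulated failure rate are purely illustrative, not part of any real system:

```python
import random
import time

# Hypothetical redundant endpoints -- the names are illustrative only.
ENDPOINTS = ["svc-a.internal", "svc-b.internal", "svc-c.internal"]

def call_endpoint(endpoint, payload):
    """Stand-in for a real remote call; raises to simulate a transient fault."""
    if random.random() < 0.3:
        raise ConnectionError(f"{endpoint} unavailable")
    return f"{endpoint} processed {payload}"

def robust_call(payload, retries_per_endpoint=2, backoff=0.1):
    """Try each redundant endpoint in turn, with bounded retries and backoff."""
    last_error = None
    for endpoint in ENDPOINTS:
        for attempt in range(retries_per_endpoint):
            try:
                return call_endpoint(endpoint, payload)
            except ConnectionError as exc:
                last_error = exc
                time.sleep(backoff * (attempt + 1))  # simple linear backoff
    raise RuntimeError("all redundant endpoints failed") from last_error
```

Note the trade-off the sketch makes visible: the redundancy that keeps the call alive is exactly the extra complexity (retry loops, backoff, error bookkeeping) that an idealized Level I architecture would never show.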
A second reason SOA is more complex is explained by the “complexity rule” I covered earlier: the more simplicity you want to expose, the more complex the underlying system has to be. Primitive technology solutions tend to be difficult to use, even if they are easier to implement.  The inherent complexity of the problem they try to solve is more exposed to the user. If you don’t believe me consider the following instructions from an old Model T User Manual from Ford:
“How are Spark and Throttle Levers Used? Answer: under the steering wheel are two small levers. The right-hand (throttle) lever controls the amount of mixture (gasoline and air) which goes into the engine. When the engine is in operation, the farther this lever is moved downward toward the driver (referred to as “opening the throttle”) the faster the engine runs and the greater the power furnished. The left-hand lever controls the spark, which explodes the gas in the cylinders of the engine.”
Well, you get the idea. SOA is all about simplifying system user interactions and about mirroring business processes.  These goals force greater complexity upon SOA. There is no way around this law.
There are myriad considerations to take into account when designing a services-oriented system.  Based on my experience I have come up with a list covering some of the specific key techniques I have found effective in taming the inherent SOA complexities.  The techniques relate to the following areas that I will be covering next:
State-Keeping/State Avoidance. Figuring out under what circumstances state should be kept has a direct relevance in determining the ultimate flexibility of the system.
Mapping & Transformation. Even if the ideal is to deploy as homogeneous a system as possible, the reality is that we will eventually need to handle process and data transformations in order to couple diverse systems. This brings up the question of where it is best to perform such transformations.
Direct Access Data Exceptions. As you may recall from my earlier discussion on the Data Sentinel, ideally all data would be brokered by an insulating services layer. In practice, there are cases where data must be accessed directly. The question is how to handle these exceptions.
 Handling Bulk Data. SOA is ideal for exchanging discrete data elements. The question is how to handle situations requiring the access, processing and delivery of large amounts of data.
Handling Transactional Services.  Formalized transaction management imposes a number of requirements to ensure transactions have integrity and coherence. Matching a transaction-based environment to SOA is not obvious.
Caching. Yes, there’s a potential for SOA to be slower than grandma driving her large 8-cylinder car on a Sunday afternoon. The way to tame this particular demon is to apply caching extensively and judiciously.
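To make the caching point concrete, here is a minimal Python sketch of a time-to-live (TTL) cache wrapped around a service call. All names are illustrative, and a production cache would also need invalidation hooks and size limits:

```python
import time

class TTLCache:
    """A minimal time-to-live cache for service responses (illustrative only)."""
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() > expiry:
            del self._store[key]  # stale entry: evict and report a miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

def cached_service_call(cache, key, fetch):
    """Return a cached value when fresh; otherwise invoke the (slow) service."""
    value = cache.get(key)
    if value is None:
        value = fetch(key)  # the expensive SOA round-trip
        cache.put(key, value)
    return value
```

The judgment call is all in the TTL: too short and you pay the round-trip anyway; too long and you serve data that no longer mirrors the business process.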
All the above techniques relate to the actual operational effectiveness of SOA. Later on I will also cover the various considerations related to how to manage the SOA operations.
Let’s begin . . .


Friday, October 23, 2009

The Access Layer


Many who have been around long enough to remember the good old days of data processing may still long for the simplicity and maturity of centrally located mainframes which could be accessed via a simple line protocol from basic screen devices and keyboards at each client location. Older “dumb-terminals”, such as Teletypes, ICOT and 3270 devices simply captured the keystrokes which were then duly sent to the mainframe either in character-by-character mode or in blocks. The mainframe then centrally formatted the response which was then displayed by the access device in the same blind manner Charlie Chaplin hammered widgets in the assembly line of Modern Times.
For a period of time, with the advent of the PC back in the ’80s, a debate ensued about the idea of moving all processing to client devices. For a while, the pendulum swung towards having PCs do the computations, earning them the “fat clients” moniker. After enjoying the exhilarating freedom of not having to depend on the DP priesthood behind the central mainframe glass house, local IT departments began to learn what we are all supposed to have learned during our teenage years: with freedom come responsibilities. As it turned out, trying to keep PC software current with the never-ending stream of version updates and configuration changes, or trying to enforce corporate policies, no matter how flimsy, in this type of distributed environment, soon became a Nightmare on IT Street.
Newton said it best: for every action there is always a reaction. Soon voices from the “other-side-of-the-pendulum” began to be heard. Mainly as a strategy to counter Microsoft, which in the early nineties was still the eight-hundred-pound gorilla that Google is today, the folks at Sun Microsystems began pushing for the “Network Computer” concept. This was in reality a cry for the dumb terminals of yore, only this time designating the Java Virtual Machine as the soul of the distributed machine. To be fair, given the maintenance burden presented by millions of PCs requiring continuous Windows upgrades, these network computers did make some sense. After all, network computers were actually capable of executing applications autonomously from the central system and thus were not strictly the same as old-fashioned “dumb-terminals”.
In the end, the pendulum did swing back towards Thin Clients. Enter the original Web Browser. This time the appeal of Web Browsers was that, thanks to Tim Berners-Lee, the inventor of the Web, they accelerated the convergence of technology platforms around a standardized access layer. Whereas in the past each company might have used proprietary access technologies, or even proprietary interfaces, web browsers became a de-facto standard. The disadvantage was that, well, we were once again dealing with a very limited set of client-level capabilities. The narrow presentation options provided by HTML limited the interface usability. Java Applets solved this constraint somewhat but then ironically increased the “thickness” of the client as programmers tended to put more processing within the Applet. Thankfully we are now reaching the point where we can strike the proper balance between “thinness” and “thickness” via the use of Cascading Style Sheets and, more recently, Ajax and Dojo.
Now, a word about two of today’s most popular client access solutions: proprietary multimedia extensions, such as Macromedia Flash, and what I refer to as “Dumb Terminal” emulators, such as Citrix. Using Macromedia Flash is great if you are interested in displaying cool animations, enhanced graphics and such. It is fine to use languages such as ActionScript for basic input field verification and simple interface manipulation (e.g., sorting fields for presentation), but writing any sort of business logic in these languages is an invitation to create code that will be very difficult to maintain. Business logic should always be contained in well-defined applications, ideally located in a server under proper operational management.
Technologies such as Citrix basically allow the execution of “Fat Client” applications under a “Thin Client” framework by “teleporting” the Windows-based input and output under the control of a remote browser. My experience is that this approach makes sense only under very specific tactical or niche needs such as during migrations or when you need to make a rare Windows-based application available to remote locations that lack the ability to host the software.  Citrix software has been used successfully to enable rich interfaces for web-based meeting applications (GoToMeeting) when there is a need to display a user’s desktop environment via a browser, or when users want to exercise remote control of their own desktops. Other than in these special cases, I recommend not basing the core of your client strategy around these types of dedicated technologies. Remember, when it comes to IT Transformation you should favor open standards and the use of tools that are based on sound architecture principles rather than on strong vendor products.
To close this discussion on Access technologies: just as we no longer debate the merits of one networking technology over another now that network technologies have become commoditized, I suspect we will soon move the discussion of Access technologies to a higher level, rather than debating the specific enabling technology to be used. Access-level enabling technologies such as Ajax and others are becoming commodity standards that will support a variety of future access devices in a very seamless fashion. So, pull out your mobile phone or your electronic book reader, bring your Netbook or laptop, access your old faithful PC, or turn on your videogame machine, if you don’t want to fetch your HDTV remote control. It behooves you in this new world of IT Transformation to make it all work just the same!


Thursday, July 9, 2009

The Architecture Reference Model

Let’s say you are a clothing designer competing on an episode of Project Runway (I reluctantly confess to being a fan of this show), and you are asked to design a man’s suit. Chances are you will design it according to the desired needs and characteristics: winter or summer, business or black-tie, silk or wool fabric, etc. Also, you’ll likely incorporate your own particular professional touches even as you use pre-built suit patterns to tailor it.

For the most part, despite the range of variations that can be expected in the design, chances are that when creating a typical western suit you will end up with a pair of pants and a jacket; not a Scottish kilt or a clown outfit.

Fundamentally, architecture models are no different. When solving a similar business problem, most architecture models will end up looking similar to each other regardless of the vertical industry to which they apply. For instance, the architecture model for a high-transaction airline reservation system is going to be very similar to the architecture model for a high-transaction banking system, and indeed, these verticals share many of the same technologies. The phenomenon of “convergent evolution” also applies to architectures. Whenever nature requires a solution to a specific survival challenge, evolution tends to arrive at analogous solutions. For instance, the eye was arrived at independently by different animal lineages (the eyes of insects, octopi, and humans are all different), and the flight of bats evolved independently of the flight of birds.

But even if most architecture models will resemble each other, as the saying goes: “the devil is in the details.” It will not suffice to copy a model from a textbook and brand it as the definitive model for your company. At a minimum, your model must meet these characteristics:

§ It must be Realistic. No matter how abstract a model, it should ultimately be implementable. Also, the term “realistic” here means that the model should apply to the specifics of your company.

§ It must be Mutually Exclusive and Collectively Exhaustive. This is a fancy way of saying that it should be complete but also economical in its definition—without overlaps and without gaps. Unlike actual implementation designs, which I believe should be slightly redundant and overlapping, reference models can strive for Platonic-like perfection.

§ It must clearly describe its key areas as inter-related, decoupled modules. In today’s terms this means that it should be SOA-friendly. Even though we are still at Level I, and SOA will not be formally introduced until Level II, we know already that unless you are architecting something extremely unique and with a very specialized domain, such as software for biomedical devices or for games, SOA is the way to go.

The actual Architecture Model will probably be a brief reference document outlining in narrative form the key concepts of the model: components, key technologies, interaction maps, component inter-dependencies, etc. Remember, we are still in Level I, at the hundred-thousand foot level.

The next step is to represent the architecture model via a highly iconic diagram that incorporates all the key elements of the logical architecture as well as a representation of how the architecture maps to the business processes. I call this diagram the Architecture Reference Model (ARM), and it is destined to become the friendly face of the Architecture Model you will document. The ARM will be your banner in all subsequent architecture socialization efforts. It serves multiple purposes:

§ It synthesizes in a single chart the key architecture elements and messages you wish to convey.

§ It serves as a communications icon, identifying the focus of your technology team.

§ It becomes the rallying flag for all your technology design efforts, giving the project team a point of reference to place the various detailed efforts into context.

The ARM should not be so detailed that it drowns in complexity, nor so high-level and generic that it could apply as the architecture reference model for just about anything. If your ARM could apply to the coffeehouse across the street, it is likely too generic. The ARM should include substantial content.

The ARM should be visually appealing and compelling. Enlist the services of a professional graphic designer, if necessary. Remember you’re looking for the ultimate iconic point of reference to summarize all your technology design efforts.

Initially, expect the ARM to change fluidly as you receive feedback from the initial drafts, and as you become more assured of the specific architecture representations. After a few revisions, however, the ARM should become more stable. This is not to say that it will never change, but if you’ve developed a properly designed architecture framework model diagram, adding elements to it should not require extreme changes to the core graphic.

The ARM shown below is based on an ARM designed for an innovative hospitality solutions company, and it is reproduced here with their appreciated consent (AltiusPAR Hospitality Solutions—www.altiuspar.com). What should become immediately apparent is that, even though the diagram was initially created for a specific type of industry (Hospitality), it could easily be applied to other industries. Admittedly, I deliberately used generic terms (e.g. Customers instead of Guests), but tailoring and narrowing the specification of this diagram to other industries would be relatively simple.

See how the ARM above visually highlights the following specific aspects of the proposed architecture model:

· External users (Merchants, Web, and Business-to-Business) will be handled via an external access layer.

· The Intranet and internal customer support systems will be placed inside the DMZ.

· All users access the SOA layer to obtain Shopping, Content, and other Services.

· The databases are “hidden” from the service users by the service layer.

· Subsystems such as Campaign Management, Sales Support, Loyalty, etc. must support the external and Intranet users and be accessible from regional systems (presented at the bottom of the cylinder).

· You will be providing Publish/Subscribe services to the field offices, which will only be able to access the core system via VPN.

· There will be support and management systems for all layers in the hierarchy.

The list could go on and on. A good ARM is a little like a good painting: the more you inspect it, the more information you can gather from it. Don’t underestimate the importance of achieving an ARM that can convey all key architecture model aspects in a single diagram. The value it provides by giving focus to the transformation project framework is hard to overstate. Besides, it will make for a great poster for your office wall!


Tuesday, June 30, 2009

The Architecture Scope

Because architectures should strive for flexibility, there is always a danger that, if they are not properly scoped, you may end up with a pile of “architecture” documents that no one will care to read, let alone implement. When defining the architecture it is essential for you and your team to define it with the right scope. If you are to keep it real, boiling the ocean is not an option. (“Let’s boil the ocean to get rid of those German U-boats; the implementation is up to you.”)

For example, when defining the scope, ask yourself if you are trying to address only the customer-facing automation side of your business, or if you will also include the backend support systems [1]. If the latter, chances are you will need to consider integration with your ERP vendor. In any case, you’ll need to define whether to take an end-to-end customer lifecycle perspective or to focus on specific core business processes only.

Should you support international environments? If your company is a conglomerate, will you cover multiple business units or just one particular division? Are you even empowered to define the backend architecture?

Answering these questions requires a candid, non-delusional understanding of whether you have the kind of organizational governance to define far-reaching enterprise architectures or are limited to specific areas under a narrower span of control. That subsidiary headquartered in Paris might not appreciate your trying to dictate their technology!

Understanding your own limitations is essential. Clearly, it would be nice if the scope of the architecture encompassed as much of the “problem-space” as possible, but the fact is that real-life problem-spaces are highly irregular.

Were you to depict a typical problem space using a shape, chances are you would end up with a splattered blob like the one that follows:

Notwithstanding your governance limitations, you could define the architecture with a scope so broad that it covers all possible points of the problem space—like throwing a stick of dynamite into a lake to do some fishing. This “solve-world-hunger” approach will specify a great deal of things that just don’t belong to your specific transformation initiative. While all-encompassing, this ambitious approach will most likely be very expensive and non-implementable.

The next diagram depicts a more realistic approach. The architecture scope here represents a compromise between specifying the largest possible conjunction of common elements in the problem set while ignoring the more “far-out” cases. It is this conjunction that forms the core of enterprise architecture—the main area of focus for the strategic architecture group.

Areas of the problem space not covered by the core architecture can always be handled as exceptions on a case-by-case basis. Who cares if content integration with the three-person art department happens with an 8GB USB stick? And yes, you have to accept the branch office in Namibia entering their one daily purchase order on a typewriter and then faxing the batched weekly orders on a Friday night. When the day comes that Internet connections become cost-effective there, you can then bring them into the fold of your architecture.

Such pragmatic exceptions avoid the need to over-architect databases or data exchange formats and help keep the architecture simpler. Of course, it is best to minimize these special cases as much as possible. Were you to set the scope in too narrow a fashion, then almost everything would become an exception! In addition to assuring yourself hours of the extra work needed by this fly-swatting approach to solving issues, you will also be costing the company more than necessary.

Even though the exceptions should be in the periphery of interest for your architecture team, you should at least be aware of their existence. Better yet, the architecture group should establish preferred guidelines on how best to admit exceptions so that the possibility of their future alignment with the core architecture is not compromised. For instance, establishing the use of a standardized Comma-Separated-Value (CSV) file for “non-traditional” data exchanges (including the art department’s USB stick), or defining the product codes to be typed in the manual purchase orders from Namibia, would at least ensure that exceptions can be better handled in the future.
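As an illustration of the CSV guideline, a small parser can enforce the agreed exchange contract so that today’s exceptions remain alignable with the core architecture later. This Python sketch assumes a hypothetical three-column order format; the column names are invented for the example, not taken from any real specification:

```python
import csv
import io

# Hypothetical agreed-upon exchange contract -- column names are illustrative.
REQUIRED_COLUMNS = ["order_id", "product_code", "quantity"]

def parse_exchange_file(text):
    """Parse a standardized CSV exchange, rejecting files that break the contract."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"missing required columns: {missing}")
    rows = []
    for row in reader:
        if not row["product_code"].strip():
            continue  # tolerate malformed rows from manual (typewriter/fax) entry
        row["quantity"] = int(row["quantity"])
        rows.append(row)
    return rows
```

The point is not the parsing itself but the contract: whether the file arrives on a USB stick or as a transcribed fax, a single agreed format keeps every exception on a path back into the mainstream architecture.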

It’s also worth remembering that the fluid nature of a business requires the ability to append new problem space areas to the original space (imagine when a business merger or company reorganization occurs). Typically, these “out-of-space” problems will have to be handled first on a reactive basis, following the rule that new features are always needed “by yesterday”. However, you should have your team work on integrating these into the mainstream architecture as soon as practical. When this occurs, you will need to start the process of redefining the architecture scope all anew.

But what does the term “Scope” mean in practical terms? What is it precisely that your architecture team should be defining as being either in or out of scope?

The architecture scope should define the layers of technology to use (off-the-shelf or homegrown). Also, the scope should detail the applicability of the architecture as it concerns your organization’s structure, both horizontally (how many departmental functions and geographies it spans) and vertically (what level of detail will the architecture attain? Will you stop at the regional level or define the departmental elements?). In regard to timelines, your scope should go all the way to the long term—there is no such thing as a short-term architecture.

Perhaps you recall that back in the late eighties there was an Open Systems Interconnection (OSI) standard being pushed by the International Organization for Standardization. The specification reflected a design-by-committee approach that failed to take hold when faced with more effective de-facto standards such as TCP/IP, but one of its most enduring legacies was the definition of the seven-layer interoperability stack. This stack divided the various system elements according to the type of service provided by each layer.

This framework can be useful in showing how the IT industry has been traversing the OSI layers toward commoditization. In the early days of IT, much blood was shed in battles centered on which technologies were to be chosen for the networking layers. Architecture scopes had to deal, for example, with whether to use SNA or DECnet, or even whether to implement something proprietary (it happened; one of my deliverables in a project long ago was for a proprietary communications protocol). Today it would be incomprehensible to debate against the use of TCP/IP as the protocol of choice, unless you are in a research organization testing the next big thing. The industry has converged to standards that moved the battleground upwards in the stack; toward OSI’s highest layers.

The diagram below shows the OSI layers. Shown in the horizontal axis are the trends toward standardization. In consequence, these days the focus is on actual application and service specifications; not on the underlying networking layers that, for all practical purposes are now completely commoditized.

For all the criticisms against it (the OSI specification truly became a boiling-the-ocean example[2]), it at least served as a unifying schema. The problem is that the OSI Reference Model is now outliving its usefulness. Despite the industry trend toward standardization and commoditization, one area of “infrastructure” that remains yet to be standardized is the Service Distribution Fabric[3]. SOA today is in a similar state of maturity as the networking world back in the eighties. With the so-called Enterprise Service Bus (ESB) we have vendors resisting standardization measures in order to sell their own proprietary solutions and gain competitive advantage. Just as happened with TCP/IP, the world awaits a vendor-independent SOA standards specification.

More work is needed to define the detailed structure standards for the Presentation and Application layers. This burden has been taken up by a variety of organizations such as the W3C, the OMG (Object Management Group), the Java Community Process, the IEEE, and many others, but we are not yet at a point where we can say that application and presentation layer standards have been universally adopted (Web browsers are still not guaranteed to display content the same way). Still, I have no doubt the industry will continue to converge towards more standards for SOA and the upper application development layers.

While that happens, your role will be to apply your know-how to select, from the range of available technologies and standards, the solutions that best meet your specific needs. Defining the scope of the architecture, even when “only” focusing on the highest layers of the OSI stack, remains a challenging and fundamental proposition.

Life is still fun!


[1] As per my definition, a pure backend ERP project does not an IT Transformation project make, but there can be super-duper projects that require front-facing and backend integration.

[2] Ironically, the OSI standard is an example of an architecture scope run amok. Its broadness made it extremely difficult to implement. In fact, the standard became so bloated that various “implementation subsets” were defined. The irony of that compromise was that if you had Vendor A implementing subset X, there were no guarantees it could interoperate with Vendor B implementing subset Y! The whole point of having a standard in order to ensure vendor interoperability was lost!

[3] This is commonly referred to as Enterprise Service Bus, but since we are not yet at that point where vendors even agree on what the ESB term really encompasses, I prefer to use the more generic term: Service Distribution Fabric.
