IT Transformation with SOA: August 2009

Friday, August 28, 2009

An SOA Taxonomy

It is one thing to say SOA is the most natural way to architect systems; another to figure out how SOA should be implemented. The way we define the SOA structure (its “taxonomy”) is important because it has a direct impact on the best organizational governance needed for its successful use.

While there are many ways to split the SOA-cake, in my experience it makes sense to borrow from the world around us. After all, we have already established that there is nothing new under the sun when it comes to SOA, and that humans have been using the service model quite naturally for thousands of years. Also, humanity has tested a variety of social structures that allegedly have improved over time. It’s easy to be cynical when looking at the issues facing us today, but feudal structures are no longer considered appropriate (at least by most of us in western societies), and sacrificing prisoners to appease the gods is frowned upon these days. It makes sense to look at how we operate today as a possible model for defining a proper SOA taxonomy.

Let’s start with the use of language (human language, mind you; not the computer-programming kind). Think of sentences with their nouns and predicates, and the concept of a service interface emerges naturally. “Give me the lowest fare for a flight to New York in April,” or “Find the address for Mr. John Jones,” are service requests you could make equally well to your assistant or to a software program.
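To make the idea concrete, here is a minimal sketch (in Python; every name is something I’ve made up for illustration, not part of any real system) of what such a request might look like once it is expressed as a service interface. The caller states the what; the interface defines how the request must be phrased:

```python
from dataclasses import dataclass


# The "sentence": a structured, well-formed request.
@dataclass
class FareQuery:
    destination: str
    travel_month: int  # 1-12, e.g. 4 for April


# The "grammar" of the exchange: any implementation, human or program,
# could stand behind this interface and answer the request.
class FareService:
    def lowest_fare(self, query: FareQuery) -> float:
        """Return the lowest available fare for the requested trip."""
        raise NotImplementedError


request = FareQuery(destination="New York", travel_month=4)
```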

Just as a sentence has to follow specific grammatical rules, the way we articulate a request follows an implied structure intended to guarantee that the request is understood and that it is, in fact, actionable. Satisfying the service requires an agreement on how the service request will be handled and an expectation of who will act on it.

Acting upon language instructions would be impossible without a specialization framework. When you call for a taxi, you don’t expect a hot-dog cart to show up, and if you need to summon emergency services, you dial 911 and not the pizza delivery service (you can tell I am kind of hungry as I write this!)

In fact, much of the social fabric related to the various roles and institutions usually referred to as ‘civilization’ is nothing more than a broad framework for the services we deliver or receive.

The streets we traverse, the sewage and plumbing systems beneath them, and the way electricity is delivered via power grids are infrastructure elements we all take for granted in support of the ‘civilization’ framework.

Finally, requesting services via the right language, using the means of civilization and the needed infrastructure, would be all for naught if we were only capable of uttering nonsensical commands (‘ride me to the moon on your bike’), or of making requests not followed by the magic words (‘here is my credit card’). There are protocols that must be followed to ensure the entire system works as expected, and these represent the social and legal ecosystem from which everything else flows.


This is, in essence, the SOA taxonomy I will be discussing in more detail next. The elements encompassed by SOA are no different from the Language-Civilization-Infrastructure-Protocols pattern I’ve just discussed. For SOA, the equivalent pattern is Services-Framework-Foundation-Techniques:

a. The Services as the language of SOA. Ordering a meal at McDonald’s is not the same thing as ordering at a five-star restaurant (yes, I’m still hungry!). There are services and then there are services. A clear understanding of what constitutes a service is essential to the SOA approach.

b. The emerging SOA Framework as the civilization. Approaching SOA with the same mindset as a traditional design is not feasible. SOA demands the establishment of new actors and their roles. Here I’ll discuss the proposed introduction of a common enterprise service bus (ESB) as a potential common transport for all SOA interactions, and the guidelines governing how services access data.

c. The physical Foundation. SOA is a beautiful approach, but it still relies on actual wires and moving bits and bytes—a suitable infrastructure. Here I will cover the distributed model, and the systemic approaches to scaling and managing SOA.

d. The Techniques needed to make SOA work. Imagine that you are given the opportunity to race against a professional NASCAR driver using a car superior to the professional’s. Or that you are to compete against Tiger Woods in a round of golf using a better golf club. Odds are that, even with better equipment, you won’t win. Ultimately, you can have the best equipment, but it’s the way you use it that makes the difference in the results. Good equipment, like good infrastructure and services, can only be leveraged with appropriate techniques that only expert hands and minds can apply.

Next, let’s dive into the services . . .


Thursday, August 20, 2009

The SOA Distributed Processing Pattern

It’s said that one of the keys to human intelligence is the capacity for abstract thought and the instinct to rely on patterns. By expediently matching new situations against a “library” of pre-existing patterns normally referred to as “experience,” humans have been able to react more quickly in the face of new challenges. The sky is covered with dark clouds? No matter the shape of the clouds, their darkness and conglomeration indicate a storm is on its way. A large animal growls and salivates as it menacingly stares at you? I doubt you will stop to investigate what this is all about. If you did, your chances of reproducing would be as low as those of an ascetic monk. There’s no question that pattern recognition has been a key to our survival as a species.

Patterns have hierarchies, and the highest-level pattern hierarchy deals with the overall system structure. I will discuss the use of patterns specific to SOA later, but first I want to discuss the broader Distributed Processing Pattern, because the introduction of SOA has forced a rethink of how this pattern is defined. Just as a typical DMV office has the frowning employee at the window, the sullen clerk riffling and stamping papers in the back, and the rack of files along the back wall, most traditional distributed-systems models have converged on a pattern consisting of these three tiers (sketched in code after the list):

1. A Presentation tier which displays the program’s output and allows the user’s input.

2. A Business-Process tier that deals with the “heart” of the application. The actual business rules and processes are performed here.

3. The Data tier. Applications take the user’s request via the presentation tier, process it within the business-processing tier, and interact with data as appropriate.
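Here is a minimal, purely illustrative sketch of that separation (Python; the classes and data are invented for this example, and in a real deployment each tier would live in its own codebase or machine):

```python
# Data tier: owns persistence and nothing else.
class DataTier:
    def __init__(self):
        self._customers = {"jones": {"name": "John Jones", "address": "10 Main St"}}

    def get_customer(self, customer_id):
        return self._customers.get(customer_id)


# Business-process tier: owns the rules.
class BusinessTier:
    def __init__(self, data):
        self.data = data

    def customer_address(self, customer_id):
        record = self.data.get_customer(customer_id)
        return record["address"] if record else None


# Presentation tier: owns input and output only.
def presentation(business):
    print(business.customer_address("jones"))


presentation(BusinessTier(DataTier()))
```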

This three-component pattern has traditionally been referred to as the 3-Tier architecture. Furthermore, traditional proponents of distributed processing use the 3-Tier architecture term to physically map each of the parts to actual distributed components. In this very literal interpretation of the model, the desktop devices perform presentation functions, and an intermediate server computer does some processing and then accesses data, usually via SQL or stored procedures. This fixed distributed model is typical of what was originally promoted by database vendors as part of their preferred architectural model (e.g., Oracle Forms using PL/SQL). The problem with this view of distributed processing is that it takes such a physical view of the distributed system that it soon becomes very static and inflexible, failing to accommodate new technology capabilities.

Because PCs emerged outside the realm of the mainframe priesthood, the sad reality is that, just as with a very intelligent blonde (not an oxymoron, all joking aside!) desperately trying to get a date, PCs had to sneak into the corporate world by pretending to be dumb terminals, a good fit within the static boundaries of a traditional presentation device. Also, while old intermediate systems were mostly used as communication switches or specialized gateways, the physical view of the 3-Tier model tended to treat today’s servers as mere database front-ends. Things have changed significantly. “Access” devices such as today’s personal computers and wireless devices like your phone have tremendous power. The traditional 3-Tier view can’t accommodate their broader use.

Whereas the traditional distributed processing pattern separated processing into three physical tiers (presentation, business processing, and data), in reality, data rarely resides in a single source, and business processes cannot always be executed from a single server. Also, in real life, computation can take place anywhere, and even though organizations tend to be hierarchical, the actual business flows look more like a network than a strict hierarchy.

If SOA is to mirror this meshed topology then we must shift the paradigm somewhat. A proper SOA design should support true distributed environments; not just three tiers, but rather an n-Tier meshed topology with an intrinsic 3-Layer logical pattern.

The fundamental distributed pattern with SOA is that there are three layers; and multiple tiers—something I describe as the n-Tier/3-Layer SOA Distributed Processing pattern. The shift from Tiers to Layers has important implications: the layers in SOA are logical and are not meant to directly represent the underlying physical systems.

A typical SOA scenario is shown below:

[Diagram: presentation tiers P1 and P2, business processes B1, B2 (two instances), and B3, and databases D1 and D2, connected in a meshed n-Tier/3-Layer arrangement.]
This n-Tier/3-Layer pattern exists independently of the actual number of computers or entities. For example, imagine that the service pattern above depicts airport kiosks displaying flight information. The user inputs the desired airline via the touch-screen terminal P1. This entry originates a service request to business process B1. Business process B1 logs the request by calling an authentication and logging service that front-ends database D1. Once the request has been authenticated, B1 requests the assistance of business process B2 (either one of the two B2’s shown). Process B2 may call on B3 for as many services as needed. It then extracts the flight information for the selected airline by calling the service front-ending database D2. Finally, B2 returns the information to B1, which then passes the result on to P1 to output the requested flight information.
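To make the walkthrough easier to follow, here is a bare-bones sketch of that flow in Python. Every function name and return value is invented for illustration, and the real services would of course be remote calls rather than local functions:

```python
def d1_log_and_authenticate(request):    # service front-ending database D1
    return True                           # pretend the request checks out

def d2_flight_lookup(airline):            # service front-ending database D2
    return [("UA 123", "On time"), ("UA 456", "Delayed")]

def b3_auxiliary(airline):                # one of the helper services B2 may call
    return airline.strip().upper()

def b2_flight_info(airline):              # business process B2
    normalized = b3_auxiliary(airline)
    return d2_flight_lookup(normalized)

def b1_handle_request(airline):           # business process B1
    if not d1_log_and_authenticate(airline):
        return []
    return b2_flight_info(airline)

def p1_kiosk(airline):                    # presentation tier P1 (the touch screen)
    for flight, status in b1_handle_request(airline):
        print(flight, status)

p1_kiosk("united")
```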

When dealing with this level of system design, little is assumed about the physical nature of the environment. It might well be that, initially, all business processes depicted (B1, B2, B3) execute in the same machine in which databases D1 and D2 reside. A second instance could have B2 running in a separate server, and so on.

The system can be scaled up by allowing the deployment of multiple service instances on different systems. Multiple instances also happen to improve the system robustness. The presentation services P1 and P2 may or may not reside in separate computers (remember the transparency tenets discussed earlier). Furthermore, assume there is an increase in the number of transactions going to the computer handling the business processes, and so we now wish to move business processes B2 and B3 to another machine. No problem. A key attribute of the n-Tier/3-Layer service oriented pattern is that there is no need to change applications when deploying services in separate computers.

Say we find a vendor offering a cheaper and faster way to do things than our own B3 service. No problem. B3 can then run from the external vendor’s system. As a final note, you may have noticed that not once have I mentioned whether these computers run Microsoft or Linux software, or whether they are mainframes or PCs. Why not? Because, under the technology transparency tenet, the client shouldn’t care what platform a service runs on.
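One minimal way to picture this (a hedged sketch; the registry, names, and URLs below are all hypothetical) is to have callers resolve services by name rather than by hard-coded location, so relocating B3 becomes a configuration change rather than an application change:

```python
# Callers ask for "B3" by name; where it actually runs is configuration.
SERVICE_REGISTRY = {
    "B3": "http://internal-host:8080/b3",
}

def resolve(service_name):
    return SERVICE_REGISTRY[service_name]

# Moving B3 to the external vendor: change the entry, not the callers.
SERVICE_REGISTRY["B3"] = "https://vendor.example.com/b3"
print(resolve("B3"))
```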

The concept of Cloud Computing, based on computer infrastructure available as a virtualized computing service via Internet-like mechanisms, is emerging as one of IT’s future directions. The idea is that the higher penetration of standards and the convergence of technologies are driving commoditization to the point that we don’t much care what kind of technology provides the service we receive. Having an n-Tier/3-Layer pattern is a necessary (but not sufficient) condition for your solution to eventually garner the benefits of cloud computing.

Having said this, while it’s easy to appreciate the flexibility that this type of architecture provides, keep in mind that it does have its drawbacks! For starters, there is overhead in extra processing and in message-delivery latency; this type of architecture is designed for flexibility, not performance. On the plus side, a smart service-oriented design can optimize the way services are called and how data is passed between components via judicious use of caching techniques. Secondly, n-Tier/3-Layer can be complex, especially when deployed in a distributed fashion; SOA demands an extra focus on management and control. Thirdly, you’ll need to tighten your deployment guidelines or you might end up with a zoo of redundant services, just like when you see a traffic cop directing traffic even though the traffic lights are working just fine. Lastly, we began with patterns and we end with a reaffirmation of their necessity.

A meshed system like the one shown has an exponential number of combinations, and it would not make sense to try and architect specific SOA arrangements over and over. Instead, the industry has now defined a series of SOA patterns that system architects can apply. Managing and taming the complexity of an SOA solution demands a disciplined use of patterns.
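Before moving on, here is one way (of many) the caching point above could play out in practice. This is a hedged sketch with invented names, caching a service response for a short time so that repeated identical requests don’t cross the network again:

```python
import time
from functools import wraps

def cached_service(ttl_seconds=30):
    """Cache a service call's result for a short time to cut round trips."""
    def decorator(call):
        store = {}
        @wraps(call)
        def wrapper(*args):
            now = time.time()
            if args in store and now - store[args][0] < ttl_seconds:
                return store[args][1]      # served from the local cache
            result = call(*args)           # the real (remote) service call
            store[args] = (now, result)
            return result
        return wrapper
    return decorator

@cached_service(ttl_seconds=60)
def get_flight_status(flight_number):
    # Imagine a network call to the flight-information service here.
    return "On time"

print(get_flight_status("UA 123"))   # goes to the service
print(get_flight_status("UA 123"))   # answered from the cache
```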

The story of how to make SOA work in the face of these challenges will be my next topic.


Thursday, August 13, 2009

Advantages of the Service Oriented View

If SOA is so “old,” why do we still have all this excitement about service oriented architecture applied to IT? Isn’t SOA an obvious choice anyway? In fact, the history of computer science has always been about movement from the very complex to the obvious. While in the pioneering days of computing it took a John von Neumann to work out programming concepts, these days even an Alfred E. Neuman could easily write a decent program. The reason the “obvious” solution was not used in earlier decades is that “obvious” solutions require more complex technologies (recall my earlier discussion on the preservation of complexity). Think of the electronic spreadsheet, invented in the late ’70s (VisiCalc). In hindsight this invention seems obvious. Why then wasn’t the electronic spreadsheet invented even earlier? If we take a look at the character-oriented computer screens prevalent in the ’70s, it’s clear that a spreadsheet model would not have been adequate for the teletype-like devices of that era. It took cheaper bit-mapped displays for the electronic spreadsheet idea to become viable.

What’s obvious these days is that implementing systems with SOA can lead to better solutions as long as the inherent issues of SOA are tamed and appropriate service interfaces and service management governances are established. In this context, a service-oriented view provides many advantages:

1. Allows a direct mapping to a business perspective. SOA allows the implementation of solutions that directly mirror the business processes. In the past, system designers had to translate business processes into computer-driven structures. System implementations based on awkward mappings of business requirements did work; however, because the most natural way to describe business processes and organizational flows is through a consumer view, and because older computers couldn’t cope with these views, the resulting IT systems almost always ended up being difficult to use.

In fact, SOA’s facilitation of a direct mapping with the business is key to the emergence of higher-layer business tools such as Business Process Modeling (BPM). BPM represents an even higher level of abstraction of automation, which can still be used to dynamically generate software solutions. The capability to hand the definition of business processes to actual business users, and then have these definitions generate actual applications, is only feasible when those processes can call predefined services.

Without SOA there is no BPM.
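As a hedged illustration of why (every name below is hypothetical, and a real BPM tool would use a richer notation than a Python list), here is a business process described as data and executed purely by calling predefined services:

```python
# Predefined services the process is allowed to call.
SERVICES = {
    "check_credit":  lambda order: {**order, "credit_ok": True},
    "reserve_stock": lambda order: {**order, "reserved": True},
    "send_invoice":  lambda order: {**order, "invoiced": True},
}

# The "model" a business user might edit: just an ordered list of service names.
ORDER_PROCESS = ["check_credit", "reserve_stock", "send_invoice"]

def run_process(process, order):
    for step in process:
        order = SERVICES[step](order)   # each step is a call to an existing service
    return order

print(run_process(ORDER_PROCESS, {"id": 42}))
```

Take the services away and the process definition has nothing left to execute.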

2. Enables Reuse of well-defined logic blocks. This is a Lego approach. Just as a building contractor assembles the skills needed to build a house, services can, in turn, call services. A general-purpose service can be re-used by several applications.

Admittedly, reuse is not a new concept. It has been the holy-grail of computer science for many years. For example, in the early days code reuse was sought via definition of macros or subroutines—chunks of source code that could be embedded into the application.

While this approach had the advantage of reusing source code that provided generic functions (“Convert Data,” “Hash a Table,” etc.), it did have its issues. First of all, programmers would sometimes tweak the library code to better match their requirements, making the code non-reusable. Also, as new programming languages emerged, these libraries became outdated and could no longer be used. This is not to say the use of macros is no longer valid. The use of macros or functions for repetitious snippets of code is still a recommended best practice, but only within the confines of a single application.

In time, other code reuse techniques emerged. The most important of these were the linkable libraries. Unlike macros, the programmer could use the library without having to know the source code, thus gaining some degree of protection against improper changes. However, as with macros, these static libraries became embedded in the executable code, using more memory and ultimately forcing programs to be updated whenever the libraries changed.

Enter the Dynamic Libraries. The familiar Windows DLL (Dynamic Link Library) is basically a library that becomes dynamically linked to a program, pretty much on demand. However, dynamic libraries require the use of a specific running environment (i.e. MS/Windows) making them viable only when executed in the same system as the calling application.

Dynamic Libraries represented a good step forward, and indeed they have been extremely popular as commercially available add-ons for development tools such as Visual Basic and more broadly under MS/Windows frameworks. Still, what if you wanted to use a dynamic library from a different platform? What if you could place dynamic libraries into any system and be able to call them from any platform?

Enter the concept of Services!

Just as with DLLs, you can acquire and use external tools and services, but more fundamentally, because of the loose coupling, you can run your service in a system completely different from your own. You can call a Linux service from a Windows application, or call a service located somewhere in a cloud.
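As a hedged illustration (the endpoint and message format below are entirely made up), this is roughly all a caller needs when the coupling is loose: a URL and an agreed message format, with no knowledge of the platform that sits behind them:

```python
import json
from urllib import request

def call_address_service(last_name):
    payload = json.dumps({"last_name": last_name}).encode("utf-8")
    req = request.Request(
        "https://services.example.com/directory/lookup",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:   # the service could run on Linux, Windows, or a cloud
        return json.loads(resp.read())

# call_address_service("Jones")  # would return, say, {"address": "..."}
```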

SOA is all about transparency . . .

3. SOA is the foundation for transparency. If you call a help desk these days, chances are that the person at the other end of the line is based in a foreign country. Thanks to the lower communication costs and the benefits provided by educational standards and globalization, companies benefit by sourcing the services wherever they are the most cost effective. Likewise, you can run services anywhere and have them accessed by any authorized user. SOA allows you to place a function where it makes the most sense. But, since what makes sense today may not make sense tomorrow, SOA is also about allowing change with a minimum of effort. We can finally decouple the way we logically partition functions from the way we deploy physical computer systems. Service Oriented Architectures should provide as many of the following transparency tenets as possible:

· Access Transparency. Provide the ability to access the system from different devices and mechanisms.

· Failure Transparency. Design the system with automatic fallback when a service fails, without affecting the application (see the failover sketch after this list).

· Location Transparency. The ability to deploy the system in any location.

· Migration Transparency. Allow minimum or no impact to the existing system when upgrading service implementations.

· Persistence Transparency. If the desired service has not been used, you can load it automatically. If it has been used before you can reuse the code already resident in memory.

· Relocation Transparency. The system should allow you to move a service from machine A to machine B without impacting clients.

· Replication Transparency. The system should be able to provide the same service from different locations. This supports failure transparency and it can also be used to increase performance via horizontal scalability.

· Technology Transparency. As long as you get the service to do what the service is meant to do, the client shouldn’t care about the technology used to implement the service.
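To make the failure and replication tenets concrete, here is a minimal, hedged sketch (all endpoints and the transport are hypothetical) of how a caller can see right through a failed instance by trying replicas in turn:

```python
REPLICAS = [
    "http://node-a/flights",
    "http://node-b/flights",
    "http://node-c/flights",
]

def call_with_failover(invoke, replicas=REPLICAS):
    """Try each replica until one answers; the caller never notices a dead node."""
    last_error = None
    for endpoint in replicas:
        try:
            return invoke(endpoint)        # any transport could be plugged in here
        except Exception as err:
            last_error = err               # fall through and try the next replica
    raise RuntimeError("all replicas failed") from last_error
```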

These transparency attributes facilitate the integration of legacy systems with new technologies. Since the implementation of the service is hidden from the service consumer, the service-oriented approach enables the integration of older legacy software with emergent software. Once older applications are properly encapsulated under the guise of services, it is also possible to gradually transition a system by re-implementing services one step at a time. This removes the need to undertake risky “big-bang” system migrations. Also, these transparency attributes are what make emerging technologies such as Cloud Computing possible. Without transparency there are no clouds!

4. Simplifies software development by decoupling business processes, decisions, and data. The agility gained from SOA comes from the inherent simplification of the software design. It becomes easier to assign the development of different modules to different groups, and the way the program accesses data is isolated. More importantly, because SOA can mirror the business processes, you can create organizational structures in IT that truly mirror the business structures.

[Diagram: business processes grouped in a box on the right, each calling external decision and data services rather than accessing data directly.]

The diagram above shows this concept pictorially. It’s far easier to decompose the various business processes inside the box to the right and assign each of them, using external services, than to take on responsibility for the way data is accessed or manipulated.

Now that I have sung the praises of SOA, I want to bring us down to reality just a bit. There are certainly many traps and difficulties in using SOA, and it makes sense to be aware of them so that you can use the well-travelled roads of successful past experiences as much as possible. This means that designing SOA is all about figuring out the concept of patterns and the way services fit into these patterns.

That’s next.


Friday, August 7, 2009

SOA as the Solution Architecture

It’s time to move on to the more detailed Level II architecture. There are myriad variations in Level II architectures, and most have probably been tried in the past. Every few years or so, you wake up to find new technologies that promise to solve all your IT ailments: high-level languages, structured programming, fourth-generation languages, CASE tools, object-oriented programming, and an ever-expanding list of software development methodologies. However, given that this is the dawn of a new millennium and that we have over six decades of commercial computing under our belts, I would suggest that not planning to adopt SOA (Service Oriented Architecture) for a new system would be like planning to build a house using straw and mud. Yes, there might be reasons for preferring to build a primitive hut (as a part of a movie set, perhaps?), but in general I’d rather build a house using modern construction materials. Wouldn’t you?

Defining a high-level architecture these days is all about adapting SOA precepts to support the chosen architecture. Even though SOA can aid in simplifying the definition of the Level II architecture, you and your team still have to make the key decisions related to SOA-specific choices. Remember, this is the stage where the architecture moves from the abstract to the more pragmatic. With Level II you will still be high enough up that you won’t need to worry about negotiating the ground level, but at thirty thousand feet you have to keep watch for weather patterns. It is at this stage that you can better discern the horizon, and where true innovation can be applied through unique solutions that ultimately serve as key differentiators in the final deliverable.

The first thing to keep in mind is that SOA is about simplifying the business process automation and not about introducing technology for technology’s sake.

A friend of mine related this anecdote after attending a ceremony celebrating the activation of the first automated phone exchange in a small town in Mexico. As the mayor gave a glowing discourse on how the town was finally “entering the 20th century” and people would now be able to automatically place calls simply by dialing the numbers on their phone, an elderly woman sitting next to him complained, “Automatic? This ain’t automatic! Automatic was when I lifted the receiver and asked Maria, the switchboard lady, to connect me to my daughter!”

She was right. From a user’s perspective, all that matters is the ability to articulate a need in a simple way, and then have the need satisfied by the appropriate service. Alas, this woman will have to wait many years before she once again receives the same level of service she received from Maria. Maybe if she (or her grandkids) programmed her cell phone, she could once again connect to her daughter simply by speaking her name. However, if she ever wanted a connection by voicing something like, “connect me to someone who can fix my stove,” this would require the emergence of software that can truly understand natural language and take intelligent action (don’t get me started on the so-called Interactive Voice Response systems of today!).

The beauty of thinking in terms of services is that you can avoid getting bogged down by how the service is provided. The manner in which the service is provided should be, in the end, immaterial to the person requesting the service. What does matter is that the interface to the service be well defined. If you order a meal in a language that is not understood by the waiter, you can be assured the request either won’t be met or that you will be served a dish full of proteins and fats of unknown origin (something like this actually happened to me in Hong Kong, after I wrongly assumed I had ordered chicken!).

How then do we define SOA? Simply stated, SOA deals with the ability to ask a system to do something (typically a coarse-grained business or system process) without having to tell it HOW to do it. Think about it: when you go to a restaurant and order a dish from the menu in the correct language, you are applying SOA principles. SOA is about abstracting the request so that the business need can be posed directly to the system via a proper interface request.
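In code, the same idea might be sketched like this (a hedged illustration; the interface and both implementations are invented for this post, not taken from any real system). The caller works from the “menu” and never learns how each answer is produced:

```python
from abc import ABC, abstractmethod

# The "menu": the caller states the WHAT.
class QuoteService(ABC):
    @abstractmethod
    def lowest_fare(self, origin: str, destination: str) -> float: ...

# Two possible HOWs, hidden behind the same interface.
class LegacyMainframeQuotes(QuoteService):
    def lowest_fare(self, origin, destination):
        return 412.00   # pretend this screen-scrapes an old reservation system

class PartnerWebServiceQuotes(QuoteService):
    def lowest_fare(self, origin, destination):
        return 399.00   # pretend this calls an external partner over the network

def cheapest(services, origin, destination):
    # The requester never cares HOW each quote was produced.
    return min(s.lowest_fare(origin, destination) for s in services)

print(cheapest([LegacyMainframeQuotes(), PartnerWebServiceQuotes()], "SFO", "JFK"))
```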

In fact, SOA is nothing new. From my perspective, Service Oriented Architecture was actually invented more than ten thousand years ago with the advent of modern civilization. The SOA inventor is unknown, but most assuredly it was some lazy bum trying his best (let’s face it folks . . . it was a he!) to avoid work and pass responsibilities on to others. Specialization resulted in people becoming more competent in their chores, and the framework of rules and processes needed to facilitate this delineation of responsibilities became part of the societal laws we have today. Back then you had a merchant asking a scribe to log a transaction, or a king requesting a priest to plead his case to the gods, or a man of commerce paying someone to carry his produce. Agriculture, war, religion, the construction of temples and edifices, all the core activities we associate with modern human endeavors, are the results of someone doing another’s bidding along the lines of the concepts we now refer to as Service Oriented Architecture. Once SOA became firmly entrenched, there was no turning back. SOA became the paradigm of civilization. As it expanded, it created the specializations and professions we see in today’s world.

So, why wasn’t SOA used in IT systems from the start?

Earlier generations of computer technology did not have enough “juice” to support SOA. RAM was too expensive, disks were too slow, and communication speeds were hilariously sluggish (300 baud[1] was super-fast, and at that speed it would have taken you something like twenty-four hours to download just one song from, say, The Gabe Dixon Band—one of my favorites). Still, information systems had to support business systems and business needed IT, so an implicit compromise was struck. When it came to IT, SOA was abandoned in favor of an approach that forced business to adapt to computers rather than the other way around. For example, computers did not have sufficient storage space to store dates, so only the last two digits of a year were stored. Computers didn’t have the ability to present information in plain English? No problem: only abbreviated codes, upper-case text, and cryptic commands were used.
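As a quick sanity check of that download claim (the assumptions are mine: 300 baud carrying roughly 300 bits per second of data, and a song of about 3.5 MB):

```python
bits_per_second = 300                      # assumed effective throughput at 300 baud
song_bytes = 3.5 * 1024 * 1024             # assumed size of a typical MP3
seconds = song_bytes * 8 / bits_per_second
print(round(seconds / 3600, 1), "hours")   # roughly 27 hours, i.e. about a day
```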

The result is that traditional IT quickly devolved into an assemblage of monolithic processes, inflexible data schemas, and unfriendly interfaces. Eventually, as a consequence of Moore’s Law, computers became more and more powerful, and more capable of tackling increasingly complex tasks. We have now come full circle. Instead of having to adapt to the computer’s limitations, it is the computers that are now expected to adjust to human ways of interaction and even to handle high-level questions and processes. Computers can now provide natural interfaces, follow complex heuristic-driven reasoning logic, and seamlessly tap large amounts of stored information; all in real-time. SOA is the natural way to architect systems, and its use in computer systems is the result of finally being able to effectively mirror business processes with technology thanks to the arrival of powerful computers, cheaper storage, and faster networks. Now, effectively doesn’t mean efficiently. Applying SOA conveys a certain acceptance that we can afford to “waste” computer resources to achieve the flexibility and transparency advantages SOA provides (more on this next week)—something similar to the way we have accepted the performance impact of using a higher level language over machine language programming.

Now, don’t get me wrong, badly implemented SOA can result in major costly failures. Just because we can now afford computational power to better mirror business processes doesn’t mean that resources are infinite, that budgets are boundless, and that the laws of physics can be suspended. In fact, SOA gives you flexibility, and we all know that with flexibility comes plenty more ways to screw up! SOA is not a panacea, but when properly applied, our computer systems can become part of the SOA future, just as SOA has always been a part of our past.




[1] In old modems this was equivalent to 300 bps.
