Software Solutions Developed with
Precision & High Quality

Our Software Methodology

Agile software development

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes tight interactions throughout the development cycle. The Agile Manifesto introduced the term in 2001.

Incremental software development methods have been traced back to 1957. In 1974, a paper by E. A. Edmonds introduced an adaptive software development process. Concurrently and independently the same methods were developed and deployed by the New York Telephone Company's Systems Development Center under the direction of Dan Gielan. In the early 1970s, Tom Gilb started publishing the concepts of Evolutionary Project Management (EVO), which has evolved into Competitive Engineering. During the mid to late 1970s Gielan lectured extensively throughout the U.S. on this methodology, its practices, and its benefits.

So-called lightweight agile software development methods evolved in the mid-1990s as a reaction against the heavyweight waterfall-oriented methods, which were characterized by their critics as being heavily regulated, regimented, micromanaged and overly incremental approaches to development. Proponents of lightweight agile methods contend that they are a return to development practices that were present early in the history of software development.

Early implementations of agile methods include:

    • Rational Unified Process (1994),
    • Scrum (1995),
    • Crystal Clear,
    • Extreme Programming (1996),
    • Adaptive Software Development,
    • Feature Driven Development, and
    • Dynamic Systems Development Method (DSDM) (1995).

These are now collectively referred to as agile methodologies, after the Agile Manifesto was published in 2001.

    Agile Manifesto

    In February 2001, 17 software developers met at the Snowbird resort in Utah to discuss lightweight development methods. They published the Manifesto for Agile Software Development to define the approach now known as agile software development. Some of the manifesto's authors formed the Agile Alliance, a nonprofit organization that promotes software development according to the manifesto's principles.

    The Agile Manifesto reads, in its entirety, as follows:
    We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more.

    Agile development is focused on quick responses to change and continuous development.

    The Agile Manifesto is based on twelve principles:

    • Customer satisfaction by rapid delivery of useful software
    • Welcome changing requirements, even late in development
    • Working software is delivered frequently (weeks rather than months)
    • Working software is the principal measure of progress
    • Sustainable development, able to maintain a constant pace
    • Close, daily cooperation between business people and developers
    • Face-to-face conversation is the best form of communication (co-location)
    • Projects are built around motivated individuals, who should be trusted
    • Continuous attention to technical excellence and good design
    • Simplicity, the art of maximizing the amount of work not done, is essential
    • Self-organizing teams
    • Regular adaptation to changing circumstances

    Waterfall Development

    The Waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The first formal description of the method is often cited as an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in this article.

    The basic principles are:

    Project is divided into sequential phases, with some overlap and splashback acceptable between phases. Emphasis is on planning, time schedules, target dates, budgets and implementation of an entire system at one time.

    Tight control is maintained over the life of the project via extensive written documentation, formal reviews, and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase.

    The Waterfall model is a traditional engineering approach applied to software engineering. It has been widely blamed for several large-scale government projects running over budget, over time and sometimes failing to deliver on requirements due to the Big Design Up Front approach. Except when contractually required, the Waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development.

    Prototyping :

    Software prototyping is the activity of creating prototypes during software development, i.e., incomplete versions of the software program being developed.

    The basic principles are:
    Not a standalone, complete development methodology, but rather an approach to handle selected parts of a larger, more traditional development methodology (i.e. incremental, spiral, or rapid application development (RAD)).
    Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.

    User is involved throughout the development process, which increases the likelihood of user acceptance of the final implementation. Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the users’ requirements.

    While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.

    A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.

    Incremental Development

    Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.

    The basic principles are:
    A series of mini-Waterfalls are performed, where all phases of the Waterfall are completed for a small part of a system, before proceeding to the next increment, or

    Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development of individual increments of a system, or

    The initial software concept, requirements analysis, and design of architecture and system core are defined via Waterfall, followed by iterative Prototyping, which culminates in installing the final prototype, a working system.

    The Spiral Model

    The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. It is a meta-model, a model that can be used by other models. The basic principles are:

    Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.

    "Each cycle involves a progression through the same sequence of steps, for each part of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."

    Each trip around the spiral traverses four basic quadrants:

    • Determine the objectives, alternatives, and constraints of the iteration.
    • Evaluate alternatives; identify and resolve risks.
    • Develop and verify the deliverables from the iteration.
    • Plan the next iteration.

    Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle with review and commitment.
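As an illustration only, one trip around the four quadrants can be sketched as a short loop; every name and data structure below is hypothetical, invented for this sketch rather than drawn from any real tool:

```python
# Illustrative sketch of one spiral-model cycle; all names and the
# risk arithmetic are hypothetical placeholders for this example.

def spiral_cycle(iteration, risk_threshold=0.5):
    """One trip around the spiral: objectives, risk, build, plan."""
    # Quadrant 1: determine objectives, alternatives, and constraints.
    objectives = [f"objective-{iteration}.{n}" for n in range(2)]
    # Quadrant 2: evaluate alternatives; identify and resolve risks.
    risk = 1.0 / (iteration + 1)      # pretend risk shrinks each cycle
    if risk > risk_threshold:
        objectives = objectives[:1]   # descope to reduce risk
    # Quadrant 3: develop and verify deliverables for this iteration.
    deliverables = [obj + "-verified" for obj in objectives]
    # Quadrant 4: plan the next iteration.
    return {"deliverables": deliverables, "next_iteration": iteration + 1}

result = spiral_cycle(0)
print(result["next_iteration"])  # 1
```

Each call represents one revolution of the spiral; the returned plan feeds the next cycle, mirroring the review-and-commitment step described above.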

    Rapid Application Development

    Rapid application development (RAD) is a software development methodology that uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements. RAD is not appropriate when technical risks are high.


    Rapid Application Development (RAD) is a term originally used to describe a software development process first developed and successfully deployed during the mid-1970s by the New York Telephone Co's Systems Development Center under the direction of Dan Gielan. Following a series of remarkably successful implementations of this process, Gielan lectured extensively in various forums on the methodology, practice, and benefits of this process.

    RAD involves iterative development and the construction of prototypes. In 1990, in his book RAD, Rapid Application Development, James Martin documented his interpretation of the methodology. More recently, the term and its acronym have come to be used in a broader, general sense that encompasses a variety of methods aimed at speeding application development, such as the use of software frameworks of varied types, such as web application frameworks.

    Rapid application development was a response to processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method and other Waterfall models. One problem with previous methodologies was that applications took so long to build that requirements had changed before the system was complete, resulting in inadequate or even unusable systems. Another problem was the assumption that a methodical requirements analysis phase alone would identify all the critical requirements. Ample evidence attests to the fact that this is seldom the case, even for projects with highly experienced professionals at all levels.

    Starting with the ideas of Brian Gallagher, Alex Balchin, Barry Boehm and Scott Shultz, James Martin developed the rapid application development approach during the 1980s at IBM and finally formalized it by publishing a book in 1991, Rapid Application Development.

    Four phases of RAD:

    Requirements Planning phase -

    combines elements of the system planning and systems analysis phases of the Systems Development Life Cycle (SDLC). Users, managers, and IT staff members discuss and agree on business needs, project scope, constraints, and system requirements. It ends when the team agrees on the key issues and obtains management authorization to continue.

    User design phase -

    during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes, inputs, and outputs. The RAD groups or subgroups typically use a combination of Joint Application Development (JAD) techniques and CASE tools to translate user needs into working models. User Design is a continuous interactive process that allows users to understand, modify, and eventually approve a working model of the system that meets their needs.

    Construction phase -

    focuses on program and application development tasks similar to the SDLC. In RAD, however, users continue to participate and can still suggest changes or improvements as actual screens or reports are developed. Its tasks are programming and application development, coding, unit-integration and system testing.

    Cutover phase -

    resembles the final tasks in the SDLC implementation phase, including data conversion, testing, changeover to the new system, and user training. Compared with traditional methods, the entire process is compressed. As a result, the new system is built, delivered, and placed in operation much sooner.



    Since rapid application development is an iterative and incremental process, it can lead to a succession of prototypes that never culminate in a satisfactory production application. Such failures may be avoided if the application development tools are robust, flexible, and put to proper use. This is addressed in methods such as the 2080 Development method or other post-agile variants.

    Furthermore, there is a phenomenon behind a failed succession of unsatisfactory prototypes: End-users intuitively and primarily look to the Graphical User Interface (GUI) as a first measure of software qualities. The faster they "see" something working, the more "rapid" they perceive the development cycle to be. This is because the GUI has a strong natural directly viewable presence, while other problem domain code must be deduced or inferred, going largely unnoticed.

    As programmers are generally software end-users first and programmers later, this natural gravitation towards "seeing" is ingrained from the beginning (e.g. "Hello World" is not about the data or business domain, but about "seeing" something on a screen). Programmers without clear training in this deceptive quality of software also tend to measure how "rapid" RAD is by how quickly the GUI progresses.

    The tendency is further strengthened by the fact that end-users hold the financial reward programmers seek, leading programmers to cater to the measurement mind-set of the end-user rather than a more suitable and realistic alternative. Consequently, many software development tools also focus primarily on the application GUI. This tendency is reinforced by programmers who in turn become RAD tool producers; they too have a strong financial interest and cater to the software development community's GUI-centric model.

    Finally, an anecdotal review of RAD tools reveals this strong GUI focus: "Hello World" and other GUI-centric examples abound, while data or business problem domain examples are scarce.

    Ultimately, the triad of players (end-users, programmers, tool-developers), each with a strong and misplaced focus on the GUI aspects of software applications, leads to a cultural underemphasis of the bulk of what the software text is actually about: the business and data problem domains. Furthermore, the culture of RAD (and indeed of software in general), with its central GUI emphasis, presents a multifaceted problem for engineers who recognize the problem and seek to overcome it and foster the opposite emphasis among end-users, developers and tools.

    First, there is the lack of rapid application development tools that emphasize the appropriate problem domain. Next, appropriately trained engineers have the task of training and educating end-users to see the counter-intuitive data and business code as the appropriate measure of how well or rapidly a software product is being developed. Finally, there is a lack of project engineers who understand the misplaced problem domain and are trainable and capable to the task of reorienting themselves to the counter-intuitive viewpoint.

    Practical Implications

    When organizations adopt rapid development methodologies, care must be taken to avoid role and responsibility confusion and communication breakdown within a development team, and between team and client. In addition, especially in cases where the client is absent or not able to participate with authority in the development process, the system analyst should be endowed with this authority on behalf of the client to ensure appropriate prioritisation of non-functional requirements. Furthermore, no increment of the system should be developed without a thorough and formally documented design phase.

    Agile Unified Process (AUP)

    Agile Unified Process (AUP) is a simplified version of the IBM Rational Unified Process (RUP) developed by Scott Ambler. It describes a simple, easy to understand approach to developing business application software using agile techniques and concepts yet still remaining true to the RUP. The AUP applies agile techniques including test driven development (TDD), Agile Modeling, agile change management, and database refactoring to improve productivity.

    Unlike the RUP, the AUP has only seven disciplines:

    Model :

    Understand the business of the organization, the problem domain being addressed by the project, and identify a viable solution to address the problem domain.


    Implementation :

    Transform model(s) into executable code and perform a basic level of testing, in particular unit testing.

    Test :

    Perform an objective evaluation to ensure quality. This includes finding defects, validating that the system works as designed, and verifying that the requirements are met.


    Deployment :

    Plan for the delivery of the system and execute the plan to make the system available to end users.

    Configuration Management:

    Manage access to project artifacts. This includes not only tracking artifact versions over time but also controlling and managing changes to them.

    Project Management:

    Direct the activities that take place within the project. This includes managing risks, directing people (assigning tasks, tracking progress, etc.), and coordinating with people and systems outside the scope of the project to be sure that it is delivered on time and within budget.

    Environment :

    Support the rest of the effort by ensuring that the proper process, guidance (standards and guidelines), and tools (hardware, software, etc.) are available for the team as needed.


    The Agile UP is based on the following philosophies:

    • 1. Your staff knows what they're doing: People are not going to read detailed process documentation, but they will want some high-level guidance and/or training from time to time. The AUP product provides links to many of the details, if you are interested, but doesn't force them upon you.
    • 2. Simplicity: Everything is described concisely using a handful of pages, not thousands of them.
    • 3. Agility: The Agile UP conforms to the values and principles of agile software development and the Agile Alliance.
    • 4. Focus on high-value activities: The focus is on the activities that actually count, not every possible thing that could happen on a project.
    • 5. Tool independence: You can use any toolset you want with the Agile UP; the recommendation is to use the tools best suited for the job, which are often simple tools.

    You'll want to tailor the AUP to meet your own needs.

    Extreme Programming

    Extreme Programming (XP) is a software development methodology intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles, introducing checkpoints where new customer requirements can be adopted and aiming to improve productivity.

    Other elements of Extreme Programming include: programming in pairs or doing extensive code review, unit testing of all code, avoiding programming of features until they are actually needed, a flat management structure, simplicity and clarity in code, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers. The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels.


    Extreme Programming Explained describes Extreme Programming as a software-development discipline that organizes people to produce higher-quality software more productively.

    XP attempts to reduce the cost of changes in requirements by having multiple short development cycles, rather than a long one. In this doctrine, changes are a natural, inescapable and desirable aspect of software-development projects, and should be planned for, instead of attempting to define a stable set of requirements.

    Extreme programming also introduces a number of basic values, principles and practices on top of the agile programming framework.


    XP describes four basic activities that are performed within the software development process: coding, testing, listening, and designing. Each of those activities is described below.


    Coding :

    The advocates of XP argue that the only truly important product of the system development process is code: software instructions that a computer can interpret. Without code, there is no working product.

    Coding can also be used to figure out the most suitable solution. Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem, or finding it hard to explain the solution to fellow programmers, might code it in a simplified manner and use the code to demonstrate what he or she means. Code, say the proponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.


    Testing :

    Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of testing can eliminate many more flaws.
    Unit tests determine whether a given feature works as intended. A programmer writes as many automated tests as they can think of that might "break" the code; if all tests run successfully, then the coding is complete. Every piece of code that is written is tested before moving on to the next feature.
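As a minimal sketch of this test-first style, using Python's built-in unittest module (the function, its behavior, and all names are invented for illustration):

```python
# Hypothetical example of XP-style unit testing: tests are written to
# try to "break" the code, and the feature is done only when all pass.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rejecting nonsense inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_breaks_loudly(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Run with a single command such as `python -m unittest` from the project root; if any test fails, the piece of code is not yet complete.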

    Acceptance tests verify that the requirements as understood by the programmers satisfy the customer's actual requirements.

    System-wide integration testing was encouraged, initially, as a daily end-of-day activity, for early detection of incompatible interfaces, to reconnect before the separate sections diverged widely from coherent functionality. However, system-wide integration testing has been reduced, to weekly, or less often, depending on the stability of the overall interfaces in the system.


    Listening :

    Programmers must listen to what the customers need the system to do, what "business logic" is needed. They must understand these needs well enough to give the customer feedback about the technical aspects of how the problem might be solved, or cannot be solved. Communication between the customer and programmer is further addressed in the Planning Game.


    Designing :

    From the point of view of simplicity, of course one could say that system development doesn't need more than coding, testing and listening. If those activities are performed well, the result should always be a system that works. In practice, this will not work. One can come a long way without designing, but at a given time one will get stuck. The system becomes too complex and the dependencies within the system cease to be clear. One can avoid this by creating a design structure that organizes the logic in the system. Good design will avoid lots of dependencies within a system; this means that changing one part of the system will not affect other parts of the system.


    Extreme Programming initially recognized four values in 1999: Communication, Simplicity, Feedback, and Courage. A new value, Respect, was added in the second edition of Extreme Programming Explained. Those five values are described below.


    Communication :

    Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among members of a development team. The goal is to give all developers a shared view of the system which matches the view held by the users of the system. To this end, extreme programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.


    Simplicity :

    Extreme programming encourages starting with the simplest solution. Extra functionality can then be added later. The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. This is sometimes summed up as the "You aren't gonna need it" (YAGNI) approach. Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed, while perhaps delaying crucial features. Related to the "communication" value, simplicity in design and coding should improve the quality of communication. A simple design with very simple code could be easily understood by most programmers in the team.
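A hypothetical illustration of the contrast, in Python; both functions and the order-total scenario are invented for this example:

```python
# YAGNI illustration: today's only requirement is "report the total
# of an order", so the first version is the one XP would prefer.

# Simple design for today's need:
def order_total(prices):
    return sum(prices)

# Speculative design for imagined future needs (a currency parameter
# and plugin hooks nobody has asked for yet), which YAGNI advises
# against writing until a real requirement appears:
def order_total_speculative(prices, currency="USD",
                            pre_hooks=(), post_hooks=()):
    for hook in pre_hooks:
        prices = hook(prices)
    total = sum(prices)
    for hook in post_hooks:
        total = hook(total)
    return total  # currency is unused until a real need appears

print(order_total([2.0, 3.5]))  # 5.5
```

The second version is longer, harder to read, and no more useful today; if multi-currency support never materializes, the extra machinery was pure cost.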


    Feedback :

    Within extreme programming, feedback relates to different dimensions of the system development:

    • Feedback from the system: by writing unit tests, or running periodic integration tests, the programmers have direct feedback from the state of the system after implementing changes.
    • Feedback from the customer: The functional tests (aka acceptance tests) are written by the customer and the testers. They will get concrete feedback about the current state of their system. This review is planned once in every two or three weeks so the customer can easily steer the development.
    • Feedback from the team: When customers come up with new requirements in the planning game the team directly gives an estimation of the time that it will take to implement.

    Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode this part. A customer is able to test the system periodically according to the functional requirements, known as user stories. To quote Kent Beck, "Optimism is an occupational hazard of programming. Feedback is the treatment."

    Courage :

    Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow. This is an effort to avoid getting bogged down in design and requiring a lot of effort to implement anything else. Courage enables developers to feel comfortable with refactoring their code when necessary. This means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: courage to remove source code that is obsolete, no matter how much effort was used to create that source code. Also, courage means persistence: A programmer might be stuck on a complex problem for an entire day, then solve the problem quickly the next day, but only if they are persistent.


    Respect :

    The respect value includes respect for others as well as self-respect. Programmers should never commit changes that break compilation, that make existing unit-tests fail, or that otherwise delay the work of their peers. Members respect their own work by always striving for high quality and seeking the best design for the solution at hand through refactoring.

    Adopting the four earlier values leads to respect gained from others in the team. Nobody on the team should feel unappreciated or ignored. This ensures a high level of motivation and encourages loyalty toward the team and toward the goal of the project. This value is very dependent upon the other values, and is very much oriented toward people in a team.


    The first version of rules for XP was published in 1999 by Don Wells at the XP website. 29 rules are given in the categories of planning, managing, designing, coding, and testing. Planning, managing and designing are called out explicitly to counter claims that XP doesn't support those activities.

    Another version of XP rules was proposed by Ken Auer in XP/Agile Universe 2003. He felt XP was defined by its rules, not its practices (which are subject to more variation and ambiguity). He defined two categories: "Rules of Engagement" which dictate the environment in which software development can take place effectively, and "Rules of Play" which define the minute-by-minute activities and rules within the framework of the Rules of Engagement.


    The principles that form the basis of XP are based on the values just described and are intended to foster decisions in a system development project. The principles are intended to be more concrete than the values and more easily translated to guidance in a practical situation.


    Rapid feedback

    Extreme programming sees feedback as most useful if it is done frequently and promptly. It stresses that minimal delay between an action and its feedback is critical to learning and making changes. Unlike traditional system development methods, contact with the customer occurs in more frequent iterations. The customer has clear insight into the system that is being developed. He or she can give feedback and steer the development as needed. With frequent feedback from the customer, a mistaken design decision made by the developer will be noticed and corrected quickly, before the developer spends much time implementing it.

    Unit tests also contribute to the rapid feedback principle. When writing code, running the unit tests provides direct feedback on how the system reacts to the changes one has made. This includes running not only the unit tests that test the developer's code, but also all unit tests against all the software, using an automated process that can be initiated by a single command. That way, if the developer's changes cause a failure in some other portion of the system that the developer knows little or nothing about, the automated all-unit-test suite will reveal the failure immediately, alerting the developer to the incompatibility of their change with other parts of the system, and the necessity of removing or modifying it. Under traditional development practices, the absence of an automated, comprehensive unit-test suite meant that such a code change, assumed harmless by the developer, would have been left in place, appearing only during integration testing, or worse, only in production; determining which code change caused the problem, among all the changes made by all the developers during the weeks or even months before integration testing, was a formidable task.

    Assuming simplicity

    This is about treating every problem as if its solution were "extremely simple". Traditional system development methods say to plan for the future and to code for reusability. Extreme programming rejects these ideas.

    Incremental changes

    The advocates of extreme programming say that making big changes all at once does not work. Extreme programming applies incremental changes: for example, a system might have small releases every three weeks. When many little steps are made, the customer has more control over the development process and the system that is being developed.

    Embracing change

    The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.

    Scrum (software development) :
    Scrum (software development)

    Scrum is an iterative and incremental agile software development framework for managing software projects and product or application development. Its focus is on "a flexible, holistic product development strategy where a development team works as a unit to reach a common goal" as opposed to a "traditional, sequential approach".


    There are three core roles and a range of ancillary roles. Core roles are often referred to as pigs and ancillary roles as chickens (after the story of The Chicken and the Pig).

    The core roles are those committed to the project in the Scrum process; they are the ones producing the product (the objective of the project). They represent the Scrum team.

    Product Owner

    The Product Owner represents the stakeholders and is the voice of the customer. He or she is accountable for ensuring that the team delivers value to the business. The Product Owner writes (or has the team write) customer-centric items (typically user stories), prioritizes them, and adds them to the product backlog. Scrum teams should have one Product Owner, and while they may also be a member of the development team, it is recommended that this role not be combined with that of the Scrum Master.

    Development Team

    The Development Team is responsible for delivering potentially shippable product increments at the end of each Sprint. A Development Team is made up of 3-9 people with cross-functional skills who do the actual work (analysis, design, development, testing, technical communication, documentation, etc.). The Development Team in Scrum is self-organizing, even though it may interface with project management organizations (PMOs).

    Scrum Master

    Scrum is facilitated by a Scrum Master, who is accountable for removing impediments to the ability of the team to deliver the sprint goal/deliverables. The Scrum Master is not the team leader, but acts as a buffer between the team and any distracting influences. The Scrum Master ensures that the Scrum process is used as intended. The Scrum Master is the enforcer of rules. A key part of the Scrum Master's role is to protect the Development Team and keep it focused on the tasks at hand. The role has also been referred to as a servant-leader to reinforce these dual perspectives. The Scrum Master differs from a Project Manager in that the latter may have people management responsibilities unrelated to the role of Scrum Master. The Scrum Master role excludes any such additional people responsibilities.
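    As a rough sketch of how these core roles interact (the story names, priorities, and estimates below are illustrative, not part of Scrum itself), the Product Owner orders the backlog and the Development Team pulls the top-priority items that fit its Sprint capacity:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    title: str
    priority: int       # lower number = higher priority (set by the Product Owner)
    estimate_days: int  # estimated by the Development Team

@dataclass
class ProductBacklog:
    stories: list = field(default_factory=list)

    def add(self, story):
        # The Product Owner adds customer-centric items and prioritizes them.
        self.stories.append(story)
        self.stories.sort(key=lambda s: s.priority)

    def plan_sprint(self, capacity_days):
        """The self-organizing team pulls top-priority stories that fit its capacity."""
        sprint, used = [], 0
        for story in self.stories:
            if used + story.estimate_days <= capacity_days:
                sprint.append(story)
                used += story.estimate_days
        return sprint

backlog = ProductBacklog()
backlog.add(UserStory("Checkout flow", priority=1, estimate_days=5))
backlog.add(UserStory("Admin reports", priority=3, estimate_days=8))
backlog.add(UserStory("Password reset", priority=2, estimate_days=3))
sprint = backlog.plan_sprint(capacity_days=10)  # highest-priority stories that fit
```

    The separation mirrors the roles above: ordering the backlog is the Product Owner's decision, while deciding how much work fits a Sprint belongs to the Development Team.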

    The ancillary roles in Scrum teams are those with no formal role and infrequent involvement in the Scrum process; nonetheless, they must be taken into account.


    Stakeholders (customers, vendors)

    The stakeholders are the people who enable the project and for whom the project produces the agreed-upon benefits that justify its production. They are only directly involved in the process during the sprint reviews.


    Managers

    People who control the work environment.

    Service-Oriented Modeling :
    Service-Oriented Modeling

    Service-oriented modeling is the discipline of modeling business and software systems, for the purpose of designing and specifying service-oriented business systems within a variety of architectural styles, such as enterprise architecture, application architecture, service-oriented architecture, and cloud computing.

    Any service-oriented modeling methodology typically includes a modeling language that can be employed by both the 'problem domain organization' (the Business), and 'solution domain organization' (the Information Technology Department), whose unique perspectives typically influence the 'service' development life-cycle strategy and the projects implemented using that strategy.

    Service-oriented modeling typically strives to create models that provide a comprehensive view of the analysis, design, and architecture of all 'Software Entities' in an organization, which can be understood by individuals with diverse levels of business and technical understanding. Service-oriented modeling typically encourages viewing software entities as 'assets' (service-oriented assets), and refers to these assets collectively as 'services'.

    Popular approaches

    There are many different approaches that have been proposed for service modeling, including SOMA and SOMF.

    Service-oriented modeling and architecture

    IBM announced service-oriented modeling and architecture (SOMA) as the first publicly announced SOA-related methodology in 2004. SOMA refers to the more general domain of service modeling necessary to design and create SOA. SOMA covers a broader scope and implements service-oriented analysis and design (SOAD) through the identification, specification and realization of services, components that realize those services (a.k.a. "service components"), and flows that can be used to compose services.

    SOMA includes an analysis and design method that extends traditional object-oriented and component-based analysis and design methods to include concerns relevant to and supporting SOA. It consists of three major phases of identification, specification and realization of the three main elements of SOA, namely, services, components that realize those services (aka service components) and flows that can be used to compose services.

    SOMA is an end-to-end SOA method for the identification, specification, realization and implementation of services (including information services), components, and flows (processes/composition). SOMA builds on current techniques in areas such as domain analysis, functional areas grouping, variability-oriented analysis (VOA), process modeling, component-based development, object-oriented analysis and design, and use case modeling. SOMA introduces new techniques such as goal-service modeling, service model creation and a service litmus test to help determine the granularity of a service.

    SOMA identifies services, component boundaries, flows, compositions, and information through complementary techniques which include domain decomposition, goal-service modeling and existing asset analysis.

    Life cycle modeling activities

    Service-oriented modeling and architecture (SOMA) consists of the phases of identification, specification, realization, implementation, deployment and management, in which the fundamental building blocks of SOA are identified, then refined and implemented in each phase. The fundamental building blocks of SOA consist of services, components, flows and, related to them, information, policies and contracts.

    Service-oriented modeling framework

    Service-oriented modeling framework (SOMF) characteristics:

    • Driving modeling paradigm:
        • Holistic
        • Anthropomorphic
    • Discipline-specific
    • Virtual modeling
    • Visual modeling
    • Style and pattern oriented
    • Modeling generations: as-is, to-be, used-to-be
    • Architectural applications:
        • Enterprise architecture
        • Application architecture
        • Service-oriented architecture (SOA)
        • Cloud computing
    • Chief business goals:
        • Asset consolidation
        • Expenditure reduction
        • Time to market
        • Business agility
    • Overall technological goals:
        • Architecture flexibility
        • Technological extensibility
        • Interoperable implementations


    The service-oriented modeling framework (SOMF) has been proposed by author Michael Bell as a holistic and anthropomorphic modeling language for software development that employs disciplines and a universal language to provide tactical and strategic solutions to enterprise problems. The term "holistic language" pertains to a modeling language that can be employed to design any application, business and technological environment, either local or distributed. This universality may include design of application-level and enterprise-level solutions, including SOA landscapes or cloud computing environments. The term "anthropomorphic", on the other hand, affiliates the SOMF language with intuitiveness of implementation and simplicity of usage. Furthermore, the SOMF language and its notation have been adopted by the Sparx Enterprise Architect modeling platform, which enables business architects, technical architects, managers, modelers, developers, and business and technical analysts to pursue the chief SOMF life cycle disciplines.

    SOMF is a service-oriented development life cycle methodology, a discipline-specific modeling process. It offers a number of modeling practices and disciplines that contribute to a successful service-oriented life cycle development and modeling during a project (see image on left).

    SOMF Version 2.0

    It illustrates the major elements that identify the 'what to do' aspects of a service development scheme. These are the modeling pillars that enable practitioners to craft an effective project plan and to identify the milestones of a service-oriented initiative, whether a small or large-scale business or technological venture.

    The modeling framework comprises four sections that identify the general direction and the corresponding units of work that make up a service-oriented modeling strategy: practices, environments, disciplines, and artifacts. These elements uncover the context of a modeling occupation and do not necessarily describe the process or the sequence of activities needed to fulfill modeling goals. These should be ironed out during the project plan, the service-oriented development life cycle strategy, which typically sets initiative boundaries, time frame, responsibilities and accountabilities, and achievable project milestones.

    Feature-Driven Development :
    Feature-Driven Development

    Feature-driven development (FDD) is an iterative and incremental software development process. It is one of a number of Agile methods for developing software and forms part of the Agile Alliance. FDD blends a number of industry-recognized best practices into a cohesive whole. These practices are all driven from a client-valued functionality (feature) perspective. Its main purpose is to deliver tangible, working software repeatedly in a timely manner.


    FDD was initially devised by Jeff De Luca to meet the specific needs of a 15-month, 50-person software development project at a large Singapore bank in 1997. Jeff De Luca delivered a set of five processes that covered the development of an overall model and the listing, planning, design and building of features. The first process is heavily influenced by Peter Coad's approach to object modeling. The second process incorporates Peter Coad's ideas of using a feature list to manage functional requirements and development tasks. The other processes and the blending of the processes into a cohesive whole are a result of Jeff De Luca's experience. Since its successful use on the Singapore project, there have been several implementations of FDD.

    The description of FDD was first introduced to the world in Chapter 6 of the book Java Modeling in Color with UML by Peter Coad, Eric Lefebvre and Jeff De Luca in 1999. Later, in Stephen Palmer and Mac Felsing's book A Practical Guide to Feature-Driven Development (published in 2002), a more general description of FDD was given, as decoupled from Java modeling in color.

    The original and latest FDD processes can be found on Jeff De Luca's website under the 'Article' area. There is also a Community website available at which people can learn more about FDD, questions can be asked, and experiences and the processes themselves are discussed.


    FDD is a model-driven short-iteration process that consists of five basic activities. For accurate state reporting and keeping track of the software development project, milestones that mark the progress made on each feature are defined. This section gives a high-level overview of the activities. In the figure on the right, the meta-process model for these activities is displayed. During the first two sequential activities, an overall model shape is established. The final three activities are iterated for each feature. For more detailed information about the individual sub-activities, see Table 2 (derived from the process description in the 'Article' section of Jeff De Luca's website).

    Process model for FDD

    Develop overall model

    The project starts with a high-level walkthrough of the scope of the system and its context. Next, detailed domain walkthroughs are held for each modeling area. In support of each domain, walkthrough models are then composed by small groups and presented for peer review and discussion. One of the proposed models, or a merge of them, is selected to become the model for that particular domain area. Domain area models are merged into an overall model, and the overall model shape is adjusted along the way.

    Build feature list

    The knowledge gathered during the initial modeling is used to identify a list of features, by functionally decomposing the domain into subject areas. Subject areas each contain business activities, and the steps within each business activity form the categorized feature list. Features in this respect are small pieces of client-valued function expressed in the form "<action> the <result> <by|for|of|to> a(n) <object>", for example: 'Calculate the total of a sale' or 'Validate the password of a user'. Features should not take more than two weeks to complete, else they should be broken down into smaller pieces.
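    The feature naming template and the two-week rule can be sketched as follows (the `estimate_days` field and the ten-working-day limit are illustrative assumptions, not part of the FDD definition):

```python
from dataclasses import dataclass

MAX_FEATURE_DAYS = 10  # roughly two working weeks, the FDD upper bound

@dataclass
class Feature:
    # FDD names features as small client-valued functions,
    # e.g. "Calculate the total of a sale".
    action: str
    result: str
    obj: str
    estimate_days: int

    def name(self):
        return f"{self.action} the {self.result} of a {self.obj}"

def needs_decomposition(feature):
    """A feature estimated at more than two weeks must be broken down."""
    return feature.estimate_days > MAX_FEATURE_DAYS

f1 = Feature("Calculate", "total", "sale", estimate_days=3)
f2 = Feature("Validate", "password", "user", estimate_days=15)  # too big: split it
```

    A feature like `f2` would be decomposed into smaller client-valued pieces until every item on the list passes the two-week check.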

    Plan by feature

    After the feature list has been completed, the next step is to produce the development plan: class ownership is assigned by ordering and assigning features (or feature sets) as classes to chief programmers.

    Design by feature

    A design package is produced for each feature. A chief programmer selects a small group of features that are to be developed within two weeks. Together with the corresponding class owners, the chief programmer works out detailed sequence diagrams for each feature and refines the overall model. Next, the class and method prologues are written, and finally a design inspection is held.

    Build by feature

    After a successful design inspection, a per-feature activity is carried out to produce a completed client-valued function (feature). The class owners develop the actual code for their classes. After a unit test and a successful code inspection, the completed feature is promoted to the main build.


    Since features are small, completing a feature is a relatively small task. For accurate state reporting and keeping track of the software development project it is however important to mark the progress made on each feature. FDD therefore defines six milestones per feature that are to be completed sequentially. The first three milestones are completed during the Design By Feature activity, the last three are completed during the Build By Feature activity.
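    This sequential milestone tracking can be sketched as follows; the milestone names and percentage weightings used here are the ones commonly cited in FDD literature, not taken from the text above, so treat them as an assumption:

```python
# The six per-feature milestones in their required order, with the
# completion weightings commonly cited in FDD literature (assumed here).
MILESTONES = [
    ("domain walkthrough", 1),
    ("design", 40),
    ("design inspection", 3),
    ("code", 45),
    ("code inspection", 10),
    ("promote to build", 1),
]

class FeatureProgress:
    def __init__(self, name):
        self.name = name
        self.next_index = 0  # milestones must be completed strictly in order

    def complete_next(self):
        if self.next_index < len(MILESTONES):
            self.next_index += 1

    def percent_done(self):
        # Progress is the sum of the weights of the completed milestones.
        return sum(weight for _, weight in MILESTONES[:self.next_index])

fp = FeatureProgress("Calculate the total of a sale")
for _ in range(3):  # finish the three Design By Feature milestones
    fp.complete_next()
```

    Because progress is reported per milestone rather than per feature, a manager can see that a feature is, say, designed and inspected but not yet coded, which is exactly the accurate state reporting the text describes.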

    Best practices

    Feature-Driven Development is built around a core set of industry-recognized best practices, derived from software engineering. These practices are all driven from a client-valued feature perspective. It is the combination of these practices and techniques that makes FDD so compelling. Each best practice is briefly described below.

    Domain Object Modeling. Domain Object Modeling consists of exploring and explaining the domain of the problem to be solved. The resulting domain object model provides an overall framework in which to add features.

    Developing by Feature. Any function that is too complex to be implemented within two weeks is further decomposed into smaller functions until each sub-problem is small enough to be called a feature. This makes it easier to deliver correct functions and to extend or modify the system.

    Individual Class (Code) Ownership. Individual class ownership means that distinct pieces or grouping of code are assigned to a single owner. The owner is responsible for the consistency, performance, and conceptual integrity of the class.

    Feature Teams. A feature team is a small, dynamically formed team that develops a small activity. By doing so, multiple minds are always applied to each design decision and also multiple design options are always evaluated before one is chosen.

    Inspections. Inspections are carried out to ensure good quality design and code, primarily by detection of defects.

    Configuration Management. Configuration management helps with identifying the source code for all features that have been completed to date and with maintaining a history of changes to classes as feature teams enhance them.

    Regular Builds. Regular builds ensure there is always an up-to-date system that can be demonstrated to the client, and help highlight integration errors in the source code for the features early.

    Visibility of progress and results. Frequent, appropriate, and accurate progress reporting at all levels inside and outside the project, based on completed work, helps managers steer a project correctly.

    Metamodel (MetaModeling)

    Metamodeling helps visualize both the processes and the data of a method, so that methods can be compared and method fragments in the method engineering process can easily be reused. The advantage of the technique is that it is clear, compact, and consistent with UML standards.

    The left side of the metamodel, depicted on the right, shows the five basic activities involved in a software development project using FDD. The activities all contain sub-activities that correspond to the sub-activities in the FDD process description on Jeff De Luca's website. The right side of the model shows the concepts involved. These concepts originate from the activities depicted in the left side of the diagram.

    Lean Software Development :
    Lean Software Development

    Lean software development is a translation of lean manufacturing and lean IT principles and practices to the software development domain. Adapted from the Toyota Production System, a pro-lean subculture is emerging from within the Agile community.


    The term lean software development originated in a book by the same name, written by Mary Poppendieck and Tom Poppendieck. The book presents the traditional lean principles in a modified form, as well as a set of 22 tools, and compares the tools to agile practices. The Poppendiecks' involvement in the Agile software development community, including talks at several Agile conferences, has resulted in such concepts being more widely accepted within the Agile community.

    Lean Principles

    Lean development can be summarized by seven principles, very close in concept to lean manufacturing principles:

    • Eliminate waste
    • Amplify learning
    • Decide as late as possible
    • Deliver as fast as possible
    • Empower the team
    • Build integrity in
    • See the whole


    Eliminate waste

    Everything not adding value to the customer is considered to be waste (muda). This includes:

    • unnecessary code and functionality
    • delay in the software development process
    • unclear requirements
    • insufficient testing, leading to avoidable process repetition
    • bureaucracy
    • slow internal communication


    In order to be able to eliminate waste, one should be able to recognize it. If some activity could be bypassed or the result could be achieved without it, it is waste. Partially done coding eventually abandoned during the development process is waste. Extra processes and features not often used by customers are waste. Waiting for other activities, teams, processes is waste. Defects and lower quality are waste. Managerial overhead not producing real value is waste. A value stream mapping technique is used to distinguish and recognize waste. The second step is to point out sources of waste and eliminate them. The same should be done iteratively until even essential-seeming processes and procedures are liquidated.
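    As an illustrative sketch of value stream mapping (the steps, hours, and value classifications below are invented for illustration), each step in the stream is tagged as value-adding or waste, which makes visible how little of the total lead time actually adds value:

```python
# Each step in the stream: (name, hours spent, adds value for the customer?)
value_stream = [
    ("write code",       6,  True),
    ("wait for review",  30, False),   # waiting is waste
    ("review",           2,  True),
    ("wait for deploy",  10, False),   # waiting is waste
    ("deploy",           1,  True),
]

def process_cycle_efficiency(stream):
    """Value-adding time divided by total lead time; the rest is waste."""
    total = sum(hours for _, hours, _ in stream)
    value = sum(hours for _, hours, adds_value in stream if adds_value)
    return value / total

def waste_steps(stream):
    """The second step of the technique: point out the sources of waste."""
    return [name for name, _, adds_value in stream if not adds_value]

efficiency = process_cycle_efficiency(value_stream)  # 9 value hours of 49 total
```

    In this invented example the two waiting steps dominate the lead time, so they are the obvious first targets for elimination; the mapping is then repeated on the shortened stream.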

    Amplify learning

    Software development is a continuous learning process with the additional challenge of development teams and end product sizes. The best approach for improving a software development environment is to amplify learning. The accumulation of defects should be prevented by running tests as soon as the code is written. Instead of adding more documentation or detailed planning, different ideas could be tried by writing code and building. The process of user requirements gathering could be simplified by presenting screens to the end-users and getting their input.

    The learning process is sped up by usage of short iteration cycles - each one coupled with refactoring and integration testing. Increasing feedback via short feedback sessions with customers helps when determining the current phase of development and adjusting efforts for future improvements. During those short sessions both customer representatives and the development team learn more about the domain problem and figure out possible solutions for further development. Thus the customers better understand their needs, based on the existing result of development efforts, and the developers learn how to better satisfy those needs. Another idea in the communication and learning process with a customer is set-based development - this concentrates on communicating the constraints of the future solution and not the possible solutions, thus promoting the birth of the solution via dialogue with the customer.

    Decide as late as possible

    As software development is always associated with some uncertainty, better results should be achieved with an options-based approach, delaying decisions as much as possible until they can be made based on facts and not on uncertain assumptions and predictions. The more complex a system is, the more capacity for change should be built into it, thus enabling the delay of important and crucial commitments. The iterative approach promotes this principle - the ability to adapt to changes and correct mistakes, which might be very costly if discovered after the release of the system.

    An agile software development approach can move the building of options earlier for customers, thus delaying certain crucial decisions until customers have realized their needs better. This also allows later adaptation to changes and the prevention of costly earlier technology-bounded decisions. This does not mean that no planning should be involved - on the contrary, planning activities should be concentrated on the different options and adapting to the current situation, as well as clarifying confusing situations by establishing patterns for rapid action. Evaluating different options is effective as soon as it is realized that they are not free, but provide the needed flexibility for late decision making.

    Deliver as fast as possible

    In the era of rapid technology evolution, it is not the biggest that survives, but the fastest. The sooner the end product is delivered without major defects, the sooner feedback can be received, and incorporated into the next iteration. The shorter the iterations, the better the learning and communication within the team. Without speed, decisions cannot be delayed. Speed assures the fulfilling of the customer's present needs and not what they required yesterday. This gives them the opportunity to delay making up their minds about what they really require until they gain better knowledge. Customers value rapid delivery of a quality product.

    The just-in-time production ideology could be applied to software development, recognizing its specific requirements and environment. This is achieved by presenting the needed result and letting the team organize itself and divide the tasks for accomplishing the needed result for a specific iteration. At the beginning, the customer provides the needed input. This could be simply presented in small cards or stories - the developers estimate the time needed for the implementation of each card. Thus the work organization changes into a self-pulling system - each morning during a stand-up meeting, each member of the team reviews what was done yesterday, what is to be done today and tomorrow, and prompts for any inputs needed from colleagues or the customer. This requires transparency of the process, which is also beneficial for team communication. Another key idea in Toyota's Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others - a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design.
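    A minimal sketch of such a self-pulling system (the board, developer names, and story cards are hypothetical): developers pull the next card themselves when they have capacity, rather than having work pushed onto them:

```python
from collections import deque

class PullBoard:
    """Developers pull the next card themselves; nobody pushes work."""
    def __init__(self, cards):
        self.todo = deque(cards)   # small cards/stories provided by the customer
        self.in_progress = {}      # developer -> the one card they are working on

    def pull(self, developer):
        # A developer takes the next card only when free; work-in-progress
        # is limited to one card per person.
        if self.todo and developer not in self.in_progress:
            self.in_progress[developer] = self.todo.popleft()
            return self.in_progress[developer]
        return None

    def finish(self, developer):
        return self.in_progress.pop(developer, None)

board = PullBoard(["story A", "story B", "story C"])
board.pull("alice")   # alice pulls the first card
board.pull("bob")     # bob pulls the next one
board.pull("alice")   # refused: alice already has work in progress
```

    The board's state is visible to everyone, which provides the transparency the stand-up meeting relies on: at a glance, the team sees what is in progress and what remains.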

    Empower the team

    There has been a traditional belief in most businesses about the decision-making in the organization - the managers tell the workers how to do their own job. In a Work-Out technique, the roles are turned - the managers are taught how to listen to the developers, so they can explain better what actions might be taken, as well as provide suggestions for improvements. The lean approach favors the aphorism "find good people and let them do their own job," encouraging progress, catching errors, and removing impediments, but not micro-managing.

    Another mistaken belief has been the consideration of people as resources. People might be resources from the point of view of a statistical data sheet, but in software development, as well as any organizational business, people do need something more than just the list of tasks and the assurance that they will not be disturbed during the completion of the tasks. People need motivation and a higher purpose to work for - purpose within the reachable reality, with the assurance that the team might choose its own commitments. The developers should be given access to the customer; the team leader should provide support and help in difficult situations, as well as ensure that skepticism does not ruin the team's spirit.

    Build integrity in

    The customer needs to have an overall experience of the System - this is the so-called perceived integrity: how it is being advertised, delivered, deployed, accessed, how intuitive its use is, price and how well it solves problems.

    Conceptual integrity means that the system's separate components work well together as a whole, with balance between flexibility, maintainability, efficiency, and responsiveness. This could be achieved by understanding the problem domain and solving it at the same time, not sequentially. The needed information is received in small batches, not in one vast chunk, preferably through face-to-face communication rather than written documentation. The information flow should be constant in both directions - from customer to developers and back - thus avoiding the large, stressful amount of information that follows long development in isolation.

    One of the healthy ways towards integral architecture is refactoring. As more features are added to the original code base, the harder it becomes to add further improvements. Refactoring is about keeping simplicity, clarity, minimum amount of features in the code. Repetitions in the code are signs for bad code designs and should be avoided. The complete and automated building process should be accompanied by a complete and automated suite of developer and customer tests, having the same versioning, synchronization and semantics as the current state of the System. At the end the integrity should be verified with thorough testing, thus ensuring the System does what the customer expects it to. Automated tests are also considered part of the production process, and therefore if they do not add value they should be considered waste. Automated testing should not be a goal, but rather a means to an end, specifically the reduction of defects.
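    A minimal, hypothetical illustration of refactoring away repetition while automated tests pin down behavior: the duplicated rule is extracted into one place, and the tests verify that the before and after versions compute the same results:

```python
# Before: the tax rule is repeated, so changing it means editing two places,
# and the copies can silently drift apart - a sign of bad code design.
def invoice_total_before(items):
    return sum(price + price * 0.2 for price in items)

def quote_total_before(items):
    return sum(price + price * 0.2 for price in items)

# After: the duplicated rule is extracted; behavior is unchanged, which the
# automated tests verify. (The 20% rate is a hypothetical example value.)
TAX_RATE = 0.2

def with_tax(price):
    return price + price * TAX_RATE

def invoice_total(items):
    return sum(with_tax(p) for p in items)

def quote_total(items):
    return sum(with_tax(p) for p in items)
```

    The refactoring adds no feature; it only restores simplicity and clarity, and the test suite is what makes it safe to do continuously.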

    See the whole

    Software systems nowadays are not simply the sum of their parts, but also the product of their interactions. Defects in software tend to accumulate during the development process - by decomposing the big tasks into smaller tasks, and by standardizing different stages of development, the root causes of defects should be found and eliminated. The larger the system, the more organisations that are involved in its development and the more parts are developed by different teams, the greater the importance of having well defined relationships between different vendors, in order to produce a system with smoothly interacting components. During a longer period of development, a stronger subcontractor network is far more beneficial than short-term profit optimizing, which does not enable win-win relationships.

    Lean thinking has to be understood well by all members of a project, before implementing in a concrete, real-life situation. "Think big, act small, fail fast; learn rapidly" - these slogans summarize the importance of understanding the field and the suitability of implementing lean principles along the whole software development process. Only when all of the lean principles are implemented together, combined with strong "common sense" with respect to the working environment, is there a basis for success in software development.

    Lean software practices

    Lean software development practices, or what the Poppendiecks call "tools", are expressed slightly differently from their equivalents in agile software development, but there are parallels. Examples of such practices include:

    • Seeing waste
    • Value stream mapping
    • Set-based development
    • Pull systems
    • Queuing theory
    • Motivation
    • Measurements


    Some of the tools map quite easily to agile methods. Lean workcells, for example, are expressed in Agile methods as cross-functional teams.

    Domain-Driven Design :
    Domain-Driven Design

    Domain-driven design (DDD) is an approach to developing software for complex needs by connecting the implementation to an evolving model. The premise of domain-driven design is the following:

    • Placing the project's primary focus on the core domain and domain logic.
    • Basing complex designs on a model of the domain.
    • Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.


    Core definitions

    Domain: A sphere of knowledge (ontology), influence, or activity. The subject area to which the user applies a program is the domain of the software.

    Model: A system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.

    Ubiquitous Language: A language structured around the domain model and used by all team members to connect all the activities of the team with the software.

    Context: The setting in which a word or statement appears that determines its meaning.

    Prerequisites for the successful application of DDD
    • The domain is not trivial
    • The project team has experience and interest in Object Oriented Programming/Design
    • The project has access to domain experts
    • There is an iterative process in place


    Strategic domain-driven design

    (Figure: patterns in strategic domain-driven design and the relationships between them)

    Ideally, it would be preferable to have a single, unified model. While this is a noble goal, in reality it typically fragments into multiple models. It is useful to recognize this fact of life and work with it.

    Strategic Design is a set of principles for maintaining model integrity, distillation of the Domain Model and working with multiple models.

    Bounded context

    Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.

    Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don't be distracted or confused by issues outside.
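    As a small illustrative sketch (the contexts and all class names below are invented), two bounded contexts can each keep their own model of the "same" thing, with an explicit translation at the boundary between them:

```python
# Two bounded contexts, each with its own model of a "product".

class SalesProduct:
    """Sales context: cares about pricing."""
    def __init__(self, sku, price):
        self.sku = sku
        self.price = price

class WarehouseProduct:
    """Warehouse context: cares about stock levels."""
    def __init__(self, sku, quantity_on_hand):
        self.sku = sku
        self.quantity_on_hand = quantity_on_hand

def to_warehouse(sales_product, quantity):
    """Explicit translation at the boundary between the two contexts."""
    return WarehouseProduct(sales_product.sku, quantity)

p = SalesProduct("SKU-1", 9.99)
w = to_warehouse(p, 40)
assert (w.sku, w.quantity_on_hand) == ("SKU-1", 40)
```

    Each model stays strictly consistent within its own bounds; the translation function is the single, named point of contact.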

    Continuous integration

    When a number of people are working in the same bounded context, there is a strong tendency for the model to fragment. The bigger the team, the bigger the problem, but as few as three or four people can encounter serious problems. Yet breaking down the system into ever-smaller contexts eventually loses a valuable level of integration and coherency.

    Therefore: Institute a process of merging all code and other implementation artifacts frequently, with automated tests to flag fragmentation quickly. Relentlessly exercise the ubiquitous language to hammer out a shared view of the model as the concepts evolve in different people's heads.

    Context Map

    An individual bounded context leaves some problems in the absence of a global view. The context of other models may still be vague and in flux.
    People on other teams won't be very aware of the context bounds and will unknowingly make changes that blur the edges or complicate the interconnections. When connections must be made between different contexts, they tend to bleed into each other.

    Therefore: Identify each model in play on the project and define its bounded context. This includes the implicit models of non-object-oriented subsystems. Name each bounded context, and make the names part of the ubiquitous language. Describe the points of contact between the models, outlining explicit translation for any communication and highlighting any sharing. Map the existing terrain.

    Kanban (Development)

    Kanban is a method for managing knowledge work with an emphasis on just-in-time delivery while not overloading the team members. In this approach, the process, from definition of a task to its delivery to the customer, is displayed for participants to see and developers pull work from a queue.

    Kanban can be divided into two parts:

    Kanban - a visual process management system that tells what to produce, when to produce it, and how much to produce.
    The Kanban method - an approach to incremental, evolutionary process improvement for organizations.

    The Kanban method

    The name 'Kanban' originates from Japanese, and translates roughly as "signal card". The Kanban method as formulated by David J. Anderson is an approach to incremental, evolutionary process and systems change for organizations. It uses a work-in-progress limited pull system as the core mechanism to expose system operation (or process) problems and stimulate collaboration to continuously improve the system. One example of such a pull system is a kanban system, and it is after this popular form of work-in-progress limited pull system that the method is named.

    The Kanban method is rooted in four basic principles:

    Start with what you do now

    The Kanban method does not prescribe a specific set of roles or process steps. The Kanban method starts with the roles and processes you have and stimulates continuous, incremental and evolutionary changes to your system. The Kanban method is a change management method and is distinct from the Kanban software development process and Kanban project management.

    Agree to pursue incremental, evolutionary change

    The organization (or team) must agree that continuous, incremental and evolutionary change is the way to make system improvements and make them stick. Sweeping changes may seem more effective but have a higher failure rate due to resistance and fear in the organization. The Kanban method encourages continuous small incremental and evolutionary changes to your current system.

    Respect the current process, roles, responsibilities & titles

    It is likely that the organization currently has some elements that work acceptably and are worth preserving. We must also seek to drive out fear in order to facilitate future change. By agreeing to respect current roles, responsibilities and job titles we eliminate initial fears. This should enable us to gain broader support for our Kanban initiative. Perhaps presenting Kanban against an alternative more sweeping approach that would lead to changes in titles, roles, responsibilities and perhaps the wholesale removal of certain positions will help individuals to realize the benefits.

    Leadership at all levels

    Acts of leadership at all levels in the organization from individual contributors to senior management should be encouraged.

    Six core practices

    Anderson identified five core properties that had been observed in each successful implementation of the Kanban method. They were later relabeled as practices and extended with the addition of a sixth.


    Visualise

    The workflow of knowledge work is inherently invisible. Visualising the flow of work and making it visible is core to understanding how work proceeds. Without understanding the workflow, making the right changes is harder.

    A common way to visualise the workflow is to use a card wall with cards and columns. The columns on the card wall represent the different states or steps in the workflow.

    Limit WIP

    Limiting work-in-process implies that a pull system is implemented on parts or all of the workflow. The pull system will act as one of the main stimuli for continuous, incremental and evolutionary changes to your system.

    The pull system can be implemented as a kanban system, a CONWIP system, a DBR system, or some other variant. The critical elements are that work-in-process at each state in the workflow is limited and that new work is 'pulled' into the new information discovery activity when there is available capacity within the local WIP limit.
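    The core mechanics of such a pull system can be sketched in a few lines. The following Python sketch is purely illustrative (the stage and task names are invented): work enters a workflow state only while that state is under its WIP limit, and completing an item frees capacity for the next pull.

```python
class WipLimitedStage:
    """One workflow state with a work-in-process (WIP) limit: new work
    is pulled in only when there is capacity within the local limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def has_capacity(self):
        return len(self.items) < self.wip_limit

    def pull(self, backlog):
        # Pull from upstream only while under the WIP limit.
        while backlog and self.has_capacity():
            self.items.append(backlog.pop(0))

    def complete(self, item):
        self.items.remove(item)

backlog = [f"task-{i}" for i in range(5)]
dev = WipLimitedStage("development", wip_limit=2)

dev.pull(backlog)        # only 2 of the 5 tasks enter the stage
assert dev.items == ["task-0", "task-1"]

dev.complete("task-0")   # finishing work frees capacity...
dev.pull(backlog)        # ...so exactly one more task is pulled
assert dev.items == ["task-1", "task-2"]
```

    A full kanban, CONWIP or DBR system adds more machinery, but the limiting-and-pulling behaviour above is the shared core.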

    Manage flow

    The flow of work through each state in the workflow should be monitored, measured and reported. By actively managing the flow, the continuous, incremental and evolutionary changes to the system can be evaluated for their positive or negative effects on the system.

    Make policies explicit

    Until the mechanism of a process is made explicit it is often hard or impossible to hold a discussion about improving it. Without an explicit understanding of how things work and how work is actually done, any discussion of problems tends to be emotional, anecdotal and subjective. With an explicit understanding it is possible to move to a more rational, empirical, objective discussion of issues. This is more likely to facilitate consensus around improvement suggestions.

    Implement feedback loops

    Collaboration to review flow of work and demand versus capability measures, metrics and indicators coupled with anecdotal narrative explaining notable events is vital to enabling evolutionary change. Organizations that have not implemented the second level of feedback - the operations review - have generally not seen process improvements beyond a localized team level. As a result, they have not realized the full benefits of Kanban observed elsewhere.

    Improve collaboratively, evolve experimentally (using models and the scientific method)
    The Kanban method encourages small continuous, incremental and evolutionary changes that stick. When teams have a shared understanding of theories about work, workflow, process, and risk, they are more likely to be able to build a shared comprehension of a problem and suggest improvement actions which can be agreed by consensus.

    The Kanban method suggests that a scientific approach is used to implement continuous, incremental and evolutionary changes. The method does not prescribe a specific scientific method to use.

    Common models used are:
    • Theory of constraints (the study of bottlenecks)
    • Deming System of Profound Knowledge (a study of variation and how it affects processes)
    • Lean economic model (based on the concepts of waste: muda, muri and mura).


    Don't Repeat Yourself

    In software engineering, don't repeat yourself (DRY) is a principle of software development aimed at reducing repetition of information of all kinds, and is especially useful in multi-tier architectures. The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The principle was formulated by Andy Hunt and Dave Thomas in their book The Pragmatic Programmer. They apply it quite broadly to include "database schemas, test plans, the build system, even documentation." When the DRY principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements. Additionally, elements that are logically related all change predictably and uniformly, and are thus kept in sync. Besides using methods and subroutines in their code, Thomas and Hunt rely on code generators, automatic build systems, and scripting languages to observe the DRY principle across layers.

    Applying DRY

    Also known as Single Source of Truth, this philosophy is prevalent in model-driven architectures, in which software artifacts are derived from a central object model expressed in a form such as UML. DRY code is created by data transformation and code generators, which allows the software developer to avoid copy-and-paste operations. DRY code usually makes large software systems easier to maintain, as long as the data transformations are easy to create and maintain. Tools such as XDoclet and XSLT are examples of DRY coding techniques. An example of a system that requires duplicate information is Enterprise Java Beans version 2, which requires duplication not just in Java code but also in configuration files. Examples of systems that attempt to reduce duplicate information include the Symfony, web2py, Yii, Play Framework and Django web frameworks; EiffelStudio; the Ruby on Rails application development environment; Microsoft Visual Studio LightSwitch; and Enterprise Java Beans version 3.

    DRY vs WET solutions

    Violations of DRY are typically referred to as WET solutions, which stands for "write everything twice".
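    A minimal, hypothetical illustration of the difference: in the WET version the same pricing rule is written twice and the copies can silently drift apart, while in the DRY version the rule has a single, authoritative representation (the tax rate used here is invented for the example):

```python
# WET: the same pricing rule is written twice.
def invoice_total_wet(net):
    return net + net * 0.25

def quote_total_wet(net):
    return net + net * 0.25

# DRY: the rule has a single, unambiguous, authoritative representation.
TAX_RATE = 0.25  # the one place this piece of knowledge lives

def with_tax(net):
    return net * (1 + TAX_RATE)

def invoice_total(net):
    return with_tax(net)

def quote_total(net):
    return with_tax(net)

# Changing TAX_RATE now updates every caller consistently.
assert invoice_total(100) == quote_total(100) == 125.0
```

    In the DRY version, a change to the tax rule touches exactly one line; in the WET version, every copy must be found and edited in sync.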

    Aspect-Oriented Programming

    In computing, aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. AOP forms a basis for aspect-oriented software development.

    AOP includes programming methods and tools that support the modularization of concerns at the level of the source code, while "aspect-oriented software development" refers to a whole engineering discipline.


    Aspect-oriented programming entails breaking down program logic into distinct parts (so-called concerns, cohesive areas of functionality). Nearly all programming paradigms support some level of grouping and encapsulation of concerns into separate, independent entities by providing abstractions (e.g., procedures, modules, classes, methods) that can be used for implementing, abstracting and composing these concerns. But some concerns defy these forms of implementation and are called crosscutting concerns because they "cut across" multiple abstractions in a program.

    Logging exemplifies a crosscutting concern because a logging strategy necessarily affects every logged part of the system. Logging thereby crosscuts all logged classes and methods.

    All AOP implementations have some crosscutting expressions that encapsulate each concern in one place. The difference between implementations lies in the power, safety, and usability of the constructs provided. For example, interceptors that specify the methods to intercept express a limited form of crosscutting, without much support for type-safety or debugging. AspectJ has a number of such expressions and encapsulates them in a special class, an aspect. For example, an aspect can alter the behavior of the base code (the non-aspect part of a program) by applying advice (additional behavior) at various join points (points in a program) specified in a quantification or query called a pointcut (that detects whether a given join point matches). An aspect can also make binary-compatible structural changes to other classes, like adding members or parents.


    Standard terminology used in Aspect-oriented programming may include:

    Cross-cutting concerns

    Even though most classes in an OO model will perform a single, specific function, they often share common, secondary requirements with other classes. For example, we may want to add logging to classes within the data-access layer and also to classes in the UI layer whenever a thread enters or exits a method. Even though each class has a very different primary functionality, the code needed to perform the secondary functionality is often identical.


    Advice

    This is the additional code that you want to apply to your existing model. In our example, this is the logging code that we want to apply whenever the thread enters or exits a method.


    Pointcut

    This is the term given to the point of execution in the application at which the cross-cutting concern needs to be applied. In our example, a pointcut is reached when the thread enters a method, and another pointcut is reached when the thread exits the method.


    Aspect

    The combination of the pointcut and the advice is termed an aspect. In the example above, we add a logging aspect to our application by defining a pointcut and giving the correct advice.
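    In languages without a dedicated aspect weaver, a limited form of this idea can be sketched with ordinary language features. The following Python sketch (the class and method names are invented) uses a decorator as a crude stand-in for advice, with the act of decorating a method playing the role of a manual pointcut; a real AOP tool such as AspectJ expresses pointcuts far more powerfully, with quantification over many join points at once:

```python
import functools

log = []  # collected crosscutting output

def logged(func):
    """Advice: extra behavior run at the join points where the
    decorated method executes (here, on entry and exit)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append(f"enter {func.__name__}")
        try:
            return func(*args, **kwargs)
        finally:
            log.append(f"exit {func.__name__}")
    return wrapper

class AccountService:
    @logged
    def deposit(self, amount):
        return amount  # the base-code behavior is unchanged

svc = AccountService()
svc.deposit(10)
assert log == ["enter deposit", "exit deposit"]
```

    Note what is lost relative to AspectJ: each method must be decorated by hand, so the logging concern is not specified in one quantified pointcut.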

    IBM Rational Unified Process

    The Rational Unified Process (RUP) is an iterative software development process framework created by the Rational Software Corporation, a division of IBM since 2003. RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. RUP is a specific implementation of the Unified Process.


    The Rational Unified Process (RUP) is a software process product, originally developed by Rational Software, which was acquired by IBM in February 2003. The product includes a hyperlinked knowledge base with sample artifacts and detailed descriptions for many different types of activities. RUP is included in the IBM Rational Method Composer (RMC) product, which allows customization of the process. Combining the experience base of companies led to the articulation of six best practices for modern software engineering:

    • Develop iteratively, with risk as the primary iteration driver
    • Manage requirements
    • Employ a component-based architecture
    • Model software visually
    • Continuously verify quality
    • Control changes

    Rational Unified Process topics
    RUP building blocks

    RUP is based on a set of building blocks, or content elements, describing what is to be produced, the necessary skills required and the step-by-step explanation describing how specific development goals are to be achieved. The main building blocks, or content elements, are the following:

    Roles (who) A Role defines a set of related skills, competencies and responsibilities.

    Work Products (what) A Work Product represents something resulting from a task, including all the documents and models produced while working through the process.

    Tasks (how) A Task describes a unit of work assigned to a Role that provides a meaningful result.

    Within each iteration, the tasks are categorized into nine disciplines:

    • Six "engineering disciplines"
    • Business Modeling
    • Requirements
    • Analysis and Design
    • Implementation
    • Test
    • Deployment
    • Three supporting disciplines
    • Configuration and Change Management
    • Project Management
    • Environment


    Four Project Lifecycle Phases

    (Figure: RUP phases and disciplines)

    The RUP has determined a project life cycle consisting of four phases. These phases allow the process to be presented at a high level in a similar way to how a 'waterfall'-styled project might be presented, although in essence the key to the process lies in the iterations of development that lie within all of the phases. Also, each phase has one key objective and milestone at the end that denotes the objective being accomplished. The visualization of RUP phases and disciplines over time is referred to as the RUP hump chart.

    Inception Phase

    The primary objective is to scope the system adequately as a basis for validating initial costing and budgets. In this phase the business case which includes business context, success factors (expected revenue, market recognition, etc.), and financial forecast is established. To complement the business case, a basic use case model, project plan, initial risk assessment and project description (the core project requirements, constraints and key features) are generated. After these are completed, the project is checked against the following criteria:

    • Stakeholder concurrence on scope definition and cost/schedule estimates.
    • Requirements understanding as evidenced by the fidelity of the primary use cases.
    • Credibility of the cost/schedule estimates, priorities, risks, and development process.
    • Depth and breadth of any architectural prototype that was developed.
    • Establishing a baseline by which to compare actual expenditures versus planned expenditures.

    If the project does not pass this milestone, called the Lifecycle Objective Milestone, it can either be cancelled or repeated after being redesigned to better meet the criteria.

    Elaboration Phase

    The primary objective is to mitigate the key risk items identified by analysis up to the end of this phase. The elaboration phase is where the project starts to take shape. In this phase the problem domain analysis is made and the architecture of the project gets its basic form.

    The outcome of the elaboration phase is:

    • A use-case model in which the use-cases and the actors have been identified and most of the use-case descriptions are developed. The use-case model should be 80% complete.
    • A description of the software architecture in a software system development process.
    • An executable architecture that realizes architecturally significant use cases.
    • Business case and risk list which are revised.
    • A development plan for the overall project.
    • Prototypes that demonstrably mitigate each identified technical risk.
    • A preliminary user manual (optional)
    This phase must pass the Lifecycle Architecture Milestone by answering the following criteria questions:

    • Is the vision of the product stable?
    • Is the architecture stable?
    • Does the executable demonstration indicate that major risk elements are addressed and resolved?
    • Is the construction phase plan sufficiently detailed and accurate?
    • Do all stakeholders agree that the current vision can be achieved using the current plan in the context of the current architecture?
    • Is the actual vs. planned resource expenditure acceptable?

    If the project cannot pass this milestone, there is still time for it to be cancelled or redesigned. However, after leaving this phase, the project transitions into a high-risk operation where changes are much more difficult and detrimental when made.

    The key domain analysis for the elaboration phase is the system architecture.

    Construction Phase

    The primary objective is to build the software system. In this phase, the main focus is on the development of components and other features of the system. This is the phase when the bulk of the coding takes place. In larger projects, several construction iterations may be developed in an effort to divide the use cases into manageable segments that produce demonstrable prototypes. This phase produces the first external release of the software. Its conclusion is marked by the Initial Operational Capability Milestone.

    Transition Phase

    The primary objective is to 'transit' the system from development into production, making it available to and understood by the end user. The activities of this phase include training the end users and maintainers and beta testing the system to validate it against the end users' expectations. The product is also checked against the quality level set in the Inception phase.
    If all objectives are met, the Product Release Milestone is reached and the development cycle is finished.

    Cowboy Coding

    Cowboy coding is software development where programmers have autonomy over the development process. This includes control of the project's schedule, languages, algorithms, tools, frameworks and coding style.

    A cowboy coder can be a lone developer or part of a group of developers working with minimal process or discipline. Cowboy coding usually occurs when there is little participation by business users, or when it is fanned by management that controls only non-development aspects of the project, such as the broad targets, timelines, scope, and visuals (the "what", but not the "how").

    The term typically carries negative connotations: depending on one's opinions on the role of management and formal process in software development, "cowboy coding" is often used as a derogatory term by supporters of software development methodologies.

    Disadvantages of cowboy coding

    In cowboy coding, the lack of formal software project management methodologies may be indicative (though not necessarily) of a project's small size or experimental nature. Software projects with these attributes may exhibit:

    Lack of release structure

    Lack of estimation or implementation planning may cause a project to be delayed. Sudden deadlines or pushes to release software may encourage the use of quick and dirty or code and fix techniques that will require further attention later.

    Inexperienced developers

    Cowboy coding can be common at the hobbyist or student level where developers may initially be unfamiliar with the technologies, such as testing, version control and/or build tools, usually more than just the basic coding a software project requires. This can result in time required for learning to be underestimated, causing delays in the development process. Inexperience may also lead to disregard of accepted standards, making the project source difficult to read or causing conflicts between the semantics of the language constructs and the result of their output.

    Uncertain design requirements

    Custom software applications, even when using a proven development cycle, can experience problems with the client concerning requirements. Cowboy coding can accentuate this problem by not scaling the requirements to a reasonable timeline, and may result in unused or unusable components being created before the project is finished. Similarly, projects with less tangible clients (often experimental projects, see independent game development) may begin with code and never a formal analysis of the design requirements. Lack of design analysis may lead to incorrect or insufficient technology choices, possibly requiring the developer to port or rewrite their software in order for the project to be completed.


    Incompleteness

    Many software development models, such as Extreme Programming, use an incremental approach which stresses that the software must be releasable at the end of each iteration. Non-managed projects may have few unit tests or working iterations, leaving an incomplete project unusable.

    Advantages of cowboy coding

    • Developers maintain a free-form working environment that may encourage experimentation, learning, and free distribution of results.
    • It allows developers to cross architectural and/or tiered boundaries to resolve design limitations and defects.
    • Without a development/designer framework, the programmer, as opposed to the project manager, is responsible for removing roadblocks. This may improve the speed of development.
    • Independent developers can begin with cowboy coding techniques before later selling the results for commercial use or creating community-supported projects.
    • Small projects may be burdened by heavy software management methodologies; cowboy coding removes this burden.
    • By coding in their own time, a hobbyist may bring a project to fruition that otherwise wouldn't have happened.

    Don't Make Me Think

    Don't Make Me Think is a book by Steve Krug about human-computer interaction and web usability. The book's premise is that a good software program or web site should let users accomplish their intended tasks as easily and directly as possible. Krug points out that people are good at satisficing, or taking the first available solution to their problem, so design should take advantage of this. As an example, he frequently cites a well-designed web site that manages to allow high-quality interaction even though it gets bigger and more complex every day.

    The book itself is intended to be an example of concision (brevity) and well-focused writing. The goal, according to the book's introduction, was to make a text that could be read by an executive on a two-hour airplane flight.

    Originally published in 2000, the book is in its second edition (2005) and has sold more than 300,000 copies.

    In 2010, the author published a sequel, Rocket Surgery Made Easy, which explains how anyone working on a Web site, mobile app, or desktop software can do their own usability testing to ensure that what they're building will be usable.

    GRASP (Object-Oriented Design)

    General Responsibility Assignment Software Patterns (or Principles), abbreviated GRASP, consists of guidelines for assigning responsibility to classes and objects in object-oriented design.

    The different patterns and principles used in GRASP are: Controller, Creator, Indirection, Information Expert, High Cohesion, Low Coupling, Polymorphism, Protected Variations, and Pure Fabrication. All these patterns answer some software problem, and in almost every case these problems are common to almost every software development project. These techniques have not been invented to create new ways of working, but to better document and standardize old, tried-and-tested programming principles in object-oriented design.

    Larman states that "the critical design tool for software development is a mind well educated in design principles. It is not the UML or any other technology." Thus, GRASP is really a mental toolset, a learning aid to help in the design of object-oriented software.

    Patterns of GRASP

    Controller

    The Controller pattern assigns the responsibility of dealing with system events to a non-UI class that represents the overall system or a use case scenario. A Controller object is a non-user-interface object responsible for receiving or handling a system event.

    A use case controller should be used to deal with all system events of a use case, and may be used for more than one use case (for instance, for use cases Create User and Delete User, one can have a single UserController, instead of two separate use case controllers).

    It is defined as the first object beyond the UI layer that receives and coordinates ("controls") a system operation. The controller should delegate the work that needs to be done to other objects; it coordinates or controls the activity. It should not do much work itself. The GRASP Controller can be thought of as part of the application/service layer (assuming that the application has made an explicit distinction between the application/service layer and the domain layer), one of the common layers in an information system's logical architecture.
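    A minimal sketch of a use case controller (all names below are invented) might look like this: the controller receives the system operations of the Create User and Delete User use cases and delegates the actual work to a domain-layer collaborator, coordinating rather than doing:

```python
class InMemoryUserRepository:
    """Invented domain-layer collaborator that does the real work."""
    def __init__(self):
        self.users = set()

    def add(self, name):
        self.users.add(name)

    def remove(self, name):
        self.users.discard(name)

class UserController:
    """First object beyond the UI layer: one controller handles the
    system events of both the Create User and Delete User use cases."""

    def __init__(self, repository):
        self.repository = repository

    def create_user(self, name):
        self.repository.add(name)    # delegate, don't do the work here

    def delete_user(self, name):
        self.repository.remove(name)

repo = InMemoryUserRepository()
controller = UserController(repo)
controller.create_user("alice")
controller.create_user("bob")
controller.delete_user("alice")
assert repo.users == {"bob"}
```

    The UI layer would call only the controller, never the repository directly, which keeps the user-interface code free of domain logic.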


    Creator

    Creation of objects is one of the most common activities in an object-oriented system. Which class is responsible for creating objects is a fundamental property of the relationship between objects of particular classes. Put simply, the Creator pattern determines which class is responsible for creating an instance of another class.

    In general, a class B should be responsible for creating instances of class A if one, or preferably more, of the following apply:

    • Instances of B contain or compositely aggregate instances of A
    • Instances of B record instances of A
    • Instances of B closely use instances of A
    • Instances of B have the initializing information for instances of A and pass it on creation.

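    For example (an invented Order/OrderLine pair), an Order compositely aggregates its OrderLines and holds their initializing information, so Creator assigns it the responsibility of creating them:

```python
class OrderLine:
    def __init__(self, product, quantity):
        self.product = product
        self.quantity = quantity

class Order:
    """Order compositely aggregates its OrderLines and has their
    initializing information, so Creator gives it the creation duty."""
    def __init__(self):
        self.lines = []

    def add_line(self, product, quantity):
        line = OrderLine(product, quantity)  # the Order creates its lines
        self.lines.append(line)
        return line

order = Order()
order.add_line("widget", 3)
assert len(order.lines) == 1 and order.lines[0].quantity == 3
```

    Client code never constructs an OrderLine directly, so the coupling between clients and OrderLine stays low.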

    High Cohesion

    High Cohesion is an evaluative pattern that attempts to keep objects appropriately focused, manageable and understandable. High cohesion is generally used in support of Low Coupling. High cohesion means that the responsibilities of a given element are strongly related and highly focused. Breaking programs into classes and subsystems is an example of activities that increase the cohesive properties of a system. Alternatively, low cohesion is a situation in which a given element has too many unrelated responsibilities. Elements with low cohesion often suffer from being hard to comprehend, hard to reuse, hard to maintain and adverse to change.


    Indirection

    The Indirection pattern supports low coupling (and reuse potential) between two elements by assigning the responsibility of mediation between them to an intermediate object. An example of this is the introduction of a controller component for mediation between data (model) and its representation (view) in the Model-view-controller pattern.

    Information Expert

    Information Expert (also Expert or the Expert Principle) is a principle used to determine where to delegate responsibilities. These responsibilities include methods, computed fields, and so on.

    Using the principle of Information Expert, a general approach to assigning responsibilities is to look at a given responsibility, determine the information needed to fulfill it, and then determine where that information is stored. Information Expert will lead to placing the responsibility on the class with the most information required to fulfill it.
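    A small sketch with invented names shows the principle in action: each order line holds the price and quantity needed to compute its own subtotal, and the order knows all of its lines, so each responsibility lands on the class with the required information:

```python
class OrderLine:
    def __init__(self, unit_price, quantity):
        self.unit_price = unit_price
        self.quantity = quantity

    def subtotal(self):
        # The line holds price and quantity, so it is the expert here.
        return self.unit_price * self.quantity

class Order:
    def __init__(self, lines):
        self.lines = lines

    def total(self):
        # The order knows all of its lines, so it computes the total.
        return sum(line.subtotal() for line in self.lines)

order = Order([OrderLine(5, 2), OrderLine(3, 1)])
assert order.total() == 13
```

    The alternative, some outside class reaching into every line's fields to sum them up, would scatter the pricing knowledge and raise coupling.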

    Low Coupling

    Low Coupling is an evaluative pattern, which dictates how to assign responsibilities to support:

    • lower dependency between the classes,
    • change in one class having lower impact on other classes,
    • higher reuse potential.
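    These goals can be sketched in Python (the names are invented for illustration): the notifier depends only on a callable handed to it, not on any concrete delivery class, so transports can be swapped without touching it.

```python
class Notifier:
    """Depends only on the send callable it is given, not on any
    concrete delivery class, keeping coupling low."""
    def __init__(self, send):
        self._send = send

    def notify(self, user, message):
        self._send(f"{user}: {message}")

# Any transport can be plugged in; here a plain list stands in for one,
# which also makes the class trivial to test in isolation.
sent = []
notifier = Notifier(sent.append)
notifier.notify("ada", "build passed")
```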



    Polymorphism

    According to the Polymorphism pattern, the responsibility of defining the variation of behaviors based on type is assigned to the types for which this variation happens. This is achieved using polymorphic operations.
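    A short Python sketch of the idea (the types are invented for this example): instead of branching on type, each subtype supplies its own variation of the area operation.

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# No if/else on type: the variation lives in the types themselves.
total = sum(shape.area() for shape in [Circle(1.0), Square(2.0)])
```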

    Protected Variations

    The Protected Variations pattern protects elements from the variations on other elements (objects, systems, subsystems) by wrapping the focus of instability with an interface and using polymorphism to create various implementations of this interface.
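    As a sketch (hypothetical names, in Python): the tax calculation is a predicted point of instability, so it is wrapped behind a stable interface; callers are protected from which variant actually runs.

```python
from abc import ABC, abstractmethod

class TaxPolicy(ABC):
    """Stable interface wrapped around a predicted point of variation."""
    @abstractmethod
    def tax(self, amount):
        ...

class FlatTax(TaxPolicy):
    def tax(self, amount):
        return amount * 0.25

class NoTax(TaxPolicy):
    def tax(self, amount):
        return 0.0

def invoice_total(amount, policy):
    # The caller is protected: any TaxPolicy implementation may vary freely.
    return amount + policy.tax(amount)
```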

    Pure Fabrication

    A Pure Fabrication is a class that does not represent a concept in the problem domain; it is invented specifically to achieve low coupling, high cohesion, and the reuse potential that follows from them, when the assignment suggested by the Information Expert pattern would not achieve these goals. In Domain-driven design, this kind of class is called a "Service".
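    For instance (a hypothetical Python sketch): an InvoiceStore is not a concept in the billing domain; it is fabricated purely to keep persistence logic out of the Invoice class.

```python
import json

class Invoice:
    """A genuine domain concept."""
    def __init__(self, number, amount):
        self.number = number
        self.amount = amount

class InvoiceStore:
    """Pure Fabrication: no real-world counterpart; it exists only to
    hold persistence logic, keeping Invoice cohesive and decoupled."""
    def __init__(self):
        self._rows = {}

    def save(self, invoice):
        self._rows[invoice.number] = json.dumps({"amount": invoice.amount})

    def load(self, number):
        data = json.loads(self._rows[number])
        return Invoice(number, data["amount"])
```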

    Hollywood principle

    In computer programming, the Hollywood principle is stated as "don't call us, we'll call you." It has applications in software engineering; see also implicit invocation for a related architectural principle.


    The Hollywood principle is a software design methodology that takes its name from the clichéd response given to amateurs auditioning in Hollywood: "Don't call us, we'll call you". It is a useful paradigm that assists in the development of code with high cohesion and low coupling that is easier to debug, maintain and test.

    Most beginners are first introduced to programming from a diametrically opposed viewpoint. Programs such as Hello World take control of the running environment and make calls on the underlying system to do their work. A considerable amount of successful software has been developed using the same principles, and indeed many developers need never think there is any other approach. After all, programs with linear flow are generally easy to understand.

    As systems increase in complexity, the linear model becomes less maintainable. Consider for example a simple program to bounce a square around a window in your favorite operating system or window manager. The linear approach may work, up to a point. You can keep the moving and drawing code in separate procedures, but soon the logic begins to branch.

    What happens if the user resizes the window?
    Or if the square is partially off-screen?

    Are all those system calls to get such resources as device contexts and interacting with the graphical user interface really part of the solution domain?
    It would be much more elegant if the programmer could concentrate on the application (in this case, updating the coordinates of the box) and leave the parts common to every application to something else.

    The key to making this possible is to sacrifice the element of control. Instead of your program running the system, the system runs your program. In our example, our program could register for timer events, and write a corresponding event handler that updates the coordinates. The program would include other callbacks to respond to other events, such as when the system requires part of a window to be redrawn. The system should provide suitable context information so the handler can perform the task and return. The user's program no longer includes an explicit control path, aside from initialization and registration.
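    The bouncing-square example can be sketched with a toy event loop in Python (everything here is invented for illustration): the program only registers a handler and updates coordinates; the "system" decides when to call it.

```python
class EventLoop:
    """Toy stand-in for the system: it calls your code, not the reverse."""
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)

box = {"x": 0, "y": 0}

def on_timer(tick):
    # Application logic only: update the square's coordinates.
    box["x"] += 1

loop = EventLoop()
loop.register("timer", on_timer)  # registration is the only control path
loop.fire("timer", 1)             # "the system" invoking the program
loop.fire("timer", 2)
```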

    Event loop programming, however, is merely the beginning of software development following the Hollywood principle. More advanced schemes such as event-driven object-orientation go further along the path, by software components sending messages to each other and reacting to the messages they receive. Each message handler merely has to perform its own local processing. It becomes very easy to unit test individual components of the system in isolation, while integration of all the components typically does not have to concern itself excessively with the dependencies between them.

    Software architecture that encourages the Hollywood principle typically becomes more than "just" an API; instead, it may take on more dominant roles such as a software framework or container. Examples:

    In Windows:
    • MFC is an example of a framework for C++ developers to interact with the Windows environment.
    • The .NET Framework is touted as a framework for scalable enterprise applications.
    On the Java side:
    • The Enterprise JavaBeans (EJB) specification describes the responsibilities of an EJB container, which must support such enterprise features as remote procedure calls and transaction management.

    All of these mechanisms require some cooperation from the developer. To integrate seamlessly with the framework, the developer must produce code that follows some conventions and requirements of the framework. This may be something as simple as implementing a specific interface, or, as in the case of EJB, a significant amount of wrapper code, often produced by code generation tools.

    Recent paradigms

    More recent paradigms and design patterns go even further in pursuit of the Hollywood principle. Inversion of control for instance takes even the integration and configuration of the system out of the application, and instead performs dependency injection.

    Again, this is most easily illustrated by an example. A more complex program such as a financial application is likely to depend on several external resources, such as database connections. Traditionally, the code to connect to the database ends up as a procedure somewhere in the program. It becomes difficult to change the database or test the code without one. The same is true for every other external resource that the application uses.

    Various design patterns exist to try to reduce the coupling in such applications. In Java, the service locator pattern exists to look up resources in a directory, such as the Java Naming and Directory Interface. This reduces the dependency - now, instead of every separate resource having its own initialization code, the program depends only on the service locator.
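    A simplified, JNDI-flavoured sketch in Python (the registry and names are invented for illustration): components depend only on the locator rather than on each resource's own initialization code.

```python
class ServiceLocator:
    """Single lookup point for resources, in the spirit of a
    JNDI-style directory."""
    _registry = {}

    @classmethod
    def register(cls, name, factory):
        cls._registry[name] = factory

    @classmethod
    def lookup(cls, name):
        # Each lookup invokes the registered factory.
        return cls._registry[name]()

# A plain dict stands in for a real database connection.
ServiceLocator.register("db", lambda: {"dsn": "mock://db"})
conn = ServiceLocator.lookup("db")
```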

    Inversion of control

    Inversion of control containers take the next logical step. In this example, the configuration and location of the database (and all the other resources) is kept in a configuration file external from the code. The container is responsible for resolution of these dependencies, and delivers them to the other software components - for example by calling a setter method. The code itself does not contain any configuration. Changing the database, or replacing it with a suitable mock object for unit testing, becomes a relatively simple matter of changing the external configuration. Integration of software components is facilitated, and the individual components get even closer to the Hollywood principle.
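    A toy sketch of the container idea in Python (all names invented): configuration lives outside the component, and the container resolves the dependency and delivers it by calling a setter.

```python
class ReportService:
    """Contains no configuration and never locates its own database."""
    def __init__(self):
        self.db = None

    def set_db(self, db):
        # Setter the container calls to deliver the dependency.
        self.db = db

    def run(self):
        return f"report via {self.db['dsn']}"

class Container:
    """Toy inversion-of-control container: reads external configuration
    and performs setter injection on the components it builds."""
    def __init__(self, config):
        self.config = config

    def build_report_service(self):
        service = ReportService()
        service.set_db({"dsn": self.config["db_dsn"]})
        return service

# Swapping the database (or substituting a mock for unit testing)
# is just a change to the external configuration.
service = Container({"db_dsn": "mock://test"}).build_report_service()
```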

    Agile Development

    Agile development methodologies promise higher customer satisfaction, lower defect rates, faster development times and a solution to rapidly changing requirements. Plan-driven approaches promise predictability, stability, and high assurance. However, both approaches have shortcomings that, if left unaddressed, can lead to project failure. The challenge is to balance the two approaches to take advantage of their strengths and compensate for their weaknesses.
    Barry Boehm and Richard Turner

    Intelligent Quotes

    Enterprise Architecture (EA) is the discipline of designing enterprises in order to rationalize their processes and organisation. In practice it is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise's future state and enable its evolution......
    "Every software system needs to have a simple yet powerful organizational philosophy (think of it as the software equivalent of a sound bite that describes the system's architecture)... A step in thr development process is to articulate this architectural framework, so that we might have a stable foundation upon which to evolve the system's function points. "
    "All architecture is design but not all design is architecture. Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change"
    "The ultimate measurement is effectiveness, not efficiency "
    "It is argued that software architecture is an effective tool to cut development cost and time and to increase the quality of a system. "Architecture-centric methods and agile approaches." Agile Processes in Software Engineering and Extreme Programming.
    "Java is C++ without the guns, knives, and clubs "
    "When done well, software is invisible"
    "Our words are built on the objects of our experience. They have acquired their effectiveness by adapting themselves to the occurrences of our everyday world."
    "I always knew that one day Smalltalk would replace Java. I just didn't know it would be called Ruby. "
    "The best way to predict the future is to invent it."
    "In 30 years Lisp will likely be ahead of C++/Java (but behind something else)"
    "Possibly the only real object-oriented system in working order. (About Internet)"
    "Simple things should be simple, complex things should be possible. "
    "Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines."
    "Model Driven Architecture is a style of enterprise application development and integration, based on using automated tools to build system independent models and transform them into efficient implementations. "
    "The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. "
    "Software Engineering Economics is an invaluable guide to determining software costs, applying the fundamental concepts of microeconomics to software engineering, and utilizing economic analysis in software engineering decision making. "
    "Ultimately, discovery and invention are both problems of classification, and classification is fundamentally a problem of finding sameness. When we classify, we seek to group things that have a common structure or exhibit a common behavior. "
    "Perhaps the greatest strength of an object-oriented approach to development is that it offers a mechanism that captures a model of the real world. "
    "The entire history of software engineering is that of the rise in levels of abstraction. "
    "The amateur software engineer is always in search of magic, some sensational method or tool whose application promises to render software development trivial. It is the mark of the professional software engineer to know that no such panacea exist "

    Core Values?

    Agile And Scrum Based Architecture

    Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration.....


    Core Values?

    Total quality management

    Total Quality Management / TQM is an integrative philosophy of management for continuously improving the quality of products and processes. TQM is based on the premise that the quality of products and .....


    Core Values?

    Design that Matters

    We are more than code junkies. We're a company that cares how a product works and what it says to its users. There is no reason why your custom software should be difficult to understand.....


    Core Values?

    Expertise that is Second to None

    With extensive software development experience, our development team is up for any challenge within the Great Plains development environment. Our research work on IEEE international papers is considered....


    Core Values?

    Solutions that Deliver Results

    We have a proven track record of developing and delivering solutions that have resulted in reduced costs, time savings, and increased efficiency. Our clients are very much ....


    Core Values?

    Relentless Software Testing

    We simply don't release anything that isn't tested well. Tell us something can't be tested under automation, and we will go prove it can be. We create tests before we write the complementary production software......


    Core Values?

    Unparalleled Technical Support

    If a customer needs technical support for one of our products, no one can do it better than us. Our offices are open from 9am until 9pm Monday to Friday, and soon to be 24 hours. Unlike many companies, you are able to....


    Core Values?

    Impressive Results

    We have a reputation for process genius, fanatical testing, high quality, and software joy. Whatever your business, our methods will work well in your field. We have done work in ERP solutions, e-commerce, portal solutions, IEEE research....



    Why Choose Us?

    Invest in Thoughts

    The intellectual commitment of our development team is central to Leonsoft's ability to achieve its mission: to develop principled, innovative thought leaders in global communities.

    From Idea to Enterprise

    Today's most successful enterprise applications were once nothing more than an idea in someone's head. While many of these applications are planned and budgeted from the beginning....

    Constant Innovation

    We constantly strive to redefine the standard of excellence in everything we do. We encourage both individuals and teams to constantly strive for developing innovative technologies....

    Utmost Integrity

    If our customers are the foundation of our business, then integrity is the cornerstone. Everything we do is guided by what is right. We live by the highest ethical standards.....
