Software Cost Estimation With Use Case Points

February 14, 2007

The technical factors are the first thing you assess when doing a use case point analysis. Technical factors describe the expectations of the users for the delivered software. Generally, it is an assessment of non-functional requirements. There are 13 technical factors that you have to analyze.


This is the second article in a series on applying use case points to create reliable software cost estimates. What makes use case points different is that they allow the project cost estimation to happen much earlier in the process. This cost estimation technique was developed by Gustav Karner for Rational Software Corporation in the mid 1990’s.

The introduction to software cost estimation is the right place to start if you came to this article first.

Technical Factors

When applying any general cost estimation technique, you have to account for many variables. Every software project is different, and if you don’t account for those differences, your estimate will not be reliable. In the use case points method there are 13 factors that have to be considered. Not all factors have the same potential impact on a project cost estimate, so each factor has a multiplier representing its relative weight.

Here are the 13 technical factors of use case points estimation. Each factor is listed as Name (multiplier) – Description. For each factor, you will assign a relative magnitude of 0 (irrelevant) to 5 (critically important).

  1. Distributed System Required (2) – The architecture of the solution may be centralized or single-tenant, or it may be distributed (like an n-tier solution) or multi-tenant. Higher numbers represent a more complex architecture.
  2. Response Time Is Important (1) – The quickness of response for users is an important (and non-trivial) factor. For example, if the server load is expected to be very low, this may be a trivial factor. Higher numbers represent increasing importance of response time (a search engine would have a high number, a daily news aggregator would have a low number).
  3. End User Efficiency (1) – Is the application being developed to optimize on user efficiency, or just capability? Higher numbers represent projects that rely more heavily on the application to improve user efficiency.
  4. Complex Internal Processing Required (1) – Is there a lot of difficult algorithmic work to do and test? Complex algorithms (resource leveling, time-domain systems analysis, OLAP cubes) have higher numbers. Simple database queries would have low numbers.
  5. Reusable Code Must Be a Focus (1) – Is heavy code reuse an objective or goal?  Code reuse reduces the amount of effort required to deploy a project.  It also reduces the amount of time required to debug a project.  A shared library function can be re-used multiple times, and fixing the code in one place can resolve multiple bugs.  The higher the level of re-use, the lower the number.
  6. Installation Ease (0.5) – Is ease of installation for end users a key factor? The higher the level of competence of the users, the lower the number.
  7. Usability (0.5) – Is ease of use a primary criterion for acceptance? The greater the importance of usability, the higher the number.
  8. Cross-Platform Support (2) – Is multi-platform support required? The more platforms that have to be supported (this could be browser versions, mobile devices, etc. or Windows/OSX/Unix), the higher the value.
  9. Easy To Change (1) – Does the customer require the ability to change or customize the application in the future? The more change / customization that is required in the future, the higher the value.
  10. Highly Concurrent (1) – Will you have to address database locking and other concurrency issues? The more attention you have to spend to resolving conflicts in the data or application, the higher the value.
  11. Custom Security (1) – Can existing security solutions be leveraged, or must custom code be developed? The more custom security work you have to do (field level, page level, or role based security, for example), the higher the value.
  12. Dependence on Third Party Code (1) – Will the application require the use of third party controls or libraries? Like re-usable code, third party code can reduce the effort required to deploy a solution.  The more third party code (and the more reliable the third party code), the lower the number.
  13. User Training (1) – How much user training is required? Is the application complex, or supporting complex activities? The longer it takes users to cross the suck threshold (achieve a level of mastery of the product), the higher the value.

Note: For both code re-use (#5) and third-party code (#12), the articles I’ve read did not clarify if increased amounts of leverage would increase the technical factors or decrease them.  In my opinion, the more code you leverage, the less work you ultimately have to do.  This is dependent on prudent decisions about using other people’s code – is it high quality, stable, mature, and rigorously tested?  Adjust your answers based on these subjective factors.

Assigning Values To Technical Factors

For each of the thirteen technical factors, you must assign a relative magnitude of 0 to 5. This relative magnitude reflects that the decisions aren’t binary. They represent a continuum of effort / difficulty. Those (0-5) values are then multiplied by the multiplier for each factor. For example, a relative magnitude of 3 for cross-platform support would result in 6 points – because cross-platform support has twice the impact on work effort as a focus on response time.

Technical Complexity Factor

The final step of technical complexity analysis is to determine the technical complexity factor (TCF). The acronym TCF has meaning only in the context of use case points, so you only need to remember it when discussing the technique with other folks.

The TCF is calculated by first summing the relative magnitudes, each multiplied by its factor’s multiplier. That weighted sum is divided by 100 and added to 0.6 to arrive at the TCF: TCF = 0.6 + 0.01 × Σ(multiplier × magnitude).

For example, if the relative magnitude of every technical factor were 2, the weighted sum would be 2 × 14 = 28 (the 13 multipliers sum to 14). The TCF would then be TCF = 0.6 + 0.28 = 0.88.
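The two steps above – weighting each magnitude by its multiplier, then folding the sum into the TCF formula – can be sketched in Python. The factor names and multipliers come from the list above; the magnitudes (all set to 2 here) are illustrative, matching the worked example:

```python
# (multiplier, assigned magnitude 0-5) for each of the 13 technical factors.
# Multipliers are from the use case points method; magnitudes are illustrative.
TECHNICAL_FACTORS = {
    "Distributed System Required": (2, 2),
    "Response Time Is Important": (1, 2),
    "End User Efficiency": (1, 2),
    "Complex Internal Processing Required": (1, 2),
    "Reusable Code Must Be a Focus": (1, 2),
    "Installation Ease": (0.5, 2),
    "Usability": (0.5, 2),
    "Cross-Platform Support": (2, 2),
    "Easy To Change": (1, 2),
    "Highly Concurrent": (1, 2),
    "Custom Security": (1, 2),
    "Dependence on Third Party Code": (1, 2),
    "User Training": (1, 2),
}

def technical_complexity_factor(factors):
    """TCF = 0.6 + 0.01 * sum(multiplier * magnitude)."""
    weighted_sum = sum(mult * mag for mult, mag in factors.values())
    return 0.6 + weighted_sum / 100

tcf = technical_complexity_factor(TECHNICAL_FACTORS)
print(round(tcf, 2))  # 0.88, matching the worked example above
```

Changing any single magnitude shows the relative weights at work: raising Cross-Platform Support (multiplier 2) from 2 to 3 adds 0.02 to the TCF, while the same change to Usability (multiplier 0.5) adds only 0.005.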

Next Step

The next step is to calculate the Environmental Complexity, a representation of the capability of the team and the environment in which the software is being developed.

source [tynerblain]


Ajax Frameworks, Toolkits & Libraries

January 22, 2007

Ten Requirements Gathering Techniques

November 23, 2006

The BABoK (Business Analysis Body of Knowledge) lists 10 techniques for gathering requirements. Here’s an overview of each one. For more details, check out the latest Guide to the BABoK.


  1. Brainstorming
  2. Document Analysis
  3. Focus Group
  4. Interface Analysis
  5. Interview
  6. Observation
  7. Prototyping
  8. Requirements Workshop
  9. Reverse Engineering
  10. Survey


1. Brainstorming

Brainstorming is used in requirements elicitation to get as many ideas as possible from a group of people. Generally used to identify possible solutions to problems, and clarify details of opportunities. Brainstorming casts a wide net, identifying many different possibilities. Prioritization of those possibilities is important to finding the needles in the haystack.


2. Document Analysis

Reviewing the documentation of an existing system can help when creating AS-IS process documents, as well as driving gap analysis for scoping of migration projects. In an ideal world, we would even be reviewing the requirements that drove creation of the existing system – a starting point for documenting current requirements. Nuggets of information are often buried in existing documents that help us ask questions as part of validating requirement completeness.


3. Focus Group

A focus group is a gathering of people who are representative of the users or customers of a product to get feedback. The feedback can be gathered about needs / opportunities / problems to identify requirements, or can be gathered to validate and refine already elicited requirements. This form of market research is distinct from brainstorming in that it is a managed process with specific participants. There is danger in “following the crowd”, and some people believe focus groups are at best ineffective. One risk is that we end up with the lowest common denominator features.


4. Interface Analysis

Interfaces for a software product can be human or machine. Integration with external systems and devices is just another interface. User-centric design approaches are very effective at making sure that we create usable software. Interface analysis – reviewing the touch points with other external systems – is important to make sure we don’t overlook requirements that aren’t immediately visible to users.


5. Interview

Interviews of stakeholders and users are critical to creating great software. Without understanding the goals and expectations of the users and stakeholders, we are very unlikely to satisfy them. We also have to recognize the perspective of each interviewee, so that we can properly weigh and address their inputs. Like a great reporter, listening is the skill that helps a great analyst get more value from an interview than an average analyst.


6. Observation

The study of users in their natural habitats is what observation is about. By observing users, an analyst can identify a process flow, awkward steps, pain points and opportunities for improvement. Observation can be passive or active (asking questions while observing). Passive observation is better for getting feedback on a prototype (to refine requirements), while active observation is more effective at getting an understanding of an existing business process. Either approach can be used to uncover implicit requirements that otherwise might go overlooked.


7. Prototyping

Prototypes can be very effective at gathering feedback. Low fidelity prototypes can be used as an active listening tool. Often, when people cannot articulate a particular need in the abstract, they can quickly assess whether a design approach would address that need. Prototypes are most efficiently done with quick sketches of interfaces and storyboards. Prototypes are even being used as the “official requirements” in some situations.


8. Requirements Workshop

More commonly known as a joint application design (JAD) session, workshops can be very effective for gathering requirements. More structured than a brainstorming session, the involved parties collaborate to document requirements. One way to capture the collaboration is through the creation of domain-model artifacts (like static diagrams or activity diagrams). A workshop is more effective with two analysts than with one: a facilitator and a scribe working together.


9. Reverse Engineering

Is this a starting point or a last resort? When a migration project does not have access to sufficient documentation of the existing system, reverse engineering will identify what the system does. It will not identify what the system should do, and will not identify when the system does the wrong thing.


10. Survey

When collecting information from many people – too many to interview within budget and time constraints – a survey or questionnaire can be used. The survey can force users to select from choices, rate something (“Agree Strongly, Agree…”), or answer open-ended questions in free form. Survey design is hard – poorly worded questions can bias the respondents. Don’t assume that you can create a survey on your own and get meaningful insight from the results. I would expect a well designed survey to provide qualitative guidance for characterizing the market; it should not be used for prioritization of features or requirements.

source [tyner blain]

Free eBook – Getting Real

November 6, 2006

Getting Real by 37signals is an ebook that I bought when it was released. It consists of a collection of essays on creating and managing a better software company.

Based on 37signals’ software design philosophy, it forces you to rethink your business and development model:

Getting Real delivers better results because it forces you to deal with the actual problems you’re trying to solve instead of your ideas about those problems. It forces you to deal with reality.

Getting Real foregoes functional specs and other transitory documentation in favor of building real screens. A functional spec is make-believe, an illusion of agreement, while an actual web page is reality. That’s what your customers are going to see and use. That’s what matters. Getting Real gets you there faster. And that means you’re making software decisions based on the real thing instead of abstract notions.

Finally, Getting Real is an approach ideally suited to web-based software. The old school model of shipping software in a box and then waiting a year or two to deliver an update is fading away. Unlike installed software, web apps can constantly evolve on a day-to-day basis. Getting Real leverages this advantage for all its worth.

And guess what, now they’ve released the ebook free in html version.

I don’t regret paying $19 for it, but now it is a bargain for everyone because it is free.

Getting Real – 37signals

source []

SAP’s tools now feature RadRails and Eclipse

October 19, 2006

RadRails now a part of SAP SDN

SAP’s new download includes a bunch of open-source tools including RadRails, Eclipse, PHP/Ruby/Python code generators, and SAP’s scripting tools. This shows the growing trend that large software companies are beginning to realize the huge value of free open-source frameworks and ideas. Eclipse and RadRails are excellent tools to use, even for the wizards at SAP, so it only makes sense that they use these tools and bundle them for developers. I have always considered SAP to be somewhat forward thinking, but this proves they see the value in the tools already freely available to developers. They are only making life easier on themselves and keeping developers happy too. Sun, Microsoft, IBM, and others have shown signs of embracing open-source tools, though they haven’t quite brought their open-source offerings to their full potential. They haven’t quite reached critical mass, but they are on the way.

source [downloadsquad]

Timeline – Google Maps for time-based information

July 9, 2006

Timeline is a DHTML-based AJAXy widget for visualizing time-based events. It is like Google Maps for time-based information.

The Life of Monet: a live example of timeline showing the life of Monet.

How to Create Timelines

Link to website [timelines]


EasyEclipse

June 3, 2006

I couldn't think of anything to add to the title of this post that wouldn't be redundant. EasyEclipse is what it sounds like: a prepackaged installer for the Eclipse IDE that makes getting up and running with Eclipse really simple on Windows, OS X, or Linux. It comes in a variety of flavors to match your programming language/environment of choice, including Java, LAMP, PHP, Python, and Ruby on Rails. Each distribution comes with preinstalled plugins to make your life easier, but the EasyEclipse web site also has a variety of other plugins that are packaged similarly for ease of installation. The project was inspired by the Eclipse download hell post on Simon Willison's Weblog which, a year and a half after its original posting, is still the third result for Google searches for "Eclipse download."


source post [download squad]