Software Cost Estimation With Use Case Points

February 14, 2007

The technical factors are the first thing you assess when doing a use case point analysis. Technical factors describe the expectations of the users for the delivered software. Generally, it is an assessment of non-functional requirements. There are 13 technical factors that you have to analyze.

Background

This is the second article in a series on applying use case points to create reliable software cost estimates. What makes use case points different is that they allow project cost estimation to happen much earlier in the process. This cost estimation technique was developed by Gustav Karner for Rational Software Corporation in the mid-1990s.

The introduction to software cost estimation is the right place to start if you came to this article first.

Technical Factors

When applying any general cost estimation technique, you have to account for many variables. Every software project is different, and if you don’t account for those differences, your estimate will not be reliable. In the use case points method there are 13 factors that have to be considered. Not all factors have the same potential impact on a project cost estimate, so each factor has a multiplier representing its relative weight.

Here are the 13 technical factors of use case points estimation. Each factor is listed as Name (multiplier) – Description. For each factor, you will assign a relative magnitude of 0 (irrelevant) to 5 (critically important).

  1. Distributed System Required (2) – The architecture of the solution may be centralized or single-tenant, or it may be distributed (like an n-tier solution) or multi-tenant. Higher numbers represent a more complex architecture.
  2. Response Time Is Important (1) – The quickness of response for users is an important (and non-trivial) factor. For example, if the server load is expected to be very low, this may be a trivial factor. Higher numbers represent increasing importance of response time (a search engine would have a high number, a daily news aggregator would have a low number).
  3. End User Efficiency (1) – Is the application being developed to optimize on user efficiency, or just capability? Higher numbers represent projects that rely more heavily on the application to improve user efficiency.
  4. Complex Internal Processing Required (1) – Is there a lot of difficult algorithmic work to do and test? Complex algorithms (resource leveling, time-domain systems analysis, OLAP cubes) have higher numbers. Simple database queries would have low numbers.
  5. Reusable Code Must Be a Focus (1) – Is heavy code reuse an objective or goal? Code reuse reduces the amount of effort required to deploy a project. It also reduces the amount of time required to debug a project. A shared library function can be re-used multiple times, and fixing the code in one place can resolve multiple bugs. The higher the level of re-use, the lower the number.
  6. Installation Ease (0.5) – Is ease of installation for end users a key factor? The higher the level of competence of the users, the lower the number.
  7. Usability (0.5) – Is ease of use a primary criterion for acceptance? The greater the importance of usability, the higher the number.
  8. Cross-Platform Support (2) – Is multi-platform support required? The more platforms that have to be supported (this could be browser versions, mobile devices, etc. or Windows/OSX/Unix), the higher the value.
  9. Easy To Change (1) – Does the customer require the ability to change or customize the application in the future? The more change / customization that is required in the future, the higher the value.
  10. Highly Concurrent (1) – Will you have to address database locking and other concurrency issues? The more attention you have to spend to resolving conflicts in the data or application, the higher the value.
  11. Custom Security (1) – Can existing security solutions be leveraged, or must custom code be developed? The more custom security work you have to do (field level, page level, or role based security, for example), the higher the value.
  12. Dependence on Third Party Code (1) – Will the application require the use of third party controls or libraries? Like re-usable code, third party code can reduce the effort required to deploy a solution. The more third party code (and the more reliable the third party code), the lower the number.
  13. User Training (1) – How much user training is required? Is the application complex, or supporting complex activities? The longer it takes users to cross the suck threshold (achieve a level of mastery of the product), the higher the value.

Note: For both code re-use (#5) and third-party code (#12), the articles I’ve read did not clarify whether increased amounts of leverage would increase the technical factors or decrease them. In my opinion, the more code you leverage, the less work you ultimately have to do. This depends on making prudent decisions about using other people’s code – is it high quality, stable, mature, and rigorously tested? Adjust your answers based on these subjective factors.

Assigning Values To Technical Factors

For each of the thirteen technical factors, you must assign a relative magnitude of 0 to 5. This relative magnitude reflects that the decisions aren’t binary. They represent a continuum of effort / difficulty. Those (0-5) values are then multiplied by the multiplier for each factor. For example, a relative magnitude of 3 for cross-platform support would result in 6 points – because cross-platform support has twice the impact on work effort as a focus on response time.
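
To make the weighting step concrete, here is a minimal Python sketch. The multipliers are the ones listed above; the short factor names and the sample magnitude are my own illustrative shorthand, not part of the method.

```python
# Multipliers for the 13 technical factors listed above.
# The short names are illustrative shorthand, not part of Karner's method.
TECHNICAL_FACTOR_WEIGHTS = {
    "distributed_system": 2.0,
    "response_time": 1.0,
    "end_user_efficiency": 1.0,
    "complex_processing": 1.0,
    "reusable_code": 1.0,
    "installation_ease": 0.5,
    "usability": 0.5,
    "cross_platform": 2.0,
    "easy_to_change": 1.0,
    "highly_concurrent": 1.0,
    "custom_security": 1.0,
    "third_party_code": 1.0,
    "user_training": 1.0,
}

def weighted_points(magnitudes):
    """Multiply each assigned magnitude (0-5) by that factor's multiplier."""
    return {name: TECHNICAL_FACTOR_WEIGHTS[name] * value
            for name, value in magnitudes.items()}

# The example from the text: cross-platform support rated 3 contributes 6 points.
print(weighted_points({"cross_platform": 3}))  # {'cross_platform': 6.0}
```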

Technical Complexity Factor

The final step of technical complexity analysis is to determine the technical complexity factor (TCF). You only need to remember the TCF acronym when talking to other folks about use case points – it has meaning only in this context.

The TCF is calculated by first summing the relative magnitudes, each multiplied by its factor’s multiplier. That weighted sum is then divided by 100 and added to 0.6: TCF = 0.6 + (weighted sum / 100).

For example, if the relative magnitude of every technical factor were 2, the adjusted sum would be 28. The TCF would then be TCF = 0.6 + 0.28 = 0.88.
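
Continuing the sketch, here is the same arithmetic in Python (the 13 multipliers listed above sum to 14, which is where the 28 comes from):

```python
def technical_complexity_factor(weighted_sum):
    """TCF = 0.6 plus one hundredth of the summed weighted magnitudes."""
    return 0.6 + weighted_sum / 100.0

# Worked example from the text: every factor rated 2, and the multipliers
# sum to 14, so the weighted sum is 2 * 14 = 28.
print(round(technical_complexity_factor(2 * 14), 2))  # 0.88
```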

Next Step

The next step is to calculate the Environmental Complexity, a representation of the capability of the team and the environment in which the software is being developed.

source [tynerblain]


Ajax Frameworks, Toolkits & Libraries

January 22, 2007


Ten Requirements Gathering Techniques

November 23, 2006

The BABoK (Business Analysis Body of Knowledge) lists 10 techniques for gathering requirements. Here’s an overview of each one. For more details, check out the latest Guide to the BABoK.

 

  1. Brainstorming
  2. Document Analysis
  3. Focus Group
  4. Interface Analysis
  5. Interview
  6. Observation
  7. Prototyping
  8. Requirements Workshop
  9. Reverse Engineering
  10. Survey

 

1. Brainstorming

Brainstorming is used in requirements elicitation to get as many ideas as possible from a group of people. It is generally used to identify possible solutions to problems and to clarify details of opportunities. Brainstorming casts a wide net, identifying many different possibilities. Prioritizing those possibilities is important to finding the needles in the haystack.

 

2. Document Analysis

Reviewing the documentation of an existing system can help when creating AS-IS process documents, as well as driving gap analysis for scoping migration projects. In an ideal world, we would even be reviewing the requirements that drove the creation of the existing system – a starting point for documenting current requirements. Nuggets of information that help us ask questions as part of validating requirement completeness are often buried in existing documents.

 

3. Focus Group

A focus group is a gathering of people who are representative of the users or customers of a product, brought together to provide feedback. The feedback can be gathered about needs / opportunities / problems to identify requirements, or it can be gathered to validate and refine already elicited requirements. This form of market research is distinct from brainstorming in that it is a managed process with specific participants. There is a danger in “following the crowd”, and some people believe focus groups are at best ineffective. One risk is that we end up with lowest-common-denominator features.

 

4. Interface Analysis

Interfaces for a software product can be human or machine. Integration with external systems and devices is just another interface. User-centric design approaches are very effective at making sure that we create usable software. Interface analysis – reviewing the touch points with other external systems – is important to make sure we don’t overlook requirements that aren’t immediately visible to users.

 

5. Interview

Interviews of stakeholders and users are critical to creating great software. Without understanding the goals and expectations of the users and stakeholders, we are very unlikely to satisfy them. We also have to recognize the perspective of each interviewee, so that we can properly weigh and address their input. Like a great reporter, a great analyst relies on listening to get more value from an interview than an average analyst would.

 

6. Observation

The study of users in their natural habitat is what observation is about. By observing users, an analyst can identify process flows, awkward steps, pain points, and opportunities for improvement. Observation can be passive or active (asking questions while observing). Passive observation is better for getting feedback on a prototype (to refine requirements), whereas active observation is more effective at getting an understanding of an existing business process. Either approach can be used to uncover implicit requirements that might otherwise go overlooked.

 

7. Prototyping

Prototypes can be very effective at gathering feedback. Low-fidelity prototypes can be used as an active listening tool. Often, when people cannot articulate a particular need in the abstract, they can quickly assess whether a design approach would address that need. Prototypes are most efficiently done with quick sketches of interfaces and storyboards. Prototypes are even being used as the “official requirements” in some situations.

 

8. Requirements Workshop

More commonly known as a joint application design (JAD) session, a requirements workshop can be very effective for gathering requirements. It is more structured than a brainstorming session: the involved parties collaborate to document requirements. One way to capture the collaboration is by creating domain-model artifacts (like static diagrams and activity diagrams). A workshop will be more effective with two analysts than with one, where a facilitator and a scribe work together.

 

9. Reverse Engineering

Is this a starting point or a last resort? When a migration project does not have access to sufficient documentation of the existing system, reverse engineering will identify what the system does. It will not identify what the system should do, and will not identify when the system does the wrong thing.

 

10. Survey

When collecting information from many people – too many to interview within budget and time constraints – a survey or questionnaire can be used. The survey can force users to select from choices, rate something (“Agree Strongly, Agree…”), or answer open-ended questions allowing free-form responses. Survey design is hard – questions can bias the respondents. Don’t assume that you can create a survey on your own and get meaningful insight from the results. I would expect a well-designed survey to provide qualitative guidance for characterizing the market. It should not be used for prioritization of features or requirements.

source [tyner blain]


Free eBook – Getting Real

November 6, 2006

Getting Real by 37signals is an ebook that I bought when it was released. It consists of a collection of essays on creating and managing a better software company.

Based on 37signals’ software design philosophy, it forces you to rethink your business and development model:

Getting Real delivers better results because it forces you to deal with the actual problems you’re trying to solve instead of your ideas about those problems. It forces you to deal with reality.

Getting Real foregoes functional specs and other transitory documentation in favor of building real screens. A functional spec is make-believe, an illusion of agreement, while an actual web page is reality. That’s what your customers are going to see and use. That’s what matters. Getting Real gets you there faster. And that means you’re making software decisions based on the real thing instead of abstract notions.

Finally, Getting Real is an approach ideally suited to web-based software. The old school model of shipping software in a box and then waiting a year or two to deliver an update is fading away. Unlike installed software, web apps can constantly evolve on a day-to-day basis. Getting Real leverages this advantage for all its worth.

And guess what: they’ve now released the ebook for free in an HTML version.

I don’t regret paying $19 for it, but now it’s a real bargain for everyone because it is free.

Getting Real – 37signals

source [lifehack.org]


SAP’s tools now feature RadRails and Eclipse

October 19, 2006

RadRails is now a part of SAP SDN. SAP’s new download includes a bunch of open-source tools including RadRails, Eclipse, PHP/Ruby/Python code generators, and SAP’s scripting tools. This shows the growing trend of large software companies beginning to realize the huge value of free open-source frameworks and ideas. Eclipse and RadRails are excellent tools to use, even for the wizards at SAP, so it only makes sense that they use these tools and bundle them for developers. I have always considered SAP to be somewhat forward thinking, but this proves they see the value in the tools already freely available to developers. They are only making life easier on themselves and keeping developers happy too. Sun, Microsoft, IBM, and others have shown signs of embracing open-source tools, though they haven’t quite brought their open-source offerings to their full potential. They haven’t quite reached critical mass, but they are on the way.

source [downloadsquad]


Timeline – Google Maps for time-based information

July 9, 2006

Timeline is a DHTML-based AJAXy widget for visualizing time-based events. It is like Google Maps for time-based information.

The Life of Monet: a live example of timeline showing the life of Monet.

How to Create Timelines


Link to website [timelines]


EasyEclipse

June 3, 2006

I couldn't think of anything to add to the title of this post that wouldn't be redundant. EasyEclipse is what it sounds like: a prepackaged installer for the Eclipse IDE that makes getting up and running with Eclipse really simple on Windows, OS X, or Linux. It comes in a variety of flavors to match your programming language/environment of choice, including Java, LAMP, PHP, Python, and Ruby on Rails. Each distribution comes with preinstalled plugins to make your life easier, but the EasyEclipse web site also has a variety of other plugins that are packaged similarly for ease of installation. The project was inspired by the Eclipse download hell post on Simon Willison's Weblog which, a year and a half after its original posting, is still the third result for Google searches for "Eclipse download."

 

source post [download squad] 


15 tips for writing smart user manuals

May 17, 2006

How often have you come across a user manual that claims to solve a problem, but actually ends up confusing more than helping? If you're a typical user, it probably happens more often than not. Such badly-designed content leads to dissatisfaction and frustration, a poor impression of product quality and (for the company that sold you the product) increased post-delivery support time and costs.

That's where smart documentation comes in. Smart documentation understands end-user behavior and is aligned to user needs in the most practical manner possible. And in this article, I'm going to offer some practical tips to help you build user content that is suitable, accessible, and readable.

  • Understand your audience
    Know who you are writing for and what the audience needs to know. This helps you to decide on both the depth and breadth of information that needs to be captured, and the kind of language to be used (for example, language and content would be different for experts and beginners). The key here is to give users only what they want — nothing more, nothing less.
  • Have a task-oriented approach
    Most products are functional in that they allow users to perform specific tasks. Adopt a task-oriented approach whereby you develop content based on the tasks that can be performed using the product. For example, if the product allows you to configure a network, your table of contents should include headings like "Creating a network", "Configuring network settings" and "Deleting a network".
  • Ensure a logical flow of information
    Study the product well enough to understand what happens first, next, and last, in a progressive fashion. This vastly improves the accessibility of information in your document.
  • Use modules
    Break your information into small and manageable chunks, where each portion supports one specific purpose or idea. Such chunks are easier to process by readers, and indicate clear thinking on your part. Modular writing also promotes ease of maintenance, and makes it possible to reuse information through internal linkage.
  • Use a table of contents
    A table of contents gives a bird's-eye view of the scope of the document. Ensure that it is comprehensive, well structured, and has a modular layout. This approach enables users to better identify the information they need.
  • Use meaningful and consistent labels
    Clear and informative labels help users identify information quickly and correctly. Avoid using generic label titles, and keep labels short and to the point.
  • Write in a conversational tone
    Adopt a Frequently Asked Questions (FAQ) approach. This methodology allows you to bridge the gap between the product and the user with greater ease, and also include the most common information a user would need.
  • Consider the location of critical information
    It is human nature to first glance at the center of the page or screen and then at the upper-left corner. Attempt to format your content such that the crux of the material is close to the physical center of the layout, and the main headings are in the upper-left corner.
  • Use adequate illustrations
    Surprisingly, images are the most under-estimated component of any document. A document that is visually appealing is always more usable. Illustrations (pictorial representations, charts, process flow diagrams) form an integral part of the content, engage the reader's attention and reinforce the content they support.
  • Tabulate information wherever possible
    Tables improve the readability of information. Use tables when objects need to be described on different bases, or when comparing objects across different dimensions.
  • Provide examples
    Demonstrate your concepts and explanations with analogies, examples, or case studies. Examples help users grasp the concepts quickly and with better understanding.
  • Include troubleshooting tips
    When documenting procedures, analyze possible failure scenarios and tell the user how to deal with them when they occur. If you have a separate troubleshooting manual, direct the user to that document for more information.
  • Construct a good index
    It is generally observed that if a document is badly designed, users look for the information they need in the index. Cover your bases by capturing critical key words in the index to facilitate information retrieval.
  • Edit and review
    Edit your document to ensure that it conforms to appropriate guidelines for completeness, language, spelling and grammar, consistency and formatting.
  • Perform a "reality check"
    Ensure that the document is tested in tandem with the product, to expose any deviations between what has been written and reality. Deviations should be corrected and the test procedure should be repeated to ensure that no new errors were introduced in the correction process.

Customer Support

Remember that a user manual is all about enhancing user productivity and reducing customer support time, costs, and effort. A good document serves as training material for a new user and a support document for returning users. Conforming to the aforementioned guidelines ensures better information access and usability, reduced support time and improved customer satisfaction.

 

source post [tech republic]


Do Engineers Use Their Software?

April 30, 2006

As a software and product development project manager, I always find it interesting when others write about developers/engineers testing their code. I just read a brief post by J. Rothman: Do Engineers Use Their Software?. I know, I know – it's a touchy subject. A lot of the time there is such a schedule crunch that devs barely have time to check an update in before the next scheduled build, and test/QA is often pushing for the code ASAP so they can get a head start on preparing test cases or updating automated smoke tests. However, far too often we see subpar code simply tossed over the wall with no thought of going beyond a few clicks on the engineer's dev box. How worn out is the phrase "It works on my box," and how frustrating is it when a capable coder continues to churn out sloppy code or considers it QA's job to run the first pass? When did accountability for complete functionality (including, at a minimum, a first pass) and personal quality stop counting, and when did "good enough to get by" replace the desire to be the very best at what you do? I know, it sounds like I'm picking on developers today, and perhaps I am – but only the sloppy Joes that rip through a task and head for the door with no thought of ensuring the code will integrate successfully, or even function, for that matter. For all of the brilliant engineers out there – I salute you! It's a dirty job and you do it well! Now I'll step down from my soapbox and refer you to the great post from J. Rothman, Do Engineers Use Their Software?, which is what brought on my diatribe. Check it out:

My friend and colleague, Stever Robbins, has started a blog, and one of his early posts is Are engineers living on another planet? Don't they use their software?

Unfortunately, not always. It takes self-discipline and the desire to look for problems to cause people to create systems that allow them to use their own software. If a project team only builds once a week, they're not going to use their software. If they fix a bunch of defects at one time, the testers can't do a complete install and test pieces in isolation. Instead, the testers need to install the whole darn thing and test everything together.

The current phrase for using your own software under development is "eating your own dog food." (Anyone know the origin of that phrase? I'm fairly sure I was using it in the 80's, before Microsoft popularized it.) It's not easy to use the product under development. And, it's a great idea.

Direct URL to post: http://www.jrothman.com/weblog/2006/04/do-engineers-use-their-software.html

Thanks to Johanna Rothman for the great info, and check out the post she references, Are engineers living on another planet? Don't they use their software?, for another interesting viewpoint on the topic. As always, your comments are welcome – Enjoy!

source post: Raven (http://spaces.msn.com/members/ravenyoung/)


The Featuritis Curve

April 15, 2006

Michael on High-Tech Product Management and Marketing has a fantastic “wish I wrote that” post about the importance of having the right number of features. He has several references, the best of which is Kathy Sierra’s Featuritis vs. the Happy User Peak post from June 2005. The two posts combined provide great insight into why having too many features is bad, while acknowledging that too few is just as bad. In this post we will look at what we can do to apply these insights and also change the rules some, making our software more desirable than our competition’s.

Kathy Sierra’s curve

[Image: Kathy Sierra’s featuritis curve, licensed CC BY-NC-SA 2.5 – //creativecommons.org/licenses/by-nc-sa/2.5/]

Thanks Kathy Sierra for allowing re-use of your work.

Kathy’s basic point is that users get happier as we add features – up to a point – and then the confusion and complexity of dealing with extra features outweighs the benefit of having those features. In the discussion thread on her post, people bring up the Microsoft Word example – most people only use 10% of the features – and others counter that different users use different features. Kathy’s post explores more than just software, addressing car radios and other interfaces.

Michael’s extension of ideas

Michael reviews the recent Business 2.0 article titled “Simple Minds” that in short says “more is more, and it always has been”. I guess there’s a bit of backlash about the quest to create minimally functional software. To quote Michael:

Simpler is indeed better, as long as your product meets your customers’ core needs. You may lose some customers because you don’t have some non-core features, but in most cases – I believe – that loss will be more than made up by those customers you gain since your product is simple, easy to use and yet meets their core needs.

His article is a fantastic and thought provoking read. I especially like his use of the pocket utility knife for feature comparison!

Tying ideas together

We’ve posted before about exceeding the suck threshold by creating software that people can use – another of Kathy’s great ideas. Visually, here’s what that looks like using the same framework Kathy has presented.

[Figure: the chart redrawn with the suck threshold]

We can see that to clear the suck threshold, we need to have more than some minimal amount of features, without having too many features. Our goal is to reach the peak of the curve, where we have the optimal amount of features (for competent users).

[Figure: the goal – the peak of the curve]

How do we reach the goal?

When we use Kano analysis to prioritize features, we’re already halfway there (and then some). Recapping from that post:

Kano provides three relevant classifications of requirements (the fourth category is redundant). All requirements can be placed in one of these categories.

  1. Surprise and delight. Capabilities that differentiate a product from its competition (e.g. the nav-wheel on an iPod).
  2. More is better. Dimensions along a continuum with a clear direction of increasing utility (e.g. battery life or song capacity).
  3. Must be. Functional barriers to entry – without these capabilities, customers will not use the product (e.g. UL approval).

The must-be features are the first piece in the puzzle, and they are easy to overlay on the diagram.

[Figure: must-be features overlaid on the diagram]

What gets us to the goal is our differentiated innovations – the surprise and delight features.

[Figure: surprise and delight features]

Shifting the curve

As both Kathy and Michael point out, we still feel a lot of pressure to keep adding features. Even if we use Kano to hit the ideal software goals, what keeps us from feature creep and bloat until it’s all worthless? They both suggest investing in making the software better, instead of making it do more. And we agree about making it better. If we make the user experience better, we can make the software do more without falling back below the suck threshold.

Consider the more is better requirements. Think of them in two categories – user interaction improvements, and application performance improvements.

User interaction improvements remove complexity, and make software easier to use. This results in more user happiness from a given feature, and also allows us to implement more features at a given level of happiness (appeasing salespeople).

[Figure: the curve shifted by user interaction improvements]

Application performance improvements don’t create as dramatic a shift (they don’t make the application easier to use). They do, however, make it more enjoyable for a given feature set – shifting the curve up.

[Figure: the curve shifted by application performance improvements]

Release Planning

We posted before about prioritizing requirements across releases. The initial release should focus effort 80/20 on must-be and surprise-and-delight requirements. After the first release, we should split effort 50/50 between surprise-and-delight and more-is-better requirements. This split of effort balances the goal of product differentiation (adding features) with the goal of user happiness (shifting the curve).
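
As a rough sketch of that split (only the 80/20 and 50/50 ratios come from this post; the category names follow the Kano list above, and the 100-unit effort budget is a made-up figure for illustration):

```python
# Hypothetical illustration of the effort splits described above.
FIRST_RELEASE_SPLIT = {"must_be": 0.8, "surprise_and_delight": 0.2}
LATER_RELEASE_SPLIT = {"surprise_and_delight": 0.5, "more_is_better": 0.5}

def allocate_effort(total_effort, split):
    """Divide a release's effort budget across Kano categories by share."""
    return {category: total_effort * share for category, share in split.items()}

print(allocate_effort(100, FIRST_RELEASE_SPLIT))
# {'must_be': 80.0, 'surprise_and_delight': 20.0}
print(allocate_effort(100, LATER_RELEASE_SPLIT))
# {'surprise_and_delight': 50.0, 'more_is_better': 50.0}
```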

Conclusion

We have to have a minimum set of features. Too many features is bad. The Kano approach helps us to pick the right requirements to prioritize. It also helps us change the shape of the curve for our software, allowing us to add more features while simultaneously increasing user satisfaction.

Thanks again to Michael and Kathy for their great contributions to this and other topics!

source post [tyner blain]