From: rhine (System analysis), Board: Java
Subject: A J2EE Testing Primer
Posted at: 哈工大紫丁香 BBS (Wednesday, July 18, 2001, 21:23:23), local post
A J2EE Testing Primer (May 2001)
It takes a wide range of tools and techniques to evaluate the quality of
distributed software.
by Scott W. Ambler
It isn't enough to simply develop software—you need to build software
that works. And there's only one way to prove that your software works:
test it. Due to the inherent complexity of software developed with
Java, particularly software based on the Java 2 Enterprise Edition
(J2EE) platform, testing often proves more difficult than it first
appears. With J2EE, you're developing logic using a wide range of
technologies including Java Server Pages (JSPs), servlets, Enterprise
JavaBeans (EJBs) and relational databases—therefore, you need to
apply a wide range of testing techniques and tools. This month, I
explore several best practices for testing J2EE-based software and
describe how to organize your environment to support effective
software testing.
EJB Testing Best Practices
First, it's critical that your project team and stakeholders
understand that J2EE testing is difficult. In "Object Testing
Patterns" (Thinking Objectively, July 1999), I described several process
patterns for testing object-oriented software and summarized a
collection of techniques, such as inheritance regression testing and
function testing, applicable to the testing of J2EE software. That
article was just the tip of the iceberg, as Robert Binder's 1,190-page
book, Testing Object-Oriented Systems: Models, Patterns and Tools
(Addison Wesley, 2000) suggests. There is a wide range of testing
techniques at your disposal, techniques that you must understand in order
to apply them appropriately.
Second, J2EE presents unique testing challenges. J2EE software is
typically distributed across several types of logical components such as
firewalls, Web servers, EJB application servers and database servers.
The logical components are then distributed onto physical processors
that are often organized into farms of machines for scalability and
performance purposes. Organizations new to J2EE are often also neophytes at
testing distributed software and may not be prepared to handle the task. To
test distributed software, you need tools and techniques that enable
you to run both single-machine and cross-machine tests. For example, a
typical end-to-end test may originate in a browser and connect to a
Web server to access a servlet, which interacts with session beans
that may access the database through Java Database Connectivity (JDBC),
and/or interacts with entity beans that access the database through the
EJB persistence container. The beans produce a result that the
servlet passes to a JSP to produce HTML that can be displayed in the
browser. Whew! To make matters worse, many aspects of J2EE are
encapsulated (for example, the internal workings of your EJB persistence
container); therefore, you're often limited to black box testing
techniques. This can make finding the true source of defects a
challenge—sometimes requiring excruciating effort to determine a
bug's cause. Furthermore, Web-based applications present a novel set
of security challenges to watch out for.
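As a rough sketch of what such an end-to-end test can look like in code, the
JUnit-style test below drives the whole chain the way a browser would, over
HTTP, and treats everything behind the Web server as a black box. The URL,
servlet and expected page fragment are hypothetical placeholders, and only the
standard java.net API and JUnit are assumed:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import junit.framework.TestCase;

// A minimal sketch of a black-box, end-to-end test: it exercises the servlet,
// session/entity beans and database the same way a browser would, through the
// Web server. The URL and expected page fragment are hypothetical.
public class CustomerPageEndToEndTest extends TestCase {
    public void testCustomerPageRenders() throws Exception {
        URL url = new URL("http://staging.example.com/app/customers?id=42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            // The servlet should answer with HTTP 200 ...
            assertEquals(200, conn.getResponseCode());

            // ... and the JSP-produced HTML should contain the expected data.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            StringBuffer page = new StringBuffer();
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line);
            }
            in.close();
            assertTrue(page.toString().indexOf("Customer 42") >= 0);
        } finally {
            conn.disconnect();
        }
    }
}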
Another critical best practice for J2EE testing is the automation of
your regression tests. When you take an iterative approach to software
development (the most common way to develop J2EE-based software), it's
critical to prove that previous functionality still works after you've
made changes (the focus of regression testing). Without the ability to
press a button and rerun all tests against the system, a developer can
never feel safe making a change to his code.
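One simple way to get that push-button capability, assuming your tests are
written with JUnit, is to collect them into a single suite that one command
reruns after every change; the individual test class names here are
hypothetical:

import junit.framework.Test;
import junit.framework.TestSuite;

// A sketch of a single push-button regression suite: every test class the
// team writes gets added here, so one command reruns them all.
// CustomerBeanTest and OrderServletTest are hypothetical test classes.
public class AllTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("Full regression suite");
        suite.addTest(new TestSuite(CustomerBeanTest.class));
        suite.addTest(new TestSuite(OrderServletTest.class));
        return suite;
    }

    public static void main(String[] args) {
        // Rerun the entire suite with: java AllTests
        junit.textui.TestRunner.run(suite());
    }
}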
Anything the team can build, it can test: requirements, design models, user
documentation and source code. After all, if
something isn't worth testing, it probably isn't worth building. Your
testing process must address more than source code testing—you can
and should test all important project artifacts.
An important best practice is to recognize that silence isn't golden.
Your goal is to identify potential defects, not to cover them up. Good
tests find defects: Tests that don't find any defects may well have
failed.
Your team should test often and test early. First, because most mistakes
are made early in the life of a project: Developers tend to make more
errors when gathering and analyzing requirements than when designing and
writing source code. Second, the cost of fixing defects increases
exponentially the later they are found. This happens because of the
nature of software development—work is performed based on previous
work. For example, your code is based on your models, which in turn
are based on your requirements. If a requirement was misunderstood,
all modeling decisions based on that requirement are potentially
invalid, and all code based on the models is also in question. If
defects are detected during requirements definition, where they are
likely to be made, then they will probably be much less expensive to
fix—you have only to update your requirements model. For best results,
you need to test throughout the project's entire life cycle.
Also consider writing your testing code before you write the "real"
code, as developers following Extreme Programming (XP) techniques do.
Writing the test first forces you to think about what you're developing, to
ensure that it meets the actual requirements and that it can, in fact, be
tested.
You'll have to write the test eventually, so you might as well
benefit from its side effects.
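As a sketch of what test-first development looks like in practice, the
following JUnit test could be written before the class it exercises even
exists; InvoiceCalculator, its method and the 7 percent tax rule are
hypothetical stand-ins for whatever the real requirement specifies:

import junit.framework.TestCase;

// A test-first sketch: this test is written before InvoiceCalculator exists,
// so it won't compile until the class is created to satisfy it. The class,
// method and tax rate are hypothetical placeholders for a real requirement.
public class InvoiceCalculatorTest extends TestCase {
    public void testTotalIncludesSevenPercentTax() {
        InvoiceCalculator calc = new InvoiceCalculator();
        // A 100.00 subtotal with 7% tax should yield a 107.00 total.
        assertEquals(107.00, calc.totalWithTax(100.00), 0.001);
    }
}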
Finally, plan for rework. Testing your system means little if you
don't intend to make repairs. Include time in your project plan to
rework your system before it's delivered to your users. Too many project
teams fail to plan for this and, as a result, schedules slip.
Figure 1. How to Test Different Development and Production
Environments.
Solve the problem of differing development and production environments
by setting up three distinct technical environments: a development area,
a staging area and a production area.
Java Testing Environment
In addition to an effective testing tool suite, you also need a software
environment that reflects the realities of testing. For example, in
many organizations, it's quite common for development environments to
differ from production environments: Perhaps developers are working on
Intel-based machines running Windows NT, whereas the production
environment is made up of Sun servers running Solaris. This raises the
question of how to organize your work to reflect this reality, particularly
with regard to testing. Figure 1 presents a common solution to this
problem, depicting three distinct technical environments: a
development area, a staging area and a production area.
Programmers do the bulk of their work in the development area, writing
and unit testing software on their personal workstations and then
integrating their results with the work of their teammates within a
shared integration environment. This environment typically consists of
one or more machines to which programmers deploy their work on a regular
basis (often daily or even hourly) to perform integration testing. A
common development best practice is to continuously integrate your
source code—a core XP practice that requires an integration environment
to support continuous integration testing of code.
Your staging area serves as a testing ground for software that will be
released into production. This enables development teams to determine
how a system is likely to work within your production environment
without putting actual production systems at risk: Your system may
work fine on its own, but it could have adverse effects on existing
systems, such as the corruption of shared data or the reduction of the
runtime performance of higher-priority applications.
Ideally, your staging area should be an exact replica of your production
environment, although it's often a reduced version due to replication
costs. For example, your production environment may be a cluster of 50
Sun servers, whereas your staging area is a cluster of three Sun
servers. The important thing is to provide a hardware and software
environment that is as close as possible to your production environment.
Your production environment is where your systems run to support the
day-to-day business of your organization. It's typically the domain of
your organization's operations department. One of your operations
department's primary goals is to ensure that no software is deployed
into production until it's ready. To move your system into production,
your project team will often have a long list of quality gates that it
must pass through, such as proper testing in your staging area.
So, how do you use these environments effectively? Throughout an
iteration, your developers will do their work in the development
environment. Toward the end of an iteration, they'll schedule a minor
release of their system into the staging area. It's important to
understand your organization's processes and procedures for releasing
software into the staging area because the area is typically shared
among all project teams; the managers of the staging area also need to
ensure that each team gets fair access to the environment without
adversely affecting the other teams' efforts. Finally, once your project has
passed its final "testing in the large" efforts, you'll take your release one
step further and move it from the staging area into production.
J2EE Testing Is Difficult
At the best of times, software testing is hard. Testing object-oriented
software often proves more difficult than testing structured and procedural
software because object technology is used to address more complex problem
spaces. Distributed technology is harder to test than
nondistributed technology because of the additional complexities
inherent in multinode deployment. J2EE is a distributed, object-oriented
software development platform, which means that it's one of the most
difficult platforms to test. To be successful, you need to use many,
if not all, of the best practices that I've described, and to adopt a
productive software environment that supports the realities of
developing, testing and then releasing software into production.
Fundamental Testing Concepts
Get back to basics with these essential tips
Although the focus here is on J2EE, these fundamental concepts of
software testing are independent of your development environment:
Test throughout the project's entire life cycle.
Develop your tests before you develop an artifact.
Test all artifacts.
Test continuously.
Determine the specific cause of a defect.
Do more, not less, testing of objects.
Make the goal of testing to find bugs, not cover them up.
Automate your regression tests.
Invest in simple, effective testing tools.
Have separate personal, integration and staging areas.
—Scott W. Ambler
Java Testing Tools
Try out these tools to reduce your test anxiety
Testing is a difficult process, but it can be eased by purchasing one or
more testing tools; luckily, there is a wide variety available for
Java-based software:

Bean-test (http://www.testmybeans.com/) by RSW Software performs
scalability (load and stress) testing on EJB applications.

EJBQuickTest (http://www.ejbquick.com/) simulates method invocations by
clients of your EJB application, supporting regression testing, generation
of test data, and performance and stress testing.

Man Machine Systems' (http://www.mmsindia.com/) JStyle critiques the quality
of your Java source code, including the generation of code metrics.

Parasoft's (http://www.parasoft.com/) JTest supports sophisticated code
testing and quality assurance validation for Java, including white box,
black box, regression and coding standards enforcement.

Sitraka Software's (http://www.klgroup.com/) JProbe is a profiler and memory
debugger for Java code, including a server-side version for EJB and a
client-side version for ordinary Java code.

JUnit (http://www.junit.org/), a favorite among XP practitioners, is an open
source framework for unit- and code-testing Java code that enables you to
easily implement continuous code testing on your project.
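To give a feel for the framework, here is a minimal JUnit 3.x-style sketch of
a database-touching test that uses setUp() and tearDown() fixtures; the JDBC
driver, connection URL, credentials and CUSTOMER table are hypothetical
placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import junit.framework.TestCase;

// A minimal JUnit 3.x sketch of a database-touching test. setUp() and
// tearDown() give every test method a fresh connection. The driver, URL,
// credentials and CUSTOMER table are hypothetical placeholders.
public class CustomerTableTest extends TestCase {
    private Connection conn;

    protected void setUp() throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        conn = DriverManager.getConnection("jdbc:hsqldb:testdb", "sa", "");
    }

    protected void tearDown() throws Exception {
        conn.close();
    }

    public void testCustomerLookupFindsSeededRow() throws Exception {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT NAME FROM CUSTOMER WHERE ID = 42");
        assertTrue("expected a row for customer 42", rs.next());
        assertEquals("Acme Inc.", rs.getString("NAME"));
        rs.close();
        stmt.close();
    }
}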
Which tools should you adopt? The Extreme Modeling (XM) methodology
(Thinking Objectively, Nov. 2000 and Apr. 2001) provides several
insights to guide your tool selection efforts. First, use the simplest
tools that will do the job. Simpler tools require less training and less
effort to work with, and are often less expensive to purchase and
install. Second, adopt tools that provide value. A tool should reduce the
overall effort a task requires; if it doesn't, it detracts from your project
and your organization. Do the least work possible to finish
the job, so you can focus on the myriad remaining tasks necessary to
deliver your project to your users. To find out more about XM, visit
http://www.extreme-modeling.com/. If you want to add your two cents'
worth to the discussion, you can join the XM mailing list
(www.extreme-modeling.com/feedback.html).
—Scott W. Ambler
--
The sea admits a hundred rivers;
its capacity makes it great.
The cliff stands a thousand feet;
free of desire, it stands firm.
※ Source: ·哈工大紫丁香 bbs.hit.edu.cn·[FROM: dip.hit.edu.cn]