Concurrent Sessions

Sessions are offered on Tuesday and Wednesday at the conference and do not require pre-selection. Build your own custom learning schedule, or choose to follow one of our track schedules.

T1 Do You See What I See? NEW
Michael Bolton, DevelopSense
Tuesday, April 8, 2014 - 10:30am - 11:30am

Managing a software project is challenging. Managers often need to deal with overwhelming amounts of data and detail but they must also be able to see the big picture at a glance. What can testers do to help? Excellent testing involves composing, editing, narrating, and justifying a story about the product, about the testing, and about the quality of the testing. However, telling a good story doesn't just depend on the raw data; it depends on representing, organizing, and displaying information to illuminate key points, highlight trends, and show variation. Michael Bolton takes you on a tour of approaches to illustrating the testing story, designed to keep the entire project team informed of product status and testing activity, to foster discovery and productive conversation, and to help managers ask better questions and get better answers.

Learn more about Michael Bolton.
T2 Test Design Techniques in Exploratory Testing NEW
Gitte Ottosen, Sogeti Denmark
Tuesday, April 8, 2014 - 10:30am - 11:30am

Not all testers are lucky enough to get a good foundation for their testing—detailed requirements and system specifications. That, combined with the harsh reality of not having enough time, presents a challenge for the tester—How do I test to such an extent that I will be able to identify defects as early as possible and properly inform stakeholders about the quality of the product? Gitte Ottosen shares the approach she uses in agile projects—using a mind map to identify the structure of the system and its major workflows, and applying classic test design techniques and her experience to work in exploratory testing sessions. The result is documented as test notes, classification trees, process flow diagrams, defect reports, etc. Learn how a structured approach—using test design techniques in an exploratory approach—can help integrate continuous test into the agile lifecycle and ensure that the resulting system is fit for use.

Learn more about Gitte Ottosen.
T3 We’re Moving to Agile: What Are Our Testers Going to Do? NEW
Pradeepa Narayanaswamy
Tuesday, April 8, 2014 - 10:30am - 11:30am

As more and more organizations transition to agile, many still do not understand how testing fits into agile teams. Does it simply mean placing a tester on every team? Or does it mean doing away with the role of testers? Pradeepa Narayanaswamy explains the importance of working in cross-functional teams that integrate development and testing. Pradeepa shares her insights into the keys of agile testing including understanding the agile testing mindset and goals. She discusses the responsibilities of a tester in an agile team and describes the diverse skill sets required in those teams. Pradeepa also shares her ideas on how to manage defects, what to measure, and what to document. She concludes by describing what is NOT agile testing and debunks certain agile testing myths. Review these important basics and align your testing with concepts that may have been overlooked, forgotten, or misunderstood in your teams.

Learn more about Pradeepa Narayanaswamy.
T4 Billion Dollar Bugs: When and How to Test a Spreadsheet NEW
Gregory Pope, Lawrence Livermore National Labs
Tuesday, April 8, 2014 - 10:30am - 11:30am

The world has become increasingly dependent on computer-based models to make informed decisions. These models may be financial or engineering based and often are built with spreadsheets. We may not think of spreadsheets as software and therefore tend to overlook applying robust testing techniques to them. However, spreadsheet error rates are often ten times those found in rigorously tested commercial software. The consequences of spreadsheet failure can range from overspending the family budget, to designing a physical structure improperly, to accounting errors of all types, some costing billions of dollars. Spreadsheets have become so complex that spotting errors is no longer an easy or intuitive task. Spreadsheet users, who may not be domain or testing experts, may not be able to tell the difference between a reasonable answer and a bogus one. To improve the quality of your organization’s spreadsheets and the decisions based on them, Greg Pope shares spreadsheet defect prevention methods, some built-in auditing tricks, and manual and automated testing techniques.
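
To make the idea concrete, here is a minimal sketch (an editorial illustration, not material from the session) that checks a spreadsheet against an independent oracle using Python's openpyxl library. The workbook name and cell layout are assumptions:

    # Recompute a spreadsheet result independently and compare it with the
    # value the spreadsheet produced: a simple oracle for formula errors.
    # Assumes "budget.xlsx" holds costs in B2:B10 and their total in B11;
    # data_only=True reads the values Excel last calculated and saved.
    import openpyxl

    wb = openpyxl.load_workbook("budget.xlsx", data_only=True)
    ws = wb["Sheet1"]

    expected = sum(ws.cell(row=r, column=2).value or 0 for r in range(2, 11))
    actual = ws["B11"].value

    assert abs(actual - expected) < 0.01, (
        f"Spreadsheet total {actual} != independently computed total {expected}"
    )

The same pattern scales up: recompute every key result with independent code and flag any cell whose stored value disagrees.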

Learn more about Gregory Pope.
T5 Are Your Test Reports a Death Sentence? NEW
Nancy Kelln, FGL Sports, Ltd.
Tuesday, April 8, 2014 - 12:45pm - 1:45pm

Good software testers often interact more with people than with software, especially when reporting on the results of testing. Yet our industry provides little training or guidance on the social and psychological aspects of our jobs. There are times when the results testers deliver to project teams and stakeholders can be difficult to accept. It is fascinating to observe the reactions a negative testing message can provoke in people during the final, stressful phases of a project. Drawing on her experience with highly contentious test reporting and her background in psychology, Nancy Kelln discusses the psychological side of test reporting and examines the challenges of reporting difficult information to project stakeholders. Learn why test reporting problems are often people problems and how to understand the emotional reactions of project stakeholders when delivering less-than-ideal testing results.

Learn more about Nancy Kelln.
T6 CAN I USE THIS?—A Mnemonic for Usability Testing NEW
David Greenlees, Innodev Pty Ltd
Tuesday, April 8, 2014 - 12:45pm - 1:45pm

Often, usability testing does not receive the attention it deserves. A common argument is that usability issues are merely “training issues” that can be dealt with through the product's training or user manuals. If your product is only for internal staff use, this may be a valid response. However, the market now demands easy-to-use products—whether your users are internal or external. David Greenlees shares a tool he has developed to generate test ideas for usability testing. His mnemonic—CAN I USE THIS?—provides a solid starting point for testing any product: C for Comparable Product, A for Accessibility, N for Navigation … David shares how he has used this mnemonic on past projects while the training argument took place around him, and how those projects realized product improvements and greater user acceptance. Learn how you can quickly and effectively use this mnemonic on any project so you can give usability testing the attention it deserves.

Learn more about David Greenlees.
T7 A Tester’s Guide to Collaborating with Product Owners NEW
Bob Galen, Velocity Partners
Tuesday, April 8, 2014 - 12:45pm - 1:45pm

The role of the Product Owner in Scrum is only vaguely defined—owning the Product Backlog and representing the “customer.” In many organizations, Product Owners go it alone, trying their best to represent business needs to their teams. What’s often missing is a collaborative connection between the team’s testers and the Product Owner—a connection in which testers help to define and refine requirements, broaden the testing landscape and align it to customer needs, provide a conduit for collaboration between the customer and the team, assure that the team is building the right thing, and help demonstrate complete features. This relationship is central to the team and facilitates the transparency needed to gain feedback from the entire organization. Join seasoned agile coach Bob Galen as he shares techniques for doing just this. Return with new ideas and techniques for helping your Product Owner and team deliver better-received, higher-value products—not just by testing but by fostering collaboration.

Learn more about Bob Galen.
T8 Become a Big Data Quality Hero
Jason Rauen, LexisNexis
Tuesday, April 8, 2014 - 12:45pm - 1:45pm

Many believe that regression testing an application with minimal data is sufficient. However, the data testing methodology becomes far more complex with big data applications. Testing can now be done within the data fabrication process as well as in the data delivery process. Today, comprehensive testing is often mandated by regulatory agencies—and more importantly by customers. Finding issues before deployment and saving your company’s reputation—and in some cases preventing litigation—is critical. Jason Rauen presents an overview of the architecture, processes, techniques, and lessons learned at one of the original big data companies. Detecting defects up front is vital. Learn how to test thousands, millions, and in some cases billions—yes, billions—of records directly, rendering sampling procedures obsolete. See how you can save your organization time and money—and have better data test coverage than ever before.
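
As a hedged illustration of testing every record rather than a sample (not the speaker's actual architecture), the sketch below compares keyed hashes of a source extract and the delivered output. The file names and pipe-delimited layout are assumptions:

    # Compare every record end-to-end via keyed hashes instead of sampling.
    import hashlib

    def record_hashes(path):
        """Map each record's key (first field) to a hash of the whole record."""
        hashes = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                key = line.split("|", 1)[0]
                hashes[key] = hashlib.sha256(line.rstrip("\n").encode()).hexdigest()
        return hashes

    source = record_hashes("source_extract.txt")       # hypothetical files
    delivered = record_hashes("delivered_output.txt")

    missing = source.keys() - delivered.keys()
    changed = {k for k in source.keys() & delivered.keys()
               if source[k] != delivered[k]}
    print(f"{len(missing)} missing records, {len(changed)} altered records")

At billions of records, the same hash-and-compare step would run inside the cluster alongside the data, but the principle is identical.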

Learn more about Jason Rauen.
T9 Agile Test Management and Reporting—Even in a Non-Agile Project NEW
Paul Holland, Testing Thoughts
Tuesday, April 8, 2014 - 2:00pm - 3:00pm

Whether you have dedicated test teams or testers distributed over Scrum teams, you have the challenge of planning, tracking, and reporting their testing not only in a meaningful way but also in a way that can adapt to the rapidly changing environment of software development projects. Many commonly used planning methods do not allow for flexibility, and reporting often relies on horribly flawed metrics, including the number of test cases executed or the test pass percentage. Paul Holland explains a planning, tracking, and reporting method he developed during his last five years as a test manager at Alcatel-Lucent. Paul describes how he uses powerful “high-tech” tools like whiteboards and spreadsheets to create easy-to-understand visual representations of his group’s testing. Learn how you can create status reports that provide the details that upper management seeks. These status reports are effective in both waterfall and agile environments—and will stand up to management scrutiny.

Learn more about Paul Holland.
T10 Continuous Testing through Service Virtualization NEW
Al Wagner
Tuesday, April 8, 2014 - 2:00pm - 3:00pm

The demand to accelerate software delivery and for teams to continuously test and release high-quality software sooner has never been greater. However, whether your release strategy is based on schedule or quality, the entire delivery process hits the wall when agility stops at testing. When software or services that are part of the delivered system, or the environments required for testing, are unavailable, the entire team suffers. Al Wagner explains how to remove these testing interruptions, decrease project risk, and release higher quality software sooner. Using a real-life example, learn how service virtualization can be applied across the lifecycle to shift integration, functional, and performance testing to the left. Gain an understanding of how service virtualization can be incorporated into your automated build and deployment process, making continuous testing a reality for your organization. Learn what service virtualization can do for you and your stakeholders. The ROI is worth it!
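
The core idea can be shown in a few lines: stand up a stub that answers for the unavailable dependency. This is a minimal sketch using only Python's standard library; the endpoint and payload are invented, and real service virtualization tools add statefulness, latency modeling, and record-and-playback on top:

    # A minimal "virtual service" returning a canned response for a
    # dependency that is unavailable during testing.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VirtualQuoteService(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/quotes/12345":          # hypothetical endpoint
                body = json.dumps({"quoteId": "12345", "premium": 250.0}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Point the application under test at localhost:8080 instead of
        # the real dependency.
        HTTPServer(("localhost", 8080), VirtualQuoteService).serve_forever()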

Learn more about Allan Wagner.
T11 Balancing Exploratory and Automated Testing in Agile NEW
Matthew Attaway, Perforce Software
Tuesday, April 8, 2014 - 2:00pm - 3:00pm

Transitioning to agile development isn't any easier for the test team than it is for the development team. New features and fixes appear daily, priorities change fluidly as customer feedback shapes the backlog, and design documents are frequently sparse as developers use iterations to zero in on design solutions. On top of all this, with agile testing there is rarely time to build the giant spreadsheets of test cases that management loves to use for tracking testing progress. So, how do we handle the constant flood of change, balance exploratory testing and automated testing—and communicate progress to senior management? Matt Attaway shares his experiences developing a successful lightweight agile testing process that helped his team move from testing two waterfall projects at once to six agile projects simultaneously. Matt discusses lightweight test plan development, the use of BDD tools such as Cucumber, and methods for measuring testing progress.
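
The session names Cucumber; as a hedged illustration of the same Given/When/Then style, here is a sketch using Python's behave library. The feature text and the step helpers (create_test_user, sign_in) are hypothetical:

    # features/login.feature would contain:
    #   Feature: Login
    #     Scenario: Valid user signs in
    #       Given a registered user "alice"
    #       When she signs in with the correct password
    #       Then she sees her dashboard
    #
    # features/steps/login_steps.py:
    from behave import given, when, then

    @given('a registered user "{name}"')
    def step_registered_user(context, name):
        context.user = create_test_user(name)   # hypothetical test helper

    @when("she signs in with the correct password")
    def step_sign_in(context):
        context.page = sign_in(context.user)    # hypothetical app driver

    @then("she sees her dashboard")
    def step_sees_dashboard(context):
        assert context.page.title == "Dashboard"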

Learn more about Matthew Attaway.
T12 How Did I Miss That Bug? Managing Cognitive Bias in Testing NEW
Gerie Owen, Northeast Utilities, Inc.
Peter Varhol, Telerik
Tuesday, April 8, 2014 - 2:00pm - 3:00pm

How many bugs have you missed that were obvious to others? We all approach testing hampered by our own biases. Understanding our biases—preconceived notions and the ability to focus our attention—is key to effective test design, test execution, and defect detection. Gerie Owen and Peter Varhol share an understanding of how the testers’ mindsets and cognitive biases influence their testing. Using principles from the social sciences, Gerie and Peter demonstrate that you aren’t as smart as you think you are. They show how to use knowledge of biases—Inattentional Blindness, Representative Bias, the Curse of Knowledge, and others—not only to understand the impact of cognitive bias on testing but also to improve your individual and test team results. Finally, Gerie and Peter provide tips for managing your biases and focusing your attention in the right places throughout the test process so you won’t miss that obvious bug.

Learn more about Gerie Owen and Peter Varhol.
W1 When Testers Feel Left Out in the Cold NEW
Hans Buwalda, LogiGear
Wednesday, April 9, 2014 - 10:30am - 11:30am

When you're responsible for testing, it's almost a given that you will find yourself in a situation in which you feel alone and out in the cold. Management commitment for testing might be lacking, your colleagues in the project might be ignoring you, your team members might lack motivation, or the automated testing you had planned is more complicated and difficult than you anticipated. You feel you can't test enough, and you will be blamed for post-release quality problems. Hans Buwalda shares a number of typical chilly situations and offers suggestions for overcoming them, based on his experiences worldwide in large projects. Specifically, Hans focuses on management commitment, politics, project dependencies, managing expectations, motivating team members, testing and automation difficulties, and dealing with overwhelming numbers of day-to-day problems. Take away more than forty-five tips and approaches to use when temperatures drop on you.

Learn more about Hans Buwalda.
W2 A Guide to Cross-Browser Functional Testing NEW
Malcolm Isaacs
Wednesday, April 9, 2014 - 10:30am - 11:30am

The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing the portable, maintainable automated test scripts that are at the heart of cross-browser testing. Learn some practical approaches to overcoming these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.
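
The portability challenge often reduces to running one script against many browsers. Here is a minimal sketch with Selenium WebDriver and pytest (the URL is a placeholder, and local browser drivers are assumed to be installed):

    # Run the same functional test across multiple browsers.
    import pytest
    from selenium import webdriver

    BROWSERS = {
        "firefox": webdriver.Firefox,
        "chrome": webdriver.Chrome,
    }

    @pytest.fixture(params=sorted(BROWSERS))
    def browser(request):
        driver = BROWSERS[request.param]()   # one run per browser
        yield driver
        driver.quit()

    def test_home_page_title(browser):
        browser.get("http://example.com/")
        assert "Example Domain" in browser.title

Keeping locators and assertions browser-neutral, as here, is what makes the same script maintainable across engines.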

Learn more about Malcolm Isaacs.
W3 Test Automation Patterns NEW
Dorothy Graham, Independent Test Consultant
Wednesday, April 9, 2014 - 10:30am - 11:30am

When implementing test automation, many people encounter problems: where to start with automation, high maintenance costs for the automated tests, or unrealistic management expectations. The good news is that solutions to these problems exist and have been used effectively by many. A “pattern” is a general, reusable solution to a commonly occurring problem. Patterns have been popular in software development for many years, but they are not commonly recognized in system-level test automation. Dorothy Graham shares a collection of common problems (issues) and their solutions (patterns), which she and others are now developing as a wiki. To help resolve typical issues, Dot gives you a brief guided tour of some patterns—from Maintainable Testware and Domain-Driven Testing to Fail Gracefully and Kill the Zombies. Dot helps you recognize test automation issues and shows you how to identify appropriate patterns to help solve them.

Learn more about Dorothy Graham.
W4 “It’s All About Me” (IAAM)—Or Maybe It Isn’t NEW
Steven “Doc” List, Santeon Group
Wednesday, April 9, 2014 - 10:30am - 11:30am

What we really know about other people is their behavior, their words, and their body language. But we assume a great deal more about what's going on in their heads. We behave as though our assumptions are both Valid and True. This frequently leads not only to misunderstandings but also to friction, frustration, and falling out. However, our assumptions could be INvalid and UNtrue! “Doc” List leads you through the basics of an approach called IAAM (“It's All About Me!”) and the implications IAAM has for your everyday professional and personal life. Through an entertaining and stimulating presentation combined with role play, participant interaction, laughter, and conversation, discover assumptions you have been making, share insights with others, and develop a new mindset. Discover a new understanding, a new way of seeing and hearing, and change the way YOU behave!

Learn more about Steven “Doc” List.
W5 Why Classic Software Testing Doesn’t Work Anymore NEW
Regg Struyk, Polarion Software
Wednesday, April 9, 2014 - 12:45pm - 1:45pm

The classic software testing team is becoming increasingly obsolete. Traditional processes and tools just don’t meet today’s testing challenges. With the introduction of methodologies such as agile, testing processes with a "test last" philosophy cannot succeed in a rapid deployment environment. To exacerbate our testing difficulties, we now have to deal with "big data," which introduces an entirely new set of problems. In the past, we have relied on tools such as test automation to solve these problems; however, classic test automation simply will not suffice on its own and must be integrated with the right testing activities and supported by correct procedures. When you combine these problems with inadequately defined requirements and limited resources, you have a recipe for testing disaster. Regg Struyk shares real-world examples and offers constructive ways to move away from traditional testing methods to a more integrated process using concepts such as test-driven development and TestOps.

Learn more about Regg Struyk.
W6 Security Testing Mobile Applications
Jeff Payne, Coveros, Inc.
Wednesday, April 9, 2014 - 12:45pm - 1:45pm

Due to the sensitive nature of the personal information often stored on mobile phones, security testing is vital when building mobile applications. Jeff Payne discusses some of the characteristics that make testing mobile applications unique and challenging. These characteristics include how mobile devices store data, fluid trust boundaries due to untrusted applications installed on the device, different and unique aspects of device security models, and differences in the types of threats one must be concerned with. Jeff shares hints and tips for effectively testing mobile applications. Tips include how to test for data privacy, secure session management, the presence of malicious applications, and traditional application security vulnerabilities. Leave with an understanding of what it takes to security test your mobile applications.
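
One data-privacy check of the kind the session describes can be sketched as automation: pull an Android app's local files over adb and scan them for plaintext secrets. This is an editorial illustration; the package name and keywords are hypothetical, and the app must be a debuggable build:

    import subprocess

    PACKAGE = "com.example.bankapp"             # hypothetical app under test
    KEYWORDS = [b"password", b"ssn", b"token"]  # should never sit on disk in clear

    files = subprocess.run(
        ["adb", "shell", "run-as", PACKAGE, "ls", "shared_prefs"],
        capture_output=True, check=True,
    ).stdout.split()

    for name in files:
        content = subprocess.run(
            ["adb", "shell", "run-as", PACKAGE, "cat",
             f"shared_prefs/{name.decode()}"],
            capture_output=True, check=True,
        ).stdout.lower()
        hits = [k.decode() for k in KEYWORDS if k in content]
        if hits:
            print(f"Possible plaintext secrets in {name.decode()}: {hits}")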

Learn more about Jeff Payne.
W7 Test Automation—It’s a Journey, Not a Project NEW
Paul Maddison, The CUMIS Group
Wednesday, April 9, 2014 - 12:45pm - 1:45pm

Organizations implement automated testing in hopes of reducing time and cost while creating higher quality products. They invest in tools, training, and development; identify candidates for automation; and then develop and execute test scripts. Unfortunately, some of these projects are considered failures because the promised savings are not realized. Funding for future automation efforts may be reduced, or automation projects may be cancelled outright. Management support, effective test design, sound test data management, low maintenance requirements, and meaningful reports are key to successful, cost-effective automation. Paul Maddison shares how CUMIS has invested more than ten years in test automation—continually growing their investment, reducing costs, and shrinking time-to-market. Paul details their approaches for script design, data management, test candidate selection, and effective reporting to management. Learn approaches that can mitigate challenges and give you a higher return on investment—today and well into the future.

Learn more about Paul Maddison.
W8 How Metrics Programs Can Destroy Your Soul NEW
Scott Barber, SmartBear
Wednesday, April 9, 2014 - 12:45pm - 1:45pm

Testers are often evaluated by metrics that don’t really quantify the value of their work. Metrics such as tests planned, tests executed, coverage achieved, and defects reported all measure effort rather than results. Since people generally want to meet metrics goals, measurements that focus on activity rather than effectiveness often encourage unintended behaviors. Since the true value of testers lies in their ability to analyze and communicate risks and impacts, we must change the focus of metrics from numbers to insights. Scott Barber shares what stakeholders are really looking for when they request specific metrics, how the metrics they request frequently fail, and how to help your organization create metrics that do provide real insight. Discover the tools you need to explain what can be measured, what those measurements mean, and how to combine measurements into metrics that tell insightful stories about your testing.

Learn more about Scott Barber.
W9 Test Managers: Stop Managing and Start Mastering NEW
Silvio Moser, SwissQ Consulting
Wednesday, April 9, 2014 - 2:00pm - 3:00pm

To be successful, test managers must keep pace with the constantly changing world of software development. The test manager’s job description—planning, supervising, and reporting the activities of the test process, with a focus on functional black-box tests—has remained virtually unchanged over the past decade. Meanwhile, the job requirements have changed dramatically—fast release cycles leading to widespread adoption of agile methodologies; increased security, performance, and usability requirements; and myriad smart devices in users' hands. And these are just the most obvious. Silvio Moser explains how these trends change the way we develop and test software, and describes strategies for tackling these challenges. Test managers must adapt to a new test management curriculum and a transformed role—the test master. While the classic test manager is mainly organizing and controlling, the test master acts as a mediator, moderator, and problem solver. Learn to stop managing and start mastering.

Learn more about Silvio Moser.
W10 Continuous Performance Testing: The New Standard NEW
Obbie Pet, Ticketmaster
Wednesday, April 9, 2014 - 2:00pm - 3:00pm

In the past several years, the software development lifecycle has changed significantly with high-speed software releases, shared application services, and platform virtualization. The traditional performance assurance approach of pre-release testing does not address these innovations. To maintain confidence in acceptable performance in production, pre-release testing must be augmented with in-production performance monitoring. Obbie Pet describes three types of monitors—performance, resource, and VM platform—and three critical metrics fundamental to isolating performance problems—response time, transaction rate, and error rate. Obbie reviews techniques to acquire and interpret these metrics, and describes how to develop a continuous performance monitoring process. In conjunction with pre-release testing, this monitoring can be woven into a single integrated process, offering your best bet for assuring performance in today’s development world. Take away this integrated process for consideration in your own shop.
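
The three critical metrics are straightforward to compute once transaction results are captured. A small sketch follows; the Result structure is an assumption for illustration:

    # Response time, transaction rate, and error rate from one window
    # of monitored transactions.
    from dataclasses import dataclass

    @dataclass
    class Result:
        elapsed_s: float   # response time of one transaction
        ok: bool           # completed without error?

    def summarize(results, window_s):
        times = sorted(r.elapsed_s for r in results)
        return {
            "avg_response_s": sum(times) / len(times),
            "p95_response_s": times[int(0.95 * (len(times) - 1))],
            "transactions_per_s": len(results) / window_s,
            "error_rate": sum(1 for r in results if not r.ok) / len(results),
        }

    sample = [Result(0.21, True), Result(0.34, True), Result(1.90, False)]
    print(summarize(sample, window_s=60))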

Learn more about Obbie Pet.
W11 Getting Started with Open Source Testing Tools NEW
Marcus Merrell, RetailMeNot, Inc.
Wednesday, April 9, 2014 - 2:00pm - 3:00pm

In the not-too-distant past, the only viable options for testing complicated web-based applications were commercial (i.e., expensive) tools. These tools were well designed but difficult to scale without significant investment in both human capital and licensing costs. Now, a number of open-source tools are available, allowing for rich, robust, expressive testing against applications as complicated as any in the world—and they’re free. However, your savings in licensing and support fees can potentially be eclipsed by the cost of maintaining a team of developers to support these "free" tools. But as the tools progress, the industry is starting to shift toward open-source test frameworks to help manage these tools, allowing their support and maintenance to be done through the “community.” This leaves your team with only the challenge of modeling your application. Marcus Merrell presents one such framework, which allows for quick modeling and implementation of a robust, low-maintenance test suite that requires minimal Java skills.
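
As a generic illustration of "modeling your application" (the framework Marcus presents is Java-based and is not reproduced here), here is a page-object sketch in Python with Selenium; the URL and locators are invented:

    # Locators and actions live in the page model, not in the tests, so
    # UI changes are absorbed in one place.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("http://example.com/login")   # placeholder URL
            return self

        def sign_in(self, user, password):
            self.driver.find_element(By.ID, "user").send_keys(user)
            self.driver.find_element(By.ID, "pass").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()

    driver = webdriver.Firefox()
    LoginPage(driver).open().sign_in("alice", "secret")
    driver.quit()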

Learn more about Marcus Merrell.
W12 Rapid and Intelligent Test Engineering
Mathi Natarajan, Cognizant
Wednesday, April 9, 2014 - 2:00pm - 3:00pm

Today’s IT plans tend to be defined as much by constraints as by requirements. Conventionally, the testing phase starts after the design and build phase. In today’s competitive and cost-conscious business environment, companies look to adopt different test strategies to balance the four key constraints of cost, quality, speed, and risk. In the process, quick often wins, and short-term solutions end up causing more problems than they solve. As a testing framework for such scenarios, Cognizant has introduced RITE—comprising a lean and fast delivery strategy, just-in-time scheduling, and QA trend analysis—which results in greater value generation. The RITE framework—with its four foundational components of kanban, pattern-based testing, speed, and business value testing—fits squarely into today’s IT scenario of continuous improvement.

Learn more about Mathi Natarajan.
W13 Building a Testing Center of Excellence: Experiences, Insights, and Failures NEW
Krishna Iyer, Zen Test Labs
Mukesh Mulchandani, Zen Test Labs
Wednesday, April 9, 2014 - 3:15pm - 4:15pm

As organizations mature in their testing processes, the perennial quest to achieve ultimate excellence has led many to establish a Testing Center of Excellence (TCoE). Many of these initiatives have been plagued with issues ranging from partial implementation to complete abandonment midway. Additionally, most TCoE initiatives meet heavy resistance and inertia from testing teams who perceive them as a threat to their independence and way of working. At the heart of these issues lie misalignment with business goals, faulty ROI analysis prior to investing, poor communication, and an incorrect organizational model. Drawing from their experience consulting with organizations on TCoE initiatives, Krishna Iyer and Mukesh Mulchandani share insights, experiences, and lessons learned from both their successes and failures. Learn how to go about creating your own TCoE while overcoming the common—and not so common—challenges you will face along the way. Draw on their experience to troubleshoot some of your unique problems.

Learn more about Krishna Iyer and Mukesh Mulchandani.
W14 Techniques for Agile Performance Testing NEW
Jim Hirschauer, AppDynamics
Wednesday, April 9, 2014 - 3:15pm - 4:15pm

The performance of your application affects your business more than you might think. Top engineering organizations think of performance as not just a nice-to-have feature but as a crucial feature of their product. Those organizations understand that performance has a direct impact on user experience and, ultimately, their bottom line. Unfortunately, most engineering teams do not regularly test the performance and scalability of their infrastructure. Jim Hirschauer shares the latest performance testing tools and insights into why your team should add performance testing to an agile development process. Learn how to evaluate performance and scalability with MultiMechanize, Bees with Machine Guns, and Google PageSpeed. Take back an understanding of how to automate performance testing and evaluate the impact it has on performance and on your business.
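
To show what one of these tools expects, here is a sketch of a MultiMechanize test script following the tool's documented convention of a Transaction class with a run() method. The target URL is a placeholder, and the sketch uses Python 3's urllib for readability (multi-mechanize itself is Python 2 era, where urllib2 would be used):

    import time
    import urllib.request

    class Transaction:
        def __init__(self):
            self.custom_timers = {}   # named timings the tool aggregates

        def run(self):
            start = time.time()
            resp = urllib.request.urlopen("http://example.com/")  # placeholder
            assert resp.status == 200
            self.custom_timers["Home_Page"] = time.time() - start

    if __name__ == "__main__":
        t = Transaction()   # the load tool calls run() repeatedly, in parallel
        t.run()
        print(t.custom_timers)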

Learn more about Jim Hirschauer.
W15 Test Automation for Mobile Applications: A Practical Guide NEW
Kunal Chauhan, QA InfoTech
Wednesday, April 9, 2014 - 3:15pm - 4:15pm

The world of information technology is undergoing revolutionary changes. Advancements in mobile computing, fueled by mobile applications, are playing an important role in driving these changes. While developers build their technical skills to accommodate these evolving trends, it is equally important for testers to understand what it takes to test mobile applications. Testers must understand the scope of mobile device applications testing, whether automation is feasible, and what challenges will face the test team. Kunal Chauhan presents an optimized approach to testing smart devices, specifically focusing on mobile applications test automation, the various forms of applications (web, native, hybrid), and the tools available to assist in the automation process. Kunal demonstrates an automation framework using open source tools, providing a practical implementable solution to add to your mobile test automation toolkit.
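
As one plausible open source toolchain (the specific framework Kunal demonstrates may differ), here is a starting point using Appium and its Python client; the .apk path and element id are placeholders:

    # Drive a native Android app through an Appium server on localhost:4723.
    # Requires the Appium-Python-Client package; uses the era's
    # find_element_by_id style of the client API.
    from appium import webdriver

    caps = {
        "platformName": "Android",
        "deviceName": "test-device",
        "app": "/path/to/app-under-test.apk",   # placeholder
    }

    driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
    try:
        driver.find_element_by_id("com.example:id/login_button").click()
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()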

Learn more about Kunal Chauhan.
W16 Creating a Better Testing Future: The World Is Changing and We Must Change With It NEW
Lee Copeland, Software Quality Engineering
Wednesday, April 9, 2014 - 3:15pm - 4:15pm

The IEEE 829 Test Documentation standard is thirty years old this year. Boris Beizer’s first book on software testing also turned thirty. Testing Computer Software, the best-selling book on software testing, is twenty-five. During the last three decades, hardware platforms have evolved from mainframes to minis to desktops to laptops to tablets to smartphones. Development paradigms have shifted from waterfall to agile. Consumers expect more functionality, demand higher quality, and are less loyal to brands. The world has changed dramatically, and testing must change to match it. Testing processes that helped us succeed in the past may prevent our success in the future. Lee Copeland shares his insights into the future of testing, offering his Do’s and Don’ts in the areas of technology, organization, test processes, test plans, and automation. Join Lee for a thought-provoking look at creating a better testing future.

Learn more about Lee Copeland.