The data tsunami is coming, or maybe it's already here. Data science, big data, and machine learning are the buzzwords of the day. Data is changing our products and the way we build them, so we should also change the way we verify those products. In a world of increasing connectivity and accelerating deadlines, data can provide an edge. But what role should data play in assessing the quality of software? Where does it make sense to use data, and where is it inappropriate? Steve Rowe covers the history of how data fits into testing, explains why data is an important tool to have in your quality...
Steve Rowe
Steve Rowe is a graduate of the University of Illinois and a twenty-year veteran at Microsoft. He spent most of that time in the testing and quality organizations around Windows, where he worked on multimedia: helping bring DVD playback to the PC, introducing Media Player, redesigning Windows audio, and working on hardware-accelerated video playback. After that, he helped create the Windows Runtime, the application and API model used by Universal Windows Applications. In the past several years, Steve has focused on using data science to understand product quality. He was heavily involved in the Windows team's shift from automated testing to assessing quality through telemetry. Steve's blog can be found at http://steverowe.net.