Speakers

Alberto Savoia, Google
The way most software is designed, developed, and launched has changed dramatically over the last decade – but what about testing? Alberto Savoia believes that software testing as we knew it is dead – or at least moribund – in which case we should stick a fork in it and proactively take it out of its misery for good. In this opening keynote of biblical scope, Alberto will cast stones at the old test-mentality and will try his darnedest to agitate you and convince you that these days most testers should follow a new test-mentality, one which includes shifting their focus and priority from “Are we building it right?” to “Are we building the right it?” The subtitle of this year’s GTAC is “cloudy with a chance of tests,” and if anyone can gather the clouds into a hurricane, it's Alberto – it might be wise to bring your umbrella.

Alberto Savoia is Director of Engineering and Innovation Agitator at Google. In addition to leading several major product development efforts (including the launch of Google AdWords), Alberto has been a lifelong believer, champion, innovator and entrepreneur in the area of developer testing and test automation tools. He is a frequent keynote speaker and the author of many articles on testing, including the classic booklet “The Way of Testivus” and “Beautiful Tests” in O’Reilly’s Beautiful Code.  His work in software development tools has won him several awards including the 2005 Wall Street Journal Technical Innovator Award, InfoWorld’s Technology of the Year award, and no fewer than four Software Development Magazine Jolt Awards.



Andre Arcilla, Yahoo!
Hadoop is the distributed technology that powers Yahoo! grids. Due to the complex nature of the Hadoop ecosystem, deploying a Hadoop stack requires significant investments of time and engineering expertise. The Hadoop engineering team came up with a solution for automatic deployment and validation of software stacks on a small computational footprint, enabling wider use of the technology for running small-scale trials and for using and testing Hadoop-based products in realistic environments. This talk presents HIT, a Hadoop deployment and integration test environment that enables rapid, automated Hadoop deployment.


Andre Arcilla is the Hadoop Integration Architect at Yahoo!, focused on assembly and deployment of the Hadoop ecosystem. His previous experience includes architecting and building scalable utility and distributed enterprise solutions.






Roy Williams, Google
WebGL will enable developers to create richer web applications than ever before by giving them full access to the GPU on their users’ machines. Testing this new capability presents challenges that are new to most web developers. In this talk we'll discuss techniques and tools used for testing WebGL applications inside Google, as well as some tips for building your own WebGL applications.
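
To make the testing challenge concrete, here is a minimal sketch (TypeScript, browser environment assumed) of a check many WebGL test setups start with: confirming that a WebGL context can be created at all before running GPU-dependent tests. The helper name is illustrative, not Google's internal tooling.

    // Returns true if this browser/machine can hand us a WebGL context.
    function webglAvailable(): boolean {
      const canvas = document.createElement('canvas');
      // Older browsers of this era exposed WebGL behind 'experimental-webgl'.
      const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
      return gl !== null;
    }

    if (!webglAvailable()) {
      console.warn('WebGL not available; GPU-dependent tests would be skipped here.');
    }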

Roy Williams has been at Google for 2 years, and was at Microsoft for 5 years prior to that. Some of his major externally visible accomplishments include launching Route Around Traffic in Google Maps for Android. Roy has his BSc in Computer Science from Duke University. He lives in Seattle with his wife Tara.





BidiChecker: Automated Bidi Testing of Web Applications
Yana Margolin & Jason Elbaum, Google

BidiChecker is a tool for automated testing of web pages for errors in support of right-to-left (RTL) languages, also known as bidirectional (bidi) languages because they routinely include left-to-right items such as numbers and Latin-script words and phrases. Bidi support is a common requirement even for pages in left-to-right scripts: any page which accepts user input or displays multilingual content is likely to end up handling bidirectional text at some point, as it will eventually encounter data in Arabic, Hebrew or another RTL language. BidiChecker provides a JavaScript API which can easily be integrated into an existing test suite. It also offers a browser bookmarklet which lets you run the checks manually on any web page and browse through the errors highlighted on the page. We’ll talk a bit about common bidi bugs, then describe and demonstrate BidiChecker.
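
As a rough illustration of how such an API might plug into a test, here is a hedged TypeScript sketch built around a checkPage-style entry point; the exact signature is an assumption, so consult the BidiChecker documentation for the real API.

    // Assumed shape of the BidiChecker entry point; treat as illustrative only.
    declare const bidichecker: {
      checkPage(shouldBeRtl: boolean, element: Element): Array<{ toString(): string }>;
    };

    function assertNoBidiErrors(): void {
      // First argument: whether the page under test is expected to be RTL overall.
      const errors = bidichecker.checkPage(false, document.body);
      if (errors.length > 0) {
        throw new Error('BidiChecker found ' + errors.length + ' bidi error(s): ' + errors.join('; '));
      }
    }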

Yana Margolin joined Google in 2009. Since then she has been in charge of testing the implementation of bidirectional text support across various Google applications. Her focus includes promoting BidiChecker - an internally developed, open-sourced tool for automated testing of bidirectional text handling - as well as helping product development teams switch from a manual testing model to an automated one. Prior to Google, Yana was a QA Team Manager at WebCollage Israel LTD, a content syndication provider, where she built the company's testing team from scratch.










Jason Elbaum has been at Google since 2008, where he develops infrastructure for supporting bidirectional languages. Before that he spent nine years at Motorola/Freescale Semiconductor. He has degrees in computer science from Princeton University and economics from the London School of Economics, and worked at Imperial College London as a research assistant in petroleum engineering (really!). Originally from suburban Washington, D.C., he now lives with his wife and two kids in Israel, where he umpires Little League baseball.









How to Hit a Moving Target in the Cloud
Vishal Chowdhary, Microsoft
In this session, we share our experiences from testing the Microsoft Translator (MT) service. Testing the translator service can be divided into two broad areas: (1) testing the machine learning system that returns the translation answer, and (2) testing the web service that serves translate requests running on the Microsoft cloud. The MT service offers translation in 36 different languages for users across the globe through both a UI and an API. Testing the MT cloud service without user data would be like shooting in the dark. From a product standpoint, we need to answer questions like: How do we prioritize languages for improvement? How do we divide resources in our data center for effective capacity distribution across languages? From a test perspective, we need to understand how the system is being used so that we can ensure that any new bits deployed will hold up in production.

We will discuss three problems: a strategy for testing the machine learning system; how to perform load testing for the translation service running in the cloud; and how to extend the role of test by completely skipping the ‘opening bugs’ step.

With Microsoft for the past seven years, Vishal Chowdhary is currently with the MSR Machine Translation (MT) team, where he is responsible for both performance and scale testing of the MT cloud service and machine translation quality. In the past, he has been involved with the BizTalk, .NET Framework WF and WCF, and Windows Azure AppFabric teams, where he owned test strategy, test framework design, and feature testing. Vishal is passionate about making testing simpler and more interesting by employing new testing strategies and processes that prevent defects from entering the product.


Testing Cloud Failover
Roussi Roussev, VMware
The elastic properties of cloud computing introduce change. While extremely beneficial in terms of optimal resource utilization and agility, change can be dangerous. In this talk, I will describe several recent multi-day cascading failures and propose practical approaches to dealing with the problem. With the help of virtual machines, one can easily inject failures at device, host, cluster or datacenter level. And even if an issue slips through, multi-cloud environments can provide the necessary isolation and independence.

Roussi Roussev is a software engineer in the Datacenter Platform Group at VMware, where he works on configuration management and security solutions for VMware vSphere. Previously, Roussi held an engineering position at Google, where he worked on systems infrastructure, service monitoring and management. At Microsoft Research, he developed novel systems management, spyware and rootkit detection techniques, and helped build the most comprehensive client-side honeypot for identifying malicious websites that exploit browser vulnerabilities. Roussi pursued graduate studies in computer science at the Florida Institute of Technology, where he researched and developed model-based testing, fault injection and malicious software detection methods. His interests include operating systems, security, testing and building large-scale systems.


Behind Salesforce Cloud: Test Automation Cloud and Yoda
Chris Chen, Salesforce
How would you provide quick and accurate feedback on hundreds of check-ins every day? How would you triage hundreds of test failures each day? How would you validate each of more than one hundred releases to production per year? These are the questions Salesforce.com has had to answer during its twelve-year history, and these are the challenges that led to the creation of its “test automation cloud” and Yoda. Chris provides a quick overview of how Salesforce.com addresses those challenges.


In his role as senior manager of the test automation team at Salesforce.com, Chris Chen drives key elements of the overall test automation strategy for the organization and oversees the development and testing of many test automation projects. Chris joined Salesforce.com in 2001 and has worked in various areas of R&D. For the past six years, he has primarily focused on test automation and currently has several test automation-related patents pending.

ABFT in the Cloud
Timothy Crooks, CygNet
While an Automated Basic Functionality Test (ABFT) is not a new concept, especially with the days of Continuous Integration and Smoke Tests having preceded it, getting everyone in your organization to contribute isn't always straightforward. What I'd like to think we've done at CygNet Software is provide an internal-cloud-based testing platform for every skill set and multiple forms of testing in our company. Some of our QA and Dev folks are better script testers, some like our screen format (ActiveX controls for a web/internal MDI client), while a majority of Dev is writing unit test console and UI apps. My strength was in UI automation, so I started with a user-workflow-oriented ABFT to cover user paths and the critical integrations. Next, we created a script "bucket" and then a self-testing "screens" bucket, too. Finally, we wrote our own script-like wrapper format called "ARX" to order and pull in various apps, screens and scriptlets to get even more coverage.

Tim is a Software Automation Engineer at CygNet Software.  He is the “Automation Architect” for UI workflow testing and the ABFT System, as well as test lead for various features in each release.  With a passion for “creative destruction” and software validation, he has tested and built multiple tools and frameworks for web, UI, and console-based applications.  Tim has 15+ years in Software Testing and Automation Frameworks from various companies—IBM, Microsoft, RealNetworks, VERITAS/Roxio and now CygNet Software.


ScriptCover: JavaScript Coverage Analysis Tool
Ekaterina Kamenskaya, Google
The JavaScript coverage analysis tool is a Chrome extension that generates line-by-line coverage statistics for any web page, without requiring any modifications from the user. Collection starts as the page loads and continues as the user interacts with the page. The results can be viewed in real time, both as overall coverage scores and per external/internal script, with the executed lines highlighted.

The tool’s broad scope makes it attractive to both testers and developers, providing data for a variety of common debugging, analysis, and exploration techniques. Using JavaScript coverage in automated UI tests, such as WebDriver tests, gives an indication of how well an application is tested. Manual testers can use the information to understand how much of the application they have covered. The detailed coverage report is useful for identifying components and functionality that were not covered during testing, and for narrowing down the areas in the code that give rise to a bug while debugging an application.

Ekaterina Kamenskaya holds a PhD and has over 14 publications related to image processing algorithms, face recognition, and psychological profiling. Her career started in quality assurance management at Falk AG and later DoubleClick Inc. She has since moved on and is now a Software Engineer in Test at Google. Her work interests relate to all aspects of software testing, and to automated testing and debugging of web applications in particular.


Cloud Sourcing - Realistic Performance, Load and Stress Testing
Sai Chintala, AppLabs
Today, most enterprises are tapping the cloud to gain business benefits ranging from reduced enterprise IT costs to improved workload optimization and service delivery. Beyond tactical cost reduction, enterprises are beginning to value the immediate availability of cloud offerings as well as the business flexibility they bring to an organization, since the current climate demands adapting to emerging business requirements. While cloud computing offers these benefits, it presents a new set of challenges - such as security, privacy, availability, and data integrity - which must be mitigated effectively. This session presents an approach and methodology for web application performance testing from the Cloud.


Sai has over 21 years of IT experience. He has been with AppLabs (recently acquired by CSC) since its founding year. He currently heads the Solutions Engineering Group, which is responsible for providing pre-sales technical solutions support to business teams across three geographies (US, UK, and Emerging Markets). Prior to AppLabs, Sai worked with large enterprise organizations in the US for over 12 years, playing multiple roles for companies such as Wiltel, Schlumberger Well Services, Fidelity Investments, and Alliance Data Systems. Sai has an MS in Computer Science from Lamar University, Texas and a BS in Engineering from JNTU, Hyderabad.


Web Consistency Testing
Kevin Menard, Mogotest
Web Consistency Testing is a new form of automated Web testing that answers the simple question "does this page look the way it should?". Historically, the way a page looks has been relegated to the status of "design artifact" and, as such, has been treated as something that must be tested with human eyes. In my talk I present the results of research and development on an automated issue detection system that can be extended in numerous dimensions to detect cross-browser rendering issues, CSS regressions, and i18n differences. Using the simplest representation possible - the "golden copy" of a page - this system requires no site-specific programming, making Web Consistency Testing available to every person in an organization, from product manager to QA engineer.
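
To illustrate the "golden copy" idea in the simplest possible terms, here is a deliberately naive TypeScript sketch using the selenium-webdriver Node bindings and a byte-for-byte screenshot comparison; a real Web Consistency Testing system compares layout and tolerates benign differences, so treat this only as a sketch of the concept.

    import { promises as fs } from 'fs';
    import { Builder } from 'selenium-webdriver';

    // Compare a live page against a previously captured "golden" screenshot.
    async function matchesGolden(url: string, goldenPath: string): Promise<boolean> {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get(url);
        const current = Buffer.from(await driver.takeScreenshot(), 'base64');
        const golden = await fs.readFile(goldenPath);
        return golden.equals(current);  // any rendering change flips this to false
      } finally {
        await driver.quit();
      }
    }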


Kevin is the founder of Mogotest, a Web Consistency Testing service that aims to change the way we test Web sites and applications and simultaneously make testing available to all.  He's been involved with many open source projects over the years, most notably the Apache Cayenne and Tapestry projects, Selenium, svn2git, and the rubber Capistrano plugin for cloud provisioning and deployment of Ruby applications.  He is an Apache Software Foundation member and the current maintainer of the Selenium Grid project.


Keynote: How Hackers See Bugs
Hugh Thompson, People Security


Anthony Voellm, Google
The world of building software is undergoing rapid changes with the shift from desktop applications to highly connected and ubiquitous applications served from the Cloud. The shift to the Cloud poses new challenges (e.g., how do you run xUnit frameworks on PaaS?) and opportunities for how to test, while also creating whole new ways of testing in general (e.g., fuzz testing on hundreds of machines for little cost). This talk will focus on helping you understand the challenges and opportunities while separating fact from fiction. It’s time to part the Clouds and understand what lies ahead.

Anthony F. Voellm currently leads the Google Cloud Test team and has a wide range of experience, from kernel and database engines to graphics and automated image and map extraction from satellite images. Anthony is an avid inventor with 7 technology patents issued. He is focused on delivering performance, reliability, and security in existing products like Google Cloud Storage and Dremel while also innovating new offerings. Prior to joining Google in 2011, Anthony held multiple roles at Microsoft, leading the Microsoft Windows Reliability, Security, and Privacy test team working on Windows 7+, the Microsoft Hyper-V performance team, and the SQL Server performance team. He has also been a developer and tester on the Windows filesystem, SQL Server engine, and SGI IRIX networking teams. Anthony has taught performance testing to over 2,000 people worldwide and given dozens of informative talks on software fundamentals. He keeps a personal technology blog on software fundamentals at perfguy.blogspot.com. In addition to his computer interests, his passions lie in growing engineers, building things, and doing anything outdoors. Anthony holds a Master of Science from George Washington University, and a BA in Physics and a BS in Computer Science and Mathematics from the University of Vermont.


WebDriver
Simon Stewart, Google
Google has a unique infrastructure for running web tests. This talk will focus on how this infrastructure evolved, from running tests on local machines, all the way up to the sophisticated tools available for Googlers now. Along the way, you'll learn how you, too, can build something similar using OSS, and a little bit of elbow grease. You'll walk away knowing just how much effort you want to expend on running your tests in the cloud, and whether a private or public implementation is the right thing for you.
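
One OSS building block behind this kind of infrastructure is a shared Selenium server or grid. The hedged TypeScript sketch below (selenium-webdriver Node bindings; the grid URL is illustrative) shows the basic move: pointing an ordinary test at remote browsers instead of a local one.

    import { Builder, By } from 'selenium-webdriver';

    async function smokeTest(): Promise<void> {
      const driver = await new Builder()
          .usingServer('http://selenium-grid.example.com:4444/wd/hub')  // remote hub, not a local browser
          .forBrowser('firefox')
          .build();
      try {
        await driver.get('https://www.example.com/');
        await driver.findElement(By.css('h1'));  // fails if the page did not render
      } finally {
        await driver.quit();
      }
    }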


Simon Stewart lives in London and works as a Senior Software Engineer in Test at Google. He is the current lead of the Open Source Selenium project and deeply involved with browser automation at work. Simon's first GTAC experience was in New York, where his carefully planned demo went horribly awry. It has been said before that Simon enjoys beer and writing better software, sometimes at the same time. This continues to be true. He is also the top hit for the search 'steel cage knife fight', a fact that makes him inordinately proud.



Dounia Berrada, Google
Dounia’s lightning talk will give a quick update on Mobile support in WebDriver.


Dounia has been a Software Engineer in Test at Google since 2008, working on mobile tools and infrastructure. As part of her role she contributes to the open source project WebDriver (http://selenium.googlecode.com), focusing on implementing the Android components of the browser automation framework. Dounia has also been working on cloud infrastructure within Google for Android web testing. She has a Master's degree in Computer Science from the Georgia Institute of Technology (Atlanta) and a Master's from the University of Technology of Compiègne (France).







jstestnet: CI with JavaScript Integration Tests
David Clarke, Mozilla
David's lightning talk will focus on JavaScript integration testing in your own cloud.

David Clarke, born in London, England and raised in San Diego, California, has a BS in Computer Systems Engineering from Boston University. He was a founding engineer at Gizmo5 and a Platform QA Manager at LiveOps for 5 years, building Java/JavaScript/Ruby-based test frameworks and performance profiling. He is currently an Automation Services Engineer at Mozilla, with an emphasis on test frameworks and HTML5.








NativeDriver
Matt DeVore, Google
At Google there is no well-established native UI testing solution. There is a heavy reliance on manual testing, and the automated solutions in place are not yet mature and differ between platforms. Outside of the native world, WebDriver is used extensively to automate web application UIs. NativeDriver aims to be the native version of WebDriver and to bring its simplicity to native platforms. It is an implementation of an extended WebDriver API which drives the UI of native applications. This makes the testing experience very similar between the web and all native platforms.

iOS NativeDriver has been open-sourced, and the Android version is also being used by multiple teams at Google to run automated tests. One issue encountered in the design was API mapping - how do you interpret the WebDriver API so that it makes sense on a non-web platform? Limited platform support for UI automation is also a significant challenge.

The client/server architecture of NativeDriver, which is shared with most implementations of WebDriver, has provided technical benefits while enforcing NativeDriver's black-box nature.
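
As a purely hypothetical illustration of the API-mapping question (NativeDriver's real clients are Java and Objective-C, and every name below is invented), here is a TypeScript sketch of how WebDriver-style calls might read against a native Android screen.

    interface NativeElement {
      click(): Promise<void>;
      sendKeys(text: string): Promise<void>;
    }

    interface NativeUiDriver {
      // On the web, "id" matches an HTML id attribute; on Android it would map to a view ID.
      findElement(by: { id: string }): Promise<NativeElement>;
      // There is no URL bar on a phone, so "navigation" becomes starting an activity.
      startActivity(activityClassName: string): Promise<void>;
    }

    async function addNote(driver: NativeUiDriver): Promise<void> {
      await driver.startActivity('com.example.notes.EditNoteActivity');
      const title = await driver.findElement({ id: 'note_title' });
      await title.sendKeys('Buy milk');
      const save = await driver.findElement({ id: 'save_button' });
      await save.click();
    }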

Matt DeVore is a Software Engineer in Test at Google Japan. Since he joined Google one year ago, he has been working mainly on the NativeDriver team. NativeDriver is an automated UI testing tool for multiple platforms. He set the direction of NativeDriver and leads development on the Android version, which was released on Google Code a few months ago. Before joining Google, Matt worked at Microsoft in Japan and in Redmond, WA for a total of four years. In Redmond he worked on the Visual Basic .NET compiler team developing UI and non-UI automation tests. After moving to Japan, he led internationalization testing of the .NET Framework.



David Burns, Mozilla
Every developer, tester and automator has had this question at least once in their career: "Can you show me the coverage of your UI tests on the page?" The reply is always: "I can't, but I know where my tests are going by looking at them." Selenium WebDriver can provide some interesting insights into what is happening on the page, and this talk will show how we can use these to build a heat map of our tests that use Selenium WebDriver.
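
As a hedged sketch of the heat-map idea, the TypeScript snippet below (selenium-webdriver Node bindings; names are illustrative) records the on-page rectangle of every element a test touches, so the rectangles can later be rendered as an overlay showing where the tests actually go.

    import { WebDriver, WebElement } from 'selenium-webdriver';

    interface Rect { x: number; y: number; width: number; height: number; }
    const visited: Rect[] = [];

    // Wrap clicks so each interaction leaves a record of where it happened.
    async function clickAndRecord(driver: WebDriver, el: WebElement): Promise<void> {
      const rect = await driver.executeScript<Rect>(
          'const r = arguments[0].getBoundingClientRect();' +
          'return {x: r.left, y: r.top, width: r.width, height: r.height};', el);
      visited.push(rect);   // feed these into a heat-map renderer afterwards
      await el.click();
    }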


David Burns is a Lead Software Engineer in Test at Mozilla. He is a lead maintainer of the Selenium Browser Automation Framework and the author of Selenium Testing Tools: A Beginner's Guide.







Jerome Mueller, Google
This presentation explains why it's good to separate WHAT is being tested from HOW it's being tested. It shows how that can be done and what advantages you can expect. At the end I reveal some plans I have for Google testing. The buzzword "Behavior Driven Development" will be used.
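
As a hedged, invented illustration of the WHAT/HOW split, the TypeScript sketch below keeps the statement of behavior free of mechanics and pushes the mechanics behind a helper that can change without touching the intent; the scenario and helper names are made up for this example.

    import * as assert from 'assert';

    // WHAT: reads like the behaviour we care about, with no UI mechanics in sight.
    async function registeredUserSeesDashboardAfterLogin(): Promise<void> {
      const session = await logInAs('registered-user');
      assert.strictEqual(session.currentPage, 'dashboard');
    }

    // HOW: the mechanics live behind a helper, so they can move from driving a UI
    // to calling an API (or a stub, as here) without touching the WHAT above.
    interface Session { currentPage: string; }

    async function logInAs(role: string): Promise<Session> {
      return { currentPage: role === 'registered-user' ? 'dashboard' : 'login' };
    }

    registeredUserSeesDashboardAfterLogin()
        .then(() => console.log('ok'))
        .catch((err) => { console.error(err); process.exit(1); });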


Jerome Mueller is a Software Engineer with a heart for testing (ever since the first XP book), who was successfully lured away from the glorious life of self-employment by Google. He's also a feared baseball player in his home country (Switzerland).








Browser Automation with NodeJS and Jellyfish
Adam Christian, Sauce Labs
In a world where JavaScript is everywhere - your browser, server, database, mobile device - you want and need code reuse to speed up development. In order to do this, you need to know that your code works in all the environments you care about.

Jellyfish is a node project focused on provisioning different environments and making it easy for you to execute your JS and get the results.

Adam is the co-creator of Windmill and various other open source projects, including Mozmill (the XUL test automation project) and Jellyfish. He also works on a small snowboarding video blog called EatPow. His personal blog is at adamchristian.com. He is currently employed as a JavaScript Architect at Sauce Labs.



The Latest in Google Test Tools
Ibrahim El Far, Google
Today’s test engineering is labor intensive, requires expensive context switches, involves so much grunt work that it stifles creativity and slows productivity, and often ignores product and customer risks. Come and learn about the latest Google has to offer in open source test tools that help manual testers by eliminating much of the grunt work, keeping them focused on testing, and helping them prioritize their efforts based on risk. Chief among these are BITE, Quality Bots, Test Analytics, and ScriptCover, the first two of which are the focus of this talk.


Ibrahim is an engineering manager at Google leading a team dedicated to building next-generation test tools for the web. Earlier he was a software engineer building developer tools used across the company. Prior to Google, Ibrahim was at Microsoft where he led tools and QA teams in Bing and SQL Server. His academic background includes extensive graduate-level work in software testing at the Florida Institute of Technology.



Angular
Misko Hevery, Google
Angular teaches your old browser new tricks. It is what HTML would have been had it been designed for building web applications. Angular is radical because it eliminates boilerplate code with declarative rather than imperative syntax.


Angular:
* Allows you to create custom HTML elements and attributes that provide dynamic behavior
* Lets you describe web-application behavior declaratively with little JavaScript
* Creates an environment that provides trivially reusable widgets, data-binding, "automatic MVC", server resources, and other primitives useful in building AJAX apps
* Builds apps that have orders of magnitude less JavaScript than equivalent apps written in the classical way
* Eliminates waiting on compilation for UI changes

Having an awesome framework is not enough; one also needs an awesome testability story. In this session we will focus on all of the work we have done to make applications written in Angular a joy to test.
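
As a hedged sketch of what that testability story can look like, here is a TypeScript example assuming Jasmine plus the angular-mocks helpers (angular.mock.module / angular.mock.inject); the module and controller names are invented for the example.

    declare const angular: any;
    declare function describe(name: string, fn: () => void): void;
    declare function beforeEach(fn: () => void): void;
    declare function it(name: string, fn: () => void): void;
    declare function expect(actual: any): any;

    // A controller written the Angular way: its dependencies are injected,
    // so a test can hand it a scope of its own and inspect the result directly.
    angular.module('greeter', []).controller('GreetCtrl', function ($scope: any) {
      $scope.name = 'world';
      $scope.greeting = function () { return 'Hello, ' + $scope.name + '!'; };
    });

    describe('GreetCtrl', function () {
      beforeEach(angular.mock.module('greeter'));

      it('greets the current name', angular.mock.inject(function ($controller: any, $rootScope: any) {
        const scope = $rootScope.$new();
        $controller('GreetCtrl', { $scope: scope });
        expect(scope.greeting()).toEqual('Hello, world!');
      }));
    });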


Misko Hevery works as an Agile Coach at Google, where he is responsible for coaching Googlers to maintain a high level of automated testing culture. This allows Google to do frequent releases of its web applications with consistently high quality. Previously he worked at Adobe, Sun Microsystems, Intel, and Xerox (to name a few), where he became an expert in building web applications using web-related technologies such as Java, JavaScript, Flex and ActionScript. He is very involved in the open source community and is the author of several open source projects, including Angular (http://angularjs.org) and JsTestDriver (http://code.google.com/p/js-test-driver).




Keynote: Secrets of World Class Software Organizations
Steve McConnell, Construx Software
Construx consultants work with literally hundreds of software organizations each year. Among these organizations, a few stand out as being truly world class. They are exceptional in their ability to meet their software development goals and exceptional in the contribution they make to their companies' overall business success. Do world class software organizations operate differently than average organizations? In Construx's experience, the answer is a resounding "YES." In this talk, award-winning author Steve McConnell reveals the technical, management, business, and cultural secrets that make a software organization world class.


Steve McConnell is CEO and Chief Software Engineer at Construx Software where he consults to a broad range of industries, teaches seminars, and oversees Construx’s software engineering practices. Readers of Software Development magazine named him one of the three most influential people in the software industry along with Bill Gates and Linus Torvalds. Steve is the author of Software Estimation: Demystifying the Black Art (2006), Code Complete (1993, 2004), Rapid Development (1996), Software Project Survival Guide (1998), and Professional Software Development (2004), as well as numerous technical articles. His books have won numerous awards for "Best Book of the Year" from Software Development magazine, Game Developer magazine, Amazon.com's editors, and other sources. Steve serves as Editor-in-Chief Emeritus of IEEE Software magazine, is on the Panel of Experts of the SWEBOK project, and is past Chair of the IEEE Computer Society’s Professional Practices Committee. He can be reached at stevemcc@construx.com.