Podcast Episodes

Episode 22: Test and Launch the Site (w/ Bob Davidson)

August 16, 2023 | 34:19 | Bob Davidson

Corey and Deane talk about the concept of the “Nails List.”

Then, Bob Davidson, Director of Development at Blend Interactive, joins to talk about how to get your site ready for launch, what makes a good QA practitioner, the role of quality assurance and testing in the development process, and how to prep the site so it doesn’t fall over when exposed to the real world. We also spend a lot of time talking up Jenna Bonn, Blend’s QA Practice Manager.

The Web Project Guide podcast is sponsored by Blend Interactive, a web strategy, design, and development firm dedicated to guiding teams through complicated web and content problems, from content strategy and design to CMS implementation and support.

Transcript

Corey (00:10):

Hello. This is the Web Project Guide Podcast and this is Episode 22, Test and Launch the Site. I'm Corey Vilhauer, director of strategy at Blend Interactive and co-author of the Web Project Guide. Later we'll be joined by Bob Davidson, director of development at Blend Interactive and an Optimizely OMVP. But first, I'm joined by my co-host Deane Barker. Hi, Deane.

Deane (00:29):

Hi, Corey. How are you?

Corey (00:31):

I'm great.

Deane (00:31):

You're back to saying co-host, not co-criminal.

Corey (00:36):

I got tired of coming up with new ones. Sometimes you just have to go with a basic, easy thing.

Deane (00:44):

I'll be your co-host, Corey, at least for two more episodes.

Corey (00:46):

Thanks. Hey, do you want to talk about QA?

Deane (00:49):

Does anyone ever want to talk about QA?

Corey (00:52):

We'll probably talk about this later, but I used to do QA at Blend. I don't mind talking about it so much.

(00:59)
But as a developer, you were on the receiving end of QA. And I'm curious when it comes to the QA process. When you would get all of those tickets, when you would realize we were ready to launch, we're ready to push off a site, what are the things that frustrated you the most about like, "Oh crap, we forgot this again," or "oh no, they forgot to add this" type of thing?

Deane (01:22):

Well, I mean, developers can be clueless. Developers just love the chunky, fun things to do. When a developer bids a project, they often just like, "This is how long it'll take me to put up the walls of the house and stuff." They forget about putting up the drapes and doing all the trim inside. They forget about all that extra stuff.

(01:48)
And of course, as a developer, you never want to plan for QA. You expect that your stuff should be right all the time. In a perfect, healthy environment as a developer, you would value your QA person. You would value them as someone who looks out for you and makes sure your work is great. And that you can learn through, I don't want to say learn from because I'm not a developer, but learn through, because they're reviewing all your stuff and they can help you figure out what keeps going wrong and what our customers look at.

(02:17)
But that's not usually how it works. There's an unfortunate, I don't know if this is how it is at Blend, I don't mean to imply that, but there's often an unfortunately antagonistic relationship between a developer and a QA person, which is not healthy. If you see a developer that has great respect for QA and appreciates the work their QA people do, then that's a very mature developer that's probably very, very productive.

(02:39)
But yes, there are a lot of things that get forgotten and this is the job of the QA person to catch them and the project manager to plan for them.

Corey (02:50):

Deane, I once searched a term hoping to find information about it, and I did not get a result because it turns out it's a thing that Blend made up, and that was that I looked up the explanation and a summary of what a Nails List was. Tell me what the Nails List is, Deane.

Deane (03:09):

I made that term up. Blend, early in the company's history had a problem with just projects dragging on, like at the tail end of the projects. You would show it to the customer and the customer would find a million things that they didn't like and were wrong. And they would just drag on. Not only that, but they would always come back from the dead. You'd launch the thing and then the customer would come back three days later with a bunch of things that weren't working. I want to stress, this was early in Blend's history before we had even hired you.

(03:40)
I remember one day at Blend saying, "We really need to nail these projects shut." And the metaphor in my head is like when someone is dead, you put them in a coffin and you literally nail the coffin shut so they can't get back out. And we really need to nail these things shut, and that morphed into what came to be known as the Nails List.

(04:02)
And the Nails List was a list of things that we had to check before a site launched. I mean, it was to check things like make sure there was a mail server running on the site if it was sending email.

Corey (04:15):

Is the favicon showing?

Deane (04:17):

Right, right, right. When you started at Blend, the Nails List was in place, right?

Corey (04:21):

Mm-hmm (affirmative) yeah.

Deane (04:22):

Okay. I came up with that term, and that Nails List was our first QA process. And that morphed into hiring you to actually become a QA manager. And I'm actually laughing now at the thought that you tried to Google the Nails List.

Corey (04:41):

I just thought it was, I was like, "Okay, well, people do this, right?" Nope, that's just Deane.

Deane (04:42):

That was something we made up. At the time I had just read a book by one of my favorite authors, Atul Gawande, and the book is called The Checklist Manifesto. I recommend this book to anyone. Atul Gawande is a surgeon and he talks about the value of checklists in preventing stupid mistakes, particularly in medicine. Like if you're putting in a central line to a patient, there's a checklist of seven things you need to make sure were done, and it reduces errors. I had just read that book and I thought, "That's what I need: a checklist," and that checklist became the Nails List, which became Corey Vilhauer, which has morphed forward into Blend's QA process now.

Corey (05:20):

And I think more than anything, there is an actual process, which was wild to think that there wasn't any of that when I first started, but it was a different time on the web.

Deane (05:29):

When you're starting a company, here's the thing about QA: it costs money. It costs money and time. And when you're starting a company, you're fighting for work and you can't afford to lose a project because you had to add a bunch of stuff for QA. In the early days of any web development company, you are playing really fast and loose.

(05:48)
And Blend was no exception. In the early days, we had a skeleton crew. We didn't have anybody available to do QA, much less were we going to add 35% to every project to account for it.

(06:03)
And QA is something that develops in a mature company. Blend has been around now for 18 years, which is like elderly in the world of web development companies. And so it has had time to develop a repeatable QA process, and those processes get developed through massive levels of frustration and aggravation.

Corey (06:26):

And absolute necessity.

Deane (06:28):

Right. We were driving ourselves nuts. We hated having to go back and fix these mistakes that we would see over and over and over again.

Corey (06:35):

Well, our guest today will be Bob Davidson. Bob Davidson is Director of Development at Blend. He will be able to talk a little bit about how our QA process has evolved over the years and he is now responsible for managing the development department and all of the development processes at Blend.

(06:50)
But first, the Web Project Guide is sponsored by Blend Interactive, a web strategy, design, and development shop dedicated to guiding teams through complicated web and content projects. Blend's been building great websites for over 18 years, and we've been QAing those projects for most of those 18 years.

(07:05)
We're always looking for our next partnership, so visit us at blendinteractive.com.

(07:17)
All right, let's welcome our guest, Bob Davidson. Hi Bob.

Bob (07:19):

Hi, thanks for having me.

Corey (07:21):

Bob. I want to talk a little bit about when I started in this industry. Weirdly enough, I did not start as a content strategist, I did not start as an information architect, I started as a QA practice manager. Part of that, I believe, is because the content strategy process was still kind of in its infancy. But also I remember Deane talking about how it was maybe one of the best ways to actually learn the process of how-

Deane (07:46):

It's also a great way to get developers to hate you [inaudible 00:07:50] for stuff. So Corey did our QA for probably at least a year, and he was supposed to be the last stop before websites went out the door, and I think he probably was.

(08:02)
Let's talk about the dirty little secret of why QA often doesn't get done. When you look at all the things that get done to do a project, QA is something that often falls between the cracks. Bob, what's been your experience with that and how do you counter that? How do you make sure that QA gets done and gets done to a decent standard?

Bob (08:19):

Well, I think there's a bunch of stuff that goes into that. I mean, doing good QA is not free, and it's one of those things that if you do it well, it's sort of like good sound design. If it's done well, you don't realize it's been done at all because you don't have any problems and things are smooth. But that comes with a cost. And so there's this cost that doesn't feel like there's any benefit until you're paying the other cost of not doing it and your site is down.

Deane (08:44):

We would add a QA surcharge. When we were bidding projects, we would add a certain percentage for QA, and there were some customers that objected to that. And we would go back to those customers and we would say, "Okay, well will you do the QA then? Will you be our QA department?" And I think only one customer ever took us up on that. We marked that down and let the customer do their own QA.

(09:07)
But QA is the thing that everybody just expects to be done, but nobody wants to pay for it because everybody just expects it to be right the first time. And when you're bidding a project, if you're just bidding what I call principle construction, then you're assuming there's going to be no errors and no problems and nothing's ever going to be changed, and that's just never the case.

(09:25)
But let's talk about, Bob, can developers do their own QA?

Bob (09:29):

Can they? Yeah, I suppose in theory they could. The better question is, should they? And the answer is definitely no. My experience with devs is that, and I'm speaking as a dev, as someone who did dev for many, many years and still does some, we tend to do the happy path, as we call it, where we check that the form works in the sense that when I fill out all the expected information, it goes to the place it's supposed to go. What we don't like to do and tend to not do is check what happens when I don't fill out the expected information, or when I go down a path that we hadn't planned for, or I do something silly. We don't look at that, and that's what a good QA manager will do: kind of fuzz the forms and try things and do things in a different direction.

(10:11)
They'll also spend time looking at it from the user's perspective, whereas the dev tends to look at it from the perspective of what the specification says: "Does this meet the spec?" A QA person will look at it more like a user will look at it, in that, "Is this behaving in the way that I expect it to? Is it performing the way I expect it to? Is there anything about this that's weird or confusing?" And that's sort of that softer QA that is also pretty valuable.

Corey (10:37):

Bob, do you think that a QA practice person, whoever's doing QA on the site, has to have a development background?

Bob (10:43):

Not necessarily. I think they do need to have some familiarity with the technology, but they don't necessarily have to be a developer.

Deane (10:53):

I think there's a difference between "this complies with requirements" and "this works." I mean, those are two really different things. I think developers really strive to get it to comply with the requirements.

(11:04)
I'm thinking about that funny video, it has the child's toy where you have to put the shapes through the slots. And it has this woman narrating this video, and so the person with the toy puts the square through the square slot, lady's very happy. And then the person with the toy picks up the triangle and she's like, "Yep, you put it through the triangle slot." And they just turn it sideways and they put it through the square slot. And they do that with all the different shapes. They just put them through the square. And this person is slowly self-destructing in the [inaudible 00:11:34] because they're watching them do this.

(11:35)
And I think often when you see a tester do something you never intended, something you never thought they would do, the first instinct of developers is to get pissed off and say, "Well, why are you using it that way?" And you can't really blame a QA person, because it's their job to proceed as if a user was doing it. So I think less development experience is often an advantage.

Corey (11:54):

That's why I was so good at QA: I had zero developer experience whatsoever.

Deane (11:58):

Now since you stopped doing QA and since I left Blend, I know that Blend has a new person managing their QA. Bob, tell us about this person and tell us where this person fits into the process. How do things kind of float through this person's world? Where do they come in the process and how involved are they before the product gets to QA?

Bob (12:19):

So our QA practice manager is Jenna. She's great. She's actually been out on maternity leave and she came back a couple of weeks ago, and we are all incredibly thrilled to have her back. You don't realize how much you need a good QA person until he or she is gone, and then it really stands out.

(12:39)
We try not to do QA at the end of the project. We like to do QA as an ongoing process, so as features and maybe pages or whatever the project might happen to be, as chunks of work are done, they get QAed as much as they can be. And then of course we have a final pre-launch QA pass as well where we do the pre-launch checklist and we kind of button up everything.

(13:01)
And basically in our process we'll sort of deliver a big chunk of functionality to our QA manager. Jenna will go through that chunk and test it, find bugs, find issues, and then it comes back to the devs to kind of clean those up and fix it. And then basically it goes back to her and we kind of ping pong it back and forth as often as needed until it passes, until she's happy with it. And then it goes to the client for QA. And we do that throughout the life cycle of the project and after launch as well.

Deane (13:31):

Where's the dividing line between "this is broken" and "this sucks?" Because for someone running QA, and I'm sure this is true for Jenna, things break, and that's very easy for her. I mean, it's binary, right? It was supposed to work and it doesn't work.

(13:42)
But when you get into weird shades of gray, when Jenna might do something and it technically works, but it sucks, what latitude does she have to go back to a developer and say, "Look, this is crap. You need to make this work better?"

Bob (13:55):

In those scenarios, it comes back to dev and it kind of comes down to budget and how much time we've spent. If something really is just truly awful, we may take it to the project manager and we may try to find some time to fix it. If it's something we can fix, we will typically fix it because our goal is always to make the best possible product that we can. But if it's maybe just a little scuzzy but fixing it is going to be a huge effort, it's just going to be a little scuzzy, unfortunately.

(14:22)
Drawing that line is sort of a collaboration between the devs, the project managers, possibly the project owners and the QA manager.

Corey (14:30):

I mean, a lot has changed, I think, in Blend's process over the past decade, since I was actively doing QA. And a lot of that is to the credit of Bob and the dev team at Blend. But we used to do it all at once at the end, and what we would find is that there were so many little things that might be qualified as scuzzy that we could have caught earlier, when it was less of an issue to manage.

(14:56)
And now being able to do QA in phases, where you've got five, six weeks' worth of QA and you're QAing at the sprint level instead of at the end of the process, it helps that QA manager, whose job is really to be the eyes and ears and voice of the common editor, catch some of that stuff earlier.

(15:18)
Here's the thing, if something scuzzy gets through and a client looks at it and says, "This isn't working right," we end up having to fix it anyway. So it's literally you either do it now or you do it later.

Deane (15:29):

Bob, you're director of development, so you manage all the developers of Blend, correct?

Bob (15:33):

Right.

Deane (15:34):

Okay. So let's talk about your relationship with Jenna and a feedback loop that may exist between you two. Because I would think that Jenna is the person that would see code problems over and over and over again. And to what extent does she come to you or is she able to come to you and say, "Look, I've told developers three or four times to not do it this way and this keeps getting done this way?" How does a persistent QA problem turn into a training session?

Bob (16:02):

So we have a weekly meeting with all of the developers and Jenna, it's our L10 meeting from the EOS system, and that is typically where we have that feedback loop. So if there's a common problem that Jenna sees all the time on a site or whatever, she'll bring that up as a topic to discuss and we'll talk about it as a group and kind of decide how are we going to handle this going forward? How are we going to fix it?

(16:24)
So it's not so much her coming to me and then me dictating to the developers. It's more every week we have this opportunity to talk about the things that aren't working well and make them issues, solve them and move forward from there.

Deane (16:37):

What is the split between manual QA, like Jenna just typing on a keyboard and do you do any automated, scripted QA?

Bob (16:44):

We do a little bit of automated, scripted QA. We have a few scripts that just basically are fairly limited. They check that this page is up or that search results return results that we can find on the page, those kinds of things that are sort of easy to build and reliable. That's the trouble that we've always run into when we're trying to do automated testing is that because it's a content management system and editors can put, well, I should say we give the editors a fair amount of leeway and freedom in where they put pieces of content, and so content kind of ends up everywhere, it can be kind of challenging to make sure that we've got a consistent environment set up for that kind of automated testing. The automated system really wants things to be the same from one run to the next. So most of our testing is more manual.
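
For reference, here is a minimal sketch of what those limited automated checks can look like, in Python with the requests library. It covers only the two checks Bob mentions, a page-is-up check and a search-results check; the URLs and the search query are hypothetical placeholders, not Blend's actual scripts.

    # Minimal smoke checks: confirm key pages respond, and that a search
    # query returns results visible in the page body. A sketch only;
    # the example.com URLs and the query are placeholders.
    import requests

    PAGES = [
        "https://www.example.com/",
        "https://www.example.com/about",
    ]

    def check_pages_up():
        # Every key page should answer with HTTP 200.
        for url in PAGES:
            resp = requests.get(url, timeout=10)
            assert resp.status_code == 200, f"{url} returned {resp.status_code}"

    def check_search_returns_results(query="annual report"):
        # The search page should load, and the query term should appear
        # somewhere in the returned markup (a crude but reliable check).
        resp = requests.get("https://www.example.com/search",
                            params={"q": query}, timeout=10)
        assert resp.status_code == 200, f"search returned {resp.status_code}"
        assert query.lower() in resp.text.lower(), f"no visible results for '{query}'"

    if __name__ == "__main__":
        check_pages_up()
        check_search_returns_results()
        print("Smoke checks passed")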

Corey (17:33):

It's interesting because I think there's a philosophical difference between web QA and software QA. And this is what was a struggle for me when I started. And I know from conversations with Jenna that it's the same struggle, in that there isn't any really good documentation on how to do web QA. It's always very focused on software QA.

(17:53)
And I think a lot of that has to do with the fact that software is, I mean obviously it's always changing and it can change, but it's a little bit more locked down than a website, where you can literally do anything you want. Automation is nearly impossible because you change the title of a page and suddenly it looks wrong.

(18:09)
It's not so much a question but a comment. But why isn't there, Bob, why don't people write about web QA?

Bob (18:20):

I think because it's hard, to be blunt. And I think the web is so chaotic. There's basically an infinite number of device sizes in a browser because you can scale the browser to whatever size you want. And so many of the issues we find are very distinct. It's not so much that this block doesn't work, it's that this block didn't work on this browser in this location. So it can be very hard to find those kinds of things through automation.

Deane (18:46):

QA to me is like infrastructure is to cities. It takes a lot of work to make sure that the sewer system is functional, but everybody just expects it to be functional. Nobody cares if it works 'cause it's supposed to work. And that's just it, what is huge success for someone like Jenna? That nothing breaks, that nothing happens. Her responsibility is to promote boredom.

(19:09)
It's true though. If everything's boring and everything works and there are no problems, she should be celebrated. That is just very unglamorous and anti-climactic. So nobody talks about it.

(19:18)
But let's move past QA. Let's assume Jenna did her job great, and we are ready to roll.

Corey (19:23):

A hundred percent no bugs.

Deane (19:24):

Yes. Where have we been building our website up to this point, Bob? What infrastructure does this exist on?

Bob (19:31):

Most commonly these days it's on some cloud provider, usually through a managed web application of some kind. Occasionally we might have a client who has a data center for some reason and they might be hosting it on actual hardware, but most of the time it's in either AWS or Azure.

Deane (19:49):

Okay. So what you've done is you're building this thing in a completely separate environment from where, let's assume we're replacing an existing website, so you're building it in a completely separate place and the existing website has no idea this is happening.

Bob (20:01):

Right, and that carries with it a lot of advantages in that we don't have that issue of the switch over time where we're trying to build it on the same hardware and we have to maybe shut down a site while we bring up the new site. By building the site in two separate locations, launching is mostly a matter of switching traffic.

Deane (20:19):

I remember the days where to deploy a site we had to put on a jacket and go into an air-conditioned data center that was like 45 degrees, and actually pop out the cheesy little laptop-style CD-ROM tray and load it on a rack server. I was doing that as late as 2007. 2008, I think, was probably the last time I had to go in and do that.

Corey (20:41):

That sounds like something out of a spy movie.

Deane (20:43):

Oh yeah. Seriously, you would go into these long server racks and you would have to find the server, and that was sometimes fun. You'd go in there and then have to figure out which server you were supposed to install this on. And when I say which server, they were doing, like, counting down from the top: third rack in, fifth machine down from the top. It was ridiculous.

(21:02)
And then even back in the day where companies would host their website just on desktop towers, back in the help desk or something. I remember one of the Blend clients took the website we built for them and they put it on a Compaq Presario tower back in their help desk. And when we went in there to upgrade the website, the guy had to remove his coffee cup from the top of the server on which it was running. So I'm glad that those days have changed.

(21:30)
And would you say at this point, Bob, hardware is more or less, or I don't want to say hardware, but server instances are more or less disposable? I hate to use that word, but you can set them up and throw them away in a heartbeat.

Bob (21:41):

For servers that are virtualized? Yeah, absolutely. You can use a tool like Terraform or any of those and spin up a new server in a matter of minutes.

Deane (21:51):

I remember when getting a new server involved ordering one and waiting two months for it to arrive, and then it would sit in the box on a pallet back in the loading dock for another month until someone got around to plugging it in.

(22:04)
So you set up this new environment, you're all ready to roll. Does the test, like the integration environment where you're building it, does that often become the production environment?

Bob (22:14):

It can, depending on just how we decide to go. So it goes one of two ways. Either we set up a sort of smaller QA environment first, and then when we go to launch we set up a bigger production environment and then move all of the code and assets and stuff from the small environment to the big environment.

(22:30)
Or we can just set up the big environment first, do all the building there. And then when we're ready to launch we can launch and then copy everything back to a smaller environment and create a new QA environment.

Deane (22:40):

Yeah, talk to me a bit about that. Once you launch, testing environments can be a mess, because sometimes a problem that you're seeing in one environment depends on the content that's being worked with in that environment, and it doesn't exist in another environment. Do you try to keep environments continually synchronized? For a long-running project, do you keep a testing environment synchronized? Do you synchronize it only when you have to? How out of date is a testing environment, generally?

Bob (23:08):

Our preference is to keep those environments synchronized at least on a regular schedule. So maybe once a month we might copy down all of the content and assets from production into the stage environment. Especially if we have a three-environment setup where we've got QA, which is sort of the wild west environment where anything goes. We have stage, which is meant to be as close to production as possible, and we really like to keep stage up to date. And then we have, obviously, production. And so we try to move production back to stage at least once a month. We also like to try to keep QA up to date.

(23:42)
The issue we've run into is occasionally we'll have sort of long-running projects on the QA server where we're building out a major new feature, and when we copy down that content, it's a replace action. It is not an append action. So if we've built out a big project in content on QA and we move the production environment down, we're going to lose that project. And so we have to kind of freeze QA for a while and not sync it.

(24:09)
But otherwise, as much as we can, we try to keep them in sync because it does make tracking down bugs and stuff a lot easier.

Deane (24:14):

All right, well, we're ready to roll. We've gotten through QA, we have our new website built on an environment standing by. You get the go-ahead from the client, Bob, to launch. What does that mean? What do you do?

Bob (24:25):

It depends on the hosting, but most of the time it means we change the DNS entry from one server to point to a different server and cross our fingers.
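
As a rough illustration of that DNS switch, here is a hedged sketch assuming the zone lives in AWS Route 53 (one of the providers mentioned earlier) and that boto3 is installed with credentials configured; the hosted zone ID, hostname, and IP address are hypothetical placeholders, not a client's real records.

    # Sketch of a cutover: repoint the site's A record at the new
    # production server by upserting the record in Route 53. The zone ID,
    # hostname, and IP are placeholders.
    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="ZEXAMPLE123",  # hypothetical hosted zone
        ChangeBatch={
            "Comment": "Launch: point www at the new production environment",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "TTL": 300,  # a low TTL means a rollback propagates quickly
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }],
        },
    )

Keeping the TTL low before launch is part of why "switching traffic back" later in the episode can be treated as the easy rollback path.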

Deane (24:35):

Wildly scientific process, clearly.

(24:38)
Well, let's talk about train wrecks. There have been disasters. In fact, I'll talk about one of them that I presided over back when I was at Blend. We had a client that we did a launch for, and when the website started up, on the first request to the website, it populated a table of URLs that had to redirect. And this operation took about one minute.

(25:02)
And what I didn't realize, and I wrote the code for this, I'm embarrassed to say, but what I didn't realize is that if that collection was not populated, it would attempt to restart that population every hit. So if there was one request to the website and then nobody else touched the website for three minutes, you'd be fine. But traffic started to pour in and the machine just fell to pieces because it had like 175 processes running trying to populate this collection of I think 120,000 URLs that it had to redirect.

(25:31)
And this was a train wreck. The website kept falling over when we were starting it. And so what we had to do was basically point DNS back and sit back and figure out what the hell went wrong. And then I believe we did get launched three days later.
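
The failure Deane describes is a classic lazy-initialization race: every request that arrives before the table finishes loading kicks off another expensive load. Here is a minimal sketch of the broken pattern and the usual fix, written in Python purely for illustration; load_redirects is a hypothetical stand-in for the real, roughly one-minute lookup of those 120,000 URLs.

    # Sketch of the redirect-table bug and its usual fix. The broken
    # version starts a fresh, expensive load on every request that arrives
    # while the cache is still empty; the fixed version lets exactly one
    # request do the work while the rest reuse the result.
    import threading

    _redirects = None
    _lock = threading.Lock()

    def load_redirects():
        # Imagine ~120,000 URL mappings pulled from a database here.
        return {"/old-page": "/new-page"}

    def get_redirects_broken():
        global _redirects
        if _redirects is None:             # every concurrent request sees None...
            _redirects = load_redirects()  # ...and each one starts its own load
        return _redirects

    def get_redirects_fixed():
        global _redirects
        if _redirects is None:
            with _lock:                    # only one thread builds the table;
                if _redirects is None:     # the second check stops repeat loads
                    _redirects = load_redirects()
        return _redirects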

(25:46)
Now that I have discussed a humiliating story, why don't you tell me one Bob?

Bob (25:51):

Well, honestly, I'm trying to, I'm having a hard time thinking of anything off the top of my head.

Deane (25:55):

Oh, goodness sakes. That's [inaudible 00:25:56].

Bob (25:58):

No, I mean, so pretty much every site I've launched in the last decade, we've had a production server up and running before we move traffic over. And so that kind of thing where any amount of traffic would bring the site down doesn't really happen.

(26:13)
That said, I have had similar issues where it wasn't so much that more than two people were hitting a site, it was more that while testing it, we maybe had five or six concurrent users on the site at any given time. Once we hit the 50 concurrent users, then the site rolled over and died.

(26:29)
But again, because it's a separate server, what we did was move traffic back to the old production site, did the things we needed to do to get the production server, the new production server, able to handle the load, making the code changes and stuff. And then we relaunched.

Corey (26:42):

Listen, I hear terms like load testing and stress testing and penetration testing, and I only kind of know what those are. Is that the type of work you're doing to prepare for real-world rigors? Like the work they do at a factory on a tent to make sure that it stays waterproof for a certain amount of time?

Deane (27:02):

I'm so excited to hear that our former QA manager has no idea what those things are.

Corey (27:07):

Listen, somebody else did that stuff.

Deane (27:08):

Yep. Thank God you didn't do QA for long, Corey.

Bob (27:11):

Yeah, for sure. Load testing and well, really stress testing is definitely part of our pre-launch process. And basically when we stress test the site, we launch a script out on some cloud server or potentially cloud servers where we just hit the site as hard as we can and see just how much traffic it can take before it falls down and see how that compares to what our expected traffic is. And if we're not at least 10 times our expected traffic, then we've probably got an issue because we need to be able to handle not just the day-to-day average load, but those big spiky loads too when there's a campaign or something. We need to be able to do that.
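
Here is a stripped-down sketch of that kind of stress script, in Python with the requests library: step up the concurrency and watch where success rates and response times fall over. The URL and concurrency steps are placeholders, and as Bob says, a real test would run from one or more cloud servers rather than a single workstation.

    # Toy stress test: fire batches of concurrent GETs at the site and
    # report how many succeed and how long they take, stepping the
    # concurrency up until things degrade. URL and step sizes are
    # placeholders for illustration only.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://www.example.com/"

    def hit(_):
        start = time.time()
        try:
            ok = requests.get(URL, timeout=10).status_code == 200
        except requests.RequestException:
            ok = False
        return ok, time.time() - start

    for concurrency in (10, 50, 100, 250):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(hit, range(concurrency * 4)))
        successes = sum(1 for ok, _ in results if ok)
        avg = sum(t for _, t in results) / len(results)
        print(f"{concurrency} concurrent: {successes}/{len(results)} ok, avg {avg:.2f}s")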

Deane (27:47):

Okay. So once the site is up and running, are there things that you might see in production apart from load? Are there things that you might see in production that just did not exist in test? And how do you make the determination like, "We need to abort, we need to roll back?" Or how do you make the determination, "Let's just fix this in the legendary 1.1 release?"

(28:12)
There's the 1.0 release and there's always a 1.1 release. We always have great plans of things that get pushed to 1.1. But how do you decide that this is something we're just going to live with for 48 hours until we can launch a fix?

Bob (28:22):

I mean, it's definitely a collaborative decision between the technical team and the product owner. So we ask the client, "Hey, there's this problem, can you live with it?"

(28:34)
It also depends on how hard it is to roll back. Because if we don't get to do our usual just switching traffic, if we can't do it the easy way, if we had to actually change hardware or something and rolling back to the old website is going to be a significant effort, you've got to take that into account when deciding, are we going to roll back to the old site or are we just going to push forward with what we have? So it's definitely a balancing act, I think, of what you're willing to tolerate being less than perfect on your website and the severity of the thing that is wrong on the website.

(29:06)
We had to roll back a deployment once because, it was for a banking website, and the online banking login form was going to the wrong place. When it was first developed, it was going to the correct place, but nobody looked at it again until launch, and by that time it was going to the wrong place.

(29:22)
Well, that's a huge part of what people come to the banking website to do is to log into their accounts. So we could not let that stand. We rolled back. But again, it was just a matter of switching traffic back to the old server in that case.

Deane (29:35):

Well, thank you for talking to us about QA and make sure [inaudible 00:29:39]. I feel like we should thank Jenna too. Jenna, thank you for being on our podcast, even-

Corey (29:44):

Though I Slacked her and let her know we were talking about her.

Deane (29:47):

You weren't here to defend yourself. I've never met you, but I feel like I know you.

(29:51)
So Bob, thank you very much for joining us and explaining what you do to get websites actually up and running out the door. We appreciate that.

Bob (29:58):

Thanks for having me.

Corey (29:59):

Thanks Bob.

(30:07)
Deane, we are back. We just talked to Bob.

Deane (30:09):

And Jenna.

Corey (30:10):

And Jenna. I mentioned this during the interview, but I did send a Slack to Jenna to essentially say, "Hey, we're talking about you. Just so you know."

Deane (30:19):

I know, we were talking about her so much. I actually went to Blend's website and looked at the team because I wanted to see what she looked like. Because I thought, I need to have a vision in my head of who this person is that has become the fourth character in our podcast.

Corey (30:29):

I don't know what else. Is there anything else to say about QA? I think we've covered the entire topic. I mean we-

Deane (30:32):

No.

Corey (30:35):

... we haven't. I do want to say: if there's anyone who listens to this and has any thoughts or interesting philosophies about QA, I promise you there is a gigantic opportunity to become a person who can talk about QA in a way that is interesting because no one talks about it. No one writes about it. There's nothing about it. Somebody coming into a QA practice will have zero place to start.

Deane (31:04):

I don't think anyone aspires to do QA. And I think that's really unfortunate. I think somebody should aspire to break new ground in QA. I think it would be really interesting for someone to specialize in QA and want to stay in QA for the rest of their career and really break new ground in QA. Are there QA thought leaders? Is that even a thing?

Corey (31:29):

Yeah, I think what's missing from the, I guess, discourse around QA is that it is all software focused. When you talk about QA, you're talking strictly about software, and it goes a lot deeper. It's more product focused and less web focused. And so there are thought leaders in it. I mean, there's books written about it. You gave me one and I can't remember the name. The Inmates Are Running the Asylum. What's the name of the book?

Deane (31:56):

The Inmates Are Running the Asylum is more about usability, but it's interesting because there are so many overlaps with QA and other stuff. There's overlap between QA and UI design because the QA person is the person that goes back to the designer and says, "Yeah, this sucks, and nobody's going to understand this." So it touches everything.

Corey (32:15):

Well, maybe this podcast will be it. This podcast will be the first and only resource on web QA at this moment.

(32:22)
I do want to give a big shout-out to Bob Davidson. He's, again, our Director of Development at Blend. I forgot to have him mention this, but he does have his own YouTube series called Coding with Bob, and you can actually catch that series on YouTube. You just search Coding with Bob. It's great. He's very good at it.

(32:41)
The Web Project Guide is a product of Blend Interactive, a web strategy, design, and development shop that builds websites and guides teams through complicated web and content problems. That includes, yes, QA and actually launching your site. We promise we'll actually launch your site. We're dedicated to making great things for the web, and this is one of those things.

(32:58)
This is episode 22 of the Web Project Guide, Deane. There's only two chapters left in this book.

Deane (33:03):

That's crazy. Then we're going to take a break.

Corey (33:04):

And we're going to take a little break. We'll talk about that in a future episode. But until then, you can read the full text of this chapter, which is Test and Launch the Site at webproject.guide/launch.

(33:16)
If you're at the testing and launching phase of your site, you might think that everything is done and you're ready to go, but we can tell you a hundred percent that that is not the case. Your site is a living, breathing thing and it will undergo dozens of new web project cycles before it's finally replaced.

(33:31)
So if you want to know more about how to keep that cycle running smoothly, get a physical copy of the Web Project Guide, or you can get a digital copy and you can get either one of those at order.webproject.guide.

(33:43)
Go give us a five star review because we have egos and we like to see it. It gives us a little shot of adrenaline and makes our day worthwhile.

(33:52)
But anyway, thanks for joining us this month and subscribe and share, and we'll be back next month to talk about what happens when your perfect web project has become exposed to the harsh realities of life. Until then, go do amazing things.

Deane (34:05):

Good luck.