[00:00:00] Josh: Welcome to another episode of Dev Shop Stories. My name is Josh and I've got Tanner here with me today. Today we're gonna share a story about internal processes and why they are important. So the story starts off with a client that hired us to do kind of a team augmentation. The first step, I remember, Tanner, was we were actually asked to do kind of a gap analysis, and you're the one that actually performed that, if you remember, right?
[00:00:26] Tanner: Yeah, yeah, I was. They were over a year behind in development. They had missed their second or third launch, just super far behind. The executive team was extremely worried, so they hired us, from a really good referral that we have, to come in and do a gap analysis on their system, and really just see how far off they were, cuz they had continually been told that they were closer. It's close to being done, or it is done, in fact. And because of that, the executive team was like, okay, well, if it's done, why isn't it live? Anyway, so yeah, that's kind of how it started.
[00:01:00] Josh: And I remember when we were first introduced, we asked, oh, show us the sandbox site, or show us the site. And we pulled it up, and it's like, it doesn't look at all done. It doesn't have any mobile friendliness to it, or responsive design, or anything like that. And it's just like, maybe there's just something they haven't pushed yet, or, you know, what's going on here? Right.
[00:01:22] Tanner: Yeah. It's like, are you guys sure? You're sure this is the sandbox?
[00:01:25] Josh: Yep. And you were eventually given access kind of into their code set and into their deployment processes, their database. You got access to that. And what did you kind of find and see?
[00:01:36] Tanner: Yeah. Uh, come to find out, there was zero process. None. Even trying to figure out what branch of code to look at was a task in and of itself. I had to get on a phone call with more than one developer at this other organization, and the first guy I talked to was confused. He wasn't quite sure which branch was deployed to production or deployed to the sandbox. Traditionally, you try to follow, you know, some naming convention. You have master or main for your production environment, or production, right? Something that makes sense. But this one was like the guy's name, underscore, sandbox one, or something random like that.
[00:02:12] Josh: Yeah. Or was it like Jim's fixes, or something like that?
[00:02:15] Tanner: it was, it was some arbitrary thing that, you know, nobody, nobody knew.
But yeah, from that, the outcome was there were no processes in place. Everybody was kind of a lone wolf developer, did their own thing, worked on their own branches, so much so, in fact, that they didn't do code reviews with one another. There was nothing. People just did their own thing, and they got to the point where the person who had last worked on stuff, or so they thought, would go through and cherry-pick commits to deploy. So it was pretty awful. And trying to present that to an executive team of non-technical people, I mean, that was a challenge in and of itself, trying to get them to understand the lack of process and how bad it actually was without making them feel like they had just wasted a whole year of development on this e-com system.
[00:03:07] Josh: Right. And it was an e-commerce site, so obviously very important to have everything tied together really tightly. I mean, people would be transacting money, buying on it, and it was the core business of this company. I mean, they did, you know, significant volume through their current e-commerce site. And this was to be an upgraded e-commerce site where the marketing team could now have a content management system, a CMS, where they can actually go in, make tweaks and changes to the text, change the images, and all of that kind of stuff. And when we actually did the gap analysis, Tanner, you actually wrote a 42-page PowerPoint going over all the different findings from the gap analysis on all of that.
And, you know, just basically found tons of issues, tons of errors. But there is a happy ending to this story, in that they engaged us, Red Sky Engineering, to actually come in and do some of the team augmentation, and we were able to apply some of the processes that we've developed over time to their system. We weren't able to get all of them in, but a number of them, and that actually helped us accelerate it. And we were actually able to, remind me again, how long did it take to go from starting to launch?
[00:04:18] Tanner: It was seven weeks. So we did the gap analysis, which was, I think, two weeks at the time, to go in and do that deep dive. They loved what we had, once we finally convinced them to audit it and view it by bringing in a technical resource. And then we went into build and implementation. They opted for a time-and-materials team augmentation versus fixed bid. We tried to get them to do a fixed bid, as that's what we prefer to do, but they decided to do team augmentation, and it was seven weeks from the time we touched their code to getting it fixed, deployed, and putting in as many processes as we could. They still limited us a fair amount, but they're still, you know, actively working through putting processes in place.
[00:05:00] Josh: But it went live, right? Oh yeah. Seven weeks. And I think there were very few issues. A couple, you know, hot fixes immediately after the rollout. I mean, massive code changes, a complete whole-website overhaul, but we were able to fix those in real time and have a successful product, and they've kept on engaging us for quite a while afterwards.
[00:05:19] Tanner: Yeah. Super happy with the launch. Uh, we expected a fair amount of triage, you know, at least for a week or so, especially with the drastic changes that had to take place. But it was a day, it was one day of triage. It was awesome. Team killed it.
[00:05:32] Josh: So the rest of this episode, we're gonna go over the internal processes that Red Sky Engineering uses.
And I'm gonna kind of do this as a question-and-answer type of format. So, Tanner, we do kind of a hybrid Agile system. What does that mean?
[00:05:46] Tanner: Yeah. So, Agile's like this buzzword everybody wants to throw around and talk about, or Scrum or whatever, and, you know, it's great. There's a lot of good, you know, methodologies and processes that come out of that.
But what we've found, for us, going through dozens of development projects, is that the most effective thing is this hybrid approach, which is kind of waterfall with sprints. We understand the project as a whole, from a high level and in depth down to the very little details, as we'll kind of get into. And then we put that into sprints and some of these other Agile-type ceremonies, using Kanban and stuff like that. So we kind of break stuff down into sprint cycles to do. So yeah, it's this weird hybrid approach, but it's super, super effective, and it allows us to make a lot of good changes that need to happen.
[00:06:39] Josh: So let's imagine a scenario. We have a new customer that wants something built. What's kind of the first step that we'll go through with them?
[00:06:48] Tanner: Yeah. So once it's through the engagement and we get kind of the initial introduction with a client, and we understand what the project's gonna be, we put them into what we call a design and discovery phase. So we break our processes out into multiple phases, and the first one is that design and discovery. We take our designer, or our design team, and the project manager, and they work with the client, the product owner. We identify who these people are, who sits in what positions. So we require from the client that they have a product owner, and they go through and do a full end-to-end design of all of the requirements, all of the user flows, all of that, in Figma.
[00:07:29] Josh: Right. And Figma being a, you know, web design tool that was actually just recently acquired by Adobe, so hopefully it doesn't change a whole lot, cuz it's an amazing tool. It works really well, and like you said, we design it all the way out in there and show them like a real prototype, so they can actually get the feel, the touch, see the colors. We do high-fidelity designs right out of the gate, cuz you can just do 'em so fast. You know, a lot of people talk about, you know, wireframing at first and then doing the high fidelity. We just go directly into that, right?
[00:08:00] Tanner: Yeah, we have our design team go right into high fidelity, and then they prototype it all out through Figma. It has a really good utility that allows these mockups to take place. And the awesome thing is that it lets the client get hands-on with it and make some of those decisions and those changes before we ever touch any code. Right? It's a lot easier to change the design, especially when you have a rockstar designer, than to change potentially months of coding.
[00:08:27] Josh: Sometimes clients have taken these designs and then they go and show their investors, or they show, you know, other sources of funding: this is what it's gonna look like. And then it might be, you know, multiple months before they re-engage with Red Sky.
[00:08:41] Tanner: Yeah. I mean, this is kind of the common theme that we've come to: we very much wanna partner with our clients and be a consultant as much as we are a development arm for them. So if the best approach is to go through the design and discovery and get them that design, I mean, it's theirs, right? At every one of our phases, we try to give them a deliverable or a set of deliverables. So if that's all they want to engage us for, and they take that to raise more capital or whatever they want to do, that's totally up to them. And if they want to continue in the phases and the processes, we do that too.
[00:09:14] Josh: Yeah. Now, Figma isn't the only design tool that we'll use. We make use of another tool. What is that?
[00:09:20] Tanner: We use Zeplin. Zeplin is a really good transition tool. What we came to find is it's difficult to bridge that gap between the design thought processes and mentality and the developers. There's a disparity there. So we use Zeplin to have behavior annotations and comments, and get that dialogue going between the design team and the development team. And then the other really nice thing is it allows us to stamp a set of designs with a version. So when we go through and get that final sign-off with a client, it's based on this version. So we know, at this time, this is exactly what the requirements were.
[00:09:58] Josh: Absolutely. And then after we work through it in Zeplin, we move on to kind of the bidding phase, right? Everybody wants to know what it's gonna cost and stuff. Explain how we break that down.
[00:10:09] Tanner: Yeah. So we do an entire project breakdown. This is where that kind of waterfall thought process comes into play. At this point, we know the user flows, we know the user stories or the user experiences that need to happen, and we have all of the designs done. So we take those designs, the annotations, the notes, all of that stuff from those client discussions, and we do an entire project breakdown into epics, into stories, and into tasks.
[00:10:36] Josh: Now, what is an epic?
[00:10:37] Tanner: An epic is a big conceptual user flow, right? It's like, for this user, this is this feature set, right? For example, if I'm doing a cart checkout in an e-commerce system, that's gonna be an epic.
[00:10:49] Josh: Whereas a story, like an example of a story in that same epic, is what?
[00:10:54] Tanner: Yeah. An example of a story would be like adding a single item to a cart, right? And then another one would be getting the total calculations tied into whatever third-party calculation engine, or doing it yourself.
[00:11:05] Josh: So you break it down: at the highest level are epics, and then each epic would be composed of a number of stories, and a story could be composed of a number of tasks. And those tasks are what?
[00:11:15] Tanner: Yeah. A task would be, you know, tie in tokenization with Adyen or with Stripe or one of these other payment processors. It's specifically the actionable item that the developer can do.
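The epic → story → task breakdown described here, where each task carries a man-hour estimate that rolls up through stories into epics, could be sketched as a small data structure. The names and hour figures below are hypothetical, just to illustrate the shape:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    hours: float  # man-hour estimate, measured with the same "stick" for every task

@dataclass
class Story:
    name: str
    tasks: list = field(default_factory=list)

    def hours(self) -> float:
        # A story's estimate is the sum of its tasks
        return sum(t.hours for t in self.tasks)

@dataclass
class Epic:
    name: str
    stories: list = field(default_factory=list)

    def hours(self) -> float:
        # An epic's estimate rolls up from its stories
        return sum(s.hours() for s in self.stories)

# Hypothetical example using the cart-checkout epic from the episode
checkout = Epic("Cart checkout", stories=[
    Story("Add a single item to cart", tasks=[
        Task("Add-to-cart API call", 4.0),
        Task("Cart item UI component", 6.0),
    ]),
    Story("Total calculations", tasks=[
        Task("Tie in third-party calculation engine", 8.0),
    ]),
])

print(checkout.hours())  # 18.0 estimated man-hours for the epic
```

Rolling estimates up this way is what makes the later bid and efficiency numbers possible: every figure traces back to a task-level estimate.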
[00:11:29] Josh: Perfect. So now you have a list of all these tasks. You will do what with them?
[00:11:34] Tanner: Yeah, so we go through a bid process. Our project manager and team lead will, uh, sit down, and it takes a couple days. It's a pretty daunting task, but it's important. They will sit down and go through all of the tasks, every single one of 'em, and they bid things out in man-hours. So, this task, this API call or this basic CRUD call, is gonna take four hours to complete, right? And the important thing with that measurement is it has to be consistent. The hard thing is, you know, we've tried several other approaches with the whole team involved, and then you're measuring with a different stick, so to speak, right? So having that consistent measurement is the important part, which is why we have the team lead do it, and he tries to measure relative to the lowest developer on the team.
[00:12:24] Josh: Right. So not necessarily him doing the actual work and, you know, being super fast and knowing how all the pieces come together. He's trying to think through the lens of the junior or the mid-level developer that might be working on it.
[00:12:36] Tanner: Exactly. He tries to measure to that lowest level. The nice thing there is we cover the base case, and then we get efficiency gains throughout the rest of the project based on that measurement. Right? With the team lead, there's a lot of mentoring that has to happen, there's a lot of oversight that has to happen, and it helps kind of bridge that gap a little bit with some of that oversight that needs to take place.
[00:12:56] Josh: Taking a step back, you're obviously not breaking these down on paper and pencil or whiteboards. What software are you using to actually break these down and record the values?
[00:13:07] Tanner: Yeah, yeah. Certainly not paper and pencil, that would suck. We use Jira. So, uh, we have the different compositions for team or company projects in Jira, and we break them down in there. Jira has a massive, massive suite of tools that really allows us to do pretty much everything that we need. So we use a couple of 'em. We primarily use Jira for the Kanban and the tasks, and then we also use Confluence to help annotate the acceptance criteria associated with the epic, the user story, and the specific task. I know it's a few of 'em, but it's all that Atlassian suite.
[00:13:44] Josh: And then once you do that, do you do any additional overview of the project?
[00:13:50] Tanner: Yeah, so at that point, once the developer, the team lead, and the QA, or the PM rather, go through and do their full deep dive on everything, we will have technical oversight from a senior architect, who will come in and evaluate the more complex pieces.
[00:14:06] Josh: Kind of like an outside perspective.
[00:14:08] Tanner: Absolutely. Yeah. Somebody who hasn't been in the weeds on everything, who is gonna do a true sanity check on the complex issues. And nine times out of ten, they find a few things that just need to be tweaked, right? So we adjust those time estimates to something a little bit more accurate, a little bit more realistic, based on experience and stuff like that. With, again, that third-party lens.
[00:14:31] Josh: So you have all this in Jira, you have all the different sprints and tasks broken down, you've got man-hours associated with them. What next? What do you do next?
[00:14:39] Tanner: Yeah. So, it's really nice. From doing this for quite a long time, we have a lot of historical data. Based on past project performance, we have a good evaluation of what a healthy project looks like. So we've developed a calculator, basically a project calculator, that uses, like, Google Sheets and stuff, and it's all driven off of this historical data. So we can plug a project in, based on the total man-hours that we've estimated, and it's gonna give us a proper distribution of what a healthy project looks like. A pitfall that a lot of people get into is, it's like, oh, cool, well, that's gonna take me 10 hours to do. Well, that's 10 hours of development time. You also have meeting time, you have administrative oversight, you have PM time, you have code reviews, you have standups, you have all of these other things that occupy some of that time. So your eight hours is really 14 hours, or whatever that is, right? But we use historical data to derive what that breakdown actually is. So your traditional project is anywhere between 40 and 60% development time, and that other 40 to 60% is everything else, and there's a lot of it.
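The project-calculator idea Tanner describes, scaling raw development-hour estimates into a total budget using a historical development-time ratio, could be sketched like this. The 40-to-60% range comes from the episode; the split of the overhead into categories is a hypothetical illustration, not Red Sky's actual sheet:

```python
def project_breakdown(dev_hours: float, dev_ratio: float = 0.5) -> dict:
    """Expand estimated development hours into a full project budget.

    dev_ratio is the historical fraction of total project time that is
    pure development -- the episode puts it between 0.4 and 0.6.
    """
    if not 0.4 <= dev_ratio <= 0.6:
        raise ValueError("dev_ratio outside the historically healthy range")
    total = dev_hours / dev_ratio
    overhead = total - dev_hours
    # Hypothetical split of the non-development time across categories
    return {
        "development": dev_hours,
        "meetings_and_standups": overhead * 0.30,
        "code_reviews": overhead * 0.25,
        "pm_and_admin": overhead * 0.25,
        "qa_support": overhead * 0.20,
        "total": total,
    }

# 400 estimated dev hours at a 50% historical dev ratio
breakdown = project_breakdown(400, dev_ratio=0.5)
print(breakdown["total"])  # 800.0 total hours -- half of them are not coding
```

The point of the sketch is the division, not the categories: whatever the overhead buckets are, the total is dev hours divided by the historically observed dev ratio.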
[00:15:45] Josh: So to get that historical data, it almost feels like you would have to use some type of, you know, really refined process or tools to gather that information. What kind of software does Red Sky use?
[00:15:56] Tanner: Uh, yeah, we use a good suite of tooling. So again, with Jira, we have a Harvest plugin. Harvest is time-tracking software. So we have every developer track the time that they're executing against a task, through Jira, into Harvest. And then we take that data, pull it out, and extrapolate what the time took relative to the estimate. And this is why it's so important that we have that same measurement, right? If the measurement changes, then your data's skewed and it's all kind of varied and arbitrary.
[00:16:29] Josh: The other thing is, Harvest would not necessarily just be on the task. It'd be all their meetings, whether they're doing a code review, and so then you can get the ratios of what they actually spend a normal eight-hour workday on, and then you feed that into your calculation engine.
[00:16:44] Tanner: Yeah, absolutely. So we're pretty particular about having people track and attribute their time. Whatever it is, I don't necessarily care what it is; we just need to know, so that we know what a healthy environment looks like. If you're in a meeting, it's a distinct task code for a meeting. If you're doing pure programming, that's a distinct task code, billed against the same ticket. Code reviews, again, kind of everything. And we have it through development and QA. It's really nice, and it allows us to have that historical data.
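Deriving those workday ratios from tracked time is a simple aggregation: group the entries by task code and divide by the grand total. The entries below are made-up Harvest-style records for illustration, not a real Harvest export format:

```python
from collections import defaultdict

# Hypothetical time entries: (developer, task code, hours tracked)
entries = [
    ("dev_a", "programming", 5.5),
    ("dev_a", "meeting", 1.0),
    ("dev_a", "code_review", 1.5),
    ("dev_b", "programming", 4.0),
    ("dev_b", "meeting", 2.0),
    ("dev_b", "qa", 2.0),
]

def time_ratios(entries):
    """Aggregate tracked hours by task code and return each code's share of the total."""
    totals = defaultdict(float)
    for _dev, code, hours in entries:
        totals[code] += hours
    grand = sum(totals.values())
    return {code: hours / grand for code, hours in totals.items()}

ratios = time_ratios(entries)
print(round(ratios["programming"], 3))  # 0.594 -- roughly the 40-60% dev range from the episode
```

Ratios like these, accumulated across projects, are what would feed a calculator like the one discussed above.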
[00:17:13] Josh: So now you've bid out everything, so you have a full breakdown. You know what the costs are gonna be. What do you do with that information?
[00:17:21] Tanner: Yeah. So at that point, once we have a full bid, a full breakdown, all of the acceptance criteria associated, and high-fidelity designs, we walk through that with the client and give them an overview: hey, this is what you want, and this is exactly how it needs to be executed.
[00:17:37] Josh: And by the way, here's the price.
[00:17:39] Tanner: Yeah, exactly. And it's not cheap, right? People have this weird assumption that software is not expensive, but they're wrong. So we walk through that, and based on their budget and the timeframe that they're looking at for launch, if they have one, we either build out the entirety of it, or we cut features, we scope things appropriately. We put it into an MVP, we put it into a post-production build. We kind of help them and try to work with them to break out what their project looks like. This is their ultimate vision; now, do you need the whole thing up front, or do you wanna pare it down, get a cheaper budget, start getting some of that opportunity cost back, and we can continue building on some of the other pieces that are just bells and whistles?
[00:18:21] Josh: So there's a couple different ways of costing a project, right? You can do time and materials, and you can do a fixed bid. Explain to me what Red Sky prefers, and how'd you get there?
[00:18:30] Tanner: Yeah, so we wanna do fixed bid with everything that we do. Fixed bid is fair to the client, it's fair to us. It gives them an exact understanding of what they're gonna get, and the timeline, and all of that stuff. So we want to do fixed bid 100% if we can. We started off with time and materials, and used the data associated with the time-and-materials builds to understand the velocity and kind of the efficiency that we have with our development team. And based on that, we can do projections and hit a fixed-bid model. And it's been phenomenal. We've proven it out more than once at this point, and it's excellent. Clients are extremely happy, cuz they have a very well-polished project that's on budget, on timeline. I mean, we usually get it done before we even projected we would, but it's on budget, it's on timeline, it's polished to a really good level. It really is the right approach.
[00:19:21] Josh: All right, so the client says, yes, I am accepting this bid and we're gonna go forward. So we now move into kind of the execution phase of the project, right? And so explain to me the cadence, what happens, you know, what, what's the daily life of a developer, you know, all that kind of stuff, right?
[00:19:38] Tanner: We call it the development phase, but this isn't just development, it's development and QA, so we kind of couple the two together. And the way that we operate is we run in two-week sprints. This is where that hybrid Agile process comes in. So we run in two-week sprints, and we also do a biweekly sprint planning with the client. And that is one of those expectations that we have to set up front: hey, product owner, we have to have you in these meetings, either you or somebody who can speak for you, but preferably you, so we have your interaction, your feedback. You can see what's coming up, you can see what we're doing. If you want to have other things prioritized over different pieces, or you need it for some meeting, or whatever it happens to be, right? We want that involvement and that engagement, and then it also allows us to communicate the state of everything and the progress being made and all of that stuff.
So we do that, and then we do an executive meeting with them as well. It's a biweekly executive meeting. Again, we expect the product owner, the PM, the senior architect, and traditionally the stakeholders for the client, and our kind of client rep or, you know, sales guy, to really have that dialogue. And it's really an open forum: let's talk about how happy you are with things, with the progress. Are there any red flags? Did things change on your end that we need to account for and put into kind of a change order process? Right? If we're going fixed bid, well, clients are gonna change stuff. They always do. We have to kick that off into some change order process, cuz it has to be accounted for and attributed a little differently. So we do that.
[00:21:09] Josh: Now, I would say, I mean, a lot of people listening that are familiar with Agile and Scrum and sprint cycles and stuff, they might call this kind of a postmortem, but it's really not, in that sense. I mean, a postmortem is reflecting back on the last two weeks: what are things we could do better as developers and that. But this is really kind of a check-in touch point with, you know, basically the people that are providing the money, the client specifically, in trying to understand, you know, are you guys happy? Are you solid with the success? Cuz we found that if you delay this conversation to the end, when you deliver the project, there might have been things that just completely dropped into the weeds. And sometimes, right, it allows us to bring in stakeholders that are, you know, very high level, but they get to see, you know, just little snapshots of it. But we get to basically understand: hey, are you happy with this? You know, cuz we might deal day to day with their PM or their cohort, and everything sounds groovy and it's just, you know, going all right. But when we get that executive in there, we can say, here it is, you know: is this what your vision was, what your goal was, what your thoughts were? We can actually catch that we are heading down the wrong path and correct course before, you know, issues get big.
[00:22:27] Tanner: Yeah. It helps stop those issues from propagating. Absolutely, 100%. On that point, you know, the product owner doesn't have to be the visionary for the company; nine times out of ten, it isn't. They're a process person, not that creative mind. So it's imperative that those stakeholders are on the call, to make sure that their vision is what's being brought through from their team, and the way that we understand it. So it really allows us a quick course correction if we have to. So it's really important to make sure that we have these meetings, and then make sure that we kind of hold the client accountable for those meetings as well. It's like, hey, we have to have this if you want this to be successful. So then we get into the day-to-day of a developer.
The day-to-day for the development phase is a daily standup. This is a 10-to-15-minute, developer-level ceremony, right? But we expect and invite the product owner and whomever on their team to be involved. And it's: hey, how was yesterday? How is today? What are you doing today? Is there anything in your way? If there is, great, let's pull that off into another meeting with the people who need to be there. But it just allows that visibility into the expectations going forward. And again, the nice thing is we typically overperform on expectations, and they get to see a lot of rapid progress very quickly. So it's really, really exciting.
[00:23:50] Josh: And then, you know, so you have this daily standup, and usually they're held in the morning, right? And then after that's over, what does a developer go do?
[00:23:59] Tanner: Yeah, yeah. They get into their daily execution, right? So we slot anywhere between six and seven hours a day for, you know, quote-unquote development-level tasks. That is coding, code reviews, pair programming, stuff like that. So the developer will go to Jira, onto the Kanban board, and we have a whole bunch of different columns on there for different things, to track progress on tickets. It's how Kanban boards are.
[00:24:23] Josh: So say they don't have any tasks; they have to go there and find a task to grab, right?
[00:24:29] Tanner: Yep, yeah. So they would take a task out of the backlog, or out of the current sprint, and they would move it into the doing column and add their name, right? That's the very first thing that they're gonna do. And we don't try to silo people. They tend to do it themselves anyway; people like to work on the front end or the back end or whatever, but we don't wanna silo people into different areas. It's an open board, you can pick whatever you want. The PM's not gonna assign those tasks; it's up to the developer. So they'll go put their name on a ticket and move it into our doing column. And at that point, through Jira, they'll hit start, basically, on our Harvest plugin. That tracks and attributes that development time, and it's whatever they're doing on that task. If they're doing planning, great. If they're programming, great. If they're doing peer review or pair programming, great. Right? So they hold that attribution on all of those items.
[00:25:18] Josh: And that's, again, very important with that data collection, being able to go over historicals and track that. So much so that I believe you monitor their efficiency score, such that if a ticket was maybe gonna be 10 hours, you know, 10 man-hours of development time, and they clock eight hours going all the way through, they actually get an efficiency score higher than a hundred percent, right?
[00:25:43] Tanner: Yeah. And we have kind of buckets and breakdowns of where we expect efficiency based on the skill set and type of that developer. A junior developer, we know they're not gonna perform at a hundred percent efficiency, and that's okay. It's a known value, right? But if we have any crazy outlier who's underperforming or overperforming, right, we can go look at it and see what's going on there. Maybe the ticket was bid wrong, or, you know, whatever it is. But yeah, we try to evaluate the individual efficiency of every developer on the team. And then there's a collective team efficiency as well, which is really the metric for the overall project status. But tracking it at the individual developer level allows us to see how that person is doing specifically, and then gauge their skill set. Also, is it a front-end task? Is it a back-end task? Where are their strengths and weaknesses? How can we help build them up in these other areas? Or is it just their interest, or, you know, whatever. It really allows us that granular, granular detail on the developer, you know, on every single one of them.
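The efficiency score described here, where finishing a 10-hour ticket in 8 clocked hours scores above 100%, reduces to a simple ratio of estimated to actual hours. A minimal sketch, with the over/under thresholds as hypothetical values:

```python
def efficiency(estimated_hours: float, actual_hours: float) -> float:
    """Efficiency as a percentage: above 100% means faster than the estimate."""
    if actual_hours <= 0:
        raise ValueError("actual_hours must be positive")
    return 100.0 * estimated_hours / actual_hours

def flag_outlier(score: float, low: float = 60.0, high: float = 150.0) -> bool:
    """True if a score is a 'crazy outlier' worth investigating.

    The 60/150 thresholds are illustrative, not Red Sky's actual cutoffs.
    """
    return score < low or score > high

# The episode's example: a 10-man-hour ticket finished in 8 clocked hours
score = efficiency(10, 8)
print(score)  # 125.0 -- above 100%, but not an outlier under these thresholds
```

The same ratio works at the team level by summing estimated and actual hours across all tickets before dividing, which is the collective metric Tanner mentions.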
[00:26:43] Josh: Now, one of the things that I find really amazing about this is that developers can do some introspection on the tasks themselves. So if a task was bid at five hours, and they get up to five hours and realize, I'm not even close to being done, that allows them to throw up a red flag and go over and say, hey, maybe I'm just stuck, or I need some help, or whatever. Because their ultimate goal is to be the most efficient developer they can be, right? And so that leads to these types of scenarios.
[00:27:11] Tanner: Yep, it does. And we try to incentivize efficiency, right? We try to gamify it a lot with gift cards or whatever, to try to incentivize this higher efficiency, letting the individual developer be accountable for that. So it's like, hey, you should plan early, because this is gonna help your overall efficiency, this is gonna help you hit your numbers, right? But if they are underperforming in that sense, you had a four-hour ticket and you're at four hours and you're halfway done, it's like, okay, well, great: you can go talk to your team lead or whomever, doesn't even have to be on your team, but somebody, to help you get a second set of eyes and approach it properly, right? Cuz you're already behind. Or you pull off the ticket and do something else, and you just have to kind of eat that negative efficiency of, hey, four hours, I didn't do anything, in effect.
[00:27:57] Josh: All right, so the developer finishes their code and they just push it right to production, right?
[00:28:02] Tanner: Hey, at some companies, they don't. Yeah. Uh, no. So for us, they move it into a waiting-for-merge column, and we put branch restrictions in our repository. We use GitLab; it gives you a lot of granular, nice controls. So we put merge restrictions on all of our branches and stuff, so it has to go through peer review. 100%. Every single thing you're doing goes into peer review, where anyone on the team, again, we don't try to silo and restrict things for people, we want to help everybody grow. So they will pull tickets. They put it out in the Slack channel: hey, here's my merge request, this is what it is, with a good detailed comment on their commit. And somebody will go pull that merge request or that pull request and go through it, right? They do a line-by-line review on it, check for semantics, logic, things like that. Several times we'll pull in the code, just pull their branch and get it up and running, something complex, when we really want to make sure that it's good to go before it gets pushed out.
[00:28:58] Josh: And it's not just code reviews that happen in GitLab, right? There's testing that actually runs automatically on each branch commit and push, right?
[00:29:06] Tanner: Yeah, yeah. So we have CI/CD. For our continuous integration there's automated testing that runs: we do a build test, we do a Prettier test, we run unit tests.
[00:29:18] Josh: Prettier. What's that?
[00:29:20] Tanner: Yeah. So it's a formatter, kind of like a linter. It formats the code, makes it pretty.
[00:29:24] Josh: Makes the code uniform, right? Yeah. So you don't have these wars over whether the curly bracket belongs on the line above or below.
[00:29:31] Tanner: Yeah. Yeah, exactly. It standardizes your code base regardless of the individual developers' preferences.
So yep, they do that. For continuous integration it has to pass those. If it fails any of those tests, it's failed, right? You have to go fix your code, fix whatever's going on. Or if there are comments, right, those comments have to be resolved before your code's gonna be approved and pushed.
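A minimal GitLab CI configuration for the kind of pipeline being described might look like the sketch below. The job names, scripts, and the Node.js toolchain are assumptions for the example, not Red Sky's actual setup:

```yaml
# .gitlab-ci.yml -- illustrative sketch only.
# Three stages mirroring the pipeline described: build, Prettier check, unit tests.
stages:
  - build
  - lint
  - test

build:
  stage: build
  script:
    - npm ci
    - npm run build

prettier:
  stage: lint
  script:
    # Fails the pipeline if any file does not match Prettier formatting.
    - npx prettier --check .

unit-tests:
  stage: test
  script:
    - npm test
```

If any job fails, GitLab marks the pipeline failed, which, combined with branch protections and unresolved-discussion rules, is what keeps the merge button grayed out.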
[00:29:54] Josh: And GitLab will even block you. They'll just gray out the button that says approve or merge, you know?
[00:30:00] Tanner: Yeah. So again, we have it set up on the branches where that's the process, right? We want everything addressed, whether you fix it or you come to common ground with whoever made that comment or whatever.
Right. There has to be a resolution to all of those items.
[00:30:17] Josh: all right, so it gets through all that. It gets approved and merged and then what happens?
[00:30:23] Tanner: At that point it goes into a waiting-for-build stage. And this is build, deploy, whatever, right? Depending on the type of code base that you're working on.
If it's a mobile app, then it's a build and a deployment there. If it's a REST API or whatever, it's a deployment out to that. So it sits in there until the deployment takes place. The deployment process we don't necessarily formalize; it's kind of per the needs of the project. But we'll go through and do the deployment and then move those tickets into another stage, which is the developer review stage.
And this too is kind of the CD, the continuous deployment, right? It gets approved, and part of that pipeline in GitLab is to deploy out to the servers. So it deploys to our sandbox server, and then again at that point we move it over to the developer review column.
[00:31:14] Josh: And so what does that mean? Developer review? Didn't they just review their own code or whatnot?
[00:31:18] Tanner: Yeah, I mean, they should, right? But it's one last sanity check that we put in, where the individual developer who worked on that task will check it on the server, right?
[00:31:29] Josh: So instead of just their local machine, where they've been running it and they've got it set up in a certain way; the server might be different.
[00:31:35] Tanner: Yeah. Not always, but most people have run into that situation. That's the "it works on my machine," right? It helps avoid that, because it's like, hey, you need to go sanity check what you built on the server. I don't expect you to do full QA.
We'll get into that later; we have a full QA team and QA process that goes on. But this is just, the developer knows how they built it, so make sure it works the way you built it on the server.
[00:32:02] Josh: And then what's next? So the developer says they're good, they're happy with that. What do they do with their ticket in Jira?
[00:32:09] Tanner: Yeah, so once they're happy and comfortable with what they've done, they move the ticket into a ready-for-test column, which is the signal to QA, right? That's QA's starting point.
[00:32:19] Josh: That's a separate team?
[00:32:20] Tanner: Yep, a fully separate team. And that's where they pick up those tickets, so one could sit there for a minute, you know, before QA is available to get to it. They try to go through all of those tickets in ready for test and start doing their attribution and everything, but they do it concurrently with the development sprint. They're actively QAing against those tasks.
And the reason is, we have a few different designations for what a ticket is, right? And this is kind of at the discretion of the QA personnel and the PM. Once a ticket is in ready for test, a QA person puts their name on it and goes through and starts doing the testing. They test function and they test look and feel, right? Make sure it looks the way it's supposed to relative to the designs that are version stamped in Zeplin, and the acceptance criteria on functionality. And if it fails functionality in any way that is defined in the acceptance criteria, it gets a label put on it called rework, and it gets kicked all the way back. Clear back. For all intents and purposes, it is not done.
[00:33:32] Josh: So those hours that you were tracking for the developer, maybe they completed it in eight hours of the 10, and it just got kicked back to them. Do they just restart their timer then?
[00:33:41] Tanner: They pick up where they left off, so they have two hours to fix whatever else they were gonna do if they want to hit a hundred percent efficiency. And it compounds like that. And if it gets kicked back again on rework, right, those are flags that we can identify.
Okay, well, why is this getting kicked back so many times? Is QA just being an ass, or, you know, is it the developer
[00:34:05] Josh: trying to speed through it and just trying to game the system, right?
[00:34:07] Tanner: Exactly. So this kickback process really helps us do that. And now how we account for that is through sub-tasks in Jira.
So a ticket will get a sub-task attached to it with the specific look and feel, or the specific function, that was missed. And it's very explicit, so we know how many times things were kicked back, and we know exactly why they were kicked back. And again, we can do this attribution. It's not a finger-pointing thing; it's just that I want to help people get better and figure out why these inefficiencies are happening, so collectively we can perform at a higher efficiency.
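A sketch of the kind of attribution those explicit sub-tasks enable; the data shape here, (ticket id, label) pairs with `rework-*` labels, is a hypothetical simplification, not Jira's actual API:

```python
from collections import Counter


def rework_counts(subtasks):
    """Count rework kickbacks per parent ticket.

    `subtasks` is an iterable of (ticket_id, label) pairs, where a label
    like "rework-function" or "rework-look-and-feel" marks a kickback.
    Tickets that never bounced simply don't appear in the result.
    """
    return Counter(
        ticket_id
        for ticket_id, label in subtasks
        if label.startswith("rework")
    )


kicked = rework_counts([
    ("APP-101", "rework-function"),
    ("APP-101", "rework-look-and-feel"),
    ("APP-204", "bug"),
])
print(kicked)  # Counter({'APP-101': 2})
```

A report like this is what surfaces the "why is this getting kicked back so many times?" question without anyone having to point fingers in the moment.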
[00:34:44] Josh: So you can just kind of say, you know, slow down, take a step back, go through your stuff yourself, click the buttons that you added, make sure it's working.
[00:34:53] Tanner: Exactly. Because the thought process is, even if you're two hours over your time estimate, you have an eight-hour ticket and you take 10 hours, well, that's still saving us a lot of time as opposed to it going all the way through the process: through code review, getting put into QA, QA spending time on it, it getting kicked all the way back, and then you going back into it after you've lost that thought and context.
Just take the extra time and do it right the first time; that's okay, right? We can work on the performance and the planning and some of those other things to help the initial estimation.
[00:35:25] Josh: You'd much rather have somebody that goes constantly over, hopefully not by a ton, but always delivers a solid commit, rather than somebody that's constantly under but gets kicked back 15 times, eating up time from QA and the systems.
[00:35:41] Tanner: Yeah, exactly. Because what that does is, I can account for your negative time. I can account for, hey, this person is always 15% over
[00:35:51] Josh: the time estimate, which is just this other person out there saying, hey, I think this is four hours. Well, for this person, it's usually a little bit longer than that.
[00:35:59] Tanner: Yeah. And that's okay, right? It's all relative, but it's all relative to that same measurement, and that's why it is so important. We can plan and account for that time. Versus stuff getting kicked back subjectively, where I have no idea; I can't account for that. So that's the one route, the rework route.
The other route is it gets created as a bug and put into the backlog. And the distinction there is: is it a direct issue? Is it direct to what that ticket is, to the task that person was working on? Again, is it missing functionality, or is the functionality broken or not doing what it's supposed to?
Or, from a UI perspective, does the look and feel not match our acceptance criteria of what it should look like? If it works the way it's supposed to and it looks the way it's supposed to, but something else is going on, we create a bug and it gets put into the backlog. Is it a new ticket? It's a new ticket, yep. Labeled as a bug versus a task.
[00:36:56] Josh: So maybe while they're exercising it, they found some other error state that happened.
[00:37:01] Tanner: Yeah, yeah. Some other thing, right? And it's not necessarily fair to attribute that to the developer who worked on this specific thing, because for all intents and purposes that was not their task.
So it creates this bug that gets put back into the backlog, and we can put that into a future sprint later on if it meets a proper level of priority and severity, you know, and the needs of the project.
[00:37:26] Josh: And then, and then the final scenario, right?
[00:37:28] Tanner: Yeah, yeah. Everything's great and it passes, right? Which is exactly what we hope for.
[00:37:32] Josh: Which then you move this Jira ticket over to completed. Completed, yeah. Perfect. All right, so taking a step back: you've done your two-week sprint cycles, you've had your demo every two weeks, you've had your executive meeting, you've had, you know, awesome timelines, and now you're coming to a wrap where you're actually gonna do the final deploy to production. Is that right? Would you take this project and just push it out to production after you say it's good enough, or what are the actual steps you do there?
[00:38:01] Tanner: So once we get through all of that stuff, we do that weekly demo, or the biweekly end-of-sprint demo, with the client.
We walk them through on our sandbox system: here's what we did, right? And we give them that really thorough walkthrough of a polished piece that's been through development, been through QA, been through rework. It meets all of these needs that we have. At that point, we put it into UAT.
[00:38:28] Josh: Which means what?
[00:38:29] Tanner: It's user acceptance testing, right? So, end of development: we've gone through the entirety of the development phase, and we want the client to get hands-on. We want them to walk through it, because we try to do this as much as we can up front, but it's a little bit different when you have it in your hands, right? If it's a mobile app and I'm the one clicking through all of this stuff, I can see that, oh man, I know we thought about this and we really pushed for this thing, but it sucks, right? Which happens, and that's okay. So we give it to the client and let them play around with it and see if it's exactly what they wanted or if they wanna change stuff.
[00:39:03] Josh: So why wouldn't you have done this at the beginning? Why wouldn't you have given the client just access to it and started going on it? You know.
[00:39:10] Tanner: It's development. Everything's broken sometimes, and that's okay, you know? Whether you overwrite something in the database or whatever it happens to be, stuff breaks, and you don't want the client to have that experience of something being broken, right?
So that's why we do the demo. We're gonna give you this controlled environment that's already been walked through. We know exactly what is going on. We've pulled out the priority bugs for future sprints and development, and this environment is good. We feel as though it's good enough to go to production.
Do you feel the same way? is really the question that we ask.
[00:39:45] Josh: Yep. And a lot of times these clients will bring on other people that might have been kind of in the backseat, or haven't seen the project but might have heard about it, and they'll usually get them beta keys or, you know, get them on the app store with beta app testing and stuff, right?
[00:40:00] Tanner: Yeah, we want as many hands on it as we can. Like, hey, this is up to you guys; if you say this is good to go, this is what we're going into the launch phase with, right? So we get into that UAT and it's like, hey, figure out the nuances. Let other people that want to have an opinion voice their opinion. If you wanna change stuff, great, let's kick that off into a change order. We'll put that right back into our development cycle. And if that scenario occurs, which a lot of the time it does once they get it in their hands, we put it all the way back through. So that goes back into design and discovery,
right. We have them go through, okay, well you wanted to add this new feature, which is fine. We're happy to do it.
[00:40:37] Josh: Do you wanna pause and delay your rollout, right, and add this new feature? Is that important enough, or should we do the rollout and actually do like a V2 kind of thing?
[00:40:47] Tanner: Exactly. But it allows us to have that conversation with the client, because otherwise it can be kind of awkward, right? They're always wanting to add stuff or change their minds or whatever. And that's okay, right? It's their product; they should have a strong opinion of how it works. But it allows us to give them that measured approach. It's like, okay, well, A, this is gonna cost more money; B, it's gonna extend your timeline and extend your launch. Is the opportunity cost associated with that worth it? Because if it is, great, we'll do that. Or, like you said, we can slate it for a V2 and get the MVP out there that we planned.
[00:41:21] Josh: Okay. So UAT comes and they're happy with it. They give the thumbs up, they want this thing launched. Do you go right to launch or what do you do?
[00:41:28] Tanner: Yeah, so our launch phase is really nice. We go into kind of a two-ish week load testing, beta testing period. We get a bunch of beta users on there from the client, and we're really working with them on planning this launch, right?
We want that launch to be this big, exciting thing. I mean, they've spent a lot of money and a lot of time, and you know, it should be their baby. So we work with them on this process, and we go into this beta testing. We do two weeks up to launch, we do our launch, and it's all hands on deck.
We've done the load testing, we know what their customers' requirements are gonna be and all of that stuff. We go into the launch and then we maintain it for two weeks post-launch, right? This is all encompassed in that fixed bid right up front, so they get two weeks of buffer.
If something's gonna break, there's a high probability it's gonna break within those first two weeks, right? Especially once you go from a beta group of 50 or a hundred people to, you know, 5,000 people. There's a lot more variability in that group. So if it's gonna break, it's gonna break in those two weeks.
[00:42:30] Josh: Perfect. And then after your launch, so you've gone two weeks, there's a maintenance phase, which is a whole other topic in itself, right?
[00:42:39] Tanner: Yeah, that's a whole other thing. But the important factor is, you have to have a maintenance piece. Either you have it internally,
[00:42:47] Josh: The code just doesn't run forever by itself while you build a billion-dollar company off of it?
[00:42:51] Tanner: Ha, yeah, you can totally do that with no additional development. No. You have to have maintenance. Either we're gonna do your maintenance for you, which we're happy to do, or you're gonna have internal maintenance and we're gonna offload it onto that team.
[00:43:04] Josh: And then there's a nice, you know, handoff transition, handing over the keys: credentials, source code, and all of that.
[00:43:12] Tanner: Yep, yep, absolutely. We cross-train that team and really try to get them up to speed so they can effectively maintain and support that product going forward. Or alternatively, we have a whole maintenance package and maintenance system where we'll host it, support it, and do all of that management as well, and then continue on to a V2 or whatever they want to do.
[00:43:35] Josh: Well, that's perfect. This is, again, a glimpse at the internal processes that Red Sky Engineering uses. It basically wasn't something we thought of on the first day and rolled out; it's been through years and years of trial and error and, frankly, failures of our own, and obviously, as the first story showed, from interacting with different developers and seeing how things were done and how things definitely shouldn't be done.
They were refined through that fire.
[00:44:01] Tanner: Absolutely. We've been there: we had no processes, we didn't know what we were doing. I mean, we did know how to code, but we didn't know how to formalize these processes. So we started there. Everybody kind of does. And then over time you build it into something that's awesome, and you know, it completely shifted the paradigm of things.
[00:44:21] Josh: Well, thank you for listening to our story. We'll be back next week with more stories, personal experiences, and advice on running a dev shop.
[00:44:28] Tanner: Awesome. Thanks guys.