Day: January 15, 2016
We believe we have mostly solved the performance issue; initial tests were promising. We are now in simulation, and the plan is to run a new file set each day and save the results to a TPS simulation database on the QA server. Apparently in production the application will only run about once a day at the start, which is a far cry from the original plan of every 15 minutes, but we've decided that running it that often is unnecessary. This process does complicate deployments, however: every time we fix a bug and redeploy, we have to rerun all the file sets before the testers can use the database again, and that will only get harder as the file sets pile up. We'll see how it goes.
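To make that concrete, the daily simulation run amounts to something like the sketch below: process one file set and write the results into a simulation database. This is only my rough picture of it; SQLite stands in for the actual TPS simulation database on the QA server, and `process_file_set` and all the table and column names are placeholder names, not our real code.

```python
# Hypothetical sketch of the daily simulation run: process one file set and
# write the results into a simulation database. SQLite stands in for the
# real TPS simulation database on the QA server; names are illustrative.
import sqlite3
from datetime import date
from pathlib import Path

def process_file_set(file_set_dir: Path) -> list[tuple[str, float]]:
    """Placeholder for the real TPS processing; returns (record_id, value) rows."""
    return [(f.stem, float(f.stat().st_size)) for f in sorted(file_set_dir.glob("*.txt"))]

def run_daily_simulation(file_set_dir: Path, db_path: str = "tps_simulation.db") -> None:
    rows = process_file_set(file_set_dir)
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS simulation_results "
            "(run_date TEXT, record_id TEXT, value REAL)"
        )
        conn.executemany(
            "INSERT INTO simulation_results VALUES (?, ?, ?)",
            [(date.today().isoformat(), rid, val) for rid, val in rows],
        )

if __name__ == "__main__":
    run_daily_simulation(Path("file_sets/2015-10-05"))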
Week 22 (September 28 – October 2)
More of the same as far as testing goes. Baseline is almost done, so simulation is up next. There is concern now that our application can't handle the workload required for production. We have been testing it locally with multiple file sets, and it really bogs down after about 10 file sets, which is bad since it needs to handle 30 without any problems. Most of the team has been staying late again trying to figure out how to speed up the application without affecting its data output. I've been staying late again to help out, which isn't bad because I'm getting free pizza again.
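For reference, the local load check we keep running is basically the sketch below: run file sets back to back, time each one, and watch where things start to drag. `process_file_set` and the directory layout are stand-ins I made up for this sketch, not our real entry point.

```python
# Hypothetical load check: process file sets back to back and log how long
# each one takes, to see where throughput starts to degrade. The real
# processing entry point is assumed here as process_file_set().
import time
from pathlib import Path

def process_file_set(file_set_dir: Path) -> None:
    """Stand-in for the real TPS processing of one file set."""
    for f in sorted(file_set_dir.glob("*.txt")):
        f.read_bytes()  # placeholder work

def measure_throughput(root: Path, max_sets: int = 30) -> None:
    file_set_dirs = sorted(p for p in root.iterdir() if p.is_dir())[:max_sets]
    for i, file_set_dir in enumerate(file_set_dirs, start=1):
        start = time.perf_counter()
        process_file_set(file_set_dir)
        elapsed = time.perf_counter() - start
        print(f"file set {i:2d} ({file_set_dir.name}): {elapsed:.1f}s")

if __name__ == "__main__":
    measure_throughput(Path("file_sets"))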
Week 21 (September 21 – September 25)
This week was more or less the same as last week: bug reported, bug found, bug fixed, fix released. The original goal was to be in UAT testing by October, but considering we're still in baseline testing I don't think that will happen. Everyone, including the testers and executives, is now realizing how enormous this application is and how much work it will take to get it into production. It's been a great learning experience to see the development life cycle of a large application like this. Everyone on the team has been commenting about how much they've learned about software development from this project alone.
Week 20 (September 14 – September 18)
The first bug reports came in and, as expected, there were a lot of them. We assume a lot of them stem from confusion about how BOL reconciliation works and its limitations, so we've been tracking those down and grouping them together. There are several meetings scheduled to explain the process to the testers, and we've been having a lot of refreshers within the team about the algorithms involved. I've mostly been frantically tracking down the causes of bugs and applying fixes. We also need to send out a daily status report of all the open bugs; Wednesday is my day to fill that out. The application finishes without any errors using the test files, which is good; it's just going to be a matter of fixing all the data errors.
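The daily status report itself is really just a count of open bugs grouped by category (like the BOL reconciliation confusion ones), roughly the idea below. It assumes a CSV export of the tracker with `category` and `status` columns, which is my own made-up layout, not the tracker's actual format.

```python
# Hypothetical helper for the daily open-bug status report: group open bugs
# by category (e.g. "BOL reconciliation") from a CSV export of the tracker.
# The column names are assumptions, not the tracker's real schema.
import csv
from collections import Counter

def summarize_open_bugs(csv_path: str) -> Counter:
    counts: Counter = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].lower() == "open":
                counts[row["category"]] += 1
    return counts

if __name__ == "__main__":
    for category, n in summarize_open_bugs("open_bugs.csv").most_common():
        print(f"{category}: {n} open")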
Week 19 (September 7 – September 11)
We finally started QA testing today. Basically it will work in three stages. First there's baseline testing, where we import one set of files and test the application to make sure the data output is correct. Second is simulation testing, where we run several file sets through the application in a similar manner to how it will run in production, to verify that it can handle the load and still output correct results. Finally we have user acceptance testing (UAT), where the testing team hands off the application to the end users, who verify that everything works as expected and is logically correct; this is also where Insight will be tested. We spent a lot of the week practicing deployments and testing locally. We are now waiting for the first bug report from the testers, which will probably come next week.
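Baseline testing boils down to something like this sketch: pull the application's output rows and diff them against an expected result set. SQLite stands in for the real QA database here, and the table and column names are guesses on my part, not the actual schema.

```python
# Hypothetical baseline check: after importing one file set, compare the
# application's output rows against an expected result set. SQLite stands in
# for the real QA database; table and column names are illustrative.
import csv
import sqlite3

def load_expected(csv_path: str) -> set[tuple]:
    with open(csv_path, newline="") as f:
        return {tuple(row) for row in csv.reader(f)}

def baseline_check(db_path: str, expected_csv: str) -> bool:
    with sqlite3.connect(db_path) as conn:
        actual = {tuple(str(v) for v in row)
                  for row in conn.execute("SELECT record_id, value FROM tps_output")}
    expected = load_expected(expected_csv)
    missing, extra = expected - actual, actual - expected
    print(f"missing: {len(missing)}, unexpected: {len(extra)}")
    return not missing and not extra

if __name__ == "__main__":
    baseline_check("tps_qa.db", "expected_baseline.csv")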
Week 18 (August 31 – September 4)
This week was mostly more meetings about how the QA process will work, as well as design meetings about what else needs to be added before production. I also learned about another part of the application known as Insight. Basically, after all the building, analyzing, calculating, and saving to the database is done for TPS reporting, the Insight application should take the data from the TPS database and import it into a separate Insight database. From there, several other processes and reports are generated, but I don't quite understand that part yet. It was definitely surprising to me, considering we're one week away from QA and I had never heard of this application. Apparently our team has already laid most of the groundwork for it, so that's good, but there is still plenty of work to be done on this thing before we can go into production.
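From what I understand so far, the Insight handoff is roughly "copy the finished TPS results into Insight's own database," something like the sketch below. SQLite stands in for both databases, and every table and column name here is a guess on my part rather than the real schema.

```python
# Hypothetical sketch of the TPS -> Insight handoff: read the finished TPS
# results and load them into a separate Insight database. SQLite stands in
# for both databases; table and column names are guesses, not the real schema.
import sqlite3

def import_into_insight(tps_db: str, insight_db: str) -> int:
    with sqlite3.connect(tps_db) as src, sqlite3.connect(insight_db) as dst:
        rows = src.execute(
            "SELECT run_date, record_id, value FROM simulation_results"
        ).fetchall()
        dst.execute(
            "CREATE TABLE IF NOT EXISTS insight_staging "
            "(run_date TEXT, record_id TEXT, value REAL)"
        )
        dst.executemany("INSERT INTO insight_staging VALUES (?, ?, ?)", rows)
    return len(rows)

if __name__ == "__main__":
    copied = import_into_insight("tps_simulation.db", "insight.db")
    print(f"copied {copied} rows into the Insight staging table")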