The main concern with TPS right now is technical debt. In its current state, every time transportation data changes, whether the whole load or a single order, the entire freight move is re-saved and re-evaluated, leading to a lot of duplicated data. The volume of data we're saving has slowed down our database and caused the application to use excessive amounts of RAM. Presently TPS works almost as a bulk operation: it fetches 30 days of freight move data, processes it, then analyzes and saves it as needed. The dream would be to have the application work freight move by freight move, but there is a lot of work involved in that and frankly we don't have time, so we need another plan. That plan is to save only the data that has changed, which seems obvious but has significant implications given our current design. Our main questions now are how we build a freight move from data with multiple revisions spread across separate executions, and when and how we evaluate that data. My biggest concern is deleted data: determining whether any data has been deleted, and deciding what we do after that discovery.

In non-TPS related news, the template project has begun; my job there is to create a GUI design in Photoshop and present it to the team. There was also more training this week.
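To make the "save only what changed" plan concrete, here is a minimal sketch of diffing a freight move's orders between the last saved snapshot and a newly fetched one, including the deleted-data case (an order present before but missing now). All names here (`Order`, `diff_orders`, the `revision` field) are hypothetical, not TPS's actual model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    order_id: str
    revision: int  # assumed to bump whenever the order's data changes

def diff_orders(saved, fetched):
    """Return (added, changed, deleted) order ids between two snapshots."""
    saved_by_id = {o.order_id: o for o in saved}
    fetched_by_id = {o.order_id: o for o in fetched}

    # New ids that were not in the last saved snapshot.
    added = fetched_by_id.keys() - saved_by_id.keys()
    # Ids we saved before that no longer appear: the deleted-data case.
    deleted = saved_by_id.keys() - fetched_by_id.keys()
    # Ids in both snapshots whose revision differs.
    changed = {
        oid for oid in saved_by_id.keys() & fetched_by_id.keys()
        if saved_by_id[oid].revision != fetched_by_id[oid].revision
    }
    return added, changed, deleted

saved = [Order("A", 1), Order("B", 2), Order("C", 1)]
fetched = [Order("A", 1), Order("B", 3), Order("D", 1)]
added, changed, deleted = diff_orders(saved, fetched)
print(sorted(added), sorted(changed), sorted(deleted))  # ['D'] ['B'] ['C']
```

Only the `added` and `changed` sets would need to be re-saved and re-evaluated; the `deleted` set is where the open question lies, since discovering a deletion still leaves the decision of how to react to it.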