The first draft is complete. It still needs more input and critique, plus a few lines of code to quantify the strength of a couple of additional relationships I realised are important. The general message is there; it just needs to be honed. I’ve got some good ideas written up on my whiteboard and some of them have been added in.
Next week I am doing a Class III first aid course and so will take a break from this chapter. I’ll spend some time doing readings instead and look for things to pop into the chapter. A week away from the thing should give my brain some time to repair itself so I can come back at it with a renewed perspective. That being said, today went well and I feel like I contributed a good amount of new, valid content.
Besides some admin and attending journal club, today I made my first pass at writing up the discussion section for the chapter. While writing it I came to realise a good many issues at the heart of the project, as well as some less critical things that should be added to the results and that will require a couple more light lines of code. The disquieting issue is that I feel like I have lost my direction on this project and am not sure how relevant the results are, as I may not have approached the important issues directly. So I am going to have a think about the bigger picture and return to the project tomorrow, hopefully able to see it in a different light so I can understand where I have gone wrong, if indeed I have. Specifically, in the interest of performing experiments on the time series that are not post hoc, I have deviated from what would otherwise be more sound practices (e.g. not using very short time series and not including values that are clearly outliers). I wanted this additional range to be present in the time series, but am now thinking this wasn’t a good idea, and that I should go back and redo everything with a minimum time series length of ten years. That way a more meaningful discussion can be had without spending over half of it attempting to justify why the findings are relevant despite the sloppiness of the methodology. Beyond this point I also wonder at the usefulness of power tests when the median power is only 0.14 (on a scale of 0.00 to 1.00, 0.80 is the conventional benchmark for adequate power). The whole thing fell a little flat out of the gate and I’m standing here wondering how to dress it up so people will still be interested in reading it…
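For context, the sort of power test in question can be sketched in a few lines of R using the pwr package; the effect size and sample size below are purely illustrative values, not numbers from my data.

```r
# Illustrative only: a post hoc power test for a correlation
# (r = 0.3 and n = 10 are made-up values, not results from the chapter).
library(pwr)

# Power achieved at r = 0.3 with only 10 points: well under the 0.80 benchmark
pwr.r.test(n = 10, r = 0.3, sig.level = 0.05)$power

# Flipped around: sample size needed to reach 0.80 power for the same effect
pwr.r.test(r = 0.3, power = 0.80, sig.level = 0.05)$n
```

With short time series the achieved power is tiny, which is exactly why a median power of 0.14 makes the exercise hard to defend.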
The results section has now had all changes made to the code propagated throughout it. There were many changes to the results due to the new methodology, and I have since edited/added a few figures that needed to be included in the text (i.e. coast_power_length). These changes and additions really helped to strengthen the results section and I am very happy with the developments of the day. I also addressed several items from my snag list and noticed a couple more, but a net positive movement towards sorting out snags was made. I also decided throughout the day that I have entirely too many tables. All of the big ones were removed and the metadata table was trimmed down so that it now fits, snugly, across a normal page width. This helps to give the paper a cleaner feel. There are now only two small tables in addition to the metadata table. The rest is explained via pretty figures. The latter still need some touching up and colour correction etc., but I have taken a couple of large steps towards more effectively describing the data.
After rewriting a good amount of code to switch from means to medians and quantiles, the changes to the methods section needed to be written up. This has been completed and the text now reflects the current code. The changes were propagated to the tables and figures as well.
At a previous meeting with AJ and Rob W it was recommended that, rather than using mean values, I switch over to median and quantile values to show the spread in the data. I’ve also calculated confidence intervals for means where they are (or would be) used. This does help to give a more complete picture of the data, as the range is very large and the data are not normally distributed. There are generally a few very large values that offset the more numerous smaller values. For example, in a couple of the data sources the median power is half that of the mean power, and the error on the mean is almost half the mean value, making the mean not particularly meaningful (pun intended).
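A toy example makes the point; the numbers below are simulated, not the actual data, but they show the same behaviour: a long right tail drags the mean well above the median.

```r
# Simulated stand-in for the skewed power data (not the real values):
# mostly modest numbers plus a few very large ones.
set.seed(1)
power_vals <- c(rlnorm(95), rlnorm(5, meanlog = 3))

median(power_vals)                   # robust measure of the centre
quantile(power_vals, c(0.25, 0.75))  # spread shown via quantiles
mean(power_vals)                     # inflated by the heavy tail

# 95% confidence interval on the mean, via the t distribution
t.test(power_vals)$conf.int
```

On data like these the median and quartiles describe the bulk of the values; the mean mostly describes the tail.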
Started the day off with a bit of diving. We’d forgotten a reel at Oudekraal last week. Went back today and the thing was still tied to the mooring where we left it. Great success. Took the long way back underwater as we were diving from the shore. Just Ross and me. Found some fun junk near the rocks that form the beachhead. Took a nice china plate home to add to my underwater collection. We took some videos of the clearance site as well.
After that I got through a few old e-mails that had some accompanying coding that needed doing. Produced some deliverables and sent some reminders to people I need other sources of info from. Still more in the backlog, however. And I haven’t touched my reading backlog in too long… Scheduled an R course for Friday. Going to teach my younglings how to create maps and add stuffs to them.
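A minimal sketch of the sort of thing I have in mind for the course, using the maps package; the extent is roughly the South African coast and the labelled point is simply Cape Town, picked for illustration.

```r
# A bare-bones map with one labelled point added on top
library(maps)

map("world", xlim = c(14, 34), ylim = c(-36, -22),
    fill = TRUE, col = "grey90")                    # base map of the region
points(18.42, -33.93, pch = 19, col = "red")        # Cape Town, roughly
text(18.42, -33.93, labels = "Cape Town", pos = 4)  # add stuffs to it
```

From there it is a small step to layering on site markers, scale bars and whatever else the younglings dream up.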
Rounded out the day with several hours of editing the code and wording for my methods section. Got through all of AJ’s edits, but still need to put much of his advice into effect. Have some big plans for my figures, too. Things are definitely moving in the right direction. Tomorrow will be another good day.
Today ended up being busy. Lots of personal things to attend to, and one of the first dreary, overcast, cold days of the year. Didn’t get through nearly as much editing as I had planned on. But I did make a few meaningful changes and things are starting to shape up, which feels good.
But most importantly one of my friends gave birth today! So I am now off to see the larval human 🙂
The first draft of the results section is complete. I will have the discussion section draft finished tomorrow. I moved a lot of the figures and tables around as well, so they are closer to being in the correct positions. I also corrected all references to tables and figures so that they accurately link to the items they refer to. I was also working on getting the reference section to come in correctly, but there appears to be an issue somewhere between the .bib file created by Mendeley and Sweave’s ability to load it. It is likely a package issue somewhere down the line that isn’t installed correctly on my computer, or perhaps a bit of necessary code or syntax is missing somewhere in my script. Either way, it will be a problem for a fresh brain to solve tomorrow.
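For reference, the wiring I would expect to need between the Mendeley export and the Sweave document looks roughly like this; the file name and citation key below are placeholders, so the actual fault could still be elsewhere (a missing LaTeX package, say, or an unescaped character in the .bib file).

```latex
% In the .Rnw preamble
\usepackage{natbib}

% In the body
\citep{SomeKey2015}  % key as exported by Mendeley

% At the end, before \end{document}
\bibliographystyle{plainnat}
\bibliography{references}  % points at references.bib; no extension here
```

Worth remembering too that Sweave only produces the .tex file; bibtex then has to be run on that output, followed by another LaTeX pass or two, before the citations resolve.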
The first draft of my methodology is complete. And it is all nicely bundled up in a Sweave document with the code I used to run the analysis. It looks pretty awful at the moment, but that can be fixed. The important thing is that the content is there!
I really wish I had started using Sweave earlier on. Having the code used to conduct the analyses directly next to the text describing what I have done makes everything so much easier and clearer. I made a number of changes to the way my analyses function, as well as a couple of more sweeping changes, and that is wrapped up, for now. I am hoping that no more major changes to the code will be necessary, but that is as likely as…
Anyway, it is all in the Sweave document now and compiles correctly. The paragraphs of text that describe what each bit of code does and why (also known as the methodology section) are coming along. I already had a decent amount of this written, but have since made some substantial changes and a few additions, so it requires quite a bit more work. But it is certainly a small task in the grand scheme of things and should be finished by midday tomorrow. After that comes the results section. I can already see a couple of odd bits here and there, and it will be in the results write-up that I need to check up on those wobbles. If they expose weaknesses in my analyses or data management, I will have to return to the code and smooth it out, potentially adding another day to the process. But I am learning, and future chapters will go much more smoothly with the workflow I am developing for this first chapter. So it’s not such a train smash that I am already a couple of months behind schedule. I have also spent a considerable amount of time diving and organising other things, like the R courses.
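The workflow described above — methods prose sitting beside the code that produces the numbers — boils down to a skeleton like this (the chunk name and its contents are placeholders, not my actual analysis):

```latex
\documentclass{article}
\begin{document}

The text explaining the method sits directly above its code:

<<toy-example, echo=TRUE>>=
dat <- rnorm(100)  # placeholder data
median(dat)
@

and results can be pulled straight into a sentence with
\Sexpr{round(median(dat), 2)}.

\end{document}
```

Running Sweave on the .Rnw file executes the chunks, splices their output into the generated .tex, and the whole thing compiles as normal LaTeX, so the text can never drift out of sync with the code.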