“Patience is the art of concealing your impatience.” – Guy Kawasaki
It would be the dream if every dataset you came across was ready to go: (1) no QA/QC required, (2) suitable for an alteration study, a lithogeochemistry study, and/or vectoring towards mineralization, and (3) ready to drop into a model. As we were reminded here at LKI Consulting last week, that is not always the case, BUT that does not mean the data cannot be used at all… you just have to let it speak to you.
Very flower child perhaps, but not really. By forcing your geochemistry data into a box (for example, deciding up front that it will be used for a lithogeochemistry study), you limit what the data can tell you. That leads to frustration, and it also closes off what the data could have done for you, which may have been even better than your original intention.
Take, for example, this stream sediment dataset we were working on. We quickly learned a few things:
- There were at least three different datasets in here (heavy mineral concentrates, fine sediments, and coarser sediments), in addition to two labs and at least six distinct protocols… trying to level the data so that all samples could be used in one analysis would not do us any favors (see the sketch after this list for the kind of housekeeping this implies).
- There were unmarked duplicates.
- The data had a sampling bias: many samples were taken spatially close together, while others were spaced much farther apart.
- There was certainly some Fe-oxide scavenging going on… we definitely had to address this.
- Out of all the elements reported, we could only use 16 of them, particularly for advanced analytics.
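
For readers who like to see what that housekeeping looks like in practice, here is a minimal sketch in Python/pandas. It is illustrative only: the file name and column names (`medium`, `lab`, `protocol`, `easting`, `northing`, `Fe_pct`, and the trace element columns) are assumptions for the example, not our actual database schema or workflow.

```python
import pandas as pd

# Load the compiled stream sediment table (hypothetical file and columns).
df = pd.read_csv("stream_seds_compiled.csv")

# 1. Split the compilation by sample medium, lab and analytical protocol,
#    rather than trying to level everything into one dataset.
groups = {key: g for key, g in df.groupby(["medium", "lab", "protocol"])}
for key, g in groups.items():
    print(key, len(g), "samples")

# 2. Flag potential unmarked duplicates: samples with (nearly) identical
#    coordinates within the same protocol group.
df["dup_flag"] = df.duplicated(subset=["easting", "northing", "protocol"],
                               keep=False)

# 3. Quick look at Fe-oxide scavenging: trace elements whose concentrations
#    track Fe closely may be reflecting scavenging rather than source.
trace_elements = ["Cu_ppm", "Zn_ppm", "Co_ppm", "As_ppm"]  # assumed names
fe_corr = df[trace_elements].corrwith(df["Fe_pct"], method="spearman")
print(fe_corr.sort_values(ascending=False))
```

The specific code matters less than the order of operations: split the compilation into comparable groups first, flag suspect samples, and only then start asking the data geochemical questions.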
Once we accepted that we were not going to get any lithogeochemical or typical pathfinder information from the data, and that we were best off (once the data was clean and ready to go) not worrying that there was no established plot for interpreting this database, a lot of interesting (and simple!) observations started to come out of the data quickly.
Dealing with historic data is frustrating, but that does not mean it does not hold value.